WorldWideScience

Sample records for modeling approach finally

  1. A Final Approach Trajectory Model for Current Operations

    Science.gov (United States)

    Gong, Chester; Sadovsky, Alexander

    2010-01-01

    Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system, which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed: one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study the effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look-ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look-ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
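    The abstract does not reproduce the models themselves; the sketch below illustrates only the baseline contrast it draws, extrapolating a radar track by dead reckoning (constant ground speed) versus a fitted low-order polynomial. Track values and variable names are hypothetical, not the authors' data or code.

    ```python
    import numpy as np

    # Hypothetical radar track: times (s) and distance to threshold (nmi).
    t = np.array([0.0, 4.8, 9.6, 14.4, 19.2])
    x = np.array([10.0, 9.4, 8.9, 8.5, 8.2])   # a decelerating arrival

    def dead_reckoning(t, x, t_pred):
        """Hold the most recent ground speed constant."""
        v = (x[-1] - x[-2]) / (t[-1] - t[-2])
        return x[-1] + v * (t_pred - t[-1])

    def polynomial_model(t, x, t_pred, degree=2):
        """Fit a low-order polynomial to the track and evaluate it ahead."""
        return np.polyval(np.polyfit(t, x, degree), t_pred)

    t_ahead = 120.0  # look-ahead time (s), matching the study's horizon
    print(dead_reckoning(t, x, t_ahead), polynomial_model(t, x, t_ahead))
    ```

    A degree-2 fit captures the deceleration on final that dead reckoning ignores, which is the intuition behind the reported accuracy gain.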

  2. A novel approach to modeling unstable EOR displacements. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Peters, E.J.

    1994-04-01

    Most enhanced oil recovery schemes involve the displacement of a more dense and more viscous oil by a less dense and less viscous fluid in a heterogeneous porous medium. The interaction of heterogeneity with the several competing forces, namely, viscous, capillary, gravitational, and dispersive forces, can conspire to make the displacements unstable and difficult to model and to predict. The objective of this research was to develop a systematic methodology for modeling unstable fluid displacements in heterogeneous media. Flow visualization experiments were conducted using X-ray computed tomography imaging and a video imaging workstation to gain insights into the dynamics of unstable displacements, acquire detailed quantitative experimental image data for calibrating numerical models of unstable displacements, and image and characterize heterogeneities in laboratory cores geostatistically. High-resolution numerical models modified for use on vector-architecture supercomputers were used to replicate the image data. Geostatistical models of reservoir heterogeneity were incorporated in order to study the interaction of hydrodynamic instability and heterogeneity in reservoir displacements. Finally, a systematic methodology for matching the experimental data with the numerical models and scaling the laboratory results to other systems was developed. The result is a new method for predicting the performance of unstable EOR displacements in the field based on small-scale displacements in the laboratory. The methodology is general and can be applied to forecast the performance of most processes that involve fluid flow and transport in porous media. Therefore, this research should be of interest to those involved in forecasting the performance of enhanced oil recovery processes and the spreading of contaminants in heterogeneous aquifers.

  3. Approaches for scalable modeling and emulation of cyber systems : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Don W.

    2009-09-01

    The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  4. Final Technical Report: "Representing Endogenous Technological Change in Climate Policy Models: General Equilibrium Approaches"

    Energy Technology Data Exchange (ETDEWEB)

    Ian Sue Wing

    2006-04-18

    The research supported by this award pursued three lines of inquiry: (1) The construction of dynamic general equilibrium models to simulate the accumulation and substitution of knowledge, which has resulted in the preparation and submission of several papers: (a) A submitted pedagogic paper which clarifies the structure and operation of computable general equilibrium (CGE) models (C.2), and a review article in press which develops a taxonomy for understanding the representation of technical change in economic and engineering models for climate policy analysis (B.3). (b) A paper which models knowledge directly as a homogeneous factor, and demonstrates that inter-sectoral reallocation of knowledge is the key margin of adjustment which enables induced technical change to lower the costs of climate policy (C.1). (c) An empirical paper which estimates the contribution of embodied knowledge to aggregate energy intensity in the U.S. (C.3), followed by a companion article which embeds these results within a CGE model to understand the degree to which autonomous energy efficiency improvement (AEEI) is attributable to technical change as opposed to sub-sectoral shifts in industrial composition (C.4). (d) Finally, ongoing theoretical work to characterize the precursors and implications of the response of innovation to emission limits (E.2). (2) Data development and simulation modeling to understand how the characteristics of discrete energy supply technologies determine their succession in response to emission limits when they are embedded within a general equilibrium framework. This work has produced two peer-reviewed articles which are currently in press (B.1 and B.2). (3) Empirical investigation of trade as an avenue for the transmission of technological change to developing countries, and its implications for leakage, which has resulted in an econometric study which is being revised for submission to a journal (E.1). As work commenced on this topic, the U.S. withdrawal

  5. New Approaches to Final Cooling

    Energy Technology Data Exchange (ETDEWEB)

    Neuffer, David [Fermilab]

    2014-11-10

    A high-energy muon collider scenario requires a “final cooling” system that reduces transverse emittances by a factor of ~10 while allowing longitudinal emittance increase. The baseline approach has low-energy transverse cooling within high-field solenoids, with strong longitudinal heating. This approach and its recent simulation are discussed. Alternative approaches that more explicitly include emittance exchange are also presented. Round-to-flat beam transforms, transverse slicing, and longitudinal bunch coalescence are possible components of the alternative approach. A more explicit understanding of solenoidal cooling beam dynamics is introduced.
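    For orientation, the balance that drives final-cooling design (standard in the ionization-cooling literature, not quoted from this paper) sets the equilibrium transverse emittance by the competition between energy-loss cooling and multiple-scattering heating:

    ```latex
    \frac{d\epsilon_N}{ds} \approx -\frac{1}{\beta^2}\,\frac{dE_\mu}{ds}\,\frac{\epsilon_N}{E_\mu}
      + \frac{\beta_\perp E_s^2}{2\beta^3 E_\mu m_\mu c^2 L_R},
    \qquad E_s \approx 13.6\ \mathrm{MeV}
    ```

    Since the solenoidal focusing function scales roughly as \beta_\perp \approx 2p/(0.3B), pushing the emittance floor down favors very high fields B and low momentum p, which is why the baseline accepts strong longitudinal heating.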

  6. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
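    As a toy illustration of the computational renormalization idea (a construction for this summary, not the report's code), one can coarse-grain a one-dimensional binary cellular automaton by majority vote over blocks and check how closely the reduced model commutes with the dynamics:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def step_majority(state):
        """One step of a simple 1D CA: each cell becomes the majority of itself and its two neighbors."""
        left, right = np.roll(state, 1), np.roll(state, -1)
        return ((left + state + right) >= 2).astype(int)

    def renormalize(state, block=3):
        """Block-majority coarse-graining: one 'super-cell' per block of cells."""
        blocks = state.reshape(-1, block)
        return (blocks.sum(axis=1) * 2 >= block).astype(int)

    fine = rng.integers(0, 2, size=90)
    coarse = renormalize(fine)

    # Compare: evolve-then-coarsen versus coarsen-then-evolve.
    a = renormalize(step_majority(fine))
    b = step_majority(coarse)
    print("agreement:", (a == b).mean())
    ```

    High agreement indicates the coarse description retains predictive power, which is the property the report quantifies for its renormalized cellular automata and Boolean networks.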

  7. Evaluation of Resource Acquisition Approaches : Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    O'Neill, Maura L.; Mortimer, Tom; Palermini, Debbi; Nelson, Kari

    1991-09-12

    Over the last few years, Bonneville has been addressing this need and has developed numerous ways of acquiring resources. Four of these Approaches, the Competitive Acquisition, Billing Credits, and Targeted Acquisition Programs, and the Cowlitz Falls Hydroelectric Project, were the subject of this evaluation project. Each Approach is currently at a different stage of its process, and Bonneville felt it was an appropriate time to conduct an evaluation. The purpose of this evaluation is to analyze the various Approaches' processes, to learn what's working and what's not, and to offer recommendations as to how Bonneville might improve its resource acquisition efforts. The evaluation was conducted with no preconceived biases.

  8. Physics modeling support contract: Final report

    Energy Technology Data Exchange (ETDEWEB)

    1987-09-30

    This document is the final report for the Physics Modeling Support contract between TRW, Inc. and the Lawrence Livermore National Laboratory for fiscal year 1987. It consists of the following projects: TIBER physics modeling and systems code development; advanced blanket modeling task; time dependent modeling; and free electron maser for TIBER II.

  9. Physics modeling support contract: Final report

    International Nuclear Information System (INIS)

    1987-01-01

    This document is the final report for the Physics Modeling Support contract between TRW, Inc. and the Lawrence Livermore National Laboratory for fiscal year 1987. It consists of the following projects: TIBER physics modeling and systems code development; advanced blanket modeling task; time dependent modeling; and free electron maser for TIBER II.

  10. A coupled approach to spatially derived parameters necessary for ecosystem modeling on the North Slope of Alaska: Appendix A. Final report, March 1, 1989--February 28, 1993

    Energy Technology Data Exchange (ETDEWEB)

    Petersen, G.W.; Day, R.L.; Pollack, J. [Pennsylvania State Univ., University Park, PA (United States). Dept. of Agronomy

    1993-12-01

    This study concerned an investigation of ecosystem dynamics in several small study sites in the North Slope region of Alaska. The scope of the proposed research is to quantitatively determine spatial interrelationships between landform geometry within these study areas and such ecologically important factors as vegetation type, depth-to-permafrost, hydraulic conductivity and incoming solar radiation. Extrapolation techniques developed and terrain-related data generated as a result of this research will augment R4D Phase II goals which relate to running the General Arctic Simulac (GAS) model (and associated ecosystem models) at different locations on the North Slope. In particular, Penn State has contributed significantly to extrapolation efforts by developing techniques which can be used to initialize conditions for model input either through direct measurement (e.g., slope and aspect) or GIS-based simulation models (e.g., drainage basin characterization). As stated in the R4D Phase II Research Plan, the long-term objectives of this program are: (1) to determine effects and to develop models based on ecosystem disturbances commonly created by energy development so that appropriate, cost-effective measures can be utilized to minimize deleterious disturbances; and (2) to extend the results to other arctic and alpine areas which are important because of likely impact from energy development. It is this second long-term objective which relates most directly to Penn State's work.

  11. Physics of low-lying hadrons in quark model and effective hadronic approaches. Final report, September 1, 1996 - March 31, 2000

    International Nuclear Information System (INIS)

    Mizutani, T.

    2000-01-01

    There were basically three theoretical projects supported by this grant: (1) Use of confined quark models to study low energy hadronic processes; (2) Production of strangeness by Electromagnetic Probes; and (3) Diffractive dissociative production of vector mesons by virtual photons on nucleons. Each of them is summarized in the paper

  12. HEDR modeling approach: Revision 1

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1994-05-01

    This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies

  13. HEDR modeling approach: Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Shipler, D.B.; Napier, B.A.

    1994-05-01

    This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies.

  14. Final Report: A Transport Phenomena Based Approach to Probe Evolution of Weld Macro and Microstructures and A Smart Bi-directional Model of Fusion Welding

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Tarasankar DebRoy

    2009-12-11

    In recent years, applications of numerical heat transfer and fluid flow models of fusion welding have resulted in improved understanding of both the welding processes and welded materials. They have been used to accurately calculate thermal cycles and fusion zone geometry in many cases. Here we report the following three major advancements from this project. First, we show how microstructures, grain size distribution and topology of welds of several important engineering alloys can be computed starting from better understanding of the fusion welding process through numerical heat transfer and fluid flow calculations. Second, we provide a conclusive proof that the reliability of numerical heat transfer and fluid flow calculations can be significantly improved by optimizing several uncertain model parameters. Third, we demonstrate how the numerical heat transfer and fluid flow models can be combined with a suitable global optimization program such as a genetic algorithm for the tailoring of weld attributes such as attaining a specified weld geometry or a weld thermal cycle. The results of the project have been published in many papers, and a listing of these is included together with a list of the graduate theses that resulted from this project. The work supported by the DOE award has resulted in several important national and international awards. A listing of these awards and the status of the graduate students are also presented in this report.
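    The report's actual optimization couples a transport model with a genetic algorithm; the sketch below shows only the generic calibration pattern, with a stand-in weld model, invented parameter names, and made-up measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def weld_model(params):
        """Stand-in for the heat transfer and fluid flow model: maps two uncertain
        parameters (say, arc efficiency and an effective conductivity factor) to a
        predicted weld pool width and depth in mm. Purely illustrative."""
        eff, cond = params
        return np.array([8.0 * eff + 0.5 * cond, 4.0 * eff + 1.5 * cond])

    measured = np.array([7.2, 5.1])   # hypothetical measured width and depth (mm)

    def fitness(params):
        """Negative squared error between predicted and measured weld geometry."""
        return -np.sum((weld_model(params) - measured) ** 2)

    # Minimal genetic-algorithm-style loop: truncation selection plus Gaussian mutation.
    pop = rng.uniform(0.1, 2.0, size=(40, 2))
    for _ in range(50):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-10:]]                  # keep the best quarter
        pop = np.clip(parents[rng.integers(0, 10, size=40)]
                      + rng.normal(0.0, 0.05, size=(40, 2)), 0.1, 2.0)

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("calibrated parameters:", best)
    ```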

  15. Final Report, 2011-2014. Forecasting Carbon Storage as Eastern Forests Age. Joining Experimental and Modeling Approaches at the UMBS AmeriFlux Site

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, Peter [The Ohio State Univ., Columbus, OH (United States); Bohrer, Gil [The Ohio State Univ., Columbus, OH (United States); Gough, Christopher [Virginia Commonwealth Univ., Richmond, VA (United States); Nadelhoffer, Knute [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-03-12

    At the University of Michigan Biological Station (UMBS) AmeriFlux sites (US-UMB and US-UMd), long-term C cycling measurements and a novel ecosystem-scale experiment are revealing physical, biological, and ecological mechanisms driving long-term trajectories of C cycling, providing new data for improving modeling forecasts of C storage in eastern forests. Our findings provide support for previously untested hypotheses that stand-level structural and biological properties constrain long-term trajectories of C storage, and that remotely sensed canopy structural parameters can substantially improve model forecasts of forest C storage. Through the Forest Accelerated Succession ExperimenT (FASET), we are directly testing the hypothesis that forest C storage will increase due to increasing structural and biological complexity of the emerging tree communities. Support from this project, 2011-2014, enabled us to incorporate novel physical and ecological mechanisms into ecological, meteorological, and hydrological models to improve forecasts of future forest C storage in response to disturbance, succession, and current and long-term climate variation

  16. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and proposes an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered.

  17. Modeling prosody: Different approaches

    Science.gov (United States)

    Carmichael, Lesley M.

    2002-11-01

    Prosody pervades all aspects of a speech signal, both in terms of raw acoustic outcomes and linguistically meaningful units, from the phoneme to the discourse unit. It is carried in the suprasegmental features of fundamental frequency, loudness, and duration. Several models have been developed to account for the way prosody organizes speech, and they vary widely in terms of their theoretical assumptions, organizational primitives, actual procedures of application to speech, and intended use (e.g., to generate speech from text vs. to model the prosodic phonology of a language). In many cases, these models overtly contradict one another with regard to their fundamental premises or their identification of the perceptible objects of linguistic prosody. These competing models are directly compared. Each model is applied to the same speech samples. This parallel analysis allows for a critical inspection of each model and its efficacy in assessing the suprasegmental behavior of the speech. The analyses illustrate how different approaches are better equipped to account for different aspects of prosody. Viewing the models and their successes from an objective perspective allows for creative possibilities in terms of combining strengths from models which might otherwise be considered fundamentally incompatible.

  18. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project of SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay subjected to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  19. Temperature Buffer Test. Final THM modelling

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel

    2012-01-01

    The Temperature Buffer Test (TBT) is a joint project of SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay subjected to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  20. BUBBLES: an Automated Decision Support System for Final Approach Controllers

    Science.gov (United States)

    Chi, Zhizang

    1990-01-01

    With the assumptions that an explicit schedule exists for landings (and takeoffs) at each runway, that each aircraft has declared an IAS for final approach and will be obligated to fly it as accurately as possible, and that there is a continuous estimate of average windspeed on approach, the objective was to provide automated cues to assist controllers in the spacing of landing aircraft. The cues have two characteristics. First, they are adaptive to estimation errors in position and speed by the radar tracking process and piloting errors in the execution of turns and commanded speed reductions. Second, the cues are responsive to the desires of the human controller. Several diagrams are used to help explain the system.
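    A minimal version of the spacing arithmetic behind such cues (illustrative numbers and thresholds, not the BUBBLES algorithm) predicts each aircraft's time to threshold from its declared IAS plus the estimated wind, and raises a cue when the projected inter-arrival gap is too small:

    ```python
    def time_to_threshold(dist_nmi, ias_kt, wind_kt):
        """Seconds to the runway threshold at constant groundspeed (headwind negative)."""
        return 3600.0 * dist_nmi / (ias_kt + wind_kt)

    # Hypothetical leader and trailer established on final approach.
    lead = time_to_threshold(4.0, ias_kt=140, wind_kt=-10)
    trail = time_to_threshold(7.5, ias_kt=150, wind_kt=-10)

    min_gap_s = 90.0                     # required spacing, illustrative only
    gap = trail - lead
    print(f"projected gap {gap:.0f} s ->",
          "OK" if gap >= min_gap_s else "CUE: delay the trailer")
    ```

    An operational system must also absorb radar tracking and piloting errors, which is what the abstract means by cues that are adaptive to estimation error.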

  1. Material Modelling - Composite Approach

    DEFF Research Database (Denmark)

    Nielsen, Lauge Fuglsang

    1997-01-01

    This report is part of a research project on "Control of Early Age Cracking" - which, in turn, is part of the major research programme, "High Performance Concrete - The Contractor's Technology (HETEK)", coordinated by the Danish Road Directorate, Copenhagen, Denmark, 1997. A composite-rheological model of concrete is presented by which consistent predictions of creep, relaxation, and internal stresses can be made from known concrete composition, age at loading, and climatic conditions. No other existing "creep prediction method" offers these possibilities in one approach. A basic assumption of the model, supported in this report, is that cement paste and concrete behave practically as linear-viscoelastic materials from an age of approximately 10 hours. This is a significant age extension relative to earlier studies in the literature where linear-viscoelastic behavior is only demonstrated from ages of a few days. Thus...

  2. Innovative approaches to inertial confinement fusion reactors: Final report

    International Nuclear Information System (INIS)

    Bourque, R.F.; Schultz, K.R.

    1986-11-01

    Three areas of innovative approaches to inertial confinement fusion (ICF) reactor design are presented. First, issues pertaining to the Cascade reactor concept are discussed. Then, several innovative concepts are presented which attempt to directly recover the blast energy from a fusion target. Finally, the Turbostar concept for direct recovery of that energy is evaluated. The Cascade issues discussed are combustion of the carbon granules in the event of air ingress, the use of alternate granule materials, and the effect of changes in carbon flow on details of the heat exchanger. Carbon combustion turns out to be a minor problem. Four ICF innovative concepts were considered: a turbine with ablating surfaces, a liquid piston system, a wave generator, and a resonating pump. In the final analysis, none shows any real promise. The Turbostar concept of direct recovery is an interesting idea and appears technically viable. However, it shows no efficiency gain or decrease in capital cost compared to reactors with conventional thermal conversion systems. Attempts to improve it by placing a close-in lithium sphere around the target to increase gas generation raised efficiency only slightly. It is concluded that these direct conversion techniques require thermalization of the x-ray and debris energy, and are Carnot limited. They therefore offer no advantage over existing and proposed methods of thermal energy conversion or direct electrical conversion.

  3. A Simulation-Based Model for Final Price Prediction in Online Auctions

    OpenAIRE

    Shihyu Chou; Chin-Shien Lin; Chi-hong Chen; Tai-Ru Ho; Yu-Chen Hsieh

    2007-01-01

    Online auctions, a profitable, exciting, and dynamic part of e-commerce, have enjoyed increasing public interest. However, there is still a paucity of literature on final price prediction for online auctions. Although Markov process models provide a mathematical approach to predicting online auction prices, estimating the parameters of a Markov process model in practice is a challenging task. In this paper we propose a simulation-based model as an alternative approach to predicting the final price...

  4. Fleet replacement modeling : final report, July 2009.

    Science.gov (United States)

    2009-07-01

    This project focused on two interrelated areas in equipment replacement modeling for fleets. The first area was research-oriented and addressed a fundamental assumption in engineering economic replacement modeling that all assets providing a similar ...

  5. An Intelligent Systems Approach to Reservoir Characterization. Final Report

    International Nuclear Information System (INIS)

    Shahab D. Mohaghegh; Jaime Toro; Thomas H. Wilson; Emre Artun; Alejandro Sanchez; Sandeep Pyakurel

    2005-01-01

    Today, the major challenge in reservoir characterization is integrating data coming from different sources in varying scales, in order to obtain an accurate and high-resolution reservoir model. The role of seismic data in this integration is often limited to providing a structural model for the reservoir. Its relatively low resolution usually limits its further use. However, its areal coverage and availability suggest that it has the potential of providing valuable data for more detailed reservoir characterization studies through the process of seismic inversion. In this paper, a novel intelligent seismic inversion methodology is presented to achieve a desirable correlation between relatively low-frequency seismic signals and the much higher frequency wireline-log data. A vertical seismic profile (VSP) is used as an intermediate step between the well logs and the surface seismic. A synthetic seismic model is developed by using real data and seismic interpretation. In the example presented here, the model represents the Atoka and Morrow formations, and the overlying Pennsylvanian sequence, of the Buffalo Valley Field in New Mexico. A generalized regression neural network (GRNN) is used to build two independent correlation models between (1) surface seismic and VSP, and (2) VSP and well logs. After generating virtual VSPs from the surface seismic, well logs are predicted by using the correlation between VSP and well logs. The values of the density log, which is a surrogate for reservoir porosity, are predicted for each seismic trace through the seismic line with a classification approach having a correlation coefficient of 0.81. The same methodology is then applied to real data taken from the Buffalo Valley Field, to predict inter-well gamma ray and neutron porosity logs through the seismic line of interest. The same procedure can be applied to a complete 3D seismic block to obtain 3D distributions of reservoir properties with less uncertainty than the geostatistical
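    A generalized regression neural network is, in essence, Nadaraya-Watson kernel regression; a compact sketch of one correlation step (synthetic arrays standing in for seismic attributes and log values, not the Buffalo Valley data) looks like this:

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.5):
        """GRNN (Nadaraya-Watson): Gaussian-weighted average of training targets."""
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 3))                    # e.g., attributes of a trace window
    y = X @ np.array([0.5, -0.2, 0.1]) + 0.05 * rng.normal(size=200)  # e.g., a log value

    X_new = rng.normal(size=(5, 3))
    print(grnn_predict(X, y, X_new))
    ```

    The paper's two-step scheme chains two such correlation models: surface seismic to VSP, then VSP to well logs.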

  6. Final Project Report Load Modeling Transmission Research

    Energy Technology Data Exchange (ETDEWEB)

    Lesieutre, Bernard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bravo, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yinger, Robert [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chassin, Dave [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Huang, Henry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lu, Ning [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hiskens, Ian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Venkataramanan, Giri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-03-31

    The research presented in this report primarily focuses on improving power system load models to better represent their impact on system behavior. The previous standard load model fails to capture the delayed voltage recovery events that are observed in the Southwest and elsewhere. These events are attributed to stalled air conditioner units after a fault. To gain a better understanding of their role in these events and to guide modeling efforts, typical air conditioner units were tested in laboratories. Using data obtained from these extensive tests, new load models were developed to match air conditioner behavior. An air conditioner model is incorporated in the new WECC composite load model. These models are used in dynamic studies of the West and can impact power transfer limits for California. Unit-level and system-level solutions are proposed as potential solutions to the delayed voltage recovery problem.

  7. Mathematical models for atmospheric pollutants. Final report

    International Nuclear Information System (INIS)

    Drake, R.L.; Barrager, S.M.

    1979-08-01

    The present and likely future roles of mathematical modeling in air quality decisions are described. The discussion emphasizes models and air pathway processes rather than the chemical and physical behavior of specific anthropogenic emissions. Summarized are the characteristics of various types of models used in the decision-making processes. Specific model subclasses are recommended for use in making air quality decisions that have site-specific, regional, national, or global impacts. The types of exposure and damage models that are currently used to predict the effects of air pollutants on humans, other animals, plants, ecosystems, property, and materials are described. The aesthetic effects of odor and visibility and the impact of pollutants on weather and climate are also addressed. Technical details of air pollution meteorology, chemical and physical properties of air pollutants, solution techniques, and air quality models are discussed in four appendices bound in separate volumes

  8. iFlorida model deployment final evaluation report

    Science.gov (United States)

    2009-01-01

    This document is the final report for the evaluation of the USDOT-sponsored Surface Transportation Security and Reliability Information System Model Deployment, or iFlorida Model Deployment. This report discusses findings in the following areas: ITS ...

  9. Respiratory tract deposition models. Final report

    International Nuclear Information System (INIS)

    Yeh, H.C.

    1980-03-01

    Respiratory tract characteristics of four mammalian species (human, dog, rat and Syrian hamster) were studied, using replica lung casts. An in situ casting technique was developed for making the casts. Based on an idealized branch model, over 38,000 records of airway segment diameters, lengths, branching angles and gravity angles were obtained from measurements of two humans, two Beagle dogs, two rats and one Syrian hamster. From examination of the trimmed casts and morphometric data, it appeared that the structure of the human airway is closer to a dichotomous structure, whereas for dog, rat and hamster, it is monopodial. Flow velocity in the trachea and major bronchi in living Beagle dogs was measured using an implanted, subminiaturized, heated film anemometer. A physical model was developed to simulate the regional deposition characteristics proposed by the Task Group on Lung Dynamics of the ICRP. Various simulation modules for the nasopharyngeal (NP), tracheobronchial (TB) and pulmonary (P) compartments were designed and tested. Three types of monodisperse aerosols were developed for animal inhalation studies. Fifty Syrian hamsters and 50 rats were exposed to five different sizes of monodisperse fused aluminosilicate particles labeled with ¹⁶⁹Yb. Anatomical lung models were developed for four species (human, Beagle dog, rat and Syrian hamster) that were based on detailed morphometric measurements of replica lung casts. Emphasis was placed on developing a lobar typical-path lung model and on developing a modeling technique which could be applied to various mammalian species. A set of particle deposition equations for deposition caused by inertial impaction, sedimentation, and diffusion was developed. Theoretical models of particle deposition were developed based on these equations and on the anatomical lung models.

  10. Final model of multicriterion evaluation of animal welfare

    DEFF Research Database (Denmark)

    Bonde, Marianne; Botreau, R; Bracke, MBM

    One major objective of Welfare Quality® is to propose harmonized methods for the overall assessment of animal welfare on farm and at slaughter that are science based and meet societal concerns. Welfare is a multidimensional concept and its assessment requires measures of different aspects. Welfare Quality® proposes a formal evaluation model whereby the data on animals or their environment are transformed into value scores that reflect compliance with 12 subcriteria and 4 criteria of good welfare. Each animal unit is then allocated to one of four categories: excellent welfare, enhanced welfare, acceptable welfare and not classified. This evaluation model is tuned according to the views of experts from animal and social sciences, and stakeholders.
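    Welfare Quality®'s actual aggregation operators are more elaborate; the sketch below only illustrates the overall flow from four criterion scores to a category, with placeholder thresholds rather than the calibrated ones:

    ```python
    def welfare_category(criteria):
        """criteria: four criterion scores on a 0-100 value scale.
        Thresholds below are illustrative placeholders, not the calibrated ones."""
        worst, mean = min(criteria), sum(criteria) / len(criteria)
        if worst >= 55 and mean >= 80:
            return "excellent welfare"
        if worst >= 20 and mean >= 55:
            return "enhanced welfare"
        if worst >= 10 and mean >= 20:
            return "acceptable welfare"
        return "not classified"

    print(welfare_category([70, 62, 85, 58]))   # -> "enhanced welfare" here
    ```

    Conditioning the category on the worst criterion as well as the mean reflects the multidimensional view: excellence on one aspect cannot fully compensate a serious deficit on another.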

  11. Applied approach slab settlement research, design/construction : final report.

    Science.gov (United States)

    2013-08-01

    Approach embankment settlement is a pervasive problem in Oklahoma and many other states. The bump and/or abrupt slope change poses a danger to traffic and can cause increased dynamic loads on the bridge. Frequent and costly maintenance may be needed ...

  12. A Conceptual Modeling Approach for OLAP Personalization

    Science.gov (United States)

    Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan

    Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as a data mart) is used, these structures would still be too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker, contributing to better satisfy their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.
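    As an illustration of rule-based personalization of a multidimensional schema (the schema, user model, and rule syntax here are invented, not the paper's notation), event-condition-action style rules might prune dimensions according to the user model:

    ```python
    # Toy multidimensional schema and user model (invented names).
    schema = {"fact": "Sales",
              "dimensions": ["Product", "Store", "Time", "Promotion", "Supplier"]}
    user = {"role": "store_manager", "expertise": "novice"}

    # Personalization rules as (condition, action) pairs over the conceptual model.
    rules = [
        (lambda u: u["role"] == "store_manager",
         lambda s: s["dimensions"].remove("Supplier")),    # irrelevant to this role
        (lambda u: u["expertise"] == "novice",
         lambda s: s["dimensions"].remove("Promotion")),   # reduce schema complexity
    ]

    for condition, action in rules:
        if condition(user):
            action(schema)

    print(schema)   # the personalized OLAP schema for this decision maker
    ```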

  13. Constructing Realistic Szekeres Models from Initial and Final Data

    OpenAIRE

    Walters, Anthony; Hellaby, Charles

    2012-01-01

    The Szekeres family of inhomogeneous solutions, which are defined by six arbitrary metric functions, offers a wide range of possibilities for modelling cosmic structure. Here we present a model construction procedure for the quasispherical case using given data at initial and final times. Of the six arbitrary metric functions, the three which are common to both Szekeres and Lemaître-Tolman models are determined by the model construction procedure of Krasinski & Hellaby. For the remaining three functions, which are unique to Szekeres models, we derive exact analytic expressions in terms of more physically intuitive quantities: density profiles and dipole orientation angles. Using MATLAB, we implement the model construction procedure and simulate the time evolution.

  14. Model validation studies of solar systems, Phase III. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lantz, L.J.; Winn, C.B.

    1978-12-01

    Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design programs are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.

  15. Final Report: Fermionic Symmetries and Self-Consistent Shell Model

    International Nuclear Information System (INIS)

    Zamick, Larry

    2008-01-01

    In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to explain them to a large extent. The importance of a self-consistent shell model was emphasized.

  16. Photovoltaic subsystem marketing and distribution model: programming manual. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1982-07-01

    Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.

  17. A molecular-genetic approach to studying source-sink interactions in Arabidopsis thaliana. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, S. I.

    2000-06-01

    This is a final report describing the results of the research funded by the DOE Energy Biosciences Program grant entitled "A Molecular-Genetic Approach to Studying Source-Sink Interactions in Arabidopsis thaliana".

  18. Final Report on the Fuel Saving Effectiveness of Various Driver Feedback Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Gonder, J.; Earleywine, M.; Sparks, W.

    2011-03-01

    This final report quantifies the fuel-savings opportunities from specific driving behavior changes, identifies factors that influence drivers' receptiveness to adopting fuel-saving behaviors, and assesses various driver feedback approaches.

  19. Modeling Approaches in Planetary Seismology

    Science.gov (United States)

    Weber, Renee; Knapmeyer, Martin; Panning, Mark; Schmerr, Nick

    2014-01-01

    Of the many geophysical means that can be used to probe a planet's interior, seismology remains the most direct. Given that the seismic data gathered on the Moon over 40 years ago revolutionized our understanding of the Moon and are still being used today to produce new insight into the state of the lunar interior, it is no wonder that many future missions, both real and conceptual, plan to take seismometers to other planets. To best facilitate the return of high-quality data from these instruments, as well as to further our understanding of the dynamic processes that modify a planet's interior, various modeling approaches are used to quantify parameters such as the amount and distribution of seismicity, tidal deformation, and seismic structure on and of the terrestrial planets. In addition, recent advances in wavefield modeling have permitted a renewed look at seismic energy transmission and the effects of attenuation and scattering, as well as the presence and effect of a core, on recorded seismograms. In this chapter, we will review these approaches.

  20. Constructing realistic Szekeres models from initial and final data

    Energy Technology Data Exchange (ETDEWEB)

    Walters, Anthony; Hellaby, Charles, E-mail: tony.walters@uct.ac.za, E-mail: charles.hellaby@uct.ac.za [Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch, 7701 (South Africa)

    2012-12-01

    The Szekeres family of inhomogeneous solutions, which are defined by six arbitrary metric functions, offers a wide range of possibilities for modelling cosmic structure. Here we present a model construction procedure for the quasispherical case using given data at initial and final times. Of the six arbitrary metric functions, the three which are common to both Szekeres and Lemaître-Tolman models are determined by the model construction procedure of Krasinski and Hellaby. For the remaining three functions, which are unique to Szekeres models, we derive exact analytic expressions in terms of more physically intuitive quantities — density profiles and dipole orientation angles. Using MATLAB, we implement the model construction procedure and simulate the time evolution.
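    For readers unfamiliar with the notation, the quasispherical Szekeres line element and its Lemaître-Tolman-type evolution are standard background (not quoted in the abstract):

    ```latex
    ds^2 = -dt^2 + \frac{(R' - R\,E'/E)^2}{1-k}\,dr^2 + \frac{R^2}{E^2}\,(dp^2 + dq^2),
    \qquad
    E = \frac{S}{2}\left[\frac{(p-P)^2}{S^2} + \frac{(q-Q)^2}{S^2} + 1\right]
    ```

    with the areal radius evolving as \dot{R}^2 = 2M/R - k + \Lambda R^2/3. The three Lemaître-Tolman functions fixed by the Krasinski-Hellaby procedure are M(r), k(r) and the bang time t_B(r); the Szekeres-specific functions S(r), P(r), Q(r) set the dipole, which the paper ties to density profiles and orientation angles.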

  1. Branding approach and valuation models

    Directory of Open Access Journals (Sweden)

    Mamula Tatjana

    2006-01-01

    Much of the skill of marketing and branding nowadays is concerned with building equity for products whose characteristics, pricing, distribution and availability are really quite close to each other. Brands allow the consumer to shop with confidence. The real power of successful brands is that they meet the expectations of those that buy them or, to put it another way, they represent a promise kept. As such they are a contract between a seller and a buyer: if the seller keeps to its side of the bargain, the buyer will be satisfied; if not, the buyer will in future look elsewhere. Understanding consumer perceptions and associations is an important first step to understanding brand preferences and choices. In this paper, we discuss different models for measuring brand value, following several well-known approaches used at companies' request. We rely upon several empirical examples.

  2. Exploitation of parallelism in climate models. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Baer, Ferdinand; Tribbia, Joseph J.; Williamson, David L.

    2001-02-05

    This final report includes details on the research accomplished under the grant entitled 'Exploitation of Parallelism in Climate Models' awarded to the University of Maryland. The purpose of the grant was to shed light on (a) how to reconfigure the atmospheric prediction equations such that the time iteration process could be compressed by use of MPP architecture; (b) how to develop local subgrid scale models which can provide time and space dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics; and (c) how to capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. In the process of addressing these issues, we created parallel algorithms with spectral accuracy; we developed a process for concurrent climate simulations; we established suitable model reconstructions to speed up computation; we identified and tested optimum realization statistics; we undertook a number of parameterization studies to better understand model physics; and we studied the impact of subgrid scale motions and their parameterization in atmospheric models.

  3. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
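    For a finite set of alternative models, the equivalence noted here amounts to treating the model index as one more uncertain parameter; a minimal numerical sketch (hypothetical rates and priors):

    ```python
    import math
    import numpy as np

    # Two alternative models of a component failure rate (events/year); priors are judgmental.
    rates = np.array([1.0, 2.5])           # hypothetical candidate models
    priors = np.array([0.6, 0.4])

    k, T = 3, 2.0                          # observed: 3 failures in 2 years

    # Poisson likelihood of the data under each model.
    lik = np.exp(-rates * T) * (rates * T) ** k / math.factorial(k)

    posterior = priors * lik / np.sum(priors * lik)   # Bayes over the model index
    print(posterior, np.sum(posterior * rates))       # model-averaged rate estimate
    ```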

  4. The Role of Model Configuration in Simulating Spring Final Warming

    Science.gov (United States)

    McDaniel, B.

    2017-12-01

    The author performs a study of the relation between stratospheric final warmings (SFWs) and the climatological flow of the stratosphere and boreal extratropical tropospheric circulation. In contrast to the climatological seasonal cycle, SFW events noticeably sharpen the annual weakening of high-latitude circumpolar westerlies in both the stratosphere and troposphere. SFW events provide a strong organizing influence upon the large-scale circulation of the stratosphere and troposphere during the period of spring onset and are an important feature for assessing the ability of a model to accurately reproduce observed variability. The ability of state-of-the-art climate models to accurately represent this transition of the stratosphere is crucial for accurately simulating variability in the stratosphere and stratosphere-troposphere interactions. To assess the veracity of stratospheric simulations in current models, a suite of runs from different members of the CMIP5 experiment is analyzed. For each model, the average date of spring onset and other descriptive statistics are calculated, along with the composite evolution of zonal wind, temperature, and geopotential height anomalies as they propagate from the stratosphere down to the surface. These composite patterns are then compared with the canonical evolution based on observations. The results are binned separately by the stratospheric resolution of the model (so-called high-top and low-top models) and by the strength of the climatological wintertime polar vortex, to identify biases present in different classes of models.
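    One common operational definition of the SFW date, in the spirit of this line of work, is the final time the zonal-mean zonal wind at 50 hPa, 70 deg N drops below zero without recovering; a sketch of applying such a detector to a (synthetic) wind series:

    ```python
    import numpy as np

    def sfw_index(u, threshold=0.0):
        """Index of the final drop of the daily zonal-mean zonal wind u
        (e.g., at 50 hPa, 70 deg N) below `threshold` with no later recovery."""
        below = u < threshold
        if not below[-1]:
            return None                      # westerlies recover: no SFW in this series
        above = np.where(~below)[0]
        return int(above[-1]) + 1 if above.size else 0

    days = np.arange(240)                    # winter through summer
    u = 30 * np.cos(np.pi * days / 180) + np.random.default_rng(3).normal(0, 2, 240)
    print("SFW on day index:", sfw_index(u))
    ```

    Applying the same detector to each model run and to reanalysis winds yields the onset-date statistics that the composites are built around.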

  5. Repository documentation rethought. A comprehensive approach from untreated waste to waste packages for final disposal

    Energy Technology Data Exchange (ETDEWEB)

    Anthofer, Anton Philipp; Schubert, Johannes [VPC GmbH, Dresden (Germany)

    2017-11-15

    The German Act on Reorganization of Responsibility for Nuclear Disposal (Entsorgungsuebergangsgesetz (EntsorgUebG)) adopted in June 2017 provides the energy utilities with the new option of transferring responsibility for their waste packages to the Federal Government. This is conditional on the waste packages being approved for delivery to the Konrad final repository. A comprehensive approach starts with the dismantling of nuclear facilities and extends from waste disposal and packaging planning to final repository documentation. Waste package quality control measures are planned and implemented as early as in the process qualification stage so that the production of waste packages that are suitable for final deposition can be ensured. Optimization of cask and loading configuration can save container and repository volume. Workflow planning also saves time, expenditure and exposure time for personnel at the facilities. VPC has evaluated this experience and developed it into a comprehensive approach.

  6. Learning Action Models: Qualitative Approach

    NARCIS (Netherlands)

    Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model).

  7. modeling, observation and control, a multi-model approach

    OpenAIRE

    Elkhalil, Mansoura

    2011-01-01

    This thesis is devoted to the control of systems whose dynamics can be suitably described by a multimodel approach, starting from an investigation of performance enhancement for model reference adaptive control. Four multimodel control approaches have been proposed. The first approach is based on an output reference model control design. A successful experimental validation involving a chemical reactor has been carried out. The second approach is based on a suitable partial state model reference ...

  8. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, and after embedding mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  9. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
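    The abstract does not give the dynamic EROI function's form; one simple way to encode the two competing effects it names, declining resource quality and technological learning, is a sketch like the following (functional forms and constants are assumptions, not the paper's):

    ```python
    import numpy as np

    def eroi(cum_production, tech_level, eroi0=30.0, depletion=0.02, learning=0.15):
        """Illustrative dynamic EROI: resource quality decays with cumulative
        production while technological learning pushes the other way."""
        quality = np.exp(-depletion * cum_production)
        progress = tech_level ** learning            # experience-curve term
        return eroi0 * quality * progress

    def net_energy(gross, eroi_value):
        """Energy delivered to society after paying the energy cost of supply."""
        return gross * (1.0 - 1.0 / eroi_value)

    e = eroi(cum_production=20.0, tech_level=8.0)
    print(e, net_energy(100.0, e))
    ```

    Because net energy collapses as EROI approaches 1, scenarios built on such a function are sensitive to whether learning outpaces depletion, which is the paper's central question for the renewable transition.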

  10. Learning Actions Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

    We check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power—they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning methods suited for finite identifiability of particular types of deterministic actions.

  11. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and arguing why we...

  12. A new approach to modeling aviation accidents

    Science.gov (United States)

    Rao, Arjun Harsha

    This research views aviation accidents as a set of hazardous states of a system (pilot and aircraft), and triggers that cause the system to move between hazardous states. I used the NTSB's accident coding manual (which contains nearly 4000 different codes) to develop a "dictionary" of hazardous states, triggers, and information codes. Then, I created the "grammar", or set of rules, that: (1) orders the hazardous states in each accident; and (2) links the hazardous states using the appropriate triggers. This approach: (1) provides a more correct count of the causes for accidents in the NTSB database; and (2) checks for gaps or omissions in NTSB accident data, and fills in some of these gaps using logic-based rules. These rules also help identify and count causes for accidents that were not discernible from previous analyses of historical accident data. I apply the model to 6200 helicopter accidents that occurred in the US between 1982 and 2015. First, I identify the states and triggers that are most likely to be associated with fatal and non-fatal accidents. The results suggest that non-fatal accidents, which account for approximately 84% of the accidents, provide valuable opportunities to learn about the causes for accidents. Next, I investigate the causes of inflight loss of control (LOC) using both a conventional approach and the state-based approach. The conventional analysis provides little insight into the causal mechanism for LOC. For instance, the top cause of LOC is "aircraft control/directional control not maintained", which does not provide any insight. In contrast, the state-based analysis showed that pilots' tendency to clip objects frequently triggered LOC (16.7% of LOC accidents)—this finding was not directly discernible from conventional analyses. Finally, I investigate the causes of improper autorotations using both the conventional approach and the state-based approach. The conventional approach uses modifiers (e.g., "improper", "misjudged") associated with...
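
    The state-and-trigger "grammar" can be made concrete with a toy example. The following Python sketch uses invented state and trigger names (not actual NTSB codes) to show how an accident is represented as an ordered sequence of hazardous states linked by triggers.

    ```python
    # Illustrative sketch only: a toy version of the state-based accident
    # representation described above. State and trigger names are invented.
    hazardous_states = {"S1": "low rotor RPM", "S2": "inflight loss of control"}
    triggers = {"T1": "clipped object"}

    # An accident is an ordered sequence of hazardous states linked by triggers.
    accident = [("S1", None), ("S2", "T1")]  # S1 precedes S2, linked via T1

    def describe(accident):
        steps = []
        for state, trig in accident:
            label = hazardous_states[state]
            steps.append(f"{label} (via {triggers[trig]})" if trig else label)
        return " -> ".join(steps)

    print(describe(accident))
    # low rotor RPM -> inflight loss of control (via clipped object)
    ```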

  13. A Multivariate Approach to Functional Neuro Modeling

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.

    1998-01-01

    The thesis provides the basis for a generalization theoretical framework relating model performance to model complexity and dataset size. Briefly summarized, the major topics discussed in the thesis include: - An introduction of the representation of functional datasets by pairs of neuronal activity patterns, illustrated by the application of linear and more flexible, nonlinear microscopic regression models to a real-world dataset. The dependency of model performance, as quantified by generalization error, on model flexibility and training set size is demonstrated, leading to the important realization that no uniformly optimal model exists. - Model visualization and interpretation techniques. The simplicity of this task for linear models contrasts the difficulties involved when dealing with nonlinear models. Finally, a visualization technique for nonlinear models is proposed. A single observation emerges from the thesis...

  14. Szekeres models: a covariant approach

    Science.gov (United States)

    Apostolopoulos, Pantelis S.

    2017-05-01

    We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly which satisfies a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field E_ab. We show that the quasi-symmetric property of the Szekeres models is justified through the existence of 3 independent intrinsic Killing vector fields (IKVFs). In addition the notions of the apparent and absolute apparent horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express Sachs' optical equations in a covariant form and analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.

  15. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    Multiple-model approaches have appeal in building systems which operate robustly over a wide range of operating conditions by decomposing them into a number of simpler linear modelling or control problems, even for nonlinear modelling or control problems. This appeal has been a factor in the development of increasingly popular 'local' approaches to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning. The underlying question is 'How should we partition the system - what is local?'. This book presents alternative ways of bringing submodels together, which lead to varying levels of performance and insight. Some are further developed for autonomous learning of parameters from data, while others have focused...

  16. Modeling software behavior a craftsman's approach

    CERN Document Server

    Jorgensen, Paul C

    2009-01-01

    A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models-focusing on what the software is, rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. It is based on the popular and continually evolving course on requirements specification models taught by the author.

  17. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

    A comparison of approaches showed that the models produced identical state space results. The combined state space graph of the Petri model allowed a quick assessment of all potential states but was more cumbersome to build than the MP model.

  18. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Many different approaches have been developed to model and simulate gene regulatory networks. We propose the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we describe examples of each of these categories. We study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding network dynamics, we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called the Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
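
    A minimal sketch of the discrete end of this modelling spectrum is given below in Python: a three-gene Boolean network stepped forward in time. The genes and regulatory rules are invented; the Finite State Linear Model itself additionally combines linear regulatory inputs with discrete states, which this toy omits.

    ```python
    # Toy Boolean gene network in the spirit of the discrete models mentioned
    # above (gene names and regulatory rules are invented for illustration).
    def step(state):
        a, b, c = state["A"], state["B"], state["C"]
        return {
            "A": int(not c),    # C represses A
            "B": int(a),        # A activates B
            "C": int(a and b),  # A and B jointly activate C
        }

    state = {"A": 1, "B": 0, "C": 0}
    for t in range(5):
        print(t, state)
        state = step(state)  # the trajectory settles into a cycle
    ```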

  19. Characterize and Model Final Waste Formulations and Offgas Solids from Thermal Treatment Processes - FY-98 Final Report for LDRD 2349

    Energy Technology Data Exchange (ETDEWEB)

    Kessinger, Glen Frank; Nelson, Lee Orville; Grandy, Jon Drue; Zuck, Larry Douglas; Kong, Peter Chuen Sun; Anderson, Gail

    1999-08-01

    The purpose of LDRD #2349, Characterize and Model Final Waste Formulations and Offgas Solids from Thermal Treatment Processes, was to develop a set of tools that would allow the user, based on the chemical composition of a waste stream to be immobilized, to predict the durability (leach behavior) of the final waste form and the phase assemblages present in it. The objectives of the project were:
    • investigation, testing and selection of a thermochemical code
    • development of an auxiliary thermochemical database
    • synthesis of materials for leach testing
    • collection of leach data
    • use of leach data for leach model development
    • thermochemical modeling
    The progress toward completion of these objectives, and a discussion of the work that remains to bring the project to a logical finishing point, are presented.

  20. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  1. Validation of Modeling Flow Approaching Navigation Locks

    Science.gov (United States)

    2013-08-01

    USACE, Pittsburgh District (LRP) requested that the US Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory, validate a model of flow approaching the lock and dam. The second set of experiments considered a design, referred to as the Plan B lock approach, which contained the weir field in... A discharge of 1.35 cfs was set as the inflow boundary condition at the upstream end of the model. The outflow boundary was...

  2. Development of an international safeguards approach to the final disposal of spent fuel in geological repositories

    International Nuclear Information System (INIS)

    Murphey, W.M.; Moran, B.W.; Fattah, A.

    1996-01-01

    The International Atomic Energy Agency (IAEA) is currently pursuing development of an international safeguards approach for the final disposal of spent fuel in geological repositories through consultants meetings and through the Program for Development of Safeguards for Final Disposal of Spent Fuel in Geological Repositories (SAGOR). The consultants meetings provide policy guidance to IAEA; SAGOR recommends effective approaches that can be efficiently implemented by IAEA. The SAGOR program, which is a collaboration of eight Member State Support Programs (MSSPs), was initiated in July 1994 and has identified 15 activities in each of three areas (i.e. conditioning facilities, active repositories, and closed repositories) that must be performed to ensure an efficient, yet effective safeguards approach. Two consultants meetings have been held: the first in May 1991 and the last in November 1995. For nuclear materials emplaced in a geological repository, the safeguards objectives were defined to be (1) to detect the diversion of spent fuel, whether concealed or unconcealed, from the repository and (2) to detect undeclared activities of safeguards concern (e.g., tunneling, underground reprocessing, or substitution in containers)

  3. Technical approach to finalizing sensible soil cleanup levels at the Fernald Environmental Management Project

    International Nuclear Information System (INIS)

    Carr, D.; Hertel, B.; Jewett, M.; Janke, R.; Conner, B.

    1996-01-01

    The remedial strategy for addressing contaminated environmental media was recently finalized for the US Department of Energy's (DOE) Fernald Environmental Management Project (FEMP) following almost 10 years of detailed technical analysis. The FEMP represents one of the first major nuclear facilities to successfully complete the Remedial Investigation/Feasibility Study (RI/FS) phase of the environmental restoration process. A critical element of this success was the establishment of sensible cleanup levels for contaminated soil and groundwater both on and off the FEMP property. These cleanup levels were derived based upon a strict application of Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) regulations and guidance, coupled with positive input from the regulatory agencies and the local community regarding projected future land uses for the site. The approach for establishing the cleanup levels was based upon a Feasibility Study (FS) strategy that examined a bounding range of viable future land uses for the site. Within each land use, the cost and technical implications of a range of health-protective cleanup levels for the environmental media were analyzed. Technical considerations in deriving these cleanup levels included: direct exposure routes to viable human receptors; cross-media impacts to air, surface water, and groundwater; technical practicality of attaining the levels; volume of affected media; impact to sensitive environmental receptors or ecosystems; and cost. This paper will discuss the technical approach used to support the finalization of the cleanup levels for the site. The final cleanup levels provide the last remaining significant piece to the puzzle of establishing a final site-wide remedial strategy for the FEMP, and position the facility for the expedient completion of site-wide remedial activities

  4. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
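
    The hybrid idea can be sketched in a few lines: a first-principles prediction is corrected by an autoregressive model fitted to its residuals. The signals and AR(2) structure below are assumptions for illustration and do not reproduce the paper's physiological model.

    ```python
    import numpy as np

    # Sketch of the hybrid approach under simplifying assumptions: synthetic
    # "truth" and an imperfect "physics" prediction stand in for measured core
    # temperature and the first-principles model.
    rng = np.random.default_rng(0)
    t = np.arange(200)
    truth = 37.0 + 0.8 * np.sin(2 * np.pi * t / 40)          # "core temperature"
    physics = 37.0 + 0.6 * np.sin(2 * np.pi * t / 40 + 0.2)  # imperfect model
    resid = truth - physics

    # Fit AR(2) to the residuals by least squares: r_t ~ a1*r_{t-1} + a2*r_{t-2}
    X = np.column_stack([resid[1:-1], resid[:-2]])
    y = resid[2:]
    a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]

    hybrid = physics[2:] + a1 * resid[1:-1] + a2 * resid[:-2]
    print("physics-only RMSE:", np.sqrt(np.mean((truth[2:] - physics[2:]) ** 2)))
    print("hybrid RMSE:     ", np.sqrt(np.mean((truth[2:] - hybrid) ** 2)))
    ```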

  5. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    Energy Technology Data Exchange (ETDEWEB)

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  6. Risk Modelling for Passages in Approach Channel

    Directory of Open Access Journals (Sweden)

    Leszek Smolarek

    2013-01-01

    Methods of multivariate statistics, stochastic processes, and simulation are used to identify and assess risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along the approach channel. These models, combined with simulation testing, are used to determine the time required for continuous monitoring of endangered objects or the period at which the level of risk should be verified.
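
    As a minimal illustration of the Markov-model component, the Python sketch below propagates an assumed three-state passage-risk chain; the state names and transition probabilities are invented, not estimated from traffic data.

    ```python
    import numpy as np

    # Toy ship-passage risk chain: "normal", "endangered", "incident".
    # Transition probabilities are invented for illustration.
    P = np.array([
        [0.95, 0.04, 0.01],   # from normal
        [0.60, 0.35, 0.05],   # from endangered
        [0.00, 0.00, 1.00],   # incident is absorbing
    ])

    state = np.array([1.0, 0.0, 0.0])  # start in "normal"
    for _ in range(50):
        state = state @ P              # one passage per step
    print("P(incident) after 50 passages:", round(state[2], 3))
    ```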

  7. Modeling the Pan-Arctic terrestrial and atmospheric water cycle. Final report; FINAL

    International Nuclear Information System (INIS)

    Gutowski, W.J. Jr.

    2001-01-01

    This report describes results of DOE grant DE-FG02-96ER61473 to Iowa State University (ISU). Work on this grant was performed at Iowa State University and at the University of New Hampshire in collaboration with Dr. Charles Vorosmarty and fellow scientists at the University of New Hampshire's (UNH's) Institute for the Study of the Earth, Oceans, and Space, a subcontractor to the project. Research performed for the project included development, calibration and validation of a regional climate model for the pan-Arctic, modeling river networks, extensive hydrologic database development, and analyses of the water cycle, based in part on the assembled databases and models. Details appear in publications produced from the grant

  8. Towards new approaches in phenological modelling

    Science.gov (United States)

    Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas

    2014-05-01

    Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). The limiting factor for a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop this kind of model, changes in metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. The suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
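
    Reaumur's temperature-sum concept is easy to state in code. The sketch below computes growing degree-days over a window of daily mean temperatures; the base temperature, the temperature series, and the forcing requirement mentioned in the comment are illustrative assumptions.

    ```python
    # Growing degree-days after Reaumur's concept: daily mean temperatures above
    # a base threshold are summed until a forcing requirement is met.
    def growing_degree_days(daily_mean_temps, base=5.0):
        return sum(max(t - base, 0.0) for t in daily_mean_temps)

    temps = [3.2, 6.1, 8.4, 12.0, 9.5, 4.8, 11.2]  # hypothetical daily means (deg C)
    gdd = growing_degree_days(temps)
    print(f"accumulated GDD: {gdd:.1f}")  # e.g., flowering once GDD exceeds 150
    ```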

  9. Ecotoxicological modelling of cosmetics for aquatic organisms: A QSTR approach.

    Science.gov (United States)

    Khan, K; Roy, K

    2017-07-01

    In this study, externally validated quantitative structure-toxicity relationship (QSTR) models were developed for the toxicity of cosmetic ingredients to three ecotoxicologically relevant organisms, namely Pseudokirchneriella subcapitata, Daphnia magna and Pimephales promelas, following the OECD guidelines. The final models were developed with the partial least squares (PLS) regression technique, which is more robust than multiple linear regression. The obtained model for P. subcapitata shows that molecular size and complexity have significant impacts on the toxicity of cosmetics. In the case of P. promelas and D. magna, we found that the largest contributions to toxicity came from hydrophobicity and van der Waals surface area, respectively. All models were validated using both internal and test compounds employing multiple strategies. For each QSTR model, applicability domain studies were also performed using the "Distance to Model in X-space" method. A comparison was made with ECOSAR predictions to demonstrate the good predictive performance of the developed models. Finally, the individual models were applied to predict toxicity for an external set of 596 personal care products having no experimental data for at least one of the endpoints, and the compounds were ranked in decreasing order of toxicity using a scaling approach.
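
    The PLS regression step can be illustrated with synthetic data. In the Python sketch below, the descriptor matrix and toxicity endpoint are randomly generated stand-ins; the paper's actual descriptors and model coefficients are not reproduced.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Synthetic stand-in for a QSTR dataset: 5 molecular descriptors (e.g.,
    # hydrophobicity, van der Waals surface area) and a pLC50-like endpoint.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(60, 5))
    y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.3, size=60)

    pls = PLSRegression(n_components=2)
    pls.fit(X[:40], y[:40])                       # training set
    print("test R^2:", round(pls.score(X[40:], y[40:]), 3))  # external validation
    ```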

  10. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  11. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study comprised four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  12. Microgravity experiments of nano-satellite docking mechanism for final rendezvous approach and docking phase

    Science.gov (United States)

    Ui, Kyoichi; Matunaga, Saburo; Satori, Shin; Ishikawa, Tomohiro

    2005-09-01

    Laboratory for Space Systems (LSS), Tokyo Institute of Technology (Tokyo Tech) conducted three-dimensional microgravity experiments on a docking mechanism for a mothership-daughtership (MS-DS) nano-satellite, using the facility of the Japan Micro Gravity Center (JAMIC) together with the Hokkaido Institute of Technology (HIT). LSS has studied and developed a docking mechanism for the MS-DS nano-satellite system in the final rendezvous approach and docking phase since 2000. The docking mechanism is designed to mate with a nano-satellite stably despite residual control errors in relative velocity and attitude, because it is difficult for a nano-satellite to carry complicated attitude control and mating systems. The objective of the experiments was to verify the fundamental grasping function of our proposed docking methodology. The proposed docking sequence is divided into an approach/grasping phase and a guiding phase. In the approach/grasping phase, the docking mechanism grasps the nano-satellite even though the nano-satellite has relative position and attitude control errors as well as relative velocity in the docking space. In the guiding phase, the docking mechanism guides the nano-satellite to a docking port while adjusting its attitude in order to transfer electrical power and fuel to the nano-satellite. In this paper, we describe the experimental system, including the docking mechanism, the control system, the daughtership system and the release mechanism, and present the results of the microgravity experiments at JAMIC.

  13. Collision With Trees on Final Approach Federal Express Flight 1478 Boeing 727-232, N497FE, Tallahassee, Florida

    National Research Council Canada - National Science Library

    2002-01-01

    This report explains the accident involving Federal Express flight 1478, a Boeing 727-232F, N497FE, which struck trees on short final approach and crashed short of runway 9 at the Tallahassee Regional Airport.

  14. VALMET: a valley air pollution model. Final report. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Whiteman, C.D.; Allwine, K.J.

    1985-04-01

    An air quality model is described for predicting air pollution concentrations in deep mountain valleys arising from nocturnal down-valley transport and diffusion of an elevated pollutant plume, and the fumigation of the plume on the valley floor and sidewalls after sunrise. Included is a technical description of the model, a discussion of the model's applications, the required model inputs, sample calculations and model outputs, and a full listing of the FORTRAN computer program. 55 refs., 27 figs., 6 tabs.

  15. Neural network approaches for noisy language modeling.

    Science.gov (United States)

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with the computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. These features are particularly evident in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory and applies neural networks to one of its specific applications, the typing stream. The paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and in specific ways, to identify their typing behaviors and, subsequently, to make typing predictions and typing corrections. A focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gaps, and a probabilistic neural network (PNN) model are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained based on the analysis of a user's typing history through FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% for the current testing samples, with 70% of all test scores above basic correction rates. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool for analyzing the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.
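
    The core prediction task can be illustrated without a neural network: the sketch below predicts the next key from a fixed window of previous keys using a frequency table, a crude stand-in for the FTDNN's time-delay structure. The typing stream and window length are invented.

    ```python
    # Next-key prediction from a fixed window of previous keys, a simple
    # frequency-table stand-in for the time-delay structure described above.
    stream = list("the theme then the them")
    window = 3
    counts = {}
    for i in range(len(stream) - window):
        ctx, nxt = tuple(stream[i:i + window]), stream[i + window]
        counts.setdefault(ctx, {}).setdefault(nxt, 0)
        counts[ctx][nxt] += 1

    ctx = tuple(stream[-window:])
    if ctx in counts:
        prediction = max(counts[ctx], key=counts[ctx].get)
        print(f"after {ctx!r} predict {prediction!r}")
    ```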

  16. Dynamic process model of a plutonium oxalate precipitator. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C.L.; Hammelman, J.E.; Borgonovi, G.M.

    1977-11-01

    In support of LLL material safeguards program, a dynamic process model was developed which simulates the performance of a plutonium (IV) oxalate precipitator. The plutonium oxalate precipitator is a component in the plutonium oxalate process for making plutonium oxide powder from plutonium nitrate. The model is based on state-of-the-art crystallization descriptive equations, the parameters of which are quantified through the use of batch experimental data. The dynamic model predicts performance very similar to general Hanford oxalate process experience. The utilization of such a process model in an actual plant operation could promote both process control and material safeguards control by serving as a baseline predictor which could give early warning of process upsets or material diversion. The model has been incorporated into a FORTRAN computer program and is also compatible with the DYNSYS 2 computer code which is being used at LLL for process modeling efforts.

  17. Dynamic process model of a plutonium oxalate precipitator. Final report

    International Nuclear Information System (INIS)

    Miller, C.L.; Hammelman, J.E.; Borgonovi, G.M.

    1977-11-01

    In support of LLL material safeguards program, a dynamic process model was developed which simulates the performance of a plutonium (IV) oxalate precipitator. The plutonium oxalate precipitator is a component in the plutonium oxalate process for making plutonium oxide powder from plutonium nitrate. The model is based on state-of-the-art crystallization descriptive equations, the parameters of which are quantified through the use of batch experimental data. The dynamic model predicts performance very similar to general Hanford oxalate process experience. The utilization of such a process model in an actual plant operation could promote both process control and material safeguards control by serving as a baseline predictor which could give early warning of process upsets or material diversion. The model has been incorporated into a FORTRAN computer program and is also compatible with the DYNSYS 2 computer code which is being used at LLL for process modeling efforts

  18. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but in a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, gaining a deeper, intrinsic grasp of the models behind heat transfer. Developed from over twenty-five years of lecture notes used to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering disciplines.
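
    A "1-node lumped model" in this spirit is the classic lumped-capacitance cooling problem, which has the closed-form solution T(t) = T_inf + (T0 - T_inf) * exp(-t/tau) with tau = m*c/(h*A). The Python sketch below evaluates it for assumed illustrative values.

    ```python
    import math

    # Lumped-capacitance cooling: a body at uniform temperature losing heat by
    # convection. All parameter values below are illustrative assumptions.
    m, c = 0.5, 900.0       # mass (kg), specific heat (J/kg-K)
    h, A = 25.0, 0.03       # convection coefficient (W/m^2-K), surface area (m^2)
    T0, T_inf = 95.0, 20.0  # initial and ambient temperature (deg C)

    tau = m * c / (h * A)   # time constant (s)
    for t in (0, 60, 300, 900):
        T = T_inf + (T0 - T_inf) * math.exp(-t / tau)
        print(f"t = {t:4d} s  T = {T:6.1f} C   (tau = {tau:.0f} s)")
    ```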

  19. Cupola modeling research: Phase 2 (Year one), Final report

    Energy Technology Data Exchange (ETDEWEB)

    1991-11-20

    The objective was to develop a mathematical model of the cupola furnace (used in cast iron production) for use in on-line and off-line process control and optimization. In Phase I, the general structure of the heat transfer, fluid flow, and chemical models was laid out, providing reasonable descriptions of cupola behavior with a one-dimensional representation. Work was also initiated on a two-dimensional model. Phase II focused on perfecting the one-dimensional model. Contributions came from MIT, Michigan University, and GM.

  20. New Heat Flow Models in Fractured Geothermal Reservoirs - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Reis, John

    2001-03-31

    This study developed new analytical models for predicting the temperature distribution within a geothermal reservoir following reinjection of water having a temperature different from that of the reservoir. The study consisted of two parts: developing new analytical models for the heat conduction rate into multi-dimensional, parallelepiped matrix blocks and developing new analytical models for the advance of the thermal front through the geothermal reservoir. In the first part of the study, a number of semi-empirical models for the multi-dimensional heat conduction were developed to overcome the limitations of the exact solutions. The exact solution based on a similarity solution to the heat diffusion equation is the best model for the early-time period, but fails when thermal conduction fronts from opposing sides of the matrix block merge. The exact solution based on an infinite series solution was found not to be useful because it required tens of thousands of terms to be included for accuracy. The best overall model for the entire conduction time was a semi-empirical model based on an exponential conduction rate. In the second part of the study, the early-time exact solution based on similarity methods and the semi-empirical exponential model were used to develop new analytical models for the location of the thermal front within the reservoir during injection. These equations were based on an energy balance on the water in the fracture network. These convective models allowed for both dual and triple porosity reservoirs, i.e., one or two independent matrix domains. A method for incorporating measured fracture spacing distributions into these convective models was developed. It was found that there were only minor differences in the predicted areal extent of the heated zone between the dual and triple porosity models. Because of its simplicity, the dual porosity model is recommended. These new models can be used for preliminary reservoir studies.

  1. Modelling of Biomass Combustor : Final assignment energy from biomass

    NARCIS (Netherlands)

    Ammerlaan, R.; Van den Hill, E.J.; Kaas, A.W.S.; Verburg, M.W.

    2010-01-01

    In this study a 1.1 MW fluidized bed combustor is modeled. A literature study is performed on aspects which determine the characteristics of the combustor. A model is set up and calculations for the design of the Fluidized Bed Combustor (FBC) are performed. Characteristics are calculated for the

  2. Mathematical Models of Elementary Mathematics Learning and Performance. Final Report.

    Science.gov (United States)

    Suppes, Patrick

    This project was concerned with the development of mathematical models of elementary mathematics learning and performance. Probabilistic finite automata and register machines with a finite number of registers were developed as models and extensively tested with data arising from the elementary-mathematics strand curriculum developed by the…

  3. Regional forecasting with global atmospheric models; Final report

    Energy Technology Data Exchange (ETDEWEB)

    Crowley, T.J.; Smith, N.R. [Applied Research Corp., College Station, TX (United States)

    1994-05-01

    The purpose of the project was to conduct model simulations of past and future climate change with respect to the proposed Yucca Mtn. repository. The authors report on three main topics. The first is boundary conditions for paleo-hindcast studies, which are needed to conduct three to four model simulations; these boundary conditions have been prepared for future runs. The second topic is (a) comparing the atmospheric general circulation model (GCM) with observations and other GCMs, and (b) developing a better precipitation database for the Yucca Mtn. region for comparisons with models. These tasks have been completed. The third topic is preliminary assessments of future climate change. Energy balance model (EBM) simulations suggest that the greenhouse effect will likely dominate climate change at Yucca Mtn. for the next 10,000 years. The EBM study should improve the rational choice of GCM CO2 scenarios for future climate change.

  4. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamic model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton’s second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein’s movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.

  5. Quasirelativistic quark model in quasipotential approach

    CERN Document Server

    Matveev, V A; Savrin, V I; Sissakian, A N

    2002-01-01

    The interaction of relativistic particles is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to methods of constructing various quasipotentials as well as to applications of the quasipotential approach to describing the characteristics of relativistic particle interactions in quark models, namely: the elastic scattering amplitudes of hadrons, the mass spectra and widths of meson decays, and the cross sections of deep inelastic lepton scattering on hadrons.

  6. Time dependent modeling of non-LTE plasmas: Final report

    International Nuclear Information System (INIS)

    1988-06-01

    During the period of performance of this contract Science Applications International Corporation (SAIC) has aided Lawrence Livermore National Laboratory (LLNL) in the development of an unclassified modeling tool for studying time evolution of high temperature ionizing and recombining plasmas. This report covers the numerical code developed, (D)ynamic (D)etailed (C)onfiguration (A)ccounting (DDCA), which was written to run on the National Magnetic Fusion Energy Computing Center (NMFECC) network as well as the classified Livermore Computer Center (OCTOPUS) network. DDCA is a One-Dimensional (1D) time dependent hydrodynamic model which makes use of the non-LTE detailed atomic physics ionization model DCA. 5 refs

  7. A new approach for developing adjoint models

    Science.gov (United States)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
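
    The payoff of the "sequence of linear solves" abstraction can be seen in the smallest case. For a single solve A u = b with objective J = g^T u, one adjoint solve A^T lam = g yields the full gradient dJ/db = lam. The Python sketch below uses random data (not the library described above) and checks the result against a finite difference.

    ```python
    import numpy as np

    # One linear solve and its adjoint: dJ/db from a single extra solve,
    # instead of one forward solve per component of b.
    rng = np.random.default_rng(1)
    n = 5
    A = rng.normal(size=(n, n)) + n * np.eye(n)  # well-conditioned system
    b = rng.normal(size=n)
    g = rng.normal(size=n)

    u = np.linalg.solve(A, b)
    J = g @ u

    lam = np.linalg.solve(A.T, g)   # single adjoint solve
    grad = lam                      # dJ/db

    # Check one component against a finite difference
    eps, i = 1e-6, 2
    b2 = b.copy(); b2[i] += eps
    J2 = g @ np.linalg.solve(A, b2)
    print(grad[i], (J2 - J) / eps)  # should agree to ~1e-6
    ```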

  8. Modeling future power plant location patterns. Final report

    International Nuclear Information System (INIS)

    Eagles, T.W.; Cohon, J.L.; ReVelle, C.

    1979-04-01

    The locations of future energy facilities must be specified to assess the potential environmental impact of those facilities. A computer model was developed to generate probable locations for the energy facilities needed to meet postulated future energy requirements. The model is designed to cover a very large geographical region. The regional demand for baseload electric generating capacity associated with a postulated demand growth rate over any desired time horizon is specified by the user as an input to the model. The model uses linear programming to select the most probable locations within the region, based on physical and political factors. The linear program is multi-objective, with four objective functions based on transmission, coal supply, population proximity, and water supply considerations. Minimizing each objective function leads to a distinct set of locations. The user can select the objective function or weighted combination of objective functions most appropriate to his interest. Users with disparate interests can use the model to see the locational changes which result from varying the weighting of the objective functions. The model has been implemented in a six-state mid-Atlantic region. The year 2000 was chosen as the study year, and a test scenario postulating 2.25% growth in baseload generating capacity between 1977 and 2000 was chosen. The scenario stipulated that this capacity be 50% nuclear and 50% coal-fired. Initial utility reaction indicates the objective based on transmission costs is most important for such a large-scale analysis
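
    The weighted multi-objective selection can be sketched as a small linear program. In the Python example below, the cost vectors, demand target, capacity bounds, and objective weight are all invented for illustration; scipy's linprog stands in for the study's LP machinery.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy weighted multiobjective siting LP: choose capacity x_j at three
    # candidate sites to meet a demand target while minimizing a weighted sum
    # of two cost vectors (say, transmission and coal supply). Numbers invented.
    c_transmission = np.array([3.0, 1.0, 2.0])
    c_coal_supply  = np.array([1.0, 4.0, 2.5])
    w = 0.7                                   # user-chosen objective weight
    c = w * c_transmission + (1 - w) * c_coal_supply

    A_ub = [[-1.0, -1.0, -1.0]]               # total capacity >= 10 GW
    b_ub = [-10.0]
    bounds = [(0, 6)] * 3                     # per-site capacity limits (GW)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print("site capacities:", res.x, "weighted cost:", res.fun)
    ```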

  9. Calculation of extreme wind atlases using mesoscale modeling. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, X.G..; Badger, J.

    2012-06-15

    The objective of this project is to develop new methodologies for extreme wind atlases using mesoscale modeling. Three independent methodologies have been developed. All three methodologies are targeted at confronting and solving the problems and drawbacks in existing methods for extreme wind estimation regarding the use of modeled data (coarse resolution, limited representation of storms) and measurements (short period and technical issues). The first methodology is called the selective dynamical downscaling method. For a chosen area, we identify the yearly strongest storms through global reanalysis data at each model grid point and run a mesoscale model, here the Weather Research and Forecasting (WRF) model, for all storms identified. Annual maximum winds and corresponding directions from each mesoscale grid point are then collected, post-processed and used in a Gumbel distribution fit to obtain the 50-year wind. The second methodology is called the statistical-dynamical downscaling method. For a chosen area, the geostrophic winds at a representative grid point from the global reanalysis data are used to obtain the annual maximum winds in 12 sectors for a period of 30 years. This results in 360 extreme geostrophic winds. Each of the 360 winds is used as a stationary forcing in a mesoscale model, here KAMM. For each mesoscale grid point the annual maximum winds are post-processed and used in a Gumbel fit to obtain the 50-year wind. For the above two methods, the post-processing is an essential part. It calculates the speedup effects using a linear computation model (LINCOM) and corrects the winds from the mesoscale modeling to a standard condition, i.e. 10 m above a homogeneous surface with a roughness length of 5 cm. Winds of the standard condition can then be put into a microscale model to resolve the local terrain and roughness effects around particular turbine sites. By converting both the measured and modeled winds to the same surface conditions through the post-processing...
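
    Common to both methodologies is the final Gumbel fit to annual maxima. The Python sketch below fits a Gumbel distribution to synthetic annual-maximum winds and reads off the 50-year return value; the data and distribution parameters are invented.

    ```python
    import numpy as np
    from scipy import stats

    # Fit annual-maximum winds with a Gumbel distribution and compute the
    # 50-year return value (synthetic 30-year record for illustration).
    rng = np.random.default_rng(7)
    annual_maxima = stats.gumbel_r.rvs(loc=22.0, scale=3.0, size=30,
                                       random_state=rng)

    loc, scale = stats.gumbel_r.fit(annual_maxima)
    u50 = stats.gumbel_r.ppf(1.0 - 1.0 / 50.0, loc, scale)  # 50-year wind (m/s)
    print(f"50-year wind estimate: {u50:.1f} m/s")
    ```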

  10. An equilibrium approach to modelling social interaction

    Science.gov (United States)

    Gallo, Ignacio

    2009-07-01

    The aim of this work is to put forward a statistical mechanics theory of social interaction, generalizing econometric discrete choice models. After showing the formal equivalence linking econometric multinomial logit models to equilibrium statistical mechanics, a multi-population generalization of the Curie-Weiss model for ferromagnets is considered as a starting point in developing a model capable of describing sudden shifts in aggregate human behaviour. Existence of the thermodynamic limit for the model is shown by an asymptotic sub-additivity method and factorization of correlation functions is proved almost everywhere. The exact solution of the model is provided in the thermodynamic limit by finding converging upper and lower bounds for the system's pressure, and the solution is used to prove an analytic result regarding the number of possible equilibrium states of a two-population system. The work stresses the importance of linking regimes predicted by the model to real phenomena, and to this end it proposes two possible procedures to estimate the model's parameters starting from micro-level data. These are applied to three case studies based on census type data: though these studies are found to be ultimately inconclusive on an empirical level, considerations are drawn that encourage further refinements of the chosen modelling approach.
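
    For reference, the single-population Curie-Weiss ingredients that the multi-population model generalizes can be written down compactly. This is the standard textbook form, not notation copied from the paper:

    ```latex
    % Standard Curie-Weiss Hamiltonian and mean-field self-consistency equation;
    % the multi-population version couples one such equation per population group.
    \begin{align*}
      H_N(\sigma) &= -\frac{J}{2N}\sum_{i,j=1}^{N}\sigma_i\sigma_j
                     - h\sum_{i=1}^{N}\sigma_i,
      \qquad \sigma_i \in \{-1,+1\},\\
      m &= \tanh\bigl(\beta\,(J m + h)\bigr)
      \quad \text{(equilibrium mean choice / magnetization)}.
    \end{align*}
    ```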

  11. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    Science.gov (United States)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists among neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Nondominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships incorporated as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
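
    The closeness measure can be sketched in a few lines. In the Python example below, the similarity weights, neighbor-basin parameters, and candidate Pareto sets are invented for illustration; the actual RDHM parameters are not reproduced.

    ```python
    import numpy as np

    # Similarity-weighted distance between a candidate Pareto parameter set and
    # a neighbor basin's calibrated parameters (all values invented).
    weights = np.array([0.9, 0.9, 0.3])      # high weight = assumed similar across basins
    neighbor = np.array([0.45, 1.20, 0.05])  # parameters calibrated in a nearby basin

    def closeness(candidate):
        return np.sum(weights * (candidate - neighbor) ** 2)

    pareto_sets = [np.array([0.50, 1.10, 0.30]),
                   np.array([0.40, 1.25, 0.90]),
                   np.array([0.80, 0.70, 0.10])]
    best = min(pareto_sets, key=closeness)   # regionalization picks this set
    print("selected parameter set:", best)
    ```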

  12. Alligator Rivers Analogue project. Hydrogeological modelling. Final Report - Volume 6

    Energy Technology Data Exchange (ETDEWEB)

    Townley, L.R.; Trefry, M.G.; Barr, A.D. [CSIRO Div of Water Resources, PO Wembley, WA (Australia); Braumiller, S. [Univ of Arizona, Tucson, AZ (United States). Dept of Hydrology and Water Resources; Kawanishi, M. [Central Research Institute of Electric Power Industry, Abiko-Shi, Chiba-Ken (Japan)] [and others

    1992-12-31

    This volume describes hydrogeological modelling carried out as part of the Alligator Rivers Analogue Project. Hydrogeology has played a key integrating role in the Project, largely because water movement is believed to have controlled the evolution of the Koongarra uranium Orebody and therefore affects field observations of all types at all scales. Aquifer testing described uses the concept of transmissivity in its interpretation of aquifer response to pumping. The concept of an aquifer, a layer transmitting significant quantities of water in a mainly horizontal direction, seems hard to accept in an environment as heterogeneous as that at Koongarra. But modelling of aquifers both in one dimension and two dimensionally in plan has contributed significantly to our understanding of the site. A one-dimensional model with three layers (often described as a quasi two dimensional model) was applied to flow between the Fault and Koongarra Creek. Being a transient model, this model was able to show that reverse flows can indeed occur back towards the Fault, but only if there is distributed recharge over the orebody as well as a mechanism for the Fault, or a region near the Fault, to remove water from the simulated cross-section. The model also showed clearly that the response of the three-layered system, consisting of a highly weathered zone, a fractured transmissive zone and a less conductive lower schist zone, is governed mainly by the transmissivity and storage coefficient of the middle layer. The storage coefficient of the higher layer has little effect. A two-dimensional model in plan used a description of anisotropy to show that reverse flows can also occur even without a conducting Fault. Modelling of a three-dimensional region using discrete fractures showed that it is certainly possible to simulate systems like that observed at Koongarra, but that large amounts of data are probably needed to obtain realistic descriptions of the fracture networks. Inverse modelling

  13. Alligator Rivers Analogue project. Hydrogeological modelling. Final Report - Volume 6

    International Nuclear Information System (INIS)

    Townley, L.R.; Trefry, M.G.; Barr, A.D.; Braumiller, S.

    1992-01-01

    This volume describes hydrogeological modelling carried out as part of the Alligator Rivers Analogue Project. Hydrogeology has played a key integrating role in the Project, largely because water movement is believed to have controlled the evolution of the Koongarra uranium Orebody and therefore affects field observations of all types at all scales. Aquifer testing described uses the concept of transmissivity in its interpretation of aquifer response to pumping. The concept of an aquifer, a layer transmitting significant quantities of water in a mainly horizontal direction, seems hard to accept in an environment as heterogeneous as that at Koongarra. But modelling of aquifers both in one dimension and two dimensionally in plan has contributed significantly to our understanding of the site. A one-dimensional model with three layers (often described as a quasi two dimensional model) was applied to flow between the Fault and Koongarra Creek. Being a transient model, this model was able to show that reverse flows can indeed occur back towards the Fault, but only if there is distributed recharge over the orebody as well as a mechanism for the Fault, or a region near the Fault, to remove water from the simulated cross-section. The model also showed clearly that the response of the three-layered system, consisting of a highly weathered zone, a fractured transmissive zone and a less conductive lower schist zone, is governed mainly by the transmissivity and storage coefficient of the middle layer. The storage coefficient of the higher layer has little effect. A two-dimensional model in plan used a description of anisotropy to show that reverse flows can also occur even without a conducting Fault. Modelling of a three-dimensional region using discrete fractures showed that it is certainly possible to simulate systems like that observed at Koongarra, but that large amounts of data are probably needed to obtain realistic descriptions of the fracture networks. Inverse modelling

  14. A new emergency response model for MACCS. Final report

    International Nuclear Information System (INIS)

    Chanin, D.I.

    1992-01-01

    Under DOE sponsorship, as directed by the Los Alamos National Laboratory (LANL), the MACCS code (version 1.5.11.1) [Ch92] was modified to implement a series of improvements in its modeling of emergency response actions. The purpose of this effort has been to aid the Westinghouse Savannah River Company (WSRC) in its performance of the Level III analysis for the Savannah River Site (SRS) probabilistic risk analysis (PRA) of K Reactor [Wo90]. To ensure its usefulness to WSRC, and facilitate the new model's eventual merger with other MACCS enhancements, close cooperation with WSRC and the MACCS development team at Sandia National Laboratories (SNL) was maintained throughout the project. These improvements are intended to allow a greater degree of flexibility in modeling the mitigative actions of evacuation and sheltering. The emergency response model in MACCS version 1.5.11.1 was developed to support NRC analyses of consequences from severe accidents at commercial nuclear power plants. The NRC code imposes unnecessary constraints on DOE safety analyses, particularly for consequences to onsite worker populations, and it has therefore been revamped. The changes to the code have been implemented in a manner that preserves previous modeling capabilities and therefore prior analyses can be repeated with the new code

  15. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

    A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER was developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both of which make up the exhaust stream from the ITER reactor. The model explicitly determines the amount of hydrogen captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. The model also computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model's development, and as a calibration check for its results, it has been extensively compared with measurements of a CFP prototype tested at Oak Ridge National Laboratory. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. The model can therefore be used to refine those tests, and it suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  16. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    The inverse problem of using the information in historical data to estimate model errors is a frontier research topic. In this study, we investigate such a problem using the classic Lorenz (1963) equation as the prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. On this basis, a new EM-based approach to estimating model errors is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors; to a certain extent, it realizes a combination of statistics and dynamics.
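
    The evolutionary-modeling machinery itself (symbolic regression by evolutionary search) is too large for a short sketch, but the model-error signal it would fit can be shown compactly: generate "observations" from a Lorenz system with a periodic term added, then subtract the classic Lorenz tendencies. A minimal sketch, with the form and size of the periodic term assumed:

```python
import numpy as np

def f_model(v):
    # classic Lorenz (1963) tendencies: the (imperfect) prediction model
    x, y, z = v
    return np.array([10.0*(y - x), x*(28.0 - z) - y, x*y - (8.0/3.0)*z])

def f_truth(v, t, amp=2.0, period=5.0):
    # "reality": Lorenz plus a periodic evolutionary term (form assumed here)
    return f_model(v) + np.array([0.0, 0.0, amp*np.cos(2.0*np.pi*t/period)])

# generate "observational data" with an RK4 integrator
dt, n = 0.01, 5000
obs = np.empty((n, 3)); obs[0] = (1.0, 1.0, 20.0)
for k in range(n - 1):
    t, v = k*dt, obs[k]
    k1 = f_truth(v, t)
    k2 = f_truth(v + 0.5*dt*k1, t + 0.5*dt)
    k3 = f_truth(v + 0.5*dt*k2, t + 0.5*dt)
    k4 = f_truth(v + dt*k3, t + dt)
    obs[k+1] = v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# the model-error signal: observed tendencies minus model tendencies; this
# is the series an evolutionary-modeling run would fit a correction term to
tend_obs = np.gradient(obs, dt, axis=0)
err = tend_obs - np.array([f_model(v) for v in obs])
print("mean |model error| per component:", np.abs(err).mean(axis=0))
```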

  17. MODELS OF TECHNOLOGY ADOPTION: AN INTEGRATIVE APPROACH

    Directory of Open Access Journals (Sweden)

    Andrei OGREZEANU

    2015-06-01

    The interdisciplinary study of information technology adoption has developed rapidly over the last 30 years. Various theoretical models have been developed and applied, such as the Technology Acceptance Model (TAM), Innovation Diffusion Theory (IDT), and the Theory of Planned Behavior (TPB). The result of these many years of research is thousands of contributions to the field, which, however, remain highly fragmented. This paper develops a theoretical model of technology adoption by integrating major theories in the field: primarily IDT, TAM, and TPB. To do so without adding to the fragmentation, an approach is proposed that goes back to basics in developing independent-variable types, emphasizing (1) the logic of classification and (2) the psychological mechanisms behind variable types. Once developed, these types are populated with variables originating in empirical research. Conclusions are drawn on which types are underpopulated and present potential for future research. I end with a set of methodological recommendations for future application of the model.

  18. Interfacial Fluid Mechanics A Mathematical Modeling Approach

    CERN Document Server

    Ajaev, Vladimir S

    2012-01-01

    Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in the rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations, and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. The book also discusses mathematical models in the context of actual applications such as electrowetting; includes unique material on fluid flow near structured surfaces and phase change phenomena; and shows readers how to solve modeling problems related to microscale multiphase flows.

  19. Energy-supply planning-model documentation. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1981-09-01

    The Energy Supply Planning Model (ESPM) provides a systematic means of calculating, for any candidate energy development strategy, the total direct resources (capital, labor, materials, equipment, land, water, and energy) required to build and operate the energy-related supply facilities needed for the strategy. The model is used to analyze the feasibility and impacts of proposed strategies. The model has been modified to specifically address issues of importance to energy planning for developing countries and has been adapted for use in studies for Egypt, Indonesia, Peru, Portugal, and South Korea. Computer programs have been developed to assist in the adaptation process. The characteristics of the ESPM are given. The results of the model may be used to analyze the economic and other related impacts of energy strategies. By virtue of the extent of detail in its results, the ESPM can help identify labor, materials, and equipment categories for which shortages may occur unless remedial actions are taken. These results can be used to investigate impacts on specific sectors of the economy in order to determine manufacturing capacity and labor force expansion needs. This manual contains the information required to apply the ESPM once the model has been made operational on a computer system. It is designed to guide an ESPM user through input data preparation, program execution, and output evaluation. Section 2 of this report provides an overview of all the ESPM subprograms, including descriptions of the functions and application of each subprogram. The remaining sections of the report provide data input forms, a detailed description of each input, and sample run decks and reports for each subprogram. (MCW)

  20. Continuum modeling an approach through practical examples

    CERN Document Server

    Muntean, Adrian

    2015-01-01

    This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws, and (3) modeling of boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced with nonstandard rheological topics like those typically arising in the food industry.

  1. Project MAP: Model Accounting Plan for Special Education. Final Report.

    Science.gov (United States)

    Rossi, Robert J.

    The Model Accounting Plan (MAP) is a demographic accounting system designed to meet three major goals related to improving planning, evaluation, and monitoring of special education programs. First, MAP provides local-level data for administrators and parents to monitor the progress, transition patterns, expected attainments, and associated costs…

  2. Ambient Weather Model Research and Development: Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Stel Nathan; Wade, John Edward

    1990-08-31

    Ratings for Bonneville Power Administration (BPA) transmission lines are based upon the IEEE Standard for Calculation of Bare Overhead Conductor Temperatures and Ampacity under Steady-State Conditions (1985). This steady-state model is very sensitive to the ambient weather conditions of temperature and wind speed. The model does not account for wind yaw, turbulence, or conductor roughness as proposed by Davis (1976) for a real-time rating system. The objective of this research has been to (1) determine how conservative the present rating system is for typical ambient weather conditions, (2) develop a probability-based methodology, (3) compile available weather data into a compatible format, and (4) apply the rating methodology to a hypothetical line. The potential benefit from this research is to rate transmission lines statistically, which will allow BPA to take advantage of any unknown thermal capacity. The present deterministic weather model is conservative overall, and studies suggest a refined model will uncover additional unknown capacity. 14 refs., 40 figs., 7 tabs.
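
    The steady-state rating idea reduces to a per-unit-length heat balance on the conductor. The sketch below is not the IEEE standard's calculation: it uses a constant film coefficient instead of the standard's wind-dependent correlations, and every numerical value is an assumed placeholder.

```python
import numpy as np

# Simplified steady-state conductor heat balance (illustrative only):
#   I^2 * R(Tc) + q_solar = q_convection + q_radiation
sigma = 5.670e-8        # Stefan-Boltzmann constant, W/m^2K^4
D = 0.028               # conductor diameter, m (assumed)
h = 25.0                # convective film coefficient, W/m^2K (assumed)
eps = 0.5               # surface emissivity (assumed)
q_s = 15.0              # absorbed solar heating, W/m (assumed)
R_ac = 8.7e-5           # AC resistance at Tc, ohm/m (assumed)
Tc, Ta = 75.0 + 273.15, 25.0 + 273.15   # conductor / ambient, K
A = np.pi * D                            # surface area per metre, m^2/m

q_c = h * A * (Tc - Ta)                          # convection loss, W/m
q_r = eps * sigma * A * (Tc**4 - Ta**4)          # radiation loss, W/m
ampacity = np.sqrt((q_c + q_r - q_s) / R_ac)     # allowable current, A
print(f"ampacity ~ {ampacity:.0f} A")
# Cooler ambient air or stronger wind (larger h) raises the rating, which
# is the thermal capacity a statistical weather model seeks to exploit.
```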

  3. Efforts - Final technical report on task 4. Physical modelling validation

    DEFF Research Database (Denmark)

    Andreasen, Jan Lasson; Olsson, David Dam; Christensen, T. W.

    The present report documents the work carried out at DTU in Task 4, Physical modelling validation, of the Brite/Euram project No. BE96-3340, contract No. BRPR-CT97-0398, entitled Enhanced Framework for forging design using reliable three-dimensional simulation (EFFORTS). The report...

  4. Diagnostic modeling of the ARM experimental configuration. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Somerville, R.C.J.

    1998-04-01

    A major accomplishment of this work was to demonstrate the viability of using in-situ data at both mid-continent North America (SGP CART site) and Tropical Western Pacific (TOGA-COARE) locations to provide the horizontal advective flux convergences which force and constrain the Single-Column Model (SCM), the main theoretical tool of this work. The author has used TOGA-COARE as a prototype for the ARM TWP site. Results show that SCMs can produce realistic budgets over the ARM sites without relying on parameterization-dependent operational numerical weather prediction objective analyses. The single-column model is diagnostic rather than prognostic. It is numerically integrated in time as an initial value problem which is forced and constrained by observational data. The input is an observed initial state, plus observationally derived estimates of the time-dependent advection terms in the conservation equations, provided at all model layers. Its output is a complete heat and water budget, including temperature and moisture profiles, clouds and their radiative properties, diabatic heating terms, surface energy balance components, and hydrologic cycle elements, all specified as functions of time. These SCM results should be interpreted in light of the original motivation and purpose of ARM and its goal of improving the treatment of cloud-radiation interactions in climate models.

  5. Information-preserving models of physics and computation: Final report

    International Nuclear Information System (INIS)

    1986-01-01

    This research pertains to discrete dynamical systems, as embodied by cellular automata, reversible finite-difference equations, and reversible computation. The research has strengthened the cross-fertilization between physics, computer science and discrete mathematics. It has shown that methods and concepts of physics can be exported to computation. Conversely, fully discrete dynamical systems have been shown to be fruitful for representing physical phenomena usually described with differential equations, cellular automata for fluid dynamics being the most noted example of such a representation. At the practical level, the fully discrete representation approach suggests innovative uses of computers for scientific computing. The originality of these uses lies in their non-numerical nature: they avoid the inaccuracies of floating-point arithmetic and bypass the need for numerical analysis. 38 refs
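
    A minimal example of an information-preserving dynamics of the kind the report studies is a second-order reversible cellular automaton, where every update can be undone exactly; this is a generic illustration of the class, not a reconstruction of the report's specific systems.

```python
import numpy as np

# Second-order reversible cellular automaton: with
#   s_{t+1} = F(s_t) XOR s_{t-1},
# the past state is always recoverable as s_{t-1} = F(s_t) XOR s_{t+1},
# so no information is ever lost.
rng = np.random.default_rng(0)

def F(s):
    # any local rule works; here: XOR of left and right neighbours (periodic)
    return np.roll(s, 1) ^ np.roll(s, -1)

s_prev = rng.integers(0, 2, 64, dtype=np.uint8)
s_curr = rng.integers(0, 2, 64, dtype=np.uint8)
start = (s_prev.copy(), s_curr.copy())
for _ in range(100):                       # run forward
    s_prev, s_curr = s_curr, F(s_curr) ^ s_prev
for _ in range(100):                       # run backward, exactly
    s_prev, s_curr = F(s_prev) ^ s_curr, s_prev
assert np.array_equal(s_prev, start[0]) and np.array_equal(s_curr, start[1])
print("dynamics reversed exactly; no floating-point arithmetic involved")
```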

  6. Computational modeling of drug-resistant bacteria. Final report

    International Nuclear Information System (INIS)

    2015-01-01

    Initial proposal summary: The evolution of antibiotic-resistant mutants among bacteria (superbugs) is a persistent and growing threat to public health. In many ways, we are engaged in a war with these microorganisms, where the corresponding arms race involves chemical weapons and biological targets. Just as advances in microelectronics, imaging technology and feature recognition software have turned conventional munitions into smart bombs, the long-term objectives of this proposal are to develop highly effective antibiotics using next-generation biomolecular modeling capabilities in tandem with novel subatomic feature detection software. Using model compounds and targets, our design methodology will be validated with correspondingly ultra-high resolution structure-determination methods at premier DOE facilities (single-crystal X-ray diffraction at Argonne National Laboratory, and neutron diffraction at Oak Ridge National Laboratory). The objectives and accomplishments are summarized.

  7. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the "leadership-class" computer systems at DOE national laboratories. Work over the course of this project focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This involved working with the teams that provide the infrastructure CAF relies on, implementing new language and runtime features, producing an open-source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  8. Computational modeling of drug-resistant bacteria. Final report

    Energy Technology Data Exchange (ETDEWEB)

    MacDougall, Preston [Middle Tennessee State Univ., Murfreesboro, TN (United States)

    2015-03-12

    Initial proposal summary: The evolution of antibiotic-resistant mutants among bacteria (superbugs) is a persistent and growing threat to public health. In many ways, we are engaged in a war with these microorganisms, where the corresponding arms race involves chemical weapons and biological targets. Just as advances in microelectronics, imaging technology and feature recognition software have turned conventional munitions into smart bombs, the long-term objectives of this proposal are to develop highly effective antibiotics using next-generation biomolecular modeling capabilities in tandem with novel subatomic feature detection software. Using model compounds and targets, our design methodology will be validated with correspondingly ultra-high resolution structure-determination methods at premier DOE facilities (single-crystal X-ray diffraction at Argonne National Laboratory, and neutron diffraction at Oak Ridge National Laboratory). The objectives and accomplishments are summarized.

  9. Advanced numerical modelling of a fire. Final report

    International Nuclear Information System (INIS)

    Heikkilae, L.; Keski-Rahkonen, O.

    1996-03-01

    Experience and probabilistic risk assessments show that fires present a major hazard in a nuclear power plant (NPP). The PALOME project (1988-92) improved the quality of numerical simulation of fires to make it a useful tool for fire safety analysis. Some of the most advanced zone-model fire simulation codes were acquired. The performance of the codes was studied through literature and personal interviews in earlier studies, and the BRI2 code from the Japanese Building Research Institute was selected for further use. This work was continued in the PALOME 2 project. Information obtained from large-scale fire tests at the German HDR facility allowed reliable prediction of the rate of heat release and was used for code validation. The BRI2 code was validated particularly by participation in the CEC standard problem 'Prediction of effects caused by a cable fire experiment within the HDR-facility'. Participation in the development of a new field-model code, SOFIE, specifically for fire applications, as a British-Swedish-Finnish cooperation, was one of the goals of the project. The SOFIE code was implemented at VTT and the first results of validation simulations were obtained. Well-instrumented fire tests on electronic cabinets were carried out to determine source terms for simulation of room fires and to estimate fire spread to adjacent cabinets. The particular aim of this study was to measure the rate of heat release from a fire in an electronic cabinet. From the three tests, differing mainly in the amount of the fire load, data were obtained for source terms in numerical modelling of fires in rooms containing electronic cabinets. A simple natural-ventilation model was also derived on the basis of these tests. (19 refs.)

  10. Experimental Benchmarking of Fire Modeling Simulations. Final Report

    International Nuclear Information System (INIS)

    Greiner, Miles; Lopez, Carlos

    2003-01-01

    A series of large-scale fire tests were performed at Sandia National Laboratories to simulate a nuclear waste transport package under severe accident conditions. The test data were used to benchmark and adjust the Container Analysis Fire Environment (CAFE) computer code. CAFE is a computational fluid dynamics fire model that accurately calculates the heat transfer from a large fire to a massive engulfed transport package. CAFE will be used in transport package design studies and risk analyses.

  11. Datamining approaches for modeling tumor control probability.

    Science.gov (United States)

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher-order relationships among dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
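
    The validation design (leave-one-out predictions scored with Spearman's rank correlation) is easy to reproduce schematically. The sketch below uses synthetic stand-ins for the patient data and two of the paper's model families; the variable names and effect sizes are invented.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for "GTV volume" and "V75" in 56 patients.
rng = np.random.default_rng(1)
n = 56
X = rng.normal(size=(n, 2))
p = 1.0 / (1.0 + np.exp(-(1.5*X[:, 0] - X[:, 1])))   # assumed true response
y = (rng.random(n) < p).astype(int)                  # tumor control outcome

for name, clf in [
    ("logistic ", make_pipeline(StandardScaler(), LogisticRegression())),
    ("SVM (RBF)", make_pipeline(StandardScaler(), SVC(probability=True))),
]:
    # leave-one-out predicted probabilities, then Spearman rs against outcome
    pred = cross_val_predict(clf, X, y, cv=LeaveOneOut(),
                             method="predict_proba")[:, 1]
    print(f"{name}: rs = {spearmanr(pred, y)[0]:.2f}")
```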

  12. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
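
    A workflow of this kind can be sketched with the SALib package (treated here as an assumption about tooling). The morphogenesis simulation is replaced by a cheap stand-in function, and the parameter names are made up for illustration.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Variance-based global sensitivity analysis: Saltelli sampling plus Sobol
# indices. In practice f would run the CPM simulation and return a scalar
# output measure extracted from the images.
problem = {
    "num_vars": 3,
    "names": ["adhesion", "chemotaxis", "elongation"],  # assumed parameters
    "bounds": [[0.0, 1.0]] * 3,
}
X = saltelli.sample(problem, 1024)      # N*(2D+2) parameter sets

def f(x):                               # cheap stand-in for the simulation
    return x[0] + 2.0*x[1]*x[2] + 0.5*np.sin(2.0*np.pi*x[2])

Y = np.apply_along_axis(f, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order {s1:.2f}, total {st:.2f}")
# A total index well above the first-order index flags a parameter that
# matters mainly through interactions, as discussed in the abstract.
```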

  13. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  14. Mathematical modeling of the voloxidation process. Final report

    International Nuclear Information System (INIS)

    Stanford, T.G.

    1979-06-01

    A mathematical model of the voloxidation process, a head-end reprocessing step for the removal of volatile fission products from spent nuclear fuel, has been developed. Three types of voloxidizer operation have been considered: co-current operation, in which the gas and solid streams flow in the same direction; countercurrent operation, in which the gas and solid streams flow in opposite directions; and semi-batch operation, in which the gas stream passes through the reactor while the solids remain in it and are processed batchwise. Because of the complexity of the physical and chemical processes which occur during the voloxidation process and the lack of currently available kinetic data, a global kinetic model has been adapted for this study. Test cases for each mode of operation have been simulated using representative values of the model parameters. To process 714 kg/day of spent nuclear fuel, using an oxidizing atmosphere containing 20 mole percent oxygen, it was found that a reactor 0.7 m in diameter and 2.49 m in length would be required for both co-current and countercurrent modes of operation, while for semi-batch operation a 0.3 m³ reactor and an 88200 s batch processing time would be required.
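
    The global-kinetics sizing logic can be reproduced on the back of an envelope: for first-order release kinetics the required residence time is t = -ln(1 - X)/k, and the vessel volume follows from the solids feed rate. The sketch below is not the report's model; the rate constant and bulk density are assumed purely for illustration.

```python
import numpy as np

# Back-of-envelope voloxidizer sizing with a global first-order model.
k = 5e-5                 # global rate constant, 1/s (assumed)
X = 0.99                 # target fractional release of volatiles
feed = 714.0             # spent-fuel throughput, kg/day (from the abstract)
rho_bulk = 4000.0        # bulk density of fuel in the bed, kg/m^3 (assumed)

t_res = -np.log(1.0 - X) / k                 # required residence time, s
vol_rate = feed / rho_bulk / 86400.0         # solids volumetric rate, m^3/s
volume = vol_rate * t_res                    # required reactor volume, m^3
print(f"t = {t_res:.0f} s (~{t_res/3600:.1f} h), V = {volume:.2f} m^3")
# With these assumed values the residence time and volume come out on the
# same order as the abstract's 88200 s batch time and 0.3 m^3 vessel.
```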

  15. [Development of model communities (Cool Communities)]. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-09-01

    This report covers progress in the Cool Communities program and is intended to detail specific accomplishments during the year and to provide a limited amount of background information about the program and its progress over the past three years. The Cool Communities project is driven by local partnerships among business, citizens, and government, and is guided by a Local Advisory Committee of representatives from these organizations. A national overview of the program is given in the first section. The second section describes specific accomplishments in each of the model communities: Dade County, Atlanta, Frederick, Tucson, Springfield, Austin, and Davis-Monthan Air Force Base.

  16. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Acts of criminality in Indonesia increase in both variety and quantity every year: murder, rape, assault, vandalism, theft, fraud, fencing, and other cases make people feel unsafe. The risk of a society's exposure to crime is measured here by the number of cases reported to the police; the more cases reported, the higher the level of crime in the region. In this research, criminality in South Sulawesi, Indonesia is modeled with society's exposure to the risk of crime as the dependent variable. Modeling follows an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor inhabitants, GDP per capita, unemployment, and the human development index (HDI). The spatial regression analysis shows that there are no spatial dependencies, in either lag or error form, in South Sulawesi.
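
    A sketch of the SAR/SEM comparison using PySAL's spreg package (treated as an assumption about tooling), with synthetic areal data on a lattice standing in for the South Sulawesi districts; real use would build the spatial weights from district polygons.

```python
import numpy as np
from libpysal.weights import lat2W
from spreg import ML_Lag, ML_Error

rng = np.random.default_rng(6)
n_side = 5                                   # 25 areal units on a grid
w = lat2W(n_side, n_side, rook=False)        # queen contiguity weights
w.transform = "r"                            # row-standardise

# synthetic covariates standing in for the paper's five predictors
X = rng.normal(size=(n_side**2, 5))
y = (X @ np.array([0.5, 0.8, -0.3, 0.6, -0.7])
     + rng.normal(size=n_side**2)).reshape(-1, 1)

names = ["density", "poor", "gdp_pc", "unemp", "hdi"]
sar = ML_Lag(y, X, w=w, name_x=names)        # spatial-lag (SAR) model
sem = ML_Error(y, X, w=w, name_x=names)      # spatial-error (SEM) model
# A rho/lambda near zero mirrors the paper's finding of no spatial
# dependence, in either lag or error form.
print("SAR rho   :", float(sar.rho))
print("SEM lambda:", float(sem.lam))
```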

  17. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set-theoretic methods to conceptualize, develop, and empirically derive maturity models, and provide a demonstration of its application on a social media maturity data-set. Specifically, we employ Necessary Condition Analysis (NCA) to identify maturity stage boundaries as necessary conditions and Qualitative Comparative Analysis (QCA) to arrive at multiple configurations that can be equally effective in progressing to higher maturity stages.

  18. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g., WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian approach when the data obtained from the sensors are limited and many measurements would be difficult or costly to obtain; the problem of the lack of data can then be mitigated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and improve characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
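
    Under Gaussian assumptions, the per-cell Bayesian update reduces to a precision-weighted average of the input DSMs. A minimal sketch on synthetic rasters follows; the paper's spatial smoothness prior (smooth roofs detected via local entropy) is omitted, and the sensor accuracies are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 10.0 + np.outer(np.linspace(0.0, 2.0, 50), np.ones(50))  # toy terrain
dsm_a = truth + rng.normal(0.0, 0.8, truth.shape)   # e.g. WorldView-1 DSM
dsm_b = truth + rng.normal(0.0, 0.5, truth.shape)   # e.g. Pleiades DSM
var_a, var_b = 0.8**2, 0.5**2                       # assumed known variances

# Bayesian fusion per cell: treat DSM A as the prior and DSM B as the
# observation; the posterior mean is the precision-weighted average.
fused = (dsm_a/var_a + dsm_b/var_b) / (1.0/var_a + 1.0/var_b)

for name, d in [("A    ", dsm_a), ("B    ", dsm_b), ("fused", fused)]:
    rmse = np.sqrt(np.mean((d - truth)**2))
    print(f"RMSE {name}: {rmse:.3f} m")   # fused beats both inputs
```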

  19. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.

  20. Metabolic network modeling approaches for investigating the "hungry cancer".

    Science.gov (United States)

    Sharma, Ashwini Kumar; König, Rainer

    2013-08-01

    Metabolism is the functional phenotype of a cell at a given condition, resulting from an intricate interplay of various regulatory processes. The study of these dynamic metabolic processes and their capabilities helps to identify the fundamental properties of living systems. Metabolic deregulation is an emerging hallmark of cancer cells. This deregulation rewires the metabolic circuitry, conferring an exploitative metabolic advantage on the tumor cells that leads to a distinct survival benefit and lays the basis for unbounded progression. Metabolism can be considered a thermodynamically open system in which source substrates of high value are processed through a well-established, interconnected biochemical conversion system, strictly obeying physicochemical principles, generating useful intermediates and finally releasing byproducts. Based on this basic principle of an input-output balance, various models have been developed to interrogate metabolism and elucidate its underlying functional properties. However, only a few modeling approaches have proved computationally feasible for elucidating the metabolic nature of cancer at a systems level. Besides this, statistical approaches have been established to identify biochemical pathways that are more relevant for specific types of tumor cells. In this review, we briefly introduce the basic statistical approaches, followed by the major modeling concepts. We put an emphasis on the methods and their applications that have been used to a greater extent in understanding the metabolic remodeling of cancer. Copyright © 2013 Elsevier Ltd. All rights reserved.
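
    The most widely used of the computationally feasible approaches, constraint-based flux-balance analysis, is at bottom a linear program: maximize a growth flux subject to steady state S v = 0 and capacity bounds. A toy three-reaction sketch of the LP structure (not a real metabolic network):

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance analysis. Reactions:
#   uptake:   -> A        (v1, capped at 10 units)
#   convert:  A -> B      (v2)
#   growth:   B ->        (v3, the objective)
S = np.array([[1, -1,  0],      # metabolite A balance
              [0,  1, -1]])     # metabolite B balance
bounds = [(0, 10), (0, None), (0, None)]
c = np.array([0.0, 0.0, -1.0])  # linprog minimises, so maximise v3 via -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal growth flux:", res.x[2])   # limited by the uptake bound: 10
# Genome-scale models have thousands of reactions but exactly this shape;
# cancer-specific constraints reshape the feasible flux space.
```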

  1. A Modeling Approach for Marine Observatory

    Directory of Open Access Journals (Sweden)

    Charbel Geryes Aoun

    2015-02-01

    The infrastructure of a Marine Observatory (MO) is an UnderWater Sensor Network (UW-SN) performing collaborative monitoring tasks over a given area. This observation should take into consideration the environmental constraints, since it may require specific tools, materials and devices (cables, servers, etc.). The logical and physical components used in these observatories provide data exchanged between the various devices of the environment (smart sensors, data fusion). These components provide new functionalities or services due to the long running period of the network. In this paper, we present our approach to extending modeling languages to include new domain-specific concepts and constraints. We propose a meta-model that is used to generate a new design tool (ArchiMO). We illustrate our proposal with an example from the MO domain on object localization with several acoustic sensors. Additionally, we generate the corresponding simulation code for a standard network simulator using our self-developed domain-specific model compiler. Our approach helps to reduce the complexity and time of the design activity of a Marine Observatory. It provides a way to share the different viewpoints of the designers in the MO domain and to obtain simulation results to estimate the network capabilities.

  2. A nationwide modelling approach to decommissioning - 16182

    International Nuclear Information System (INIS)

    Kelly, Bernard; Lowe, Andy; Mort, Paul

    2009-01-01

    In this paper we describe a proposed UK national approach to modelling decommissioning. For the first time, we shall have an insight into optimizing the safety and efficiency of a national decommissioning strategy. To do this we use the General Case Integrated Waste Algorithm (GIA), a universal model of decommissioning nuclear plant, power plant, waste arisings and the associated knowledge capture. The model scales from individual items of plant through cells, groups of cells, buildings and whole sites, up to a national scale. The national vision for GIA can be broken down into three levels: 1) the capture of the chronological order of activities that an experienced decommissioner would use to decommission any nuclear facility anywhere in the world (Level 1 of GIA); 2) the construction of an Operational Research (OR) model based on Level 1 to allow rapid 'what if' scenarios to be tested quickly (Level 2); 3) the construction of a state-of-the-art knowledge capture capability that allows future generations to learn from our current decommissioning experience (Level 3). We show the progress to date in developing GIA at Levels 1 and 2. As part of Level 1, GIA has assisted in the development of an IMechE professional decommissioning qualification. Furthermore, we describe GIA as the basis of a UK-owned database of decommissioning norms for such things as costs, productivity and durations. From Level 2, we report on a pilot study that has successfully tested the basic principles for the OR numerical simulation of the algorithm. We then highlight the advantages of applying the OR modelling approach nationally. In essence, a series of 'what if' scenarios can be tested that will improve the safety and efficiency of decommissioning. (authors)

  3. Numerical model of massive hydraulic fracture. Final report. [SYMFRAC1]

    Energy Technology Data Exchange (ETDEWEB)

    Palmer, I.D.; Craig, H.R.; Luiskutty, C.T.

    1985-03-01

    This project has involved development of a hydraulic fracture simulator which calculates fracture height as a function of distance from the wellbore in a situation in which a pay zone is bounded by two zones in which the minimum in-situ stress is higher (the fracture is vertical). The fracture must be highly elongated (length/height ratio approximately greater than 4), and variations in elastic modulus across zones are ignored. First, we describe the leakoff and spurt-loss calculations employed in the modeling. Second, we discuss a revised version of the vertically symmetric simulator (bounding-zone stresses equal); the addition of non-Newtonian flow and leakoff (including spurt loss) is described in detail, and an illustrative result is given. Third, we describe in detail the vertically asymmetric simulator (bounding-zone stresses not equal). To illustrate the latter, we present design calculations for a 30,000-gallon fracture, which was the first stimulation in the Multi-Well Experiment. The 80 ft fracture interval in the Paludal zone has at its upper edge a 520 psi stress contrast, and at its lower edge a 1195 psi contrast. Computed fracture height growth above and below the perforated interval, bottomhole pressure, and width profiles in vertical sections are displayed. Comparison is made with diagnostic measurements of fracture length, height, and bottomhole pressure. The appropriate computer codes are included in this report. 21 references, 11 figures, 4 tables.

  4. Community Earth System Model (CESM) Tutorial 2016 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Lamarque, Jean-Francois [Univ. Corporation for Atmospheric Research (UCAR) and National Center for Atmospheric Research (NCAR) and Climate and Global Dynamics Laboratory (CGD), Boulder, CO (United States)

    2017-05-09

    For the 2016 tutorial, NCAR/CGD requested a total budget of $70,000, split equally between DOE and NSF. The funds were used to support student participation (travel, lodging, per diem, etc.). Lecture and practical-session support was primarily provided by local participants at no additional cost. The seventh annual Community Earth System Model (CESM) tutorial for students and early-career scientists was held 8-12 August 2016. As has been the case over the last few years, this event was extremely successful and there was greater demand than could be met. There was continued interest, in support of the NSF's EaSM Infrastructure awards, in training these awardees in the application of the CESM. Based on suggestions from previous tutorial participants, the 2016 tutorial again provided a direct connection to Yellowstone for each individual participant (rather than pairs) and used the NCAR Mesa Library. The 2016 tutorial included lectures on simulating the climate system and practical sessions on running CESM, modifying components, and analyzing data, targeted to the graduate-student level. In addition, specific "Application" talks were introduced this year to provide participants with in-depth knowledge of specific aspects of CESM.

  5. Final model independent result of DAMA/LIBRA-phase1

    Energy Technology Data Exchange (ETDEWEB)

    Bernabei, R.; D'Angelo, S.; Di Marco, A. [Universita di Roma 'Tor Vergata', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma 'Tor Vergata', Rome (Italy); Belli, P. [INFN, sez. Roma 'Tor Vergata', Rome (Italy); Cappella, F.; D'Angelo, A.; Prosperi, D. [Universita di Roma 'La Sapienza', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy); Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy); Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China); Incicchitti, A. [INFN, sez. Roma, Rome (Italy); Montecchia, F. [INFN, sez. Roma 'Tor Vergata', Rome (Italy); Universita di Roma 'Tor Vergata', Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy); Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)

    2013-12-15

    The results obtained with the total exposure of 1.04 ton x yr collected by DAMA/LIBRA-phase1 deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton x yr exposure) are presented. The DAMA/LIBRA-phase1 data give evidence for the presence of Dark Matter (DM) particles in the galactic halo at 7.5σ C.L., on the basis of the exploited model-independent DM annual modulation signature, using a highly radio-pure NaI(Tl) target. Including also the first-generation DAMA/NaI experiment (cumulative exposure 1.33 ton x yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is (0.0112 ± 0.0012) cpd/kg/keV; the measured phase is (144 ± 7) days and the measured period is (0.998 ± 0.002) yr, values well in agreement with those expected for DM particles. No systematic effect or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)
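
    The model-independent analysis boils down to fitting the annual-modulation residuals with Sm·cos(2π(t − t0)/T). A sketch on synthetic residuals generated from the quoted best-fit values (the noise level is assumed; real residuals come binned with per-bin errors):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.linspace(0.0, 14*365.25, 200)          # days, 14 annual cycles
Sm, t0, T = 0.0112, 144.0, 0.998*365.25       # quoted best-fit values
obs = Sm*np.cos(2*np.pi*(t - t0)/T) + rng.normal(0.0, 0.004, t.size)

def modulation(t, Sm, t0, T):
    # annual-modulation template for the single-hit residual rate
    return Sm * np.cos(2*np.pi*(t - t0)/T)

popt, pcov = curve_fit(modulation, t, obs, p0=(0.01, 150.0, 365.0))
for name, val, err in zip(("Sm", "t0", "T"), popt, np.sqrt(np.diag(pcov))):
    print(f"{name} = {val:.4g} +/- {err:.2g}")
# The fit should recover amplitude, phase (~144 d) and period (~1 yr).
```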

  6. A hierarchy of thermohaline circulation models. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Cessi, P.; Young, W.R.

    1998-04-01

    The objectives of this effort were: (1) to understand the variability caused by the competing roles of salt and heat in the ocean circulation; (2) to understand the effect of differential advection of active tracers such as temperature, salinity and angular momentum; and (3) to improve the parametrization of convection in models of the ocean circulation. One result of the project is the discovery that the characteristics of the quasi-periodic centennial and millennial oscillations found in OGCMs, associated with alternating suppression and activation of high-latitude convection, are extremely sensitive to the salinity flux and to the specific choice of convective adjustment scheme. In particular, the period of the oscillation depends crucially on the salinity fluxes (whether deterministic or with a stochastic component) and can be arbitrarily long. This result has clarified that these long-period oscillations (termed flushes) are not the result of the excitation of an intrinsic linear eigenmode of the system, but rather are relaxation oscillations towards one of the several equilibria available to the system. This implies that it is the amplitude, rather than the period, of the oscillation which is almost independent of the salinity flux.
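
    The multiple equilibria behind such relaxation oscillations already appear in a Stommel-type two-box model of the thermohaline circulation. A sketch in a standard non-dimensional form, with parameter values assumed; depending on the parameters, different initial conditions can settle onto different equilibria:

```python
# Stommel-type two-box thermohaline model, non-dimensional form:
#   dT/dt = eta1 - T*(1 + |T - S|)
#   dS/dt = eta2 - S*(eta3 + |T - S|)
# where q = |T - S| plays the role of the circulation strength.
eta1, eta2, eta3 = 3.0, 1.0, 0.3    # assumed parameters
dt, n = 0.01, 20000

def integrate(T, S):
    for _ in range(n):
        q = abs(T - S)              # THC strength ~ density difference
        T += dt * (eta1 - T*(1.0 + q))
        S += dt * (eta2 - S*(eta3 + q))
    return T, S, T - S              # sign of T - S labels the regime

# two different initial conditions; with these parameters the model is
# bistable, so they may converge to distinct steady states
print("run 1 (T, S, T-S):", integrate(1.0, 0.5))
print("run 2 (T, S, T-S):", integrate(0.1, 2.0))
```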

  7. A multiscale approach for modeling atherosclerosis progression.

    Science.gov (United States)

    Exarchos, Konstantinos P; Carpegianni, Clara; Rigas, Georgios; Exarchos, Themis P; Vozzi, Federico; Sakellarios, Antonis; Marraccini, Paolo; Naka, Katerina; Michalis, Lambros; Parodi, Oberdan; Fotiadis, Dimitrios I

    2015-03-01

    Progression of the atherosclerotic process constitutes a serious and quite common condition due to accumulation of fatty materials in the arterial wall, consequently posing serious cardiovascular complications. In this paper, we assemble and analyze a multitude of heterogeneous data in order to model the progression of atherosclerosis (ATS) in coronary vessels. The patient's medical record, biochemical analytes, monocyte information, adhesion molecules, and therapy-related data comprise the input for the subsequent analysis. As an indicator of coronary lesion progression, two consecutive coronary computed tomography angiographies have been evaluated in the same patient. To this end, a set of 39 patients is studied using a twofold approach, namely, baseline analysis and temporal analysis. The former employs baseline information in order to predict the future state of the patient (in terms of progression of ATS). The latter is based on dynamic Bayesian networks, whereby snapshots of the patient's status over the follow-up are analyzed in order to model the evolvement of ATS, taking into account the temporal dimension of the disease. The quantitative assessment of our work has resulted in 93.3% accuracy for the baseline analysis and 83% overall accuracy for the temporal analysis, in terms of modeling and predicting the evolvement of ATS. It should be noted that the application of the SMOTE algorithm for handling class imbalance and the subsequent evaluation procedure might have introduced an overestimation of the performance metrics, due to the employment of synthesized instances. The most prominent features found to play a substantial role in the progression of the disease are diabetes, cholesterol, and cholesterol/HDL. Among novel markers, the CD11b marker of the leukocyte integrin complex is associated with coronary plaque progression.
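
    The SMOTE caveat mentioned above has a standard remedy: oversample only inside each training fold, never before splitting, so no synthetic instances reach the evaluation folds. A sketch using the imbalanced-learn pipeline, with synthetic stand-in data of the same size as the study cohort:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# synthetic stand-in for the 39-patient dataset with an imbalanced label
rng = np.random.default_rng(4)
X = rng.normal(size=(39, 6))
y = np.r_[np.zeros(29, int), np.ones(10, int)]   # 10 "progressors"

# imblearn's Pipeline applies SMOTE only when fitting, i.e. only on the
# training folds, so the evaluation folds contain no synthesized points
pipe = Pipeline([("smote", SMOTE(k_neighbors=3, random_state=0)),
                 ("clf", RandomForestClassifier(random_state=0))])
scores = cross_val_score(pipe, X, y, cv=5, scoring="balanced_accuracy")
print(f"leakage-free balanced accuracy: {scores.mean():.2f}")
```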

  8. Final Technical Report - Use of Systems Biology Approaches to Develop Advanced Biofuel-Synthesizing Cyanobacterial Strains

    Energy Technology Data Exchange (ETDEWEB)

    Pakrasi, Himadri [Washington Univ., St. Louis, MO (United States)

    2016-09-01

    The overall objective of this project was to use a systems biology approach to evaluate the potentials of a number of cyanobacterial strains for photobiological production of advanced biofuels and/or their chemical precursors. Cyanobacteria are oxygen evolving photosynthetic prokaryotes. Among them, certain unicellular species such as Cyanothece can also fix N2, a process that is exquisitely sensitive to oxygen. To accommodate such incompatible processes in a single cell, Cyanothece produces oxygen during the day, and creates an O2-limited intracellular environment during the night to perform O2-sensitive processes such as N2-fixation. Thus, Cyanothece cells are natural bioreactors for the storage of captured solar energy with subsequent utilization at a different time during a diurnal cycle. Our studies include the identification of a novel, fast-growing, mixotrophic, transformable cyanobacterium. This strain has been sequenced and will be made available to the community. In addition, we have developed genome-scale models for a family of cyanobacteria to assess their metabolic repertoire. Furthermore, we developed a method for rapid construction of metabolic models using multiple annotation sources and a metabolic model of a related organism. This method will allow rapid annotation and screening of potential phenotypes based on the newly available genome sequences of many organisms.

  9. Modeling in transport phenomena a conceptual approach

    CERN Document Server

    Tosun, Ismail

    2007-01-01

    Modeling in Transport Phenomena, Second Edition presents, and clearly explains with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented, so that students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to

  10. Model approach brings multi-level success.

    Science.gov (United States)

    Howell, Mark

    2012-08-01

    In an article that first appeared in the US magazine Medical Construction & Design, Mark Howell, senior vice-president of Skanska USA Building, based in Seattle, describes the design and construction of a new nine-storey, 350,000 ft² extension to the Good Samaritan Hospital in Puyallup, Washington state. He explains how the use of an Integrated Project Delivery (IPD) approach by the key players, combined with extensive use of building information modelling (BIM), delivered a healthcare facility that he believes should meet the needs of patients, families, and the clinical care team 'well into the future'.

  11. Development of a risk-analysis model. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1979-10-01

    This report consists of a main body, which provides a presentation of risk analysis and its general and specific application to the needs of the Office of Buildings and Community Systems of the Department of Energy (BCS/DOE), and several case studies employing the risk-analysis model developed. The highlights include a discussion of how risk analysis is currently used in the private, regulated, and public sectors and how this methodology can be employed to meet the policy-analysis needs of BCS/DOE. After a review of the primary methodologies available for risk analysis, it was determined that Monte Carlo simulation techniques provide the greatest degree of visibility into uncertainty in the decision-making process. Although the data-collection requirements can be demanding, the benefits, when compared to other methods, are substantial. The data-collection problem can be significantly reduced, without sacrificing proprietary-information rights, if prior arrangements are made with RD&D contractors to provide responses to reasonable requests for base-case data. A total of three case studies were performed on BCS technologies: a gas-fired heat pump; a 1000 ton/day anaerobic digestion plant; and a district heating and cooling system. The three case studies plus the risk-analysis methodology were issued as separate reports. It is concluded, based on the overall research on risk analysis and the case-study experience, that the risk-analysis methodology has significant potential as a policy-evaluation tool within BCS.
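
    The report's preferred methodology is easy to illustrate: draw uncertain inputs from assumed distributions (triangular ones here), push them through a simple cost model, and read risk metrics off the output distribution. All figures below are invented placeholders, not values from the case studies.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
# uncertain inputs as triangular (low, mode, high) distributions, all assumed
capital = rng.triangular(8e6, 10e6, 15e6, n)        # $
o_and_m = rng.triangular(0.5e6, 0.7e6, 1.2e6, n)    # $/yr
energy_saved = rng.triangular(40e6, 55e6, 60e6, n)  # kWh/yr
price = rng.triangular(0.04, 0.06, 0.09, n)         # $/kWh

# simple annual net-benefit model with an assumed 8% capital carrying charge
annual_net = energy_saved * price - o_and_m - capital * 0.08

print(f"mean net benefit : ${annual_net.mean():,.0f}/yr")
print(f"P(net loss)      : {np.mean(annual_net < 0):.1%}")
print(f"5th-95th pctile  : ${np.percentile(annual_net, 5):,.0f} "
      f"to ${np.percentile(annual_net, 95):,.0f}")
```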

  12. A final size relation for epidemic models of vector-transmitted diseases

    Directory of Open Access Journals (Sweden)

    Fred Brauer

    2017-02-01

    We formulate and analyze an age-of-infection model for epidemics of diseases transmitted by a vector, including the possibility of direct transmission as well. We show how to determine a basic reproduction number. While there is no explicit final size relation as there is for diseases transmitted directly, we are able to obtain estimates for the final size of the epidemic.
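
    For contrast with the vector-borne case, the classical direct-transmission final size relation, ln(s0/s∞) = R0(s0 − s∞) in susceptible fractions, has no closed-form solution but is solved numerically in a few lines:

```python
import numpy as np
from scipy.optimize import brentq

def final_susceptible_fraction(R0, s0=0.999):
    # nontrivial root of the final size relation ln(s0/s) = R0*(s0 - s);
    # s = s0 is always a (trivial) root, so bracket strictly below it
    g = lambda s: np.log(s0 / s) - R0 * (s0 - s)
    return brentq(g, 1e-12, s0 - 1e-12)

for R0 in (1.5, 2.0, 3.0):
    s_inf = final_susceptible_fraction(R0)
    print(f"R0 = {R0}: attack rate ~ {1 - s_inf:.1%}")
# For vector-transmitted diseases the paper shows only bounds/estimates of
# this quantity exist, not an exact relation of this form.
```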

  13. Final Report on Models, Periodic Progress, Report No D1.3, Globeman21, ESPRIT 26509

    DEFF Research Database (Denmark)

    Pedersen, Jens Dahl; Tølle, Martin; Vesterager, Johan

    1999-01-01

    This deliverable D1.3 is the third and final deliverable of WP1 - Global Manufacturing Concept of the European part of the Globeman21 project. The report essentially presents the final models on generic Extended Enterprise Management (EEM) and generic Product Life Cycle Management (PLCM), a colle...

  14. The place of quantitative energy models in a prospective approach

    International Nuclear Information System (INIS)

    Taverdet-Popiolek, N.

    2009-01-01

    Futurology above all depends on having the right mind-set. Gaston Berger summarizes the prospective approach in five main thrusts: prepare for the distant future, be open-minded (have a systems and multidisciplinary approach), carry out in-depth analyses (draw out the actors which are really determinant for the future, as well as established trends), take risks (imagine risky but flexible projects) and, finally, think about humanity, futurology being a technique at the service of man to help him build a desirable future. Forecasting, on the other hand, is based on quantified models used to deduce 'conclusions' about the future. In the field of energy, models are used to draw up scenarios which allow, for instance, measuring the medium- or long-term effects of energy policies on greenhouse gas emissions or global welfare. Scenarios are shaped by the model's inputs (parameters, sets of assumptions) and outputs. Resorting to a model or projecting by scenario is useful in a prospective approach, as it ensures coherence for most of the variables that have been identified through systems analysis and that the mind on its own has difficulty grasping. Interpretation of each scenario must be carried out in the light of the underlying framework of assumptions (the backdrop) developed during the prospective stage. When the horizon is far away (very long term), the worlds imagined by the futurologist contain breaks (technological, behavioural and organizational) which are hard to integrate into the models. It is here that the main limitation on the use of models in futurology lies. (author)

  15. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    The paper deals with some current problems of modeling the dynamics of the development of the individual's subject features. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logic and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage are offered, together with an algorithm for developing an integrative model of it. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and realistic prediction of pedagogical phenomena.

  16. Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.

    Science.gov (United States)

    Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
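
    As a concrete illustration of how an agent-based simulation of this kind can generate a social network for analysis, the following minimal sketch builds a weighted dominance network from simulated pairwise contests. The contest rule (hungrier agents win more often) and the dominance metric are simplified assumptions for illustration, not the authors' model.

        # Minimal sketch: a dominance network from simulated nutrient-driven
        # contests, analysed with standard network metrics (networkx).
        import random
        import networkx as nx

        random.seed(1)
        N_AGENTS, N_CONTESTS = 10, 500

        # Each agent carries a nutritional deficit that motivates contests.
        deficit = {a: random.uniform(0.1, 1.0) for a in range(N_AGENTS)}

        G = nx.DiGraph()
        G.add_nodes_from(deficit)
        for _ in range(N_CONTESTS):
            a, b = random.sample(list(deficit), 2)
            # Assumption: hungrier agents fight harder and win more often.
            p_a_wins = deficit[a] / (deficit[a] + deficit[b])
            winner, loser = (a, b) if random.random() < p_a_wins else (b, a)
            w = G.get_edge_data(winner, loser, {"weight": 0})["weight"]
            G.add_edge(winner, loser, weight=w + 1)

        # Dominance summarized by weighted out-degree (wins over others).
        dominance = dict(G.out_degree(weight="weight"))
        ranking = sorted(dominance, key=dominance.get, reverse=True)
        print("most dominant agents:", ranking[:3])

    Simulated networks like this one can then be compared, metric by metric, with dominance networks observed in real animals.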

  17. Implicit moral evaluations: A multinomial modeling approach.

    Science.gov (United States)

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations, i.e., immediate, unintentional assessments of the wrongness of actions or persons, play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure, the Moral Categorization Task, and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
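
    To make the decomposition concrete, here is a minimal sketch of how a multinomial processing tree of this kind maps process parameters to predicted response probabilities. The tree ordering below (Intentional Judgment first, then Unintentional Judgment, then Response Bias) is a simplified assumption for illustration, not the paper's exact model equations.

        # Simplified multinomial-processing-tree sketch (assumed tree, not the
        # paper's exact equations): a "morally wrong" response arises from
        # Intentional Judgment (I) of the target, otherwise from Unintentional
        # Judgment (U) of the prime, otherwise from Response Bias (B).
        def p_wrong_response(I, U, B, target_is_wrong, prime_is_transgression):
            p = I * (1.0 if target_is_wrong else 0.0)                    # controlled judgment
            p += (1 - I) * U * (1.0 if prime_is_transgression else 0.0)  # prime-driven judgment
            p += (1 - I) * (1 - U) * B                                   # residual bias
            return p

        # Example: a speeded deadline lowers I, so transgression primes drive
        # "wrong" responses even to neutral targets.
        print(p_wrong_response(I=0.2, U=0.6, B=0.5,
                               target_is_wrong=False, prime_is_transgression=True))

    Fitting such a model means choosing I, U, and B to match observed response frequencies across the prime/target conditions, which is what lets the task separate unintentional from intentional judgment.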

  18. Model Orlando regionally efficient travel management coordination center (MORE TMCC), phase II : final report.

    Science.gov (United States)

    2012-09-01

    The final report for the Model Orlando Regionally Efficient Travel Management Coordination Center (MORE TMCC) presents the details of : the 2-year process of the partial deployment of the original MORE TMCC design created in Phase I of this project...

  19. Agribusiness model approach to territorial food development

    Directory of Open Access Journals (Sweden)

    Murcia Hector Horacio

    2011-04-01

    Full Text Available

    Several research efforts have been coordinated within the academic program of Agricultural Business Management of the University De La Salle (Bogota D.C.), towards the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered as a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and productivity of agricultural items. It takes into account the guidelines of the United Nations “Millennium Development Goals” and considers the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension are considered food development concepts and methodologies of Participatory Action Research (PAR). Finally, it addresses the need to link the results to low-income communities, within the concepts of the “new rurality”.

  20. Energy and Development. A Modelling Approach

    International Nuclear Information System (INIS)

    Van Ruijven, B.J.

    2008-01-01

    policies have an important role. For instance, low energy taxes and subsidies in developing countries limit the opportunities to promote alternative energy options. A final issue in this thesis is the impact of the changing development context - depletion of fossil fuels and climate change - on the economic development of low-income regions. We developed a stylized population-economy-energy-climate model (SUSCLIME) in which automated agents can take policy decisions and develop strategies to cope with resource depletion and climate change. From preliminary model experiments it appears that developing countries are more vulnerable to both resource depletion and climate change. A co-benefit of a long-term focus on avoiding climate change is that it also slows down fossil resource depletion. A short-term focus on reducing impacts from depletion of endogenous fossil resources probably has little synergy with climate policy, because imported fossil energy (or coal) is more attractive than developing alternatives.

  1. Energy and Development. A Modelling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Van Ruijven, B.J.

    2008-12-17

    policies have an important role. For instance, low energy taxes and subsidies in developing countries limit the opportunities to promote alternative energy options. A final issue in this thesis is the impact of the changing development context - depletion of fossil fuels and climate change - on the economic development of low-income regions. We developed a stylized population-economy-energy-climate model (SUSCLIME) in which automated agents can take policy decisions and develop strategies to cope with resource depletion and climate change. From preliminary model experiments it appears that developing countries are more vulnerable to both resource depletion and climate change. A co-benefit of a long-term focus on avoiding climate change is that it also slows down fossil resource depletion. A short-term focus on reducing impacts from depletion of endogenous fossil resources probably has little synergy with climate policy, because imported fossil energy (or coal) is more attractive than developing alternatives.

  2. An Approach to Developing Independent Learning and Non-Technical Skills Amongst Final Year Mining Engineering Students

    Science.gov (United States)

    Knobbs, C. G.; Grayson, D. J.

    2012-01-01

    There is mounting evidence to show that engineers need more than technical skills to succeed in industry. This paper describes a curriculum innovation in which so-called "soft" skills, specifically inter-personal and intra-personal skills, were integrated into a final year mining engineering course. The instructional approach was…

  3. Final Report for Harvesting a New Wind Crop: Innovative Economic Approaches for Rural America

    Energy Technology Data Exchange (ETDEWEB)

    Susan Innis; Randy Udall; Project Officer - Keith Bennett

    2005-09-30

    Final Report for ''Harvesting a New Wind Crop: Innovative Economic Approaches for Rural America'': This project, ''Harvesting a New Wind Crop'', helped stimulate wind development by rural electric cooperatives and municipal utilities in Colorado. To date, most of the wind power development in the United States has been driven by large investor-owned utilities serving major metropolitan areas. To meet the 5% by 2020 goal of the Wind Powering America program, the 2,000 municipal utilities and 900 rural electric cooperatives in the country must get involved in wind power development. Public power typically serves rural and suburban areas and can play a role in revitalizing communities by tapping into the economic development potential of wind power. One barrier to the involvement of public power in wind development has been the perception that wind power is more expensive than other generation sources. This project focused on two ways to reduce the costs of wind power to make it more attractive to public power entities. The first way was to develop a revenue stream from the sale of green tags. By selling green tags to entities that voluntarily support wind power, rural coops and munis can effectively reduce their cost of wind power. Western Resource Advocates (WRA) and the Community Office for Resource Efficiency (CORE) worked with Lamar Light and Power and Arkansas River Power Authority to develop a strategy to use green tags to help finance their wind project. These utilities are now selling their green tags to Community Energy, Inc., an independent for-profit marketer who in turn sells the tags to consumers around Colorado. The Lamar tags allow the University of Colorado-Boulder, the City of Boulder, NREL and other businesses to support wind power development and make the claim that they are ''wind-powered''. This urban-rural partnership is an important development for the state of Colorado's rural communities

  4. Two Different Approaches to Teaching Final-Year Projects for Mechanical Engineers and Biotechnologists at Ngee Ann Polytechnic--Case Studies Approach.

    Science.gov (United States)

    Walsh, Kath; Rebaczonok-Padulo, Michael

    1993-01-01

    Ngee Ann Polytechnic, a leading postsecondary technical institution in Singapore, offers English for academic and occupational purposes to prepare students for writing their final year projects. This article discusses the approaches used in Mechanical Engineering and Biotechnology projects. A sample exercise is appended. (Contains two references.)…

  5. Development of generalised model for grate combustion of biomass. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Rosendahl, L.

    2007-02-15

    This project has been divided into two main parts, one of which has focused on modelling and one on designing and constructing a grate-fired biomass test rig. The modelling effort has been defined due to a need for improved knowledge of the transport and conversion processes within the bed layer, for two reasons: 1) to improve emission understanding and reduction measures and 2) to improve boundary conditions for CFD-based furnace modelling. The selected approach has been based on a diffusion coefficient formulation, where conservation equations for the concentration of fuel are solved in a spatially resolved grid, much in the same manner as in a finite volume CFD code. Within this porous layer of fuel, gas flows according to the Ergun equation. The diffusion coefficient links the properties of the fuel to the grate type and vibration mode, and is determined for each combination of fuel, grate and vibration mode. In this work, 3 grates have been tested as well as 4 types of fuel: drinking straw, wood beads, straw pellets and wood pellets. Although much useful information and knowledge has been obtained on transport processes in fuel layers, the model has proved to be less than perfect, and the recommendation is not to continue along this path. New visual data on the motion of straw on vibrating grates indicate that a diffusion-governed motion does not represent the transport very well. Furthermore, it is very difficult to obtain the diffusion coefficient in places other than the surface layer of the grate, and it is not likely that this is representative for the motion within the layer. Finally, as the model complexity grows, model turnover time increases to a level where it is comparable to that of the full furnace model. In order to proceed and address the goals of the first paragraph, it is recommended to return to either a walking column approach or even some other, relatively simple method of prediction, and combine this with a form of randomness, to mimic the
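
    The Ergun relation mentioned above is a standard packed-bed pressure-drop law; a small sketch (all parameter values are illustrative placeholders, not taken from the project) shows how the gas-phase pressure gradient through the porous fuel layer is computed:

        # Ergun equation: pressure gradient for gas flow through a porous bed.
        # Particle diameter, porosity and gas properties are assumed values.
        def ergun_dp_dl(u, d_p, eps, mu, rho):
            """Pressure drop per unit bed depth [Pa/m] at superficial velocity u [m/s]."""
            viscous = 150.0 * mu * (1 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
            inertial = 1.75 * rho * (1 - eps) * u ** 2 / (eps ** 3 * d_p)
            return viscous + inertial

        # Air at roughly ambient conditions through a pellet-like fuel layer.
        print(ergun_dp_dl(u=0.5, d_p=0.006, eps=0.45, mu=1.8e-5, rho=1.2), "Pa/m")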

  6. Approaches and models of intercultural education

    Directory of Open Access Journals (Sweden)

    Iván Manuel Sánchez Fontalvo

    2013-10-01

    Full Text Available Aware of the need to build an intercultural society, awareness must be assumed in all social spheres, among which education plays a leading role. A transcendental role, since it must promote educational spaces to form people with virtues and powers that allow them to live together in multicultural and socially diverse contexts (sometimes unequal) in an increasingly globalized and interconnected world, and foster the development of feelings of civic belonging shared before the neighborhood, city, region and country, allowing them concern and critical judgement towards marginalization, poverty, misery and the inequitable distribution of wealth, causes of structural violence, but at the same time, wanting to work for the welfare and transformation of these scenarios. On these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they apply.

  7. Final Report for Bio-Inspired Approaches to Moving-Target Defense Strategies

    Energy Technology Data Exchange (ETDEWEB)

    Fink, Glenn A.; Oehmen, Christopher S.

    2012-09-01

    This report records the work and contributions of the NITRD-funded Bio-Inspired Approaches to Moving-Target Defense Strategies project performed by Pacific Northwest National Laboratory under the technical guidance of the National Security Agency's R6 division. The project has incorporated a number of bio-inspired cyber defensive technologies within an elastic framework provided by the Digital Ants. This project has created the first scalable, real-world prototype of the Digital Ants Framework (DAF)[11] and integrated five technologies into this flexible, decentralized framework: (1) Ant-Based Cyber Defense (ABCD), (2) Behavioral Indicators, (3) Bioinformatic Classification, (4) Moving-Target Reconfiguration, and (5) Ambient Collaboration. The DAF can be used operationally to decentralize many such data-intensive applications that normally rely on collection of large amounts of data in a central repository. In this work, we have shown how these component applications may be decentralized and may perform analysis at the edge. Operationally, this will enable analytics to scale far beyond current limitations while not suffering from the bandwidth or computational limitations of centralized analysis. This effort has advanced the R6 Cyber Security research program to secure digital infrastructures by developing a dynamic means to adaptively defend complex cyber systems. We hope that this work will benefit both our client's efforts in system behavior modeling and cyber security to the overall benefit of the nation.

  8. Development Of Robust IFE Laser Mirrors and Multi-Scale Modeling Of Pulsed Radiation Effects. Final Report

    International Nuclear Information System (INIS)

    Ghoniem, Nasr M.

    2009-01-01

    The following has been achieved: (1) Final design of a Deformable Grazing Incidence Mirror, (2) Formulation of a new approach to model surface roughening under laser illumination, and (3) Modeling of radiation hardening under IFE conditions. We discuss here progress made in each one of these areas. The objectives of the Grazing Incidence Metal Mirror (GIMM) are: (1) to reflect the incident laser beam into the direction of the target; (2) to focus the incident beam directly onto the target; (3) to withstand the thermomechanical loads and damage induced by laser beams; (4) to correct the reflective surface so that the focus is permanently on the target; and (5) to have a full range of motion so it can be placed anywhere relative to the target. The design was described in our progress report of the period August 15, 2003 through April 15, 2004. In the following, we describe further improvements of the final design.

  9. Systems Approaches to Modeling Chronic Mucosal Inflammation

    Science.gov (United States)

    Gao, Boning; Choudhary, Sanjeev; Wood, Thomas G.; Carmical, Joseph R.; Boldogh, Istvan; Mitra, Sankar; Minna, John D.; Brasier, Allan R.

    2013-01-01

    The respiratory mucosa is a major coordinator of the inflammatory response in chronic airway diseases, including asthma and chronic obstructive pulmonary disease (COPD). Signals produced by the chronic inflammatory process induce epithelial mesenchymal transition (EMT) that dramatically alters the epithelial cell phenotype. While the effects of EMT on epigenetic reprogramming and the activation of transcriptional networks are known, its effects on the innate inflammatory response are underexplored. We used a multiplex gene expression profiling platform to investigate the perturbations of the innate pathways induced by TGFβ in a primary airway epithelial cell model of EMT. EMT had dramatic effects on the induction of the innate pathway and the coupling interval of the canonical and noncanonical NF-κB pathways. Simulation experiments demonstrate that rapid, coordinated cap-independent translation of TRAF-1 and NF-κB2 is required to reduce the noncanonical pathway coupling interval. Experiments using amantadine confirmed the prediction that TRAF-1 and NF-κB2/p100 production is mediated by an IRES-dependent mechanism. These data indicate that the epigenetic changes produced by EMT induce dynamic state changes of the innate signaling pathway. Further applications of systems approaches will provide understanding of this complex phenotype through deterministic modeling and multidimensional (genomic and proteomic) profiling. PMID:24228254

  10. ECOMOD - An ecological approach to radioecological modelling

    International Nuclear Information System (INIS)

    Sazykina, Tatiana G.

    2000-01-01

    A unified methodology is proposed to simulate the dynamic processes of radionuclide migration in aquatic food chains in parallel with their stable analogue elements. The distinguishing feature of the unified radioecological/ecological approach is the description of radionuclide migration along with dynamic equations for the ecosystem. The ability of the methodology to predict the results of radioecological experiments is demonstrated by an example of radionuclide (iron group) accumulation by a laboratory culture of the algae Platymonas viridis. Based on the unified methodology, the 'ECOMOD' radioecological model was developed to simulate dynamic radioecological processes in aquatic ecosystems. It comprises three basic modules, which are operated as a set of inter-related programs. The 'ECOSYSTEM' module solves non-linear ecological equations, describing the biomass dynamics of essential ecosystem components. The 'RADIONUCLIDE DISTRIBUTION' module calculates the radionuclide distribution in abiotic and biotic components of the aquatic ecosystem. The 'DOSE ASSESSMENT' module calculates doses to aquatic biota and doses to man from aquatic food chains. The application of the ECOMOD model to reconstruct the radionuclide distribution in the Chernobyl Cooling Pond ecosystem in the early period after the accident shows good agreement with observations.
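
    The coupled ecological/radioecological idea can be pictured with a toy two-equation system; the sketch below (illustrative only, not ECOMOD's actual equations or parameters) grows algal biomass logistically while the specific activity in the biota follows uptake from water minus loss and growth dilution.

        # Toy dynamic radioecology sketch (not ECOMOD's equations): biomass B
        # grows logistically; specific activity C in biota follows uptake from
        # water minus depuration and growth dilution. Values are placeholders.
        def simulate(days=30.0, dt=0.01, r=0.8, K=5.0, c_w=1.0, k_up=0.2, k_loss=0.1):
            """Euler integration of biomass B [g/L] and specific activity C [Bq/g]."""
            B, C = 0.1, 0.0
            for _ in range(int(days / dt)):
                dB = r * B * (1 - B / K)                 # logistic biomass growth
                dC = k_up * c_w - (k_loss + dB / B) * C  # uptake - loss - dilution
                B += dB * dt
                C += dC * dt
            return B, C

        print(simulate())  # biomass approaches K; activity approaches a steady state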

  11. Modelling public risk evaluation of natural hazards: a conceptual approach

    Science.gov (United States)

    Plattner, Th.

    2005-04-01

    In recent years, the dealing with natural hazards in Switzerland has shifted away from being hazard-oriented towards a risk-based approach. Decreasing societal acceptance of risk, accompanied by increasing marginal costs of protective measures and decreasing financial resources, causes an optimization problem. Therefore, the new focus lies on the mitigation of the hazard's risk in accordance with economical, ecological and social considerations. This modern proceeding requires an approach in which not only technological, engineering or scientific aspects of the definition of the hazard or the computation of the risk are considered, but also the public concerns about the acceptance of these risks. These aspects of a modern risk approach enable a comprehensive assessment of the (risk) situation and, thus, sound risk management decisions. In Switzerland, however, the competent authorities suffer from a lack of decision criteria, as they don't know what risk level the public is willing to accept. Consequently, there exists a need for the authorities to know what the society thinks about risks. A formalized model that allows at least a crude simulation of the public risk evaluation could therefore be a useful tool to support effective and efficient risk mitigation measures. This paper presents a conceptual approach to such an evaluation model using perception-affecting factors (PAF), evaluation criteria (EC) and several factors without any immediate relation to the risk itself, but to the evaluating person. Finally, the decision about the acceptance Acc of a certain risk i is made by comparing the perceived risk R_i,perc with the acceptable risk R_i,acc.

  12. Final report of the TRUE Block Scale project. 3. Modelling of flow and transport

    Energy Technology Data Exchange (ETDEWEB)

    Poteri, Antti [VTT Processes, Helsinki (Finland); Billaux, Daniel [Itasca Consultants SA, Ecully (France); Dershowitz, William [Golder Associates Inc., Redmond, WA (United States); Gomez-Hernandez, J. Jaime [Univ. Politecnica de Valencia (Spain). Dept. of Hydraulic and Environmental Engineering; Cvetkovic, Vladimir [Royal Inst. of Tech., Stockholm (Sweden). Div. of Water Resources Engineering; Hautojaervi, Aimo [Posiva Oy, Olkiluoto (Finland); Holton, David [Serco Assurance, Harwell (United Kingdom); Medina, Agustin [UPC, Barcelona (Spain); Winberg, Anders (ed.) [Conterra AB, Uppsala (Sweden)

    2002-12-01

    A series of tracer experiments were performed as part of the TRUE Block Scale experiment over length scales ranging from 10 to 100 m. The in situ experimentation was preceded by a comprehensive iterative characterisation campaign - the results from one borehole were used to update descriptive models and provide the basis for continued characterisation. Apart from core drilling, various types of laboratory investigations, core logging, borehole TV imaging and various types of hydraulic tests (single hole and cross-hole) were performed. Based on the characterisation data, a hydrostructural model of the investigated rock volume was constructed, including deterministic structures and a stochastic background fracture population, and their material properties. In addition, a generic microstructure conceptual model of the investigated structures was developed. Tracer tests with radioactive sorbing tracers performed in three flow paths were preceded by various pre-tests, including tracer dilution tests, which were used to select suitable configurations of tracer injection and pumping in the established borehole array. The in situ experimentation was preceded by formulation of basic questions and associated hypotheses to be addressed by the tracer tests and the subsequent evaluation. The hypotheses addressed the validity of the hydrostructural model, the effects of heterogeneity, and block-scale retention. Model predictions and subsequent evaluation modelling were performed using a wide variety of model concepts. These included stochastic continuum, discrete feature network and channel network models formulated in 3D, which also solved the flow problem. In addition, two 'single channel' approaches (Posiva Streamtube and LaSAR extended to the block scale) were employed. A common basis for transport was formulated. The difference between the approaches was found in how heterogeneity is accounted for, both in terms of number of different types of immobile zones

  13. DECOVALEX III PROJECT. Modelling of FEBEX In-Situ Test. Task1 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, E.E.; Alcoverro, J. [Univ. Politecnica de Catalunya, Barcelona (Spain)] (comps.)

    2005-02-15

    Task 1 of DECOVALEX III was conceived as a benchmark exercise supported by all field and laboratory data generated during the performance of the FEBEX experiment, designed to study thermo-hydro-mechanical and thermo-hydro-geochemical processes of the buffer and rock in the near field. The task was defined as a series of three successive blind prediction exercises (Parts A, B and C), which cover the behaviour of both the rock and the bentonite barrier. Research teams participating in the FEBEX task were given, for each of the three parts, a set of field and laboratory data theoretically sufficient to generate a proper model, and were asked to submit predictions, at given locations and times, for some of the measured variables. The merits and limitations of different modelling approaches were therefore established. The teams could perform additional calculations once the actual 'solution' was disclosed. Final calculations represented the best approximation that a given team could provide, always within the general time constraints imposed by the General DECOVALEX III Organization. This report presents the work performed for Task 1. It contains the case definitions and evaluations of modelling results for Parts A, B and C, and the overall evaluation of the work performed. The report is completed by a CD-ROM containing a set of final reports provided by the modelling teams participating in each of the three parts defined. These reports provide the necessary details to better understand the nature of the blind or final predictions included in this report. The report closes with a set of conclusions, which provides a summary of the main findings and highlights the lessons learned, some of which are summarized below. The best predictions of the water inflow into the excavated tunnel are found when the hydrogeological model is properly calibrated on the basis of other known flow measurements in the same area. The particular idealization of the rock mass (equivalent

  14. Final Report: Optimal Model Complexity in Geological Carbon Sequestration: A Response Surface Uncertainty Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Ye [Univ. of Wyoming, Laramie, WY (United States)

    2018-01-17

    The critical component of a risk assessment study in evaluating GCS is an analysis of uncertainty in CO2 modeling. In such analyses, direct numerical simulation of CO2 flow and leakage requires many time-consuming model runs. Alternatively, analytical methods have been developed which allow fast and efficient estimation of CO2 storage and leakage, although restrictive assumptions on formation rock and fluid properties are employed. In this study, an intermediate approach is proposed based on the Design of Experiment and Response Surface methodology, which consists of using a limited number of numerical simulations to estimate a prediction outcome as a combination of the most influential uncertain site properties. The methodology can be implemented within a Monte Carlo framework to efficiently assess parameter and prediction uncertainty while honoring the accuracy of numerical simulations. The choice of the uncertain properties is flexible and can include geologic parameters that influence reservoir heterogeneity, engineering parameters that influence gas trapping and migration, and reactive parameters that influence the extent of fluid/rock reactions. The method was tested and verified on modeling long-term CO2 flow, non-isothermal heat transport, and CO2 dissolution storage by coupling two-phase flow with explicit miscibility calculation using an accurate equation of state that gives rise to convective mixing of formation brine variably saturated with CO2. All simulations were performed using three-dimensional high-resolution models including a target deep saline aquifer, overlying caprock, and a shallow aquifer. To evaluate the uncertainty in representing reservoir permeability, the sediment hierarchy of a heterogeneous digital stratigraphy was mapped to create multiple irregularly shaped stratigraphic models of decreasing geologic resolution: heterogeneous (reference), lithofacies, depositional environment, and a (homogeneous) geologic formation. To ensure model
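
    The surrogate-plus-Monte-Carlo idea described above can be sketched in a few lines; the quadratic surface, the two uncertain inputs, and the stand-in "simulator" below are illustrative assumptions, not the project's actual model.

        # Sketch of the Design-of-Experiment / response-surface idea: fit a
        # quadratic surface to a handful of simulator runs, then propagate
        # parameter uncertainty through the cheap surrogate.
        import numpy as np

        rng = np.random.default_rng(0)

        def expensive_simulator(perm, porosity):
            # Stand-in for a 3D CO2 flow simulation (placeholder response).
            return 2.0 * perm - 1.5 * porosity + 0.8 * perm * porosity

        # Design of experiment: a small factorial sample of two scaled inputs.
        design = np.array([(p, f) for p in (-1, 0, 1) for f in (-1, 0, 1)], float)
        y = np.array([expensive_simulator(p, f) for p, f in design])

        # Quadratic response surface: y ~ b0 + b1*p + b2*f + b3*p*f + b4*p^2 + b5*f^2
        def basis(s):
            return np.column_stack([np.ones(len(s)), s[:, 0], s[:, 1],
                                    s[:, 0] * s[:, 1], s[:, 0] ** 2, s[:, 1] ** 2])

        beta, *_ = np.linalg.lstsq(basis(design), y, rcond=None)

        # Monte Carlo on the surrogate: many samples at negligible cost.
        samples = rng.normal(size=(10000, 2))
        pred = basis(samples) @ beta
        print("mean and 95th percentile:", pred.mean(), np.percentile(pred, 95))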

  15. FINAL REPORT:Observation and Simulations of Transport of Molecules and Ions Across Model Membranes

    Energy Technology Data Exchange (ETDEWEB)

    MURAD, SOHAIL [University of Illinois at Chicago; JAMESON, CYNTHIA J [University of Illinois at Chicago

    2013-10-22

    During this new grant period we developed a robust methodology for investigating a wide range of properties of phospholipid bilayers. The approach developed is unique because, despite using periodic boundary conditions, we can simulate an entire experiment or process in detail. For example, we can follow the entire permeation process in a lipid membrane. This includes transport from the bulk aqueous phase to the lipid surface; permeation into the lipid; transport inside the lipid; and transport out of the lipid to the bulk aqueous phase again. We studied the transport of small gases in both the lipid itself and in model protein channels. In addition, we have examined the transport of nanocrystals through the lipid membrane, with the main goal of understanding the mechanical behavior of lipids under stress, including water and ion leakage and lipid flip-flop. Finally, we have also examined in detail the deformation of lipids when under the influence of external fields, both mechanical and electrostatic (currently in progress). The important observations and conclusions from our studies are described in the main text of the report.

  16. Evaluating Urban Resilience to Climate Change: A Multi-Sector Approach (Final Report)

    Science.gov (United States)

    EPA is announcing the availability of this final report prepared by the Air, Climate, and Energy (ACE) Research Program, located within the Office of Research and Development, with support from Cadmus. One of the goals of the ACE research program is to provide scientific informat...

  17. Risk communication: a mental models approach

    National Research Council Canada - National Science Library

    Morgan, M. Granger (Millett Granger)

    2002-01-01

    ... information about risks. The procedure uses approaches from risk and decision analysis to identify the most relevant information; it also uses approaches from psychology and communication theory to ensure that its message is understood. This book is written in nontechnical terms, designed to make the approach feasible for anyone willing to try it. It is illustrat...

  18. Analytical approach to chromatic correction in the final focus system of circular colliders

    Directory of Open Access Journals (Sweden)

    Yunhai Cai

    2016-11-01

    Full Text Available A conventional final focus system in particle accelerators is systematically analyzed. We find simple relations between the parameters of two focus modules in the final telescope. Using the relations, we derive the chromatic Courant-Snyder parameters for the telescope. The parameters are scaled approximately according to (L^{*}/β_{y}^{*})δ, where L^{*} is the distance from the interaction point to the first quadrupole, β_{y}^{*} the vertical beta function at the interaction point, and δ the relative momentum deviation. Most importantly, we show how to compensate its chromaticity order by order in δ by a traditional correction module flanked by an asymmetric pair of harmonic multipoles. The method enables a circular Higgs collider with 2% momentum aperture and illuminates a path forward to 4% in the future.
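
    The quoted scaling can be restated compactly; the expansion below is a generic chromatic expansion written for illustration, not an equation reproduced from the paper.

        % Chromatic Courant-Snyder perturbation of the final telescope
        % (generic illustrative form; W_y is the chromatic amplitude).
        \[
          W_y \sim \frac{L^{*}}{\beta_y^{*}}\,\delta ,
          \qquad
          \beta_y(\delta) = \beta_y^{(0)} + \beta_y^{(1)}\delta + \beta_y^{(2)}\delta^{2} + \cdots
        \]

    Compensating the chromaticity "order by order in δ" then amounts to cancelling the β_y^{(1)}, β_y^{(2)}, ... terms with the correction module and the asymmetric pair of harmonic multipoles.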

  19. A tantalum strength model using a multiscale approach: version 2

    Energy Technology Data Exchange (ETDEWEB)

    Becker, R; Arsenlis, A; Hommes, G; Marian, J; Rhee, M; Yang, L H

    2009-09-21

    A continuum strength model for tantalum was developed in 2007 using a multiscale approach. This was our first attempt at connecting simulation results from atomistic to continuum length scales, and much was learned that we were not able to incorporate into the model at that time. The tantalum model described in this report represents a second cut at pulling together multiscale simulation results into a continuum model. Insight gained in creating previous multiscale models for tantalum and vanadium was used to guide the model construction and functional relations for the present model. While the basic approach follows that of the vanadium model, there are significant departures. Some of the recommendations from the vanadium report were followed, but not all. Results from several new analysis techniques have not yet been incorporated due to technical difficulties. Molecular dynamics simulations of single dislocation motion at several temperatures suggested that the thermal activation barrier was temperature dependent. This dependency required additional temperature functions be included within the assumed Arrhenius relation. The combination of temperature dependent functions created a complex model with a non-unique parameterization and extra model constants. The added complexity had no tangible benefits. The recommendation was to abandon the strict Arrhenius form and create a simpler curve fit to the molecular dynamics data for shear stress versus dislocation velocity. Functions relating dislocation velocity and applied shear stress were constructed for vanadium for both edge and screw dislocations. However, an attempt to formulate a robust continuum constitutive model for vanadium using both dislocation populations was unsuccessful; the level of coupling achieved was inadequate to constrain the dislocation evolution properly. Since the behavior of BCC materials is typically assumed to be dominated by screw dislocations, the constitutive relations were ultimately
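
    The "simpler curve fit" recommended above can be illustrated as follows; the power-law form and the synthetic stress-velocity pairs are assumptions standing in for the MD data, not values from the report.

        # Hedged sketch of a direct curve fit of MD dislocation-mobility data
        # (synthetic data, assumed power-law form; not the report's actual fit).
        import numpy as np
        from scipy.optimize import curve_fit

        def velocity(tau, A, m):
            """Dislocation velocity [m/s] as a smooth power law of shear stress [MPa]."""
            return A * tau ** m

        # Synthetic stand-in for MD (stress [MPa], velocity [m/s]) pairs.
        tau_md = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
        v_md = np.array([2.0, 9.0, 35.0, 160.0, 600.0])

        params, _ = curve_fit(velocity, tau_md, v_md, p0=(1e-3, 2.0))
        print("fitted (A, m):", params)

    A fit like this replaces the temperature-dependent Arrhenius machinery with a single empirical mobility function per dislocation type, which is essentially the simplification the report recommends.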

  20. Validation of a Parametric Approach for 3d Fortification Modelling: Application to Scale Models

    Science.gov (United States)

    Jacquot, K.; Chevrier, C.; Halin, G.

    2013-02-01

    Parametric modelling approach applied to cultural heritage virtual representation is a field of research explored for years since it can address many limitations of digitising tools. For example, essential historical sources for fortification virtual reconstructions like plans-reliefs have several shortcomings when they are scanned. To overcome those problems, knowledge-based modelling can be used: knowledge models based on the analysis of theoretical literature of a specific domain such as bastioned fortification treatises can be the cornerstone of the creation of a parametric library of fortification components. Implemented in Grasshopper, these components are manually adjusted on the data available (i.e. 3D surveys of plans-reliefs or scanned maps). Most of the fortification area is now modelled and the question of accuracy assessment is raised. A specific method is used to evaluate the accuracy of the parametric components. The results of the assessment process will allow us to validate the parametric approach. The automation of the adjustment process can finally be planned. The virtual model of fortification is part of a larger project aimed at valorising and diffusing a very unique cultural heritage item: the collection of plans-reliefs. As such, knowledge models are precious assets when automation and semantic enhancements will be considered.

  1. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans, including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iterations with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration
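
    The case-retrieval step can be pictured with a small sketch; the feature choices and the cosine-similarity measure below are illustrative assumptions, not the study's actual feature extraction or similarity definition.

        # Hedged sketch of vector-model case retrieval: represent each case as
        # a feature vector and return the closest reference case by cosine
        # similarity (features here are invented placeholders).
        import numpy as np

        def most_similar(test_vec, reference_matrix):
            """Index of the reference case closest to the test case."""
            ref = reference_matrix / np.linalg.norm(reference_matrix, axis=1, keepdims=True)
            t = test_vec / np.linalg.norm(test_vec)
            return int(np.argmax(ref @ t))

        # Toy features: e.g. PTV volume [cc], rectum overlap [%], bladder overlap [%].
        references = np.array([[80.0, 12.0, 18.0],
                               [120.0, 20.0, 25.0],
                               [60.0, 8.0, 10.0]])
        test_case = np.array([115.0, 19.0, 23.0])
        print("retrieved reference case:", most_similar(test_case, references))

    The planning parameters of the retrieved case then seed the optimizer, which is what bypasses the early trial-and-error iterations.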

  2. A Systems Approach to Bio-Oil Stabilization - Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Robert C; Meyer, Terrence; Fox, Rodney; Submramaniam, Shankar; Shanks, Brent; Smith, Ryan G

    2011-12-23

    CFD model at all flow speeds. This study shows that fully-resolved direct numerical simulation (DNS) is successful in calculating the filter efficiency at all speeds. Aldehydes and acids are thought to play key roles in the stability of bio-oils, so the catalytic stabilization of bio-oils focused on whether a reaction approach could be employed that simultaneously addressed these two types of molecules in bio-oil. Our approach to post-treatment was simultaneous hydrogenation and esterification using a bifunctional metal/acidic heterogeneous catalyst, in which reactive aldehydes were reduced to alcohols, creating a high enough alcohol concentration so that the carboxylic acids could be esterified.

  3. A Dynamic Approach to Modeling Dependence Between Human Failure Events

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory

    2015-09-01

    In practice, most HRA methods use direct dependence from THERP: the notion that error begets error, and one human failure event (HFE) may increase the likelihood of subsequent HFEs. In this paper, we approach dependence from a simulation perspective in which the effects of human errors are dynamically modeled. There are three key concepts that play into this modeling: (1) Errors are driven by performance shaping factors (PSFs). In this context, the error propagation is not a result of the presence of an HFE yielding overall increases in subsequent HFEs. Rather, it is shared PSFs that cause dependence. (2) PSFs have qualities of lag and latency. These two qualities are not currently considered in HRA methods that use PSFs. Yet, to model the effects of PSFs, it is not simply a matter of identifying the discrete effects of a particular PSF on performance. The effects of PSFs must be considered temporally, as the PSFs will have a range of effects across the event sequence. (3) Finally, there is the concept of error spilling. When PSFs are activated, they not only have temporal effects but also lateral effects on other PSFs, leading to emergent errors. This paper presents the framework for tying together these dynamic dependence concepts.
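
    The lag-and-latency idea can be made concrete with a toy simulation; the pulse shape and all numbers below are assumed for illustration and are not the paper's implementation.

        # Illustrative sketch of dependence through a shared PSF: a stressor
        # rises after an initiating event (lag), decays over time (latency),
        # and raises the error probability of *subsequent* tasks, creating
        # dependence without any direct HFE-to-HFE coupling.
        import math

        def psf_level(t, onset=2.0, peak=3.0, decay=0.15):
            """Stress PSF: zero before onset, then an exponentially decaying pulse."""
            return 0.0 if t < onset else peak * math.exp(-decay * (t - onset))

        def p_error(t, nominal=0.01):
            """Error probability scaled by the instantaneous PSF multiplier."""
            return min(1.0, nominal * (1.0 + psf_level(t)))

        for t in range(0, 12, 2):  # task times along the event sequence [min]
            print(f"t={t:2d} min  p_error={p_error(t):.4f}")

    Tasks performed while the PSF is elevated share an inflated error probability, which is exactly the kind of dependence the paper attributes to shared PSFs rather than to the preceding error itself.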

  4. Mathematical models for atmospheric pollutants. Appendix D. Available air quality models. Final report

    International Nuclear Information System (INIS)

    Drake, R.L.; McNaughton, D.J.; Huang, C.

    1979-08-01

    Models that are available for the analysis of airborne pollutants are summarized. In addition, recommendations are given concerning the use of particular models to aid in particular air quality decision making processes. The air quality models are characterized in terms of time and space scales, steady state or time dependent processes, reference frames, reaction mechanisms, treatment of turbulence and topography, and model uncertainty. Using these characteristics, the models are classified in the following manner: simple deterministic models, such as air pollution indices, simple area source models and rollback models; statistical models, such as averaging time models, time series analysis and multivariate analysis; local plume and puff models; box and multibox models; finite difference or grid models; particle models; physical models, such as wind tunnels and liquid flumes; regional models; and global models.
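
    Among the simple deterministic models listed above, the rollback model is compact enough to state directly; the linear form below is the textbook version, given as an illustration rather than the report's exact formulation.

        # Simple linear rollback model (textbook form): future concentration
        # scales with emissions above the natural background b.
        def rollback(c_present, e_present, e_future, background=0.0):
            """Projected concentration after emissions change from e_present to e_future."""
            return background + (c_present - background) * (e_future / e_present)

        # Example: halving emissions over a 10 ug/m^3 background.
        print(rollback(c_present=60.0, e_present=100.0, e_future=50.0, background=10.0))
        # -> 35.0 ug/m^3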

  5. A Discrete Monetary Economic Growth Model with the MIU Approach

    Directory of Open Access Journals (Sweden)

    Wei-Bin Zhang

    2008-01-01

    Full Text Available This paper proposes an alternative approach to economic growth with money. The production side is the same as in the Solow model, the Ramsey model, and the Tobin model. But we deal with the behavior of consumers differently from the traditional approaches. The model is influenced by the money-in-the-utility (MIU) approach in monetary economics. It provides a mechanism of endogenous saving which the Solow model lacks, and avoids the assumption of adding up utility over a period of time upon which the Ramsey approach is based.
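
    A toy discrete iteration shows the flavor of such growth models; note that the constant saving rate below is only a stand-in for the endogenous saving rule that an MIU setup derives, so this is an illustrative skeleton rather than the paper's model.

        # Toy discrete growth iteration (Solow-like skeleton; in an MIU model
        # the saving rate would be derived from utility over goods and money).
        ALPHA, DELTA, SAVE = 0.3, 0.05, 0.25   # capital share, depreciation, saving rate

        def step(k):
            """One period: output y = k^alpha; next capital from saving and depreciation."""
            y = k ** ALPHA
            return (1 - DELTA) * k + SAVE * y

        k = 1.0
        for _ in range(200):
            k = step(k)
        print("steady-state capital (approx):", round(k, 3))
        # Analytic check: k* = (SAVE/DELTA)**(1/(1-ALPHA)) = 5**(1/0.7) ≈ 9.97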

  6. Integrating operational watershed and coastal models for the Iberian Coast: Watershed model implementation - A first approach

    Science.gov (United States)

    Brito, David; Campuzano, F. J.; Sobrinho, J.; Fernandes, R.; Neves, R.

    2015-12-01

    River discharges and loads are essential inputs to coastal seas, and thus for coastal seas modelling, and their properties are the result of all activities and policies carried out inland. For these reasons, main rivers have been the object of intense monitoring programs, which have generated an important amount of historical data. Due to the decline in the Portuguese hydrometric network, and in order to quantify and forecast surface water streamflow and nutrients to coastal areas, the MOHID Land model was applied to the Western Iberia Region with a 2 km horizontal resolution and to the Iberian Peninsula with 10 km horizontal resolution. The domains were populated with land use and soil properties and forced with existing meteorological models. This approach also permits understanding of how the flows and loads are generated and forecasting of their values, which are of utmost importance to perform coastal ocean and estuarine forecasts. The final purpose of the implementation is to obtain fresh water quantity and quality estimates that could be used to support management decisions in the watershed and reservoirs, and also in estuaries and coastal areas. A process-oriented model such as MOHID Land is essential to perform this type of simulation, as the model is independent of the number of river catchments. In this work, the MOHID Land model equations and parameterisations are described, and an innovative methodology for watershed modelling is presented and validated for a large international river, the Tagus River, and the largest national river of Portugal, the Mondego River. Precipitation, streamflow and nutrient modelling results for these two rivers were compared with observations near their coastal outlets in order to evaluate the model's capacity to represent the main watershed trends. Finally, an annual budget of fresh water and nutrients transported by the main twenty-five rivers discharging on the Portuguese coast is presented.

  7. Severe accident approach - final report. Evaluation of design measures for severe accident prevention and consequence mitigation.

    Energy Technology Data Exchange (ETDEWEB)

    Tentner, A. M.; Parma, E.; Wei, T.; Wigeland, R.; Nuclear Engineering Division; SNL; INL

    2010-03-01

    An important goal of the US DOE reactor development program is to conceptualize advanced safety design features for a demonstration Sodium Fast Reactor (SFR). The treatment of severe accidents is one of the key safety issues in the design approach for advanced SFR systems. It is necessary to develop an in-depth understanding of the risk of severe accidents for the SFR so that appropriate risk management measures can be implemented early in the design process. This report presents the results of a review of the SFR features and phenomena that directly influence the sequence of events during a postulated severe accident. The report identifies the safety features used or proposed for various SFR designs in the US and worldwide for the prevention and/or mitigation of Core Disruptive Accidents (CDA). The report provides an overview of the current SFR safety approaches and the role of severe accidents. Mutual understanding of these design features and safety approaches is necessary for future collaborations between the US and its international partners as part of the GEN IV program. The report also reviews the basis for an integrated safety approach to severe accidents for the SFR that reflects the safety design knowledge gained in the US during the Advanced Liquid Metal Reactor (ALMR) and Integral Fast Reactor (IFR) programs. This approach relies on inherent reactor and plant safety performance characteristics to provide additional safety margins. The goal of this approach is to prevent development of severe accident conditions, even in the event of initiators with safety system failures previously recognized to lead directly to reactor damage.

  8. ADVANCED BIOMASS REBURNING FOR HIGH EFFICIENCY NOx CONTROL AND BIOMASS REBURNING - MODELING/ENGINEERING STUDIES JOINT FINAL REPORT; FINAL

    International Nuclear Information System (INIS)

    Vladimir M Zamansky; Mark S. Sheldon; Vitali V. Lissianski; Peter M. Maly; David K. Moyeda; Antonio Marquez; W. Randall Seeker

    2000-01-01

    high efficiency of biomass in reburning are low fuel-N content and high content of alkali metals in ash. These results indicate that the efficiency of biomass as a reburning fuel may be predicted based on its ultimate, proximate, and ash analyses. The results of experimental and kinetic modeling studies were utilized in applying a validated methodology for reburning system design to biomass reburning in a typical coal-fired boiler. Based on the trends in biomass reburning performance and the characteristics of the boiler under study, a preliminary process design for biomass reburning was developed. Physical flow models were applied to specific injection parameters and operating scenarios, to assess the mixing performance of reburning fuel and overfire air jets which is of paramount importance in achieving target NOx control performance. The two preliminary cases studied showed potential as candidate reburning designs, and demonstrated that similar mixing performance could be achieved in operation with different quantities of reburning fuel. Based upon this preliminary evaluation, EER has determined that reburning and advanced reburning technologies can be successfully applied using biomass. Pilot-scale studies on biomass reburning conducted by EER have indicated that biomass is an excellent reburning fuel. This generic design study provides a template approach for future demonstrations in specific installations.

  9. 300 Area dangerous waste tank management system: Compliance plan approach. Final report

    International Nuclear Information System (INIS)

    1996-03-01

    In its Dec. 5, 1989 letter to DOE-Richland (DOE-RL) Operations, the Washington State Dept. of Ecology requested that DOE-RL prepare ''a plan evaluating alternatives for storage and/or treatment of hazardous waste in the 300 Area...''. This document, prepared in response to that letter, presents the proposed approach to compliance of the 300 Area with the federal Resource Conservation and Recovery Act and Washington State's Chapter 173-303 WAC, Dangerous Waste Regulations. It also contains 10 appendices which were developed as bases for preparing the compliance plan approach. It refers to the Radioactive Liquid Waste System facilities and to the radioactive mixed waste

  10. Investigation of tt in the full hadronic final state at CDF with a neural network approach

    CERN Document Server

    Sidoti, A; Busetto, G; Castro, A; Dusini, S; Lazzizzera, I; Wyss, J

    2001-01-01

    In this work we present the results of a neural network (NN) approach to the measurement of the tt production cross-section and top mass in the all-hadronic channel, analyzing data collected at the Collider Detector at Fermilab (CDF) experiment. We have used a hardware implementation of a feedforward neural network, TOTEM, the product of a collaboration of INFN (Istituto Nazionale Fisica Nucleare)-IRST (Istituto per la Ricerca Scientifica e Tecnologica)-University of Trento, Italy. Particular attention has been paid to the evaluation of the systematics specifically related to the NN approach. The results are consistent with those obtained at CDF by conventional data selection techniques. (38 refs).
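
    As a schematic of the kind of feedforward selection network described (the architecture, input features, and random weights below are illustrative assumptions; they do not reproduce TOTEM's hardware implementation):

        # Minimal feedforward-network sketch: one hidden layer mapping
        # event-shape features to a signal/background score. Weights here are
        # random placeholders; in practice they are trained on Monte Carlo
        # signal and background samples.
        import numpy as np

        rng = np.random.default_rng(42)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Toy features per event: e.g. scaled sum-ET, aplanarity, jet count.
        W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)   # input -> hidden
        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

        def nn_score(features):
            """Signal-likeness in [0, 1]; events above a cut would be selected."""
            h = np.tanh(features @ W1 + b1)
            return sigmoid(h @ W2 + b2).item()

        print(nn_score(np.array([0.7, 0.3, 0.9])))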

  11. 300 Area dangerous waste tank management system: Compliance plan approach. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    In its Dec. 5, 1989 letter to DOE-Richland (DOE-RL) Operations, the Washington State Dept. of Ecology requested that DOE-RL prepare ''a plan evaluating alternatives for storage and/or treatment of hazardous waste in the 300 Area...''. This document, prepared in response to that letter, presents the proposed approach to compliance of the 300 Area with the federal Resource Conservation and Recovery Act and Washington State's Chapter 173-303 WAC, Dangerous Waste Regulations. It also contains 10 appendices which were developed as bases for preparing the compliance plan approach. It refers to the Radioactive Liquid Waste System facilities and to the radioactive mixed waste.

  12. A discrete element modelling approach for block impacts on trees

    Science.gov (United States)

    Toe, David; Bourrier, Franck; Olmedo, Ignatio; Berger, Frederic

    2015-04-01

    These past few years, rockfall models explicitly accounting for block shape, especially those using the Discrete Element Method (DEM), have shown a good ability to predict rockfall trajectories. Integrating forest effects into those models still remains challenging. This study aims at using a DEM approach to model impacts of blocks on trees and identify the key parameters controlling the block kinematics after the impact on a tree. A DEM impact model of a block on a tree was developed and validated using laboratory experiments. Then, key parameters were assessed using a global sensitivity analysis. Modelling the impact of a block on a tree using DEM allows taking into account large displacements, material non-linearities and contacts between the block and the tree. Tree stems are represented by flexible cylinders modelled as plastic beams sustaining normal, shearing, bending, and twisting loading. Root-soil interactions are modelled using a rotational stiffness acting on the bending moment at the bottom of the tree and a limit bending moment to account for tree overturning. The crown is taken into account using an additional mass distributed uniformly on the upper part of the tree. The block is represented by a sphere. The contact model between the block and the stem consists of an elastic frictional model. The DEM model was validated using laboratory impact tests carried out on 41 fresh beech (Fagus sylvatica) stems. Each stem was 1.3 m long with a diameter between 3 and 7 cm. Wood stems were clamped on a rigid structure and impacted by a 149 kg Charpy pendulum. Finally, an intensive simulation campaign of blocks impacting trees was done to identify the input parameters controlling the block kinematics after the impact on a tree. 20 input parameters were considered in the DEM simulation model: 12 parameters were related to the tree and 8 parameters to the block. The results highlight that the impact velocity, the stem diameter, and the block volume are the three input
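
    The elastic frictional contact law mentioned above can be sketched compactly; the stiffness and friction values are placeholders, not the study's calibrated parameters.

        # Hedged sketch of an elastic frictional block-stem contact law:
        # normal force from a linear spring on the overlap, tangential force
        # from a spring capped by Coulomb friction. Values are placeholders.
        import numpy as np

        K_N, K_T, MU = 1.0e6, 5.0e5, 0.4  # normal/tangential stiffness [N/m], friction

        def contact_force(overlap, rel_tangential_disp):
            """Return (normal, tangential) contact force magnitudes on the block."""
            if overlap <= 0.0:
                return 0.0, 0.0                        # no contact
            f_n = K_N * overlap                        # elastic normal repulsion
            f_t = K_T * rel_tangential_disp            # elastic tangential trial force
            f_t = min(abs(f_t), MU * f_n) * np.sign(f_t)  # Coulomb sliding cap
            return f_n, f_t

        print(contact_force(overlap=0.002, rel_tangential_disp=0.01))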

  13. Mathematical Modelling Approach in Mathematics Education

    Science.gov (United States)

    Arseven, Ayla

    2015-01-01

    The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of "modeling" is especially important for examinations such as PISA, which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…

  14. Eco-approach and Eco-departure planning study : final report.

    Science.gov (United States)

    2016-01-31

    A long term (10 year) research roadmap is proposed to guide the development and potential deployment of Eco-Approach and Departure (Eco A/D) functionality at signalized intersections, with a focus on commercialization of initial system concepts in 5+...

  15. Learner-Centered Instruction (LCI): Volume 7. Evaluation of the LCI Approach. Final Report.

    Science.gov (United States)

    Pieper, William J.; And Others

    An evaluation of the learner-centered instruction (LCI) approach to training was conducted by comparing the LCI F-111A weapons control systems mechanic/technician course with the conventional Air Force course for the same Air Force specialty code (AFSC) on the following dimensions: job performance of course graduates, man-hour and dollar costs of…

  16. An Exploratory Study: Assessment of Modeled Dioxin Exposure in Ceramic Art Studios (Final Report, 2008)

    Science.gov (United States)

    EPA announced the availability of the final report, An Exploratory Study: Assessment of Modeled Dioxin Exposure in Ceramic Art Studios. This report investigates the potential dioxin exposure to artists/hobbyists who use ball clay to make pottery and related products. Derm...

  17. Search for the standard model Higgs boson in tau final states

    NARCIS (Netherlands)

    Abazov, V.M.; et al., [Unknown; Ancu, L.S.; de Jong, S.J.; Filthaut, F.; Galea, C.F.; Hegeman, J.G.; Houben, P.; Meijer, M.M.; Svoisky, P.; van den Berg, P.J.; van Leeuwen, W.M.

    2009-01-01

    We present a search for the standard model Higgs boson using hadronically decaying tau leptons, in 1 fb^{-1} of data collected with the D0 detector at the Fermilab Tevatron p-pbar collider. We select two final states: tau^{+/-} plus missing transverse energy and b jets, and tau^{+}tau^{-} plus

  18. Definition, development, and demonstration of analytical procedures for the structured assessment approach. Final report

    International Nuclear Information System (INIS)

    1979-01-01

    Analytical procedures were refined for the Structured Assessment Approach for assessing the Material Control and Accounting systems at facilities that contain special nuclear material. Requirements were established for an efficient, feasible algorithm to be used in evaluating system performance measures that involve the probability of detection. Algorithm requirements to calculate the probability of detection for a given type of adversary and the target set are described

  19. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation, as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  20. Computational Approaches for Modeling the Multiphysics in Pultrusion Process

    Directory of Open Access Journals (Sweden)

    P. Carlone

    2013-01-01

    Pultrusion is a continuous manufacturing process used to produce high-strength composite profiles with constant cross section. The mutual interactions between heat transfer, resin flow and cure reaction, variation in the material properties, and stress/distortion evolutions strongly affect the process dynamics together with the mechanical properties and the geometrical precision of the final product. In the present work, pultrusion process simulations are performed for a unidirectional (UD) graphite/epoxy composite rod including several processing physics, such as fluid flow, heat transfer, chemical reaction, and solid mechanics. The pressure increase and the resin flow at the tapered inlet of the die are calculated by means of a computational fluid dynamics (CFD) finite volume model. Several models, based on different homogenization levels and solution schemes, are proposed and compared for the evaluation of the temperature and the degree of cure distributions inside the heating die and at the postdie region. The transient stresses, distortions, and pull force are predicted using a sequentially coupled three-dimensional (3D) thermochemical analysis together with a 2D plane strain mechanical analysis using the finite element method and compared with results obtained from a semianalytical approach.

  1. Do recommender systems benefit users? a modeling approach

    Science.gov (United States)

    Yeung, Chi Ho

    2016-04-01

    Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
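
    The core finding can be reproduced with a toy simulation (a sketch, not the paper's model: users either follow a popularity-based recommender with probability p_follow or buy from their own preferred category, and relevance is the share of purchases matching the buyer's taste):

        import random

        def simulate(p_follow, n_users=200, n_items=50, n_cats=5, rounds=30, seed=1):
            rng = random.Random(seed)
            cat = [i % n_cats for i in range(n_items)]              # item categories
            pref = [rng.randrange(n_cats) for _ in range(n_users)]  # user tastes
            counts = [0] * n_items                                  # purchase history
            hits = total = 0
            for _ in range(rounds):
                top = counts.index(max(counts))                     # recommended item
                for u in range(n_users):
                    if rng.random() < p_follow:                     # follow recommendation
                        item = top
                    else:                                           # buy by own preference
                        item = rng.choice([i for i in range(n_items) if cat[i] == pref[u]])
                    counts[item] += 1
                    hits += (cat[item] == pref[u])
                    total += 1
            return hits / total

        for p in (0.0, 0.5, 1.0):
            print(p, round(simulate(p), 3))

    With p_follow = 1 the relevance drops to the random-draw level of 1/n_cats, echoing the abstract's observation that always following recommendations makes them equivalent to random draws.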

  2. Relaxed memory models: an operational approach

    OpenAIRE

    Boudol , Gérard; Petri , Gustavo

    2009-01-01

    Memory models define an interface between programs written in some language and their implementation, determining which behaviour the memory (and thus a program) is allowed to have in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race free code has a standard semantics. Traditionally, memory models are defined axiomatically, setting constraints on the order in which memory operations are allow...

  3. Numerical modelling approach for mine backfill

    Indian Academy of Sciences (India)

    ... of mine backfill material needs special attention as the numerical model must behave realistically and in accordance with the site conditions. This paper discusses a numerical modelling strategy for modelling mine backfill material. The modelling strategy is studied using a case study mine from the Canadian mining industry.

  14. Comparison of three types of models for the prediction of final academic achievement

    Directory of Open Access Journals (Sweden)

    Silvana Gasar

    2002-12-01

    For efficient prevention of inappropriate secondary school choices, and thereby of academic failure, school counselors need a tool for the prediction of an individual pupil's final academic achievements. Using data mining techniques on a database of pupils and expert modeling, we developed several models for the prediction of final academic achievement in an individual high school educational program. For data mining, we used statistical analyses, clustering and two machine learning methods: classification decision trees and hierarchical decision models. Using the expert system shell DEX, an expert system based on a hierarchical multi-attribute decision model was developed manually. All the models were validated and evaluated from the viewpoint of their applicability. The predictive accuracy of the DEX models and the decision trees was equal and very satisfying, as it reached the predictive accuracy of an experienced counselor. With respect to the efficiency of and difficulties in developing models, and the relatively rapid changes in our education system, we propose that decision trees be used in further development of predictive models.
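
    A minimal sketch of the decision-tree part (Python with scikit-learn; the pupil features and the data-generating rule are invented stand-ins for the actual database):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.uniform(1, 5, size=(500, 3))   # e.g. maths, language, aptitude scores
        y = (X.mean(axis=1) + rng.normal(0, 0.3, 500) > 3.0).astype(int)  # final achievement

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
        print("predictive accuracy:", tree.score(X_te, y_te))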

  5. Probabilistic Approach to Enable Extreme-Scale Simulations under Uncertainty and System Faults. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Knio, Omar [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science

    2017-05-05

    The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.
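
    The flavour of the method can be sketched on a trivial 1-D problem (an assumed illustration, not the project's code): solve u'' = 0 with u(0) = 0, u(1) = 1 on two overlapping subdomains [0, 0.6] and [0.4, 1], treat the interface values as uncertain, run many independent subdomain solves with sampled interface conditions, and let neighbour agreement tighten the distribution (true values: u(0.6) = 0.6, u(0.4) = 0.4):

        import numpy as np

        rng = np.random.default_rng(0)
        gL = rng.normal(0.0, 1.0, size=256)   # uncertain ensemble for u(0.6)
        gR = rng.normal(0.0, 1.0, size=256)   # uncertain ensemble for u(0.4)
        for it in range(8):
            # exact linear subdomain solutions, evaluated at the neighbour's interface
            left_at_04 = gL * (0.4 / 0.6)                        # solve on [0, 0.6]
            right_at_06 = gR + (1.0 - gR) * (0.6 - 0.4) / 0.6    # solve on [0.4, 1]
            gL, gR = right_at_06, left_at_04                     # exchange estimates
            print(it, round(gL.mean(), 4), round(gL.std(), 4))

    The ensemble spread shrinks geometrically toward the exact interface values; a faulty subdomain solve would perturb only a few samples rather than corrupt the global solution.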

  6. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems

    Directory of Open Access Journals (Sweden)

    Lenardo C. Silva

    2015-10-01

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.

  7. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems.

    Science.gov (United States)

    Silva, Lenardo C; Almeida, Hyggo O; Perkusich, Angelo; Perkusich, Mirko

    2015-10-30

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.

  8. Models Portability: Some Considerations about Transdisciplinary Approaches

    Science.gov (United States)

    Giuliani, Alessandro

    Some critical issues about the relative portability of models and solutions across disciplinary barriers are discussed. The risks linked to the use of models and theories coming from different disciplines are highlighted, with a particular emphasis on biology. A metaphorical use of conceptual tools coming from other fields is suggested, together with the inescapable need to judge the relative merits of a model on the basis of the body of facts it explains in its particular domain of application. Some examples of metaphorical modeling coming from biochemistry and psychobiology are briefly discussed in order to clarify the above positions.

  9. Nonlinear Modeling of the PEMFC Based On NNARX Approach

    OpenAIRE

    Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo

    2015-01-01

    The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system. Traditional linear modeling approaches struggle to estimate the structure of the PEMFC system correctly. For this reason, this paper presents a nonlinear model of the PEMFC using the Neural Network Auto-Regressive model with eXogenous inputs (NNARX) approach. The multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accurac...

  10. A visual approach for modeling spatiotemporal relations

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); C.S.S. Neto; L.F.G. Soares

    2008-01-01

    Textual programming languages have proven to be difficult to learn and to use effectively for many people. For this reason, visual tools can be useful to abstract the complexity of such textual languages, minimizing the specification effort. In this paper we present a visual approach for

  11. DIVERSE APPROACHES TO MODELLING THE ASSIMILATIVE ...

    African Journals Online (AJOL)

    This study evaluated the assimilative capacity of Ikpoba River using different approaches namely: homogeneous differential equation, ANOVA/Duncan Multiple rage test, first and second order differential equations, correlation analysis, Eigen values and eigenvectors, multiple linear regression, bootstrapping and far-field ...

  12. Comparison of two novel approaches to model fibre reinforced concrete

    NARCIS (Netherlands)

    Radtke, F.K.F.; Simone, A.; Sluys, L.J.

    2009-01-01

    We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity

  13. Modeling Approaches for Describing Microbial Population Heterogeneity

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita

    in a computational fluid dynamics (CFD) model. The anaerobic growth of a budding yeast population in a continuously run microbioreactor was used as an example. The proposed integrated model describes the fluid flow, the local cell size and cell cycle position distributions, as well as the local concentrations of glucose...

  14. A simplified approach to feedwater train modeling

    International Nuclear Information System (INIS)

    Ollat, X.; Smoak, R.A.

    1990-01-01

    This paper presents a method to simplify feedwater train models for power plants. A simple set of algebraic equations, based on mass and energy balances, is developed to replace complex representations of the components under certain assumptions. The method was tested and used to model the low pressure heaters of the Sequoyah Nuclear Plant in a larger simulation

  15. Generalized equilibrium modeling: the methodology of the SRI-Gulf energy model. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Gazalet, E.G.

    1977-05-01

    The report provides documentation of the generalized equilibrium modeling methodology underlying the SRI-Gulf Energy Model and focuses entirely on the philosophical, mathematical, and computational aspects of the methodology. The model is a highly detailed regional and dynamic model of the supply and demand for energy in the US. The introduction emphasized the need to focus modeling efforts on decisions and the coordinated decomposition of complex decision problems using iterative methods. The conceptual framework is followed by a description of the structure of the current SRI-Gulf model and a detailed development of the process relations that comprise the model. The network iteration algorithm used to compute a solution to the model is described and the overall methodology is compared with other modeling methodologies. 26 references.

  16. Three dimensional global modeling of atmospheric CO2. Final technical report

    International Nuclear Information System (INIS)

    Fung, I.; Hansen, J.; Rind, D.

    1983-01-01

    A modeling effort has been initiated to study the prospects of extracting information on carbon dioxide sources and sinks from observed CO2 variations. The approach uses a three-dimensional global transport model, based on winds from a 3-D general circulation model (GCM), to advect CO2 noninteractively, i.e., as a tracer, with specified sources and sinks of CO2 at the surface. This report identifies the 3-D model employed in this study and discusses biosphere, ocean and fossil fuel sources and sinks. Some preliminary model results are presented. 14 figures
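
    A minimal 1-D analogue (an assumed illustration, not the report's 3-D model) of advecting CO2 as a passive tracer with a specified surface source, using upwind finite differences:

        import numpy as np

        n, dx, dt, u = 100, 1.0, 0.5, 1.0        # cells, spacing, time step, wind speed
        c = np.zeros(n)                          # tracer mixing ratio
        source = 0.1                             # specified surface source in cell 0
        for step in range(200):
            c[1:] -= u * dt / dx * (c[1:] - c[:-1])   # upwind advection (u > 0, CFL = 0.5)
            c[0] += source * dt                       # specified surface source term
        print(c[:10].round(3))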

  17. Cellular communication and “non-targeted effects”: Modelling approaches

    Science.gov (United States)

    Ballarini, Francesca; Facoetti, Angelica; Mariotti, Luca; Nano, Rosanna; Ottolenghi, Andrea

    2009-10-01

    During the last decade, a large number of experimental studies on the so-called "non-targeted effects", in particular bystander effects, outlined that cellular communication plays a significant role in the pathways leading to radiobiological damage. Although it is known that two main types of cellular communication (i.e. via gap junctions and/or molecular messengers diffusing in the extra-cellular environment, such as cytokines, NO etc.) play a major role, it is of utmost importance to better understand the underlying mechanisms, and how such mechanisms can be modulated by ionizing radiation. Although the final goal is of course to elucidate the in vivo scenario, in the meantime in vitro studies can also provide useful insights. In the present paper we will discuss key issues on the mechanisms underlying non-targeted effects and cell communication, for which theoretical models and simulation codes can be of great help. In this framework, we will present in detail three literature models, as well as an approach under development at the University of Pavia. More specifically, we will first focus on a version of the "State-Vector Model" including bystander-induced apoptosis of initiated cells, which was successfully fitted to in vitro data on neoplastic transformation, supporting the hypothesis of a protective bystander effect mediated by apoptosis. The second analyzed model, focusing on the kinetics of bystander effects in 3D tissues, was successfully fitted to data on bystander damage in an artificial 3D skin system, indicating a signal range of the order of 0.7-1 mm. A third model of the bystander effect, taking into account spatial location, cell killing and repopulation, showed dose-response curves increasing approximately linearly at low dose rates but quickly flattening out at higher dose rates, also predicting an effect augmentation following dose fractionation. Concerning the Pavia approach, which can model the release, diffusion and depletion/degradation of

  18. The workshop on ecosystems modelling approaches for South ...

    African Journals Online (AJOL)

    roles played by models in the OMP approach, and raises questions about the costs of the data collection (in particular) needed to apply a multispecies modelling approach in South African fisheries management. It then summarizes the deliberations of workshops held by the Scientific Committees of two international ma-.

  19. Final Report Coupling in silico microbial models with reactive transport models to predict the fate of contaminants in the subsurface.

    Energy Technology Data Exchange (ETDEWEB)

    Lovley, Derek R.

    2012-10-31

    This project successfully accomplished its goal of coupling genome-scale metabolic models with hydrological and geochemical models to predict the activity of subsurface microorganisms during uranium bioremediation. Furthermore, it was demonstrated how this modeling approach can be used to develop new strategies to optimize bioremediation. The approach of coupling genome-scale metabolic models with reactive transport modeling is now well enough established that it has been adopted by other DOE investigators studying uranium bioremediation. Furthermore, the basic principles developed during our studies will be applicable to much broader investigations of microbial activities, not only for other types of bioremediation but also for microbial metabolism in a diversity of environments. This approach has the potential to make an important contribution to predicting the impact of environmental perturbations on the cycling of carbon and other biogeochemical cycles.

  20. A simple approach to modeling ductile failure.

    Energy Technology Data Exchange (ETDEWEB)

    Wellman, Gerald William

    2012-06-01

    Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.

  1. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  2. Chemotaxis: A Multi-Scale Modeling Approach

    Science.gov (United States)

    Bhowmik, Arpan

    We are attempting to build a working simulation of population-level self-organization in Dictyostelium discoideum cells by combining existing models for chemo-attractant production and detection with phenomenological motility models. Our goal is to create a computationally viable model framework within which a population of cells can self-generate chemo-attractant waves and self-organize based on the directional cues of those waves. The work is a direct continuation of our previous work published in Physical Biology, titled ``Excitable waves and direction-sensing in Dictyostelium discoideum: steps towards a chemotaxis model''. This is a work in progress; no official draft or paper exists yet.

  3. An Integrated Approach to Modeling Evacuation Behavior

    Science.gov (United States)

    2011-02-01

    A spate of recent hurricanes and other natural disasters have drawn a lot of attention to the evacuation decision of individuals. Here we focus on evacuation models that incorporate two economic phenomena that seem to be increasingly important in exp...

  4. Infectious disease modeling a hybrid system approach

    CERN Document Server

    Liu, Xinzhi

    2017-01-01

    This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.

  5. Final Report - Composite Fermion Approach to Strongly Interacting Quasi Two Dimensional Electron Gas Systems

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, John

    2009-11-30

    Work related to this project introduced the idea of an effective monopole strength Q* that acted as the effective angular momentum of the lowest shell of composite fermions (CF). This allowed us to predict the angular momentum of the lowest band of energy states for any value of the applied magnetic field simply by determining N_QP, the number of quasielectrons (QE) or quasiholes (QH) in a partially filled CF shell, and adding the angular momenta of the N_QP fermion excitations. The approach reported treated the filled CF level as a vacuum state which could support QE and QH excitations. Numerical diagonalization of small systems allowed us to determine the angular momenta, the energy, and the pair interaction energies of these elementary excitations. The spectra of low-energy states could then be evaluated in a Fermi-liquid-like picture, treating the much smaller number of quasiparticles and their interactions instead of the larger system of N electrons with Coulomb interactions.
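
    In standard composite-fermion mean-field counting (a sketch of the relations the abstract refers to; the notation, with 2p attached flux quanta, is assumed):

        \[
        Q^{*} = Q - p\,(N - 1), \qquad \ell_{0} = |Q^{*}|,
        \]

    so the lowest CF shell holds \(2\ell_{0} + 1\) states. For \(N > 2\ell_{0} + 1\) the excess \(N_{\mathrm{QE}} = N - (2\ell_{0} + 1)\) quasielectrons occupy the next shell; for \(N < 2\ell_{0} + 1\) there are \(N_{\mathrm{QH}} = (2\ell_{0} + 1) - N\) quasiholes; and the allowed angular momenta of the lowest band follow from adding the \(N_{QP}\) single-particle angular momenta.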

  6. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be striven for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  7. 3D Multiscale Integrated Modeling Approach of Complex Rock Mass Structures

    Directory of Open Access Journals (Sweden)

    Mingchao Li

    2014-01-01

    Based on abundant geological data of different regions and different scales in hydraulic engineering, a new approach of 3D engineering-scale and statistical-scale integrated modeling is put forward, considering the complex relationships among geological structures, discontinuities and hydraulic structures. For engineering-scale geological structures, the 3D rock mass model of the study region was built by the exact-match modeling method and the reliability analysis technique. For statistical-scale jointed rock mass, the random network simulation modeling method was realized, including the Baecher structure plane model, Monte Carlo simulation, and dynamic checking of random discontinuities, and the corresponding software program was developed. Finally, the refined model was reconstructed by integrating the engineering-scale model of rock structures, the statistical-scale model of the discontinuity network, and the hydraulic structures model. It has been applied to a practical hydraulic project and offers the model basis for the analysis of hydraulic rock mass structures.

  8. Validated Models for Radiation Response and Signal Generation in Scintillators: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Kerisit, Sebastien N.; Gao, Fei; Xie, YuLong; Campbell, Luke W.; Van Ginhoven, Renee M.; Wang, Zhiguo; Prange, Micah P.; Wu, Dangxin

    2014-12-01

    This Final Report presents work carried out at Pacific Northwest National Laboratory (PNNL) under the project entitled “Validated Models for Radiation Response and Signal Generation in Scintillators” (Project number: PL10-Scin-theor-PD2Jf) and led by Drs. Fei Gao and Sebastien N. Kerisit. This project was divided into four tasks: 1) electronic response functions (ab initio data model); 2) electron-hole yield, variance, and spatial distribution; 3) ab initio calculations of information carrier properties; and 4) transport of electron-hole pairs and scintillation efficiency. Detailed information on the results obtained in each of the four tasks is provided in this Final Report. Furthermore, published peer-reviewed articles based on the work carried out under this project are included in the Appendix. This work was supported by the National Nuclear Security Administration, Office of Nuclear Nonproliferation Research and Development (DNN R&D/NA-22), of the U.S. Department of Energy (DOE).

  9. "Dispersion modeling approaches for near road | Science ...

    Science.gov (United States)

    Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of applications. For example, such models can be useful for evaluating the mitigation potential of roadside barriers in reducing near-road exposures and their associated adverse health effects. Two databases, a tracer field study and a wind tunnel study, provide measurements used in the development and/or validation of algorithms to simulate dispersion in the presence of noise barriers. The tracer field study was performed in Idaho Falls, ID, USA with a 6-m noise barrier and a finite line source in a variety of atmospheric conditions. The second study was performed in the meteorological wind tunnel at the US EPA and simulated line sources at different distances from a model noise barrier to capture the effect on emissions from individual lanes of traffic. In both cases, velocity and concentration measurements characterized the effect of the barrier on dispersion.This paper presents comparisons with the two datasets of the barrier algorithms implemented in two different dispersion models: US EPA’s R-LINE (a research dispersion modelling tool under development by the US EPA’s Office of Research and Development) and CERC’s ADMS model (ADMS-Urban). In R-LINE the physical features reveal

  10. Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches

    Science.gov (United States)

    Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia

    2017-10-01

    With an increasing level of complexity and automation in the area of automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to cope with tyre temperatures and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches, with a view to further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, focus is held on computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.

  11. Analysis of Final Energy Demand by Sector in Malaysia using MAED Model

    International Nuclear Information System (INIS)

    Kumar, M.; Muhammed Zulfakar Mohd Zolkaffly; Alawiah Musa

    2011-01-01

    Energy supply security is important in ensuring a long-term supply to fulfil the growing energy demand. This paper presents the use of the IAEA energy planning tool, Model for Analysis of Energy Demand (MAED), to analyze, simulate and compare final energy demand by five different sectors in Malaysia under some assumptions, bounds and restrictions; the outcome can be used for planning of the future energy supply. (author)

  12. 1993-1994 Final technical report for establishing the SECME Model in the District of Columbia

    International Nuclear Information System (INIS)

    Vickers, R.G.

    1995-01-01

    This is the final report for a program to establish the SECME Model in the District of Columbia. This program has seen the development of a partnership between the District of Columbia Public Schools, the University of the District of Columbia, the Department of Energy, and SECME. This partnership has demonstrated positive achievement in mathematics and science education and learning in students within the District of Columbia

  13. 1993-1994 Final technical report for establishing the SECME Model in the District of Columbia

    Energy Technology Data Exchange (ETDEWEB)

    Vickers, R.G.

    1995-12-31

    This is the final report for a program to establish the SECME Model in the District of Columbia. This program has seen the development of a partnership between the District of Columbia Public Schools, the University of the District of Columbia, the Department of Energy, and SECME. This partnership has demonstrated positive achievement in mathematics and science education and learning in students within the District of Columbia.

  14. Application of a single-objective, hybrid genetic algorithm approach to pharmacokinetic model building.

    Science.gov (United States)

    Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R

    2012-08-01

    A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three
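
    A minimal sketch of a single-objective genetic algorithm over model structures (Python; here an ordinary least-squares fit and its Akaike information criterion stand in for the NONMEM estimation step, and the data are synthetic):

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 200, 6
        X = rng.normal(size=(n, p))
        y = 1.5 * X[:, 0] + 0.8 * X[:, 2] + rng.normal(size=n)   # two true covariates

        def aic(genome):                       # fitness of a covariate-inclusion genome
            cols = np.flatnonzero(genome)
            A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
            rss = np.sum((y - A @ np.linalg.lstsq(A, y, rcond=None)[0]) ** 2)
            return n * np.log(rss / n) + 2 * (len(cols) + 1)

        pop = rng.integers(0, 2, size=(20, p))
        for gen in range(30):
            fit = np.array([aic(g) for g in pop])
            parents = pop[np.argsort(fit)[:10]]                  # keep the 10 best
            cut = rng.integers(1, p, size=10)                    # one-point crossover
            kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                             for i, c in enumerate(cut)])
            kids ^= (rng.random(kids.shape) < 0.05)              # bit-flip mutation
            pop = np.vstack([parents, kids])
        best = min(pop, key=aic)
        print("selected covariates:", np.flatnonzero(best))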

  15. Fuzzy Investment Portfolio Selection Models Based on Interval Analysis Approach

    Directory of Open Access Journals (Sweden)

    Haifeng Guo

    2012-01-01

    This paper employs fuzzy set theory to solve the unintuitive problem of the Markowitz mean-variance (MV) portfolio model and extends it to a fuzzy investment portfolio selection model. Our model establishes intervals for expected returns and risk preference, which can take into account investors' different investment appetites and thus can find the optimal solution for each interval. In the empirical part, we test this model on Chinese stock investments and find that this model can fulfil different kinds of investors' objectives. Finally, investment risk can be decreased when we add an investment limit to each stock in the portfolio, which indicates our model is useful in practice.
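
    An interval-based variant of the mean-variance choice can be sketched as follows (Python; the two-asset numbers and the risk-aversion weight are assumed for illustration): the MV utility is optimised at both endpoints of the expected-return intervals, giving a range of optimal weights rather than a single point.

        import numpy as np

        cov = np.array([[0.04, 0.01], [0.01, 0.09]])   # asset covariance matrix
        r_lo = np.array([0.05, 0.08])                  # lower bounds of expected returns
        r_hi = np.array([0.07, 0.12])                  # upper bounds of expected returns
        lam = 3.0                                      # risk-aversion weight

        def mv_weights(r):
            """Maximise r'w - (lam/2) w'Cov w over long-only, fully invested weights."""
            grid = np.linspace(0.0, 1.0, 101)
            W = np.column_stack([grid, 1.0 - grid])
            util = W @ r - 0.5 * lam * np.einsum('ij,jk,ik->i', W, cov, W)
            return W[np.argmax(util)]

        print("weights at lower bound:", mv_weights(r_lo))
        print("weights at upper bound:", mv_weights(r_hi))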

  16. A consortium approach to glass furnace modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Chang, S.-L.; Golchert, B.; Petrick, M.

    1999-04-20

    Using computational fluid dynamics to model a glass furnace is a difficult task for any one glass company, laboratory, or university to accomplish. The task of building a computational model of the furnace requires knowledge and experience in modeling two dissimilar regimes (the combustion space and the liquid glass bath), along with the skill necessary to couple these two regimes. Also, a detailed set of experimental data is needed in order to evaluate the output of the code to ensure that the code is providing proper results. Since all these diverse skills are not present in any one research institution, a consortium was formed between Argonne National Laboratory, Purdue University, Mississippi State University, and five glass companies in order to marshal these skills into one three-year program. The objective of this program is to develop a fully coupled, validated simulation of a glass melting furnace that may be used by industry to optimize the performance of existing furnaces.

  17. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modelling of tree crown development. These approaches are experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modelling. The common assumption of all three is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and light propagation through the canopy. The computer approach gives the possibility to visualize crown development and to calibrate the model on experimental data. In the paper different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modelling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  18. An interdisciplinary approach to modeling tritium transfer into the environment

    International Nuclear Information System (INIS)

    Galeriu, D; Melintescu, A.

    2005-01-01

    More robust radiological assessment models are required to support the safety case for the nuclear industry. Heavy water reactors, fuel processing plants, radiopharmaceutical factories, and the future fusion reactor all have large tritium loads. While of low probability, large accidental tritium releases cannot be ignored. For Romania, which uses CANDU 600 reactors for nuclear energy, tritium is the radionuclide of national concern. Tritium enters directly into the life cycle in many physicochemical forms. Tritiated water (HTO) leaks from most nuclear installations but is partially converted into organically bound tritium (OBT) through plant and animal metabolic processes. Hydrogen and carbon are elemental components of major nutrients and animal tissues, and their radioisotopes must be modeled differently from those of most other radionuclides. Tritium transfer from atmosphere to plant, and its conversion into organically bound tritium, strongly depend on plant characteristics, season, and weather conditions. In order to cope with this large variability and avoid expensive calibration experiments, we developed a model using knowledge of plant physiology, agrometeorology, soil science, hydrology, and climatology. The transfer of tritiated water to plants was modeled with a resistance approach, including sparse canopies. The canopy resistance was modeled using the Jarvis-Calvet approach, modified in order to make direct use of the canopy photosynthesis rate. The crop growth model WOFOST was used for the photosynthesis rate, both for canopy resistance and for the formation of organically bound tritium. Using this formalism, the tritium transfer parameters were directly linked to processes and parameters known from the agricultural sciences. Model predictions for tritium in wheat were within a factor of two of experimental data, without any calibration. The model was also tested on rice and soybean and can be applied to various plants and environmental conditions. For sparse canopy, the model used coupled
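
    A generic form of the resistance approach mentioned above (a sketch; the symbols are assumed): the HTO flux from air to canopy is

        \[
        F_{\mathrm{HTO}} = \frac{\chi_{\mathrm{air}} - \chi_{\mathrm{canopy}}}{r_{a} + r_{b} + r_{c}},
        \]

    where \(r_{a}\) is the aerodynamic resistance, \(r_{b}\) the quasi-laminar boundary-layer resistance, and \(r_{c}\) the canopy resistance, the last being the term linked here to the photosynthesis rate through the modified Jarvis-Calvet formulation.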

  19. Final Technical Report -- Bridging the PSI Knowledge Gap: A Multiscale Approach

    Energy Technology Data Exchange (ETDEWEB)

    Whyte, Dennis [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2014-12-12

    The Plasma Surface Interactions (PSI) Science Center formed by the grant undertook a multidisciplinary set of studies on the complex interface between the plasma and solid states of matter. The strategy of the center was to combine and integrate the experimental, diagnostic and modeling toolkits from multiple institutions towards specific PSI problems. In this way the Center could tackle integrated science issues which were not addressable by single institutions, as well as evolve the underlying science of the PSI in a more general way than just for fusion applications. The overall strategy proved very successful. The research results and highlights of the MIT portion of the Center are primarily described. A particular highlight is the study of tungsten nano-tendril growth in the presence of helium plasmas. The Center research provided valuable new insights into the mechanisms controlling the nano-tendrils by developing coupled modeling and in situ diagnostic methods which could be directly compared. For example, the role of helium accumulation in tungsten distortion in the surface was followed with unique in situ helium concentration diagnostics developed for this purpose. These depth-profiled, time-resolved helium concentration measurements continue to challenge the numerical models of nano-tendrils. The Center team also combined its expertise on tungsten nano-tendrils to demonstrate for the first time the growth of the tendrils in a fusion environment on the Alcator C-Mod fusion experiment, thus having significant impact on the broader fusion research effort. A new form of isolated nano-tendril “columns” was identified which is now being used to understand the underlying mechanisms controlling the tendril growth. The Center also advanced PSI science on a broader front with a particular emphasis on developing a wide range of in situ PSI diagnostic tools at the DIONISOS facility at MIT. For example the strong suppression of sputtering by the certain combination of light

  20. How is the Current Nano/Microscopic Knowledge Implemented in Model Approaches?

    International Nuclear Information System (INIS)

    Rotenberg, Benjamin

    2013-01-01

    The recent developments of experimental techniques have opened new opportunities and challenges for the modelling and simulation of clay materials on various scales. In this communication, several aspects of the interaction between experimental and modelling approaches are presented and discussed. What levels of modelling are available, depending on the target property, and what experimental input is required? How can experimental information be used to validate models? What can modelling on different scales add to our knowledge of the physical properties of clays? Finally, what can we do when experimental information is not available? Models implement the current nano/microscopic knowledge using experimental input, taking advantage of multi-scale approaches, and providing data or insights complementary to experiments. Future work will greatly benefit from the recent experimental developments, in particular 3D-imaging on intermediate scales, and should also address other properties, e.g. mechanical or thermal properties. (authors)

  1. Final Report for Wetlands as a Source of Atmospheric Methane: A Multiscale and Multidisciplinary Approach

    Energy Technology Data Exchange (ETDEWEB)

    McFarlane, Karis J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-10-28

    Boreal peatlands contain large amounts of old carbon, protected by anaerobic and cold conditions. Climate change could result in favorable conditions for the microbial decomposition and release of this old peat carbon as CO2 or CH4 back into the atmosphere. Our goal was to test the potential for this positive biological feedback to climate change at SPRUCE (Spruce and Peatland Response Under Climatic and Environmental Change), a manipulation experiment funded by DOE and occurring in a forested bog in Minnesota. Taking advantage of LLNL’s capabilities and expertise in chemical and isotopic signatures, we found that carbon emissions from peat were dominated by recently fixed photosynthates, even after short-term experimental warming. We also found that subsurface hydrologic transport was surprisingly rapid at SPRUCE, supplying microbes with young dissolved organic carbon (DOC). We also identified which microbes oxidize CH4 to CO2 at SPRUCE and found that the most active of these also fix N2 (which means they can utilize atmospheric N, making it accessible for other microbes and plants). These results reflect important interactions between hydrology, carbon cycling, and nitrogen cycling present at the bog and relevant to interpreting experimental results and modeling the wetland response to experimental treatments. LLNL involvement at SPRUCE continues through collaborations and a small contract with ORNL, the lead lab for the SPRUCE experiment.

  2. Phytoplankton as Particles - A New Approach to Modeling Algal Blooms

    Science.gov (United States)

    2013-07-01

    ERDC/EL TR-13-13, Civil Works Basic Research Program. Carl F. Cerco and Mark R. Noel, Environmental Laboratory, U.S. Army Engineer Research and Development Center. Phytoplankton blooms can be modeled by treating phytoplankton as discrete particles capable of self-induced transport via buoyancy regulation or other
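
    A toy particle-tracking sketch of the idea (Python; the depth-seeking rule is an assumed stand-in for light-driven buoyancy regulation):

        import numpy as np

        rng = np.random.default_rng(0)
        z = rng.uniform(0.0, 20.0, size=1000)   # particle depths [m], 0 = surface
        z_target, w, dt = 5.0, 0.5, 0.1         # preferred depth, float/sink speed, step
        for step in range(500):
            w_buoy = np.where(z > z_target, -w, w)              # rise if too deep, else sink
            z += (w_buoy + rng.normal(0.0, 1.0, z.size)) * dt   # buoyancy + turbulent mixing
            z = np.clip(z, 0.0, 20.0)
        print("mean depth:", round(z.mean(), 2), "spread:", round(z.std(), 2))

    The particles aggregate around the preferred depth, a particle analogue of a self-induced bloom.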

  3. Contribution of a companion modelling approach

    African Journals Online (AJOL)

    2009-09-16

    This paper describes the role of participatory modelling and simulation as a way to provide a meaningful framework to enable actors to understand the interdependencies in peri-urban catchment management. A role-playing game, connecting the quantitative and qualitative dynamics of the resources with ...

  4. Numerical modelling approach for mine backfill

    Indian Academy of Sciences (India)

    Muhammad Zaka Emad

    2017-07-24

    Numerical modelling is broadly used for assessing complex scenarios in underground mines, including mining sequence and blast-induced vibrations from production blasting. Sublevel stoping mining methods with delayed backfill are extensively used to exploit steeply dipping ore bodies by ...

  5. Energy and development : A modelling approach

    NARCIS (Netherlands)

    van Ruijven, B.J.|info:eu-repo/dai/nl/304834521

    2008-01-01

    Rapid economic growth of developing countries like India and China implies that these countries are becoming important actors in the global energy system. Examples of this impact are the present-day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore

  6. Numerical modelling approach for mine backfill

    Indian Academy of Sciences (India)

    Muhammad Zaka Emad

    2017-07-24

    ... pulse is applied as a stress history on the CRF stope. Blast wave data obtained from the on-site monitoring are very complex. They require processing before being interpreted and used in numerical models. Generally, mining companies hire geophysics experts for the interpretation of such data. The blast wave ...

  7. A new approach to model mixed hydrates

    Czech Academy of Sciences Publication Activity Database

    Hielscher, S.; Vinš, Václav; Jäger, A.; Hrubý, Jan; Breitkopf, C.; Span, R.

    2018-01-01

    Roč. 459, March (2018), s. 170-185 ISSN 0378-3812 R&D Projects: GA ČR(CZ) GA17-08218S Institutional support: RVO:61388998 Keywords: gas hydrate * mixture * modeling Subject RIV: BJ - Thermodynamics Impact factor: 2.473, year: 2016 https://www.sciencedirect.com/science/article/pii/S0378381217304983

  8. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction with respect to parameter value selection, while still allowing the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful addition to the metabolic engineering toolkit, and that they can result in actionable insights. Key concepts are developed and deliverable publications and results are presented.

  9. ICFD modeling of final settlers - developing consistent and effective simulation model structures

    DEFF Research Database (Denmark)

    Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham

    Summary of key findings: The concept of interpreted computational fluid dynamic (iCFD) modelling and the development methodology are presented (Fig. 1). The 1-D advection-dispersion model, along with the statistically generated meta-model for pseudo-dispersion, constitutes the newly developed iCFD model. Nine different model structures were assessed, based on literature (1; 3; 2; 10; 9) and on more recent considerations (Fig. 2a). Validation tests were done using the CFD outputs from extreme scenarios. The most effective model structure (relatively low sum of squares of relative errors, SSRE, and computational time) obtained is that in which the XTC is set at the concentration of the layer just below the feed layer. The feed-layer location is set to the highest location where X > Xin (solids concentration in the SST influent). An effective discretization level (computational time/numerical error) is assessed
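
    The backbone of such a settler model is the 1-D advection-dispersion balance for the solids concentration X(z, t); in generic form (a sketch with assumed symbols: v_s the hindered settling velocity and D the pseudo-dispersion coefficient supplied by the meta-model),

        \[
        \frac{\partial X}{\partial t}
          = -\frac{\partial \big(v_{s}(X)\,X\big)}{\partial z}
            + \frac{\partial}{\partial z}\!\left(D\,\frac{\partial X}{\partial z}\right),
        \]

    with the influent flux entering as a source term at the feed layer.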

  10. Integration models: multicultural and liberal approaches confronted

    Science.gov (United States)

    Janicki, Wojciech

    2012-01-01

    European societies have been shaped by their Christian past, the upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to the individual rights growing out of the liberal tradition and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.

  11. Modelling thermal plume impacts - Kalpakkam approach

    International Nuclear Information System (INIS)

    Rao, T.S.; Anup Kumar, B.; Narasimhan, S.V.

    2002-01-01

    A good understanding of temperature patterns in the receiving waters is essential to know the heat dissipation from thermal plumes originating from coastal power plants. The seasonal temperature profiles of the Kalpakkam coast near the Madras Atomic Power Station (MAPS) thermal outfall site were determined and analysed. It is observed that the seasonal current reversal in the near-shore zone is one of the major mechanisms for the transport of effluents away from the point of mixing. To further refine our understanding of the mixing and dilution processes, it is necessary to numerically simulate the coastal ocean processes by parameterising the key factors concerned. In this paper, we outline the experimental approach to achieve this objective. (author)

  12. Modelling of air quality for Winter and Summer episodes in Switzerland. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Andreani-Aksoyoglu, S.; Keller, J.; Barmpadimos, L.; Oderbolz, D.; Tinguely, M.; Prevot, A. [Paul Scherrer Institute (PSI), Laboratory of Atmospheric Chemistry, Villigen (Switzerland); Alfarra, R. [University of Manchester, Manchester (United Kingdom); Sandradewi, J. [Jisca Sandradewi, Hoexter (Germany)

    2009-05-15

    This final report issued by the General Energy Research Department and its Laboratory of Atmospheric Chemistry at the Paul Scherrer Institute (PSI) reports on the results obtained from the modelling of regional air quality for three episodes, January-February 2006, June 2006 and January 2007. The focus of the calculations is on particulate matter concentrations, as well as on ozone levels in summer. The model results were compared with the aerosol data collected by an Aerosol Mass Spectrometer (AMS), which was operated during all three episodes as well as with the air quality monitoring data from further monitoring programs. The air quality model used in this study is described and the results obtained for various types of locations - rural, city, high-altitude and motorway-near - are presented and discussed. The models used are described.

  13. Modelling approach for photochemical pollution studies

    International Nuclear Information System (INIS)

    Silibello, C.; Catenacci, G.; Calori, G.; Crapanzano, G.; Pirovano, G.

    1996-01-01

    The comprehension of the relationships between primary pollutant emissions and secondary pollutant concentrations and deposition is necessary to design policies and strategies for the maintenance of a healthy environment. The use of mathematical models is a powerful tool to assess the effect of the emissions and of the physical and chemical transformations of pollutants on air quality. A photochemical model, Calgrid, developed by CARB (California Air Resources Board), has been used to test the effect of different meteorological and air quality scenarios on ozone concentration levels. In this way we can evaluate the influence of these conditions and determine the most important chemical species and reactions in the atmosphere. The ozone levels are strongly related to the reactive hydrocarbon concentrations and to the solar radiation flux

  14. Colour texture segmentation using modelling approach

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    2005-01-01

    Vol. 3687 (2005), pp. 484-491. ISSN 0302-9743. [International Conference on Advances in Pattern Recognition /3./. Bath, 22.08.2005-25.08.2005] R&D Projects: GA MŠk 1M0572; GA AV ČR 1ET400750407; GA AV ČR IAA2075302. Institutional research plan: CEZ:AV0Z10750506. Keywords: colour texture segmentation * image models * segmentation benchmark. Subject RIV: BD - Theory of Information

  15. Tumour resistance to cisplatin: a modelling approach

    International Nuclear Information System (INIS)

    Marcu, L; Bezak, E; Olver, I; Doorn, T van

    2005-01-01

    Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatment. When modelling cisplatin as a single agent, only the properties of cisplatin have to be taken into account, reducing the number of assumptions that are considered in generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which can eventually overcome tumour regression, leading to treatment failure.
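
    As a minimal illustration of the resistance metric defined above (a sketch, not the authors' Monte Carlo code), the CRF can be computed directly from survivor counts; the cell numbers used here are hypothetical.

```python
# Cisplatin resistance factor (CRF), as defined in the abstract above:
# surviving cells of the resistant population divided by surviving cells
# of the sensitive population after the same treatment time.
# The survivor counts below are hypothetical, for illustration only.

def cisplatin_resistance_factor(resistant_survivors: float,
                                sensitive_survivors: float) -> float:
    return resistant_survivors / sensitive_survivors

# Example: 4.2e5 resistant vs 6.0e4 sensitive surviving cells.
print(cisplatin_resistance_factor(4.2e5, 6.0e4))  # CRF = 7.0
```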

  16. ISM Approach to Model Offshore Outsourcing Risks

    Directory of Open Access Journals (Sweden)

    Sunand Kumar

    2014-07-01

    In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But, as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks which affect the performance of offshore outsourcing. To this end, the authors have identified various risks through an extensive review of the literature. From this information, an integrated model of the risks affecting offshore outsourcing is developed using interpretive structural modelling (ISM), and the structural relationships between these risks are modeled. Further, MICMAC analysis is done to analyze the driving power and dependence of the risks, which shall help managers to identify and classify important criteria and to reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
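
    To make the MICMAC step concrete, the sketch below computes driving power and dependence from a binary reachability matrix; the matrix entries and risk labels are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical 5x5 binary reachability matrix from an ISM exercise:
# entry (i, j) = 1 means risk i drives (reaches) risk j.
risks = ["political", "cultural", "compliance", "opportunistic", "structural"]
R = np.array([
    [1, 1, 1, 1, 1],   # political risk reaches every other risk
    [1, 1, 1, 1, 1],   # cultural-difference risk reaches every other risk
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
])

# MICMAC: driving power = row sum, dependence = column sum.
driving = R.sum(axis=1)
dependence = R.sum(axis=0)
for name, drv, dep in zip(risks, driving, dependence):
    print(f"{name:14s} driving power = {drv}, dependence = {dep}")
# High driving power with low dependence (here: political and cultural
# risks) marks the strong drivers, matching the paper's conclusion.
```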

  17. Smeared crack modelling approach for corrosion-induced concrete damage

    DEFF Research Database (Denmark)

    Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik

    2017-01-01

    In this paper a smeared crack modelling approach is used to simulate corrosion-induced damage in reinforced concrete. The presented modelling approach utilizes a thermal analogy to mimic the expansive nature of solid corrosion products, while taking into account the penetration of corrosion products into the surrounding concrete, non-uniform precipitation of corrosion products, and creep. To demonstrate the applicability of the presented modelling approach, numerical predictions in terms of corrosion-induced deformations as well as formation and propagation of micro- and macrocracks were...

  18. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as to predict a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression analysis...
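
    As a generic illustration of the kind of analysis the book treats (not an example from the text itself), the sketch below fits a response to two predictors by ordinary least squares on synthetic data.

```python
import numpy as np

# Synthetic business-style data: predict sales from ad spend and price.
rng = np.random.default_rng(0)
n = 100
ad_spend = rng.uniform(0, 10, n)
price = rng.uniform(1, 5, n)
sales = 3.0 + 2.0 * ad_spend - 1.5 * price + rng.normal(0, 1, n)

# Ordinary least squares via a design matrix with an intercept column.
X = np.column_stack([np.ones(n), ad_spend, price])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
print("intercept, ad_spend and price coefficients:", beta)

# Predicting a response value given new predictor values.
print("predicted sales:", np.array([1.0, 6.0, 2.5]) @ beta)
```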

  19. Jackiw-Pi model: A superfield approach

    Science.gov (United States)

    Gupta, Saurabh

    2014-12-01

    We derive the off-shell nilpotent and absolutely anticommuting Becchi-Rouet-Stora-Tyutin (BRST) as well as anti-BRST transformations s_(a)b corresponding to the Yang-Mills gauge transformations of the 3D Jackiw-Pi model by exploiting the "augmented" superfield formalism. We also show that the Curci-Ferrari restriction, which is a hallmark of any non-Abelian 1-form gauge theory, emerges naturally within this formalism and plays an instrumental role in providing the proof of the absolute anticommutativity of s_(a)b.

  20. A model independent search for new physics in final states containing leptons at the D0 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Piper, Joel Michael [Michigan State Univ., East Lansing, MI (United States)

    2009-01-01

    The standard model is known to be the low energy limit of a more general theory. Several consequences of the standard model point to a strong probability of new physics becoming experimentally visible in high energy collisions at a few TeV, resulting in high momentum objects. The specific signatures of these collisions are topics of much debate. Rather than choosing a specific signature, this analysis searches the data broadly, preferring breadth over sensitivity. In searching for new physics, several different approaches are used. These include the comparison of data with the standard model background expectation in the overall number of events, comparisons of distributions of many kinematic variables, and finally comparisons on the tails of distributions that sum the momenta of the objects in an event. With 1.07 fb⁻¹ at the D0 experiment, we find no evidence of physics beyond the standard model. Several discrepancies from the standard model were found, but none of these provide a compelling case for new physics.

  1. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body model.

  2. Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches

    Science.gov (United States)

    Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem

    2014-01-01

    Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…

  3. Meson dynamics beyond the quark model: a study of final state interactions

    International Nuclear Information System (INIS)

    Au, K.L.; Pennington, M.R.; Morgan, D.

    1986-09-01

    A scalar glueball is predicted in the 1 GeV mass region. The present analysis is concerned with experimental evidence for such a state. Recent high statistics results on central dimeson production at the ISR enable the authors to perform an extensive new coupled channel analysis of I = 0 S-wave ππ and KK-bar final states. This unambiguously reveals three resonances in the 1 GeV region - S_1(991), S_2(988) and ε(900) - where the naive quark model expects just two. These new features are discussed, including how they may be confirmed experimentally and their present interpretation. The S_1(991) is a plausible candidate for the scalar glueball. Other production reactions are examined (heavy flavour decays and γγ reactions) which lead to the same final states. (author)

  4. Keyring models: An approach to steerability

    Science.gov (United States)

    Miller, Carl A.; Colbeck, Roger; Shi, Yaoyun

    2018-02-01

    If a measurement is made on one half of a bipartite system, then, conditioned on the outcome, the other half has a new reduced state. If these reduced states defy classical explanation—that is, if shared randomness cannot produce these reduced states for all possible measurements—the bipartite state is said to be steerable. Determining which states are steerable is a challenging problem even for low dimensions. In the case of two-qubit systems, a criterion is known for T-states (that is, those with maximally mixed marginals) under projective measurements. In the current work, we introduce the concept of keyring models—a special class of local hidden state models. When the measurements made correspond to real projectors, these allow us to study steerability beyond T-states. Using keyring models, we completely solve the steering problem for real projective measurements when the state arises from mixing a pure two-qubit state with uniform noise. We also give a partial solution in the case when the uniform noise is replaced by independent depolarizing channels.

  5. Functional RG approach to the Potts model

    Science.gov (United States)

    Ben Alì Zinati, Riccardo; Codello, Alessandro

    2018-01-01

    The critical behavior of the (n+1)-state Potts model in d dimensions is studied with functional renormalization group techniques. We devise a general method to derive β-functions for continuous values of d and n, and we write the flow equation for the effective potential (LPA′) when n is instead fixed. We calculate several critical exponents, which are found to be in good agreement with Monte Carlo simulations and ε-expansion results available in the literature. In particular, we focus on Percolation (n → 0) and Spanning Forest (n → -1), which are the only non-trivial universality classes in d = 4, 5 and where our methods converge faster.

  6. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    Science.gov (United States)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibits a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can then be analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
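
    The sketch below illustrates the search idea on a deliberately trivial stand-in chemistry (it is not the authors' system): a genetic algorithm selects a subset of candidate reactions whose simulated dynamics best match a target trajectory. All reaction effects and the dynamics are hypothetical.

```python
import random

N_REACTIONS = 12
TARGET = [1.0, 1.5, 2.2, 3.1, 4.0]                 # desired trajectory
EFFECT = [random.uniform(-0.3, 0.5) for _ in range(N_REACTIONS)]

def simulate(mask):
    """Trivial stand-in dynamics: the concentration grows at each step
    by the summed effect of the active reactions."""
    x, traj, rate = 1.0, [], sum(e for e, on in zip(EFFECT, mask) if on)
    for _ in TARGET:
        traj.append(x)
        x += rate
    return traj

def fitness(mask):
    return -sum((a - b) ** 2 for a, b in zip(simulate(mask), TARGET))

# Genetic algorithm over binary masks selecting the active reactions.
pop = [[random.randint(0, 1) for _ in range(N_REACTIONS)] for _ in range(40)]
for generation in range(60):
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:10], []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_REACTIONS)      # one-point crossover
        child = a[:cut] + b[cut:]
        child[random.randrange(N_REACTIONS)] ^= 1   # point mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best reaction set:", best, "fitness:", fitness(best))
```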

  7. Fusion modeling approach for novel plasma sources

    International Nuclear Information System (INIS)

    Melazzi, D; Manente, M; Pavarin, D; Cardinali, A

    2012-01-01

    The physics involved in the coupling, propagation and absorption of RF helicon waves (electron whistlers) in low temperature helicon plasma sources is investigated by solving the 3D Maxwell-Vlasov model equations using a WKB asymptotic expansion. The reduced set of equations is formally Hamiltonian and allows for the reconstruction of the wave front of the propagating wave, monitoring along the calculation that the WKB expansion remains satisfied. This method can be fruitfully employed in a new investigation of the power deposition mechanisms involved in common helicon low temperature plasma sources when a general confinement magnetic field configuration is allowed, unveiling new physical insight into the wave propagation and absorption phenomena and stimulating further research on the design of innovative and more efficient low temperature plasma sources. A brief overview of this methodology and its capabilities is presented in this paper.

  8. Carbonate rock depositional models: A microfacies approach

    Energy Technology Data Exchange (ETDEWEB)

    Carozzi, A.V.

    1988-01-01

    Carbonate rocks contain more than 50% by weight carbonate minerals such as calcite, dolomite, and siderite. Understanding how these rocks form can lead to more efficient methods of petroleum exploration. Microfacies analysis techniques can be used as a method of predicting models of sedimentation for carbonate rocks. Microfacies in carbonate rocks can be seen clearly only in thin sections under a microscope. Thin-section analysis of carbonate rocks is a tool that can be used to understand depositional environments, the diagenetic evolution of carbonate rocks, and the formation of porosity and permeability in carbonate rocks. Microfacies analysis techniques are applied to understanding the origin and formation of carbonate ramps, carbonate platforms, and carbonate slopes and basins. This book will be of interest to students and professionals concerned with the disciplines of sedimentary petrology, sedimentology, petroleum geology, and paleontology.

  9. Wind Turbine Control: Robust Model Based Approach

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood

    This is because, on the one hand, control methods can decrease the cost of energy by keeping the turbine close to its maximum efficiency. On the other hand, they can reduce structural fatigue and therefore increase the lifetime of the wind turbine. The power produced by a wind turbine is proportional to the square of its rotor radius, therefore it seems reasonable to increase the size of the wind turbine in order to capture more power. However, as the size increases, the mass of the blades increases with the cube of the rotor size. This means that, in order to keep structural feasibility and the mass of the whole structure reasonable, the ratio of mass to size should be reduced. This trend results in more flexible structures. Control of the flexible structure of a wind turbine in a wind field of stochastic nature is very challenging. In this thesis we examine a number of robust model based methods for wind turbine...

  10. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behaviour, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches and the development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
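
    A minimal sketch of the comparison discussed above, assuming scikit-learn is available (the review itself prescribes no particular library); the synthetic data stand in for a real clinical dataset.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Statistical approach: a logistic regression risk model.
stat_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Artificial neural network approach: a small multilayer perceptron.
ann_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X_tr, y_tr)

# Validate both on held-out data, e.g. by AUC (discrimination).
for name, model in [("logistic", stat_model), ("ANN", ann_model)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```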

  11. R&D for computational cognitive and social models : foundations for model evaluation through verification and validation (final LDRD report).

    Energy Technology Data Exchange (ETDEWEB)

    Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy

    2008-09-01

    Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering- and physics-oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation.

  12. A dual model approach to ground water recovery trench design

    International Nuclear Information System (INIS)

    Clodfelter, C.L.; Crouch, M.S.

    1992-01-01

    The design of trenches for contaminated ground water recovery must consider several variables. This paper presents a dual-model approach for effectively recovering contaminated ground water migrating toward a trench by advection. The approach involves an analytical model to determine the vertical influence of the trench and a numerical flow model to determine the capture zone within the trench and the surrounding aquifer. The analytical model is used, by varying trench dimensions and head values, to design a trench which meets the remediation criteria. The numerical flow model is used to select the type of backfill and the location of sumps within the trench. The dual-model approach can be used to design a recovery trench which effectively captures advective migration of contaminants in the vertical and horizontal planes.

  13. Simple queueing approach to segregation dynamics in Schelling model

    OpenAIRE

    Sobkowicz, Pawel

    2007-01-01

    A simple queueing approach to the segregation of agents in a modified one-dimensional Schelling segregation model is presented. The goal is to arrive at a simple formula for the number of unhappy agents remaining after segregation.
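
    To make the quantity of interest concrete, the sketch below runs a plain simulation of a toy one-dimensional Schelling dynamic and counts the unhappy agents left at the end; the happiness rule and swap dynamics are illustrative assumptions, not the paper's queueing derivation.

```python
import random

# Toy 1D Schelling dynamics on a ring: an agent is unhappy if both of
# its neighbours are of the other type; pairs of unhappy agents swap.
N = 200
cells = [random.choice([0, 1]) for _ in range(N)]

def unhappy(cells):
    n = len(cells)
    return [i for i in range(n)
            if cells[(i - 1) % n] != cells[i] and cells[(i + 1) % n] != cells[i]]

for step in range(10000):
    u = unhappy(cells)
    if len(u) < 2:
        break
    i, j = random.sample(u, 2)       # two unhappy agents swap positions
    cells[i], cells[j] = cells[j], cells[i]

print("unhappy agents remaining after segregation:", len(unhappy(cells)))
```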

  14. Virtuous organization: A structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Majid Zamahani

    2013-02-01

    For years, the idea of virtue was unfavorable among researchers; virtues were traditionally considered culture-specific and relativistic, and were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have recently been taken seriously among organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resources, structure and processes, care for community, and the virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees of Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and 211 valid responses were received. Our results reveal that all five variables have positive and significant impacts on the virtuous organization. Among the five variables, organizational culture has the most direct impact (0.80) and human resources have the most total impact (0.844) on the virtuous organization.

  15. A systemic approach for modeling soil functions

    Science.gov (United States)

    Vogel, Hans-Jörg; Bartke, Stephan; Daedlow, Katrin; Helming, Katharina; Kögel-Knabner, Ingrid; Lang, Birgit; Rabot, Eva; Russell, David; Stößel, Bastian; Weller, Ulrich; Wiesmeier, Martin; Wollschläger, Ute

    2018-03-01

    The central importance of soil for the functioning of terrestrial systems is increasingly recognized. Critically relevant for water quality, climate control, nutrient cycling and biodiversity, soil provides more functions than just the basis for agricultural production. Nowadays, soil is increasingly under pressure as a limited resource for the production of food, energy and raw materials. This has led to an increasing demand for concepts assessing soil functions so that they can be adequately considered in decision-making aimed at sustainable soil management. The various soil science disciplines have progressively developed highly sophisticated methods to explore the multitude of physical, chemical and biological processes in soil. It is not obvious, however, how the steadily improving insight into soil processes may contribute to the evaluation of soil functions. Here, we present a new systemic modeling framework that allows for a consistent coupling between reductionist yet observable indicators for soil functions and detailed process understanding. It is based on the mechanistic relationships between soil functional attributes, each explained by a network of interacting processes as derived from scientific evidence. The non-linear character of these interactions produces stability and resilience of soil with respect to functional characteristics. We anticipate that this new conceptual framework will integrate the various soil science disciplines and help identify important future research questions at the interface between disciplines. It allows the overwhelming complexity of soil systems to be adequately coped with and paves the way for steadily improving our capability to assess soil functions based on scientific understanding.

  16. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  17. Towards Translating Graph Transformation Approaches by Model Transformations

    NARCIS (Netherlands)

    Hermann, F.; Kastenberg, H.; Modica, T.; Karsai, G.; Taentzer, G.

    2006-01-01

    Recently, many researchers have been working on semantics-preserving model transformation. In the field of graph transformation one can think of translating graph grammars written in one approach to a behaviourally equivalent graph grammar in another approach. In this paper we translate graph grammars

  18. An Almost Integration-free Approach to Ordered Response Models

    NARCIS (Netherlands)

    van Praag, B.M.S.; Ferrer-i-Carbonell, A.

    2006-01-01

    In this paper we propose an alternative approach to the estimation of ordered response models. We show that the Probit method may be replaced by a simple OLS approach, called P(robit)OLS, without any loss of efficiency. This method can be generalized to the analysis of panel data. For large-scale

  19. Optimizing technology investments: a broad mission model approach

    Science.gov (United States)

    Shishko, R.

    2003-01-01

    A long-standing problem at NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.

  20. A generalized quarter car modelling approach with frame flexibility ...

    Indian Academy of Sciences (India)

    ... mass distribution and damping. Here we propose a generalized quarter-car modelling approach, incorporating both the frame as well as other-wheel ground contacts. Our approach is linear, uses Laplace transforms, involves vertical motions of key points of interest and has intermediate complexity with improved realism.

  1. Model-Independent Analysis of the Neutron-Proton Final-State Interaction Region in the $pp \to pn\pi^+$ Reaction

    CERN Document Server

    Uzikov, Yu N

    2001-01-01

    Experimental data on the $pp \to pn\pi^+$ reaction measured in an exclusive two-arm experiment at 800 MeV show a narrow peak arising from the strong proton-neutron final-state interaction. It was claimed, within the framework of a certain model, that this peak contained up to a 25% spin-singlet final-state contribution. By comparing the data with those of $pp \to d\pi^+$ in a largely model-independent way, it is demonstrated here that at all the angles measured the whole of the peak could be explained as being due to spin-triplet final states, with the spin-singlet contribution being at most a few percent. Good qualitative agreement with the measured proton analysing power is also found within this approach.

  2. A generalized quarter car modelling approach with frame flexibility ...

    Indian Academy of Sciences (India)

    HUSAIN KANCHWALA

    A simple Matlab code is provided that enables quick parametric studies. Finally, a parametric study and wheel hop analysis are performed for a realistic numerical example. The frequency and time domain responses obtained clearly show the effects of the other wheels, which are outside the scope of usual quarter-car models.
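
    Since the paper's Matlab code is not reproduced in the record, the sketch below shows only the classic two-degree-of-freedom quarter-car model that the paper generalizes (without frame flexibility or other-wheel contacts); all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic 2-DOF quarter-car: sprung mass zs over unsprung mass zu.
ms, mu = 300.0, 40.0        # sprung / unsprung mass [kg] (illustrative)
ks, cs = 20e3, 1.5e3        # suspension stiffness [N/m], damping [Ns/m]
kt = 180e3                  # tyre stiffness [N/m]

def road(t):
    return 0.05 if t >= 0.5 else 0.0   # 5 cm step input at t = 0.5 s

def rhs(t, y):
    zs, vs, zu, vu = y
    f_susp = ks * (zu - zs) + cs * (vu - vs)   # suspension force
    f_tyre = kt * (road(t) - zu)               # tyre contact force
    return [vs, f_susp / ms, vu, (f_tyre - f_susp) / mu]

sol = solve_ivp(rhs, [0, 3], [0, 0, 0, 0], max_step=1e-3)
print("final sprung-mass displacement [m]:", sol.y[0, -1])
```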

  3. Aespoe Pillar Stability Experiment. Final coupled 3D thermo-mechanical modeling. Preliminary particle mechanical modeling

    International Nuclear Information System (INIS)

    Wanne, Toivo; Johansson, Erik; Potyondy, David

    2004-02-01

    SKB is planning to perform a large-scale pillar stability experiment called APSE (Aespoe Pillar Stability Experiment) at Aespoe HRL. The study is focused on understanding and controlling progressive rock failure in hard crystalline rock and damage caused by high stresses. The elastic thermo-mechanical modeling was carried out in three dimensions, because of the complex test geometry and in-situ stress tensor, using the finite-difference modeling software FLAC3D. Cracking and damage formation were modeled in the area of interest (the pillar between two large-scale holes) in two dimensions using the Particle Flow Code (PFC), which is based on particle mechanics. FLAC and PFC were coupled to minimize the computer resources and the computing time. According to the modeling, the initial temperature rises from 15 deg C to about 65 deg C in the pillar area during the heating period of 120 days. The rising temperature induces stresses in the pillar area through thermal expansion, and after 120 days of heating the stresses have increased by about 33%, from the excavation-induced maximum stress of 150 MPa to 200 MPa at the end of the heating period. The FLAC3D model identified only regions where the crack initiation stress was exceeded; these extended about two meters down the hole wall and can be considered the areas where damage may occur during the in-situ test. When the other hole is pressurized with a 0.8 MPa confining pressure, about 5 MPa more stress is needed to damage the rock than without confining pressure, which makes the damaged area somewhat smaller. High compressive stresses, in addition to some tensile stresses, might induce some acoustic emission (AE) activity in the upper part of the hole from the very beginning of the test; these are thus potential areas where AE activity may be detected. Acoustic emissions will be monitored during the test execution. The 2D coupled PFC-FLAC modeling indicated that

  4. Aespoe Pillar Stability Experiment. Final coupled 3D thermo-mechanical modeling. Preliminary particle mechanical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wanne, Toivo; Johansson, Erik; Potyondy, David [Saanio and Riekkola Oy, Helsinki (Finland)

    2004-02-01

    SKB is planning to perform a large-scale pillar stability experiment called APSE (Aespoe Pillar Stability Experiment) at Aespoe HRL. The study is focused on understanding and controlling progressive rock failure in hard crystalline rock and damage caused by high stresses. The elastic thermo-mechanical modeling was carried out in three dimensions, because of the complex test geometry and in-situ stress tensor, using the finite-difference modeling software FLAC3D. Cracking and damage formation were modeled in the area of interest (the pillar between two large-scale holes) in two dimensions using the Particle Flow Code (PFC), which is based on particle mechanics. FLAC and PFC were coupled to minimize the computer resources and the computing time. According to the modeling, the initial temperature rises from 15 deg C to about 65 deg C in the pillar area during the heating period of 120 days. The rising temperature induces stresses in the pillar area through thermal expansion, and after 120 days of heating the stresses have increased by about 33%, from the excavation-induced maximum stress of 150 MPa to 200 MPa at the end of the heating period. The FLAC3D model identified only regions where the crack initiation stress was exceeded; these extended about two meters down the hole wall and can be considered the areas where damage may occur during the in-situ test. When the other hole is pressurized with a 0.8 MPa confining pressure, about 5 MPa more stress is needed to damage the rock than without confining pressure, which makes the damaged area somewhat smaller. High compressive stresses, in addition to some tensile stresses, might induce some acoustic emission (AE) activity in the upper part of the hole from the very beginning of the test; these are thus potential areas where AE activity may be detected. Acoustic emissions will be monitored during the test execution. The 2D coupled PFC-FLAC modeling indicated that

  5. Numerical approaches to expansion process modeling

    Directory of Open Access Journals (Sweden)

    G. V. Alekseev

    2017-01-01

    Forage production is currently undergoing a period of intensive renovation and introduction of the most advanced technologies and equipment. Methods such as barley toasting, grain extrusion, steaming and flattening of grain, fluidized-bed explosion, infrared-ray treatment of cereals and legumes followed by flattening, and one- or two-stage granulation of purified whole grain without humidification in matrix presses with subsequent grinding of the granules are used more and more often. These methods require special apparatuses, machines and auxiliary equipment, created on the basis of mathematical models compiled by different methods. In roasting, simulation of the heat fields arising in the working chamber provides conditions under which a portion of the starch decomposes to monosaccharides, which makes the grain sweetish, although protein denaturation somewhat reduces the digestibility of the protein and the availability of amino acids. Grain is roasted mainly for young animals, in order to teach them to eat feed at an early age, to stimulate the secretory activity of digestion, and to develop the masticatory muscles. In addition, the high temperature is detrimental to bacterial contamination and various types of fungi, which largely avoids possible diseases of the gastrointestinal tract. This method has found wide application directly on farms. Legumes such as peas, soy, lupine and lentils are also used in animal feeding. These feeds are preliminarily ground and then cooked for about 1 hour or steamed for 30-40 minutes in the feed mill. Such processing allows the anti-nutrients that reduce the effectiveness of feed use to be inactivated. After processing, legumes are used as protein supplements in an amount of 25-30% of the total nutritional value of the diet. Only grain of good quality should be cooked or steamed; poor-quality grain that has been stored for a long time and damaged by pathogenic micro flora is subject to

  6. Graphical approach to model reduction for nonlinear biochemical networks.

    Science.gov (United States)

    Holland, David O; Krainak, Nicholas C; Saucerman, Jeffrey J

    2011-01-01

    Model reduction is a central challenge to the development and analysis of multiscale physiology models. Advances in model reduction are needed not only for computational feasibility but also for obtaining conceptual insights from complex systems. Here, we introduce an intuitive graphical approach to model reduction based on phase plane analysis. Timescale separation is identified by the degree of hysteresis observed in phase-loops, which guides a "concentration-clamp" procedure for estimating explicit algebraic relationships between species equilibrating on fast timescales. The primary advantages of this approach over Jacobian-based timescale decomposition are that: 1) it incorporates nonlinear system dynamics, and 2) it can be easily visualized, even directly from experimental data. We tested this graphical model reduction approach using a 25-variable model of cardiac β(1)-adrenergic signaling, obtaining 6- and 4-variable reduced models that retain good predictive capabilities even in response to new perturbations. These 6 signaling species appear to be optimal "kinetic biomarkers" of the overall β(1)-adrenergic pathway. The 6-variable reduced model is well suited for integration into multiscale models of heart function, and more generally, this graphical model reduction approach is readily applicable to a variety of other complex biological systems.
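
    The sketch below illustrates the phase-loop idea on a generic fast-slow pair rather than the cardiac model: because y relaxes quickly toward x**2, the (x, y) phase loop shows little hysteresis and y can be replaced by the algebraic relation y ≈ x**2, which is the essence of the "concentration-clamp" step.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01           # small epsilon makes y a fast variable

def rhs(t, s):
    x, y = s
    dx = np.cos(t)                 # slow, periodically driven variable
    dy = (x**2 - y) / eps          # fast variable chasing x**2
    return [dx, dy]

sol = solve_ivp(rhs, [0, 4 * np.pi], [0.0, 0.0], max_step=0.01)
x, y = sol.y

# Degree of hysteresis: how far y strays from its clamped value x**2.
# A small value indicates y can be reduced to the algebraic relation.
print("max |y - x**2| :", np.max(np.abs(y - x**2)))
```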

  7. Graphical approach to model reduction for nonlinear biochemical networks.

    Directory of Open Access Journals (Sweden)

    David O Holland

    Model reduction is a central challenge to the development and analysis of multiscale physiology models. Advances in model reduction are needed not only for computational feasibility but also for obtaining conceptual insights from complex systems. Here, we introduce an intuitive graphical approach to model reduction based on phase plane analysis. Timescale separation is identified by the degree of hysteresis observed in phase-loops, which guides a "concentration-clamp" procedure for estimating explicit algebraic relationships between species equilibrating on fast timescales. The primary advantages of this approach over Jacobian-based timescale decomposition are that: 1) it incorporates nonlinear system dynamics, and 2) it can be easily visualized, even directly from experimental data. We tested this graphical model reduction approach using a 25-variable model of cardiac β(1)-adrenergic signaling, obtaining 6- and 4-variable reduced models that retain good predictive capabilities even in response to new perturbations. These 6 signaling species appear to be optimal "kinetic biomarkers" of the overall β(1)-adrenergic pathway. The 6-variable reduced model is well suited for integration into multiscale models of heart function, and more generally, this graphical model reduction approach is readily applicable to a variety of other complex biological systems.

  8. A Succinct Approach to Static Analysis and Model Checking

    DEFF Research Database (Denmark)

    Filipiuk, Piotr

    In a number of areas software correctness is crucial, therefore it is often desirable to formally verify the presence of various properties or the absence of errors. This thesis presents a framework for concisely expressing static analysis and model checking problems. The framework facilitates...... in the classical formulation of ALFP logic. Finally, we show that the logics and the associated solvers can be used for rapid prototyping. We illustrate that by a variety of case studies from static analysis and model checking....

  9. Partial sum approaches to mathematical parameters of some growth models

    Science.gov (United States)

    Korkmaz, Mehmet

    2016-04-01

    A growth model is fitted by evaluating the mathematical parameters a, b and c. In this study, the method of partial sums was used. To find the mathematical parameters, firstly three partial sums were used, secondly four, thirdly five, and finally N partial sums. The purpose of increasing the partial decomposition is to produce a better-fitting model which gives a better expected value by minimizing the error sum of squares over the interval used.

  10. Final technical report for DE-SC00012633 AToM (Advanced Tokamak Modeling)

    Energy Technology Data Exchange (ETDEWEB)

    Holland, Christopher [Univ. of California, San Diego, CA (United States); Orlov, Dmitri [Univ. of California, San Diego, CA (United States); Izzo, Valerie [Univ. of California, San Diego, CA (United States)

    2018-02-05

    This final report for the AToM project documents contributions from University of California, San Diego researchers over the period of 9/1/2014 – 8/31/2017. The primary focus of these efforts was on performing validation studies of core tokamak transport models using the OMFIT framework, including development of OMFIT workflow scripts. Additional work was performed to develop tools for use of the nonlinear magnetohydrodynamics code NIMROD in OMFIT, and its use in the study of runaway electron dynamics in tokamak disruptions.

  11. Data Analysis A Model Comparison Approach, Second Edition

    CERN Document Server

    Judd, Charles M; Ryan, Carey S

    2008-01-01

    This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques.

  12. A Model Management Approach for Co-Simulation Model Evaluation

    NARCIS (Netherlands)

    Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2011-01-01

    Simulating formal models is a common means of validating the correctness of a system design and reducing the time-to-market. In most embedded control system designs, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software

  13. The use of simulation as a learning approach to non-technical skills awareness in final year student nurses.

    Science.gov (United States)

    Pearson, Eloise; McLafferty, Isabella

    2011-11-01

    Understanding what non-technical skills are and their relevance for healthcare practitioners has become a new area of exploration. Although recent literature has highlighted the necessity of introducing non-technical skills training and assessment within medical education, nursing education has yet to fully embrace this skills training. The purpose of this paper is to explore the use of simulated practice as a learning approach to demonstrate and assess non-technical skills for final year nursing students. An established ward simulation exercise was refocused to incorporate opportunities for these nursing students to be assessed on their ability to demonstrate application of non-technical skills. Opinions on whether this was a successful strategy were sought from the students by means of module evaluation questionnaires. Analysis of these data revealed that the majority of the students agreed that it was an effective learning approach, allowing them to demonstrate their non-technical skills, be assessed and subsequently identify further learning needs. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. A novel approach to modeling and diagnosing the cardiovascular system

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)

    1995-07-01

    A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  15. Comparison of the Modeling Approach between Membrane Bioreactor and Conventional Activated Sludge Processes

    DEFF Research Database (Denmark)

    Jiang, Tao; Sin, Gürkan; Spanjers, Henri

    2009-01-01

    Activated sludge models (ASM) have been developed and largely applied in conventional activated sludge (CAS) systems. The applicability of ASM to model membrane bioreactors (MBR) and the differences in modeling approaches have not been studied in detail. A laboratory-scale MBR was modeled using ASM2d. It was found that the ASM2d model structure can still be used for MBR modeling. There are significant differences related to ASM modeling. First, a lower maximum specific growth rate for MBR nitrifiers was estimated. Independent experiments demonstrated that this might be attributed to the inhibition effect of soluble microbial products (SMP) at elevated concentration. Second, a greater biomass affinity to oxygen and ammonium was found, which was probably related to smaller MBR sludge flocs. Finally, the membrane throughput during membrane backwashing/relaxation can be normalized...

  16. Reusable Component Model Development Approach for Parallel and Distributed Simulation

    Science.gov (United States)

    Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng

    2014-01-01

    Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diverse interfaces, couple tightly, and bind closely with simulation platforms. As a result, they are difficult to reuse across different simulation platforms and applications. To address the problem, this paper first proposes a reusable component model framework. Based on this framework, our reusable model development approach is then elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) the model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that a model developed using our approach has good reusability and is easy to use in different simulation platforms and applications. PMID:24729751

  17. Mathematical models for therapeutic approaches to control HIV disease transmission

    CERN Document Server

    Roy, Priti Kumar

    2015-01-01

    The book discusses different therapeutic approaches based on different mathematical models to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence, and also tries to view the same issue from different angles using various mathematical models and computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models on HIV/AIDS. It also guides research scientists working in the periphery of mathematical modeling, helping them to explore a hypothetical method by examining its consequences in the form of a mathematical model and making scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...

  18. Final Thesis Models in European Teacher Education and Their Orientation towards the Academy and the Teaching Profession

    Science.gov (United States)

    Råde, Anders

    2014-01-01

    This study concerns different final thesis models in the research on teacher education in Europe and their orientation towards the academy and the teaching profession. In scientific journals, 33 articles support the occurrence of three models: the portfolio model, with a mainly teaching-professional orientation; the thesis model, with a mainly…

  19. Modeling of delays in PKPD: classical approaches and a tutorial for delay differential equations.

    Science.gov (United States)

    Koch, Gilbert; Krzyzanski, Wojciech; Pérez-Ruixo, Juan Jose; Schropp, Johannes

    2014-08-01

    In pharmacokinetics/pharmacodynamics (PKPD) the measured response is often delayed relative to drug administration, individuals in a population have a certain lifespan until they mature, or a change in biomarkers does not immediately affect the primary endpoint. The classical approach in PKPD is to apply transit compartment models (TCM) based on ordinary differential equations to handle such delays. However, an alternative approach to deal with delays is delay differential equations (DDE). DDEs feature additional flexibility and properties, realize more complex dynamics, and can be used complementarily together with TCMs. We introduce several delay-based PKPD models and investigate mathematical properties of general DDE-based models, which serve as subunits in order to build larger PKPD models. Finally, we review current PKPD software with respect to the implementation of DDEs for PKPD analysis.
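
    A minimal sketch of the classical transit compartment idea described above, assuming SciPy is available: a chain of n compartments with transit rate ktr delays an input signal by roughly n/ktr. The parameter values are illustrative, not from a fitted PKPD model.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, ktr = 5, 2.0                      # 5 compartments => mean delay 2.5

def input_signal(t):
    return 1.0 if t < 1.0 else 0.0   # short unit infusion

def rhs(t, a):
    # Transit compartment chain: each compartment passes its content on
    # at rate ktr, so the signal emerges delayed and smoothed.
    da = np.empty(n)
    da[0] = input_signal(t) - ktr * a[0]
    for i in range(1, n):
        da[i] = ktr * (a[i - 1] - a[i])
    return da

sol = solve_ivp(rhs, [0, 10], np.zeros(n), max_step=0.01)
resp = sol.y[-1]                     # last compartment = delayed response
print("peak of delayed response at t =", sol.t[np.argmax(resp)])
```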

  20. Models of the impact of dengue vaccines: a review of current research and potential approaches

    Science.gov (United States)

    Johansson, Michael A.; Hombach, Joachim; Cummings, Derek A.T.

    2015-01-01

    Vaccination reduces transmission of pathogens directly, by preventing individual infections, and indirectly, by reducing the probability of contact between infected individuals and susceptible ones. The potential combined impact of future dengue vaccines can be estimated using mathematical models of transmission. However, there is considerable uncertainty in the structure of models that accurately represent dengue transmission dynamics. Here, we review models that could be used to assess the impact of future dengue immunization programmes. We also review approaches that have been used to validate and parameterize models. A key parameter of all approaches is the basic reproduction number, R0, which can be used to determine the critical vaccination fraction to eliminate transmission. We review several methods that have been used to estimate this quantity. Finally, we discuss the characteristics of dengue vaccines that must be estimated to accurately assess their potential impact on dengue virus transmission. PMID:21699949
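
    The critical vaccination fraction mentioned above follows from the standard threshold relation p_c = 1 - 1/R0, which assumes a perfectly protective vaccine and homogeneous mixing; the R0 values below are illustrative.

```python
# Standard threshold result: vaccinating a fraction p_c = 1 - 1/R0 of the
# population (perfect vaccine, homogeneous mixing) eliminates transmission.

def critical_vaccination_fraction(r0: float) -> float:
    return 1.0 - 1.0 / r0

for r0 in (1.5, 3.0, 5.0):   # illustrative values of R0
    print(f"R0 = {r0}: vaccinate at least "
          f"{critical_vaccination_fraction(r0):.0%} of the population")
```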

  1. Models of the impact of dengue vaccines: a review of current research and potential approaches.

    Science.gov (United States)

    Johansson, Michael A; Hombach, Joachim; Cummings, Derek A T

    2011-08-11

    Vaccination reduces transmission of pathogens directly, by preventing individual infections, and indirectly, by reducing the probability of contact between infected individuals and susceptible ones. The potential combined impact of future dengue vaccines can be estimated using mathematical models of transmission. However, there is considerable uncertainty in the structure of models that accurately represent dengue transmission dynamics. Here, we review models that could be used to assess the impact of future dengue immunization programmes. We also review approaches that have been used to validate and parameterize models. A key parameter of all approaches is the basic reproduction number, R(0), which can be used to determine the critical vaccination fraction to eliminate transmission. We review several methods that have been used to estimate this quantity. Finally, we discuss the characteristics of dengue vaccines that must be estimated to accurately assess their potential impact on dengue virus transmission. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. An Approach to Using Toxicogenomic Data in U.S. EPA Human Health Risk Assessments: A Dibutyl Phthalate Case Study (Final Report, 2010)

    Science.gov (United States)

    EPA announced the availability of the final report, An Approach to Using Toxicogenomic Data in U.S. EPA Human Health Risk Assessments: A Dibutyl Phthalate Case Study. This report outlines an approach to evaluate genomic data for use in risk assessment and a case study to ...

  3. Search for the standard model Higgs boson in tau lepton final states

    Energy Technology Data Exchange (ETDEWEB)

    Abazov, Victor Mukhamedovich; et al.

    2012-08-01

    We present a search for the standard model Higgs boson in final states with an electron or muon and a hadronically decaying tau lepton in association with zero, one, or two or more jets, using data corresponding to an integrated luminosity of up to 7.3 fb⁻¹ collected with the D0 detector at the Fermilab Tevatron collider. The analysis is sensitive to Higgs boson production via gluon-gluon fusion, associated vector boson production, and vector boson fusion, and to Higgs boson decays to tau lepton pairs or W boson pairs. Observed (expected) 95% C.L. upper limits are set on the cross section times branching ratio, relative to the standard model prediction, of 14 (22) at a Higgs boson mass of 115 GeV and 7.7 (6.8) at 165 GeV.

  4. Bloom's separation of the final exam of Engineering Mathematics II: Item reliability using the Rasch measurement model

    Science.gov (United States)

    Fuaad, Norain Farhana Ahmad; Nopiah, Zulkifli Mohd; Tawil, Norgainy Mohd; Othman, Haliza; Asshaari, Izamarlina; Osman, Mohd Hanif; Ismail, Nur Arzilah

    2014-06-01

    In engineering studies and research, mathematics is one of the main elements used to express physical, chemical and engineering laws. It is therefore essential for engineering students to have a strong knowledge of the fundamentals of mathematics in order to apply that knowledge to real life issues. However, previous results of the Mathematics Pre-Test show that engineering students lack fundamental knowledge in certain topics in mathematics. Due to this, apart from making improvements in the methods of teaching and learning, studies on the construction of questions (items) should also be emphasized. The purpose of this study is to assist lecturers in the process of item development, to monitor the separation of items based on Bloom's Taxonomy, and to measure the reliability of the items themselves using the Rasch measurement model as a tool. Using the Rasch measurement model, the final exam questions of Engineering Mathematics II (Linear Algebra) for semester 2, session 2012/2013, were analysed, and the results provide details on the extent to which the content of the items gives useful information about students' ability. This study reveals that the items used in the Engineering Mathematics II (Linear Algebra) final exam are well constructed, but the separation of the items needs further attention, as there is a big gap between items at several levels of Bloom's cognitive skills.
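
    As a generic illustration of the measurement model named above (not the authors' calibration), the dichotomous Rasch model gives the probability of a correct answer as a logistic function of the gap between student ability and item difficulty; the abilities and difficulties below are hypothetical.

```python
import numpy as np

# Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)),
# where theta is the student's ability and b the item's difficulty.

def rasch_p(theta: float, b: float) -> float:
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical item difficulties spanning Bloom-style cognitive levels.
items = {"recall": -1.0, "apply": 0.0, "analyse": 1.2}
for name, b in items.items():
    print(f"{name}: P(correct | theta = 0.5) = {rasch_p(0.5, b):.2f}")
```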

  5. Selection of hydrologic modeling approaches for climate change assessment: A comparison of model scale and structures

    Science.gov (United States)

    Surfleet, Christopher G.; Tullos, Desirèe; Chang, Heejun; Jung, Il-Won

    2012-09-01

    A wide variety of approaches to hydrologic (rainfall-runoff) modeling of river basins confounds our ability to select, develop, and interpret models, particularly in the evaluation of prediction uncertainty associated with climate change assessment. To inform the model selection process, we characterized and compared three structurally-distinct approaches and spatial scales of parameterization to modeling catchment hydrology: a large-scale approach (using the VIC model; 671,000 km2 area), a basin-scale approach (using the PRMS model; 29,700 km2 area), and a site-specific approach (the GSFLOW model; 4700 km2 area) forced by the same future climate estimates. For each approach, we present measures of fit to historic observations and predictions of future response, as well as estimates of model parameter uncertainty, when available. While the site-specific approach generally had the best fit to historic measurements, the performance of the model approaches varied. The site-specific approach generated the best fit at unregulated sites, the large scale approach performed best just downstream of flood control projects, and model performance varied at the farthest downstream sites where streamflow regulation is mitigated to some extent by unregulated tributaries and water diversions. These results illustrate how selection of a modeling approach and interpretation of climate change projections require (a) appropriate parameterization of the models for climate and hydrologic processes governing runoff generation in the area under study, (b) understanding and justifying the assumptions and limitations of the model, and (c) estimates of uncertainty associated with the modeling approach.

  6. Interoperable transactions in business models: A structured approach

    NARCIS (Netherlands)

    Weigand, H.; Verharen, E.; Dignum, F.P.M.

    1996-01-01

    Recent database research has given much attention to the specification of "flexible" transactions that can be used in interoperable systems. Starting from a quite different angle, Business Process Modelling has approached the area of communication modelling as well (the Language/Action

  7. A Model-Driven Approach to e-Course Management

    Science.gov (United States)

    Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana

    2018-01-01

    This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…

  8. Modeling Alaska boreal forests with a controlled trend surface approach

    Science.gov (United States)

    Mo Zhou; Jingjing Liang

    2012-01-01

    A Controlled Trend Surface approach was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...

  9. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to changes in biological parameters, and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
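
    As a concrete illustration of the local/global distinction described above, the following sketch computes a local finite-difference sensitivity at a nominal operating point and a crude global measure (standardized regression coefficients over wide input ranges) for a toy model; the model, ranges and sample size are arbitrary assumptions for illustration only.

      import numpy as np

      def model(x):
          # Toy nonlinear response with an interaction between x[0] and x[1].
          return x[0] ** 2 * np.sin(x[1]) + 0.5 * x[2]

      # Local sensitivity: central finite differences around a nominal point.
      x0 = np.array([1.0, 0.5, 2.0])
      eps = 1e-6
      local = np.array([(model(x0 + eps * e) - model(x0 - eps * e)) / (2 * eps)
                        for e in np.eye(3)])

      # Global sensitivity: standardized regression coefficients (SRCs)
      # estimated from Monte Carlo samples spanning large input variations.
      rng = np.random.default_rng(1)
      X = rng.uniform([0.0, 0.0, 0.0], [2.0, np.pi, 4.0], size=(10_000, 3))
      Y = X[:, 0] ** 2 * np.sin(X[:, 1]) + 0.5 * X[:, 2]
      Xs = (X - X.mean(0)) / X.std(0)
      Ys = (Y - Y.mean()) / Y.std()
      src, *_ = np.linalg.lstsq(Xs, Ys, rcond=None)

      print("local:", np.round(local, 3), "global SRC:", np.round(src, 3))

    The two rankings can disagree for nonlinear models such as this one, which is precisely why the choice between local and global approaches matters.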

  10. Towards modeling future energy infrastructures - the ELECTRA system engineering approach

    DEFF Research Database (Denmark)

    Uslar, Mathias; Heussen, Kai

    2016-01-01

    of the IEC 62559 use case template as well as needed changes to cope particularly with the aspects of controller conflicts and Greenfield technology modeling. From the original envisioned use of the standards, we show a possible transfer on how to properly deal with a Greenfield approach when modeling....

  11. Child human model development: a hybrid validation approach

    NARCIS (Netherlands)

    Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.

    2008-01-01

    The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a

  12. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows one to improve the predictive capability of

  13. Refining the committee approach and uncertainty prediction in hydrological modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows one to improve the predictive capability of

  14. Product Trial Processing (PTP): a model approach from ...

    African Journals Online (AJOL)

    Product Trial Processing (PTP): a model approach from the consumer's perspective. ... Global Journal of Social Sciences ... The constructs used in the model of the consumer's processing of product trial include: experiential and non-experiential attributes, perceived validity of product trial, consumer perceived expertise, ...

  15. A MIXTURE LIKELIHOOD APPROACH FOR GENERALIZED LINEAR-MODELS

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS

    1995-01-01

    A mixture model approach is developed that simultaneously estimates the posterior membership probabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some member of

  16. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Along with the increase in computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes, and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  17. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Along with the increase in computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes, and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  18. Meta-analysis: a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  19. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    Data warehouse systems are used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art overview of multidimensional modeling design.

  20. Numerical linked-cluster approach to quantum lattice models.

    Science.gov (United States)

    Rigol, Marcos; Bryant, Tyler; Singh, Rajiv R P

    2006-11-03

    We present a novel algorithm that allows one to obtain temperature dependent properties of quantum lattice models in the thermodynamic limit from exact diagonalization of small clusters. Our numerical linked-cluster approach provides a systematic framework to assess finite-size effects and is valid for any quantum lattice model. Unlike high temperature expansions, which have a finite radius of convergence in inverse temperature, these calculations are accurate at all temperatures provided the range of correlations is finite. We illustrate the power of our approach studying spin models on kagomé, triangular, and square lattices.

  1. A New Approach for Magneto-Static Hysteresis Behavioral Modeling

    DEFF Research Database (Denmark)

    Astorino, Antonio; Swaminathan, Madhavan; Antonini, Giulio

    2016-01-01

    in this paper is based on simple functions, which do not require calculus to be involved, thus assuring a very good efficiency in the algorithm. In addition, the proposed method enables initial magnetization curves, symmetric loops, minor loops, normal curves, and reversal curves of any order to be reproduced......, as demonstrated through the pertinent results provided in this paper. A model example based on the proposed modeling technique is also introduced and used as inductor core, in order to simulate an LR series circuit. Finally, the model ability to emulate hysteretic inductors is proved by the satisfactory agreement...

  2. Metamodelling Approach and Software Tools for Physical Modelling and Simulation

    Directory of Open Access Journals (Sweden)

    Vitaliy Mezhuyev

    2015-02-01

    Full Text Available In computer science, the metamodelling approach is becoming increasingly popular for the development of software systems. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is carried out. This analysis reveals the invariant physical structures that we propose to use as the basic abstractions of the physical metamodel: a system of geometrical objects, allowing one to build the spatial structure of physical models and to specify a distribution of physical properties. To such a geometry of distributed physical properties, different mathematical methods can be applied. To validate the proposed metamodelling approach, we consider the developed prototypes of software tools.

  3. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.

  4. An integrated modeling approach to age invariant face recognition

    Science.gov (United States)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This research study proposes a novel method for face recognition based on anthropometric features, making use of an integrated approach comprising a global model and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns, while a global model captures the general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k-nearest-neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-Net database for checking the results of our technique and achieved a 65 percent Rank-1 identification rate.

  5. Final Report for Award #DE-SC3956: Separating Algorithm and Implementation via Programming Model Injection (SAIMI)

    Energy Technology Data Exchange (ETDEWEB)

    Strout, Michelle [Colorado State Univ., Fort Collins, CO (United States)

    2015-08-15

    Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable orthogonal specification of many implementation details such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication-heavy portions of many scientific simulations. The ability to orthogonally manipulate the implementation for such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.

  6. Modeling Approaches and Systems Related to Structured Modeling.

    Science.gov (United States)

    1987-02-01

    See Lasdon and Maturana for surveys of several modern systems. CAMPS (Lucas and Mitra) -- Computer Assisted Mathematical Programming System. MATURANA, S., "Comparative Analysis of Mathematical Modeling Systems," informal note, Graduate School of Management, UCLA, February

  7. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set, and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 % to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
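
    A minimal sketch of this kind of a priori comparison, assuming a recent scikit-learn and a toy binary-outcome data set: candidate logistic-regression shrinkage strategies are compared by cross-validated log-likelihood on the development data before the winner is assessed externally with the Brier score. The strategies and data are illustrative stand-ins, not the five strategies evaluated in the paper.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import brier_score_loss
      from sklearn.model_selection import cross_val_score, train_test_split

      X, y = make_classification(n_samples=500, n_features=10, random_state=0)
      X_dev, X_ext, y_dev, y_ext = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

      # Candidate modelling strategies: maximum likelihood vs. two levels of
      # ridge shrinkage (smaller C means stronger shrinkage).
      strategies = {
          "ML (no shrinkage)": LogisticRegression(penalty=None, max_iter=1000),
          "ridge C=1.0": LogisticRegression(C=1.0, max_iter=1000),
          "ridge C=0.1": LogisticRegression(C=0.1, max_iter=1000),
      }

      # A priori comparison by cross-validated log-likelihood on development data.
      scores = {name: cross_val_score(m, X_dev, y_dev, cv=10,
                                      scoring="neg_log_loss").mean()
                for name, m in strategies.items()}
      best = max(scores, key=scores.get)

      # Final assessment of the selected strategy on external data.
      final = strategies[best].fit(X_dev, y_dev)
      print(best, brier_score_loss(y_ext, final.predict_proba(X_ext)[:, 1]))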

  8. Railway Container Station Reselection Approach and Application: Based on Entropy-Cloud Model

    Directory of Open Access Journals (Sweden)

    Wencheng Huang

    2017-01-01

    Full Text Available A reasonable railway container freight station layout means higher transportation efficiency and lower transportation cost. To obtain more objective and accurate reselection results, a new entropy-cloud approach is formulated to solve the problem. The approach comprises three phases: the Entropy Method is used to obtain the weight of each subcriterion in Phase 1, a cloud model is designed to form the evaluation cloud for each subcriterion in Phase 2, and finally in Phase 3 the weights from Phase 1 are multiplied by the initial evaluation clouds from Phase 2. MATLAB is applied to determine the evaluation figures and help us make the final alternative decision. To test our approach, the railway container stations in the Wuhan Railway Bureau were selected for our case study. The final evaluation result indicates that only Xiangyang Station should be renovated and developed as a Special Transaction Station, five other stations should be kept and developed as Ordinary Stations, and the remaining 16 stations should be closed. Furthermore, the results show that, before the site reselection process, the average distance between two railway container stations was only 74.7 km, which improves to 182.6 km using the approach formulated in this paper.
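
    Phase 1, the Entropy Method, has a standard closed form: column-normalize the decision matrix, compute each sub-criterion's information entropy, and derive weights from the entropy deficit. The sketch below is a generic implementation on a made-up station-evaluation matrix; the figures are illustrative, not the values of the Wuhan case study.

      import numpy as np

      def entropy_weights(D):
          """Entropy Method weights for an (alternatives x criteria) decision
          matrix D in which larger values are better."""
          m = D.shape[0]
          P = D / D.sum(axis=0)                  # column-normalized proportions
          plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
          e = -plogp.sum(axis=0) / np.log(m)     # entropy per criterion, in [0, 1]
          d = 1.0 - e                            # degree of diversification
          return d / d.sum()                     # normalized criterion weights

      # Illustrative matrix: 4 candidate stations scored on 3 sub-criteria.
      D = np.array([[120.0, 0.8, 35.0],
                    [ 90.0, 0.6, 50.0],
                    [150.0, 0.9, 20.0],
                    [ 60.0, 0.7, 45.0]])
      print(np.round(entropy_weights(D), 3))

    In the full approach these weights multiply the per-criterion evaluation clouds from Phase 2 before the final aggregation in Phase 3.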

  9. Soil moisture simulations using two different modelling approaches

    Czech Academy of Sciences Publication Activity Database

    Šípek, Václav; Tesař, Miroslav

    2013-01-01

    Roč. 64, 3-4 (2013), s. 99-103 ISSN 0006-5471 R&D Projects: GA AV ČR IAA300600901; GA ČR GA205/08/1174 Institutional research plan: CEZ:AV0Z20600510 Keywords : soil moisture modelling * SWIM model * box modelling approach Subject RIV: DA - Hydrology ; Limnology http://www.boku.ac.at/diebodenkultur/volltexte/sondernummern/band-64/heft-3-4/sipek.pdf

  10. A Systems Genetic Approach to Identify Low Dose Radiation-Induced Lymphoma Susceptibility (DOE 2013 Final Report)

    Energy Technology Data Exchange (ETDEWEB)

    Balmain, Allan [University of California, San Francisco; Song, Ihn Young [University of California, San Francisco

    2013-05-15

    The ultimate goal of this project is to identify the combinations of genetic variants that confer an individual's susceptibility to the effects of low dose (0.1 Gy) gamma-radiation, in particular with regard to tumor development. In contrast to the known effects of high dose radiation in cancer induction, the responses to low dose radiation (defined as 0.1 Gy or less) are much less well understood, and have been proposed to involve a protective anti-tumor effect in some in vivo scientific models. These conflicting results confound attempts to develop predictive models of the risk of exposure to low dose radiation, particularly when combined with the strong effects of inherited genetic variants on both radiation effects and cancer susceptibility. We have used a Systems Genetics approach in mice that combines genetic background analysis with responses to low and high dose radiation, in order to develop insights that will allow us to reconcile these disparate observations. Using this comprehensive approach we have analyzed normal tissue gene expression (in this case the skin and thymus), together with the changes that take place in this gene expression architecture a) in response to low or high- dose radiation and b) during tumor development. Additionally, we have demonstrated that using our expression analysis approach in our genetically heterogeneous/defined radiation-induced tumor mouse models can uniquely identify genes and pathways relevant to human T-ALL, and uncover interactions between common genetic variants of genes which may lead to tumor susceptibility.

  11. A Data-Based Approach for Modeling and Analysis of Vehicle Collision by LPV-ARMAX Models

    Directory of Open Access Journals (Sweden)

    Qiugang Lu

    2013-01-01

    Full Text Available Vehicle crash tests are considered the most direct and common approach to assessing vehicle crashworthiness. However, they suffer from high experimental cost and huge time consumption. Therefore, the establishment of a mathematical model of vehicle crash which can simplify the analysis process is significantly attractive. In this paper, we present the application of the LPV-ARMAX model to simulate car-to-pole collisions with different initial impact velocities. The parameters of the LPV-ARMAX model are assumed to depend on the initial impact velocity. Instead of establishing a set of LTI models for vehicle crashes with various impact velocities, the LPV-ARMAX model is comparatively simple and applicable for predicting the responses of new collision situations different from the ones used for identification. Finally, a comparison between the predicted response and the real test data is conducted, which shows the high fidelity of the LPV-ARMAX model.
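
    The core identification idea can be sketched compactly: make the ARX coefficients affine functions of the scheduling variable (the initial impact velocity v), stack velocity-weighted regressors from all recorded runs, and solve a single least-squares problem. This is a simplified LPV-ARX illustration on synthetic data, assuming numpy; it is not the authors' full LPV-ARMAX identification, which also models the noise dynamics.

      import numpy as np

      def fit_lpv_arx(runs, na=2, nb=1):
          """Fit y[k] = sum_i a_i(v) y[k-i] + sum_j b_j(v) u[k-j], with every
          coefficient affine in the scheduling variable: c(v) = c0 + c1 * v."""
          Phi, Y = [], []
          for v, u, y in runs:                   # one crash run per (v, input, output)
              for k in range(max(na, nb), len(y)):
                  base = np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]]
                  Phi.append(np.r_[base, v * base])   # affine velocity dependence
                  Y.append(y[k])
          theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(Y), rcond=None)
          return theta                           # [c0 block, c1 block]

      # Synthetic "crash" runs at three initial velocities, with dynamics that
      # genuinely depend on v, so the c1 block is identifiable.
      rng = np.random.default_rng(2)
      runs = []
      for v in (20.0, 35.0, 50.0):
          u = rng.normal(size=200)               # stand-in excitation signal
          y = np.zeros(200)
          for k in range(2, 200):
              y[k] = ((1.5 - 0.004 * v) * y[k - 1] - 0.7 * y[k - 2]
                      + (0.2 + 0.001 * v) * u[k - 1])
          runs.append((v, u, y))

      print(np.round(fit_lpv_arx(runs), 3))      # recovers the affine coefficients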

  12. Comparison of two model approaches in the Zambezi river basin with regard to model reliability and identifiability

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2006-01-01

    Full Text Available Variations of water stocks in the upper Zambezi river basin have been determined by two different hydrological modelling approaches. The purpose was to provide preliminary terrestrial storage estimates in the upper Zambezi, which will be compared with estimates derived from the Gravity Recovery And Climate Experiment (GRACE) in a future study. The first modelling approach is GIS-based, distributed and conceptual (STREAM). The second approach uses Lumped Elementary Watersheds identified and modelled conceptually (LEW). The STREAM model structure has been assessed using GLUE (Generalized Likelihood Uncertainty Estimation) a posteriori to determine parameter identifiability. The LEW approach could, in addition, be tested for model structure, because the computational effort of LEW is low. Both models are threshold models, where the non-linear behaviour of the Zambezi river basin is explained by a combination of thresholds and linear reservoirs. The models were forced by time series of gauged and interpolated rainfall. Where available, runoff station data were used to calibrate the models. Ungauged watersheds were generally given the same parameter sets as their neighbouring calibrated watersheds. It appeared that the LEW model structure could be improved by applying GLUE iteratively. Eventually, it led to better identifiability of parameters and consequently a better model structure than the STREAM model. Hence, the final model structure obtained better represents the true hydrology. After calibration, both models show a comparable efficiency in representing discharge. However, the LEW model shows a far greater storage amplitude than the STREAM model. This emphasizes the storage uncertainty related to hydrological modelling in data-scarce environments such as the Zambezi river basin. It underlines the need and potential for independent observations of terrestrial storage to enhance our understanding and modelling capacity of the hydrological processes. GRACE
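
    The GLUE procedure used here follows a standard recipe: sample many parameter sets at random, score each simulation against observed discharge with a likelihood measure such as the Nash-Sutcliffe efficiency, discard "non-behavioural" sets below a threshold, and read parameter identifiability off the spread of the survivors. A generic sketch with a toy one-parameter linear-reservoir model follows; the model, threshold and synthetic data are illustrative assumptions, not the STREAM or LEW setups.

      import numpy as np

      def reservoir(k, rain):
          """Toy linear-reservoir runoff model: storage drains at rate k."""
          s, q = 0.0, np.zeros(len(rain))
          for t, p in enumerate(rain):
              s += p
              q[t] = k * s
              s -= q[t]
          return q

      def nse(obs, sim):
          return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      rng = np.random.default_rng(3)
      rain = rng.gamma(0.8, 2.0, size=365)
      obs = reservoir(0.3, rain) + rng.normal(0, 0.2, size=365)  # synthetic "data"

      # GLUE: Monte Carlo sampling plus a subjective behavioural threshold.
      ks = rng.uniform(0.05, 0.95, size=5000)
      scores = np.array([nse(obs, reservoir(k, rain)) for k in ks])
      behavioural = ks[scores > 0.6]
      print(f"{behavioural.size} behavioural sets, "
            f"k in [{behavioural.min():.2f}, {behavioural.max():.2f}]")

    A narrow behavioural range signals a well-identified parameter; applying the procedure iteratively, as done for LEW, tightens both the parameter ranges and the model structure.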

  13. A generic approach to haptic modeling of textile artifacts

    Science.gov (United States)

    Shidanshidi, H.; Naghdy, F.; Naghdy, G.; Wood Conroy, D.

    2009-08-01

    Haptic modeling of textile has attracted significant interest over the last decade. In spite of extensive research, no generic system has been proposed. Previous work mainly assumes that textile has a 2D planar structure and requires time-consuming measurement of textile properties for construction of the mechanical model. A novel approach for haptic modeling of textile is proposed to overcome these shortcomings. The method is generic, assumes a 3D structure for the textile, and deploys computational intelligence to estimate the mechanical properties of the textile. The approach is designed primarily for the display of textile artifacts in museums. The haptic model is constructed by superimposing the mechanical model of the textile over its geometrical model. Digital image processing is applied to a still image of the textile to identify its pattern and structure through a fuzzy rule-based algorithm. The 3D geometric model of the artifact is automatically generated in VRML based on the identified pattern and structure obtained from the textile image. Selected mechanical properties of the textile are estimated by an artificial neural network, deploying the textile's geometric characteristics and yarn properties as inputs. The estimated mechanical properties are then deployed in the construction of the textile mechanical model. The proposed system is introduced and the developed algorithms are described. Validation of the method indicates the feasibility of the approach and its superiority to other haptic modeling algorithms.

  14. Final Report: Natural State Models of The Geysers Geothermal System, Sonoma County, California

    Energy Technology Data Exchange (ETDEWEB)

    T. H. Brikowski; D. L. Norton; D. D. Blackwell

    2001-12-31

    Final project report of the natural state modeling effort for The Geysers geothermal field, California. Initial models examined the liquid-dominated state of the system, based on geologic constraints and calibrated to match observed whole-rock delta-O18 isotope alteration. These models demonstrated that the early system was of generally low permeability (around 10^-12 m^2), with good hydraulic connectivity at depth (along the intrusive contact) and an intact caprock. Later effort in the project was directed at development of a two-phase, supercritical flow simulation package (EOS1sc) to accompany the Tough2 flow simulator. Geysers models made using this package show that "simmering", or the transient migration of vapor bubbles through the hydrothermal system, is the dominant transition state as the system progresses to vapor-dominated. Such a system is highly variable in space and time, making the rock record more difficult to interpret, since pressure-temperature indicators likely reflect only local, short-duration conditions.

  15. Study of GMSB models with photon final states using the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Terwort, Mark

    2009-11-30

    Models with gauge-mediated supersymmetry breaking (GMSB) provide a possible mechanism to mediate supersymmetry breaking to the electroweak scale. In these models the lightest supersymmetric particle is the gravitino, while the next-to-lightest supersymmetric particle is either the lightest neutralino or a slepton. In the former case, final states with large missing transverse energy from the gravitinos, multiple jets and two hard photons are expected in pp collisions at the LHC. Depending on the lifetime of the neutralino, the photons might not point back to the interaction vertex, which requires dedicated search strategies. Additionally, this feature can be used to measure the neutralino lifetime using either the timing information from the electromagnetic calorimeter or the reconstructed photon direction. Together with measurements of kinematic endpoints in invariant mass distributions, the lifetime can be used as input for fits of the GMSB model and for the determination of the underlying parameters. The signal selection and the discovery potential for GMSB models with photons in the final state are discussed using simulated data of the ATLAS detector. In addition, the measurement of supersymmetric particle masses and of the neutralino lifetime as well as the results of the global GMSB fits are presented. (orig.)

  16. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background: Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information, whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results: Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions: The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also
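
    For reference, the machinery alluded to here can be written down compactly; a hedged sketch of the standard definitions, with a quadratic variance function of the kind postulated for expression intensities (the paper's exact parameterization may differ):

      % Quasi-likelihood for an observation y with mean \mu, dispersion \phi
      % and variance function V, so that Var(y) = \phi V(\mu):
      Q(\mu; y) = \int_{y}^{\mu} \frac{y - t}{\phi\, V(t)} \, dt

      % Extended quasi-likelihood (Nelder & Pregibon), which additionally
      % permits estimation of the dispersion \phi:
      Q^{+}(\mu; y) = -\tfrac{1}{2} \log\{ 2\pi \phi\, V(y) \} + Q(\mu; y)

      % Example of a quadratic variance structure: additive background noise
      % plus a multiplicative signal component,
      V(\mu) = a + b\, \mu^{2}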

  17. Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies (Final Report)

    Science.gov (United States)

    EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physi...

  18. Cowichan Valley energy mapping and modelling. Report 6 - Findings and recommendations. Final report. [Vancouver Island, Canada]

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-06-15

    This report is the final report in a series of six reports detailing the findings from the Cowichan Valley Energy Mapping and Modelling project that was carried out from April 2011 to March 2012 by Ea Energy Analyses in conjunction with Geographic Resource Analysis and Science (GRAS). The driving force behind the Integrated Energy Mapping and Analysis project was the identification and analysis of a suite of pathways that the Cowichan Valley Regional District (CVRD) can utilise to increase its energy resilience, as well as reduce energy consumption and GHG emissions, with a primary focus on the residential sector. The mapping and analysis undertaken will support provincial energy and GHG reduction targets, and the suite of pathways outlined addresses a CVRD internal target that calls for 75% of the region's energy within the residential sector to come from locally sourced renewables by 2050. The target has been developed as a mechanism to meet resilience and climate action targets. The maps and findings produced are to be integrated as part of a regional policy framework currently under development. The present report summarizes the findings of project tasks 1-5 and provides a set of recommendations to the CVRD based on the work done, with an eye towards the next steps in the CVRD's energy planning process. (LN)

  19. Wave Resource Characterization Using an Unstructured Grid Modeling Approach

    Directory of Open Access Journals (Sweden)

    Wei-Cheng Wu

    2018-03-01

    Full Text Available This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization, using the unstructured grid Simulating WAve Nearshore (SWAN) model coupled with a nested grid WAVEWATCH III® (WWIII) model. The flexibility of models with various spatial resolutions and the effects of open boundary conditions simulated by a nested grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured grid-modeling approach for flexible model resolution and good model skill in simulating the six wave resource parameters recommended by the International Electrotechnical Commission, in comparison to the data observed in 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the ST2 physics package's ability to predict wave power density for large waves, which is important for wave resource assessment, load calculation of devices, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than with the ST2 physics package. This study demonstrated that the unstructured grid wave modeling approach, driven by regional nested grid WWIII outputs along with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, i.e., O(10^2) km.
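
    Of the six IEC-recommended wave resource parameters, the central one, omnidirectional wave power density, is obtained directly from the model's variance density spectrum; a standard formulation, with the deep-water approximation of the group velocity shown for reference (a sketch, not quoted from the paper):

      % Wave power density J [W/m] from the directional spectrum S(f, \theta)
      % and the group velocity c_g(f, h) at depth h:
      J = \rho g \int_{0}^{2\pi} \int_{0}^{\infty} c_g(f, h)\, S(f, \theta)\, df\, d\theta

      % In deep water c_g \approx g / (4 \pi f), so, for the omnidirectional
      % spectrum S(f):
      J \approx \frac{\rho g^{2}}{4 \pi} \int_{0}^{\infty} \frac{S(f)}{f}\, df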

  20. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricanes Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  1. A review of function modeling: Approaches and applications

    OpenAIRE

    Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.

    2008-01-01

    This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research fields of artificial intelligence, design theory, and maintenance are discussed. In this discussion the goals are to highlight the features of various classical approaches in relation to FM, to delin...

  2. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    Science.gov (United States)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Modeling (ROM) can be used as a surrogate for prohibitively expensive simulations to model flow behavior over long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial-intelligence-based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short-Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.
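
    A minimal sketch of the pipeline described, assuming PyTorch and a synthetic snapshot matrix: POD modes are extracted with an SVD, and an LSTM is trained to map a window of past modal coefficients to the next step. Network size, window length and data are illustrative choices, not those of the paper.

      import numpy as np
      import torch
      import torch.nn as nn

      # POD: snapshots (space x time) -> modal time coefficients via SVD.
      t = np.linspace(0, 20, 400)
      x = np.linspace(0, 1, 64)
      snapshots = (np.outer(np.sin(2 * np.pi * x), np.sin(t))
                   + 0.5 * np.outer(np.sin(4 * np.pi * x), np.cos(3 * t)))
      U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
      r = 2                                          # retained POD modes
      coeffs = (np.diag(s[:r]) @ Vt[:r]).T.astype(np.float32)   # (time, r)

      # Supervised pairs: window of past coefficients -> next-step coefficients.
      win = 16
      Xw = np.stack([coeffs[i:i + win] for i in range(len(coeffs) - win)])
      X, Y = torch.from_numpy(Xw), torch.from_numpy(coeffs[win:])

      class CoeffLSTM(nn.Module):
          def __init__(self, r, hidden=32):
              super().__init__()
              self.lstm = nn.LSTM(r, hidden, batch_first=True)
              self.head = nn.Linear(hidden, r)
          def forward(self, x):
              out, _ = self.lstm(x)
              return self.head(out[:, -1])           # last hidden state -> coefficients

      model = CoeffLSTM(r)
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      for _ in range(200):                           # full-batch training, toy scale
          opt.zero_grad()
          loss = nn.functional.mse_loss(model(X), Y)
          loss.backward()
          opt.step()
      print(f"final training MSE: {loss.item():.4e}")

    Rolling the one-step predictor forward on its own outputs then gives the long-horizon surrogate; how far it stays accurate is exactly where the stationarity diagnostics mentioned above come in.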

  3. A Cluster-based Approach Towards Detecting and Modeling Network Dictionary Attacks

    Directory of Open Access Journals (Sweden)

    A. Tajari Siahmarzkooh

    2016-12-01

    Full Text Available In this paper, we provide an approach to detect network dictionary attacks using a data set collected as flows, from which a clustered graph is derived. These flows provide an aggregated view of the network traffic, in which the packets exchanged in the network are considered so that more internally connected nodes are clustered together. We show that dictionary attacks can be detected through several parameters, namely the number and the weight of clusters in the time series and their evolution over time. Additionally, a Markov model based on the average weight of clusters is also created. Finally, by means of our suggested model, we demonstrate that artificial clusters of the flows are created for normal and malicious traffic. The results of the proposed approach on the CAIDA 2007 data set show a high accuracy for the model; therefore, it provides a proper method for detecting dictionary attacks.
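
    The Markov-model component can be sketched generically: discretize the average cluster weight per time window into states, estimate a transition matrix from normal traffic, and flag windows whose transitions are unlikely under that matrix. The states, rates and threshold below are illustrative assumptions, not the values derived from the CAIDA 2007 traces.

      import numpy as np

      def transition_matrix(states, n_states):
          """Estimate a Markov transition matrix from a discrete state sequence."""
          T = np.ones((n_states, n_states))      # Laplace smoothing: no zero rows
          for a, b in zip(states[:-1], states[1:]):
              T[a, b] += 1
          return T / T.sum(axis=1, keepdims=True)

      def per_step_loglik(states, T):
          return np.mean([np.log(T[a, b]) for a, b in zip(states[:-1], states[1:])])

      rng = np.random.default_rng(4)
      n_states = 4                               # binned average-cluster-weight levels

      # "Normal" traffic keeps the average cluster weight in the low bins.
      normal = np.clip(rng.poisson(1.0, size=2000), 0, n_states - 1)
      T = transition_matrix(normal, n_states)

      # A dictionary attack inflates cluster weights, shifting windows upward.
      attack = np.clip(rng.poisson(3.0, size=50), 0, n_states - 1)
      print(f"per-step log-likelihood: normal {per_step_loglik(normal[:50], T):.2f}"
            f" vs suspect {per_step_loglik(attack, T):.2f}")

    A markedly lower likelihood for the suspect window is the signal that, combined with the cluster-count features, triggers detection.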

  4. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    . Their developments, however, are largely due to experiment based trial and error approaches and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply...... a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved...... for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based...

  5. METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Gorbenkova Elena Vladimirovna

    2017-10-01

    Full Text Available Subject: the paper describes research results on the validation of a rural settlement development model. The basic methods and approaches for solving the problem of assessing the efficiency of urban and rural settlement development are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements, as well as the authors' method for assessing the level of agro-town development, the systems/factors that are necessary for the sustainable development of a rural settlement are identified. Results: we created a rural development model which consists of five major systems that include critical factors essential for achieving the sustainable development of a settlement system: an ecological system, an economic system, an administrative system, an anthropogenic (physical) system and a social system (supra-structure). The methodological approaches for creating an evaluation model of rural settlement development were revealed; the basic motivating factors that provide interrelations of systems were determined; the critical factors for each subsystem were identified and substantiated. Such an approach is justified by the composition of territorial planning tasks at the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for sustainable rural development and can also become the basis of

  6. An algebraic approach to modeling in software engineering

    International Nuclear Information System (INIS)

    Loegel, C.J.; Ravishankar, C.V.

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form

  7. Economic and industrial development. EID - EMPLOY. Final report. Task 1. Review of approaches for employment impact assessment of renewable energy deployment

    Energy Technology Data Exchange (ETDEWEB)

    Breitschopf, Barbara [Fraunhofer-Institut fuer System- und Innovationsforschung (ISI), Karlsruhe (Germany); Nathani, Carsten; Resch, Gustav

    2011-11-15

    full picture of the impacts of RE deployment on the total economy - covering all economic activities like production, service and consumption (industries, households). To get the number of additional jobs caused by RE deployment, they compare a situation without RE (baseline or counterfactual) with a situation under strong RE deployment. In a second step, we characterize the studies inter alia by their scope, activities and impacts and show the relevant positive and negative effects that are included in gross or net impact assessment studies. The effects are briefly described in Table 0-1. While gross studies mainly include the positive effects listed here, net studies in general include positive and negative effects. Third, we distinguish between methodological approaches for assessing impacts. We observe that the more effects are incorporated in the approach, the more data are needed, the more complex and demanding the methodological approach becomes, and the more the impacts capture effects of and in the whole economy - representing net impacts. A simple approach requires few data and allows answering simple questions concerning the impact on the RE industry - representing gross impacts. We identify six main approaches, three for gross and three for net impacts. They are depicted in Figure 0-2. The methodological approaches are characterized by the effects captured, the complexity of the model and the additional data requirements (besides data on RE investments, capacities and generation), as well as by their depicted impacts reflecting economic comprehensiveness. A detailed overview of the diverse studies in table form is given in the Annex to this report. Finally, we suggest elaborating guidelines for the simple EF-approach, the gross IO-modelling and the net IO-modelling approach. The first approach enables policy makers to do a quick assessment of gross effects, while the second is a more sophisticated approach for gross effects. The third approach builds on the gross IO

  8. FInal Report: First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality

    Energy Technology Data Exchange (ETDEWEB)

    Aberg, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sadigh, Babak [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zhou, Fei [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-01

    This final report presents work carried out on the project "First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality" at Lawrence Livermore National Laboratory during 2013-2015. The scope of the work was to further the physical understanding of the microscopic mechanisms behind scintillator non-proportionality, which effectively limits the achievable detector resolution, and thereby to provide crucial quantitative data on these processes as input to large-scale simulation codes. In particular, this project was divided into three tasks: (i) quantum mechanical rates of non-radiative quenching, (ii) the thermodynamics of point defects and dopants, and (iii) formation and migration of self-trapped polarons. The progress and results of each of these subtasks are detailed.

  9. Towards the final BSA modeling for the accelerator-driven BNCT facility at INFN LNL

    Energy Technology Data Exchange (ETDEWEB)

    Ceballos, C. [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, 5ta y 30, Miramar, Playa, Ciudad Habana (Cuba); Esposito, J., E-mail: juan.esposito@lnl.infn.it [INFN, Laboratori Nazionali di Legnaro (LNL), via dell'Università, 2, I-35020 Legnaro (PD) (Italy); Agosteo, S. [Politecnico di Milano, Dipartimento di Energia, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); INFN, Sezione di Milano, via Celoria 16, 20133 Milano (Italy); Colautti, P.; Conte, V.; Moro, D. [INFN, Laboratori Nazionali di Legnaro (LNL), via dell'Università, 2, I-35020 Legnaro (PD) (Italy); Pola, A. [Politecnico di Milano, Dipartimento di Energia, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); INFN, Sezione di Milano, via Celoria 16, 20133 Milano (Italy)

    2011-12-15

    Some remarkable advances have been made in recent years on the SPES-BNCT project of the Istituto Nazionale di Fisica Nucleare (INFN) towards the development of the accelerator-driven thermal neutron beam facility at the Legnaro National Laboratories (LNL), aimed at the BNCT experimental treatment of extended skin melanoma. The compact neutron source will be produced via the 9Be(p,xn) reactions using the 5 MeV, 30 mA beam driven by the RFQ accelerator, whose modules' construction has recently been completed, impinging on a thick beryllium target prototype already available. The final Beam Shaping Assembly (BSA) modeling, using both the neutron converter and the new, detailed Be(p,xn) neutron yield spectra at 5 MeV recently measured at the CN Van de Graaff accelerator at LNL, is summarized here.

  10. Modeling of integrated environmental control systems for coal-fired power plants. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Rubin, E.S.; Salmento, J.S.; Frey, H.C.; Abu-Baker, A.; Berkenpas, M.

    1991-05-01

    The Integrated Environmental Control Model (IECM) was designed to permit the systematic evaluation of environmental control options for pulverized coal-fired (PC) power plants. Of special interest was the ability to compare the performance and cost of advanced pollution control systems to "conventional" technologies for the control of particulate, SO2 and NOx. Of importance also was the ability to consider pre-combustion, combustion and post-combustion control methods employed alone or in combination to meet tough air pollution emission standards. Finally, the ability to conduct probabilistic analyses is a unique capability of the IECM. Key results are characterized as distribution functions rather than as single deterministic values. (VC)

  11. Final Report. Fumex-III. Improvement of Models Used for Fuel Behaviour Simulation

    International Nuclear Information System (INIS)

    Kulacsy, Katalin

    2013-01-01

    The FUMEX-III coordinated research programme organised by the IAEA was the first FUMEX exercise in which AEKI (Hungarian Academy of Sciences KFKI Atomic Energy Research Institute) took part with the partial support of Paks NPP. The aim of the participation was to test the code FUROM developed at AEKI against not only measurements but also other fuel behaviour simulation codes, to share and discuss modelling experience and issues, and to establish acquaintance with fuel modellers in other countries. Among the numerous cases proposed for the programme, AEKI chose to simulate normal operation up to high burn-up and ramp tests, with special interest in VVER rods and PWR rods with annular pellets. The US PWR 16x16, the SPC RE GINNA, the Kola3-MIR, the IFA-519.9 cases and the AREVA idealised rod were thus selected. The present Final Report gives a short description of the FUROM models relevant to the selected cases, presents the results for the 5 cases and summarises the conclusions of the FUMEX-III programme. The input parameters used for the simulations can be found in the Appendix at the end of the Report. Observations concerning the IFPE datasets are collected for each dataset in their respective Sections for possible use in the IFPE database. (author)

  12. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and related urban objects such as buildings, trees, vegetation and man-made features. The demand for 3D city modeling is growing steadily for various engineering and non-engineering applications. Three main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling and close-range-photogrammetry-based modeling. The literature shows that, to date, no complete solution is available to create a full 3D city model from images alone, and these image-based methods have their own limitations. This paper presents a new approach to image-based virtual 3D city modeling using close-range photogrammetry, divided into three stages: data acquisition, 3D data processing, and data combination. In the data acquisition stage, a multi-camera setup was developed and used for video recording of an area; image frames were extracted from the video data, and the minimum required and most suitable frames were selected for 3D processing. In the second stage, a 3D model of the area was created based on close-range photogrammetric principles and computer vision techniques. In the third stage, this 3D model was exported for adding to and merging with other pieces of the larger area, and scaling and alignment of the 3D model were performed. After texturing and rendering, a final photo-realistic textured 3D model was obtained, which can be turned into a walk-through model or a movie. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the resulting model is good. The study area for this research work is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries ...

  13. Joint Modeling of Multiple Crimes: A Bayesian Spatial Approach

    Directory of Open Access Journals (Sweden)

    Hongqiang Liu

    2017-01-01

    Full Text Available A multivariate Bayesian spatial modeling approach was used to jointly model the counts of two types of crime, i.e., burglary and non-motor vehicle theft, and explore the geographic pattern of crime risks and relevant risk factors. In contrast to the univariate model, which assumes independence across outcomes, the multivariate approach takes into account potential correlations between crimes. Six independent variables are included in the model as potential risk factors. In order to fully present this method, both the multivariate model and its univariate counterpart are examined. We fitted the two models to the data and assessed them using the deviance information criterion. A comparison of the results from the two models indicates that the multivariate model was superior to the univariate model. Our results show that population density and bar density are clearly associated with both burglary and non-motor vehicle theft risks and indicate a close relationship between these two types of crime. The posterior means and 2.5th percentiles of type-specific crime risks estimated by the multivariate model were mapped to uncover the geographic patterns. The implications, limitations and future work of the study are discussed in the concluding section.

  14. Injury prevention risk communication: A mental models approach

    DEFF Research Database (Denmark)

    Austin, Laurel Cecelia; Fischhoff, Baruch

    2012-01-01

    Individuals' decisions and behaviour can play a critical role in determining both the probability and severity of injury. Behavioural decision research studies people's decision-making processes in terms comparable to scientific models of optimal choices, providing a basis for focusing...... interventions on the most critical opportunities to reduce risks. That research often seeks to identify the 'mental models' that underlie individuals' interpretations of their circumstances and the outcomes of possible actions. In the context of injury prevention, a mental models approach would ask why people...... and uses examples to discuss how the approach can be used to develop scientifically validated context-sensitive injury risk communications....

  15. Assessing risk factors for dental caries: a statistical modeling approach.

    Science.gov (United States)

    Trottini, Mario; Bossù, Maurizio; Corridore, Denise; Ierardo, Gaetano; Luzzi, Valeria; Saccucci, Matteo; Polimeni, Antonella

    2015-01-01

    The problem of identifying potential determinants and predictors of dental caries is of key importance in caries research and has received considerable attention in the scientific literature. From the methodological side, a broad range of statistical models is currently available to analyze dental caries indices (DMFT, dmfs, etc.). These models have been applied in several studies to investigate the impact of different risk factors on the cumulative severity of dental caries experience. However, in most cases (i) these studies focus on a very specific subset of risk factors; and (ii) in the statistical modeling only a few candidate models are considered and model selection is at best only marginally addressed. As a result, our understanding of the robustness of the statistical inferences with respect to the choice of the model is very limited; the richness of the set of statistical models available for analysis is only marginally exploited; and inferences could be biased due to the omission of potentially important confounding variables from the model's specification. In this paper we argue that these limitations can be overcome by considering a general class of candidate models and carefully exploring the model space using standard model selection criteria and measures of global fit and predictive performance of the candidate models. Strengths and limitations of the proposed approach are illustrated with a real data set. In our illustration the model space contains more than 2.6 million models, which requires inferences to be adjusted for 'optimism'.
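
    As a rough sketch of the kind of model-space exploration advocated above, the following Python fragment ranks every subset of a handful of candidate risk factors by AIC; the data are synthetic and the variable names invented, so this is an illustration of the idea rather than the paper's actual analysis:

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      X = rng.normal(size=(n, 5))                             # 5 candidate risk factors
      y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(size=n)  # synthetic caries-like index

      def aic_linear(Xs, y):
          """AIC of an ordinary least-squares fit (Gaussian likelihood)."""
          n_obs, k = Xs.shape
          beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
          rss = np.sum((y - Xs @ beta) ** 2)
          return n_obs * np.log(rss / n_obs) + 2 * (k + 1)    # +1 for the error variance

      results = []
      for r in range(1, X.shape[1] + 1):
          for subset in itertools.combinations(range(X.shape[1]), r):
              Xs = np.column_stack([np.ones(n), X[:, subset]])
              results.append((aic_linear(Xs, y), subset))

      best_aic, best_subset = min(results)
      print(f"best subset: {best_subset}, AIC = {best_aic:.1f}")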

  16. A modeling approach to hospital location for effective marketing.

    Science.gov (United States)

    Cokelez, S; Peacock, E

    1993-01-01

    This paper develops a mixed integer linear programming model for locating health care facilities. The parameters of the objective function of this model are based on factor-rating analysis and the grid method. Subjective and objective factors representative of real-life situations are incorporated into the model in a unique way, permitting a trade-off analysis of certain factors pertinent to the location of hospitals. This results in a unified approach and a single model whose credibility is further enhanced by the inclusion of geographical and demographic factors.
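
    A minimal sketch of a facility-location MILP in this spirit is given below, using the open-source PuLP library; the sites, demand zones, ratings and costs are invented placeholders, not the paper's formulation:

      from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

      sites = ["A", "B", "C"]                    # candidate hospital sites (hypothetical)
      zones = ["z1", "z2", "z3", "z4"]           # demand zones
      fixed = {"A": 90, "B": 70, "C": 80}        # site scores from factor rating (invented)
      travel = {("A", "z1"): 2, ("A", "z2"): 6, ("A", "z3"): 9, ("A", "z4"): 5,
                ("B", "z1"): 7, ("B", "z2"): 3, ("B", "z3"): 4, ("B", "z4"): 6,
                ("C", "z1"): 5, ("C", "z2"): 8, ("C", "z3"): 2, ("C", "z4"): 3}

      prob = LpProblem("hospital_location", LpMinimize)
      open_ = LpVariable.dicts("open", sites, cat=LpBinary)
      assign = LpVariable.dicts("assign", [(s, z) for s in sites for z in zones], cat=LpBinary)

      # objective: fixed site cost plus travel burden (a grid-method-style distance term)
      prob += lpSum(fixed[s] * open_[s] for s in sites) \
            + lpSum(travel[s, z] * assign[s, z] for s in sites for z in zones)

      for z in zones:                            # each zone served by exactly one site
          prob += lpSum(assign[s, z] for s in sites) == 1
      for s in sites:
          for z in zones:                        # assignments only to open sites
              prob += assign[s, z] <= open_[s]
      prob += lpSum(open_[s] for s in sites) <= 2   # budget: at most two hospitals

      prob.solve(PULP_CBC_CMD(msg=False))
      print([s for s in sites if open_[s].value() == 1])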

  17. Mathematical and computer modeling of electro-optic systems using a generic modeling approach

    OpenAIRE

    Smith, M.I.; Murray-Smith, D.J.; Hickman, D.

    2007-01-01

    The conventional approach to modelling electro-optic sensor systems is to develop separate models for individual systems or classes of system, depending on the detector technology employed in the sensor and the application. However, this ignores commonality in design and in components of these systems. A generic approach is presented for modelling a variety of sensor systems operating in the infrared waveband that also allows systems to be modelled with different levels of detail and at diffe...

  18. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

    The "interpretation through synthesis" approach to analyze face images, particularly Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly depended on the training sets and inherently on the genera...

  19. Earthquake response analysis of RC bridges using simplified modeling approaches

    Science.gov (United States)

    Lee, Do Hyung; Kim, Dookie; Park, Taehyo

    2009-07-01

    In this paper, simplified modeling approaches describing the hysteretic behavior of reinforced concrete bridge piers are proposed. For this purpose, flexure-axial and shear-axial interaction models are developed and implemented into a nonlinear finite element analysis program. Comparative verifications for reinforced concrete columns prove that the analytical predictions obtained with the new formulations show good correlation with experimental results under various levels of axial forces and section types. In addition, analytical correlation studies for the inelastic earthquake response of reinforced concrete bridge structures are also carried out using the simplified modeling approaches. Relatively good agreement is observed in the results between the current modeling approach and the elaborated fiber models. It is thus encouraging that the present developments and approaches are capable of identifying the contribution of deformation mechanisms correctly. Subsequently, the present developments can be used as a simple yet effective tool for the deformation capacity evaluation of reinforced concrete columns in general and reinforced concrete bridge piers in particular.

  20. Bianchi VI0 and III models: self-similar approach

    International Nuclear Information System (INIS)

    Belinchon, Jose Antonio

    2009-01-01

    We study several cosmological models with Bianchi VI₀ and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other models studied, we find that the behaviours of G and Λ are related: if G behaves as a growing function of time then Λ is a positive decreasing function of time, but if G is decreasing then Λ is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  1. Modeling and control approach to a distinctive quadrotor helicopter.

    Science.gov (United States)

    Wu, Jun; Peng, Hui; Chen, Qing; Peng, Xiaoyan

    2014-01-01

    The quadrotor helicopter considered in this paper has a unique configuration. It is more complex than commonly used quadrotors because of its inaccurately known parameters, non-ideal symmetric structure and unknown nonlinear dynamics. A novel method is presented to handle its modeling and control problems: a MIMO RBF neural-net-based state-dependent ARX (RBF-ARX) model is adopted to represent its nonlinear dynamics, and a MIMO RBF-ARX model-based global LQR controller is then proposed to stabilize the quadrotor's attitude. Comparison with a physical model-based LQR controller and an ARX model-set-based gain-scheduling LQR controller confirmed the superiority of the MIMO RBF-ARX model-based control approach. This successful application verified the validity of the MIMO RBF-ARX modeling method for a quadrotor helicopter with complex nonlinearity. © 2013 Published by ISA. All rights reserved.
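
    The RBF-ARX identification itself is beyond a short example, but the LQR step can be sketched. The following Python fragment computes an LQR gain for a toy linearized roll-axis model (the matrices are placeholders, not identified quadrotor dynamics):

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0], [0.0, -0.5]])   # toy roll-angle/rate linearization
      B = np.array([[0.0], [2.0]])
      Q = np.diag([10.0, 1.0])                  # state weights
      R = np.array([[0.1]])                     # input weight

      P = solve_continuous_are(A, B, Q, R)      # Riccati solution
      K = np.linalg.solve(R, B.T @ P)           # feedback law u = -K x
      print("LQR gain:", K)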

  2. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey...
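
    The filtering feature can be illustrated with a bare scalar Kalman filter; the random-walk level model and the noise levels below are invented stand-ins for a real grey-box model of a monitored basin:

      import numpy as np

      rng = np.random.default_rng(1)
      true = 2.0 + np.cumsum(rng.normal(0, 0.05, 200))   # "true" basin level (m)
      meas = true + rng.normal(0, 0.2, 200)              # noisy level-transmitter readings

      q, r = 0.05**2, 0.2**2      # process / measurement noise variances
      x, p = meas[0], 1.0         # state estimate and its variance
      filtered = []
      for z in meas:
          p = p + q               # predict: level follows a random walk
          k = p / (p + r)         # Kalman gain
          x = x + k * (z - x)     # update with the new measurement
          p = (1 - k) * p
          filtered.append(x)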

  3. Environmental Radiation Effects on Mammals A Dynamical Modeling Approach

    CERN Document Server

    Smirnova, Olga A

    2010-01-01

    This text is devoted to the theoretical studies of radiation effects on mammals. It uses the framework of developed deterministic mathematical models to investigate the effects of both acute and chronic irradiation in a wide range of doses and dose rates on vital body systems including hematopoiesis, small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of the system and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of employment of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as the properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...

  4. Next-Gen3: Sequencing, Modeling, and Advanced Biofuels - Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Zengler, Karsten

    2017-12-27

    Successful, scalable implementation of biofuels depends on the efficient and near-complete utilization of diverse biomass sources. One approach is to utilize the large recalcitrant biomass fraction (or any organic waste stream) through the thermochemical conversion of organic compounds to syngas, a mixture of carbon monoxide (CO), carbon dioxide (CO2), and hydrogen (H2), which can subsequently be metabolized by acetogenic microorganisms to produce next-gen biofuels. The goal of this proposal was to advance the development of the acetogen Clostridium ljungdahlii as a chassis organism for next-gen biofuel production from cheap, renewable sources and to detail the interconnectivity of metabolism, energy conservation, and regulation in acetogens using next-gen sequencing and next-gen modeling. To achieve this goal we characterized the optimization of carbon and energy utilization through differential translational efficiency in C. ljungdahlii. Furthermore, we reconstructed a next-generation model of all major cellular processes, such as macromolecular synthesis and transcriptional regulation, and deployed this model to predict proteome allocation, overflow metabolism, and metal requirements in this model acetogen. In addition we explored the evolutionary significance of tRNA operon structure using the next-gen model and determined the optimal operon structure for bioproduction. Our study substantially enhanced the knowledge base for chemolithoautotrophs and their potential for advanced biofuel production. It provides next-generation modeling capability, offers innovative tools for genome-scale engineering, and provides novel methods to utilize next-generation models for the design of tunable systems that produce commodity chemicals from inexpensive sources.

  5. Modelling dynamic ecosystems : venturing beyond boundaries with the Ecopath approach

    OpenAIRE

    Coll, Marta; Akoglu, E.; Arreguin-Sanchez, F.; Fulton, E. A.; Gascuel, D.; Heymans, J. J.; Libralato, S.; Mackinson, S.; Palomera, I.; Piroddi, C.; Shannon, L. J.; Steenbeek, J.; Villasante, S.; Christensen, V.

    2015-01-01

    Thirty years of progress using the Ecopath with Ecosim (EwE) approach in different fields such as ecosystem impacts of fishing and climate change, emergent ecosystem dynamics, ecosystem-based management, and marine conservation and spatial planning were showcased in November 2014 at the conference "Ecopath 30 years-modelling dynamic ecosystems: beyond boundaries with EwE". Exciting new developments include temporal-spatial and end-to-end modelling, as well as novel applications to environmental ...

  6. Regularization of quantum gravity in the matrix model approach

    International Nuclear Information System (INIS)

    Ueda, Haruhiko

    1991-02-01

    We study the divergence problem of the partition function in the matrix model approach to two-dimensional quantum gravity. We propose a new model V(φ) = (1/2)Tr φ² + (g₄/N)Tr φ⁴ + (g′/N⁴)Tr(φ⁴)² and show that in the sphere case it has no divergence problem and that the critical exponent is that of pure gravity. (author)

  7. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems. ...... in a SCADA system because the most important information on the specific system is provided on-line...

  8. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    [Fragments of a draft conference paper: DETC2015-46982, "DEVELOPMENT OF A CONSERVATIVE MODEL VALIDATION APPROACH FOR RELIABLE...", CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA.] A conservative PDF and a probability of failure are selected from the predicted output PDFs at a user-specified conservativeness level for validation. ... conservativeness level, the conservative probability of failure obtained from Section 4 must be maintained. The mathematical formulation of conservative model...

  9. Scalable Nonlinear Solvers for Fully Implicit Coupled Nuclear Fuel Modeling. Final Report

    International Nuclear Information System (INIS)

    Cai, Xiao-Chuan; Yang, Chao; Pernice, Michael

    2014-01-01

    The focus of the project is on the development and customization of some highly scalable domain decomposition based preconditioning techniques for the numerical solution of nonlinear, coupled systems of partial differential equations (PDEs) arising from nuclear fuel simulations. These high-order PDEs represent multiple interacting physical fields (for example, heat conduction, oxygen transport, solid deformation), each modeled by a certain type of Cahn-Hilliard and/or Allen-Cahn equation. Most existing approaches involve a careful splitting of the fields and the use of field-by-field iterations to obtain a solution of the coupled problem. Such approaches have many advantages, such as ease of implementation since only single-field solvers are needed, but also exhibit disadvantages. For example, certain nonlinear interactions between the fields may not be fully captured, and for unsteady problems, stable time integration schemes are difficult to design. In addition, when implemented on large-scale parallel computers, the sequential nature of the field-by-field iterations substantially reduces the parallel efficiency. To overcome the disadvantages, fully coupled approaches have been investigated in order to obtain full physics simulations.
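
    As a toy contrast with field-by-field iteration, the sketch below applies a fully coupled Newton solve to an invented two-field algebraic system (nothing like the report's PDEs in scale), so that the cross-field coupling enters through the off-diagonal Jacobian blocks:

      import numpy as np

      def residual(v):
          u, w = v
          return np.array([u + 0.5 * np.sin(w) - 1.0,    # "field 1" equation
                           w + 0.5 * np.sin(u) - 2.0])   # "field 2" equation

      def jacobian(v):                                   # includes the coupling terms
          u, w = v
          return np.array([[1.0, 0.5 * np.cos(w)],
                           [0.5 * np.cos(u), 1.0]])

      v = np.zeros(2)
      for it in range(20):                               # fully coupled Newton iteration
          r = residual(v)
          if np.linalg.norm(r) < 1e-12:
              break
          v = v - np.linalg.solve(jacobian(v), r)
      print(f"converged after {it} steps: {v}")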

  10. Estimating, Testing, and Comparing Specific Effects in Structural Equation Models: The Phantom Model Approach

    Science.gov (United States)

    Macho, Siegfried; Ledermann, Thomas

    2011-01-01

    The phantom model approach for estimating, testing, and comparing specific effects within structural equation models (SEMs) is presented. The rationale underlying this novel method consists in representing the specific effect to be assessed as a total effect within a separate latent variable model, the phantom model that is added to the main…

  11. Comparative flood damage model assessment: towards a European approach

    Science.gov (United States)

    Jongman, B.; Kreibich, H.; Apel, H.; Barredo, J. I.; Bates, P. D.; Feyen, L.; Gericke, A.; Neal, J.; Aerts, J. C. J. H.; Ward, P. J.

    2012-12-01

    There is a wide variety of flood damage models in use internationally, differing substantially in their approaches and economic estimates. Since these models are being used more and more as a basis for investment and planning decisions on an increasingly large scale, there is a need to reduce the uncertainties involved and develop a harmonised European approach, in particular with respect to the EU Flood Risks Directive. In this paper we present a qualitative and quantitative assessment of seven flood damage models, using two case studies of past flood events in Germany and the United Kingdom. The qualitative analysis shows that modelling approaches vary strongly, and that current methodologies for estimating infrastructural damage are not as well developed as methodologies for the estimation of damage to buildings. The quantitative results show that the model outcomes are very sensitive to uncertainty in both vulnerability (i.e. depth-damage functions) and exposure (i.e. asset values), whereby the former has a larger effect than the latter. We conclude that care needs to be taken when using aggregated land use data for flood risk assessment, and that it is essential to adjust asset values to the regional economic situation and property characteristics. We call for the development of a flexible but consistent European framework that applies best practice from existing models while providing room for including necessary regional adjustments.
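
    The vulnerability component the paper refers to reduces, in its simplest form, to a stage-damage lookup; the curve and asset value below are invented for illustration:

      import numpy as np

      depth_pts = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # inundation depth (m)
      damage_frac = np.array([0.0, 0.15, 0.3, 0.55, 0.85])  # fraction of asset value lost

      def building_damage(depth_m, asset_value_eur):
          """Interpolate the depth-damage curve and scale by exposure."""
          return asset_value_eur * np.interp(depth_m, depth_pts, damage_frac)

      print(building_damage(1.4, 250_000))   # 1.4 m of flooding, 250 kEUR building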

  12. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Klos, Richard

    2008-03-15

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment
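
    The compartment structure GEMA uses can be caricatured as a linear system of inventories coupled by transfer rates; the three-compartment chain and rate constants below are invented, not GEMA's actual configuration:

      import numpy as np
      from scipy.integrate import solve_ivp

      lam = np.log(2) / 30.0             # decay constant for a 30-year half-life (1/y)
      k12, k23, k_out = 0.2, 0.5, 0.1    # soil -> stream -> lake transfer rates (1/y)

      K = np.array([[-(k12 + lam), 0.0, 0.0],
                    [k12, -(k23 + lam), 0.0],
                    [0.0, k23, -(k_out + lam)]])

      sol = solve_ivp(lambda t, c: K @ c, (0.0, 100.0), [1.0, 0.0, 0.0])
      print(sol.y[:, -1])                # compartment inventories after 100 years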

  13. Stochastic modeling of macrodispersion in unsaturated heterogeneous porous media. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, T.C.J.

    1995-02-01

    Spatial heterogeneity of geologic media leads to uncertainty in predicting both flow and transport in the vadose zone. In this work an efficient and flexible, combined analytical-numerical Monte Carlo approach is developed for the analysis of steady-state flow and transient transport processes in highly heterogeneous, variably saturated porous media. The approach is also used for the investigation of the validity of linear, first order analytical stochastic models. With the Monte Carlo analysis accurate estimates of the ensemble conductivity, head, velocity, and concentration mean and covariance are obtained; the statistical moments describing displacement of solute plumes, solute breakthrough at a compliance surface, and time of first exceedance of a given solute flux level are analyzed; and the cumulative probability density functions for solute flux across a compliance surface are investigated. The results of the Monte Carlo analysis show that for very heterogeneous flow fields, and particularly in anisotropic soils, the linearized, analytical predictions of soil water tension and soil moisture flux become erroneous. Analytical, linearized Lagrangian transport models also overestimate both the longitudinal and the transverse spreading of the mean solute plume in very heterogeneous soils and in dry soils. A combined analytical-numerical conditional simulation algorithm is also developed to estimate the impact of in-situ soil hydraulic measurements on reducing the uncertainty of concentration and solute flux predictions.
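
    The Monte Carlo skeleton behind such an analysis is simple even though the flow physics is not; the fragment below samples lognormal conductivities for a layered column and collects ensemble statistics of the effective (harmonic-mean) conductivity, with all parameters illustrative:

      import numpy as np

      rng = np.random.default_rng(42)
      n_layers, n_real = 50, 2000
      mu, sigma = np.log(1e-5), 1.0                 # ln K statistics (K in m/s)

      K = rng.lognormal(mu, sigma, size=(n_real, n_layers))
      K_eff = n_layers / np.sum(1.0 / K, axis=1)    # harmonic mean per realization

      print("ensemble mean:", K_eff.mean(), "std:", K_eff.std())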

  14. Use of non-conjugate prior distributions in compound failure models. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Johnson, D.E.; Milliken, G.A.; Eckhoff, N.D.

    1981-12-01

    Several theoretical and computational techniques are presented for compound failure models in which the failure rate or failure probability for a class of components is considered to be a random variable. Both the failure-on-demand and failure-rate situations are considered. Ten different prior families are presented for describing the variation or uncertainty of the failure parameter. Methods considered for estimating values for the prior parameters from a given set of failure data are (1) matching data moments to those of the prior distribution, (2) matching data moments to those of the compound marginal distribution, and (3) the marginal maximum likelihood method. Numerical methods for computing the parameter estimators for all ten prior families are presented, as well as methods for obtaining estimates of the variances and covariances of the parameter estimators. It is shown that various confidence, probability, and tolerance intervals can be evaluated. Finally, to test the resulting failure models against the given failure data, generalized chi-square and Kolmogorov-Smirnov goodness-of-fit tests are proposed, together with a test to eliminate outliers from the failure data. Computer codes based on the results presented here have been prepared and are presented in a companion report.
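
    For the simplest of the ten families, a beta prior in the failure-on-demand case, the moment-matching estimator (method (1) above) amounts to a few lines; the failure probabilities here are invented sample data:

      import numpy as np

      p = np.array([0.010, 0.004, 0.022, 0.015, 0.008])   # observed failure probabilities
      m, v = p.mean(), p.var(ddof=1)

      # Beta(alpha, beta): mean = a/(a+b), variance = m(1-m)/(a+b+1)
      common = m * (1 - m) / v - 1.0
      alpha, beta = m * common, (1 - m) * common
      print(f"beta prior: alpha={alpha:.3f}, beta={beta:.3f}")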

  15. Search for Standard Model Higgs in Two Photon Final State at ATLAS

    CERN Document Server

    Kim, Hyeon Jin

    2010-01-01

    The Standard Model of particle physics describes very precisely the strong and weak nuclear forces and the electromagnetic interaction, through the exchange of vector bosons. It also describes all matter as composed of quarks and leptons and predicts their interactions. The Higgs boson is the last missing piece of the Standard Model, yet to be observed. The search for the Higgs particle is one of the most important goals of the ATLAS experiment at the Large Hadron Collider (LHC). The ATLAS electromagnetic (EM) calorimeter is a crucial subdetector system of the ATLAS detector in searching for the Higgs boson, in particular in final states that include high $p_{T}$ photons or electrons. To be able to detect the rare Higgs signals, the EM calorimeter must not only be able to precisely measure the energy and direction of electrons and photons, but also identify electrons and photons against the overwhelming background from hadronic jets that mimic these particles. The discrimination against these backgrounds can be...

  16. Software package r3t. Model for transport and retention in porous media. Final report

    International Nuclear Information System (INIS)

    Fein, E.

    2004-01-01

    In long-term safety analyses for final repositories for hazardous wastes in deep geological formations, the impact on the biosphere due to the potential release of hazardous materials is assessed for relevant scenarios. The model for the migration of wastes from repositories to man is divided into three almost independent parts: the near field, the geosphere, and the biosphere. With the development of r3t, the capability to model pollutant transport through the geosphere for porous or equivalent porous media in large, three-dimensional, complex regions has been established. Furthermore, all relevant retention and interaction effects which are important for long-term safety analyses can now be considered. These are equilibrium sorption, kinetically controlled sorption, diffusion into immobile pore waters, and precipitation. The processes of complexation, colloidal transport and matrix diffusion may be considered at least approximately by a skilful choice of parameters. Speciation is not part of the very recently developed computer code r3t. With r3t it is possible to realistically assess the potential dilution and the barrier impact of the overburden.

  17. Recent Tests of the Standard Model with Multiboson final states at the ATLAS Detector

    CERN Document Server

    Becker, Maurice; The ATLAS collaboration

    2017-01-01

    Measurements of the cross sections of the production of two and three electroweak gauge bosons at the LHC constitute stringent tests of the electroweak sector of the Standard Model and provide a model-independent means to search for new physics at the TeV scale. The ATLAS collaboration has performed new measurements of integrated and differential cross sections of the production of heavy di-boson pairs in fully-leptonic and semi-leptonic final states at centre-of-mass energies of 8 and 13 TeV. We present in particular new measurements of WW, WZ and Z+photon cross sections in semi-leptonic or hadronic decays using standard or boosted technologies and new measurements of the inclusive and differential ZZ cross section at 13 TeV in various decay modes. In addition, the ATLAS collaboration has recently searched for the production of three W bosons or of a W boson and a photon together with a Z or W boson at a center of mass energy of 8 TeV. Moreover, the electroweak production in vector boson fusion of single W a...

  18. Modeling of phase equilibria with CPA using the homomorph approach

    DEFF Research Database (Denmark)

    Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios

    2011-01-01

    For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are estimated from binary systems; one binary interaction parameter per system. No additional mixing rules are needed for cross-associating systems, but combining rules are required, e.g. the Elliott rule or the so-called CR-1 rule. There is a very large class of mixtures, e.g. water or glycols with aromatic... interaction parameters are often used for solvating systems; one for the physical part (kij) and one for the association part (βcross). This limits the predictive capabilities and possibilities of generalization of the model. In this work we present an approach to reduce the number of adjustable parameters...

  19. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    ... Their developments, however, are largely due to experiment based trial and error approaches and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply...... for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model......-based synthesis method must employ models at lower levels of aggregation and through combination rules for phenomena, generate (synthesize) new intensified unit operations. An efficient solution procedure for the synthesis problem is needed to tackle the potentially large number of options that would be obtained...

  20. A Two Step Face Alignment Approach Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Ying Cui

    2012-10-01

    Full Text Available Although face alignment using the Active Appearance Model (AAM) is relatively stable, it is known to be sensitive to initial values and not robust under inconstant circumstances. In order to strengthen the performance of AAM for face alignment, a two-step approach combining AAM and the Active Shape Model (ASM) is proposed. In the first step, AAM is used to locate the inner landmarks of the face. In the second step, the extended ASM is used to locate the outer landmarks of the face under the constraint of the inner landmarks estimated by AAM. The two kinds of landmarks are then combined together to form the whole set of facial landmarks. The proposed approach is compared with the basic AAM and the progressive AAM methods. Experimental results show that the proposed approach gives a much more effective performance.

  1. A review of function modeling : Approaches and applications

    NARCIS (Netherlands)

    Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.

    2008-01-01

    This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research

  2. The Bipolar Approach: A Model for Interdisciplinary Art History Courses.

    Science.gov (United States)

    Calabrese, John A.

    1993-01-01

    Describes a college level art history course based on the opposing concepts of Classicism and Romanticism. Contends that all creative work, such as film or architecture, can be categorized according to this bipolar model. Includes suggestions for objects to study and recommends this approach for art education at all education levels. (CFR)

  3. Model-independent approach for dark matter phenomenology ...

    Indian Academy of Sciences (India)

    We have studied the phenomenology of dark matter at the ILC and cosmic positron experiments based on model-independent approach. We have found a strong correlation between dark matter signatures at the ILC and those in the indirect detection experiments of dark matter. Once the dark matter is discovered in the ...

  4. A Behavioral Decision Making Modeling Approach Towards Hedging Services

    NARCIS (Netherlands)

    Pennings, J.M.E.; Candel, M.J.J.M.; Egelkraut, T.M.

    2003-01-01

    This paper takes a behavioral approach toward the market for hedging services. A behavioral decision-making model is developed that provides insight into how and why owner-managers decide the way they do regarding hedging services. Insight into those choice processes reveals information needed by

  5. Comparing State SAT Scores Using a Mixture Modeling Approach

    Science.gov (United States)

    Kim, YoungKoung Rachel

    2009-01-01

    Presented at the national conference for AERA (American Educational Research Association) in April 2009. The large variability of SAT taker population across states makes state-by-state comparisons of the SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…

  6. Export of microplastics from land to sea. A modelling approach

    NARCIS (Netherlands)

    Siegfried, Max; Koelmans, A.A.; Besseling, E.; Kroeze, C.

    2017-01-01

    Quantifying the transport of plastic debris from river to sea is crucial for assessing the risks of plastic debris to human health and the environment. We present a global modelling approach to analyse the composition and quantity of point-source microplastic fluxes from European rivers to the sea.

  7. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.

  8. Hidden Markov model-based approach for generation of Pitman ...

    Indian Academy of Sciences (India)

    Speech is one of the most basic means of human communication. ... human beings is carried out with the aid of communication and has facilitated the development ...

  12. Using artificial neural network approach for modelling rainfall–runoff ...

    Indian Academy of Sciences (India)

    Using artificial neural network approach for modelling ... Nevertheless, water level and flow records are essential in hydrological analysis for designing related water works of flood management. Due to the complexity of the hydrological process, ...

  13. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  14. Hidden Markov model-based approach for generation of Pitman ...

    Indian Academy of Sciences (India)

    In this paper, an approach for feature extraction using Mel frequency cepstral coefficients (MFCC) and classification using hidden Markov models (HMM) for generating strokes comprising consonants and vowels (CV) in the process of production of Pitman shorthand language from spoken English is proposed. ...
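
    In outline, such a pipeline can be sketched with the librosa and hmmlearn packages; the audio file name is hypothetical and one HMM would be trained per CV stroke class, so this only shows the plumbing:

      import librosa
      from hmmlearn.hmm import GaussianHMM

      y, sr = librosa.load("spoken_cv_example.wav", sr=16000)   # hypothetical clip
      mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T      # (frames, 13) features

      model = GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
      model.fit(mfcc)                  # fit on one utterance, for illustration only
      print(model.score(mfcc))         # log-likelihood used to pick the best stroke model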

  15. Pruning Chinese trees : an experimental and modelling approach

    NARCIS (Netherlands)

    Zeng, Bo

    2001-01-01

    Pruning of trees, in which some branches are removed from the lower crown of a tree, has been extensively used in China in silvicultural management for many purposes. With an experimental and modelling approach, the effects of pruning on tree growth and on the harvest of plant material were studied.

  16. Non-frontal Model Based Approach to Forensic Face Recognition

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance

  17. A unified modeling approach for physical experiment design and optimization in laser driven inertial confinement fusion

    International Nuclear Information System (INIS)

    Li, Haiyan; Huang, Yunbao; Jiang, Shaoen; Jing, Longfei; Tianxuan, Huang; Ding, Yongkun

    2015-01-01

    Highlights: • A unified modeling approach for physical experiment design is presented. • Any laser facility can be flexibly defined and included with two scripts. • Complex targets and laser beams can be parametrically modeled for optimization. • Automatic mapping of laser beam energy facilitates target shape optimization. - Abstract: Physical experiment design and optimization is essential for laser-driven inertial confinement fusion because of the high cost of each shot. However, in available codes only limited experiments with simple structures or shapes can be designed and evaluated, on only a few laser facilities, and targets are usually defined by programming, which makes the design and optimization of complex-shape targets on arbitrary laser facilities difficult. A unified modeling approach for physical experiment design and optimization on any laser facility is presented in this paper. Its core ideas are: (1) any laser facility can be flexibly defined and included with two scripts; (2) complex-shape targets and laser beams can be parametrically modeled based on features; (3) an automatic scheme for mapping laser beam energy onto the discrete mesh elements of targets enables targets or laser beams to be optimized without any additional interactive modeling or programming; and (4) efficient computational algorithms are additionally presented to evaluate radiation symmetry on the target. Finally, examples are demonstrated to validate the significance of this unified modeling approach for physical experiment design and optimization in laser-driven inertial confinement fusion.

  18. Reconciliation with oneself and with others: From approach to model

    Directory of Open Access Journals (Sweden)

    Nikolić-Ristanović Vesna

    2010-01-01

    Full Text Available The paper presents the approach to dealing with war and its consequences that was developed within the Victimology Society of Serbia over the last five years, in the framework of the Association Joint Action for Truth and Reconciliation (ZAIP). First, a short review of the Association and the process through which the ZAIP approach to dealing with the past was developed is presented. Then, a detailed description of the approach itself, with identification of its most important specificities, is given. In the conclusion, next steps are suggested, aimed at developing a model of reconciliation that builds on the ZAIP approach and is appropriate to the social context of Serbia and its surroundings.

  19. EXTENDED MODEL OF COMPETITIVITY THROUGH APPLICATION OF NEW APPROACH DIRECTIVES

    Directory of Open Access Journals (Sweden)

    Slavko Arsovski

    2009-03-01

    Full Text Available The basic subject of this work is a model of the impact of the New Approach directives on the quality and safety of products and on the competitiveness of our companies. The work presents a working hypothesis based on experts' experience, given that the relevant infrastructure for applying the New Approach directives has not been examined until now: it is not known which products or industries in Serbia are covered by the New Approach directives and the CE mark, nor what the effects of using the CE mark are. The work should indicate existing reserves in quality and product safety, the level of possible improvement in competitiveness, and the potential for increasing profit by meeting the requirements of the New Approach directives.

  20. Revisit the modeling of the Saturnian ring atmosphere and ionosphere from the "Cassini Grand Finale" results

    Science.gov (United States)

    Tseng, W. L.; Johnson, R. E.; Tucker, O. J.; Perry, M. E.; Ip, W. H.

    2017-12-01

    During the Cassini Grand Finale mission, the spacecraft made the first in-situ measurements of Saturn's upper atmosphere and its rings, providing critical information for understanding the coupling dynamics between the main rings and the Saturnian system. The ring atmosphere is the source of neutrals (i.e., O2, H2, H; Tseng et al., 2010; 2013a), generated primarily by photolytic decomposition of water ice (Johnson et al., 2006), and of plasma (i.e., O2+ and H2+; Tseng et al., 2011) in the Saturnian magnetosphere. In addition, the main rings interact strongly with Saturn's atmosphere and ionosphere (i.e., as a source of oxygen into Saturn's upper atmosphere and/or the "ring rain" of O'Donoghue et al., 2013). Furthermore, the near-ring plasma environment is complicated by neutrals from both the seasonally dependent ring atmosphere and the Enceladus torus (Tseng et al., 2013b) and, possibly, by small grains from the main and tenuous F and G rings (Johnson et al., 2017). The data coming from the Cassini Grand Finale mission already shed light on the dominant physics and chemistry in this region of Saturn's magnetosphere, for example the presence of carbonaceous material from meteorite impacts in the main rings and the similar distributions of the individual gas species in the ring atmosphere. We will revisit the details of our ring atmosphere/ionosphere model to study issues such as the source mechanism for the organic material and the neutral-grain-plasma interaction processes.

  1. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder-an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
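
    The weighting idea at the heart of Bayesian model averaging can be shown generically: with an approximate evidence for each model (BIC here, values invented), the posterior model weights follow directly. The paper's averaging over clusterings is a richer version of the same step:

      import numpy as np

      bic = np.array([1012.4, 1015.9])      # e.g. latent class vs grade of membership
      delta = bic - bic.min()
      w = np.exp(-0.5 * delta)
      w /= w.sum()                          # approximate posterior model weights
      print(dict(zip(["LCA", "GoM"], np.round(w, 3))))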

  2. Modeling electricity spot and futures price dependence: A multifrequency approach

    Science.gov (United States)

    Malo, Pekka

    2009-11-01

    Electricity prices are known to exhibit multifractal properties. We accommodate this finding by investigating multifractal models for electricity prices. In this paper we propose a flexible Copula-MSM (Markov Switching Multifractal) approach for modeling spot and weekly futures price dynamics. By using a conditional copula function, the framework allows us to separately model the dependence structure, while enabling use of multifractal stochastic volatility models to characterize fluctuations in marginal returns. An empirical experiment is carried out using data from Nord Pool. A study of volatility forecasting performance for electricity spot prices reveals that multifractal techniques are a competitive alternative to GARCH models. We also demonstrate how the Copula-MSM model can be employed for finding optimal portfolios that minimize the Conditional Value-at-Risk.
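
    The separation of dependence from marginals that the copula provides can be sketched in a few lines (a plain Gaussian copula with Student-t marginals and invented parameters; not the paper's Copula-MSM specification):

      import numpy as np
      from scipy.stats import norm, t

      rng = np.random.default_rng(7)
      rho = 0.8                                        # spot/futures dependence (invented)
      L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
      z = rng.standard_normal((10_000, 2)) @ L.T       # correlated normals
      u = norm.cdf(z)                                  # copula sample on the unit square

      spot = t.ppf(u[:, 0], df=4)                      # heavy-tailed spot marginal
      fut = t.ppf(u[:, 1], df=6)                       # futures marginal
      print(np.corrcoef(spot, fut)[0, 1])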

  3. Multiphysics modeling using COMSOL a first principles approach

    CERN Document Server

    Pryor, Roger W

    2011-01-01

    Multiphysics Modeling Using COMSOL rapidly introduces the senior level undergraduate, graduate or professional scientist or engineer to the art and science of computerized modeling for physical systems and devices. It offers a step-by-step modeling methodology through examples that are linked to the Fundamental Laws of Physics through a First Principles Analysis approach. The text explores a breadth of multiphysics models in coordinate systems that range from 1D to 3D and introduces the readers to the numerical analysis modeling techniques employed in the COMSOL Multiphysics software. After readers have built and run the examples, they will have a much firmer understanding of the concepts, skills, and benefits acquired from the use of computerized modeling techniques to solve their current technological problems and to explore new areas of application for their particular technological areas of interest.

  4. Evaluation of Workflow Management Systems - A Meta Model Approach

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    1998-11-01

    Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction to the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users to specify their requirements for a workflow management system.

  5. Setting conservation management thresholds using a novel participatory modeling approach.

    Science.gov (United States)

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.
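
    The aggregation step is just a weighted sum of consequence estimates; in miniature, with invented weights and scores:

      import numpy as np

      weights = np.array([0.5, 0.3, 0.2])     # environmental, social, economic (invented)
      consequences = np.array([
          [0.9, 0.2, 0.3],                    # e.g. close the site
          [0.6, 0.7, 0.6],                    # e.g. restrict access
          [0.3, 0.9, 0.9],                    # e.g. signage only
      ])
      scores = consequences @ weights         # one decision score per alternative
      print(scores)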

  6. A simplified GIS approach to modeling global leaf water isoscapes.

    Directory of Open Access Journals (Sweden)

    Jason B West

    Full Text Available The stable hydrogen (δ2H) and oxygen (δ18O) isotope ratios of organic and inorganic materials record biological and physical processes through the effects of substrate isotopic composition and fractionations that occur as reactions proceed. At large scales, these processes can exhibit spatial predictability because of the effects of coherent climatic patterns over the Earth's surface. Attempts to model spatial variation in the stable isotope ratios of water have been made for decades. Leaf water has a particular importance for some applications, including plant organic materials that record spatial and temporal climate variability and that may be a source of food for migrating animals. It is also an important source of the variability in the isotopic composition of atmospheric gases. Although efforts to model global-scale leaf water isotope ratio spatial variation have been made (especially of δ18O), significant uncertainty remains in models and their execution across spatial domains. We introduce here a Geographic Information System (GIS) approach to the generation of global, spatially-explicit isotope landscapes (= isoscapes) of "climate normal" leaf water isotope ratios. We evaluate the approach and the resulting products by comparison with simulation model outputs and point measurements, where obtainable, over the Earth's surface. The isoscapes were generated using biophysical models of isotope fractionation and spatially continuous precipitation isotope and climate layers as input model drivers. Leaf water δ18O isoscapes produced here generally agreed with latitudinal averages from GCM/biophysical model products, as well as mean values from point measurements. These results show global-scale spatial coherence in leaf water isotope ratios, similar to that observed for precipitation and validate the GIS approach to modeling leaf water isotopes. These results demonstrate that relatively simple models of leaf water enrichment
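
    A biophysical kernel of the kind driving such isoscapes is the steady-state Craig-Gordon-type enrichment at the evaporative site; the sketch below uses that standard form with illustrative parameter values (all per mil), not the paper's calibrated inputs:

      def leaf_water_enrichment(eps_equil, eps_kinetic, delta_vapour, ea_over_ei):
          """Steady-state evaporative-site enrichment relative to source water."""
          return eps_equil + eps_kinetic + (delta_vapour - eps_kinetic) * ea_over_ei

      # equilibrium and kinetic fractionations, depleted vapour, 60% humidity ratio
      print(leaf_water_enrichment(9.8, 26.5, -12.0, 0.6))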

  7. A Genetic Algorithm Approach for Modeling a Grounding Electrode

    Science.gov (United States)

    Mishra, Arbind Kumar; Nagaoka, Naoto; Ametani, Akihiro

    This paper proposes a genetic-algorithm-based approach to determining a grounding electrode model circuit composed of resistances, inductances and capacitances. The proposed methodology determines the model circuit parameters, based on a general ladder circuit, directly from a measured result. Transient voltages of several electrodes were measured while applying a step-like current. EMTP simulations of the transient voltage on the grounding electrode were carried out by adopting the proposed model circuits. The accuracy of the proposed method has been confirmed to be high in comparison with the measured transient voltages.
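
    A toy version of the idea can be sketched as follows, assuming a two-section R-C ladder (inductances omitted for brevity) and synthetic "measured" data in place of real records; the GA operators here are generic textbook ones, not the authors':

      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 2e-6, 1001)            # 2 microseconds, dt = 2 ns
      I0 = 1.0                                    # amplitude of the injected step current (A)

      def simulate(params):
          # Voltage at the injection node of a two-section R-C ladder,
          # integrated with forward Euler.
          R1, Rs, R2, C1, C2 = params
          dt = t[1] - t[0]
          v1 = v2 = 0.0
          out = np.empty_like(t)
          for k in range(t.size):
              out[k] = v1
              v1, v2 = (v1 + dt * (I0 - v1 / R1 - (v1 - v2) / Rs) / C1,
                        v2 + dt * ((v1 - v2) / Rs - v2 / R2) / C2)
          return out

      true_params = np.array([25.0, 10.0, 40.0, 2e-9, 5e-9])
      measured = simulate(true_params) + rng.normal(0.0, 0.05, t.size)

      def fitness(p):
          v = simulate(p)
          return -np.mean((v - measured) ** 2) if np.all(np.isfinite(v)) else -np.inf

      # GA over log10(parameters): tournament selection, blend crossover,
      # Gaussian mutation, and elitism.
      lo = np.log10([5.0, 5.0, 5.0, 1e-9, 1e-9])
      hi = np.log10([200.0, 200.0, 200.0, 1e-7, 1e-7])
      pop = rng.uniform(lo, hi, size=(40, 5))
      for gen in range(60):
          fit = np.array([fitness(10.0 ** ind) for ind in pop])
          best = pop[np.argmax(fit)].copy()
          i1, i2 = rng.integers(0, len(pop), (2, len(pop)))
          parents = np.where((fit[i1] > fit[i2])[:, None], pop[i1], pop[i2])
          mates = parents[rng.permutation(len(parents))]
          alpha = rng.uniform(0.0, 1.0, parents.shape)
          pop = np.clip(alpha * parents + (1.0 - alpha) * mates
                        + rng.normal(0.0, 0.05, parents.shape), lo, hi)
          pop[0] = best                           # elitism
      print("estimated:", 10.0 ** pop[0])
      print("true:     ", true_params)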

  8. Modeling Laser Effects on the Final Optics in Simulated IFE Environments

    Energy Technology Data Exchange (ETDEWEB)

    Nasr Ghoniem

    2004-08-14

    When laser light interacts with a material's surface, photons rapidly heat the electronic system, resulting in very fast energy transfer to the underlying atomic crystal structure. The intense rate of energy deposition in the shallow sub-surface layer creates atomic defects, which alter the optical characteristics of the surface itself. In addition, the small fraction of energy absorbed in the mirror leads to its global deformation by thermal and gravity loads (especially for large-surface-area mirrors). The aim of this research was to model the deformation of mirror surfaces at multiple length and time scales for applications in advanced Inertial Fusion Energy (IFE) systems, with the goal of controlling micro- and macro-deformations through material system and structural design. A parallel experimental program at UCSD was set up to validate the modeling efforts. The main objective of the research program was to develop computer models and simulations of Laser-Induced Damage (LID) in reflective and transmissive final optical elements in IFE laser-based systems. A range of materials and material concepts was investigated and verified by experiments at UCSD. Four different classes of materials were considered: (1) high-reflectivity FCC metals (e.g. Cu, Au, Ag, and Al); (2) BCC metals (e.g. Mo, Ta and W); (3) advanced material concepts (e.g. functionally graded material systems, amorphous coatings, and layered structures); and (4) transmissive dielectrics (e.g. fused SiO2). In this report, we give a summary of the three-year project, followed by details in three areas: (1) characterization of laser-induced damage; (2) theory development for laser-induced damage thresholds (LIDT); and (3) design of IFE reflective laser mirrors.

  9. Data-driven approach to dynamic visual attention modelling

    Science.gov (United States)

    Culibrk, Dubravko; Sladojevic, Srdjan; Riche, Nicolas; Mancas, Matei; Crnojevic, Vladimir

    2012-06-01

    Visual attention deployment mechanisms allow the Human Visual System to cope with an overwhelming amount of visual data by dedicating most of the processing power to objects of interest. The ability to automatically detect areas of the visual scene that will be attended to by humans is of interest for a large number of applications, from video coding and video quality assessment to scene understanding. Because of this, visual saliency (bottom-up attention) models have generated significant scientific interest in recent years. Most recent work in this area deals with dynamic models of attention that handle moving stimuli (videos) instead of the traditionally used still images. Visual saliency models are usually evaluated against ground-truth eye-tracking data collected from human subjects. However, there are precious few recently published approaches that try to learn saliency from eye-tracking data and, to the best of our knowledge, none that do so for dynamic saliency. This paper attempts to fill that gap and describes an approach to data-driven dynamic saliency model learning. A framework is proposed that enables the use of eye-tracking data to train an arbitrary machine learning algorithm, using arbitrary features derived from the scene. We evaluate the methodology using features from a state-of-the-art dynamic saliency model and show how simple machine learning algorithms can be trained to distinguish between visually salient and non-salient parts of the scene.
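
    The training step such a framework enables can be illustrated with scikit-learn; here the features and fixation labels are synthetic stand-ins for model-derived features and thresholded eye-tracking fixation maps:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Stand-ins for per-block features from a dynamic saliency model (e.g.
      # motion energy, contrast, flicker) and binary labels derived from
      # thresholded eye-tracking fixation density maps.
      n = 5000
      X = rng.normal(size=(n, 3))
      y = (1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.2 * X[:, 2]
           + rng.logistic(size=n)) > 0

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = LogisticRegression().fit(X_tr, y_tr)   # the "arbitrary learner" slot
      print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))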

  10. Polynomial Chaos Expansion Approach to Interest Rate Models

    Directory of Open Access Journals (Sweden)

    Luca Di Persio

    2015-01-01

    Full Text Available The Polynomial Chaos Expansion (PCE) technique allows us to represent a second-order random variable by suitable linear combinations of orthogonal polynomials that are functions of a given stochastic quantity ξ, which hence acts as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for uncertain physical quantities defining the dynamics of certain engineering models and their related simulations. In the present paper, we use the PCE approach to analyze some equity and interest rate models, in particular models based on the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with those obtained by the Monte Carlo approach, in both its standard and its enhanced version.
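
    As a minimal illustration of the PCE idea for the Geometric Brownian Motion case, the sketch below expands the terminal value of a GBM in probabilists' Hermite polynomials of a Gaussian germ ξ and compares the surrogate's moments with Monte Carlo; all parameters are illustrative:

      import math
      import numpy as np
      from numpy.polynomial import hermite_e as He

      # Terminal value of a GBM: S_T = S0 exp((mu - s^2/2) T + s sqrt(T) xi), xi ~ N(0,1).
      S0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0
      f = lambda xi: S0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * xi)

      # PCE coefficients c_k = E[f(xi) He_k(xi)] / k!, with the expectation taken
      # by Gauss-Hermite quadrature (probabilists' weight, normalized by sqrt(2 pi)).
      order = 8
      xq, wq = He.hermegauss(40)
      wq = wq / np.sqrt(2.0 * np.pi)
      c = np.array([(wq * f(xq) * He.hermeval(xq, [0.0] * k + [1.0])).sum()
                    / math.factorial(k) for k in range(order + 1)])

      # Compare the PCE surrogate with direct Monte Carlo on fresh samples.
      xi = np.random.default_rng(0).normal(size=100000)
      surrogate = He.hermeval(xi, c)
      print("mean:", surrogate.mean(), f(xi).mean())
      print("var :", surrogate.var(), f(xi).var())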

  11. Common modelling approaches for training simulators for nuclear power plants

    International Nuclear Information System (INIS)

    1990-02-01

    Training simulators for nuclear power plant operating staff have gained increasing importance over the last twenty years. One of the recommendations of the 1983 IAEA Specialists' Meeting on Nuclear Power Plant Training Simulators in Helsinki was to organize a Co-ordinated Research Programme (CRP) on some aspects of training simulators. The goal statement was: "To establish and maintain a common approach to modelling for nuclear training simulators based on defined training requirements". Before adopting this goal statement, the participants considered many alternatives for defining the common aspects of training simulator models, such as the programming language used, the nature of the simulator computer system, the size of the simulation computers, and the scope of simulation. The participants agreed that it was the training requirements that defined the need for a simulator, the scope of models and hence the type of computer complex that was required, and the criteria for fidelity and verification, and that the training requirements were therefore the most appropriate basis for the commonality of modelling approaches. It should be noted that the Co-ordinated Research Programme was restricted, for a variety of reasons, to considering only a few aspects of training simulators. This report reflects these limitations, and covers only the topics considered within the scope of the programme. The information in this document is intended as an aid for operating organizations to identify possible modelling approaches for training simulators for nuclear power plants. 33 refs

  12. Estimating a DIF decomposition model using a random-weights linear logistic test model approach.

    Science.gov (United States)

    Paek, Insu; Fukuhara, Hirotaka

    2015-09-01

    A differential item functioning (DIF) decomposition model separates a testlet item DIF into two sources: item-specific differential functioning and testlet-specific differential functioning. This article provides an alternative model-building framework and estimation approach for a DIF decomposition model that was proposed by Beretvas and Walker (2012). Although their model is formulated under multilevel modeling with the restricted pseudolikelihood estimation method, our approach illustrates DIF decomposition modeling that is directly built upon the random-weights linear logistic test model framework with the marginal maximum likelihood estimation method. In addition to demonstrating our approach's performance, we provide detailed information on how to implement this new DIF decomposition model using an item response theory software program; using DIF decomposition may be challenging for practitioners, yet practical information on how to implement it has previously been unavailable in the measurement literature.
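
    The flavor of such a decomposition can be conveyed with a Rasch-type response function in which a focal-group testlet item's DIF is split into an item-specific and a testlet-specific part (an illustrative form, not the authors' exact parameterization):

      import math

      def p_correct(theta, b_item, group, dif_item=0.0, dif_testlet=0.0):
          # Rasch-type response probability; for the focal group (group = 1) the
          # item difficulty is shifted by item-specific plus testlet-specific DIF.
          logit = theta - (b_item + group * (dif_item + dif_testlet))
          return 1.0 / (1.0 + math.exp(-logit))

      # Focal-group probability on a testlet item whose DIF is mostly testlet-driven.
      print(p_correct(theta=0.5, b_item=0.0, group=1, dif_item=0.1, dif_testlet=0.4))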

  13. Evaluating Asset Pricing Models in a Simulated Multifactor Approach

    Directory of Open Access Journals (Sweden)

    Wagner Piazza Gaglianone

    2012-12-01

    Full Text Available In this paper a methodology to compare the performance of different stochastic discount factor (SDF) models is suggested. The starting point is the estimation of several factor models in which the choice of the fundamental factors comes from different procedures. Then, a Monte Carlo simulation is designed in order to simulate a set of gross returns with the objective of mimicking the temporal dependency and the observed covariance across gross returns. Finally, the artificial returns are used to investigate the performance of the competing asset pricing models through the Hansen and Jagannathan (1997) distance and some goodness-of-fit statistics of the pricing error. An empirical application is provided for the U.S. stock market.
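
    The Hansen and Jagannathan (1997) distance itself is compact to compute; below is a sketch with simulated gross returns and a hypothetical one-factor linear SDF:

      import numpy as np

      def hj_distance(m, R):
          # Hansen-Jagannathan (1997) distance: delta = sqrt(g' W g), where
          # g = E[m R] - 1 are the pricing errors on gross returns and
          # W = E[R R']^{-1} is the second-moment weighting matrix.
          T = R.shape[0]
          g = R.T @ m / T - 1.0
          W = np.linalg.inv(R.T @ R / T)
          return float(np.sqrt(g @ W @ g))

      # Simulated gross returns on two assets and a one-factor linear SDF.
      rng = np.random.default_rng(0)
      fct = rng.normal(0.0, 0.02, 2000)
      R = 1.01 + np.outer(fct, [1.0, 0.5]) + rng.normal(0.0, 0.01, (2000, 2))
      print(hj_distance(1.0 - 2.0 * fct, R))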

  14. sigma model approach to the heterotic string theory

    International Nuclear Information System (INIS)

    Sen, A.

    1985-09-01

    The relation between the equations of motion for the massless fields in the heterotic string theory and the conformal invariance of the sigma model describing the propagation of the heterotic string in arbitrary background massless fields is discussed. It is emphasized that this sigma model contains complete information about the string theory. Finally, we discuss the extension of the Hull-Witten proof of local gauge and Lorentz invariance of the sigma model to higher order in α', and the modification of the transformation laws of the antisymmetric tensor field under these symmetries. The presence of an anomaly in the naive N = 1/2 supersymmetry transformation is also pointed out in this context. 12 refs

  15. Plenary lecture: innovative modeling approaches applicable to risk assessments.

    Science.gov (United States)

    Oscar, T P

    2011-06-01

    Proper identification of safe and unsafe food at the processing plant is important for maximizing the public health benefit of food by ensuring both its consumption and safety. Risk assessment is a holistic approach to food safety that consists of four steps: 1) hazard identification; 2) exposure assessment; 3) hazard characterization; and 4) risk characterization. Risk assessments are modeled by mapping the risk pathway as a series of unit operations and associated pathogen events and then using probability distributions and a random sampling method to simulate the rare, random, variable and uncertain nature of pathogen events in the risk pathway. To model pathogen events, a rare event modeling approach is used that links a discrete distribution for incidence of the pathogen event with a continuous distribution for extent of the pathogen event. When applied to risk assessment, rare event modeling leads to the conclusion that the most highly contaminated food at the processing plant does not necessarily pose the highest risk to public health because of differences in post-processing risk factors among distribution channels and consumer populations. Predictive microbiology models for individual pathogen events can be integrated with risk assessment models using the rare event modeling method. Published by Elsevier Ltd.
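
    The rare event construction, a discrete distribution for incidence of the pathogen event linked with a continuous distribution for its extent, is easy to mimic in a Monte Carlo sketch; all parameter values below are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      n_servings = 100000

      # Discrete distribution for incidence of the pathogen event...
      prevalence = 0.02
      contaminated = rng.random(n_servings) < prevalence

      # ...linked with a continuous distribution for its extent when it occurs.
      dose = np.where(contaminated, 10.0 ** rng.normal(1.0, 0.8, n_servings), 0.0)

      # Toy exponential dose-response for the risk characterization step.
      p_illness = 1.0 - np.exp(-1e-3 * dose)
      print("expected illnesses per 100,000 servings:", p_illness.sum())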

  16. Final Report for High Latitude Climate Modeling: ARM Takes Us Beyond Case Studies

    Energy Technology Data Exchange (ETDEWEB)

    Russell, Lynn M [Scripps/UCSD; Lubin, Dan [Scripps/UCSD

    2013-06-18

    The main thrust of this project was to devise a method by which the majority of North Slope of Alaska (NSA) meteorological and radiometric data, collected on a daily basis, could be used to evaluate and improve global climate model (GCM) simulations and their parameterizations, particularly for cloud microphysics. Although the standard ARM Program sensors offer a less complete suite of instruments for cloud and aerosol studies than the instruments on an intensive field program such as the 2008 Indirect and Semi-Direct Aerosol Campaign (ISDAC), the advantage they offer lies in the long time base and the large volume of data covering a wide range of meteorological and climatological conditions. The challenge has been devising a method to interpret the NSA data in a practical way, so that a wide variety of meteorological conditions in all seasons can be examined with climate models. If successful, climate modelers would have a robust alternative to the usual “case study” approach (i.e., from intensive field programs only) for testing and evaluating their parameterizations’ performance. Understanding climate change on regional scales requires a broad scientific consideration of anthropogenic influences that goes beyond greenhouse gas emissions to also include aerosol-induced changes in cloud properties. For instance, it is now clear that on small scales, human-induced aerosol plumes can exert microclimatic radiative and hydrologic forcing that rivals that of greenhouse gas–forced warming. This project has made significant scientific progress by investigating why successive versions of climate models continue to exhibit errors in cloud amount, cloud microphysical and radiative properties, precipitation, and radiation balance, as compared with observations, particularly in Arctic regions. To find out what is going wrong, we have tested the models' cloud representation over the full range of meteorological conditions found in the Arctic.

  17. Model-Driven Approach for Body Area Network Application Development

    Directory of Open Access Journals (Sweden)

    Algimantas Venčkauskas

    2016-05-01

    Full Text Available This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy, security and environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage of analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming) as the solution domain (SD) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality and describes the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently, through interactive adjustment of the meta-parameter values and re-generation for the concrete BAN application.

  18. A fuzzy approach for modelling radionuclide in lake system

    International Nuclear Information System (INIS)

    Desai, H.K.; Christian, R.A.; Banerjee, J.; Patra, A.K.

    2013-01-01

    Radioactive liquid waste is generated during operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low level liquid waste is diluted and then discharged into the nearby water body through the blowdown water discharge line, as per standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. An attempt was made to predict the concentration of ³H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under this condition, a fuzzy rule based approach was adopted to develop a model which could predict the ³H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, in which the input parameters were water flow from the lake and ³H concentration at the discharge point, and the output was ³H concentration at Ratania Regulator. These data points were generated by multiple regression analysis of the original data. By the same methodology, a hundred data points were generated for the validation of the model and compared against the predicted output generated by the fuzzy rule based approach. The root mean square error of the model came out to be 1.95, showing that the fuzzy model imitates the natural ecosystem well. -- Highlights: • Uncommon approach (fuzzy rule base) to modelling radionuclide dispersion in a lake. • Predicts ³H released from Kakrapar Atomic Power Station at a point of human exposure. • RMSE of the fuzzy model is 1.95, which means it imitates the natural ecosystem well
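
    A heavily simplified stand-in for such a fuzzy rule base is sketched below (zero-order Sugeno style rather than the paper's exact formulation); the membership ranges and rule consequents are made-up placeholders:

      import numpy as np

      def low(x, span):   # "low" shoulder membership on [0, span]
          return np.clip((span - x) / span, 0.0, 1.0)

      def high(x, span):  # "high" shoulder membership on [0, span]
          return np.clip(x / span, 0.0, 1.0)

      def predict_tritium(flow, conc, flow_span=100.0, conc_span=300.0):
          # Rule weight = product of input memberships; output = weighted
          # average of the rule consequents (all values hypothetical).
          w = np.array([low(flow, flow_span) * low(conc, conc_span),
                        low(flow, flow_span) * high(conc, conc_span),
                        high(flow, flow_span) * low(conc, conc_span),
                        high(flow, flow_span) * high(conc, conc_span)])
          z = np.array([5.0, 120.0, 2.0, 40.0])   # consequent concentrations
          return float((w * z).sum() / w.sum())

      # Low flow and a high source concentration -> little dilution downstream.
      print(predict_tritium(flow=30.0, conc=200.0))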

  19. Modeling energy fluxes in heterogeneous landscapes employing a mosaic approach

    Science.gov (United States)

    Klein, Christian; Thieme, Christoph; Priesack, Eckart

    2015-04-01

    Recent studies show that uncertainties in regional and global climate and weather simulations are partly due to inadequate descriptions of the energy flux exchanges between the land surface and the atmosphere. One major shortcoming is the limitation of the grid-cell resolution, which is recommended to be at least about 3×3 km² in most models due to limitations in the model physics. To represent each individual grid cell, most models select one dominant soil type and one dominant land use type. This resolution, however, is often too coarse in regions where the spatial diversity of soil and land use types is high, e.g. in Central Europe. An elegant method to avoid this shortcoming of grid cell resolution is the so-called mosaic approach, which is part of the recently developed ecosystem model framework Expert-N 5.0. The aim of this study was to analyze the impact of the characteristics of two managed fields, planted with winter wheat and potato, on the near-surface soil moisture and on the near-surface energy flux exchanges at the soil-plant-atmosphere interface. The simulated energy fluxes were compared with eddy flux tower measurements between the respective fields at the research farm Scheyern, north-west of Munich, Germany. To perform these simulations, we coupled the ecosystem model Expert-N 5.0 to an analytical footprint model. The coupled model system can calculate the mixing ratio of the surface energy fluxes at a given point within one grid cell (in this case at the flux tower between the two fields). This approach accounts for the differences between the two soil types, the land use managements, and the canopy properties due to footprint size dynamics. Our preliminary simulation results show that a mosaic approach can improve the modeling and analysis of energy fluxes when the land surface is heterogeneous. In this case our method is a promising approach to extend weather and climate models on the regional and the global scale.
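
    The core of a mosaic approach is an area- or footprint-weighted mixing of tile-wise fluxes, as in this minimal sketch with made-up numbers:

      # A grid cell's flux is the footprint- (or area-) weighted mix of fluxes
      # simulated separately for each subgrid tile; numbers are illustrative.
      tile_fluxes = {"winter_wheat": 310.0, "potato": 245.0}   # latent heat, W/m^2
      weights = {"winter_wheat": 0.64, "potato": 0.36}         # footprint weights

      flux_at_tower = sum(weights[k] * tile_fluxes[k] for k in tile_fluxes)
      print(f"mixed latent heat flux: {flux_at_tower:.1f} W/m^2")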

  20. Toward the Development of a Cold Regions Regional-Scale Hydrologic Model, Final Project Report

    Energy Technology Data Exchange (ETDEWEB)

    Hinzman, Larry D [Univ. of Alaska, Fairbanks, AK (United States); Bolton, William Robert [Univ. of Alaska, Fairbanks, AK (United States); Young-Robertson, Jessica (Cable) [Univ. of Alaska, Fairbanks, AK (United States)

    2018-01-02

    This project improves meso-scale hydrologic modeling in the boreal forest by: (1) demonstrating the importance of capturing the heterogeneity of the landscape using small scale datasets for parameterization for both small and large basins; (2) demonstrating that in drier parts of the landscape, and as the boreal forest dries with climate change, modeling approaches must consider the sensitivity of simulations to soil hydraulic parameters - such as residual water content - that are usually held constant. Thus, variability and flexibility in residual water content must be considered for accurate simulation of hydrologic processes in the boreal forest; (3) demonstrating that assessing climate change impacts on boreal forest hydrology through multiple model integration must account for the direct effects of climate change (temperature and precipitation) and the indirect effects of climate impacts on landscape characteristics (permafrost and vegetation distribution). Simulations demonstrated that climate change will increase runoff, but will increase ET to a greater extent and result in a drying of the landscape; and (4) demonstrating that vegetation plays a significant role in boreal hydrologic processes in permafrost-free areas that have deciduous trees. This landscape type results in a decoupling of ET and precipitation, a tight coupling of ET and temperature, low runoff, and overall soil drying.

  1. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, leads to misleading statistical inference and analysis. Therefore, our goal is to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying the Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
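
    As a sketch of the MCMC machinery mentioned above, the following random-walk Metropolis sampler fits a plain Poisson regression while naively using the mismeasured surrogate, reproducing the attenuation that motivates the EIV correction; a full EIV treatment would additionally place a prior on the unobserved true covariate and sample it jointly:

      import numpy as np

      rng = np.random.default_rng(0)

      # Simulated data: true covariate x, observed surrogate w = x + noise.
      n = 300
      x = rng.normal(0.0, 1.0, n)
      w = x + rng.normal(0.0, 0.5, n)             # mismeasured covariate
      y = rng.poisson(np.exp(0.3 + 0.8 * x))      # true slope 0.8

      def log_post(beta, covariate):
          # Poisson regression log-posterior (up to a constant), N(0, 10^2) priors.
          eta = beta[0] + beta[1] * covariate
          return np.sum(y * eta - np.exp(eta)) - np.sum(beta ** 2) / (2.0 * 10.0 ** 2)

      # Random-walk Metropolis on (beta0, beta1), naively plugging in w for x.
      beta, draws = np.zeros(2), []
      for _ in range(20000):
          prop = beta + rng.normal(0.0, 0.05, 2)
          if np.log(rng.random()) < log_post(prop, w) - log_post(beta, w):
              beta = prop
          draws.append(beta)
      draws = np.array(draws)[5000:]
      # The naive slope is attenuated toward zero (about 0.64 here, not 0.8),
      # which is the bias the Bayesian EIV formulation is designed to correct.
      print("posterior means:", draws.mean(axis=0))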

  2. Pressure sintering and creep deformation: a joint modeling approach

    International Nuclear Information System (INIS)

    Notis, M.R.

    1979-10-01

    Work related to microchemical and microstructural aspects of the joint modeling of pressure sintering and creep in ceramic oxides is reported. Quantitative techniques were developed for the microchemical analysis of ceramic oxides and for the examination of impurity segregation effects in polycrystalline ceramic materials, including fundamental absorption corrections for the oxygen anion species as a function of foil thickness. The evolution of microstructure during the transition from intermediate-stage to final-stage densification during hot pressing of cobalt oxide was studied, along with preliminary studies of doped oxides. This work shows promise for using time-integrated microstructural effects to elucidate the role of impurities in the sintering of ceramic materials.

  3. Pressure sintering and creep deformation: a joint modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Notis, M.R.

    1979-10-01

    Work related to microchemical and microstructural aspects of the joint modeling of pressure sintering and creep in ceramic oxides is reported. Quantitative techniques were developed for the microchemical analysis of ceramic oxides and for the examination of impurity segregation effects in polycrystalline ceramic materials, including fundamental absorption corrections for the oxygen anion species as a function of foil thickness. The evolution of microstructure during the transition from intermediate-stage to final-stage densification during hot pressing of cobalt oxide was studied, along with preliminary studies of doped oxides. This work shows promise for using time-integrated microstructural effects to elucidate the role of impurities in the sintering of ceramic materials.

  4. A screening-level modeling approach to estimate nitrogen ...

    Science.gov (United States)

    This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess the risk that surface waters exceed numerical nutrient standards, leading to potential classification as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process-based approach to estimate the sources of pollutants and their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods, depending on the temporal scale, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability that a nutrient load exceeds a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with the export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and the risk of standard exceedance were evaluated.
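
    The export-coefficient loading and exceedance-risk calculation can be sketched as follows; the land uses, coefficients, and target load are hypothetical, not the WQM-TMDL-N values:

      import numpy as np

      rng = np.random.default_rng(0)

      # Export-coefficient NPS loading, L = sum_i EC_i * A_i, with lognormal
      # uncertainty on each land use's export coefficient (values illustrative).
      areas_ha = {"row_crop": 1200.0, "pasture": 400.0, "urban": 150.0}
      ec_kg_ha = {"row_crop": 18.0, "pasture": 6.0, "urban": 9.0}
      sigma_ln = 0.3

      n_sim = 10000
      load = sum(a * rng.lognormal(np.log(ec_kg_ha[lu]), sigma_ln, n_sim)
                 for lu, a in areas_ha.items())
      load += 900.0       # point sources plus atmospheric deposition, kg N/yr

      target = 30000.0    # hypothetical target (TMDL) load, kg N/yr
      print("P(load > target) =", (load > target).mean())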

  5. A fuzzy approach for modelling radionuclide in lake system.

    Science.gov (United States)

    Desai, H K; Christian, R A; Banerjee, J; Patra, A K

    2013-10-01

    Radioactive liquid waste is generated during operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low level liquid waste is diluted and then discharged into the nearby water body through the blowdown water discharge line, as per standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. An attempt was made to predict the concentration of ³H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under this condition, a fuzzy rule based approach was adopted to develop a model which could predict the ³H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, in which the input parameters were water flow from the lake and ³H concentration at the discharge point, and the output was ³H concentration at Ratania Regulator. These data points were generated by multiple regression analysis of the original data. By the same methodology, a hundred data points were generated for the validation of the model and compared against the predicted output generated by the fuzzy rule based approach. The root mean square error of the model came out to be 1.95, showing that the fuzzy model imitates the natural ecosystem well. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. BUSINESS MODEL ROADMAPPING: A PRACTICAL APPROACH TO COME FROM AN EXISTING TO A DESIRED BUSINESS MODEL

    OpenAIRE

    MARK DE REUVER; HARRY BOUWMAN; TIMBER HAAKER

    2013-01-01

    Literature on business models deals extensively with how to design new business models, but hardly with how to make the transition from an existing to a newly designed business model. The transition to a new business model raises several practical and strategic issues, such as how to replace an existing value proposition with a new one, when to acquire new resources and capabilities, and when to start new partnerships. In this paper, we coin the term business model roadmapping as an approach ...

  7. Final report of MoReMO 2011-2012. Modelling resilience for maintenance and outage

    Energy Technology Data Exchange (ETDEWEB)

    Gotcheva, N.; Macchi, L.; Oedewald, P. [Technical Research Centre of Finland (VTT), Espoo (Finland); Eitrheim, M.H.R. [Institute for Energy Technology (IFE) (Norway); Axelsson, C.; Reiman, T.; Pietikaeinen, E. [Ringhals AB (NPP), Vattenfall AB (Sweden)

    2013-04-15

    The project Modelling Resilience for Maintenance and Outage (MoReMO) represents a two-year joint effort by VTT Technical Research Centre of Finland, Institute for Energy Technology (IFE, Norway) and Vattenfall (Sweden) to develop and test new approaches for safety management. The overall goal of the project was to present concepts on how resilience can be operationalized and built in a safety-critical and socio-technical context. Furthermore, the project also aimed at providing guidance for other organizations that strive to develop and improve their safety performance in a business-driven industry. We have applied four approaches in different case studies: Organisational Core Task modelling (OCT), Functional Resonance Analysis Method (FRAM), Efficiency Thoroughness Trade-Off (ETTO) analysis, and Work Practice and Culture Characterisation. During 2011 and 2012 the MoReMO project team has collected data through field observations, interviews, workshops, and document analysis on the work practices and adjustments in maintenance and outage in Nordic NPPs. The project consisted of two sub-studies, one focused on identifying and assessing adjustments and supporting resilient work practices in maintenance activities, while the other focused on handling performance trade-offs in maintenance and outage, as follows: A. Adjustments in maintenance work in Nordic nuclear power plants (VTT and Vattenfall). B. Handling performance trade-offs - the support of adaptive capacities (IFE and Vattenfall). The historical perspective of maintenance and outage management (Chapter 1.1) was provided by Vattenfall. Together, the two sub-studies have provided valuable insights for understanding the rationale behind work practices and adjustments, their effects on resilience, promoting flexibility, and balancing flexibility against reliability. (Author)

  8. Final report of MoReMO 2011-2012. Modelling resilience for maintenance and outage

    International Nuclear Information System (INIS)

    Gotcheva, N.; Macchi, L.; Oedewald, P.; Eitrheim, M.H.R.; Axelsson, C.; Reiman, T.; Pietikaeinen, E.

    2013-04-01

    The project Modelling Resilience for Maintenance and Outage (MoReMO) represents a two-year joint effort by VTT Technical Research Centre of Finland, Institute for Energy Technology (IFE, Norway) and Vattenfall (Sweden) to develop and test new approaches for safety management. The overall goal of the project was to present concepts on how resilience can be operationalized and built in a safety-critical and socio-technical context. Furthermore, the project also aimed at providing guidance for other organizations that strive to develop and improve their safety performance in a business-driven industry. We have applied four approaches in different case studies: Organisational Core Task modelling (OCT), Functional Resonance Analysis Method (FRAM), Efficiency Thoroughness Trade-Off (ETTO) analysis, and Work Practice and Culture Characterisation. During 2011 and 2012 the MoReMO project team has collected data through field observations, interviews, workshops, and document analysis on the work practices and adjustments in maintenance and outage in Nordic NPPs. The project consisted of two sub-studies, one focused on identifying and assessing adjustments and supporting resilient work practices in maintenance activities, while the other focused on handling performance trade-offs in maintenance and outage, as follows: A. Adjustments in maintenance work in Nordic nuclear power plants (VTT and Vattenfall). B. Handling performance trade-offs - the support of adaptive capacities (IFE and Vattenfall). The historical perspective of maintenance and outage management (Chapter 1.1) was provided by Vattenfall. Together, the two sub-studies have provided valuable insights for understanding the rationale behind work practices and adjustments, their effects on resilience, promoting flexibility, and balancing flexibility against reliability. (Author)

  9. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Full Text Available Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using the invertible model, and the auditory representation is quantized and coded. Upon decoding, it is transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  10. A modal approach to modeling spatially distributed vibration energy dissipation.

    Energy Technology Data Exchange (ETDEWEB)

    Segalman, Daniel Joseph

    2010-08-01

    The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.
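
    The strategy can be caricatured in a few lines: integrate decoupled modal equations in which each mode keeps its linear stiffness while a nonlinear, amplitude-dependent term supplies the joint-like dissipation. The power-law damping below is a generic stand-in, not the constitutive form developed in the report:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Two retained linear modes; all nonlinearity lives in the modal damping.
      omega = 2.0 * np.pi * np.array([10.0, 27.0])   # modal frequencies (rad/s)
      c_lin = np.array([0.5, 0.8])                   # linear damping coefficients
      c_nl = np.array([30.0, 45.0])                  # joint-like nonlinear damping
      p = 1.5                                        # power-law exponent

      def rhs(t, s):
          q, qdot = s[:2], s[2:]
          # dissipation force grows faster than linearly with modal velocity
          damping = c_lin * qdot + c_nl * np.abs(qdot) ** p * np.sign(qdot)
          return np.concatenate([qdot, -omega ** 2 * q - damping])

      sol = solve_ivp(rhs, (0.0, 2.0), [1e-3, 5e-4, 0.0, 0.0],
                      max_step=1e-3, rtol=1e-8)
      print("final modal amplitudes:", np.abs(sol.y[:2, -1]))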

  11. Building spatio-temporal database model based on ontological approach using relational database environment

    International Nuclear Information System (INIS)

    Mahmood, N.; Burney, S.M.A.

    2017-01-01

    Everything in this world is bounded by space and time. Our daily activities are closely linked with other objects in our vicinity, and our current location, the time (past, present and future) and the events through which we move also affect our activities. Ontology development and its integration with databases are vital for a true understanding of complex systems involving both spatial and temporal dimensions. In this paper we propose a conceptual framework for building a spatio-temporal database model based on an ontological approach. We use the relational data model for modelling spatio-temporal data content and present our methodology with its spatio-temporal ontological aspects and their transformation into a spatio-temporal database model. We illustrate the implementation of our conceptual model through a case study of cultivated land parcels used for agriculture, exhibiting the spatio-temporal behaviour of agricultural land and related entities. Moreover, the framework provides a generic approach for designing spatio-temporal databases based on ontology. The proposed model is capable of capturing ontological and, to some extent, epistemological commitments, and of building a spatio-temporal ontology and transforming it into a spatio-temporal data model. Finally, we highlight existing and future research challenges. (author)

  12. Site-conditions map for Portugal based on VS measurements: methodology and final model

    Science.gov (United States)

    Vilanova, Susana; Narciso, João; Carvalho, João; Lopes, Isabel; Quinta Ferreira, Mario; Moura, Rui; Borges, José; Nemser, Eliza; Pinto, carlos

    2017-04-01

    In this paper we present a statistically significant site-condition model for Portugal based on shear-wave velocity (VS) data and surface geology. We also evaluate the performance of commonly used VS30 proxies based on exogenous data and analyze the implications of using those proxies for calculating site amplification in seismic hazard assessment. The dataset contains 161 VS profiles acquired in Portugal in the context of research projects, technical reports, academic theses and academic papers. The methodologies involved in characterizing the VS structure at the sites in the database include seismic refraction, multichannel analysis of seismic waves and refraction microtremor. Invasive measurements were performed in selected locations in order to compare the VS profiles obtained from invasive and non-invasive techniques; in general there was good agreement in the subsurface VS structure obtained from the different methodologies. The database flat-file includes information on VS30, surface geology at 1:50,000 and 1:500,000 scales, and elevation and topographic slope based on the SRTM30 topographic dataset. The procedure used to develop the site-conditions map is a three-step process: defining a preliminary set of geological units based on the literature, performing statistical tests to assess whether or not the differences in the distributions of VS30 are statistically significant, and merging the geological units accordingly. The dataset was, to some extent, affected by clustering and/or preferential sampling, and therefore a declustering algorithm was applied. The final model includes three geological units: (1) igneous, metamorphic and old (Paleogene and Mesozoic) sedimentary rocks; (2) Neogene and Pleistocene formations; and (3) Holocene formations. The evaluation of proxies indicates that although geological analogues and topographic slope are in general unbiased, the latter shows significant bias for particular geological units.
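
    For reference, VS30 is the time-averaged shear-wave velocity over the top 30 m of a profile, as in this small helper (the example profile is invented):

      def vs30(thicknesses_m, velocities_ms):
          # Time-averaged shear-wave velocity over the top 30 m:
          #   Vs30 = 30 / sum_i (h_i / v_i), truncating the profile at 30 m.
          depth, travel_time = 0.0, 0.0
          for h, v in zip(thicknesses_m, velocities_ms):
              h = min(h, 30.0 - depth)
              if h <= 0.0:
                  break
              travel_time += h / v
              depth += h
          if depth < 30.0:
              raise ValueError("profile shallower than 30 m")
          return 30.0 / travel_time

      # Example: 5 m of soft soil over 10 m of stiff sediment over weathered rock.
      print(vs30([5.0, 10.0, 40.0], [180.0, 400.0, 900.0]))   # about 432 m/s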

  13. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we consider the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and discuss two different models. In the first model the reliabilities of the subsystems are considered as different objectives. In the second model the cost and the time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective Nonlinear Programming Problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership functions of each objective function, transform the membership functions into equivalent linear membership functions by first-order Taylor series, and finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  14. Final Project Report: Development of Micro-Structural Mitigation Strategies for PEM Fuel Cells: Morphological Simulations and Experimental Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Wessel, Silvia [Ballard Materials Products; Harvey, David [Ballard Materials Products

    2013-06-28

    The durability of PEM fuel cells is a primary requirement for large scale commercialization of these power systems in transportation and stationary market applications that target operational lifetimes of 5,000 hours and 40,000 hours by 2015, respectively. Key degradation modes contributing to fuel cell lifetime limitations have been largely associated with the platinum-based cathode catalyst layer. Furthermore, as fuel cells are driven to low cost materials and lower catalyst loadings in order to meet the cost targets for commercialization, the catalyst durability has become even more important. While over the past few years significant progress has been made in identifying the underlying causes of fuel cell degradation and key parameters that greatly influence the degradation rates, many gaps with respect to knowledge of the driving mechanisms still exist; in particular, the acceleration of the mechanisms due to different structural compositions and under different fuel cell conditions remains an area not well understood. The focus of this project was to address catalyst durability by using a dual path approach that coupled an extensive range of experimental analysis and testing with a multi-scale modeling approach. With this, the major technical areas/issues of catalyst and catalyst layer performance and durability that were addressed are: 1. Catalyst and catalyst layer degradation mechanisms (Pt dissolution, agglomeration, Pt loss, e.g. Pt in the membrane, carbon oxidation and/or corrosion). a. Driving force for the different degradation mechanisms. b. Relationships between MEA performance, catalyst and catalyst layer degradation and operational conditions, catalyst layer composition, and structure. 2. Materials properties a. Changes in catalyst, catalyst layer, and MEA materials properties due to degradation. 3. Catalyst performance a. Relationships between catalyst structural changes and performance. b. Stability of the three-phase boundary and its effect on

  15. A Novel Approach to Implement Takagi-Sugeno Fuzzy Models.

    Science.gov (United States)

    Chang, Chia-Wen; Tao, Chin-Wang

    2017-09-01

    This paper proposes new algorithms based on the fuzzy c-regression model algorithm for Takagi-Sugeno (T-S) fuzzy modeling of complex nonlinear systems. A fuzzy c-regression state model (FCRSM) algorithm is a T-S fuzzy model in which the functional antecedent and the state-space-model-type consequent are considered with the available input-output data. The antecedent and consequent forms of the proposed FCRSM offer two main advantages: first, the FCRSM has a low computational load, because only one input variable is considered in the antecedent part; second, the unknown system can be modeled not only in polynomial form but also in state-space form. Moreover, the FCRSM can be extended to the FCRSM-ND and FCRSM-Free algorithms. The FCRSM-ND algorithm is presented to find the T-S fuzzy state-space model of a nonlinear system when the input-output data cannot be precollected and an assumed effective controller is available. In practical applications, the mathematical model of the controller may be hard to obtain. In this case, an online tuning algorithm, FCRSM-Free, is designed such that the parameters of a T-S fuzzy controller and the T-S fuzzy state model of an unknown system can be tuned online simultaneously. Four numerical simulations are given to demonstrate the effectiveness of the proposed approach.
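
    The underlying fuzzy c-regression iteration (membership-weighted least squares alternated with residual-driven membership updates) can be sketched as follows; this is the textbook switching-regression form, not the paper's FCRSM code:

      import numpy as np

      def fcrm(x, y, n_clusters=2, m=2.0, n_iter=50, seed=0):
          # Fuzzy c-regression models: alternate between membership-weighted
          # least squares per cluster and a membership update driven by the
          # squared regression residuals.
          rng = np.random.default_rng(seed)
          Xb = np.column_stack([np.ones_like(x), x])       # design with bias
          U = rng.dirichlet(np.ones(n_clusters), size=len(y))
          for _ in range(n_iter):
              betas = [np.linalg.solve(Xb.T @ (Xb * (U[:, i] ** m)[:, None]),
                                       Xb.T @ ((U[:, i] ** m) * y))
                       for i in range(n_clusters)]
              E = np.column_stack([(y - Xb @ b) ** 2 for b in betas]) + 1e-12
              U = E ** (-1.0 / (m - 1.0))
              U /= U.sum(axis=1, keepdims=True)
          return np.array(betas), U

      # Two noisy linear regimes; the algorithm should recover both lines.
      rng = np.random.default_rng(1)
      x = rng.uniform(-1.0, 1.0, 400)
      y = np.where(x > 0, 2.0 * x + 0.5, -1.0 * x - 0.2) + rng.normal(0, 0.05, 400)
      betas, U = fcrm(x, y)
      print(betas)   # rows near [0.5, 2.0] and [-0.2, -1.0] (order may swap)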

  16. Systematic approach to verification and validation: High explosive burn models

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data are scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in the electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code.

  17. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents a decision support system for the assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertainties ...

  18. Transforming the representation of the boundary layer and low clouds for high-resolution regional climate modeling: Final report

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Alex [University of California, Los Angeles, CA (United States). Joint Institute for Regional Earth System Science and Engineering

    2013-07-24

    the mostly dry mountain-breeze circulations force an additional component that results in semi-diurnal variations near the coast. A series of numerical tests, however, reveals sensitivity of the simulations to the choice of vertical grid, limiting the possibility of solid quantitative statements on the amplitudes and phases of the diurnal and semidiurnal components across the domain. According to our experiments, the Mellor-Yamada-Nakanishi-Niino (MYNN) boundary layer scheme combined with the WSM6 microphysics scheme performs best. For that combination, mean cloud cover, liquid water path, and cloud depth are fairly well simulated, while mean cloud top height remains too low in comparison to observations. Both microphysics and boundary layer schemes contribute to the spread in liquid water path and cloud depth, although the microphysics contribution is slightly more prominent. Boundary layer schemes are the primary contributors to cloud top height, degree of adiabaticity, and cloud cover. Cloud top height is closely related to surface fluxes and boundary layer structure. Thus, our study infers that an appropriate tuning of cloud top height would likely improve the low-cloud representation in the model. Finally, we show that entrainment governs the degree of adiabaticity, while boundary layer decoupling is a control on cloud cover. In the intercomparison study using WRF single-column model experiments, most parameterizations show poor agreement of the vertical boundary layer structure when compared with large-eddy simulation models. We also implement a new Total-Energy/Mass-Flux boundary layer scheme into the WRF model and evaluate its ability to simulate both stratocumulus and shallow cumulus clouds. Comparisons against large-eddy simulation show that this advanced parameterization, based on the new Eddy-Diffusivity/Mass-Flux approach, provides better performance than other boundary layer parameterizations.

  19. A nonlinear optimal control approach to stabilization of a macroeconomic development model

    Science.gov (United States)

    Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.

    2017-11-01

    A nonlinear optimal (H-infinity) control approach is proposed for the problem of stabilizing the dynamics of a macroeconomic development model known as the Grossman-Helpman model of endogenous product cycles. The dynamics of the macroeconomic development model is divided into two parts. The first describes economic activities in a developed country, and the second describes the variation of economic activities in a country under development which tries to modify its production so as to serve the needs of the developed country. The article shows that through control of the macroeconomic model of the developed country, one can finally control the dynamics of the economy of the country under development. The control method through which this is achieved is nonlinear H-infinity control. The macroeconomic model for the country under development undergoes approximate linearization around a temporary operating point, defined at each time instant by the present value of the system's state vector and the last value of the control input vector that was exerted on it. The linearization is based on Taylor series expansion and the computation of the associated Jacobian matrices. For the linearized model an H-infinity feedback controller is computed, whose gain is calculated by solving an algebraic Riccati equation at each iteration of the control method. The asymptotic stability of the control approach is proven through Lyapunov analysis. This assures that the state variables of the macroeconomic model of the country under development will finally converge to the designated reference values.
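
    The linearize-then-Riccati step can be sketched with SciPy; here a standard LQR-type Riccati solve stands in for the H-infinity Riccati described in the abstract, and the 2x2 system matrices are made-up placeholders:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Jacobians of the (hypothetical) dynamics xdot = f(x, u) at the current
      # operating point, playing the role of the Taylor-series linearization.
      A = np.array([[0.1, 0.4],
                    [0.0, -0.3]])
      B = np.array([[0.0],
                    [1.0]])
      Q = np.eye(2)                      # state weighting
      R = np.array([[1.0]])              # control weighting

      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)    # feedback gain, u = -K (x - x*)
      print("feedback gain K:", K)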

  20. Innovation Networks New Approaches in Modelling and Analyzing

    CERN Document Server

    Pyka, Andreas

    2009-01-01

    The science of graphs and networks has become by now a well-established tool for modelling and analyzing a variety of systems with a large number of interacting components. Starting from the physical sciences, applications have spread rapidly to the natural and social sciences, as well as to economics, and are now further extended, in this volume, to the concept of innovations, viewed broadly. In an abstract, systems-theoretical approach, innovation can be understood as a critical event which destabilizes the current state of the system, and results in a new process of self-organization leading to a new stable state. The contributions to this anthology address different aspects of the relationship between innovation and networks. The various chapters incorporate approaches in evolutionary economics, agent-based modeling, social network analysis and econophysics and explore the epistemic tension between insights into economics and society-related processes, and the insights into new forms of complex dynamics.

  1. Research on teacher education programs: logic model approach.

    Science.gov (United States)

    Newton, Xiaoxia A; Poon, Rebecca C; Nunes, Nicole L; Stone, Elisa M

    2013-02-01

    Teacher education programs in the United States face increasing pressure to demonstrate their effectiveness through pupils' learning gains in classrooms where program graduates teach. The link between teacher candidates' learning in teacher education programs and pupils' learning in K-12 classrooms implicit in the policy discourse suggests a one-to-one correspondence. However, the logical steps leading from what teacher candidates have learned in their programs to what they are doing in classrooms that may contribute to their pupils' learning are anything but straightforward. In this paper, we argue that the logic model approach from scholarship on evaluation can enhance research on teacher education by making explicit the logical links between program processes and intended outcomes. We demonstrate the usefulness of the logic model approach through our own work on designing a longitudinal study that focuses on examining the process and impact of an undergraduate mathematics and science teacher education program. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Understanding complex urban systems multidisciplinary approaches to modeling

    CERN Document Server

    Gurr, Jens; Schmidt, J

    2014-01-01

    Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, and in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volume ...

  3. A Model-Driven Approach for 3D Modeling of Pylon from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Qingquan Li

    2015-09-01

    Full Text Available Automatically reconstructing three-dimensional models of pylons from LiDAR (Light Detection And Ranging) point clouds is one of the key techniques for the facilities-management GIS of a nationwide high-voltage transmission smart grid. This paper presents a model-driven three-dimensional pylon modeling (MD3DM) method using airborne LiDAR data. We start by constructing a parametric model of a pylon, based on its actual structure and the characteristics of the point cloud data. In this model, a pylon is divided into three parts: legs, body and head. The modeling approach consists of four main steps. Firstly, the point clouds of individual pylons are detected and segmented automatically from the massive high-voltage transmission corridor point cloud. Secondly, an individual pylon is divided into three relatively simple parts so that different parts can be reconstructed with different strategies; its position and direction are extracted by contour analysis of the pylon body at this stage. Thirdly, the geometric features of the pylon head are extracted, from which the head type is derived with a Support Vector Machine (SVM) classifier. After that, the head is constructed by retrieving the corresponding model from a pre-built model library. Finally, the body is modeled by fitting planes to the point cloud. Experimental results on several point cloud data sets from the China Southern high-voltage nationwide transmission grid, from Yunnan Province to Guangdong Province, show that the proposed approach can achieve automatic three-dimensional modeling of pylons effectively.

  4. Modelling transport energy demand: A socio-technical approach

    International Nuclear Information System (INIS)

    Anable, Jillian; Brand, Christian; Tran, Martino; Eyre, Nick

    2012-01-01

    Despite an emerging consensus that societal energy consumption and related emissions are not only influenced by technical efficiency but also by lifestyles and socio-cultural factors, few attempts have been made to operationalise these insights in models of energy demand. This paper addresses that gap by presenting a scenario exercise using an integrated suite of sectoral and whole systems models to explore potential energy pathways in the UK transport sector. Techno-economic driven scenarios are contrasted with one in which social change is strongly influenced by concerns about energy use, the environment and well-being. The ‘what if’ Lifestyle scenario reveals a future in which distance travelled by car is reduced by 74% by 2050 and final energy demand from transport is halved compared to the reference case. Despite the more rapid uptake of electric vehicles and the larger share of electricity in final energy demand, it shows a future where electricity decarbonisation could be delayed. The paper illustrates the key trade-off between the more aggressive pursuit of purely technological fixes and demand reduction in the transport sector and concludes there are strong arguments for pursuing both demand and supply side solutions in the pursuit of emissions reduction and energy security.

  5. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  6. DISCRETE LATTICE ELEMENT APPROACH FOR ROCK FAILURE MODELING

    Directory of Open Access Journals (Sweden)

    Mijo Nikolić

    2017-01-01

    Full Text Available This paper presents the ‘discrete lattice model’, or, simply, the ‘lattice model’, developed for rock failure modeling. The main difficulties in numerical modeling, namely, those related to complex crack initiations and multiple crack propagations, their coalescence under the influence of natural disorder, and heterogeneities, are overcome using the approach presented in this paper. The lattice model is constructed as an assembly of Timoshenko beams, representing the cohesive links between the grains of the material, which are described by Voronoi polygons. The kinematics of the Timoshenko beams are enhanced by the embedded strong discontinuities in their axial and transversal directions so as to provide failure modes I, II, and III. The model presented is suitable for meso-scale rock simulations. The representative numerical simulations, in both 2D and 3D settings, are provided in order to illustrate the model’s capabilities.
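
    A minimal sketch of how such a lattice can be assembled, assuming grains are generated from random nuclei: each Voronoi facet shared by two cells yields one cohesive link between grain centres. The Timoshenko beam mechanics and the embedded strong discontinuities are omitted here.

```python
# Minimal sketch (not the authors' code): build a 2D lattice of
# cohesive links between Voronoi grains. Each Voronoi ridge shared by
# two cells gives one beam connecting the two grain centres.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
grains = rng.random((100, 2))          # grain centres (Voronoi nuclei)
vor = Voronoi(grains)

beams = []
for (i, j) in vor.ridge_points:        # cells i and j share a facet
    length = np.linalg.norm(grains[i] - grains[j])
    beams.append((i, j, length))

print(f"{len(beams)} cohesive links between {len(grains)} grains")
```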

  7. Computer Modeling of Violent Intent: A Content Analysis Approach

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  8. A fuzzy approach to the Weighted Overlap Dominance model

    DEFF Research Database (Denmark)

    Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt

    2013-01-01

    Decision support models are required to handle the various aspects of multi-criteria decision problems in order to help the individual decision maker understand the possible solutions. In this sense, such models have to be capable of aggregating and exploiting different types of measurements and evaluations...... in an interactive way, where input data can take the form of uniquely-graded or interval-valued information. Here we explore the Weighted Overlap Dominance (WOD) model from a fuzzy perspective and its outranking approach to decision support and multidimensional interval analysis. Firstly, imprecision measures...... are introduced for characterizing the type of uncertainty being expressed by intervals, examining at the same time how the WOD model handles both non-interval as well as interval data, and secondly, relevance degrees are proposed for obtaining a ranking over the alternatives. Hence, a complete methodology...

  9. A reservoir simulation approach for modeling of naturally fractured reservoirs

    Directory of Open Access Journals (Sweden)

    H. Mohammadi

    2012-12-01

    Full Text Available In this investigation, the Warren and Root model proposed for the simulation of naturally fractured reservoirs was improved. A reservoir simulation approach was used to develop a 2D model of a synthetic oil reservoir. The main rock properties of each gridblock were defined for two different types of gridblocks, called matrix and fracture gridblocks. These two gridblock types differed in porosity and permeability values, which were higher for fracture gridblocks than for matrix gridblocks. The model was solved using the implicit finite difference method. Results showed an improvement in the Warren and Root model, especially in region 2 of the semilog plot of pressure drop versus time, which indicated a linear transition zone with no inflection point, as predicted by other investigators. Effects of fracture spacing, fracture permeability, fracture porosity, matrix permeability and matrix porosity on the behavior of a typical naturally fractured reservoir were also presented.
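
    A stripped-down illustration of the numerical core, under assumed property values (not the paper's simulator): 1D single-phase pressure diffusion over a grid that alternates matrix and fracture blocks, solved fully implicitly.

```python
# Illustrative sketch: 1D pressure diffusion on a grid alternating
# low-permeability "matrix" blocks with high-permeability "fracture"
# blocks, advanced with a fully implicit (backward Euler) scheme.
# All property values are made up for illustration.
import numpy as np
from scipy.sparse import lil_matrix, csc_matrix
from scipy.sparse.linalg import spsolve

n, dt, dx = 100, 0.1, 1.0
k = np.where(np.arange(n) % 10 == 0, 100.0, 1.0)    # fracture vs matrix
phi = np.where(np.arange(n) % 10 == 0, 0.05, 0.20)  # porosity
alpha = k / phi                                     # lumped diffusivity

# harmonic-mean interface diffusivities, scaled by dt/dx^2
t = 2.0 / (1.0 / alpha[:-1] + 1.0 / alpha[1:]) * dt / dx**2

A = lil_matrix((n, n))
for i in range(n):
    if i > 0:
        A[i, i - 1] = -t[i - 1]
    if i < n - 1:
        A[i, i + 1] = -t[i]
    A[i, i] = 1.0 + (t[i - 1] if i > 0 else 0.0) + (t[i] if i < n - 1 else 0.0)
A[0, :] = 0.0
A[0, 0] = 1.0                    # Dirichlet row for the well block
A = csc_matrix(A)

p = np.full(n, 3000.0)           # initial reservoir pressure
for _ in range(50):              # march 50 implicit time steps
    rhs = p.copy()
    rhs[0] = 1000.0              # constant bottom-hole pressure
    p = spsolve(A, rhs)
print("pressure in first five blocks:", p[:5].round(1))
```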

  10. Fibroblast motility on substrates with different rigidities: modeling approach

    Science.gov (United States)

    Gracheva, Maria; Dokukina, Irina

    2009-03-01

    We develop a discrete model for cell locomotion on substrates with different rigidities and simulate the experiments described in Lo, Wang, Dembo, Wang (2000) ``Cell movement is guided by the rigidity of the substrate'', Biophys. J. 79: 144-152. In these experiments fibroblasts were plated on a substrate with a step rigidity and showed a preference for locomotion over the stiffer side of the substrate when approaching the boundary between the soft and stiff sides. The model reproduces the experimentally observed behavior of fibroblasts. In particular, we are able to show with our model how cell characteristics (such as cell length, shape, area and speed) change as the cell crawls across the ``soft-stiff'' substrate boundary. Our model also suggests a temporary increase in both cell speed and area at the moment the cell leaves the soft side of the substrate.

  11. Modeling fabrication of nuclear components: An integrative approach

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.

    1996-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
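
    The first stage can be illustrated with a toy quadratic assignment problem attacked by a simple evolutionary swap heuristic; the flow and distance matrices below are random stand-ins for worker traffic between operations and exposure-weighted distances between locations, not facility data.

```python
# Toy sketch of the layout stage: a quadratic assignment problem (QAP)
# solved with a simple evolutionary swap heuristic.
import numpy as np

rng = np.random.default_rng(2)
n = 8
flow = rng.integers(0, 10, (n, n)); flow = (flow + flow.T) // 2
dist = rng.integers(1, 10, (n, n)); dist = (dist + dist.T) // 2

def cost(perm):
    # exposure proxy: sum of flow * distance under the layout `perm`
    return np.sum(flow * dist[np.ix_(perm, perm)])

# (mu + lambda)-style evolution with swap mutations
pop = [rng.permutation(n) for _ in range(30)]
for gen in range(500):
    children = []
    for p in pop:
        c = p.copy()
        i, j = rng.choice(n, 2, replace=False)
        c[i], c[j] = c[j], c[i]        # swap two operation placements
        children.append(c)
    pop = sorted(pop + children, key=cost)[:30]
print("best layout:", pop[0], "cost:", cost(pop[0]))
```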

  12. Immunologic approach to the identification and development of vaccines to various toxins. Final report, June 1992-May 1994

    Energy Technology Data Exchange (ETDEWEB)

    Chanh, T.C.

    1994-08-01

    We have used the protein synthesis inhibitor ricin and the sodium channel blocker saxitoxin (STX) as model toxic agents to investigate the feasibility of developing safe and effective vaccines against the in vivo toxicity of biological and chemical toxins. Because of their extreme in vivo toxicity, they can not be utilized as immunogens to elicit protective immunity. Thus, we have focused on the anti-idiotype and synthetic peptide-based approaches in our vaccine development strategy. A number of anti-idiotype reagents were produced, some of which were demonstrated to elicit in vivo protective immunity against ricin intoxication. In addition, recent results suggested that a cyclic ricin A peptide homologous to residues 88-112 provided some protection against ricin toxicity in vivo. The results obtained thus far with the STX system were less encouraging. Although anti-idiotype reagents produced were capable of inducing specific and systemic anti-STX antibody responses in vivo, the immunity elicited did not provide significant protection against STX toxicity. However, some delay in the time between STX administration and death was observed with some of the anti-idiotype reagents.

  13. THE SIGNAL APPROACH TO MODELLING THE BALANCE OF PAYMENT CRISIS

    Directory of Open Access Journals (Sweden)

    O. Chernyak

    2016-12-01

    Full Text Available The paper presents a synthesis of theoretical models of balance of payments crises and investigates the most effective ways to model such crises in Ukraine. For the mathematical formalization of a balance of payments crisis, a comparative analysis of the effectiveness of different calculation methods for the Exchange Market Pressure Index was performed. A set of indicators that signal a growing likelihood of a balance of payments crisis was defined using the signal approach. Threshold values for these indicators were selected with the help of a minimization function; crossing a threshold signals an increased probability of a balance of payments crisis.
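
    A hedged sketch of the mechanics on synthetic series: a weighted Exchange Market Pressure index and a threshold chosen to minimize the noise-to-signal ratio. The weighting convention and all data are illustrative only, not the paper's calculations.

```python
# Illustrative signal-approach sketch with synthetic monthly series.
import numpy as np

rng = np.random.default_rng(3)
T = 200
crisis = rng.random(T) < 0.1                  # hypothetical crisis months
d_fx  = rng.normal(0, 1, T) + 2.5 * crisis    # depreciation spikes in crises
d_res = rng.normal(0, 1, T) - 1.5 * crisis    # reserves drop in crises
d_ir  = rng.normal(0, 1, T)

# one common EMP convention: precision-weighted component sum
emp = d_fx / d_fx.std() - d_res / d_res.std() + d_ir / d_ir.std()

def noise_to_signal(thr):
    sig = emp > thr
    hits  = (sig & crisis).sum() / max(crisis.sum(), 1)    # P(signal|crisis)
    false = (sig & ~crisis).sum() / max((~crisis).sum(), 1)
    return false / hits if hits > 0 else np.inf

grid = np.quantile(emp, np.linspace(0.5, 0.95, 46))
best = min(grid, key=noise_to_signal)
print(f"threshold {best:.2f}, noise-to-signal {noise_to_signal(best):.2f}")
```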

  14. Risk Modeling Approaches in Terms of Volatility Banking Transactions

    Directory of Open Access Journals (Sweden)

    Angelica Cucşa (Stratulat)

    2016-01-01

    Full Text Available Risk has been inseparable from banking activity ever since banking systems emerged, and the topic is equally important for current practice and for the future development of the banking sector. The banking sector develops under constraints set by the nature and number of existing risks and of those that may arise, which limit banking activity. We intend to develop approaches to analysing risk through mathematical models, and we also develop a model for the Romanian capital market based on ten actively traded stocks that tests investor reaction under controlled and uncontrolled risk conditions aggregated with harmonised factors.

  15. Modeling software with finite state machines a practical approach

    CERN Document Server

    Wagner, Ferdinand; Wagner, Thomas; Wolstenholme, Peter

    2006-01-01

    Modeling Software with Finite State Machines: A Practical Approach explains how to apply finite state machines to software development. It provides a critical analysis of using finite state machines as a foundation for executable specifications to reduce software development effort and improve quality. This book discusses the design of a state machine and of a system of state machines. It also presents a detailed analysis of development issues relating to behavior modeling with design examples and design rules for using finite state machines. This volume describes a coherent and well-tested fr
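
    As a generic illustration of the book's theme (not code from it), the sketch below drives a small executable specification from a state-transition table; states, events and actions are the classic turnstile example.

```python
# Minimal executable specification with a finite state machine:
# a (state, event) -> (next_state, action) transition table.
from dataclasses import dataclass, field

@dataclass
class StateMachine:
    state: str = "LOCKED"
    table: dict = field(default_factory=lambda: {
        ("LOCKED",   "coin"): ("UNLOCKED", "release the arm"),
        ("LOCKED",   "push"): ("LOCKED",   "refuse entry"),
        ("UNLOCKED", "push"): ("LOCKED",   "let one person through"),
        ("UNLOCKED", "coin"): ("UNLOCKED", "return the coin"),
    })

    def dispatch(self, event: str) -> None:
        self.state, action = self.table[(self.state, event)]
        print(f"{event!r} -> {self.state}: {action}")

fsm = StateMachine()
for ev in ["push", "coin", "coin", "push"]:
    fsm.dispatch(ev)
```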

  16. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
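
    A conceptual sketch of this workflow, with a cheap stand-in for the ocean model: fit a polynomial surrogate by regression, then update an uncertain drag coefficient with a Metropolis sampler. Everything below is an assumed toy setup, not the talk's implementation.

```python
# Surrogate-based Bayesian inference sketch: polynomial regression
# surrogate + Metropolis sampling of one uncertain coefficient.
import numpy as np

rng = np.random.default_rng(4)

def expensive_model(c):            # stand-in for an ocean-model run
    return np.sin(2 * c) + 0.5 * c

# 1) non-intrusive surrogate: least-squares fit of a cubic polynomial
train_c = np.linspace(0.0, 2.0, 12)
coeffs = np.polyfit(train_c, expensive_model(train_c), deg=3)
surrogate = lambda c: np.polyval(coeffs, c)

# 2) Bayesian update from one noisy observation, via Metropolis
obs, sigma = expensive_model(1.3) + 0.05, 0.1
def log_post(c):
    if not 0.0 <= c <= 2.0:        # uniform prior on [0, 2]
        return -np.inf
    return -0.5 * ((obs - surrogate(c)) / sigma) ** 2

chain, c = [], 1.0
for _ in range(5000):
    prop = c + rng.normal(0, 0.1)
    if np.log(rng.random()) < log_post(prop) - log_post(c):
        c = prop
    chain.append(c)
print("posterior mean of drag coefficient:", round(np.mean(chain[1000:]), 3))
```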

  17. An Adaptive Agent-Based Model of Homing Pigeons: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Francis Oloo

    2017-01-01

    Full Text Available Conventionally, agent-based modelling approaches start from a conceptual model capturing the theoretical understanding of the systems of interest. Simulation outcomes are then used “at the end” to validate the conceptual understanding. In today’s data-rich era, there are suggestions that models should be data-driven. Data-driven workflows are common in mathematical models. However, their application to agent-based models is still in its infancy. Integration of real-time sensor data into modelling workflows opens up the possibility of comparing simulations against real data during the model run. Calibration and validation procedures thus become automated processes that are iteratively executed during the simulation. We hypothesize that incorporation of real-time sensor data into agent-based models improves the predictive ability of such models; in particular, such integration should result in increasingly well-calibrated model parameters and rule sets. In this contribution, we explore this question by implementing a flocking model that evolves in real-time. Specifically, we use a genetic algorithm approach to simulate representative parameters describing the flight routes of homing pigeons. The navigation parameters of pigeons are simulated and dynamically evaluated against emulated GPS sensor data streams and optimised based on the fitness of candidate parameters. As a result, the model was able to accurately simulate the relative-turn angles and step-distance of homing pigeons. Further, the optimised parameters could replicate loops, which are common patterns in the flight tracks of homing pigeons. Finally, the use of genetic algorithms in this study allowed for a simultaneous data-driven optimization and sensitivity analysis.
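
    A much-simplified sketch of this loop, with assumed details: a genetic algorithm tunes one navigation parameter (a constant relative-turn angle) so that simulated steps match an emulated GPS track. The real model evolves several parameters against live data streams.

```python
# Hypothetical GA sketch: recover a turn-angle parameter from a
# noisy "GPS" track of cumulative steps.
import numpy as np

rng = np.random.default_rng(5)
TRUE_TURN = 0.3                         # hidden parameter (radians)

def simulate_track(turn, n=30):
    angles = turn * np.arange(n)        # constant relative-turn model
    return np.cumsum(np.c_[np.cos(angles), np.sin(angles)], axis=0)

observed = simulate_track(TRUE_TURN) + rng.normal(0, 0.05, (30, 2))

def fitness(turn):
    return -np.mean((simulate_track(turn) - observed) ** 2)

pop = rng.uniform(0.0, 0.6, 40)         # candidate turn angles
for _ in range(30):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)][-20:]          # truncation selection
    kids = (rng.choice(parents, 20) + rng.choice(parents, 20)) / 2  # crossover
    kids += rng.normal(0, 0.02, 20)                  # mutation
    pop = np.r_[parents, kids]
best = pop[np.argmax([fitness(t) for t in pop])]
print(f"recovered turn angle ~ {best:.3f}")
```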

  18. Understanding Gulf War Illness: An Integrative Modeling Approach

    Science.gov (United States)

    2017-10-01

    [Abstract not recoverable; the record consists of fragments listing research products and projects of the award, including a publication on high-order diffusion imaging in a rat model of Gulf War Illness (Brain, Behavior, and Immunity), a submitted paper on astrocyte-specific transcriptome responses to neurotoxicity (internal CDC-NIOSH review), a project evaluating an antagonist for beneficial effects in Gulf War Illness, and project GW160116 (Nathanson), "Genomics approach to find gender specific mechanisms of GWI".]

  19. An Approach for Modeling and Formalizing SOA Design Patterns

    OpenAIRE

    Tounsi , Imen; Hadj Kacem , Mohamed; Hadj Kacem , Ahmed; Drira , Khalil

    2013-01-01

    Although design patterns have become increasingly popular, most of them are presented in an informal way, which can give rise to ambiguity and may lead to their incorrect usage. Patterns proposed by the SOA design pattern community are described with informal visual notations. Modeling SOA design patterns with a standard formal notation helps to avoid misunderstanding by software architects and helps endow design methods with refinement approaches for...

  20. An approach for quantifying small effects in regression models.

    Science.gov (United States)

    Bedrick, Edward J; Hund, Lauren

    2018-04-01

    We develop a novel approach for quantifying small effects in regression models. Our method is based on variation in the mean function, in contrast to methods that focus on regression coefficients. Our idea applies in diverse settings such as testing for a negligible trend and quantifying differences in regression functions across strata. Straightforward Bayesian methods are proposed for inference. Four examples are used to illustrate the ideas.

  1. A Conditional Approach to Panel Data Models with Common Shocks

    Directory of Open Access Journals (Sweden)

    Giovanni Forchini

    2016-01-01

    Full Text Available This paper studies the effects of common shocks on the OLS estimators of the slope parameters in linear panel data models. The shocks are assumed to affect both the errors and some of the explanatory variables. In contrast to existing approaches, which rely on results on martingale difference sequences, our method relies on conditional strong laws of large numbers and conditional central limit theorems for conditionally-heterogeneous random variables.

  2. Modeling Electronic Circular Dichroism within the Polarizable Embedding Approach

    DEFF Research Database (Denmark)

    Nørby, Morten S; Olsen, Jógvan Magnus Haugaard; Steinmann, Casper

    2017-01-01

    We present a systematic investigation of the key components needed to model single chromophore electronic circular dichroism (ECD) within the polarizable embedding (PE) approach. By relying on accurate forms of the embedding potential, where especially the inclusion of local field effects...... are in focus, we show that qualitative agreement between rotatory strength parameters calculated by full quantum mechanical calculations and the more efficient embedding calculations can be obtained. An important aspect in the computation of reliable absorption parameters is the need for conformational...

  3. Modeling Defibrillation of the Heart: Approaches and Insights

    Science.gov (United States)

    Trayanova, Natalia; Constantino, Jason; Ashihara, Takashi; Plank, Gernot

    2012-01-01

    Cardiac defibrillation, as accomplished nowadays by automatic, implantable devices (ICDs), constitutes the most important means of combating sudden cardiac death. While ICD therapy has proved to be efficient and reliable, defibrillation is a traumatic experience. Thus, research on defibrillation mechanisms, particularly aimed at lowering defibrillation voltage, remains an important topic. Advancing our understanding towards a full appreciation of the mechanisms by which a shock interacts with the heart is the most promising approach to achieve this goal. The aim of this paper is to assess the current state-of-the-art in ventricular defibrillation modeling, focusing on both numerical modeling approaches and major insights that have been obtained using defibrillation models, primarily those of realistic ventricular geometry. The paper showcases the contributions that modeling and simulation have made to our understanding of the defibrillation process. The review thus provides an example of biophysically based computational modeling of the heart (i.e., cardiac defibrillation) that has advanced the understanding of cardiac electrophysiological interaction at the organ level and has the potential to contribute to the betterment of the clinical practice of defibrillation. PMID:22273793

  4. Clinical and epidemiological round: Approach to clinical prediction models

    Directory of Open Access Journals (Sweden)

    Isaza-Jaramillo, Sandra

    2017-01-01

    Full Text Available Research related to prognosis can be classified as follows: fundamental research, which shows differences in health outcomes; research on prognostic factors, which identifies and characterizes variables; development, validation and impact of predictive models; and finally, stratified medicine, to establish groups that share a risk factor associated with the outcome of interest. The outcome of a person regarding health or disease status can be predicted from certain characteristics associated, beforehand or simultaneously, with that outcome. This can be done by means of prognostic or diagnostic predictive models. The development of a predictive model requires care in the selection, definition, measurement and categorization of predictor variables; in the exploration of interactions; in the number of variables to be included; in the calculation of sample size; in the handling of missing data; in the statistical tests to be used, and in the presentation of the model. The model thus developed must be validated in a different group of patients to establish its calibration, discrimination and usefulness.

  5. Consensus approach for modeling HTS assays using in silico descriptors

    Directory of Open Access Journals (Sweden)

    Ahmed Abdelaziz Sayed

    2016-02-01

    Full Text Available The need for filling information gaps while reducing toxicity testing in animals is becoming more predominant in risk assessment. Recent legislation accepts in silico approaches for predicting toxicological outcomes. This article describes the results of Quantitative Structure Activity Relationship (QSAR) modeling efforts within the Tox21 Data Challenge 2014, which achieved the best balanced accuracy across all molecular pathway endpoints as well as the highest scores for ATAD5 and mitochondrial membrane potential disruption. The automated QSPR workflow system OCHEM (http://ochem.eu), the analytics platform KNIME, and the statistics software CRAN R were used to conduct the analysis and develop consensus models using ten different descriptor sets. A detailed analysis of QSAR models for all 12 molecular pathways and the effect of the underlying models' accuracy on the quality of the consensus model are provided. The resulting consensus models yielded a balanced accuracy as high as 88.1% ± 0.6 for mitochondrial membrane disruptors. Such high balanced accuracy and use of the applicability domain show promising potential for in silico modeling to complement the design of HTS screening experiments. The summary statistics of all models are publicly available online at https://github.com/amaziz/Tox21-Challenge-Publication while the developed consensus models can be accessed at http://ochem.eu/article/98009.

  6. A multi-model ensemble approach to seabed mapping

    Science.gov (United States)

    Diesing, Markus; Stephens, David

    2015-06-01

    Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in such fields as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km2 of seabed in the North Sea with the aim to derive accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes and their prediction accuracy was assessed with an independent set of samples. The three and five best performing models were combined to classifier ensembles. Both ensembles led to increased prediction accuracy as compared to the best performing single classifier. The improvements were however not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, we noticed that the five-model ensemble did perform significantly better than three of the five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is the fact that the agreement in predicted substrate class between the individual models of the ensemble could be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
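
    The combination rule can be sketched as follows, with synthetic data and three of the six learners named above: majority vote for the prediction, and the share of agreeing models as a per-sample confidence proxy. This is a schematic stand-in, not the paper's workflow.

```python
# Ensemble sketch: majority vote plus inter-model agreement as a
# simple, spatially explicit confidence measure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = [KNeighborsClassifier(), SVC(), RandomForestClassifier(random_state=0)]
preds = np.array([m.fit(Xtr, ytr).predict(Xte) for m in models])

# majority vote per sample; agreement = share of models voting for it
vote = np.array([np.bincount(col, minlength=3).argmax() for col in preds.T])
agreement = (preds == vote).mean(axis=0)

print("ensemble accuracy:", (vote == yte).mean().round(3))
print("mean agreement (confidence proxy):", agreement.mean().round(3))
```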

  7. Finite element modeling approach and performance evaluation of fiber reinforced polymer sandwich bridge panels : final report.

    Science.gov (United States)

    2009-08-01

    In the United States, about 27% of the bridges are classified as structurally deficient or functionally obsolete. : Bridge owners are continually investigating methods to effectively retrofit existing bridges, or to economically replace : them with n...

  8. Expanding Model Independent Approaches for Measuring the CKM angle $\\gamma$ at LHCb

    CERN Multimedia

    Prouve, Claire

    2017-01-01

    Model independent approaches to measuring the CKM angle $\gamma$ in $B\rightarrow DK$ decays at LHCb are explored. In particular, we consider the case where the $D$ meson decays into a final state with four hadrons. Using four-body final states such as $\pi^+ \pi^- \pi^+ \pi^-$, $K^+ \pi^- \pi^+ \pi^-$ and $K^+ K^- \pi^+ \pi^-$, in addition to traditional two- and three-body states, has the potential to significantly improve the overall constraint on $\gamma$. There is a significant systematic uncertainty associated with modelling the complex phase of the $D$ decay amplitude across the five-dimensional phase space of the four body decay. It is therefore important to replace these model-dependent quantities with model-independent parameters as input for the $\gamma$ measurement. These model independent parameters have been measured using quantum-correlated $\psi(3770) \rightarrow D^0 \overline{D^0}$ decays collected by the CLEO-c experiment, and, for $D\rightarrow K^+ \pi^- \pi^+ \pi^-$, with $D^0-\overline{D^0...

  9. Final Report: Model interacting particle systems for simulation and macroscopic description of particulate suspensions

    Energy Technology Data Exchange (ETDEWEB)

    Peter J. Mucha

    2007-08-30

    Suspensions of solid particles in liquids appear in numerous applications, from environmental settings like river silt, to industrial systems of solids transport and water treatment, and biological flows such as blood flow. Despite their importance, much remains unexplained about these complicated systems. Mucha's research aims to improve understanding of basic properties of suspensions through a program of simulating model interacting particle systems with critical evaluation of proposed continuum equations, in close collaboration with experimentalists. Natural to this approach, the original proposal centered around collaboration with studies already conducted in various experimental groups. However, as was detailed in the 2004 progress report, following the first year of this award, a number of the questions from the original proposal were necessarily redirected towards other specific goals because of changes in the research programs of the proposed experimental collaborators. Nevertheless, the modified project goals and the results that followed from those goals maintain close alignment with the main themes of the original proposal, improving efficient simulation and macroscopic modeling of sedimenting and colloidal suspensions. In particular, the main investigations covered under this award have included: (1) Sedimentation instabilities, including the sedimentation analogue of the Rayleigh-Taylor instability (for heavy, particle-laden fluid over lighter, clear fluid). (2) Ageing dynamics of colloidal suspensions at concentrations above the glass transition, using simplified interactions. (3) Stochastic reconstruction of velocity-field dependence for particle image velocimetry (PIV). (4) Stochastic modeling of the near-wall bias in 'nano-PIV'. (5) Distributed Lagrange multiplier simulation of the 'internal splash' of a particle falling through a stable stratified interface. (6) Fundamental study of velocity fluctuations in sedimentation

  10. Policy harmonized approach for the EU agricultural sector modelling

    Directory of Open Access Journals (Sweden)

    G. SALPUTRA

    2008-12-01

    Full Text Available The policy harmonized (PH) approach allows for the quantitative assessment of the impact of various elements of EU CAP direct support schemes, where the production effects of direct payments are accounted for through reaction prices formed by the producer price and policy price add-ons. Using the AGMEMOD model, the impacts of two possible EU agricultural policy scenarios upon beef production have been analysed: full decoupling with a switch from the historical to the regional Single Payment scheme, or alternatively re-distribution of country direct payment envelopes via the introduction of an EU-wide flat area payment. The PH approach, by systematizing and harmonizing the management and use of policy data, ensures that projected differential policy impacts arising from changes in common EU policies reflect the likely actual differential impact, as opposed to differences in how “common” policies are implemented within analytical models. In the second section of the paper the AGMEMOD model’s structure is explained. The policy harmonized evaluation method is presented in the third section. Results from an application of the PH approach are presented and discussed in the paper’s penultimate section, while section 5 concludes.

  11. A Hybrid Latent Class Analysis Modeling Approach to Analyze Urban Expressway Crash Risk.

    Science.gov (United States)

    Yu, Rongjie; Wang, Xuesong; Abdel-Aty, Mohamed

    2017-04-01

    Crash risk analysis is a rising research topic, as it can reveal relationships between traffic flow characteristics and crash occurrence risk; this helps in understanding crash mechanisms and can further refine the design of Active Traffic Management Systems (ATMS). However, the majority of current crash risk analysis studies have ignored the impact of geometric characteristics on crash risk estimation, even though recent studies showed that crash occurrence risk is affected by various alignment features. In this study, a hybrid Latent Class Analysis (LCA) modeling approach was proposed to account for the heterogeneous effects of geometric characteristics. Crashes were first segmented into homogeneous subgroups, where the optimal number of latent classes was identified based on bootstrap likelihood ratio tests. Then, separate crash risk analysis models were developed using a Bayesian random parameter logistic regression technique; data from the Shanghai urban expressway system were employed for the empirical study. Different crash risk contributing factors were unveiled by the hybrid LCA approach, and better model goodness-of-fit was obtained compared to an overall total crash model. Finally, the benefits of the proposed hybrid LCA approach were discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
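
    A loose sketch of the two-stage idea on synthetic data, with stand-ins for both stages: a Gaussian mixture in place of latent class analysis, and plain logistic regression in place of the Bayesian random-parameter logit. Variable names are hypothetical.

```python
# Two-stage sketch: segment crashes by geometric features, then fit a
# separate crash-risk model per latent class.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 1200
geom = np.c_[rng.normal(rng.choice([0, 3], n), 1),   # e.g. curvature
             rng.normal(0, 1, n)]                    # e.g. grade
traffic = rng.normal(0, 1, (n, 2))                   # e.g. speed, volume
risk = (traffic[:, 0] + 0.5 * geom[:, 0] + rng.normal(0, 1, n)) > 1.5

classes = GaussianMixture(n_components=2, random_state=0).fit_predict(geom)
for c in range(2):
    m = classes == c
    model = LogisticRegression().fit(traffic[m], risk[m])
    print(f"class {c}: n={m.sum()}, flow coefficients={model.coef_.round(2)}")
```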

  12. Final Report: Towards an Emergent Model of Technology Adoption for Accelerating the Diffusion of Residential Solar PV

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Varun [Univ. of Texas, Austin, TX (United States)

    2016-08-15

    This project sought to enable electric utilities in Texas to accelerate diffusion of residential solar photovoltaic (PV) by systematically identifying and targeting existing barriers to PV adoption. A core goal of the project was to develop an integrated research framework that combines survey research, econometric modeling, financial modeling, and implementation and evaluation of pilot projects to study the PV diffusion system. This project considered PV diffusion as an emergent system, with attention to the interactions between the constituent parts of the PV socio-technical system including: economics of individual decision-making; peer and social influences; behavioral responses; and information and transaction costs. We also conducted two pilot projects, which have yielded new insights into behavioral and informational aspects of PV adoption. Finally, this project has produced robust and generalizable results that will provide deeper insights into the technology-diffusion process that will be applicable for the design of utility programs for other technologies such as home-energy management systems and plug-in electric vehicles. When we started this project in 2013 there was little systematic research on characterizing the decision-making process of households interested in adopting PV. This project was designed to fill that research gap by analyzing the PV adoption process from the consumers' decision-making perspective and with the objective to systematically identifying and addressing the barriers that consumers face in the adoption of PV. The two key components of that decision-making process are consumers' evaluation of: (i) uncertainties and non-monetary costs associated with the technology and (ii) the direct monetary cost-benefit. This project used an integrated approach to study both the non-monetary and the monetary components of the consumer decision-making process.

  13. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    Science.gov (United States)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
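
    The core SROM construction can be sketched for a single uncertain input: fix a few sample locations and optimise their probabilities so the reduced model matches the target distribution's moments and CDF. This is a bare-bones illustration, not the authors' implementation, and it omits the coupling to deterministic topology-optimization solves.

```python
# Bare-bones SROM sketch: optimise sample probabilities to match the
# first two moments and the CDF of a standard normal input.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

x = np.linspace(-2.5, 2.5, 7)              # fixed SROM sample locations
target_mean, target_var = 0.0, 1.0

def srom_error(p):
    mean = p @ x
    var = p @ (x - mean) ** 2
    # CDF mismatch at the sample points against the target distribution
    cdf_err = np.sum((np.cumsum(p) - norm.cdf(x)) ** 2)
    return (mean - target_mean) ** 2 + (var - target_var) ** 2 + cdf_err

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]
res = minimize(srom_error, np.full(7, 1 / 7), bounds=[(0, 1)] * 7,
               constraints=cons, method="SLSQP")
print("SROM probabilities:", res.x.round(3))
```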

  14. A New Approach of Modeling an Ultra-Super-Critical Power Plant for Performance Improvement

    Directory of Open Access Journals (Sweden)

    Guolian Hou

    2016-04-01

    Full Text Available A suitable model of the coordinated control system (CCS), with high accuracy and a simple structure, is essential for the design of advanced controllers that can improve the efficiency of ultra-super-critical (USC) power plants. Therefore, to meet the demand for plant performance improvement, an improved T-S fuzzy model identification approach is proposed in this paper. Firstly, an improved entropy clustering algorithm is applied to identify the premise parameters; it determines the number of clusters and the initial cluster centers automatically by introducing a decision-making constant and a threshold. Then, a learning algorithm is used to refine the initial cluster centers, and a new structure of the concluding part is discussed: the incremental data around each cluster center are used to identify the local linear models through a weighted recursive least-squares algorithm. Finally, the proposed approach is employed to model the CCS of a 1000 MW USC once-through boiler power plant using on-site measured data. Simulation results show that the T-S fuzzy model built in this paper is accurate enough to reflect the dynamic performance of the CCS and can be treated as a foundation model for the overall optimizing control of the USC power plant.
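
    The concluding-part identification step might look like the following sketch (an assumed form, not the paper's code): a weighted recursive least-squares update of one local linear model, with fuzzy memberships acting as the weights.

```python
# Weighted recursive least squares (RLS) for one local linear model
# y = w.x, with a fuzzy membership mu weighting each sample.
import numpy as np

rng = np.random.default_rng(7)
true_w = np.array([2.0, -1.0, 0.5])

P = np.eye(3) * 1e3            # inverse information matrix (large init)
w = np.zeros(3)                # local model parameters
for _ in range(300):
    x = rng.normal(0, 1, 3)
    y = true_w @ x + rng.normal(0, 0.05)
    mu = rng.uniform(0.2, 1.0)           # fuzzy membership weight
    Px = P @ x
    gain = Px / (1.0 / mu + x @ Px)      # weighted RLS gain
    w += gain * (y - w @ x)
    P -= np.outer(gain, Px)
print("identified local weights:", w.round(3))
```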

  15. Modelling an industrial anaerobic granular reactor using a multi-scale approach

    DEFF Research Database (Denmark)

    Feldman, Hannah; Flores Alsina, Xavier; Ramin, Pedram

    2017-01-01

    The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within...... the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark...... simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally...

  16. Multiscale modeling of alloy solidification using a database approach

    Science.gov (United States)

    Tan, Lijian; Zabaras, Nicholas

    2007-11-01

    A two-scale model based on a database approach is presented to investigate alloy solidification. Appropriate assumptions are introduced to describe the behavior of macroscopic temperature, macroscopic concentration, liquid volume fraction and microstructure features. These assumptions lead to a macroscale model with two unknown functions: liquid volume fraction and microstructure features. These functions are computed using information from microscale solutions of selected problems. This work addresses the selection of sample problems relevant to the interested problem and the utilization of data from the microscale solution of the selected sample problems. A computationally efficient model, which is different from the microscale and macroscale models, is utilized to find relevant sample problems. In this work, the computationally efficient model is a sharp interface solidification model of a pure material. Similarities between the sample problems and the problem of interest are explored by assuming that the liquid volume fraction and microstructure features are functions of solution features extracted from the solution of the computationally efficient model. The solution features of the computationally efficient model are selected as the interface velocity and thermal gradient in the liquid at the time the sharp solid-liquid interface passes through. An analytical solution of the computationally efficient model is utilized to select sample problems relevant to solution features obtained at any location of the domain of the problem of interest. The microscale solution of selected sample problems is then utilized to evaluate the two unknown functions (liquid volume fraction and microstructure features) in the macroscale model. The temperature solution of the macroscale model is further used to improve the estimation of the liquid volume fraction and microstructure features. Interpolation is utilized in the feature space to greatly reduce the number of required

  17. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    Full Text Available The article shows the relevance of object-oriented programming approaches to modeling pneumatic units (PU). Based on an analysis of the calculation schemes of pneumatic system aggregates, two basic objects were identified: a flow cavity and a material point. The basic interactions of these objects are defined. Cavity-cavity interaction: exchange of matter and energy through mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and displacement constraints. The authors developed mathematical models of the basic objects and interactions, and the models and interactions of elements are implemented with object-oriented programming. Mathematical models of the elements of a PU design scheme are implemented in classes derived from the base classes. These classes implement models of a flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the PU design-scheme element models is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e., calculation of the method's coefficients. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a singly linked list; an iterator loop initiates the integration tact of all objects in the list. Every fourth iteration advances to the next integration step. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional modeling approaches, the method offers easy enhancement, code reuse, and high reliability.
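
    An interpretive sketch of this object layout, with simplified equations and invented parameter values: a base class performs one Runge-Kutta tact on its own state, and derived objects (a flow cavity, and a piston as a material point with limited travel) supply their derivatives and may raise the shutdown flag.

```python
# Interpretive sketch of the object-oriented PU scheme described above.
import numpy as np

class Element:
    """Base object: owns a state vector and one RK4 integration tact."""
    def __init__(self, state):
        self.state = np.asarray(state, float)
        self.shutdown = False
    def deriv(self, t, s):
        raise NotImplementedError
    def tact(self, t, dt):
        # neighbours' states are frozen during a tact (per-object scheme)
        s = self.state
        k1 = self.deriv(t, s)
        k2 = self.deriv(t + dt / 2, s + dt / 2 * k1)
        k3 = self.deriv(t + dt / 2, s + dt / 2 * k2)
        k4 = self.deriv(t + dt, s + dt * k3)
        self.state = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

class Cavity(Element):
    """Flow cavity charged through a short channel (linearised filling)."""
    def __init__(self, p0, p_supply, tau):
        super().__init__([p0])
        self.p_supply, self.tau = p_supply, tau
    def deriv(self, t, s):
        return np.array([(self.p_supply - s[0]) / self.tau])

class Piston(Element):
    """Material point: pressure force against a spring, limited travel."""
    def __init__(self, cavity, area, mass, k, x_max):
        super().__init__([0.0, 0.0])          # position, velocity
        self.cav, self.area, self.m = cavity, area, mass
        self.k, self.x_max = k, x_max
    def deriv(self, t, s):
        force = self.cav.state[0] * self.area - self.k * s[0]
        return np.array([s[1], force / self.m])
    def tact(self, t, dt):
        super().tact(t, dt)
        if self.state[0] >= self.x_max:        # end stop reached
            self.state[:] = [self.x_max, 0.0]
            self.shutdown = True               # raise the shutdown flag

cav = Cavity(p0=0.0, p_supply=5e5, tau=0.02)
pis = Piston(cav, area=1e-3, mass=0.5, k=1e4, x_max=0.04)
elements, t, dt = [cav, pis], 0.0, 1e-4
while not any(e.shutdown for e in elements) and t < 1.0:
    for e in elements:
        e.tact(t, dt)
    t += dt
print(f"stroke completed at t = {t * 1e3:.1f} ms")
```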

  18. Modeling drug- and chemical- induced hepatotoxicity with systems biology approaches

    Directory of Open Access Journals (Sweden)

    Sudin Bhattacharya

    2012-12-01

    Full Text Available We provide an overview of computational systems biology approaches as applied to the study of chemical- and drug-induced toxicity. The concept of ‘toxicity pathways’ is described in the context of the 2007 US National Academies of Science report, Toxicity testing in the 21st Century: A Vision and A Strategy. Pathway mapping and modeling based on network biology concepts are a key component of the vision laid out in this report for a more biologically-based analysis of dose-response behavior and the safety of chemicals and drugs. We focus on toxicity of the liver (hepatotoxicity) – a complex phenotypic response with contributions from a number of different cell types and biological processes. We describe three case studies of complementary multi-scale computational modeling approaches to understand perturbation of toxicity pathways in the human liver as a result of exposure to environmental contaminants and specific drugs. One approach involves development of a spatial, multicellular virtual tissue model of the liver lobule that combines molecular circuits in individual hepatocytes with cell-cell interactions and blood-mediated transport of toxicants through hepatic sinusoids, to enable quantitative, mechanistic prediction of hepatic dose-response for activation of the AhR toxicity pathway. Simultaneously, methods are being developed to extract quantitative maps of intracellular signaling and transcriptional regulatory networks perturbed by environmental contaminants, using a combination of gene expression and genome-wide protein-DNA interaction data. A predictive physiological model (DILIsym™) for understanding drug-induced liver injury (DILI), the most common adverse event leading to termination of clinical development programs and regulatory actions on drugs, is also described. The model initially focuses on reactive metabolite-induced DILI in response to administration of acetaminophen, and spans multiple biological scales.

  19. Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document

    Energy Technology Data Exchange (ETDEWEB)

    1988-12-15

    The Accident Model Document (AMD) is the second volume of the three volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population resulting from postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.

  20. Overview of the FEP analysis approach to model development

    International Nuclear Information System (INIS)

    Bailey, L.

    1998-01-01

    This report heads a suite of documents describing the Nirex model development programme. The programme is designed to provide a clear audit trail from the identification of significant features, events and processes (FEPs) to the models and modelling processes employed within a detailed safety assessment. A five stage approach has been adopted, which provides a systematic framework for addressing uncertainty and for the documentation of all modelling decisions and assumptions. The five stages are as follows: Stage 1: FEP Analysis - compilation and structuring of a FEP database; Stage 2: Scenario and Conceptual Model Development; Stage 3: Mathematical Model Development; Stage 4: Software Development; Stage 5: Confidence Building. This report describes the development and structuring of a FEP database as a Master Directed Diagram (MDD) and explains how this may be used to identify different modelling scenarios, based upon the identification of scenario-defining FEPs. The methodology describes how the possible evolution of a repository system can be addressed in terms of a base scenario, a broad and reasonable representation of the 'natural' evolution of the system, and a number of variant scenarios, representing the effects of probabilistic events and processes. The MDD has been used to identify conceptual models to represent the base scenario, and the interactions between these conceptual models have been systematically reviewed using a matrix diagram technique. This has led to the identification of modelling requirements for the base scenario, against which existing assessment software capabilities have been reviewed. A mechanism for combining probabilistic scenario-defining FEPs to construct multi-FEP variant scenarios has been proposed and trialled using the concept of a 'timeline', a defined sequence of events, from which consequences can be assessed. An iterative approach, based on conservative modelling principles, has been proposed for the evaluation of

  1. Quantitative versus qualitative modeling: a complementary approach in ecosystem study.

    Science.gov (United States)

    Bondavalli, C; Favilla, S; Bodini, A

    2009-02-01

    Natural disturbances or human perturbations act upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitude. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interactions. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a form that can be used in qualitative analysis is described in this paper and takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.
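
    The prediction step of loop analysis can be made concrete in a few lines: for a community matrix A describing the linearised dynamics, responses to a sustained "press" perturbation are proportional to the entries of -A^{-1}. The three-species chain below is a made-up example, not one of the paper's systems.

```python
# Loop-analysis prediction sketch: press-perturbation responses from
# the negative inverse of a (magnitude-weighted) community matrix.
import numpy as np

A = np.array([[-0.5, -0.8,  0.0],   # resource: self-damped, eaten by consumer
              [ 0.4, -0.1, -0.6],   # consumer: eats resource, eaten by predator
              [ 0.0,  0.3, -0.2]])  # predator: eats consumer, self-damped

response = -np.linalg.inv(A)
press_on_predator = response[:, 2]   # sustained positive input to the predator
for name, r in zip(["resource", "consumer", "predator"], press_on_predator):
    print(f"{name:9s}: {'increases' if r > 0 else 'decreases'} ({r:+.2f})")
```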

  2. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja

    2015-01-01

    Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operant mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
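
    A generic, minimal sketch of such a variance-based analysis (Saltelli/Jansen estimators) on a cheap stand-in function; for a morphogenesis model, f would wrap a full simulation returning a scalar output measure. This is an illustration of the technique, not the paper's workflow.

```python
# Variance-based global sensitivity analysis: first-order (S1) and
# total-order (ST) Sobol indices via the Saltelli/Jansen estimators.
import numpy as np

rng = np.random.default_rng(8)

def f(x):  # toy model with an interaction between x0 and x1
    return x[:, 0] + 2 * x[:, 1] + x[:, 0] * x[:, 1] + 0.1 * x[:, 2]

d, N = 3, 20000
A, B = rng.random((N, d)), rng.random((N, d))
fA, fB = f(A), f(B)
V = np.var(np.r_[fA, fB])

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # A with column i taken from B
    fABi = f(ABi)
    S1 = np.mean(fB * (fABi - fA)) / V          # first-order index
    ST = np.mean((fA - fABi) ** 2) / (2 * V)    # total-order index
    print(f"x{i}: S1={S1:.2f}, ST={ST:.2f}")
```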

  3. Fugacity superposition: a new approach to dynamic multimedia fate modeling.

    Science.gov (United States)

    Hertwich, E G

    2001-08-01

    The fugacities, concentrations, or inventories of pollutants in environmental compartments as determined by multimedia environmental fate models of the Mackay type can be superimposed on each other. This is true for both steady-state (level III) and dynamic (level IV) models. Any problem in multimedia fate models with linear, time-invariant transfer and transformation coefficients can be solved through a superposition of a set of n independent solutions to a set of coupled, homogeneous first-order differential equations, where n is the number of compartments in the model. For initial condition problems in dynamic models, the initial inventories can be separated, e.g. by a compartment. The solution is obtained by adding the single-compartment solutions. For time-varying emissions, a convolution integral is used to superimpose solutions. The advantage of this approach is that the differential equations have to be solved only once. No numeric integration is required. Alternatively, the dynamic model can be simplified to algebraic equations using the Laplace transform. For time-varying emissions, the Laplace transform of the model equations is simply multiplied with the Laplace transform of the emission profile. It is also shown that the time-integrated inventories of the initial conditions problems are the same as the inventories in the steady-state problem. This implies that important properties of pollutants such as potential dose, persistence, and characteristic travel distance can be derived from the steady state.
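
    The superposition idea for a linear level IV system dx/dt = Ax + e(t) can be sketched with matrix exponentials: solve for the impulse responses once, then convolve them with any emission profile. The two-compartment coefficients below are arbitrary stand-ins, not a calibrated Mackay model.

```python
# Superposition sketch: impulse responses expm(A t) convolved with an
# emission profile give the dynamic inventories without re-integration.
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.30,  0.05],    # air: loss + transfer from water
              [ 0.10, -0.08]])   # water: transfer from air + loss

dt, T = 0.1, 400
times = np.arange(T) * dt

# impulse responses for a unit release in each compartment
impulse = np.array([expm(A * t) for t in times])     # shape (T, 2, 2)

e = np.zeros((T, 2))
e[:100, 0] = 1.0        # emission to air during the first 10 time units

# superposition: x(t) = sum_k expm(A (t - t_k)) e(t_k) dt
x = np.array([sum(impulse[t - k] @ e[k] * dt for k in range(t + 1))
              for t in range(T)])
print("final inventories (air, water):", x[-1].round(3))
```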

  4. A DYNAMICAL SYSTEM APPROACH IN MODELING TECHNOLOGY TRANSFER

    Directory of Open Access Journals (Sweden)

    Hennie Husniah

    2016-05-01

    Full Text Available In this paper we discuss a mathematical model of two-party technology transfer from a leader to a follower. The model is reconstructed via a dynamical system approach from the known standard Raz and Assa model, and we reach some important conclusions that were not discussed in the original model. The model assumes that, in the absence of technology transfer from the leader to the follower, both the leader and the follower have the capability to grow independently, each with a known upper limit of development. We obtain a rich mathematical structure for the steady-state solutions of the model. We discuss a special situation in which the upper limit of technological development of the follower is higher than that of the leader, but the leader has started implementing the technology earlier than the follower. In this case we show that a paradox can appear whenever the transfer rate is sufficiently high: the follower becomes unable to reach its original upper limit of technological development. We propose a new model that increases realism, so that any technology transfer rate can only have a positive effect in accelerating the follower's growth toward its original upper limit of development.
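
    One plausible reading of such a model, with assumed equations that are not taken from the paper: logistic growth for both parties plus a transfer term pulling the follower toward the leader's level reproduces the paradox for large transfer rates.

```python
# Assumed toy dynamics: leader and follower grow logistically toward
# their own limits; transfer pulls the follower toward the leader's
# level, so a strong transfer can stall the follower below K_F.
import numpy as np
from scipy.integrate import solve_ivp

r, K_L, K_F = 1.0, 1.0, 2.0     # follower's potential exceeds the leader's

def rhs(t, y, gamma):
    L, F = y
    dL = r * L * (1 - L / K_L)
    dF = r * F * (1 - F / K_F) + gamma * (L - F)   # transfer term
    return [dL, dF]

for gamma in (0.0, 5.0):
    sol = solve_ivp(rhs, (0, 40), [0.5, 0.05], args=(gamma,), rtol=1e-8)
    print(f"gamma={gamma}: follower's long-run level ~ {sol.y[1, -1]:.2f}")
```

    With gamma = 0 the follower reaches its own limit K_F = 2; with a large transfer rate it settles near the leader's lower level, which is the paradox described above.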

  5. Improving stability of prediction models based on correlated omics data by using network approaches.

    Directory of Open Access Journals (Sweden)

    Renaud Tissier

    Full Text Available Building prediction models based on complex omics datasets such as transcriptomics, proteomics and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model, given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems: prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome (DILGOM) study, and prediction of the response of breast cancer cell lines to treatment with specific drugs, using a breast cancer cell line pharmacogenomics dataset.
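
    The three-step strategy can be sketched generically: build a correlation network, cut a hierarchical clustering into modules, and feed module summaries into a penalized regression. The snippet below is an illustrative stand-in using WGCNA-style soft thresholding and a per-module principal component; it is not the authors' exact pipeline, and the data are random placeholders.

      # Three-step sketch: (1) correlation network, (2) hierarchical clustering
      # into modules, (3) prediction model on module summaries. Illustrative only.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from sklearn.decomposition import PCA
      from sklearn.linear_model import RidgeCV

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 50))        # 100 samples x 50 omics features
      y = X[:, :5].sum(axis=1) + rng.normal(size=100)

      # (1) weighted correlation network (soft thresholding, WGCNA-style)
      adj = np.abs(np.corrcoef(X.T)) ** 6

      # (2) hierarchical clustering on the network dissimilarity
      dist = 1.0 - adj
      Z = linkage(dist[np.triu_indices(50, k=1)], method="average")
      modules = fcluster(Z, t=10, criterion="maxclust")

      # (3) summarize each module by its first principal component ("eigengene")
      # and fit a regularized regression on the module summaries
      summaries = np.column_stack([
          PCA(n_components=1).fit_transform(X[:, modules == m]).ravel()
          for m in np.unique(modules)
      ])
      model = RidgeCV().fit(summaries, y)
      print("R^2 on training data:", model.score(summaries, y))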

  6. A comprehensive approach to age-dependent dosimetric modeling

    Energy Technology Data Exchange (ETDEWEB)

    Leggett, R.W.; Cristy, M.; Eckerman, K.F.

    1986-01-01

    In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks.

  7. A cascade modelling approach to flood extent estimation

    Science.gov (United States)

    Pedrozo-Acuña, Adrian; Rodríguez-Rincón, Juan Pablo; Breña-Naranjo, Agustin

    2014-05-01

    Recent efforts dedicated to the generation of new flood risk management strategies have pointed out that a possible way forward for improvement in this field relies on the reduction and quantification of the uncertainties associated with the prediction system. With the purpose of reducing these uncertainties, this investigation follows a cascade modelling approach (meteorological - hydrological - 2D hydrodynamic) in combination with high-quality data (LiDAR, satellite imagery, precipitation) to study an extreme event registered last year in Mexico. The presented approach is useful both for the characterisation of epistemic uncertainties and for the generation of flood management strategies through probabilistic flood maps. Uncertainty is considered in both the meteorological and hydrological models, and is propagated to a given flood extent as determined with a hydrodynamic model. Although the methodology does not consider all the uncertainties that may be involved in the determination of a flooded area, it enables a better understanding of the interaction between errors in the set-up of models and their propagation to a given result.
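
    Uncertainty is propagated through such a model chain by ensemble runs. A toy Monte Carlo version of a cascade, with made-up one-line stand-ins for the meteorological, hydrological and hydrodynamic components, could look like this:

      # Toy cascade: sample rainfall uncertainty -> runoff -> water level -> flood
      # flag, then aggregate the ensemble into an exceedance probability
      # ("probabilistic map"). All three components are placeholders, not physics.
      import numpy as np

      rng = np.random.default_rng(42)
      n_ensemble = 1000

      rain = rng.lognormal(mean=4.0, sigma=0.5, size=n_ensemble)   # [mm]
      runoff = 1.5 * rain * rng.normal(1.0, 0.1, n_ensemble)       # [m3/s]
      stage = 0.03 * runoff ** 0.9                                 # rating curve [m]

      bank_level = 1.5                                             # threshold [m]
      flooded = stage > bank_level
      print(f"P(flood) ~ {flooded.mean():.2f} over {n_ensemble} cascade members")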

  8. A comprehensive approach to age-dependent dosimetric modeling

    International Nuclear Information System (INIS)

    Leggett, R.W.; Cristy, M.; Eckerman, K.F.

    1986-01-01

    In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks.

  9. Micromechanical modeling and inverse identification of damage using cohesive approaches

    International Nuclear Information System (INIS)

    Blal, Nawfal

    2013-01-01

    In this study a micromechanical model is proposed for a collection of cohesive zone models embedded between each pair of elements of a standard cohesive-volumetric finite element method. An equivalent 'matrix-inclusions' composite is proposed as a representation of the cohesive-volumetric discretization. The overall behaviour is obtained using homogenization approaches (the Hashin-Shtrikman scheme and the P. Ponte Castaneda approach). The derived model deals with elastic, brittle and ductile materials. It is valid whatever the loading triaxiality and the shape of the cohesive law, and leads to direct relationships between the overall material properties, the local cohesive parameters and the mesh density. First, rigorous bounds on the normal and tangential cohesive stiffnesses are obtained, leading to a suitable control of the artificial loss of elastic stiffness induced by intrinsic cohesive models. Second, theoretical criteria on damageable and ductile cohesive parameters are established (cohesive peak stress, critical separation, cohesive failure energy, etc.). These criteria allow a practical calibration of the cohesive zone parameters as a function of the overall material properties and the mesh length. The main interest of such a calibration is its promising capacity to lead to a mesh-insensitive overall response in surface damage. (author) [fr]
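
    For illustration, a common intrinsic cohesive law is the bilinear traction-separation relation, characterized by an initial stiffness K, a peak stress sigma_c and a critical separation delta_f; the fracture energy is the area under the curve, G_c = sigma_c * delta_f / 2. The sketch below is the generic textbook form, not Blal's specific calibration, and shows the stiffness-controlled artificial compliance the abstract refers to.

      # Bilinear (intrinsic) cohesive traction-separation law: generic textbook form.
      # K controls the artificial initial compliance; G_c = 0.5 * sigma_c * delta_f.
      def bilinear_traction(delta, K=1e5, sigma_c=10.0, delta_f=0.01):
          """Traction [MPa] for an opening displacement delta [mm]."""
          delta_0 = sigma_c / K              # separation at peak stress
          if delta <= delta_0:               # elastic (pre-peak) branch
              return K * delta
          if delta < delta_f:                # linear softening branch
              return sigma_c * (delta_f - delta) / (delta_f - delta_0)
          return 0.0                         # fully separated

      Gc = 0.5 * 10.0 * 0.01                 # fracture energy per unit area
      print("G_c =", Gc, "MPa*mm")
      print("traction at delta=0.005:", bilinear_traction(0.005), "MPa")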

  10. Artificial Life of Soybean Plant Growth Modeling Using Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Atris Suyantohadi

    2010-03-01

    Full Text Available The natural process of plant growth is a complex system whose characteristics can be studied using intelligent approaches combined with an artificial-life system. In this research, the natural growth process of soybean (Glycine max L. Merr) was analyzed and synthesized through modeling using Artificial Neural Network (ANN) and Lindenmayer System (L-system) methods. The research aimed to design and visualize plant growth models for soybean varieties, which can support the study of plant botany in relation to fertilizer compositions of Nitrogen (N), Phosphorus (P) and Potassium (K). Soybean plant growth was analyzed under different fertilizer compositions in experimental trials in order to develop the growth model; the best N, P, K composition yielded the highest production of 2.074 tons/hectare. Using these models, an artificial-life simulation identifying and visualizing the characteristics of soybean plant growth could be demonstrated and applied.
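
    An L-system generates plant-like structure by repeatedly rewriting symbols with production rules. A minimal generic example follows; the axiom and rule are chosen for illustration and are not the soybean-specific grammar used in the study.

      # Minimal L-system rewriter. The axiom and rule below are a generic
      # plant-like example, not the soybean grammar from the study.
      def lsystem(axiom, rules, iterations):
          s = axiom
          for _ in range(iterations):
              s = "".join(rules.get(ch, ch) for ch in s)
          return s

      # 'F' = grow segment, '+'/'-' = turn, '[' ']' = push/pop branch state
      rules = {"F": "F[+F]F[-F]F"}
      print(lsystem("F", rules, 2))
      # After each pass every 'F' sprouts two side branches; a turtle-graphics
      # interpreter of the string draws the corresponding branching structure.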

  11. Renormalization group approach to a p-wave superconducting model

    International Nuclear Information System (INIS)

    Continentino, Mucio A.; Deus, Fernanda; Caldas, Heron

    2014-01-01

    We present in this work an exact renormalization group (RG) treatment of a one-dimensional p-wave superconductor. The model, proposed by Kitaev, consists of a chain of spinless fermions with a p-wave gap. It is a paradigmatic model of great current interest since it presents a weak pairing superconducting phase that has Majorana fermions at the ends of the chain. These are predicted to be useful for quantum computation. The RG treatment allows us to obtain the phase diagram of the model and to study the quantum phase transition from the weak to the strong pairing phase. It yields the attractors of these phases and the critical exponents of the weak-to-strong pairing transition. We show that the weak pairing phase of the model is governed by a chaotic attractor, which is non-trivial in both its topological and RG properties. In the strong pairing phase the RG flow is towards a conventional strong coupling fixed point. Finally, we propose an alternative way of obtaining p-wave superconductivity in a one-dimensional system without spin–orbit interaction.
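
    The bulk spectrum of the Kitaev chain makes the weak/strong pairing distinction concrete: in the standard textbook convention, E(k) = sqrt((2t cos k + mu)^2 + 4 Delta^2 sin^2 k), and the gap closes at |mu| = 2t, separating the topological (weak pairing) phase from the trivial (strong pairing) one. A quick numerical check, independent of the RG treatment above:

      # Bulk dispersion of the Kitaev p-wave chain (standard textbook form):
      # E(k) = sqrt((2 t cos k + mu)^2 + 4 Delta^2 sin(k)^2); gap closes at |mu| = 2t.
      import numpy as np

      def bulk_gap(mu, t=1.0, delta=0.5, nk=2001):
          k = np.linspace(-np.pi, np.pi, nk)
          E = np.sqrt((2 * t * np.cos(k) + mu) ** 2 + 4 * delta**2 * np.sin(k) ** 2)
          return E.min()

      for mu in [0.0, 1.0, 2.0, 3.0]:
          phase = "weak pairing (topological)" if abs(mu) < 2 else "strong pairing / critical"
          print(f"mu={mu}: gap={bulk_gap(mu):.4f}  ->  {phase}")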

  12. Vertically-integrated Approaches for Carbon Sequestration Modeling

    Science.gov (United States)

    Bandilla, K.; Celia, M. A.; Guo, B.

    2015-12-01

    Carbon capture and sequestration (CCS) is being considered as an approach to mitigate anthropogenic CO2 emissions from large stationary sources such as coal fired power plants and natural gas processing plants. Computer modeling is an essential tool for site design and operational planning as it allows prediction of the pressure response as well as the migration of both CO2 and brine in the subsurface. Many processes, such as buoyancy, hysteresis, geomechanics and geochemistry, can have important impacts on the system. While all of the processes can be taken into account simultaneously, the resulting models are computationally very expensive and require large numbers of parameters which are often uncertain or unknown. In many cases of practical interest, the computational and data requirements can be reduced by choosing a smaller domain and/or by neglecting or simplifying certain processes. This leads to a series of models with different complexity, ranging from coupled multi-physics, multi-phase three-dimensional models to semi-analytical single-phase models. Under certain conditions the three-dimensional equations can be integrated in the vertical direction, leading to a suite of two-dimensional multi-phase models, termed vertically-integrated models. These models are either solved numerically or simplified further (e.g., assumption of vertical equilibrium) to allow analytical or semi-analytical solutions. This presentation focuses on how different vertically-integrated models have been applied to the simulation of CO2 and brine migration during CCS projects. Several example sites, such as the Illinois Basin and the Wabamun Lake region of the Alberta Basin, are discussed to show how vertically-integrated models can be used to gain understanding of CCS operations.

  13. Numerical modelling of carbonate platforms and reefs: approaches and opportunities

    Energy Technology Data Exchange (ETDEWEB)

    Dalmasso, H.; Montaggioni, L.F.; Floquet, M. [Universite de Provence, Marseille (France). Centre de Sedimentologie-Palaeontologie; Bosence, D. [Royal Holloway University of London, Egham (United Kingdom). Dept. of Geology

    2001-07-01

    This paper compares different computing procedures that have been utilized in simulating shallow-water carbonate platform development. Based on our geological knowledge we can usually give a rather accurate qualitative description of the mechanisms controlling geological phenomena. Further description requires the use of computer stratigraphic simulation models that allow quantitative evaluation and understanding of the complex interactions of sedimentary depositional carbonate systems. The roles of modelling include: (1) encouraging accuracy and precision in data collection and process interpretation (Watney et al., 1999); (2) providing a means to quantitatively test interpretations concerning the control of various mechanisms on producing sedimentary packages; (3) predicting or extrapolating results into areas of limited control; (4) gaining new insights regarding the interaction of parameters; (5) helping focus on future studies to resolve specific problems. This paper addresses two main questions, namely: (1) What are the advantages and disadvantages of various types of models? (2) How well do models perform? In this paper we compare and discuss the application of six numerical models: CARBONATE (Bosence and Waltham, 1990), FUZZIM (Nordlund, 1999), CARBPLAT (Bosscher, 1992), DYNACARB (Li et al., 1993), PHIL (Bowman, 1997) and SEDPAK (Kendall et al., 1991). The comparison, testing and evaluation of these models allow one to gain a better knowledge and understanding of the controlling parameters of carbonate platform development, which are necessary for modelling. Evaluating numerical models, critically comparing results from models using different approaches, and pushing experimental tests to their limits, provide an effective vehicle to improve and develop new numerical models. A main feature of this paper is to closely compare the performance between two numerical models: a forward model (CARBONATE) and a fuzzy logic model (FUZZIM). These two models use common…

  14. Towards representing human behavior and decision making in Earth system models - an overview of techniques and approaches

    Science.gov (United States)

    Müller-Hansen, Finn; Schlüter, Maja; Mäs, Michael; Donges, Jonathan F.; Kolb, Jakob J.; Thonicke, Kirsten; Heitzig, Jobst

    2017-11-01

    Today, humans have a critical impact on the Earth system and vice versa, which can generate complex feedback processes between social and ecological dynamics. Integrating human behavior into formal Earth system models (ESMs), however, requires crucial modeling assumptions about actors and their goals, behavioral options, and decision rules, as well as modeling decisions regarding human social interactions and the aggregation of individuals' behavior. Here, we review existing modeling approaches and techniques from various disciplines and schools of thought dealing with human behavior at different levels of decision making. We demonstrate modelers' often vast degrees of freedom but also seek to make modelers aware of the often crucial consequences of seemingly innocent modeling assumptions. After discussing which socioeconomic units are potentially important for ESMs, we compare models of individual decision making that correspond to alternative behavioral theories and that make diverse modeling assumptions about individuals' preferences, beliefs, decision rules, and foresight. We review approaches to model social interaction, covering game theoretic frameworks, models of social influence, and network models. Finally, we discuss approaches to studying how the behavior of individuals, groups, and organizations can aggregate to complex collective phenomena, discussing agent-based, statistical, and representative-agent modeling and economic macro-dynamics. We illustrate the main ingredients of modeling techniques with examples from land-use dynamics as one of the main drivers of environmental change bridging local to global scales.

  15. Medical Inpatient Journey Modeling and Clustering: A Bayesian Hidden Markov Model Based Approach.

    Science.gov (United States)

    Huang, Zhengxing; Dong, Wei; Wang, Fei; Duan, Huilong

    2015-01-01

    Modeling and clustering medical inpatient journeys is useful to healthcare organizations for a number of reasons, including reorganizing inpatient journeys in a form that is more convenient for understanding and browsing. In this study, we present a probabilistic model-based approach to model and cluster medical inpatient journeys. Specifically, we exploit a Bayesian Hidden Markov Model based approach to transform medical inpatient journeys into a probabilistic space, which can be seen as a richer representation of the inpatient journeys to be clustered. Then, using hierarchical clustering on the matrix of similarities, inpatient journeys can be clustered into different categories with respect to their clinical and temporal characteristics. We evaluated the proposed approach on a real clinical data set pertaining to the unstable angina treatment process. The experimental results reveal that our method can identify and model latent treatment topics underlying personalized inpatient journeys, and yields impressive clustering quality.
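
    The general recipe, embedding sequences in a probabilistic space via HMM likelihoods and then clustering the similarity matrix hierarchically, can be sketched with off-the-shelf tools. The snippet below is an illustrative stand-in, not the paper's Bayesian formulation: it assumes the hmmlearn library, uses maximum-likelihood Gaussian HMMs, and the "journeys" are random placeholder sequences.

      # Sketch of HMM-based journey clustering: fit one HMM per journey, use
      # pairwise log-likelihoods as similarities, then cluster hierarchically.
      import numpy as np
      from hmmlearn import hmm
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      rng = np.random.default_rng(1)
      journeys = [rng.normal(size=(30, 2)) + shift for shift in (0, 0, 3, 3)]

      models = []
      for obs in journeys:
          m = hmm.GaussianHMM(n_components=2, n_iter=50, random_state=0)
          m.fit(obs)
          models.append(m)

      n = len(journeys)
      sim = np.array([[models[i].score(journeys[j]) for j in range(n)]
                      for i in range(n)])
      dist = sim.max() - 0.5 * (sim + sim.T)      # symmetrize, turn into distances
      np.fill_diagonal(dist, 0.0)
      labels = fcluster(linkage(squareform(dist), "average"), t=2, criterion="maxclust")
      print("cluster labels:", labels)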

  16. Modelling of capital requirements in the energy sector: capital market access. Final memorandum

    Energy Technology Data Exchange (ETDEWEB)

    1978-04-01

    Formal modelling techniques for analyzing the capital requirements of energy industries have been examined at DOE. A survey has been undertaken of a number of models which forecast energy-sector capital requirements or which detail the interactions of the energy sector and the economy. Models are identified which can be useful as prototypes for some portion of DOE's modelling needs. The models are examined to determine any useful data bases which could serve as inputs to an original DOE model. A selected group of models that comply with the stated capabilities is examined. The data sources used by these models are covered and a catalog of the relevant data bases is provided. The models covered are: capital markets and capital availability models (Fossil 1, Bankers Trust Co., DRI Macro Model); models of physical capital requirements (Bechtel Supply Planning Model, ICF Oil and Gas Model and Coal Model, Stanford Research Institute National Energy Model); macroeconomic forecasting models with input-output analysis capabilities (Wharton Annual Long-Term Forecasting Model, Brookhaven/University of Illinois Model, Hudson-Jorgenson/Brookhaven Model); utility models (MIT Regional Electricity Model-Baughman Joskow, Teknekron Electric Utility Simulation Model); and others (DRI Energy Model, DRI/Zimmerman Coal Model, and Oak Ridge Residential Energy Use Model).

  17. Modeling the cometary environment using a fluid approach

    Science.gov (United States)

    Shou, Yinsi

    Comets are believed to have preserved the building material of the early solar system and to hold clues to the origin of life on Earth. Abundant remote observations of comets by telescopes and the in-situ measurements by a handful of space missions reveal that the cometary environments are complicated by various physical and chemical processes among the neutral gases and dust grains released from comets, cometary ions, and the solar wind in the interplanetary space. Therefore, physics-based numerical models are in demand to interpret the observational data and to deepen our understanding of the cometary environment. In this thesis, three models using a fluid approach, which include important physical and chemical processes underlying the cometary environment, have been developed to study the plasma, neutral gas, and the dust grains, respectively. Although models based on the fluid approach have limitations in capturing all of the correct physics for certain applications, especially for very low gas density environment, they are computationally much more efficient than alternatives. In the simulations of comet 67P/Churyumov-Gerasimenko at various heliocentric distances with a wide range of production rates, our multi-fluid cometary neutral gas model and multi-fluid cometary dust model have achieved comparable results to the Direct Simulation Monte Carlo (DSMC) model, which is based on a kinetic approach that is valid in all collisional regimes. Therefore, our model is a powerful alternative to the particle-based model, especially for some computationally intensive simulations. Capable of accounting for the varying heating efficiency under various physical conditions in a self-consistent way, the multi-fluid cometary neutral gas model is a good tool to study the dynamics of the cometary coma with different production rates and heliocentric distances. The modeled H2O expansion speeds reproduce the general trend and the speed's nonlinear dependencies of production rate

  18. International energy market dynamics: a modelling approach. Tome 2

    International Nuclear Information System (INIS)

    Nachet, S.

    1996-01-01

    This work is an attempt to model the international energy market and reproduce the behaviour of both energy demand and supply. Energy demand is represented using a sector-versus-source approach. For developing countries, the existing links between the economic and energy sectors were analysed. Energy supply is exogenous for energy sources other than oil and natural gas. For hydrocarbons, the exploration-production process was modelled, producing figures such as production yield, exploration effort index, etc. The model built is econometric and is solved using software constructed for this purpose. We explore the energy market future using three scenarios and obtain projections to 2010 for energy demand per source and oil and natural gas supply per region. Economic variables are used to produce different indicators such as energy intensity, energy per capita, etc. (author). 378 refs., 26 figs., 35 tabs., 11 appends

  19. International energy market dynamics: a modelling approach. Tome 1

    International Nuclear Information System (INIS)

    Nachet, S.

    1996-01-01

    This work is an attempt to model the international energy market and reproduce the behaviour of both energy demand and supply. Energy demand is represented using a sector-versus-source approach. For developing countries, the existing links between the economic and energy sectors were analysed. Energy supply is exogenous for energy sources other than oil and natural gas. For hydrocarbons, the exploration-production process was modelled, producing figures such as production yield, exploration effort index, etc. The model built is econometric and is solved using software constructed for this purpose. We explore the energy market future using three scenarios and obtain projections to 2010 for energy demand per source and oil and natural gas supply per region. Economic variables are used to produce different indicators such as energy intensity, energy per capita, etc. (author). 378 refs., 26 figs., 35 tabs., 11 appends

  20. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are as yet unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.
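
    The idea of farming out independent estimation runs can be sketched compactly. The snippet below approximates the paper's grid-plus-database setup with a local process pool; the model, data and starting-point strategy are toy stand-ins, not the authors' system.

      # Sketch of distributing parameter-estimation runs over workers.
      import numpy as np
      from multiprocessing import Pool
      from scipy.optimize import least_squares

      t_obs = np.linspace(0, 10, 20)
      y_obs = 2.0 * np.exp(-0.3 * t_obs)          # synthetic "experimental" data

      def fit_from_start(theta0):
          """One estimation run: local least-squares from a random starting point."""
          resid = lambda th: th[0] * np.exp(-th[1] * t_obs) - y_obs
          sol = least_squares(resid, theta0, bounds=([0, 0], [10, 5]))
          return sol.cost, sol.x

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          starts = [rng.uniform([0, 0], [10, 5]) for _ in range(16)]
          with Pool(4) as pool:                   # each run on a different worker
              results = pool.map(fit_from_start, starts)
          best_cost, best_theta = min(results, key=lambda r: r[0])
          print("best parameters:", best_theta, "cost:", best_cost)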

  1. New business models for electric cars-A holistic approach

    International Nuclear Information System (INIS)

    Kley, Fabian; Lerch, Christian; Dallinger, David

    2011-01-01

    Climate change and global resource shortages have led to rethinking traditional individual mobility services based on combustion engines. As a consequence of technological improvements, the first electric vehicles are now being introduced and greater market penetration can be expected. But any wider implementation of battery-powered electrical propulsion systems in the future will give rise to new challenges for both the traditional automotive industry and other new players, e.g. battery manufacturers, the power supply industry and other service providers. Different application cases of electric vehicles are currently being discussed, which means that numerous business models could emerge, leading to new shares in value creation and involving new players. Consequently, individual stakeholders are uncertain about which business models are really effective with regard to targeting a profitable overall concept. Therefore, this paper aims to define a holistic approach to developing business models for electric mobility, which analyzes the system as a whole on the one hand and provides decision support for affected enterprises on the other. To do so, the basic elements of electric mobility are considered and topical approaches to business models for various stakeholders are discussed. The paper concludes by presenting a systemic instrument for business models based on morphological methods. - Highlights: → We present a systemic instrument to analyze business models for electric vehicles. → We provide decision support for enterprises dealing with electric vehicle innovations. → We combine business aspects of the triad of vehicle concepts, infrastructure and system integration. → In the market, activities in all domains have been initiated, but often with undefined or unclear structures.

  2. Authoring and verification of clinical guidelines: a model driven approach.

    Science.gov (United States)

    Pérez, Beatriz; Porres, Ivan

    2010-08-01

    The goal of this research is to provide a framework to enable authoring and verification of clinical guidelines. The framework is part of a larger research project aimed at improving the representation, quality and application of clinical guidelines in daily clinical practice. The verification process of a guideline is based on (1) model checking techniques to verify guidelines against semantic errors and inconsistencies in their definition, (2) combined with Model Driven Development (MDD) techniques, which enable us to automatically process manually created guideline specifications and temporal-logic statements to be checked and verified against these specifications, making the verification process faster and cost-effective. In particular, we use UML statecharts to represent the dynamics of guidelines and, based on these manually defined guideline specifications, we use an MDD-based tool chain to automatically process them to generate the input model of a model checker. The model checker takes the resulting model together with the specific guideline requirements, and verifies whether the guideline fulfils such properties. The overall framework has been implemented as an Eclipse plug-in named GBDSSGenerator which, starting from the UML statechart representing a guideline, allows verification of the guideline against specific requirements. Additionally, we have established a pattern-based approach for defining commonly occurring types of requirements in guidelines. We have successfully validated our overall approach by verifying properties in different clinical guidelines, resulting in the detection of some inconsistencies in their definition. The proposed framework allows (1) the authoring and (2) the verification of clinical guidelines against specific requirements defined on the basis of a set of property specification patterns, enabling non-experts to easily write formal specifications and thus easing the verification process.
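
    Requirements in such pattern-based approaches are typically instances of property-specification patterns. For example, a "response" pattern stating that a prescribed drug must eventually be administered could be written in CTL as follows; this is an illustrative property, not one taken from the paper:

      % Illustrative CTL "response" pattern (hypothetical guideline property):
      % on every path, whenever a drug is prescribed it is eventually administered.
      \[
        AG\,\bigl(\mathit{prescribed} \rightarrow AF\,\mathit{administered}\bigr)
      \]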

  3. Leader communication approaches and patient safety: An integrated model.

    Science.gov (United States)

    Mattson, Malin; Hellgren, Johnny; Göransson, Sara

    2015-06-01

    Leader communication is known to influence a number of employee behaviors. When it comes to the relationship between leader communication and safety, the evidence is scarcer and more ambiguous. The aim of the present study is to investigate whether and in what way leader communication relates to safety outcomes. The study examines two leader communication approaches: leader safety priority communication and feedback to subordinates. These approaches were assumed to affect safety outcomes via different employee behaviors. Questionnaire data, collected from 221 employees at two hospital wards, were analyzed using structural equation modeling. The two examined communication approaches were both positively related to safety outcomes, although leader safety priority communication was mediated by employee compliance and feedback communication by organizational citizenship behaviors. The findings suggest that leader communication plays a vital role in improving organizational and patient safety and that different communication approaches seem to positively affect different but equally essential employee safety behaviors. The results highlight the necessity for leaders to engage in one-way communication of safety values as well as in more relational feedback communication with their subordinates in order to enhance patient safety.

  4. A probabilistic approach to the drag-based model

    Science.gov (United States)

    Napoletano, Gianluca; Forte, Roberta; Moro, Dario Del; Pietropaolo, Ermanno; Giovannelli, Luca; Berrilli, Francesco

    2018-02-01

    The forecast of the time of arrival (ToA) of a coronal mass ejection (CME) at Earth is of critical importance for our high-technology society and for any future manned exploration of the Solar System. As critical as the forecast accuracy is the knowledge of its precision, i.e. the error associated with the estimate. We propose a statistical approach to the computation of the ToA using the drag-based model, by introducing probability distributions, rather than exact values, as input parameters, thus allowing the evaluation of the uncertainty on the forecast. We test this approach using a set of CMEs whose transit times are known, and obtain extremely promising results: the average of the absolute differences between measurement and forecast is 9.1 h, and half of these residuals are within the estimated errors. These results suggest that this approach deserves further investigation. We are working to realize a real-time implementation which ingests the outputs of automated CME tracking algorithms as inputs, to create a database of events useful for further validation of the approach.
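
    The drag-based model propagates a CME with dv/dt = -Gamma (v - w)|v - w|, where w is the ambient solar-wind speed and Gamma the drag parameter; the probabilistic variant samples Gamma and w from distributions instead of fixing them. A compact Monte Carlo sketch follows, with illustrative input distributions rather than the paper's fitted ones:

      # Probabilistic drag-based model sketch: sample the drag parameter and solar-
      # wind speed, integrate dv/dt = -Gamma (v - w)|v - w| out to 1 AU, and collect
      # the distribution of arrival times. Input distributions are illustrative.
      import numpy as np
      from scipy.integrate import solve_ivp

      AU = 1.496e8                       # km
      r0, v0 = 20 * 6.957e5, 1000.0      # start at 20 solar radii [km], 1000 km/s

      def time_of_arrival(gamma, w):
          def rhs(t, y):                 # y = [r, v]
              return [y[1], -gamma * (y[1] - w) * abs(y[1] - w)]
          hit = lambda t, y: y[0] - AU   # stop when the CME front reaches 1 AU
          hit.terminal, hit.direction = True, 1
          sol = solve_ivp(rhs, (0, 3e6), [r0, v0], events=hit, max_step=3600)
          return sol.t_events[0][0] / 3600.0          # hours

      rng = np.random.default_rng(7)
      toas = [time_of_arrival(g, w)
              for g, w in zip(rng.normal(0.2e-7, 0.05e-7, 200),   # Gamma [1/km]
                              rng.normal(400.0, 50.0, 200))]      # w [km/s]
      print(f"ToA = {np.mean(toas):.1f} +/- {np.std(toas):.1f} h")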

  5. Novel approach for modeling separation forces between deformable bodies.

    Science.gov (United States)

    Mahvash, Mohsen

    2006-07-01

    Many minimally invasive surgeries (MISs) involve removing whole organs or tumors that are connected to other organs. Development of haptic simulators that reproduce separation forces between organs can help surgeons learn MIS procedures. Powerful computational approaches such as finite-element methods generally cannot simulate separation in real time. This paper presents a novel approach for real-time computation of separation forces between deformable bodies. Separation occurs either due to fracture, when a tool applies extensive forces to the bodies, or due to evaporation, when a laser beam burns the connection between the bodies. The separation forces are generated online from precalculated force-displacement functions that depend on the local adhesion/separation states between bodies. The precalculated functions are accurately synthesized from a large number of force responses obtained through either offline simulation, measurement, or analytical approximation during the preprocessing step. The approach does not require online computation of force versus global deformation to obtain separation forces; only online interpolation of precalculated responses is required. The states of adhesion/separation during fracture and evaporation are updated by computationally simple models, which are derived from the law of conservation of energy. An implementation of the approach for the haptic simulation of the removal of a diseased organ is presented, showing the fidelity of the simulation.
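
    The core idea, returning forces by interpolating precomputed force-displacement curves keyed to an adhesion state, and updating that state from an energy budget, can be sketched as follows. The curves and thresholds below are toy values; in the actual approach the responses would come from offline simulation or measurement.

      # Sketch: online force lookup from precalculated force-displacement responses,
      # with an energy-based adhesion-state update. Curves/thresholds are toy values.
      import numpy as np
      from scipy.interpolate import interp1d

      disp = np.linspace(0.0, 10.0, 50)                  # mm
      # Precalculated responses for each adhesion state (offline step, here faked):
      curves = {
          "attached":  interp1d(disp, 0.8 * disp, fill_value="extrapolate"),
          "separated": interp1d(disp, 0.0 * disp, fill_value="extrapolate"),
      }
      G_adhesion = 15.0                                  # energy to separate [mJ]

      state, work, prev_d = "attached", 0.0, 0.0
      for d in np.linspace(0, 8, 200):                   # tool pulls on the organ
          f = float(curves[state](d))                    # online: interpolation only
          work += f * (d - prev_d)                       # accumulate deformation work
          if state == "attached" and work >= G_adhesion:
              state = "separated"                        # fracture via energy balance
          prev_d = d
      print("final state:", state, "| work done:", round(work, 1), "mJ")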

  6. Fast Pyrolysis Oil Stabilization: An Integrated Catalytic and Membrane Approach for Improved Bio-oils. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Huber, George W.; Upadhye, Aniruddha A.; Ford, David M.; Bhatia, Surita R.; Badger, Phillip C.

    2012-10-19

    This University of Massachusetts, Amherst project, "Fast Pyrolysis Oil Stabilization: An Integrated Catalytic and Membrane Approach for Improved Bio-oils", started on 1 February 2009 and finished on 31 August 2011. The project consisted of the following tasks: Task 1.0: Char Removal by Membrane Separation Technology. The presence of char particles in the bio-oil causes problems in storage and end-use. Currently there is no well-established technology to remove char particles less than 10 microns in size. This study focused on the application of a liquid-phase microfiltration process to remove char particles from bio-oil down to slightly sub-micron levels. Tubular ceramic membranes of nominal pore sizes 0.5 and 0.8 μm were employed to carry out the microfiltration, which was conducted in cross-flow mode at temperatures ranging from 38 to 45 °C and at three different trans-membrane pressures varying from 1 to 3 bar. The results demonstrated the removal of the major quantity of char particles with a significant reduction in the overall ash content of the bio-oil. The results clearly showed that the cake-formation mechanism of fouling is predominant in this process. Task 2.0: Acid Removal by Membrane Separation Technology. The feasibility of removing small organic acids from the aqueous fraction of fast pyrolysis bio-oils using nanofiltration (NF) and reverse osmosis (RO) membranes was studied. Experiments were carried out with single-solute solutions of acetic acid and glucose, binary-solute solutions containing both acetic acid and glucose, and a model aqueous fraction of bio-oil (AFBO). Retention factors above 90% for glucose and below 0% for acetic acid were observed at feed pressures near 40 bar for single- and binary-solute solutions, so that their separation in the model AFBO was expected to be feasible. However, all of the membranes were irreversibly damaged when experiments were conducted with the model AFBO, due to the presence of guaiacol in the feed solution. Experiments…

  7. The Infant Care Project: A Mother-Child Intervention Model Directed at Cocaine Use during Pregnancy. Final Report.

    Science.gov (United States)

    O'Donnell, Karen J.; And Others

    This final report discusses the outcomes of the federally funded Infant Care Project (ICP) that provided comprehensive and continuous services to 99 women who had used cocaine during pregnancy and their infants. The ICP model combined high risk obstetric care, infant and child development services, and substance abuse services on site in the…

  8. Foundation Heat Exchanger Final Report: Demonstration, Measured Performance, and Validated Model and Design Tool

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, Patrick [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Im, Piljae [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)

    2012-01-01

    Geothermal heat pumps, sometimes called ground-source heat pumps (GSHPs), have been proven capable of significantly reducing energy use and peak demand in buildings. Conventional equipment for controlling the temperature and humidity of a building, or supplying hot water and fresh outdoor air, must exchange energy (or heat) with the building's outdoor environment. Equipment using the ground as a heat source and heat sink consumes less non-renewable energy (electricity and fossil fuels) because the earth is cooler than outdoor air in summer and warmer in winter. The most important barrier to rapid growth of the GSHP industry is high first cost of GSHP systems to consumers. The most common GSHP system utilizes a closed-loop ground heat exchanger. This type of GSHP system can be used almost anywhere. There is reason to believe that reducing the cost of closed-loop systems is the strategy that would achieve the greatest energy savings with GSHP technology. The cost premium of closed-loop GSHP systems over conventional space conditioning and water heating systems is primarily associated with drilling boreholes or excavating trenches, installing vertical or horizontal ground heat exchangers, and backfilling the excavations. This project investigates reducing the cost of horizontal closed-loop ground heat exchangers by installing them in the construction excavations, augmented when necessary with additional trenches. This approach applies only to new construction of residential and light commercial buildings or additions to such buildings. In the business-as-usual scenario, construction excavations are not used for the horizontal ground heat exchanger (HGHX); instead the HGHX is installed entirely in trenches dug specifically for that purpose. The potential cost savings comes from using the construction excavations for the installation of ground heat exchangers, thereby minimizing the need and expense of digging additional trenches. The term foundation heat exchanger

  9. Foundation Heat Exchanger Final Report: Demonstration, Measured Performance, and Validated Model and Design Tool

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, Patrick [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Im, Piljae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2012-04-01

    Geothermal heat pumps, sometimes called ground-source heat pumps (GSHPs), have been proven capable of significantly reducing energy use and peak demand in buildings. Conventional equipment for controlling the temperature and humidity of a building, or supplying hot water and fresh outdoor air, must exchange energy (or heat) with the building's outdoor environment. Equipment using the ground as a heat source and heat sink consumes less non-renewable energy (electricity and fossil fuels) because the earth is cooler than outdoor air in summer and warmer in winter. The most important barrier to rapid growth of the GSHP industry is high first cost of GSHP systems to consumers. The most common GSHP system utilizes a closed-loop ground heat exchanger. This type of GSHP system can be used almost anywhere. There is reason to believe that reducing the cost of closed-loop systems is the strategy that would achieve the greatest energy savings with GSHP technology. The cost premium of closed-loop GSHP systems over conventional space conditioning and water heating systems is primarily associated with drilling boreholes or excavating trenches, installing vertical or horizontal ground heat exchangers, and backfilling the excavations. This project investigates reducing the cost of horizontal closed-loop ground heat exchangers by installing them in the construction excavations, augmented when necessary with additional trenches. This approach applies only to new construction of residential and light commercial buildings or additions to such buildings. In the business-as-usual scenario, construction excavations are not used for the horizontal ground heat exchanger (HGHX); instead the HGHX is installed entirely in trenches dug specifically for that purpose. The potential cost savings comes from using the construction excavations for the installation of ground heat exchangers, thereby minimizing the need and expense of digging additional trenches. The term foundation heat exchanger

  10. A chain reaction approach to modelling gene pathways.

    Science.gov (United States)

    Cheng, Gary C; Chen, Dung-Tsa; Chen, James J; Soong, Seng-Jaw; Lamartiniere, Coral; Barnes, Stephen

    2012-08-01

    BACKGROUND: Of great interest in cancer prevention is how nutrient components affect gene pathways associated with the physiological events of puberty. Nutrient-gene interactions may cause changes in breast or prostate cells and, therefore, may result in cancer risk later in life. Analysis of gene pathways can lead to insights about nutrient-gene interactions and the development of more effective prevention approaches to reduce cancer risk. To date, researchers have relied heavily upon experimental assays (such as microarray analysis, etc.) to identify genes and their associated pathways that are affected by nutrients and diets. However, the vast number of genes and combinations of gene pathways, coupled with the expense of the experimental analyses, has delayed the progress of gene-pathway research. The development of an analytical approach based on available test data could greatly benefit the evaluation of gene pathways, and thus advance the study of nutrient-gene interactions in cancer prevention. In the present study, we have proposed a chain reaction model to simulate gene pathways, in which the gene expression changes through the pathway are represented by species undergoing a set of chemical reactions. We have also developed a numerical tool to solve for the species changes due to the chain reactions over time. Through this approach we can examine the impact of nutrient-containing diets on the gene pathway; moreover, the transformation of genes over time with a nutrient treatment can be observed numerically, which is very difficult to achieve experimentally. We apply this approach to microarray analysis data from an experiment which involved the effects of three polyphenols (nutrient treatments), epigallo-catechin-3-O-gallate (EGCG), genistein, and resveratrol, in a study of nutrient-gene interaction in the estrogen synthesis pathway during puberty. RESULTS: In this preliminary study, the estrogen synthesis pathway was simulated by a chain reaction model. By…
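
    The chain-reaction idea, representing expression changes along a pathway as species concentrations evolving through sequential reactions, can be sketched with mass-action ODEs. The example below is a generic A -> B -> C cascade; the rate constants are illustrative, not fitted to the polyphenol data.

      # Generic chain-reaction sketch: a nutrient signal drives a cascade
      # A -> B -> C with mass-action kinetics; rates are illustrative only.
      import numpy as np
      from scipy.integrate import solve_ivp

      k1, k2 = 0.8, 0.3            # reaction rate constants [1/h]

      def cascade(t, y):
          A, B, C = y
          return [-k1 * A,             # upstream gene product consumed
                  k1 * A - k2 * B,     # intermediate produced, then converted
                  k2 * B]              # downstream product accumulates

      sol = solve_ivp(cascade, (0, 24), [1.0, 0.0, 0.0],
                      t_eval=np.linspace(0, 24, 9))
      for t, a, b, c in zip(sol.t, *sol.y):
          print(f"t={t:4.1f} h  A={a:.3f}  B={b:.3f}  C={c:.3f}")
      # The time course of each "species" stands in for the transformation of
      # gene expression along the pathway under a nutrient treatment.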

  11. Final Report - Spacially-Resolved Diagnostics and Modeling of Micro-Discharges

    International Nuclear Information System (INIS)

    Donnelly, Vincent M.; Economou, Demetre J.

    2012-01-01

    … predictions. Finally, laser scattering experiments were performed at pressures of hundreds of Torr in argon or nitrogen. Laser Thomson Scattering (LTS) and Rotational Raman Scattering were employed in a novel, backscattering, confocal configuration. LTS allows direct and simultaneous measurement of both the electron density (n_e) and the electron temperature (T_e). For 50 mA current and over the pressure range of 300-700 Torr, LTS yielded T_e = 0.9 ± 0.3 eV and n_e = (6 ± 3) × 10^13 cm^-3, in reasonable agreement with the predictions of a mathematical model. Rotational Raman spectroscopy (RRS) was employed for absolute calibration of the LTS signal. RRS was also applied to measure the 3D gas temperature (T_g) in nitrogen DC microdischarges. In addition, diode laser absorption spectroscopy was employed to measure the density of argon metastables (1s5 in Paschen notation) in argon microdischarges. The gas temperature, extracted from the width of the absorption profile, was compared with T_g values obtained by optical emission spectroscopy.

  12. Modeling of problems of projection: A non-countercyclic approach

    Directory of Open Access Journals (Sweden)

    Jason Ginsburg

    2016-06-01

    Full Text Available This paper describes a computational implementation of the recent Problems of Projection (POP) approach to the study of language (Chomsky 2013; 2015). While adopting the basic proposals of POP, notably with respect to how labeling occurs, we (a) attempt to formalize the basic proposals of POP, and (b) develop new proposals that overcome some problems with POP that arise with respect to cyclicity, labeling, and wh-movement operations. We show how this approach accounts for simple declarative sentences, ECM constructions, and constructions that involve long-distance movement of a wh-phrase (including the that-trace effect). We implemented these proposals with a computer model that automatically constructs step-by-step derivations of target sentences, thus making it possible to verify that these proposals work.

  13. Model predictive control approach for a CPAP-device

    Directory of Open Access Journals (Sweden)

    Scheel Mathias

    2017-09-01

    Full Text Available The obstructive sleep apnoea syndrome (OSAS) is characterized by a collapse of the upper respiratory tract, resulting in a reduction of the blood oxygen concentration and an increase of the carbon dioxide (CO2) concentration, which causes repeated sleep disruptions. The gold standard for treating the OSAS is continuous positive airway pressure (CPAP) therapy. The continuous pressure keeps the upper airway open and prevents the collapse of the upper respiratory tract and the pharynx. Most of the available CPAP devices cannot maintain the pressure reference [1]. In this work a model predictive control approach is provided. This control approach makes it possible to include the patient's breathing effort in the calculation of the control variable. Therefore a patient-individualized control strategy can be developed.
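
    A model predictive controller repeatedly solves a finite-horizon optimization for the blower pressure, tracking a reference while respecting actuator limits. The toy sketch below uses a first-order linear pressure model and the cvxpy library; the plant coefficients, horizon and weights are illustrative assumptions, with a disturbance term standing in for the patient's breathing effort.

      # Toy MPC sketch for pressure tracking: first-order plant x+ = a x + b u + d,
      # where d stands in for the patient's breathing effort. Illustrative values.
      import numpy as np
      import cvxpy as cp

      a, b = 0.9, 0.5              # assumed plant coefficients
      N = 10                       # prediction horizon
      p_ref = 8.0                  # reference pressure [cmH2O]

      def mpc_step(x0, d_forecast):
          u = cp.Variable(N)
          x = cp.Variable(N + 1)
          cost = cp.sum_squares(x[1:] - p_ref) + 0.1 * cp.sum_squares(u)
          cons = [x[0] == x0, u >= 0, u <= 4]
          for k in range(N):
              cons.append(x[k + 1] == a * x[k] + b * u[k] + d_forecast[k])
          cp.Problem(cp.Minimize(cost), cons).solve()
          return u.value[0]        # apply only the first move (receding horizon)

      x = 5.0
      for step in range(20):
          d = 0.3 * np.sin(0.5 * (step + np.arange(N)))   # forecast breathing effort
          x = a * x + b * mpc_step(x, d) + d[0]
      print("pressure after 20 steps:", round(x, 2), "cmH2O")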

  14. Optimizing nitrogen fertilizer use: Current approaches and simulation models

    International Nuclear Information System (INIS)

    Baethgen, W.E.

    2000-01-01

    Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically result in lower economic benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N2O and NO. In spite of the economic and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing and plant analysis (including quick tests). The paper introduces the application of simulation models that complement traditional approaches, and includes some examples of current applications in Africa and South America. (author)

  15. Anomalous superconductivity in the tJ model; moment approach

    DEFF Research Database (Denmark)

    Sørensen, Mads Peter; Rodriguez-Nunez, J.J.

    1997-01-01

    By extending the moment approach of Nolting (Z, Phys, 225 (1972) 25) in the superconducting phase, we have constructed the one-particle spectral functions (diagonal and off-diagonal) for the tJ model in any dimensions. We propose that both the diagonal and the off-diagonal spectral functions...... Hartree shift which in the end result enlarges the bandwidth of the free carriers allowing us to take relative high values of J/t and allowing superconductivity to live in the T-c-rho phase diagram, in agreement with numerical calculations in a cluster, We have calculated the static spin susceptibility......, chi(T), and the specific heat, C-v(T), within the moment approach. We find that all the relevant physical quantities show the signature of superconductivity at T-c in the form of kinks (anomalous behavior) or jumps, for low density, in agreement with recent published literature, showing a generic...

  16. CM5: A pre-Swarm magnetic field model based upon the comprehensive modeling approach

    DEFF Research Database (Denmark)

    Sabaka, T.; Olsen, Nils; Tyler, Robert

    2014-01-01

    We have developed a model based upon the very successful Comprehensive Modeling (CM) approach, using recent CHAMP, Ørsted, SAC-C and observatory hourly-mean data from September 2000 to the end of 2013. This CM, called CM5, was derived from the algorithm that will provide a consistent line of Level…

  17. A multi-region approach to modeling subsurface flow

    International Nuclear Information System (INIS)

    Gwo, J.P.; Yeh, G.T.; Wilson, G.V.

    1990-01-01

    In this approach the media are assumed to contain n pore-regions at any physical point. Each region has a different pore size and different hydrologic parameters. Inter-region exchange is approximated by a linear transfer process. Based on the mass balance principle, a system of equations governing the flow and mass exchange in structured or aggregated soils is derived. This system of equations is coupled through linear transfer terms representing the interchange among different pore regions. A numerical MUlti-Region Flow (MURF) model, using the Galerkin finite element method to facilitate the treatment of local and field-scale heterogeneities, is developed to solve the system of equations. A sparse matrix solver is used to solve the resulting matrix equation, which makes the application of MURF to large field problems feasible in terms of CPU time and storage limitations. MURF is first verified by applying it to a ponding infiltration problem over a hill slope, which is a single-region problem and has been previously simulated by a single-region model. Very good agreement is obtained between the results from the two different models; the MURF code is thus partially verified. It is then applied to a two-region fractured medium to investigate the effects of the multi-region approach on the flow field. The results are comparable to those obtained by other investigators. (Author) (15 refs., 6 figs., tab.)

  18. Spintronic device modeling and evaluation using modular approach to spintronics

    Science.gov (United States)

    Ganguly, Samiran

    Spintronics technology finds itself in an exciting stage today. Riding on the backs of rapid growth and impressive advances in materials and phenomena, it has started to make headway in the memory industry as solid state magnetic memories (STT-MRAM) and is considered a possible candidate to replace the CMOS when its scaling reaches physical limits. It is necessary to bring all these advances together in a coherent fashion to explore and evaluate the potential of spintronic devices. This work creates a framework for this exploration and evaluation based on Modular Approach to Spintronics, which encapsulate the physics of transport of charge and spin through materials and the phenomenology of magnetic dynamics and interaction in benchmarked elemental modules. These modules can then be combined together to form spin-circuit models of complex spintronic devices and structures which can be simulated using SPICE like circuit simulators. In this work we demonstrate how Modular Approach to Spintronics can be used to build spin-circuit models of functional spintronic devices of all types: memory, logic, and oscillators. We then show how Modular Approach to Spintronics can help identify critical factors behind static and dynamic dissipation in spintronic devices and provide remedies by exploring the use of various alternative materials and phenomena. Lastly, we show the use of Modular Approach to Spintronics in exploring new paradigms of computing enabled by the inherent physics of spintronic devices. We hope that this work will encourage more research and experiments that will establish spintronics as a viable technology for continued advancement of electronics.

  19. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Stinis, Panos [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-08-07

    This is the final report for the work conducted at the University of Minnesota (during the period 12/01/12-09/18/14) by PI Panos Stinis as part of the "Collaboratory on Mathematics for Mesoscopic Modeling of Materials" (CM4). CM4 is a multi-institution DOE-funded project whose aim is to conduct basic and applied research in the emerging field of mesoscopic modeling of materials.

  20. A parsimonious approach to modeling animal movement data.

    Directory of Open Access Journals (Sweden)

    Yann Tremblay

    Full Text Available Animal tracking is a growing field in ecology, and previous work has shown that simple speed filtering of tracking data is not sufficient and that improvement of tracking location estimates is possible. To date, this has required methods that are complicated and often time-consuming (state-space models), resulting in limited application of this technique and the potential for analysis errors due to poor understanding of the fundamental framework behind the approach. We describe and test an alternative and intuitive approach consisting of bootstrapping random walks biased by forward particles. The model uses recorded data accuracy estimates, and can assimilate other sources of data such as sea-surface temperature, bathymetry and/or physical boundaries. We tested our model using ARGOS and geolocation tracks of elephant seals that also carried GPS tags in addition to PTTs, enabling true validation. Among pinnipeds, elephant seals are extreme divers that spend little time at the surface, which considerably impacts the quality of both ARGOS and light-based geolocation tracks. Despite such low overall track quality, our model provided location estimates within 4.0, 5.5 and 12.0 km of the true location 50% of the time, and within 9, 10.5 and 20.0 km 90% of the time, for above-average, average and below-average elephant seal ARGOS track qualities, respectively. With geolocation data, 50% of errors were less than 104.8 km (<0.94 degrees), and 90% were less than 199.8 km (<1.80 degrees). Larger errors were due to a lack of sea-surface temperature gradients. In addition we show that our model is flexible enough to solve the obstacle avoidance problem by assimilating high-resolution coastline data. This reduced the number of invalid on-land locations by almost an order of magnitude. The method is intuitive, flexible and efficient, promising extensive utilization in future research.
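
    The bootstrapping idea can be sketched as simulating many random-walk candidates between consecutive fixes, biased toward the next observed location by forward particles, and keeping the particle cloud as the location estimate. The snippet below is a bare-bones illustration; the step model, error scales and bias scheme are invented for the example.

      # Bare-bones sketch of bootstrapped random walks biased by forward particles:
      # simulate candidate tracks between two uncertain fixes and keep the particle
      # cloud as the improved location estimate. All scales are invented.
      import numpy as np

      rng = np.random.default_rng(3)
      fix_a, fix_b = np.array([0.0, 0.0]), np.array([10.0, 4.0])  # consecutive fixes [km]
      sigma_obs = 2.5              # stated ARGOS-class location error [km]
      n_particles, n_steps = 500, 8

      cloud = fix_a + rng.normal(0, sigma_obs, (n_particles, 2))  # start within error
      for step in range(n_steps):
          remaining = n_steps - step
          drift = (fix_b - cloud) / remaining      # forward bias toward the next fix
          cloud += drift + rng.normal(0, 0.8, (n_particles, 2))   # biased random walk
          # Optional assimilation hook: e.g. zero-weight particles that land on
          # shore when high-resolution coastline data are available.

      est = cloud.mean(axis=0)
      print("intermediate position estimate [km]:", est.round(2))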