Approaches for scalable modeling and emulation of cyber systems : LDRD final report.
Energy Technology Data Exchange (ETDEWEB)
Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Don W.
2009-09-01
The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of approximately 10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with approximately 10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.
Energy Technology Data Exchange (ETDEWEB)
Ian Sue Wing
2006-04-18
The research supported by this award pursued three lines of inquiry: (1) The construction of dynamic general equilibrium models to simulate the accumulation and substitution of knowledge, which has resulted in the preparation and submission of several papers: (a) A submitted pedagogic paper which clarifies the structure and operation of computable general equilibrium (CGE) models (C.2), and a review article in press which develops a taxonomy for understanding the representation of technical change in economic and engineering models for climate policy analysis (B.3). (b) A paper which models knowledge directly as a homogeneous factor, and demonstrates that inter-sectoral reallocation of knowledge is the key margin of adjustment which enables induced technical change to lower the costs of climate policy (C.1). (c) An empirical paper which estimates the contribution of embodied knowledge to aggregate energy intensity in the U.S. (C.3), followed by a companion article which embeds these results within a CGE model to understand the degree to which autonomous energy efficiency improvement (AEEI) is attributable to technical change as opposed to sub-sectoral shifts in industrial composition (C.4). (d) Finally, ongoing theoretical work to characterize the precursors and implications of the response of innovation to emission limits (E.2). (2) Data development and simulation modeling to understand how the characteristics of discrete energy supply technologies determine their succession in response to emission limits when they are embedded within a general equilibrium framework. This work has produced two peer-reviewed articles which are currently in press (B.1 and B.2). (3) Empirical investigation of trade as an avenue for the transmission of technological change to developing countries, and its implications for leakage, which has resulted in an econometric study which is being revised for submission to a journal (E.1). As work commenced on this topic, the U.S. withdrawal
Energy Technology Data Exchange (ETDEWEB)
Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith
2008-09-01
The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
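The idealized models named above are easy to prototype. As a purely illustrative sketch (not the report's actual procedure), the following Python implements an elementary cellular automaton update together with a crude majority-vote block coarse-graining of the kind a renormalization analysis might start from; the rule number and block size are arbitrary choices.

```python
import numpy as np

def step_ca(state, rule=90):
    """One synchronous update of an elementary cellular automaton
    with periodic boundaries (rule 90 is XOR of the two neighbors)."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right          # 3-bit neighborhood code
    table = (rule >> np.arange(8)) & 1          # the rule's lookup table
    return table[idx]

def coarse_grain(state, block=2):
    """Majority-vote block renormalization: each block of cells maps to
    one coarse cell (ties count as occupied)."""
    blocks = state.reshape(-1, block)
    return (2 * blocks.sum(axis=1) >= block).astype(int)

state = np.zeros(64, dtype=int)
state[32] = 1                                   # single-seed initial condition
fine = step_ca(step_ca(state))                  # two fine-grained steps
coarse = coarse_grain(fine)                     # reduced description
```

Comparing trajectories of the coarse-grained system against coarse-grainings of fine-grained trajectories is one simple way to quantify how much predictivity a reduced model retains.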
International Nuclear Information System (INIS)
Walton, S.
1987-01-01
The Committee, asked to provide an assessment of computer-assisted modeling of molecular structure, has highlighted the signal successes and the significant limitations of a broad panoply of technologies and has projected plausible paths of development over the next decade. As with any assessment of such scope, differing opinions about present or future prospects were expressed. The conclusions and recommendations, however, represent a consensus of our views of the present status of computational efforts in this field.
Repository simulation model: Final report
International Nuclear Information System (INIS)
1988-03-01
This report documents the application of computer simulation to the design analysis of the nuclear waste repository's waste handling and packaging operations. The Salt Repository Simulation Model was used to evaluate design alternatives during the conceptual design phase of the Salt Repository Project. Code development and verification were performed by the Office of Nuclear Waste Isolation (ONWI). The focus of this report is to relate the experience gained during the development and application of the Salt Repository Simulation Model to future repository design phases. Design of the repository's waste handling and packaging systems will require sophisticated analysis tools to evaluate complex operational and logistical design alternatives. Selection of these design alternatives in the Advanced Conceptual Design (ACD) and License Application Design (LAD) phases must be supported by analysis to demonstrate that the repository design will cost-effectively meet DOE's mandated emplacement schedule and that uncertainties in the performance of the repository's systems have been objectively evaluated. Computer simulation of repository operations will provide future repository designers with data and insights that no other form of analysis can provide. 6 refs., 10 figs
International Nuclear Information System (INIS)
Mizutani, T.
2000-01-01
There were basically three theoretical projects supported by this grant: (1) Use of confined quark models to study low energy hadronic processes; (2) Production of strangeness by Electromagnetic Probes; and (3) Diffractive dissociative production of vector mesons by virtual photons on nucleons. Each of them is summarized in the paper
Energy Technology Data Exchange (ETDEWEB)
Curtis, Peter [The Ohio State Univ., Columbus, OH (United States); Bohrer, Gil [The Ohio State Univ., Columbus, OH (United States); Gough, Christopher [Virginia Commonwealth Univ., Richmond, VA (United States); Nadelhoffer, Knute [Univ. of Michigan, Ann Arbor, MI (United States)
2015-03-12
At the University of Michigan Biological Station (UMBS) AmeriFlux sites (US-UMB and US-UMd), long-term C cycling measurements and a novel ecosystem-scale experiment are revealing physical, biological, and ecological mechanisms driving long-term trajectories of C cycling, providing new data for improving modeling forecasts of C storage in eastern forests. Our findings provide support for previously untested hypotheses that stand-level structural and biological properties constrain long-term trajectories of C storage, and that remotely sensed canopy structural parameters can substantially improve model forecasts of forest C storage. Through the Forest Accelerated Succession ExperimenT (FASET), we are directly testing the hypothesis that forest C storage will increase due to increasing structural and biological complexity of the emerging tree communities. Support from this project, 2011-2014, enabled us to incorporate novel physical and ecological mechanisms into ecological, meteorological, and hydrological models to improve forecasts of future forest C storage in response to disturbance, succession, and current and long-term climate variation
HEDR modeling approach: Revision 1
International Nuclear Information System (INIS)
Shipler, D.B.; Napier, B.A.
1994-05-01
This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies
International Nuclear Information System (INIS)
Shipler, D.B.; Napier, B.A.
1992-07-01
This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and proposes an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered
Temperature Buffer Test. Final THM modelling
International Nuclear Information System (INIS)
Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel
2012-01-01
The Temperature Buffer Test (TBT) is a joint project of SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay subjected to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright has been divided into three subtasks: (i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; (ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT and the cycles of suction and stresses registered in that zone at the start of the experiment; and (iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to
Multiscale approach to equilibrating model polymer melts
DEFF Research Database (Denmark)
Svaneborg, Carsten; Ali Karimi-Varzaneh, Hossein; Hojdis, Nils
2016-01-01
We present an effective and simple multiscale method for equilibrating Kremer-Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed...
Report on Approaches to Database Translation. Final Report.
Gallagher, Leonard; Salazar, Sandra
This report describes approaches to database translation (i.e., transferring data and data definitions from a source, either a database management system (DBMS) or a batch file, to a target DBMS), and recommends a method for representing the data structures of newly-proposed network and relational data models in a form suitable for database…
Innovative approaches to inertial confinement fusion reactors: Final report
International Nuclear Information System (INIS)
Bourque, R.F.; Schultz, K.R.
1986-11-01
Three areas of innovative approaches to inertial confinement fusion (ICF) reactor design are given. First, issues pertaining to the Cascade reactor concept are discussed. Then, several innovative concepts are presented which attempt to directly recover the blast energy from a fusion target. Finally, the Turbostar concept for direct recovery of that energy is evaluated. The Cascade issues discussed are combustion of the carbon granules in the event of air ingress, the use of alternate granule materials, and the effect of changes in carbon flow on details of the heat exchanger. Carbon combustion turns out to be a minor problem. Four ICF innovative concepts were considered: a turbine with ablating surfaces, a liquid piston system, a wave generator, and a resonating pump. In the final analysis, none show any real promise. The Turbostar concept of direct recovery is a very interesting idea and appeared technically viable. However, it shows no efficiency gain or any decrease in capital cost compared to reactors with conventional thermal conversion systems. Attempts to improve it by placing a close-in lithium sphere around the target to increase gas generation increased efficiency only slightly. It is concluded that these direct conversion techniques require thermalization of the x-ray and debris energy, and are Carnot limited. They therefore offer no advantage over existing and proposed methods of thermal energy conversion or direct electrical conversion
Fleet replacement modeling : final report, July 2009.
2009-07-01
This project focused on two interrelated areas in equipment replacement modeling for fleets. The first area was research-oriented and addressed a fundamental assumption in engineering economic replacement modeling that all assets providing a similar ...
An Intelligent Systems Approach to Reservoir Characterization. Final Report
International Nuclear Information System (INIS)
Shahab D. Mohaghegh; Jaime Toro; Thomas H. Wilson; Emre Artun; Alejandro Sanchez; Sandeep Pyakurel
2005-01-01
Today, the major challenge in reservoir characterization is integrating data coming from different sources at varying scales in order to obtain an accurate, high-resolution reservoir model. The role of seismic data in this integration is often limited to providing a structural model for the reservoir; its relatively low resolution usually limits its further use. However, its areal coverage and availability suggest that it has the potential of providing valuable data for more detailed reservoir characterization studies through the process of seismic inversion. In this paper, a novel intelligent seismic inversion methodology is presented to achieve a desirable correlation between relatively low-frequency seismic signals and the much higher frequency wireline-log data. The vertical seismic profile (VSP) is used as an intermediate step between the well logs and the surface seismic. A synthetic seismic model is developed by using real data and seismic interpretation. In the example presented here, the model represents the Atoka and Morrow formations, and the overlying Pennsylvanian sequence, of the Buffalo Valley Field in New Mexico. A generalized regression neural network (GRNN) is used to build two independent correlation models: (1) between surface seismic and VSP, and (2) between VSP and well logs. After generating virtual VSPs from the surface seismic, well logs are predicted by using the correlation between VSP and well logs. The values of the density log, which is a surrogate for reservoir porosity, are predicted for each seismic trace through the seismic line with a classification approach having a correlation coefficient of 0.81. The same methodology is then applied to real data taken from the Buffalo Valley Field to predict inter-well gamma ray and neutron porosity logs through the seismic line of interest. The same procedure can be applied to a complete 3D seismic block to obtain 3D distributions of reservoir properties with less uncertainty than the geostatistical
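A GRNN of the kind used above is essentially Nadaraya-Watson kernel regression: each prediction is a Gaussian-weighted average of the training targets. The sketch below is illustrative only, with synthetic one-dimensional data standing in for the seismic/VSP/log pairs; the bandwidth `sigma` is a hypothetical smoothing parameter, not a value from the study.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.1):
    """GRNN prediction: Gaussian-kernel-weighted average of the training
    targets (pattern layer -> summation layer -> output layer)."""
    x_train, x_query = np.atleast_2d(x_train), np.atleast_2d(x_query)
    # squared distances between every query and every training pattern
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)     # normalized weighted average

# synthetic stand-in for a log property varying along a seismic line
x = np.linspace(0.0, 1.0, 50)[:, None]
y = np.sin(2.0 * np.pi * x[:, 0])
y_hat = grnn_predict(x, y, x, sigma=0.02)
```

Unlike iteratively trained networks, a GRNN has no weights to fit; the single bandwidth controls the bias-variance trade-off, which makes it attractive when, as here, two separate correlation models must be built from limited paired data.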
Final Project Report Load Modeling Transmission Research
Energy Technology Data Exchange (ETDEWEB)
Lesieutre, Bernard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bravo, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yinger, Robert [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chassin, Dave [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Huang, Henry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lu, Ning [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hiskens, Ian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Venkataramanan, Giri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-03-31
The research presented in this report primarily focuses on improving power system load models to better represent their impact on system behavior. The previous standard load model fails to capture the delayed voltage recovery events that are observed in the Southwest and elsewhere. These events are attributed to stalled air conditioner units after a fault. To gain a better understanding of their role in these events and to guide modeling efforts, typical air conditioner units were tested in laboratories. Using data obtained from these extensive tests, new load models were developed to match air conditioner behavior. An air conditioner model is incorporated in the new WECC composite load model. These models are used in dynamic studies of the West and can impact power transfer limits for California. Unit-level and system-level solutions are proposed as potential remedies for the delayed voltage recovery problem.
Mathematical models for atmospheric pollutants. Final report
International Nuclear Information System (INIS)
Drake, R.L.; Barrager, S.M.
1979-08-01
The present and likely future roles of mathematical modeling in air quality decisions are described. The discussion emphasizes models and air pathway processes rather than the chemical and physical behavior of specific anthropogenic emissions. Summarized are the characteristics of various types of models used in the decision-making processes. Specific model subclasses are recommended for use in making air quality decisions that have site-specific, regional, national, or global impacts. The types of exposure and damage models that are currently used to predict the effects of air pollutants on humans, other animals, plants, ecosystems, property, and materials are described. The aesthetic effects of odor and visibility and the impact of pollutants on weather and climate are also addressed. Technical details of air pollution meteorology, chemical and physical properties of air pollutants, solution techniques, and air quality models are discussed in four appendices bound in separate volumes
Material Modelling - Composite Approach
DEFF Research Database (Denmark)
Nielsen, Lauge Fuglsang
1997-01-01
is successfully justified by comparing predicted results with experimental data obtained in the HETEK project on creep, relaxation, and shrinkage of very young concretes cured at a temperature of T = 20 deg C and a relative humidity of RH = 100%. The model is also justified by comparing predicted creep, shrinkage, and internal stresses caused by drying shrinkage with experimental results reported in the literature on the mechanical behavior of mature concretes. It is then concluded that the model presented applies in general with respect to age at loading. From a stress analysis point of view, the most important finding in this report is that cement paste and concrete behave practically as linear-viscoelastic materials from an age of approximately 10 hours. This is a significant age extension relative to earlier studies in the literature, where linear-viscoelastic behavior is only demonstrated from ages of a few days. Thus...
Respiratory tract deposition models. Final report
International Nuclear Information System (INIS)
Yeh, H.C.
1980-03-01
Respiratory tract characteristics of four mammalian species (human, dog, rat and Syrian hamster) were studied, using replica lung casts. An in situ casting technique was developed for making the casts. Based on an idealized branch model, over 38,000 records of airway segment diameters, lengths, branching angles and gravity angles were obtained from measurements of two humans, two Beagle dogs, two rats and one Syrian hamster. From examination of the trimmed casts and morphometric data, it appeared that the structure of the human airway is closer to a dichotomous structure, whereas for dog, rat and hamster, it is monopodial. Flow velocity in the trachea and major bronchi in living Beagle dogs was measured using an implanted, subminiaturized, heated film anemometer. A physical model was developed to simulate the regional deposition characteristics proposed by the Task Group on Lung Dynamics of the ICRP. Various simulation modules for the nasopharyngeal (NP), tracheobronchial (TB) and pulmonary (P) compartments were designed and tested. Three types of monodisperse aerosols were developed for animal inhalation studies. Fifty Syrian hamsters and 50 rats were exposed to five different sizes of monodisperse fused aluminosilicate particles labeled with 169Yb. Anatomical lung models were developed for four species (human, Beagle dog, rat and Syrian hamster) that were based on detailed morphometric measurements of replica lung casts. Emphasis was placed on developing a lobar typical-path lung model and on developing a modeling technique which could be applied to various mammalian species. A set of particle deposition equations for deposition by inertial impaction, sedimentation, and diffusion was developed. Theoretical models of particle deposition were developed based on these equations and on the anatomical lung models
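The deposition bookkeeping behind such models can be sketched compactly. Assuming, as a common simplification rather than the report's actual equations, that impaction, sedimentation and diffusion act as independent filters within each airway generation, the combined and cumulative deposition fractions follow from survival probabilities; the per-generation efficiencies below are made-up numbers for illustration.

```python
def combined_deposition(p_imp, p_sed, p_dif):
    """Combined capture probability in one airway generation, treating the
    three mechanisms as independent filters acting in series."""
    return 1.0 - (1.0 - p_imp) * (1.0 - p_sed) * (1.0 - p_dif)

def total_deposition(per_generation):
    """Total deposited fraction along a typical path: a particle deposits
    in generation k only if it survived generations 0..k-1."""
    deposited, surviving = 0.0, 1.0
    for p in per_generation:
        deposited += surviving * p
        surviving *= 1.0 - p
    return deposited

# hypothetical efficiencies, identical in each of 10 generations
gens = [combined_deposition(0.05, 0.01, 0.002)] * 10
total = total_deposition(gens)
```

In a typical-path lung model the per-generation probabilities would instead be computed from the measured airway diameters, lengths, branching angles and gravity angles, with different values in every generation.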
Final model of multicriterionevaluation of animal welfare
DEFF Research Database (Denmark)
Bonde, Marianne; Botreau, R; Bracke, MBM
One major objective of Welfare Quality® is to propose harmonized methods for the overall assessment of animal welfare on farm and at slaughter that are science based and meet societal concerns. Welfare is a multidimensional concept and its assessment requires measures of different aspects. Welfare Quality® proposes a formal evaluation model whereby the data on animals or their environment are transformed into value scores that reflect compliance with 12 subcriteria and 4 criteria of good welfare. Each animal unit is then allocated to one of four categories: excellent welfare, enhanced welfare, acceptable welfare and not classified. This evaluation model is tuned according to the views of experts from animal and social sciences, and stakeholders.
Learning Approaches - Final Report Sub-Project 4
DEFF Research Database (Denmark)
Dirckinck-Holmfeld, Lone; Rodríguez Illera, José Luis; Escofet, Anna
2007-01-01
The overall aim of Subproject 4 is to apply learning approaches that are appropriate and applicable using ICT. The task is made up of two components: 4.1, dealing with learning approaches (see deliverable 4.1), and 4.2, application of ICT (see deliverable 4.2, deliverable 4.3 & deliverable...
Applied approach slab settlement research, design/construction : final report.
2013-08-01
Approach embankment settlement is a pervasive problem in Oklahoma and many other states. The bump and/or abrupt slope change poses a danger to traffic and can cause increased dynamic loads on the bridge. Frequent and costly maintenance may be needed ...
A Conceptual Modeling Approach for OLAP Personalization
Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan
Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large, and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as a data mart) is used, these structures would still be too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker, contributing to better satisfy their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.
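The rule-based personalization idea can be illustrated with a toy encoding (hypothetical; the paper defines rules at the conceptual-model level, not in code): a user model plus condition-action rules prunes the multidimensional schema down to what a given decision maker cares about.

```python
# toy multidimensional schema and user model (illustrative names only)
schema = {"fact": "Sales",
          "dimensions": ["Product", "Store", "Time", "Promotion"]}
user = {"role": "store_manager", "interests": {"Product", "Time"}}

def personalize(schema, user, rules):
    """Apply condition-action personalization rules to produce a
    per-user view of the schema; the base schema is left untouched."""
    view = {"fact": schema["fact"], "dimensions": list(schema["dimensions"])}
    for condition, action in rules:
        if condition(user):
            action(view, user)
    return view

rules = [
    # if the analyst declared interests, hide every other dimension
    (lambda u: bool(u["interests"]),
     lambda v, u: v.update(dimensions=[d for d in v["dimensions"]
                                       if d in u["interests"]])),
]

view = personalize(schema, user, rules)
```

Because each decision maker gets a derived view rather than a modified base schema, many personalized OLAP schemas can coexist over one shared multidimensional model.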
Evaporator modeling - A hybrid approach
International Nuclear Information System (INIS)
Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun
2009-01-01
In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedures for hybrid modeling include: (1) formulate the fundamental governing equations of the process from energy and material balances and thermodynamic principles; (2) select input/output (I/O) variables responsible for the system performance which can be measured and controlled; (3) represent those variables existing in the original equations but not measurable as simple functions of selected I/Os or constants; (4) obtain a single equation which correlates system inputs and outputs; and (5) identify unknown parameters by linear or nonlinear least-squares methods. The method takes advantage of both physical and empirical modeling approaches and can accurately predict performance over a wide operating range and in real time, which can significantly reduce the computational burden and increase the prediction accuracy. The model is verified with experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the real-time operating evaporator with a maximum error of ±8%. The developed models will have wide applications in operational optimization, performance assessment, and fault detection and diagnosis.
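Step (5) above reduces to a standard regression once steps (1)-(4) have produced a single correlating equation. The sketch below is a generic stand-in, not the authors' evaporator equation: a hypothetical linear correlation y = a*x1 + b*x2 + c is identified from noisy synthetic measurements by linear least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(0.5, 2.0, 40)    # e.g. a measured flow-like input
x2 = rng.uniform(5.0, 15.0, 40)   # e.g. a measured temperature-like input
# synthetic "measurements" generated from known lumped parameters
y = 3.2 * x1 + 0.8 * x2 + 1.5 + rng.normal(0.0, 0.05, 40)

# step (5): identify the unknown lumped parameters a, b, c
A = np.column_stack([x1, x2, np.ones_like(x1)])
(a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
```

In the hybrid setting the regressors would themselves be nonlinear groupings of measured I/Os dictated by the governing equations, and a nonlinear least-squares routine would replace `lstsq` when the parameters enter nonlinearly.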
Final Report on the Fuel Saving Effectiveness of Various Driver Feedback Approaches
Energy Technology Data Exchange (ETDEWEB)
Gonder, J.; Earleywine, M.; Sparks, W.
2011-03-01
This final report quantifies the fuel-savings opportunities from specific driving behavior changes, identifies factors that influence drivers' receptiveness to adopting fuel-saving behaviors, and assesses various driver feedback approaches.
Washington State Nursing Home Administrator Model Curriculum. Final Report.
Cowan, Florence Kelly
The course outlines presented in this final report comprise a proposed Fort Steilacoom Community College curriculum to be used as a statewide model two-year associate degree curriculum for nursing home administrators. The eight courses described are introduction to nursing home administration, financial management of nursing homes, nursing home…
Final Report Fermionic Symmetries and Self consistent Shell Model
International Nuclear Information System (INIS)
Zamick, Larry
2008-01-01
In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to explain them to a large extent. The importance of a self-consistent shell model was emphasized.
Model validation studies of solar systems, Phase III. Final report
Energy Technology Data Exchange (ETDEWEB)
Lantz, L.J.; Winn, C.B.
1978-12-01
Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design programs are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.
Calculation of extreme wind atlases using mesoscale modeling. Final report
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Badger, Jake
This is the final report of the project PSO-10240 "Calculation of extreme wind atlases using mesoscale modeling". The overall objective is to improve the estimation of extreme winds by developing and applying new methodologies to confront the many weaknesses in the current methodologies as explai...
Photovoltaic subsystem marketing and distribution model: programming manual. Final report
Energy Technology Data Exchange (ETDEWEB)
1982-07-01
Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.
International Nuclear Information System (INIS)
Amaro, J. E.; Barbaro, M. B.; Caballero, J. A.; Donnelly, T. W.; Udias, J. M.
2007-01-01
The semi-relativistic approach to electron and neutrino quasielastic scattering from nuclei is extended to include final-state interactions. Starting with the usual nonrelativistic continuum shell model, the problem is relativized by using the semi-relativistic expansion of the current in powers of the initial nucleon momentum and relativistic kinematics. Two different approaches are considered for the final-state interactions: the Smith-Wambach 2p-2h damping model and the Dirac-equation-based potential extracted from a relativistic mean field plus the Darwin factor. Using the latter, the scaling properties of (e,e′) and (ν_μ,μ⁻) cross sections for intermediate momentum transfers are investigated.
THE QUANTITATIVE MODEL OF THE FINALIZATIONS IN MEN’S COMPETITIVE HANDBALL AND THEIR EFFICIENCY
Directory of Open Access Journals (Sweden)
Eftene Alexandru
2009-10-01
In the epistemic steps, we approach a competitive performance behavior model built after a quantitative analysis of certain data collected from the official International Handball Federation protocols on the performance of the first four teams of the World Men's Handball Championship (Croatia 2009) during the semifinals and finals. This model is a part of the integrative (global) model of the handball game, which will be gradually investigated during the following research. I have started the construction of this model from the premise that the finalization represents the essence of the game. The components of our model, in prioritized order: shot at the goal from 9 m: 15p; shot at the goal from 6 m: 12p; shot at the goal from 7 m: 12p; fast break shot at the goal: 11.5p; wing shot at the goal: 8.5p; penetration shot at the goal: 7p;
A Bayesian approach to model uncertainty
International Nuclear Information System (INIS)
Buslik, A.
1994-01-01
A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
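For a finite number of alternative models, the Bayesian treatment reduces to computing posterior model probabilities from priors and likelihoods. A minimal sketch, with illustrative (assumed) likelihood values:

```python
# Three alternative models with uniform priors; P(D | M_i) values are
# assumed for illustration only.
priors = [1 / 3, 1 / 3, 1 / 3]
likelihoods = [0.20, 0.05, 0.01]

# Bayes' rule over the finite model set: P(M_i | D) is proportional to
# P(D | M_i) * P(M_i), normalized over all models.
unnorm = [p * l for p, l in zip(priors, likelihoods)]
posteriors = [u / sum(unnorm) for u in unnorm]
print(posteriors)
```

The posterior over the model index plays exactly the role of a parameter posterior, which is the sense in which model uncertainty becomes equivalent to parameter uncertainty.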
Energy Technology Data Exchange (ETDEWEB)
Anthofer, Anton Philipp; Schubert, Johannes [VPC GmbH, Dresden (Germany)
2017-11-15
The German Act on Reorganization of Responsibility for Nuclear Disposal (Entsorgungsuebergangsgesetz (EntsorgUebG)) adopted in June 2017 provides the energy utilities with the new option of transferring responsibility for their waste packages to the Federal Government. This is conditional on the waste packages being approved for delivery to the Konrad final repository. A comprehensive approach starts with the dismantling of nuclear facilities and extends from waste disposal and packaging planning to final repository documentation. Waste package quality control measures are planned and implemented as early as in the process qualification stage so that the production of waste packages that are suitable for final deposition can be ensured. Optimization of cask and loading configuration can save container and repository volume. Workflow planning also saves time, expenditure and exposure time for personnel at the facilities. VPC has evaluated this experience and developed it into a comprehensive approach.
A Bayesian approach for quantification of model uncertainty
International Nuclear Information System (INIS)
Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.
2010-01-01
In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into prediction of a system response. A nonlinear vibration system is used to demonstrate the processes for implementing the adjustment factor approach. Finally, the methodology is applied on the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.
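The additive form of the adjustment factor approach corrects the best model's prediction by the posterior-probability-weighted deviations of the competing models. A minimal sketch with illustrative numbers (not the paper's laser-peening data):

```python
# Assumed posterior model probabilities and per-model predictions of the
# same system response (illustrative values only).
post = [0.6, 0.3, 0.1]
preds = [102.0, 95.0, 110.0]
best = preds[0]  # prediction of the most probable model

# Additive adjustment factor: E[y] = y_best + sum_i P(M_i) * (y_i - y_best)
adjusted = best + sum(p * (y - best) for p, y in zip(post, preds))
print(adjusted)
```

The spread of the weighted deviations around the adjusted prediction is what yields the confidence band on the response.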
System Behavior Models: A Survey of Approaches
2016-06-01
A spiral model was chosen for researching and structuring this thesis, shown in Figure 1 (Spiral Model). This approach allowed multiple iterations of source material applications and refining through iteration. The scope of the research is limited to a literature review.
Learning Actions Models: Qualitative Approach
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite ident...
Microwave modeling of laser plasma interactions. Final report
International Nuclear Information System (INIS)
1983-08-01
For a large laser fusion targets and nanosecond pulse lengths, stimulated Brillouin scattering (SBS) and self-focusing are expected to be significant problems. The goal of the contractual effort was to examine certain aspects of these physical phenomena in a wavelength regime (lambda approx.5 cm) more amenable to detailed diagnostics than that characteristic of laser fusion (lambda approx.1 micron). The effort was to include the design, fabrication and operation of a suitable experimental apparatus. In addition, collaboration with Dr. Neville Luhmann and his associates at UCLA and with Dr. Curt Randall of LLNL, on analysis and modelling of the UCLA experiments was continued. Design and fabrication of the TRW experiment is described under ''Experiment Design'' and ''Experimental Apparatus''. The design goals for the key elements of the experimental apparatus were met, but final integration and operation of the experiment was not accomplished. Some theoretical considerations on the interaction between Stimulated Brillouin Scattering and Self-Focusing are also presented
A model-driven approach to information security compliance
Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena
2017-06-01
The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems, conform the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves the base for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
Global energy modeling - A biophysical approach
Energy Technology Data Exchange (ETDEWEB)
Dale, Michael
2010-09-15
This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
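A dynamic EROI function with technological learning can be sketched with a power-law learning curve, under which the energy invested per unit output falls as cumulative capacity grows. The functional form and parameter values here are assumptions for illustration, not the paper's calibrated model:

```python
def eroi(cumulative, e_out=1.0, e_in0=0.25, c0=1.0, learning_exp=0.3):
    # Assumed learning curve: energy invested per unit output declines as a
    # power law of cumulative capacity C relative to a reference C0.
    e_in = e_in0 * (cumulative / c0) ** (-learning_exp)
    return e_out / e_in

# EROI at the reference point vs. after two orders of magnitude of deployment
print(eroi(1.0), eroi(100.0))
```

In a full scenario model this learning effect would compete with declining resource quality, which pushes EROI in the opposite direction.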
A Unified Approach to Modeling and Programming
DEFF Research Database (Denmark)
Madsen, Ole Lehrmann; Møller-Pedersen, Birger
2010-01-01
SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we…
EXPERIMENTS AND COMPUTATIONAL MODELING OF PULVERIZED-COAL IGNITION; FINAL
International Nuclear Information System (INIS)
Samuel Owusu-Ofori; John C. Chen
1999-01-01
Under typical conditions of pulverized-coal combustion, which is characterized by fine particles heated at very high rates, there is currently a lack of certainty regarding the ignition mechanism of bituminous and lower-rank coals, as well as the ignition rate of reaction. Furthermore, there have been no previous studies aimed at examining these factors under various experimental conditions, such as particle size, oxygen concentration, and heating rate. Finally, there is a need to improve current mathematical models of ignition to realistically and accurately depict the particle-to-particle variations that exist within a coal sample. Such a model is needed to extract useful reaction parameters from ignition studies, and to interpret ignition data in a more meaningful way. The authors propose to examine fundamental aspects of coal ignition through (1) experiments to determine the ignition temperature of various coals by direct measurement, and (2) modeling of the ignition process to derive rate constants and to provide a more insightful interpretation of data from ignition experiments. The authors propose to use a novel laser-based ignition experiment to achieve their first objective. Laser-ignition experiments offer the distinct advantage of easy optical access to the particles because of the absence of a furnace or radiating walls, and thus permit direct observation and particle temperature measurement. The ignition temperature of different coals under various experimental conditions can therefore be easily determined by direct measurement using two-color pyrometry. The ignition rate constants, when the ignition occurs heterogeneously, and the particle heating rates will both be determined from analyses based on these measurements.
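Two-color pyrometry infers particle temperature from the ratio of radiated intensities at two wavelengths, so that the (unknown) gray-body emissivity cancels. A minimal sketch using the Wien approximation to Planck's law; the wavelengths and temperature are illustrative, not the experiment's actual values:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, temp):
    # Wien approximation to Planck's law; the emissivity factor is omitted
    # because it cancels in the two-color ratio for a gray body.
    return lam ** -5 * math.exp(-C2 / (lam * temp))

def two_color_temperature(r, lam1, lam2):
    # Invert the ratio r = I(lam1)/I(lam2) for temperature:
    # ln r = 5 ln(lam2/lam1) + (C2/T)(1/lam2 - 1/lam1)
    return C2 * (1 / lam2 - 1 / lam1) / (math.log(r) - 5 * math.log(lam2 / lam1))

lam1, lam2 = 700e-9, 900e-9          # assumed detection wavelengths, m
T_true = 1800.0                       # assumed particle temperature, K
r = wien_intensity(lam1, T_true) / wien_intensity(lam2, T_true)
print(two_color_temperature(r, lam1, lam2))
```

In practice the measured ratio is noisy and the instrument must be calibrated, but the inversion step is as above.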
A Multivariate Approach to Functional Neuro Modeling
DEFF Research Database (Denmark)
Mørch, Niels J.S.
1998-01-01
…provides the basis for a generalization theoretical framework relating model performance to model complexity and dataset size, illustrated by the application of linear and more flexible, nonlinear microscopic regression models to a real-world dataset. The dependency of model performance, as quantified by generalization error, on model flexibility and training set size is demonstrated, leading to the important realization that no uniformly optimal model exists. Briefly summarized, the major topics discussed in the thesis include: an introduction of the representation of functional datasets by pairs of neuronal activity patterns…; model visualization and interpretation techniques, where the simplicity of this task for linear models contrasts the difficulties involved when dealing with nonlinear models, and finally a visualization technique for nonlinear models is proposed. A single observation emerges from the thesis…
Multiple Model Approaches to Modelling and Control
DEFF Research Database (Denmark)
…on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating… and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods… to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning…
Energy Technology Data Exchange (ETDEWEB)
Kessinger, Glen Frank; Nelson, Lee Orville; Grandy, Jon Drue; Zuck, Larry Douglas; Kong, Peter Chuen Sun; Anderson, Gail
1999-08-01
The purpose of LDRD #2349, Characterize and Model Final Waste Formulations and Offgas Solids from Thermal Treatment Processes, was to develop a set of tools that would allow the user, based on the chemical composition of a waste stream to be immobilized, to predict the durability (leach behavior) of the final waste form and the phase assemblages present in the final waste form. The objectives of the project were:
• investigation, testing and selection of a thermochemical code
• development of an auxiliary thermochemical database
• synthesis of materials for leach testing
• collection of leach data
• use of leach data for leach model development
• thermochemical modeling
The progress toward completion of these objectives, and a discussion of the work that needs to be completed to arrive at a logical finishing point for this project, will be presented.
Geometrical approach to fluid models
International Nuclear Information System (INIS)
Kuvshinov, B.N.; Schep, T.J.
1997-01-01
Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical notion of invariance is introduced in terms of Lie derivatives and a general procedure for the construction of local and integral fluid invariants is presented. The solutions of the equations for invariant fields can be written in terms of Lagrange variables. A generalization of the Hamiltonian formalism for finite-dimensional systems to continuous media is proposed. Analogously to finite-dimensional systems, Hamiltonian fluids are introduced as systems that annihilate an exact two-form. It is shown that Euler and ideal, charged fluids satisfy this local definition of a Hamiltonian structure. A new class of scalar invariants of Hamiltonian fluids is constructed that generalizes the invariants that are related with gauge transformations and with symmetries (Noether). copyright 1997 American Institute of Physics
Current approaches to gene regulatory network modelling
Directory of Open Access Journals (Sweden)
Brazma Alvis
2007-09-01
Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
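The hybrid discrete/continuous idea can be illustrated with a tiny threshold-style network update: each gene's next (discrete) state is a discretized linear combination of the current states. This is a simplified sketch in the spirit of a finite-state linear model, not the published model itself; the weight matrix is invented for illustration.

```python
# Regulatory weights: row i lists the influence of each gene on gene i
# (positive = activation, negative = repression). Illustrative values.
weights = [
    [0, -2, 1],   # gene 0: repressed by gene 1, activated by gene 2
    [1, 0, 0],    # gene 1: activated by gene 0
    [0, 1, 0],    # gene 2: activated by gene 1
]

def step(state):
    # Discretize the linear combination: gene is ON next step iff its
    # summed regulatory input is positive.
    return [1 if sum(w * s for w, s in zip(row, state)) > 0 else 0
            for row in weights]

state = [1, 0, 0]
trajectory = [state]
for _ in range(5):
    state = step(state)
    trajectory.append(state)
print(trajectory)
```

Even this three-gene toy network produces a simple dynamic behavior, a limit cycle of period three, of the kind such models are used to study.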
An object-oriented approach to energy-economic modeling
Energy Technology Data Exchange (ETDEWEB)
Wise, M.A.; Fox, J.A.; Sands, R.D.
1993-12-01
In this paper, the authors discuss the experiences in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to programming it in C++, a standard object-oriented programming language, will be detailed. The authors then discuss the main differences between writing the object-oriented program versus a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach based on the experience in building energy-economic models with procedure-oriented approaches and languages.
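The design described, a class hierarchy specified from standard microeconomic production functions, can be sketched as follows. This is an illustrative Python sketch rather than the authors' C++ code; the class names, the Cobb-Douglas choice, and all parameter values are assumptions.

```python
class ProductionFunction:
    # Abstract base class: a technology maps input quantities to output.
    def output(self, inputs):
        raise NotImplementedError

class CobbDouglas(ProductionFunction):
    # q = scale * product(x_i ** alpha_i), a standard microeconomic form.
    def __init__(self, scale, alphas):
        self.scale, self.alphas = scale, alphas

    def output(self, inputs):
        q = self.scale
        for x, a in zip(inputs, self.alphas):
            q *= x ** a
        return q

class Sector:
    # A sector owns its technology object; any ProductionFunction subclass
    # can be swapped in without touching Sector itself.
    def __init__(self, name, technology):
        self.name, self.technology = name, technology

    def produce(self, inputs):
        return self.technology.output(inputs)

energy = Sector("energy", CobbDouglas(2.0, [0.3, 0.7]))
print(energy.produce([8.0, 27.0]))
```

The separation between `Sector` and `ProductionFunction` mirrors the object-oriented advantage the paper discusses: market structure and technology specification evolve independently.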
International Nuclear Information System (INIS)
Murphey, W.M.; Moran, B.W.; Fattah, A.
1996-01-01
The International Atomic Energy Agency (IAEA) is currently pursuing development of an international safeguards approach for the final disposal of spent fuel in geological repositories through consultants meetings and through the Program for Development of Safeguards for Final Disposal of Spent Fuel in Geological Repositories (SAGOR). The consultants meetings provide policy guidance to IAEA; SAGOR recommends effective approaches that can be efficiently implemented by IAEA. The SAGOR program, which is a collaboration of eight Member State Support Programs (MSSPs), was initiated in July 1994 and has identified 15 activities in each of three areas (i.e. conditioning facilities, active repositories, and closed repositories) that must be performed to ensure an efficient, yet effective safeguards approach. Two consultants meetings have been held: the first in May 1991 and the last in November 1995. For nuclear materials emplaced in a geological repository, the safeguards objectives were defined to be (1) to detect the diversion of spent fuel, whether concealed or unconcealed, from the repository and (2) to detect undeclared activities of safeguards concern (e.g., tunneling, underground reprocessing, or substitution in containers)
International Nuclear Information System (INIS)
Carr, D.; Hertel, B.; Jewett, M.; Janke, R.; Conner, B.
1996-01-01
The remedial strategy for addressing contaminated environmental media was recently finalized for the US Department of Energy's (DOE) Fernald Environmental Management Project (FEMP) following almost 10 years of detailed technical analysis. The FEMP represents one of the first major nuclear facilities to successfully complete the Remedial Investigation/Feasibility Study (RI/FS) phase of the environmental restoration process. A critical element of this success was the establishment of sensible cleanup levels for contaminated soil and groundwater both on and off the FEMP property. These cleanup levels were derived based upon a strict application of Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) regulations and guidance, coupled with positive input from the regulatory agencies and the local community regarding projected future land uses for the site. The approach for establishing the cleanup levels was based upon a Feasibility Study (FS) strategy that examined a bounding range of viable future land uses for the site. Within each land use, the cost and technical implications of a range of health-protective cleanup levels for the environmental media were analyzed. Technical considerations in deriving these cleanup levels included: direct exposure routes to viable human receptors; cross-media impacts to air, surface water, and groundwater; technical practicality of attaining the levels; volume of affected media; impact to sensitive environmental receptors or ecosystems; and cost. This paper will discuss the technical approach used to support the finalization of the cleanup levels for the site. The final cleanup levels provide the last remaining significant piece of the puzzle in establishing a final site-wide remedial strategy for the FEMP, and position the facility for the expedient completion of site-wide remedial activities.
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Service creation: a model-based approach
Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis
1999-01-01
This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed
Models of galaxies - The modal approach
International Nuclear Information System (INIS)
Lin, C.C.; Lowe, S.A.
1990-01-01
The general viability of the modal approach to the spiral structure in normal spirals and the barlike structure in certain barred spirals is discussed. The usefulness of the modal approach in the construction of models of such galaxies is examined, emphasizing the adoption of a model appropriate to observational data for both the spiral structure of a galaxy and its basic mass distribution. 44 refs
Implementation of the Receptive Aesthetic Approach Model in Learning to Read Literature
Directory of Open Access Journals (Sweden)
Titin Nurhayatin
2017-03-01
Research on the implementation of the receptive aesthetic approach model in learning to read literature is motivated by the low quality of the results and process of learning the Indonesian language, especially the study of literature. Students, as prospective teachers of the Indonesian language, are expected to have balanced abilities in language, literature, and their teaching, in accordance with curriculum demands. This study examines the effectiveness, quality, acceptability, and sustainability of the receptive aesthetic approach in improving students' literary skills. Based on these problems, this study is expected to produce a learning model that contributes substantially to improving the quality of the results and process of learning literature. This research was conducted on students of the Language Education Program, Indonesian Literature and Regional FKIP, Pasundan University. The research method used was an experiment with a randomized pretest-posttest control group design. In the experimental class, the average preliminary test score was 55.86 and the average final test score was 76.75; in the control class, the average preliminary test score was 55.07 and the average final test score was 68.76. These data suggest a greater increase in scores in the experimental class using the receptive aesthetic approach than in the control class using a conventional approach. The results show that the receptive aesthetic approach is more effective than the conventional approach in literary reading. Based on observations, acceptance, and views of sustainability, the receptive aesthetic approach in literary learning is expected to be an alternative and a solution for overcoming the problems of literary learning and improving the quality of Indonesian learning outcomes and processes.
Modeling the Pan-Arctic terrestrial and atmospheric water cycle. Final report
International Nuclear Information System (INIS)
Gutowski, W.J. Jr.
2001-01-01
This report describes results of DOE grant DE-FG02-96ER61473 to Iowa State University (ISU). Work on this grant was performed at Iowa State University and at the University of New Hampshire in collaboration with Dr. Charles Vorosmarty and fellow scientists at the University of New Hampshire's (UNH's) Institute for the Study of the Earth, Oceans, and Space, a subcontractor to the project. Research performed for the project included development, calibration and validation of a regional climate model for the pan-Arctic, modeling river networks, extensive hydrologic database development, and analyses of the water cycle, based in part on the assembled databases and models. Details appear in publications produced from the grant
Application of various FLD modelling approaches
Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.
2005-07-01
This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.
Distress modeling for DARWin-ME : final report.
2013-12-01
Distress prediction models, or transfer functions, are key components of the Pavement M-E Design and relevant analysis. The accuracy of such models depends on a successful process of calibration and subsequent validation of model coefficients in the ...
Risk Modelling for Passages in Approach Channel
Directory of Open Access Journals (Sweden)
Leszek Smolarek
2013-01-01
Methods of multivariate statistics, stochastic processes, and simulation methods are used to identify and assess the risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along the approach channel. These models combined with simulation testing are used to determine the time required for continuous monitoring of endangered objects or period at which the level of risk should be verified.
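A Markov model of passage risk can be sketched as a small transition matrix over discrete passage states, propagated leg by leg along the channel. The states and probabilities below are illustrative stand-ins, not the paper's estimated values:

```python
# States: 0 = safe passage, 1 = near-miss, 2 = accident (absorbing).
# Transition probabilities per channel leg (illustrative values).
P = [
    [0.95, 0.04, 0.01],
    [0.70, 0.25, 0.05],
    [0.00, 0.00, 1.00],
]

state = [1.0, 0.0, 0.0]  # ship starts in the safe state
for _ in range(10):       # probability distribution after 10 legs
    state = [sum(state[i] * P[i][j] for i in range(3)) for j in range(3)]
print(state)
```

The accumulated probability of the absorbing accident state as a function of the number of legs is the kind of quantity used to set monitoring intervals and risk-verification periods.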
A comprehensive dynamic modeling approach for giant magnetostrictive material actuators
International Nuclear Information System (INIS)
Gu, Guo-Ying; Zhu, Li-Min; Li, Zhi; Su, Chun-Yi
2013-01-01
In this paper, a comprehensive modeling approach for a giant magnetostrictive material actuator (GMMA) is proposed based on the description of nonlinear electromagnetic behavior, the magnetostrictive effect and the frequency response of the mechanical dynamics. It maps the relationships between current and magnetic flux at the electromagnetic part to force and displacement at the mechanical part in a lumped-parameter form. With this modeling approach, the nonlinear hysteresis effect of the GMMA, appearing only in the electrical part, is separated from the linear dynamic plant in the mechanical part. Thus, a two-module dynamic model is developed to completely characterize the hysteresis nonlinearity and the dynamic behaviors of the GMMA. The first module is a static hysteresis model to describe the hysteresis nonlinearity, and the cascaded second module is a linear dynamic plant to represent the dynamic behavior. To validate the proposed dynamic model, an experimental platform is established. Then, the linear dynamic part and the nonlinear hysteresis part of the proposed model are identified in sequence. For the linear part, an approach based on axiomatic design theory is adopted. For the nonlinear part, a Prandtl-Ishlinskii model is introduced to describe the hysteresis nonlinearity and a constrained quadratic optimization method is utilized to identify its coefficients. Finally, experimental tests are conducted to demonstrate the effectiveness of the proposed dynamic model and the corresponding identification method.
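The Prandtl-Ishlinskii model named above represents hysteresis as a weighted superposition of play (backlash) operators. A minimal sketch; the thresholds and weights are illustrative, not the identified GMMA coefficients:

```python
def play(u, y_prev, r):
    # Classical play (backlash) operator with threshold r: the output
    # follows the input only once the |u - y| gap exceeds r.
    return max(u - r, min(u + r, y_prev))

def pi_model(inputs, thresholds, weights):
    # Prandtl-Ishlinskii output: weighted sum of play-operator states.
    states = [0.0] * len(thresholds)
    outputs = []
    for u in inputs:
        states = [play(u, y, r) for y, r in zip(states, thresholds)]
        outputs.append(sum(w * y for w, y in zip(weights, states)))
    return outputs

u = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5]   # rising then falling input
out = pi_model(u, thresholds=[0.0, 0.2, 0.4], weights=[0.5, 0.3, 0.2])
print(out)
```

The same input value gives different outputs on the rising and falling branches, which is the hysteresis loop the model captures; identifying the weights from measured loops is the constrained quadratic optimization step the paper describes.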
Benchmarking novel approaches for modelling species range dynamics.
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E
2016-08-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches
Set-Theoretic Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester Allan
Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing...... these criticisms by incorporating recent developments in configuration theory, in particular application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically...... methodological guidelines consisting of detailed procedures to systematically apply set theoretic approaches for maturity model research and provides demonstrations of its application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...
Mathematical Modeling Approaches in Plant Metabolomics.
Fürtauer, Lisa; Weiszmann, Jakob; Weckwerth, Wolfram; Nägele, Thomas
2018-01-01
The experimental analysis of a plant metabolome typically results in a comprehensive and multidimensional data set. To interpret metabolomics data in the context of biochemical regulation and environmental fluctuation, various approaches of mathematical modeling have been developed and have proven useful. In this chapter, a general introduction to mathematical modeling is presented and discussed in context of plant metabolism. A particular focus is laid on the suitability of mathematical approaches to functionally integrate plant metabolomics data in a metabolic network and combine it with other biochemical or physiological parameters.
Energy Technology Data Exchange (ETDEWEB)
Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.
1998-12-01
The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.
SLS Navigation Model-Based Design Approach
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and
Stochastic approaches to inflation model building
International Nuclear Information System (INIS)
Ramirez, Erandy; Liddle, Andrew R.
2005-01-01
While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered.
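As a sketch of how stochastic model generation maps onto observables, the standard first-order slow-roll expressions n_s = 1 - 6ε + 2η and r = 16ε can be sampled directly; the uniform sampling ranges below are illustrative assumptions, not the flow-equations prescription discussed in the abstract.

```python
import random

def sample_model(rng):
    # Draw slow-roll parameters at random and map them to observables via
    # the first-order expressions n_s = 1 - 6*eps + 2*eta and r = 16*eps.
    # The sampling ranges here are illustrative only.
    eps = rng.uniform(0.0, 0.02)
    eta = rng.uniform(-0.02, 0.02)
    n_s = 1.0 - 6.0 * eps + 2.0 * eta
    r = 16.0 * eps
    return n_s, r

rng = random.Random(0)
points = [sample_model(rng) for _ in range(1000)]
```

A scatter of the resulting (n_s, r) points reveals whatever structure the generation method imposes on the observable plane, which is the kind of comparison the paper makes between methods.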
Model validation: a systemic and systematic approach
International Nuclear Information System (INIS)
Sheng, G.; Elzas, M.S.; Cronhjort, B.T.
1993-01-01
The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)
Dynamic process model of a plutonium oxalate precipitator. Final report
Energy Technology Data Exchange (ETDEWEB)
Miller, C.L.; Hammelman, J.E.; Borgonovi, G.M.
1977-11-01
In support of the LLL material safeguards program, a dynamic process model was developed which simulates the performance of a plutonium (IV) oxalate precipitator. The plutonium oxalate precipitator is a component in the plutonium oxalate process for making plutonium oxide powder from plutonium nitrate. The model is based on state-of-the-art crystallization descriptive equations, the parameters of which are quantified through the use of batch experimental data. The dynamic model predicts performance very similar to general Hanford oxalate process experience. The utilization of such a process model in an actual plant operation could promote both process control and material safeguards control by serving as a baseline predictor which could give early warning of process upsets or material diversion. The model has been incorporated into a FORTRAN computer program and is also compatible with the DYNSYS 2 computer code which is being used at LLL for process modeling efforts.
Dynamic process model of a plutonium oxalate precipitator. Final report
International Nuclear Information System (INIS)
Miller, C.L.; Hammelman, J.E.; Borgonovi, G.M.
1977-11-01
In support of the LLL material safeguards program, a dynamic process model was developed which simulates the performance of a plutonium (IV) oxalate precipitator. The plutonium oxalate precipitator is a component in the plutonium oxalate process for making plutonium oxide powder from plutonium nitrate. The model is based on state-of-the-art crystallization descriptive equations, the parameters of which are quantified through the use of batch experimental data. The dynamic model predicts performance very similar to general Hanford oxalate process experience. The utilization of such a process model in an actual plant operation could promote both process control and material safeguards control by serving as a baseline predictor which could give early warning of process upsets or material diversion. The model has been incorporated into a FORTRAN computer program and is also compatible with the DYNSYS 2 computer code which is being used at LLL for process modeling efforts
Variational approach to chiral quark models
Energy Technology Data Exchange (ETDEWEB)
Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira
1987-03-01
A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. Whether the pion-quark interaction can be regarded as a perturbation is found to be closely tied to the chiral symmetry breaking radius.
A variational approach to chiral quark models
International Nuclear Information System (INIS)
Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira.
1987-01-01
A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. Whether the pion-quark interaction can be regarded as a perturbation is found to be closely tied to the chiral symmetry breaking radius. (author)
Mathematical Models of Elementary Mathematics Learning and Performance. Final Report.
Suppes, Patrick
This project was concerned with the development of mathematical models of elementary mathematics learning and performance. Probabilistic finite automata and register machines with a finite number of registers were developed as models and extensively tested with data arising from the elementary-mathematics strand curriculum developed by the…
Regional forecasting with global atmospheric models; Final report
Energy Technology Data Exchange (ETDEWEB)
Crowley, T.J.; Smith, N.R. [Applied Research Corp., College Station, TX (United States)
1994-05-01
The purpose of the project was to conduct model simulations for past and future climate change with respect to the proposed Yucca Mtn. repository. The authors report on three main topics, one of which is boundary conditions for paleo-hindcast studies. These conditions are necessary for the conduction of three to four model simulations. The boundary conditions have been prepared for future runs. The second topic is (a) comparing the atmospheric general circulation model (GCM) with observations and other GCMs; and (b) development of a better precipitation data base for the Yucca Mtn. region for comparisons with models. These tasks have been completed. The third topic is preliminary assessments of future climate change. Energy balance model (EBM) simulations suggest that the greenhouse effect will likely dominate climate change at Yucca Mtn. for the next 10,000 years. The EBM study should improve rational choice of GCM CO{sub 2} scenarios for future climate change.
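The energy balance model (EBM) class used for the assessments above can be illustrated by its zero-dimensional form, which balances absorbed solar radiation against emitted longwave radiation; the parameter values below are generic textbook numbers, not those used in the report.

```python
def equilibrium_temperature(S=1361.0, albedo=0.3, emissivity=0.61):
    """Zero-dimensional energy balance: absorbed solar = emitted longwave,
    S*(1 - albedo)/4 = emissivity * sigma * T**4, solved for T in kelvin.
    Parameter values are illustrative, not the report's.
    """
    sigma = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
    absorbed = S * (1.0 - albedo) / 4.0
    return (absorbed / (emissivity * sigma)) ** 0.25

T = equilibrium_temperature()  # near Earth's observed mean surface temperature
```

Perturbing the effective emissivity (a crude stand-in for CO2 forcing) shifts the equilibrium temperature, which is the kind of sensitivity an EBM scenario study exploits.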
A Set Theoretical Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration...
A hybrid modeling approach for option pricing
Hajizadeh, Ehsan; Seifi, Abbas
2011-11-01
The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations, one being the controversial assumption that the underlying probability distribution is lognormal. We propose a pair of hybrid models to reduce these limitations and enhance the ability of option pricing. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform it. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
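The Black-Scholes baseline against which the hybrid models are compared can be sketched as follows; the parameter values are illustrative, and in the paper's setup the volatility input would come from one of the GARCH-type estimates.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.

    S: spot, K: strike, T: time to expiry in years,
    r: risk-free rate, sigma: volatility (e.g. a GARCH estimate).
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100.0, K=100.0, T=0.5, r=0.02, sigma=0.2)
```

A hybrid approach would train a neural or neuro-fuzzy network on observed option prices and compare its out-of-sample errors against this closed form.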
Heat transfer modeling an inductive approach
Sidebotham, George
2015-01-01
This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists among neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure for the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
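A weighted closeness measure for choosing among Pareto parameter sets might look like the following sketch; the normalization, weights, and parameter values are assumptions for illustration, not the study's actual formulation.

```python
from math import sqrt

def closeness(candidate, neighbor, weights):
    """Weighted, normalized distance between a candidate Pareto parameter
    set and a neighboring basin's parameters. Parameters believed similar
    across basins carry higher weights and so dominate the measure.
    """
    return sqrt(sum(w * ((c - n) / n) ** 2
                    for c, n, w in zip(candidate, neighbor, weights)))

# Three equally acceptable Pareto sets for the basin being regionalized;
# pick the one closest to the neighboring basin's (illustrative) values.
pareto_sets = [[0.9, 40.0], [1.2, 55.0], [1.05, 48.0]]
neighbor = [1.0, 50.0]
weights = [1.0, 0.5]  # first parameter assumed more similar across basins
best = min(pareto_sets, key=lambda p: closeness(p, neighbor, weights))
```

The regionalization step then adopts `best` as the basin's final parameter set, keeping physically related parameters consistent with neighbors.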
Predictive Software Cost Model Study. Volume I. Final Technical Report.
1980-06-01
development phase to identify computer resources necessary to support computer programs after transfer of program management responsibility and system... classical model development with refinements specifically applicable to avionics systems. The refinements are the result of the Phase I literature search
Nonperturbative approach to the attractive Hubbard model
International Nuclear Information System (INIS)
Allen, S.; Tremblay, A.-M. S.
2001-01-01
A nonperturbative approach to the single-band attractive Hubbard model is presented in the general context of functional-derivative approaches to many-body theories. As in previous work on the repulsive model, the first step is based on a local-field-type ansatz, on enforcement of the Pauli principle, and on a number of crucial sum rules. The Mermin-Wagner theorem in two dimensions is automatically satisfied. At this level, two-particle self-consistency has been achieved. In the second step of the approximation, an improved expression for the self-energy is obtained by using the results of the first step in an exact expression for the self-energy, where the high- and low-frequency behaviors appear separately. The result is a cooperon-like formula. The required vertex corrections are included in this self-energy expression, as required by the absence of a Migdal theorem for this problem. Other approaches to the attractive Hubbard model are critically compared. Physical consequences of the present approach and agreement with Monte Carlo simulations are demonstrated in the accompanying paper (following this one).
Modeling future power plant location patterns. Final report
International Nuclear Information System (INIS)
Eagles, T.W.; Cohon, J.L.; ReVelle, C.
1979-04-01
The locations of future energy facilities must be specified to assess the potential environmental impact of those facilities. A computer model was developed to generate probable locations for the energy facilities needed to meet postulated future energy requirements. The model is designed to cover a very large geographical region. The regional demand for baseload electric generating capacity associated with a postulated demand growth rate over any desired time horizon is specified by the user as an input to the model. The model uses linear programming to select the most probable locations within the region, based on physical and political factors. The linear program is multi-objective, with four objective functions based on transmission, coal supply, population proximity, and water supply considerations. Minimizing each objective function leads to a distinct set of locations. The user can select the objective function or weighted combination of objective functions most appropriate to his interest. Users with disparate interests can use the model to see the locational changes which result from varying weighting of the objective functions. The model has been implemented in a six-state mid-Atlantic region. The year 2000 was chosen as the study year, and a test scenario postulating 2.25% growth in baseload generating capacity between 1977 and 2000 was chosen. The scenario stipulated that this capacity be 50% nuclear and 50% coal-fired. Initial utility reaction indicates the objective based on transmission costs is most important for such a large-scale analysis.
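The full model selects sites by multi-objective linear programming; the toy sketch below illustrates only the weighted combination of the four objective functions, with made-up normalized site scores, to show how different weightings can select different locations.

```python
# Each candidate site scored on four normalized objectives (lower is better):
# transmission cost, coal supply, population proximity, water supply.
# All names and scores are hypothetical, for illustration only.
sites = {
    "A": (0.2, 0.7, 0.5, 0.3),
    "B": (0.6, 0.2, 0.4, 0.5),
    "C": (0.4, 0.4, 0.3, 0.6),
}

def best_site(weights):
    """Return the site minimizing the weighted sum of the four objectives."""
    score = lambda obj: sum(w * o for w, o in zip(weights, obj))
    return min(sites, key=lambda s: score(sites[s]))

# Users with disparate interests weight the objectives differently and
# may arrive at different siting choices.
transmission_first = best_site((0.7, 0.1, 0.1, 0.1))
coal_first = best_site((0.1, 0.7, 0.1, 0.1))
```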
Alligator Rivers Analogue project. Hydrogeological modelling. Final Report - Volume 6
International Nuclear Information System (INIS)
Townley, L.R.; Trefry, M.G.; Barr, A.D.; Braumiller, S.
1992-01-01
This volume describes hydrogeological modelling carried out as part of the Alligator Rivers Analogue Project. Hydrogeology has played a key integrating role in the Project, largely because water movement is believed to have controlled the evolution of the Koongarra uranium Orebody and therefore affects field observations of all types at all scales. Aquifer testing described uses the concept of transmissivity in its interpretation of aquifer response to pumping. The concept of an aquifer, a layer transmitting significant quantities of water in a mainly horizontal direction, seems hard to accept in an environment as heterogeneous as that at Koongarra. But modelling of aquifers both in one dimension and two dimensionally in plan has contributed significantly to our understanding of the site. A one-dimensional model with three layers (often described as a quasi two dimensional model) was applied to flow between the Fault and Koongarra Creek. Being a transient model, this model was able to show that reverse flows can indeed occur back towards the Fault, but only if there is distributed recharge over the orebody as well as a mechanism for the Fault, or a region near the Fault, to remove water from the simulated cross-section. The model also showed clearly that the response of the three-layered system, consisting of a highly weathered zone, a fractured transmissive zone and a less conductive lower schist zone, is governed mainly by the transmissivity and storage coefficient of the middle layer. The storage coefficient of the higher layer has little effect. A two-dimensional model in plan used a description of anisotropy to show that reverse flows can also occur even without a conducting Fault. Modelling of a three-dimensional region using discrete fractures showed that it is certainly possible to simulate systems like that observed at Koongarra, but that large amounts of data are probably needed to obtain realistic descriptions of the fracture networks. Inverse modelling
Alligator Rivers Analogue project. Hydrogeological modelling. Final Report - Volume 6
Energy Technology Data Exchange (ETDEWEB)
Townley, L R; Trefry, M G; Barr, A D [CSIRO Div of Water Resources, PO Wembley, WA (Australia); Braumiller, S [Univ of Arizona, Tucson, AZ (United States). Dept of Hydrology and Water Resources; Kawanishi, M [Central Research Institute of Electric Power Industry, Abiko-Shi, Chiba-Ken (Japan); and others
1993-12-31
This volume describes hydrogeological modelling carried out as part of the Alligator Rivers Analogue Project. Hydrogeology has played a key integrating role in the Project, largely because water movement is believed to have controlled the evolution of the Koongarra uranium Orebody and therefore affects field observations of all types at all scales. Aquifer testing described uses the concept of transmissivity in its interpretation of aquifer response to pumping. The concept of an aquifer, a layer transmitting significant quantities of water in a mainly horizontal direction, seems hard to accept in an environment as heterogeneous as that at Koongarra. But modelling of aquifers both in one dimension and two dimensionally in plan has contributed significantly to our understanding of the site. A one-dimensional model with three layers (often described as a quasi two dimensional model) was applied to flow between the Fault and Koongarra Creek. Being a transient model, this model was able to show that reverse flows can indeed occur back towards the Fault, but only if there is distributed recharge over the orebody as well as a mechanism for the Fault, or a region near the Fault, to remove water from the simulated cross-section. The model also showed clearly that the response of the three-layered system, consisting of a highly weathered zone, a fractured transmissive zone and a less conductive lower schist zone, is governed mainly by the transmissivity and storage coefficient of the middle layer. The storage coefficient of the higher layer has little effect. A two-dimensional model in plan used a description of anisotropy to show that reverse flows can also occur even without a conducting Fault. Modelling of a three-dimensional region using discrete fractures showed that it is certainly possible to simulate systems like that observed at Koongarra, but that large amounts of data are probably needed to obtain realistic descriptions of the fracture networks. Inverse modelling
Quasirelativistic quark model in quasipotential approach
Matveev, V A; Savrin, V I; Sissakian, A N
2002-01-01
The interaction of relativistic particles is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to methods of constructing various quasipotentials, as well as to applications of the quasipotential approach to describing the characteristics of relativistic particle interactions in quark models, namely: elastic hadron scattering amplitudes, mass spectra and decay widths of mesons, and cross sections of deep inelastic lepton scattering on hadrons.
A multiscale modeling approach for biomolecular systems
Energy Technology Data Exchange (ETDEWEB)
Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)
2015-04-15
This paper presents a new multiscale molecular dynamic model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton’s second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein’s movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.
Popularity Modeling for Mobile Apps: A Sequential Approach.
Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong
2015-07-01
The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
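The core computation behind an HMM such as the proposed PHMM is the forward algorithm, which scores an observation sequence under the model; the two-state "popularity regime" model below is a made-up illustration, not the paper's trained PHMM.

```python
def forward(obs, init, trans, emit):
    """Forward-algorithm likelihood of an observation sequence under an HMM.
    States and observation symbols are integer indices; all probabilities
    here are illustrative values, not learned parameters.
    """
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[q] * trans[q][s] for q in range(n))
                 for s in range(n)]
    return sum(alpha)

# Two hidden popularity regimes (rising, fading) and three observed
# chart-rank bands (top, mid, low).
init  = [0.6, 0.4]
trans = [[0.8, 0.2], [0.3, 0.7]]
emit  = [[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]]
likelihood = forward([0, 0, 1], init, trans, emit)
```

Services such as spam or ranking-fraud detection can flag Apps whose observed popularity sequences receive anomalously low likelihood under the learned model.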
Modeling Results For the ITER Cryogenic Fore Pump. Final Report
Energy Technology Data Exchange (ETDEWEB)
Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)
2014-03-31
A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.
A new emergency response model for MACCS. Final report
International Nuclear Information System (INIS)
Chanin, D.I.
1992-01-01
Under DOE sponsorship, as directed by the Los Alamos National Laboratory (LANL), the MACCS code (version 1.5.11.1) [Ch92] was modified to implement a series of improvements in its modeling of emergency response actions. The purpose of this effort has been to aid the Westinghouse Savannah River Company (WSRC) in its performance of the Level III analysis for the Savannah River Site (SRS) probabilistic risk analysis (PRA) of K Reactor [Wo90]. To ensure its usefulness to WSRC, and facilitate the new model's eventual merger with other MACCS enhancements, close cooperation with WSRC and the MACCS development team at Sandia National Laboratories (SNL) was maintained throughout the project. These improvements are intended to allow a greater degree of flexibility in modeling the mitigative actions of evacuation and sheltering. The emergency response model in MACCS version 1.5.11.1 was developed to support NRC analyses of consequences from severe accidents at commercial nuclear power plants. The NRC code imposes unnecessary constraints on DOE safety analyses, particularly for consequences to onsite worker populations, and it has therefore been revamped. The changes to the code have been implemented in a manner that preserves previous modeling capabilities and therefore prior analyses can be repeated with the new code
A new approach for developing adjoint models
Farrell, P. E.; Funke, S. W.
2011-12-01
Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
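The "tape of linear solves" abstraction can be illustrated on a single solve: for J = c·x with A x = b, the discrete adjoint equation is Aᵀλ = c and the gradient is dJ/db = λ. This is a minimal numpy sketch of the idea, not the library described in the abstract.

```python
import numpy as np

tape = []  # records (A, b, x) for each linear solve in the forward model

def solve(A, b):
    """Forward-model linear solve that also records itself on the tape."""
    x = np.linalg.solve(A, b)
    tape.append((A, b, x))
    return x

# Forward model: one linear solve, then a scalar functional J = c @ x.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])
x = solve(A, b)
J = c @ x

# Reverse pass over the tape: solve the adjoint system A^T lam = dJ/dx = c;
# then dJ/db = lam, without differentiating the solver line by line.
A_rec, b_rec, x_rec = tape[-1]
lam = np.linalg.solve(A_rec.T, c)
dJ_db = lam
```

A real model is a long sequence of such solves, and the reverse pass walks the tape backwards, propagating adjoint variables through each recorded operator.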
Eutrophication Modeling Using Variable Chlorophyll Approach
International Nuclear Information System (INIS)
Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.
2016-01-01
In this study, eutrophication was investigated in Lake Ontario to identify the interactions among its effective drivers. The complexity of this phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. For the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion over a time horizon of one year. Several verification tests, a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96), indicated that the model performs well. The results revealed significant differences in the concentrations of the state variables between the constant and variable stoichiometry simulations. Consequently, considering variable stoichiometric ratios in simulations of algae and nutrient concentrations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.
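The Nash-Sutcliffe coefficient used above to judge model fit has a simple closed form: one minus the ratio of residual variance to the variance of the observations about their mean (1.0 is a perfect fit). A minimal sketch with illustrative data:

```python
# Nash-Sutcliffe model efficiency: NSE = 1 - SSE / SS_about_mean
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = [2.0, 3.0, 4.0, 5.0]
print(nash_sutcliffe(obs, obs))                    # 1.0 (perfect fit)
print(nash_sutcliffe(obs, [2.1, 2.9, 4.2, 4.8]))   # 0.98 (close fit)
```

Values can be negative, in which case the model predicts worse than simply using the observed mean.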
Model of the final borehole geometry for helical laser drilling
Kroschel, Alexander; Michalowski, Andreas; Graf, Thomas
2018-05-01
A model for predicting the borehole geometry for laser drilling is presented based on the calculation of a surface of constant absorbed fluence. It is applicable to helical drilling of through-holes with ultrashort laser pulses. The threshold fluence describing the borehole surface is fitted for best agreement with experimental data in the form of cross-sections of through-holes of different shapes and sizes in stainless steel samples. The fitted value is similar to ablation threshold fluence values reported for laser ablation models.
Diagnostic modeling of the ARM experimental configuration. Final report
Energy Technology Data Exchange (ETDEWEB)
Somerville, R.C.J.
1998-04-01
A major accomplishment of this work was to demonstrate the viability of using in-situ data in both mid-continent North America (SGP CART site) and Tropical Western Pacific (TOGA-COARE) locations to provide the horizontal advective flux convergences which force and constrain the Single-Column Model (SCM) which was the main theoretical tool of this work. The author has used TOGA-COARE as a prototype for the ARM TWP site. Results show that SCMs can produce realistic budgets over the ARM sites without relying on parameterization-dependent operational numerical weather prediction objective analyses. The single-column model is diagnostic rather than prognostic. It is numerically integrated in time as an initial value problem which is forced and constrained by observational data. The input is an observed initial state, plus observationally derived estimates of the time-dependent advection terms in the conservation equations, provided at all model layers. Its output is a complete heat and water budget, including temperature and moisture profiles, clouds and their radiative properties, diabatic heating terms, surface energy balance components, and hydrologic cycle elements, all specified as functions of time. These SCM results should be interpreted in light of the original motivation and purpose of ARM and its goal to improve the treatment of cloud-radiation interactions in climate models.
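The core of a diagnostic SCM as described above is an initial-value time loop: a prognostic column state is advanced using observationally derived advective tendencies plus parameterized diabatic terms. A minimal sketch with invented layer counts, tendencies, and a trivial stand-in parameterization (none of these numbers come from the report):

```python
# Diagnostic single-column model (SCM) time loop, illustrative only.
import numpy as np

nlev, nsteps, dt = 5, 48, 1800.0            # 5 layers, 24 h of 30-min steps
T = np.full(nlev, 290.0)                    # observed initial state (K)
adv = np.full(nlev, -1.0e-5)                # prescribed advective tendency (K/s)

def diabatic_heating(T):
    # Stand-in for the parameterizations (radiation, convection, ...)
    return 1.2e-5 * np.ones_like(T)         # K/s, illustrative constant

for _ in range(nsteps):
    # Forced initial-value problem: dT/dt = -advection + Q_diabatic
    T = T + dt * (adv + diabatic_heating(T))

print(T)   # net warming of (1.2e-5 - 1.0e-5) K/s * 86400 s ≈ 0.17 K per layer
```

The point of the design is that the forcing comes from observations rather than from a parameterization-dependent weather-prediction analysis, so the parameterizations under test are the only model physics in the loop.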
Efforts - Final technical report on task 4. Physical modelling validation
DEFF Research Database (Denmark)
Andreasen, Jan Lasson; Olsson, David Dam; Christensen, T. W.
The present report is documentation for the work carried out in Task 4 at DTU Physical modelling-validation on the Brite/Euram project No. BE96-3340, contract No. BRPR-CT97-0398, with the title Enhanced Framework for forging design using reliable three-dimensional simulation (EFFORTS). The report...
Traffic congestion forecasting model for the INFORM System. Final report
Energy Technology Data Exchange (ETDEWEB)
Azarm, A.; Mughabghab, S.; Stock, D.
1995-05-01
This report describes a computerized traffic forecasting model developed by Brookhaven National Laboratory (BNL) for a portion of the Long Island INFORM Traffic Corridor. The model has gone through a testing phase and is currently able to make accurate traffic predictions up to one hour forward in time. The model will eventually take on-line traffic data from the INFORM system roadway sensors and make projections as to future traffic patterns, thus allowing operators at the New York State Department of Transportation (D.O.T.) INFORM Traffic Management Center to manage traffic more optimally. It can also form the basis of a travel information system. The BNL computer model developed for this project is called ATOP, for Advanced Traffic Occupancy Prediction. The various modules of the ATOP computer code are currently written in Fortran and run on PC computers (Pentium-class machines) faster than real time for the section of the INFORM corridor under study. The ATOP code currently contains the following routines: (1) statistical forecasting of traffic flow and occupancy using historical data for similar days and times (long-term knowledge) and recent information from the past hour (short-term knowledge); (2) estimation of the empirical relationships between traffic flow and occupancy using long- and short-term information; (3) mechanistic interpolation using macroscopic traffic models, based on the forecasted traffic flow and occupancy (item 1) and the empirical relationships (item 2) for the specific highway configuration at the time of simulation (construction, lane closures, etc.); and (4) a statistical routine for detecting and classifying anomalies and their impact on highway capacity, which is fed back to the previous items.
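The blend of long-term and short-term knowledge described above can be sketched as a historical occupancy profile corrected by a recent bias that decays with lead time. The weights, decay factor, and data are illustrative assumptions, not ATOP's actual parameters:

```python
# Blending long-term (historical profile) and short-term (recent bias)
# knowledge for a multi-step traffic occupancy forecast. Illustrative only.
import numpy as np

historical_profile = np.array([0.30, 0.35, 0.42, 0.50])  # occupancy, next 4 slots
recent_bias = 0.05          # past hour ran 5 points above the historical norm

def forecast(profile, bias, decay=0.5):
    # Short-term knowledge decays with lead time; the long-term
    # historical profile dominates at longer horizons.
    leads = np.arange(len(profile))
    return profile + bias * decay ** leads

print(forecast(historical_profile, recent_bias))
# → [0.35, 0.375, 0.4325, 0.50625]: the first slot leans on recent
#   conditions, later slots revert toward the historical profile
```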
Hong, Sehee; Kim, Soyoung
2018-01-01
There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: multilevel modeling (hierarchical linear modeling) and structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how the two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and structural equation modeling approaches produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions about measurement errors and factor loadings, yielding better model fit indices.
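The actor-partner interdependence model in its simplest regression form says each person's outcome depends on their own predictor (actor effect) and their partner's predictor (partner effect). The sketch below simulates dyadic data and recovers both effects with a pooled least-squares fit; a real analysis would use the multilevel or SEM machinery discussed above to handle nonindependence within dyads, and all numbers here are invented:

```python
# Actor-partner interdependence model (APIM) on simulated dyads,
# estimated naively by pooled least squares. Illustrative only.
import numpy as np

rng = np.random.default_rng(7)
n_dyads = 500
x_self = rng.normal(0, 1, (n_dyads, 2))    # e.g. conflict score, both partners
x_partner = x_self[:, ::-1]                # each person's partner's score
actor, partner = 0.6, 0.3                  # true effects (assumed)
y = actor * x_self + partner * x_partner + rng.normal(0, 0.5, (n_dyads, 2))

# Stack both dyad members and fit y ~ 1 + x_self + x_partner
X = np.column_stack([np.ones(2 * n_dyads),
                     x_self.ravel(), x_partner.ravel()])
coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
print(coef)   # intercept ~0, actor effect ~0.6, partner effect ~0.3
```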
Information-preserving models of physics and computation: Final report
International Nuclear Information System (INIS)
1986-01-01
This research pertains to discrete dynamical systems, as embodied by cellular automata, reversible finite-difference equations, and reversible computation. The research has strengthened the cross-fertilization between physics, computer science, and discrete mathematics. It has shown that methods and concepts of physics can be exported to computation. Conversely, fully discrete dynamical systems have been shown to be fruitful for representing physical phenomena usually described with differential equations; cellular automata for fluid dynamics have been the most noted example of such a representation. At the practical level, the fully discrete representation approach suggests innovative uses of computers for scientific computing. The originality of these uses lies in their non-numerical nature: they avoid the inaccuracies of floating-point arithmetic and bypass the need for numerical analysis. 38 refs
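The non-numerical, information-preserving style of computing described above can be illustrated with a second-order reversible cellular automaton: the next row depends on the current and previous rows via XOR, so swapping the two time levels runs the dynamics exactly backwards, with no floating-point error at all. The specific local rule below is an illustrative choice:

```python
# A reversible second-order cellular automaton: next = rule(curr) XOR prev.
# Because XOR is its own inverse, the evolution is exactly invertible.
import numpy as np

def step(prev, curr):
    # Illustrative local rule: XOR of left and right neighbours (periodic)
    rule = np.roll(curr, 1) ^ np.roll(curr, -1)
    return curr, rule ^ prev

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 16)          # state at time 0
b = rng.integers(0, 2, 16)          # state at time 1
prev, curr = a.copy(), b.copy()
for _ in range(100):
    prev, curr = step(prev, curr)   # run forward 100 steps

# Reverse: swap the two time levels and step with the SAME rule
prev, curr = curr, prev
for _ in range(100):
    prev, curr = step(prev, curr)
print(np.array_equal(curr, a), np.array_equal(prev, b))  # → True True
```

The recovery is bit-exact: no information is lost, which is precisely the "information-preserving" property the report refers to.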
Evolutionary modeling-based approach for model errors correction
Directory of Open Access Journals (Sweden)
S. Q. Wan
2012-08-01
The inverse problem of using the information in historical data to estimate model errors is a frontier research topic. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."
On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In effect, it realizes a combination of statistics and dynamics to a certain extent.
Final Report: Center for Programming Models for Scalable Parallel Computing
Energy Technology Data Exchange (ETDEWEB)
Mellor-Crummey, John [William Marsh Rice University
2011-09-13
As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.
Computational modeling of drug-resistant bacteria. Final report
Energy Technology Data Exchange (ETDEWEB)
MacDougall, Preston [Middle Tennessee State Univ., Murfreesboro, TN (United States)
2015-03-12
Initial proposal summary: The evolution of antibiotic-resistant mutants among bacteria (superbugs) is a persistent and growing threat to public health. In many ways, we are engaged in a war with these microorganisms, where the corresponding arms race involves chemical weapons and biological targets. Just as advances in microelectronics, imaging technology and feature recognition software have turned conventional munitions into smart bombs, the long-term objectives of this proposal are to develop highly effective antibiotics using next-generation biomolecular modeling capabilities in tandem with novel subatomic feature detection software. Using model compounds and targets, our design methodology will be validated with correspondingly ultra-high resolution structure-determination methods at premier DOE facilities (single-crystal X-ray diffraction at Argonne National Laboratory, and neutron diffraction at Oak Ridge National Laboratory). The objectives and accomplishments are summarized.
MODELS OF TECHNOLOGY ADOPTION: AN INTEGRATIVE APPROACH
Directory of Open Access Journals (Sweden)
Andrei OGREZEANU
2015-06-01
The interdisciplinary study of information technology adoption has developed rapidly over the last 30 years. Various theoretical models have been developed and applied, such as the Technology Acceptance Model (TAM), Innovation Diffusion Theory (IDT), the Theory of Planned Behavior (TPB), etc. The result of these many years of research is thousands of contributions to the field, which, however, remain highly fragmented. This paper develops a theoretical model of technology adoption by integrating major theories in the field: primarily IDT, TAM, and TPB. To do so while avoiding conceptual clutter, an approach is proposed that goes back to basics in developing independent-variable types, emphasizing (1) the logic of classification and (2) the psychological mechanisms behind variable types. Once developed, these types are populated with variables originating in empirical research. Conclusions are drawn on which types are underpopulated and present potential for future research. I end with a set of methodological recommendations for future applications of the model.
Interfacial Fluid Mechanics A Mathematical Modeling Approach
Ajaev, Vladimir S
2012-01-01
Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in the rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations, and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: discusses mathematical models in the context of actual applications such as electrowetting; includes unique material on fluid flow near structured surfaces and phase change phenomena; shows readers how to solve modeling problems related to microscale multiphase flows. Interfacial Fluid Me...
Advanced numerical modelling of a fire. Final report
International Nuclear Information System (INIS)
Heikkilae, L.; Keski-Rahkonen, O.
1996-03-01
Experience and probabilistic risk assessments show that fires present a major hazard in a nuclear power plant (NPP). The PALOME project (1988-92) improved the quality of numerical simulation of fires to make it a useful tool for fire safety analysis. Some of the most advanced zone-model fire simulation codes were acquired. The performance of the codes was studied through literature and personal interviews in earlier studies, and the BRI2 code from the Japanese Building Research Institute was selected for further use. In the PALOME 2 project this work was continued. Information obtained from large-scale fire tests at the German HDR facility allowed reliable prediction of the rate of heat release and was used for code validation. The BRI2 code was validated particularly by participation in the CEC standard problem 'Prediction of effects caused by a cable fire experiment within the HDR-facility'. Participation in the development of a new field-model code, SOFIE, specifically for fire applications as British-Swedish-Finnish cooperation was one of the goals of the project. The SOFIE code was implemented at VTT and the first results of validation simulations were obtained. Well-instrumented fire tests on electronic cabinets were carried out to determine source terms for simulation of room fires and to estimate fire spread to adjacent cabinets. The particular aim of this study was to measure the rate of heat release from a fire in an electronic cabinet. From the three tests, differing mainly in the amount of the fire load, data were obtained for source terms in numerical modelling of fires in rooms containing electronic cabinets. On the basis of these tests, a simple natural ventilation model was also derived. (19 refs.)
Theory, Modeling and Simulation Annual Report 2000; FINAL
International Nuclear Information System (INIS)
Dixon, David A; Garrett, Bruce C; Straatsma, TP; Jones, Donald R; Studham, Scott; Harrison, Robert J; Nichols, Jeffrey A
2001-01-01
This annual report describes the 2000 research accomplishments for the Theory, Modeling, and Simulation (TM and S) directorate, one of the six research organizations in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory (PNNL). EMSL is a U.S. Department of Energy (DOE) national scientific user facility and is the centerpiece of the DOE commitment to providing world-class experimental, theoretical, and computational capabilities for solving the nation's environmental problems
Experimental Benchmarking of Fire Modeling Simulations. Final Report
International Nuclear Information System (INIS)
Greiner, Miles; Lopez, Carlos
2003-01-01
A series of large-scale fire tests were performed at Sandia National Laboratories to simulate a nuclear waste transport package under severe accident conditions. The test data were used to benchmark and adjust the Container Analysis Fire Environment (CAFE) computer code. CAFE is a computational fluid dynamics fire model that accurately calculates the heat transfer from a large fire to a massive engulfed transport package. CAFE will be used in transport package design studies and risk analyses
A new modelling approach for zooplankton behaviour
Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.
We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models, namely random walk and correlated walk models, as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
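The two non-"intelligent" baselines mentioned above differ only in how the heading evolves: a random walk draws a nearly fresh heading each step, while a correlated random walk perturbs the previous heading, producing straighter, more copepod-like tracks. A minimal 2-D sketch (step length and turning-angle spreads are illustrative assumptions):

```python
# Random walk vs correlated random walk: same machinery, different turn spread.
import numpy as np

def walk(n_steps, turn_sd, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    heading, pos = 0.0, np.zeros(2)
    path = [pos.copy()]
    for _ in range(n_steps):
        heading += rng.normal(0.0, turn_sd)   # small turn_sd => correlated walk
        pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
        path.append(pos.copy())
    return np.array(path)

random_walk = walk(200, turn_sd=np.pi)   # headings nearly uncorrelated
correlated = walk(200, turn_sd=0.2)      # persistent direction

print(np.linalg.norm(random_walk[-1]), np.linalg.norm(correlated[-1]))
# the correlated walk typically travels much farther from the origin
```

Comparing net displacement statistics of observed copepod tracks against these baselines is a standard way to show that real behaviour is neither purely random nor purely ballistic.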
Continuum modeling an approach through practical examples
Muntean, Adrian
2015-01-01
This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced by nonstandard rheological topics like those typically arising in the food industry.
Mathematical modeling of the voloxidation process. Final report
International Nuclear Information System (INIS)
Stanford, T.G.
1979-06-01
A mathematical model of the voloxidation process, a head-end reprocessing step for the removal of volatile fission products from spent nuclear fuel, has been developed. Three types of voloxidizer operation have been considered: cocurrent operation, in which the gas and solid streams flow in the same direction; countercurrent operation, in which the gas and solid streams flow in opposite directions; and semi-batch operation, in which the gas stream passes through the reactor while the solids remain in it and are processed batchwise. Because of the complexity of the physical and chemical processes which occur during the voloxidation process and the lack of currently available kinetic data, a global kinetic model has been adopted for this study. Test cases for each mode of operation have been simulated using representative values of the model parameters. To process 714 kg/day of spent nuclear fuel using an oxidizing atmosphere containing 20 mole percent oxygen, it was found that a reactor 0.7 m in diameter and 2.49 m in length would be required for both the cocurrent and countercurrent modes of operation, while for semi-batch operation a 0.3 m³ reactor and an 88200 s batch processing time would be required.
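The kind of sizing calculation described above can be sketched with the simplest possible global kinetic model: first-order release of volatiles in an idealised plug-flow (cocurrent) voloxidizer. The rate constant, solids velocity, and target conversion below are illustrative stand-ins, not values from the report:

```python
# Reactor sizing with a global first-order kinetic model, plug flow.
# dX/dz = (k/u) * (1 - X)  =>  X(z) = 1 - exp(-k z / u)
import math

k = 5.0e-4          # global rate constant, 1/s (assumed)
u = 1.0e-4          # solids velocity through the reactor, m/s (assumed)
target_X = 0.99     # desired release fraction of volatile fission products

length = -u / k * math.log(1.0 - target_X)
print(f"required length: {length:.2f} m")        # ≈ 0.92 m for these inputs

# Semi-batch analogue: X(t) = 1 - exp(-k t), so the batch time for the
# same conversion is t = -ln(1 - X) / k
batch_time = -math.log(1.0 - target_X) / k
print(f"required batch time: {batch_time:.0f} s")  # ≈ 9210 s for these inputs
```

The report's actual model couples gas- and solid-phase balances for all three contacting modes, but the exponential approach to complete conversion is the same qualitative behaviour.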
Global Environmental Change: An integrated modelling approach
International Nuclear Information System (INIS)
Den Elzen, M.
1993-01-01
Two major global environmental problems are dealt with: climate change and stratospheric ozone depletion (and their mutual interactions), briefly surveyed in Part 1. In Part 2 a brief description of the integrated modelling framework IMAGE 1.6 is given. Some specific parts of the model are described in more detail in other chapters, e.g. the carbon cycle model, the atmospheric chemistry model, the halocarbon model, and the UV-B impact model. In Part 3 an uncertainty analysis of climate change and stratospheric ozone depletion is presented (Chapter 4). Chapter 5 briefly reviews the social and economic uncertainties implied by future greenhouse gas emissions. Chapters 6 and 7 describe a model and sensitivity analysis pertaining to the scientific uncertainties and/or lacunae in the sources and sinks of methane and carbon dioxide, and their biogeochemical feedback processes. Chapter 8 presents an uncertainty and sensitivity analysis of the carbon cycle model, the halocarbon model, and the IMAGE model 1.6 as a whole. Part 4 presents the risk assessment methodology as applied more specifically to the problems of climate change and stratospheric ozone depletion. In Chapter 10, this methodology is used to assess current ozone policy and a wide range of halocarbon policies. Chapter 11 presents and evaluates the simulated globally averaged temperature and sea level rise (indicators) for the IPCC-1990 and 1992 scenarios, concluding with a Low Risk scenario which would meet the climate targets. Chapter 12 discusses the impact of sea level rise on the frequency of the Dutch coastal defence system (indicator) for the IPCC-1990 scenarios. Chapter 13 presents projections of mortality rates due to stratospheric ozone depletion based on model simulations employing the UV-B chain model for a number of halocarbon policies. Chapter 14 presents an approach for allocating future emissions of CO2 among regions. (Abstract Truncated)
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
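The variance-based workflow described above ranks parameters by how much of the output variance each explains on its own (the first-order Sobol index). A brute-force nested Monte Carlo sketch on a toy "black-box" stand-in (the model, coefficients, and sample sizes are invented for illustration, not taken from the CPM study):

```python
# First-order Sobol-style sensitivity indices by nested Monte Carlo:
# S_i = Var_{x_i}( E[y | x_i] ) / Var(y)
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2, x3):
    # Toy stand-in for the black-box model: x1 dominates, x3 barely matters
    return 4.0 * x1 + 1.0 * x2 + 0.1 * x3 + 0.5 * x1 * x2

def first_order_index(which, n_outer=200, n_inner=200):
    cond_means = []
    for _ in range(n_outer):
        fixed = rng.uniform(0, 1)
        xs = rng.uniform(0, 1, size=(n_inner, 3))
        xs[:, which] = fixed                  # freeze one input, vary the rest
        cond_means.append(model(*xs.T).mean())
    total = model(*rng.uniform(0, 1, size=(20000, 3)).T)
    return np.var(cond_means) / np.var(total)

S = [first_order_index(i) for i in range(3)]
print(S)   # x1's index should clearly dominate x3's
```

Production analyses would use a dedicated estimator (e.g. Saltelli sampling) rather than this double loop, and would also compute total-order indices to quantify the parameter interactions the paper discusses.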
Crime Modeling using Spatial Regression Approach
Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.
2018-01-01
Acts of criminality in Indonesia have increased in both variety and quantity every year, including murder, rape, assault, vandalism, theft, fraud, fencing, and other cases that make people feel unsafe. The risk of society being exposed to crime is measured by the number of cases reported to the police; the higher the number of reports to the police, the higher the level of crime in the region. In this research, criminality in South Sulawesi, Indonesia is modeled with society's exposure to the risk of crime as the dependent variable. Modelling follows an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor inhabitants, GDP per capita, unemployment, and the human development index (HDI). The analysis using spatial regression shows that there is no spatial dependence, in either the lag or the error terms, in South Sulawesi.
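The two model structures named above differ in where the spatial dependence enters: the SAR model puts it in the outcome itself (y = rho·W·y + X·beta + eps), while the SEM puts it in the disturbances (u = lambda·W·u + eps). A small simulation sketch with an invented ring of five regions and illustrative coefficients:

```python
# SAR vs SEM structure on a toy row-standardised spatial weights matrix W.
import numpy as np

rng = np.random.default_rng(42)
n = 5
# Contiguity weights on a ring of 5 regions, row-standardised
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])  # intercept + a covariate
beta, rho = np.array([1.0, 2.0]), 0.4
eps = rng.normal(0, 0.1, n)

# SAR reduced form: y = (I - rho*W)^{-1} (X beta + eps);
# spillovers propagate through the inverse
y_sar = np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)

# SEM: dependence only in the disturbances, u = (I - rho*W)^{-1} eps
u = np.linalg.solve(np.eye(n) - rho * W, eps)
y_sem = X @ beta + u
print(y_sar, y_sem)
```

When the estimated rho (or lambda) is statistically indistinguishable from zero, as the paper reports for South Sulawesi, both models collapse to ordinary regression.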
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The input data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be solved by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
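The core Bayesian step in such a merge can be sketched as precision weighting: treat each DSM height as a Gaussian measurement of the true surface and combine it with a prior, so the posterior mean is the precision-weighted average. The variances and prior below are illustrative assumptions, not the paper's calibrated values:

```python
# Precision-weighted (Gaussian/Bayesian) fusion of two DSM height grids.
import numpy as np

dsm_a = np.array([[10.2, 10.4], [10.1, 12.8]])   # e.g. WorldView-1 derived
dsm_b = np.array([[10.0, 10.6], [10.3, 12.4]])   # e.g. Pleiades derived
var_a, var_b = 0.30**2, 0.50**2                  # per-sensor height variances
prior_mean, prior_var = 10.5, 2.0**2             # weak smooth-surface prior

# Posterior mean = sum(precision * value) / sum(precision)
precisions = np.array([1 / var_a, 1 / var_b, 1 / prior_var])
stack = np.stack([dsm_a, dsm_b, np.full_like(dsm_a, prior_mean)])
merged = np.tensordot(precisions, stack, axes=1) / precisions.sum()
print(merged)   # each cell lies between the inputs, pulled toward the more
                # precise DSM (dsm_a) and only weakly toward the prior
```

In the paper the prior is informed by local entropy (smooth roofs) and varies spatially; here it is a single constant purely for illustration.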
Energy Technology Data Exchange (ETDEWEB)
Pakrasi, Himadri [Washington Univ., St. Louis, MO (United States)
2016-09-01
The overall objective of this project was to use a systems biology approach to evaluate the potentials of a number of cyanobacterial strains for photobiological production of advanced biofuels and/or their chemical precursors. Cyanobacteria are oxygen evolving photosynthetic prokaryotes. Among them, certain unicellular species such as Cyanothece can also fix N₂, a process that is exquisitely sensitive to oxygen. To accommodate such incompatible processes in a single cell, Cyanothece produces oxygen during the day, and creates an O₂-limited intracellular environment during the night to perform O₂-sensitive processes such as N₂-fixation. Thus, Cyanothece cells are natural bioreactors for the storage of captured solar energy with subsequent utilization at a different time during a diurnal cycle. Our studies include the identification of a novel, fast-growing, mixotrophic, transformable cyanobacterium. This strain has been sequenced and will be made available to the community. In addition, we have developed genome-scale models for a family of cyanobacteria to assess their metabolic repertoire. Furthermore, we developed a method for rapid construction of metabolic models using multiple annotation sources and a metabolic model of a related organism. This method will allow rapid annotation and screening of potential phenotypes based on the newly available genome sequences of many organisms.
Final model independent result of DAMA/LIBRA-phase1
Energy Technology Data Exchange (ETDEWEB)
Bernabei, R.; D'Angelo, S.; Di Marco, A. [Universita di Roma 'Tor Vergata', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma 'Tor Vergata', Rome (Italy); Belli, P. [INFN, sez. Roma 'Tor Vergata', Rome (Italy); Cappella, F.; D'Angelo, A.; Prosperi, D. [Universita di Roma 'La Sapienza', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy); Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy); Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China); Incicchitti, A. [INFN, sez. Roma, Rome (Italy); Montecchia, F. [INFN, sez. Roma 'Tor Vergata', Rome (Italy); Universita di Roma 'Tor Vergata', Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy); Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)
2013-12-15
The results obtained with the total exposure of 1.04 ton x yr collected by DAMA/LIBRA-phase1 deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton x yr exposure) are presented. The DAMA/LIBRA-phase1 data give evidence at 7.5σ C.L. for the presence of Dark Matter (DM) particles in the galactic halo, on the basis of the exploited model-independent DM annual modulation signature, using a highly radio-pure NaI(Tl) target. Including also the first-generation DAMA/NaI experiment (cumulative exposure 1.33 ton x yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is (0.0112±0.0012) cpd/kg/keV; the measured phase is (144±7) days and the measured period is (0.998±0.002) yr, values well in agreement with those expected for DM particles. No systematic effect or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)
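The quoted parameters define a simple cosinusoidal expectation for the modulated part of the single-hit rate, which can be written down directly. The day-numbering convention and the conversion of the period from years to days are assumptions for illustration, not taken from the paper.

```python
import math

S_M = 0.0112              # modulation amplitude, cpd/kg/keV, (2-6) keV interval
PERIOD = 0.998 * 365.25   # measured period converted to days (assumed convention)
PHASE = 144.0             # measured phase: day of year of the maximum (~June 2)

def modulated_rate(t_days):
    """Modulated part of the rate: S_m * cos(2*pi*(t - t0)/T)."""
    return S_M * math.cos(2.0 * math.pi * (t_days - PHASE) / PERIOD)

# The rate peaks at the measured phase and is minimal half a period later.
peak = modulated_rate(PHASE)
trough = modulated_rate(PHASE + PERIOD / 2.0)
```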
Modelling and Evaluation of Aircraft Emissions. Final report
International Nuclear Information System (INIS)
Savola, M.
1996-01-01
An application was developed to calculate the emissions and fuel consumption of a jet and turboprop powered aircraft in Finnair's scheduled and charter traffic both globally and in the Finnish flight information regions. The emissions calculated are nitrogen oxides, unburnt hydrocarbons and carbon monoxide. The study is based on traffic statistics of one week taken from three scheduled periods in 1993. Each flight was studied by dividing the flight profile into sections. The flight profile data are based on aircraft manufacturers' manuals, and they serve as initial data for engine manufacturers' emission calculation programs. In addition, the study includes separate calculations on air traffic emissions at airports during the so-called LTO cycle. The fuel consumption calculated for individual flights is 419,395 tonnes globally, and 146,142 tonnes in the Finnish flight information regions. According to Finnair's statistics the global fuel consumption is 0.97-fold compared with the result given by the model. The results indicate that in 1993 the global nitrogen oxide emissions amounted to 5,934 tonnes, the unburnt hydrocarbon emissions totalled 496 tonnes and carbon monoxide emissions 1,664 tonnes. The corresponding emissions in the Finnish flight information regions were as follows: nitrogen oxides 2,105 tonnes, unburnt hydrocarbons 177 tonnes and carbon monoxide 693 tonnes. (orig.)
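A calculation of this kind reduces, per flight-profile section, to fuel burn multiplied by engine-specific emission indices, summed over sections. The emission-index values and segment fuel burns below are hypothetical placeholders; the real figures come from the engine manufacturers' programs mentioned in the abstract.

```python
# Hypothetical emission indices in grams of pollutant per kg of fuel burned.
EI = {"NOx": 14.0, "HC": 1.2, "CO": 4.0}

def segment_emissions(fuel_burn_kg, ei=EI):
    """Emissions (kg) of one flight-profile section from its fuel burn."""
    return {species: fuel_burn_kg * g_per_kg / 1000.0
            for species, g_per_kg in ei.items()}

def flight_emissions(segment_fuel_burns):
    """Sum emissions over all sections of one flight profile."""
    totals = {species: 0.0 for species in EI}
    for fuel in segment_fuel_burns:
        for species, kg in segment_emissions(fuel).items():
            totals[species] += kg
    return totals

# e.g. taxi, climb, cruise, descent fuel burns in kg for one flight
totals = flight_emissions([300.0, 1200.0, 5400.0, 400.0])
# 7300 kg fuel at 14 g/kg -> 102.2 kg NOx
```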
Community Earth System Model (CESM) Tutorial 2016 Final Report
Energy Technology Data Exchange (ETDEWEB)
Lamarque, Jean-Francois [Univ. Corporation for Atmospheric Research (UCAR) and National Center for Atmospheric Research (NCAR) and Climate and Global Dynamics Laboratory (CGD), Boulder, CO (United States)
2017-05-09
For the 2016 tutorial, NCAR/CGD requested a total budget of $70,000 split equally between DOE and NSF. The funds were used to support student participation (travel, lodging, per diem, etc.). Lectures and practical session support was primarily provided by local participants at no additional cost (see list below). The seventh annual Community Earth System Model (CESM) tutorial (2016) for students and early career scientists was held 8 – 12 August 2016. As has been the case over the last few years, this event was extremely successful and there was greater demand than could be met. There was continued interest in support of the NSF’s EaSM Infrastructure awards, to train these awardees in the application of the CESM. Based on suggestions from previous tutorial participants, the 2016 tutorial experience again provided direct connection to Yellowstone for each individual participant (rather than pairs), and used the NCAR Mesa Library. The 2016 tutorial included lectures on simulating the climate system and practical sessions on running CESM, modifying components, and analyzing data. These were targeted to the graduate student level. In addition, specific talks (“Application” talks) were introduced this year to provide participants with some in-depth knowledge of some specific aspects of CESM.
Numerical modelling of diesel spray using the Eulerian multiphase approach
International Nuclear Information System (INIS)
Vujanović, Milan; Petranović, Zvonimir; Edelbauer, Wilfried; Baleta, Jakov; Duić, Neven
2015-01-01
Highlights: • A numerical model for fuel disintegration was presented. • Fuel liquid and vapour phases were calculated. • Good agreement with experimental data was shown for various combinations of injection and chamber pressure. - Abstract: This research investigates high-pressure diesel fuel injection into the combustion chamber by performing computational simulations using the Euler-Eulerian multiphase approach. Six diesel-like conditions were simulated, for which the liquid fuel jet was injected into a pressurised inert environment (100% N₂) through a 205 μm nozzle hole. The analysis was focused on the liquid jet and vapour penetration, describing spatial and temporal spray evolution. For this purpose, an Eulerian multiphase model was implemented, variations of the sub-model coefficients were performed, and their impact on the spray formation was investigated. The final set of sub-model coefficients was applied to all operating points. Several simulations of high-pressure diesel injections (50, 80, and 120 MPa) combined with different chamber pressures (5.4 and 7.2 MPa) were carried out and the results were compared to the experimental data. The predicted results share a similar spray cloud shape for all conditions, with differing vapour and liquid penetration lengths. The liquid penetration is shortened with the increase in chamber pressure, whilst the vapour penetration is more pronounced at elevated injection pressure. Finally, the results showed good agreement when compared to the measured data, and yielded the correct trends for both the liquid and vapour penetrations under different operating conditions.
A nationwide modelling approach to decommissioning - 16182
International Nuclear Information System (INIS)
Kelly, Bernard; Lowe, Andy; Mort, Paul
2009-01-01
In this paper we describe a proposed UK national approach to modelling decommissioning. For the first time, we shall have an insight into optimizing the safety and efficiency of a national decommissioning strategy. To do this we use the General Case Integrated Waste Algorithm (GIA), a universal model of decommissioning nuclear plant, power plant, waste arisings and the associated knowledge capture. The model scales from individual items of plant through cells, groups of cells, buildings and whole sites, up to a national scale. We describe the national vision for GIA, which can be broken down into three levels: 1) the capture of the chronological order of activities that an experienced decommissioner would use to decommission any nuclear facility anywhere in the world (Level 1 of GIA); 2) the construction of an Operational Research (OR) model based on Level 1 to allow 'what if' scenarios to be tested quickly (Level 2); 3) the construction of a state-of-the-art knowledge capture capability that allows future generations to learn from our current decommissioning experience (Level 3). We show the progress to date in developing GIA at Levels 1 and 2. As part of Level 1, GIA has assisted in the development of an IMechE professional decommissioning qualification. Furthermore, we describe GIA as the basis of a UK-owned database of decommissioning norms for such things as costs, productivity and durations. From Level 2, we report on a pilot study that has successfully tested the basic principles for the OR numerical simulation of the algorithm. We then highlight the advantages of applying the OR modelling approach nationally. In essence, a series of 'what if...' scenarios can be tested that will improve the safety and efficiency of decommissioning. (authors)
Evertson, Carolyn M.; And Others
A summary is presented of the final report, "Effective Classroom Management and Instruction: An Exploration of Models." The final report presents a set of linked investigations of the effects of training teachers in effective classroom management practices in a series of school-based workshops. Four purposes were addressed by the study: (1) to…
A final size relation for epidemic models of vector-transmitted diseases
Fred Brauer
2017-01-01
We formulate and analyze an age-of-infection model for epidemics of diseases transmitted by a vector, including the possibility of direct transmission as well. We show how to determine a basic reproduction number. While there is no explicit final size relation as there is for diseases transmitted directly, we are able to obtain estimates for the final size of the epidemic.
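For contrast with the vector-borne case, the explicit final size relation for directly transmitted diseases, z = 1 - exp(-R₀ z), can be solved numerically. The sketch below illustrates that classical relation, not the vector-borne estimates derived in the paper.

```python
import math

def final_size(r0, tol=1e-12):
    """Solve z = 1 - exp(-r0 * z) for the attack rate z by fixed-point iteration.

    z is the fraction of the population ultimately infected. For r0 <= 1 the
    iteration converges to the trivial solution z = 0; for r0 > 1 it converges
    to the unique positive root.
    """
    z = 0.5
    for _ in range(1000):
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

z = final_size(2.0)  # about 0.797: roughly 80% of the population infected
```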
Modeling in transport phenomena a conceptual approach
Tosun, Ismail
2007-01-01
Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented; students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to
Nuclear physics for applications. A model approach
International Nuclear Information System (INIS)
Prussin, S.G.
2007-01-01
Written by a researcher and teacher with experience at top institutes in the US and Europe, this textbook provides advanced undergraduates minoring in physics with working knowledge of the principles of nuclear physics. Simplifying models and approaches reveal the essence of the principles involved, with the mathematical and quantum mechanical background integrated in the text where it is needed and not relegated to the appendices. The practicality of the book is enhanced by numerous end-of-chapter problems and solutions available on the Wiley homepage. (orig.)
Pedagogic process modeling: Humanistic-integrative approach
Directory of Open Access Journals (Sweden)
Boritko Nikolaj M.
2007-01-01
The paper deals with some current problems of modeling the dynamics of the development of the individual's subject features. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage, and an algorithm for developing an integrative model for it, are offered. The suggested conclusions might be of use for further theoretical research, analysis of educational practices and realistic prediction of pedagogical phenomena.
2012-09-01
The final report for the Model Orlando Regionally Efficient Travel Management Coordination Center (MORE TMCC) presents the details of the 2-year process of the partial deployment of the original MORE TMCC design created in Phase I of this project...
The place of quantitative energy models in a prospective approach
International Nuclear Information System (INIS)
Taverdet-Popiolek, N.
2009-01-01
Futurology above all depends on having the right mindset. Gaston Berger summarizes the prospective approach in five main thrusts: prepare for the distant future, be open-minded (have a systems and multidisciplinary approach), carry out in-depth analyses (draw out the actors which are really determinant for the future, as well as established trends), take risks (imagine risky but flexible projects) and finally think about humanity, futurology being a technique at the service of man to help him build a desirable future. Forecasting, on the other hand, is based on quantified models so as to deduce 'conclusions' about the future. In the field of energy, models are used to draw up scenarios which allow, for instance, measuring the medium- or long-term effects of energy policies on greenhouse gas emissions or global welfare. Scenarios are shaped by the model's inputs (parameters, sets of assumptions) and outputs. Resorting to a model or projecting by scenario is useful in a prospective approach, as it ensures coherence for most of the variables that have been identified through systems analysis and that the mind on its own has difficulty grasping. Interpretation of each scenario must be carried out in the light of the underlying framework of assumptions (the backdrop) developed during the prospective stage. When the horizon is far away (very long term), the worlds imagined by the futurologist contain breaks (technological, behavioural and organizational) which are hard to integrate into the models. It is here that the main limit for the use of models in futurology is located. (author)
Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.
Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J
2016-01-01
Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
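One way to see how an ABM feeds a social network analysis is to simulate pairwise contests whose outcomes depend on a per-agent state, accumulate the results as a weighted "wins" network, and then score each agent on that network. The contest rule and the scalar "strength" standing in for an agent's nutritional state are illustrative assumptions, not the paper's model.

```python
import random
from collections import defaultdict

def simulate_contests(strengths, n_contests=2000, seed=42):
    """Simulate pairwise contests; higher-strength agents win more often.

    Returns a directed weighted network: wins[winner][loser] = contest count.
    """
    rng = random.Random(seed)
    wins = defaultdict(lambda: defaultdict(int))
    ids = list(strengths)
    for _ in range(n_contests):
        a, b = rng.sample(ids, 2)
        p_a = strengths[a] / (strengths[a] + strengths[b])
        winner, loser = (a, b) if rng.random() < p_a else (b, a)
        wins[winner][loser] += 1
    return wins

def dominance_scores(wins, ids):
    """Fraction of contests won -- a simple network-level dominance metric."""
    scores = {}
    for i in ids:
        won = sum(wins[i].values())
        lost = sum(wins[j][i] for j in ids)
        total = won + lost
        scores[i] = won / total if total else 0.0
    return scores

strengths = {"a": 3.0, "b": 2.0, "c": 1.0}
wins = simulate_contests(strengths)
scores = dominance_scores(wins, list(strengths))
# the best-provisioned agent "a" sits at the top of the emergent hierarchy
```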
Final Report for Harvesting a New Wind Crop: Innovative Economic Approaches for Rural America
Energy Technology Data Exchange (ETDEWEB)
Susan Innis; Randy Udall; Project Officer - Keith Bennett
2005-09-30
Final Report for 'Harvesting a New Wind Crop: Innovative Economic Approaches for Rural America': This project, 'Harvesting a New Wind Crop', helped stimulate wind development by rural electric cooperatives and municipal utilities in Colorado. To date, most of the wind power development in the United States has been driven by large investor-owned utilities serving major metropolitan areas. To meet the 5% by 2020 goal of the Wind Powering America program, the 2,000 municipal and 900 rural electric cooperatives in the country must get involved in wind power development. Public power typically serves rural and suburban areas and can play a role in revitalizing communities by tapping into the economic development potential of wind power. One barrier to the involvement of public power in wind development has been the perception that wind power is more expensive than other generation sources. This project focused on two ways to reduce the costs of wind power to make it more attractive to public power entities. The first way was to develop a revenue stream from the sale of green tags. By selling green tags to entities that voluntarily support wind power, rural coops and munis can effectively reduce their cost of wind power. Western Resource Advocates (WRA) and the Community Office for Resource Efficiency (CORE) worked with Lamar Light and Power and Arkansas River Power Authority to develop a strategy to use green tags to help finance their wind project. These utilities are now selling their green tags to Community Energy, Inc., an independent for-profit marketer who in turn sells the tags to consumers around Colorado. The Lamar tags allow the University of Colorado-Boulder, the City of Boulder, NREL and other businesses to support wind power development and make the claim that they are 'wind-powered'. This urban-rural partnership is an important development for the state of Colorado's rural communities.
Implicit moral evaluations: A multinomial modeling approach.
Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael
2017-01-01
Implicit moral evaluations, i.e. immediate, unintentional assessments of the wrongness of actions or persons, play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure (the Moral Categorization Task) and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
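A multinomial processing-tree decomposition of this general shape can be written down directly. The tree below is a generic three-parameter illustration (intentional judgment I, unintentional judgment U, response bias B), not the authors' exact model; the parameter values are arbitrary.

```python
def p_wrong(I, U, B, target_wrong, prime_wrong):
    """Probability of a 'morally wrong' response for one trial type.

    With probability I the target is judged intentionally (response follows
    the target's true status); otherwise, with probability U the prime drives
    the judgment; otherwise the response follows the directional bias B.
    """
    intentional = I * (1.0 if target_wrong else 0.0)
    unintentional = (1 - I) * U * (1.0 if prime_wrong else 0.0)
    bias = (1 - I) * (1 - U) * B
    return intentional + unintentional + bias

# Predicted response probabilities for the four prime/target combinations.
params = dict(I=0.6, U=0.5, B=0.4)
p = {(pr, tg): p_wrong(**params, target_wrong=tg, prime_wrong=pr)
     for pr in (True, False) for tg in (True, False)}
```

Fitting such a model means choosing I, U, B to maximize the multinomial likelihood of the observed response counts across trial types.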
Knobbs, C. G.; Grayson, D. J.
2012-01-01
There is mounting evidence to show that engineers need more than technical skills to succeed in industry. This paper describes a curriculum innovation in which so-called "soft" skills, specifically inter-personal and intra-personal skills, were integrated into a final year mining engineering course. The instructional approach was…
A novel approach to pipeline tensioner modeling
Energy Technology Data Exchange (ETDEWEB)
O'Grady, Robert; Ilie, Daniel; Lane, Michael [MCS Software Division, Galway (Ireland)
2009-07-01
As subsea pipeline developments continue to move into deep and ultra-deep water locations, there is an increasing need for the accurate prediction of expected pipeline fatigue life. A significant factor that must be considered as part of this process is the fatigue damage sustained by the pipeline during installation. The magnitude of this installation-related damage is governed by a number of different agents, one of which is the dynamic behavior of the tensioner systems during pipe-laying operations. There are a variety of traditional finite element methods for representing dynamic tensioner behavior. These existing methods, while basic in nature, have been proven to provide adequate forecasts in terms of the dynamic variation in typical installation parameters such as top tension and sagbend/overbend strain. However due to the simplicity of these current approaches, some of them tend to over-estimate the frequency of tensioner pay out/in under dynamic loading. This excessive level of pay out/in motion results in the prediction of additional stress cycles at certain roller beds, which in turn leads to the prediction of unrealistic fatigue damage to the pipeline. This unwarranted fatigue damage then equates to an over-conservative value for the accumulated damage experienced by a pipeline weld during installation, and so leads to a reduction in the estimated fatigue life for the pipeline. This paper describes a novel approach to tensioner modeling which allows for greater control over the velocity of dynamic tensioner pay out/in and so provides a more accurate estimation of fatigue damage experienced by the pipeline during installation. The paper reports on a case study, as outlined in the proceeding section, in which a comparison is made between results from this new tensioner model and from a more conventional approach. The comparison considers typical installation parameters as well as an in-depth look at the predicted fatigue damage for the two methods
Agribusiness model approach to territorial food development
Directory of Open Access Journals (Sweden)
Murcia Hector Horacio
2011-04-01
Several research efforts have coordinated the academic program of Agricultural Business Management of the University De La Salle (Bogota D.C.) towards the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered as a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and productivity of agricultural items. It takes into account the guidelines of the United Nations "Millennium Development Goals" and the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and the methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the "new rurality".
Sri Darwati
2012-01-01
The main problem of landfill management in Indonesia is the difficulty in getting a location for Final Processing Sites (FPS) due to limited land and high land prices. Besides, about 95% of existing landfills are uncontrolled dumping sites, which could potentially lead to water, soil and air pollution. Based on data from the Ministry of Environment (2010), the Act of the Republic of Indonesia Number 18 Year 2008 Concerning Solid Waste Management prohibits open dumping at final processing sit...
Development of generalised model for grate combustion of biomass. Final report
Energy Technology Data Exchange (ETDEWEB)
Rosendahl, L.
2007-02-15
This project has been divided into two main parts, one of which has focused on modelling and one on designing and constructing a grate-fired biomass test rig. The modelling effort was motivated by a need for improved knowledge of the transport and conversion processes within the bed layer, for two reasons: 1) to improve emission understanding and reduction measures and 2) to improve boundary conditions for CFD-based furnace modelling. The selected approach has been based on a diffusion-coefficient formulation, where conservation equations for the concentration of fuel are solved in a spatially resolved grid, much in the same manner as in a finite volume CFD code. Within this porous layer of fuel, gas flows according to the Ergun equation. The diffusion coefficient links the properties of the fuel to the grate type and vibration mode, and is determined for each combination of fuel, grate and vibration mode. In this work, 3 grates have been tested as well as 4 types of fuel: drinking straw, wood beads, straw pellets and wood pellets. Although much useful information and knowledge has been obtained on transport processes in fuel layers, the model has proved to be less than perfect, and the recommendation is not to continue along this path. New visual data on the motion of straw on vibrating grates indicate that a diffusion-governed motion does not represent the transport very well. Furthermore, it is very difficult to obtain the diffusion coefficient anywhere other than in the surface layer of the grate, and it is not likely that this is representative of the motion within the layer. Finally, as the model complexity grows, model turnover time increases to a level where it is comparable to that of the full furnace model. In order to proceed and address the goals of the first paragraph, it is recommended to return to either a walking-column approach or some other, relatively simple method of prediction, and combine this with a form of randomness, to mimic the
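A diffusion-coefficient formulation of fuel transport along the grate amounts, in its simplest one-dimensional form, to an explicit diffusion update on a spatial grid. The grid, coefficient, and boundary treatment below are illustrative; in the report the coefficient is fitted per fuel/grate/vibration combination.

```python
def diffuse_1d(c, d_coeff, dx, dt, steps):
    """Explicit 1-D diffusion update for a fuel concentration profile.

    Stable (and positivity-preserving) when d_coeff * dt / dx**2 <= 0.5.
    Ends use a simple zero-gradient (no-flux) boundary treatment.
    """
    r = d_coeff * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this time step"
    c = list(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[0], new[-1] = new[1], new[-2]
        c = new
    return c

# An initial heap of fuel in the middle of the grate spreads out over time.
profile = [0.0] * 10 + [1.0] * 5 + [0.0] * 10
out = diffuse_1d(profile, d_coeff=1e-4, dx=0.01, dt=0.2, steps=200)
```

The report's observation that straw motion is not well described by diffusion suggests replacing this update with a walking-column or stochastic transport rule.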
Energy and Development. A Modelling Approach
International Nuclear Information System (INIS)
Van Ruijven, B.J.
2008-01-01
policies have an important role. For instance, low energy taxes and subsidies in developing countries limit the opportunities to promote alternative energy options. A final issue in this thesis is the impact of the changing development context - depletion of fossil fuels and climate change - on the economic development of low-income regions. We developed a stylized population-economy-energy-climate model (SUSCLIME) in which automated agents can take policy-decisions and develop strategies to cope with resource depletion and climate change. From preliminary model experiments it appears that developing countries are more vulnerable to both resource depletion and climate change. A co-benefit of a long-term focus on avoiding climate change is that it also slows down fossil resource depletion. A short-term focus to reduce impacts from depletion of endogenous fossil resources has probably not much synergy with climate policy because imported fossil energy (or coal) is more attractive than developing alternatives.
Final Report for Bio-Inspired Approaches to Moving-Target Defense Strategies
Energy Technology Data Exchange (ETDEWEB)
Fink, Glenn A.; Oehmen, Christopher S.
2012-09-01
This report records the work and contributions of the NITRD-funded Bio-Inspired Approaches to Moving-Target Defense Strategies project performed by Pacific Northwest National Laboratory under the technical guidance of the National Security Agency's R6 division. The project has incorporated a number of bio-inspired cyber defensive technologies within an elastic framework provided by the Digital Ants. This project has created the first scalable, real-world prototype of the Digital Ants Framework (DAF)[11] and integrated five technologies into this flexible, decentralized framework: (1) Ant-Based Cyber Defense (ABCD), (2) Behavioral Indicators, (3) Bioinformatic Classification, (4) Moving-Target Reconfiguration, and (5) Ambient Collaboration. The DAF can be used operationally to decentralize many such data-intensive applications that normally rely on collection of large amounts of data in a central repository. In this work, we have shown how these component applications may be decentralized and may perform analysis at the edge. Operationally, this will enable analytics to scale far beyond current limitations while not suffering from the bandwidth or computational limitations of centralized analysis. This effort has advanced the R6 Cyber Security research program to secure digital infrastructures by developing a dynamic means to adaptively defend complex cyber systems. We hope that this work will benefit both our client's efforts in system behavior modeling and cyber security to the overall benefit of the nation.
Approaches and models of intercultural education
Directory of Open Access Journals (Sweden)
Iván Manuel Sánchez Fontalvo
2013-10-01
Being aware of the need to build an intercultural society, this awareness must be assumed in all social spheres, among which education plays a central role. That role is transcendental, since education must create spaces in which to form people with the virtues and capacities that allow them to live together in multicultural contexts and social diversities (sometimes unequal) in an increasingly globalized and interconnected world, and must foster the development of shared feelings of civic belonging towards the neighbourhood, city, region and country, giving people concern and critical judgement regarding marginalization, poverty, misery and the inequitable distribution of wealth, the causes of structural violence, while at the same time motivating them to work for the welfare and transformation of these scenarios. On these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.
Transport modeling: An artificial immune system approach
Directory of Open Access Journals (Sweden)
Teodorović Dušan
2006-01-01
Full Text Available This paper describes an artificial immune system (AIS) approach to modeling time-dependent (dynamic, real-time) transportation phenomena characterized by uncertainty. The basic idea behind this research is to develop an artificial immune system that generates a set of antibodies (decisions, control actions) that together can successfully cover a wide range of potential situations. The proposed artificial immune system develops antibodies (the best control strategies) for different antigens (different traffic 'scenarios'). This task is performed using optimization or heuristic techniques. A set of antibodies is then combined to create the artificial immune system. The developed artificial immune transportation systems are able to generalize, adapt, and learn based on new knowledge and new information. Applications of the systems are considered for airline yield management, stochastic vehicle routing, and real-time traffic control at an isolated intersection. The preliminary research results are very promising.
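The antigen/antibody matching idea described above can be sketched in a few lines. This is a hypothetical illustration, not code from the paper; the scenario features, affinity measure, and strategy names are all invented:

```python
# Hypothetical sketch of the antigen/antibody matching idea: each antigen
# is a traffic "scenario" (a feature vector) and each antibody is the
# control strategy optimised offline for that scenario.  A new scenario is
# handled by the antibody with the highest affinity (here simply the
# negative Euclidean distance), mimicking immune recognition.
import math

def respond(library, scenario):
    """Return the control strategy of the best-matching antibody."""
    best = max(library, key=lambda ab: -math.dist(ab["antigen"], scenario))
    return best["strategy"]

library = [
    {"antigen": (0.2, 0.1), "strategy": "short green phase"},
    {"antigen": (0.9, 0.8), "strategy": "long green phase"},
]

strategy = respond(library, (0.85, 0.7))   # nearest antigen is (0.9, 0.8)
```

In the paper's scheme, each antibody's strategy would itself come from an offline optimization or heuristic run for its antigen; here the library is hand-written.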
System approach to modeling of industrial technologies
Toropov, V. S.; Toropov, E. S.
2018-03-01
The authors present a system of methods for modeling and improving industrial technologies. The system consists of an information part and a software part. The information part contains structured information about industrial technologies, organised according to a template with several essential categories used to improve the technological process and eliminate weaknesses in the process chain. The base category is the physical effect that takes place as the technical process proceeds. The software part can apply various methods of creative search to the content stored in the information part, paying particular attention to energy transformations in the technological process. Applying the system allows a systematic approach to improving technologies and obtaining new technical solutions.
International Nuclear Information System (INIS)
Ghoniem, Nasr M.
2009-01-01
The following has been achieved: (1) Final design of a Deformable Grazing Incidence Mirror, (2) Formulation of a new approach to modeling surface roughening under laser illumination, and (3) Modeling of radiation hardening under IFE conditions. We discuss here the progress made in each of these areas. The objectives of the Grazing Incidence Metal Mirror (GIMM) are: (1) to reflect the incident laser beam into the direction of the target; (2) to focus the incident beam directly onto the target; (3) to withstand the thermomechanical loads and damage induced by laser beams; (4) to correct the reflective surface so that the focus remains permanently on the target; and (5) to have a full range of motion so it can be placed anywhere relative to the target. The design was described in our progress report for the period August 15, 2003 through April 15, 2004. In the following, we describe further improvements of the final design.
ECOMOD - An ecological approach to radioecological modelling
International Nuclear Information System (INIS)
Sazykina, Tatiana G.
2000-01-01
A unified methodology is proposed to simulate the dynamic processes of radionuclide migration in aquatic food chains in parallel with their stable analogue elements. The distinguishing feature of the unified radioecological/ecological approach is the description of radionuclide migration along with dynamic equations for the ecosystem. The ability of the methodology to predict the results of radioecological experiments is demonstrated by an example of radionuclide (iron group) accumulation by a laboratory culture of the alga Platymonas viridis. Based on the unified methodology, the 'ECOMOD' radioecological model was developed to simulate dynamic radioecological processes in aquatic ecosystems. It comprises three basic modules, which are operated as a set of inter-related programs. The 'ECOSYSTEM' module solves non-linear ecological equations describing the biomass dynamics of essential ecosystem components. The 'RADIONUCLIDE DISTRIBUTION' module calculates the radionuclide distribution in abiotic and biotic components of the aquatic ecosystem. The 'DOSE ASSESSMENT' module calculates doses to aquatic biota and doses to man from aquatic food chains. The application of the ECOMOD model to reconstruct the radionuclide distribution in the Chernobyl Cooling Pond ecosystem in the early period after the accident shows good agreement with observations.
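The core of the unified approach, a radionuclide balance integrated in parallel with an ecological biomass equation, can be illustrated with a minimal two-equation sketch. The coefficients and equations below are invented for illustration and are not the actual ECOMOD equations:

```python
# Illustrative two-equation sketch (invented coefficients, not the actual
# ECOMOD model): algal biomass B follows a logistic equation while the
# radionuclide activity A is taken up in proportion to B and decays, so
# the contamination dynamics ride on the ecosystem dynamics, as the
# abstract describes.
def step(B, A, dt, r=0.5, K=10.0, uptake=0.2, decay=0.01):
    dB = r * B * (1.0 - B / K)           # "ECOSYSTEM" module analogue
    dA = uptake * B - decay * A          # "RADIONUCLIDE DISTRIBUTION" analogue
    return B + dB * dt, A + dA * dt

B, A = 1.0, 0.0                          # initial biomass and activity
for _ in range(100):                     # simple forward-Euler integration
    B, A = step(B, A, 0.1)
```

The point of the coupling is that accumulation saturates as the biomass approaches its carrying capacity, which a fixed transfer-factor model would miss.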
Modelling Approach In Islamic Architectural Designs
Directory of Open Access Journals (Sweden)
Suhaimi Salleh
2014-06-01
Full Text Available Architectural design contributes as one of the main factors to be considered in minimizing negative impacts in the planning and structural development of buildings such as mosques. In this paper, the ergonomics perspective is revisited, focusing on conditional factors involving the organisational, psychological, social and population dimensions as a whole. The paper highlights the functional and architectural integration of aesthetic elements in the form of decorative and ornamental outlay, as well as their incorporation into building structures such as walls, domes and gates. It further focuses on the mathematical aspects of the architectural designs, such as polar equations and the golden ratio. These designs are modelled into mathematical equations of various forms, while the golden ratio in mosques is verified using two techniques, namely geometric construction and a numerical method. The exemplary designs are taken from the Sabah Bandaraya Mosque in Likas, Kota Kinabalu and the Sarawak State Mosque in Kuching, while the Universiti Malaysia Sabah Mosque is used for the golden ratio. Results show that Islamic architectural buildings and designs have long had mathematical concepts and techniques underlying their foundations; hence, a modelling approach is needed to rejuvenate these Islamic designs.
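The "numerical method" for the golden ratio mentioned above can be sketched with a generic Fibonacci-ratio iteration; the actual mosque dimensions used in the paper are not reproduced here:

```python
# Generic sketch of a numerical method for the golden ratio: the ratio of
# consecutive Fibonacci numbers converges to phi = (1 + sqrt(5)) / 2, the
# value a measured facade proportion would be checked against.
import math

def golden_ratio_by_iteration(n=40):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a

phi = (1.0 + math.sqrt(5.0)) / 2.0       # closed form for comparison
approx = golden_ratio_by_iteration()
```

After 40 iterations the ratio agrees with the closed form to well beyond double precision, so either value can serve as the reference when verifying a building's proportions.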
Vibration Stabilization of a Mechanical Model of a X-Band Linear Collider Final Focus Magnet
Frisch, J; Decker, V; Hendrickson, L; Markiewicz, T W; Partridge, R; Seryi, Andrei
2004-01-01
The small beam sizes at the interaction point of an X-band linear collider require mechanical stabilization of the final focus magnets at the nanometer level. While passive systems provide adequate performance at many potential sites, active mechanical stabilization is useful if the natural or cultural ground vibration is higher than expected. A mechanical model of a room-temperature linear collider final focus magnet has been constructed and actively stabilized with an accelerometer-based system.
Vibration Stabilization of a Mechanical Model of a X-Band Linear Collider Final Focus Magnet
International Nuclear Information System (INIS)
Frisch, Josef; Chang, Allison; Decker, Valentin; Doyle, Eric; Eriksson, Leif; Hendrickson, Linda; Himel, Thomas; Markiewicz, Thomas; Partridge, Richard; Seryi, Andrei; SLAC
2006-01-01
The small beam sizes at the interaction point of an X-band linear collider require mechanical stabilization of the final focus magnets at the nanometer level. While passive systems provide adequate performance at many potential sites, active mechanical stabilization is useful if the natural or cultural ground vibration is higher than expected. A mechanical model of a room-temperature linear collider final focus magnet has been constructed and actively stabilized with an accelerometer-based system.
Final report of the TRUE Block Scale project. 3. Modelling of flow and transport
Energy Technology Data Exchange (ETDEWEB)
Poteri, Antti [VTT Processes, Helsinki (Finland)]; Billaux, Daniel [Itasca Consultants SA, Ecully (France)]; Dershowitz, William [Golder Associates Inc., Redmond, WA (United States)]; Gomez-Hernandez, J. Jaime [Univ. Politecnica de Valencia (Spain). Dept. of Hydraulic and Environmental Engineering]; Cvetkovic, Vladimir [Royal Inst. of Tech., Stockholm (Sweden). Div. of Water Resources Engineering]; Hautojaervi, Aimo [Posiva Oy, Olkiluoto (Finland)]; Holton, David [Serco Assurance, Harwell (United Kingdom)]; Medina, Agustin [UPC, Barcelona (Spain)]; Winberg, Anders (ed.) [Conterra AB, Uppsala (Sweden)]
2002-12-01
A series of tracer experiments was performed as part of the TRUE Block Scale experiment over length scales ranging from 10 to 100 m. The in situ experimentation was preceded by a comprehensive iterative characterisation campaign: the results from one borehole were used to update descriptive models and provide the basis for continued characterisation. Apart from core drilling, various types of laboratory investigations, core logging, borehole TV imaging and various types of hydraulic tests (single-hole and cross-hole) were performed. Based on the characterisation data, a hydrostructural model of the investigated rock volume was constructed, including deterministic structures, a stochastic background fracture population, and their material properties. In addition, a generic microstructural conceptual model of the investigated structures was developed. Tracer tests with radioactive sorbing tracers performed in three flow paths were preceded by various pre-tests, including tracer dilution tests, which were used to select suitable configurations of tracer injection and pumping in the established borehole array. The in situ experimentation was also preceded by formulation of basic questions and associated hypotheses to be addressed by the tracer tests and the subsequent evaluation. The hypotheses addressed the validity of the hydrostructural model, the effects of heterogeneity, and block scale retention. Model predictions and subsequent evaluation modelling were performed using a wide variety of model concepts. These included stochastic continuum, discrete feature network and channel network models formulated in 3D, which also solved the flow problem. In addition, two 'single channel' approaches (Posiva Streamtube and LaSAR extended to the block scale) were employed. A common basis for transport was formulated. The difference between the approaches was found in how heterogeneity is accounted for, both in terms of number of different types of immobile zones
Final report of the TRUE Block Scale project. 3. Modelling of flow and transport
International Nuclear Information System (INIS)
Poteri, Antti; Billaux, Daniel; Dershowitz, William; Gomez-Hernandez, J. Jaime; Holton, David; Medina, Agustin; Winberg, Anders
2002-12-01
A series of tracer experiments was performed as part of the TRUE Block Scale experiment over length scales ranging from 10 to 100 m. The in situ experimentation was preceded by a comprehensive iterative characterisation campaign: the results from one borehole were used to update descriptive models and provide the basis for continued characterisation. Apart from core drilling, various types of laboratory investigations, core logging, borehole TV imaging and various types of hydraulic tests (single-hole and cross-hole) were performed. Based on the characterisation data, a hydrostructural model of the investigated rock volume was constructed, including deterministic structures, a stochastic background fracture population, and their material properties. In addition, a generic microstructural conceptual model of the investigated structures was developed. Tracer tests with radioactive sorbing tracers performed in three flow paths were preceded by various pre-tests, including tracer dilution tests, which were used to select suitable configurations of tracer injection and pumping in the established borehole array. The in situ experimentation was also preceded by formulation of basic questions and associated hypotheses to be addressed by the tracer tests and the subsequent evaluation. The hypotheses addressed the validity of the hydrostructural model, the effects of heterogeneity, and block scale retention. Model predictions and subsequent evaluation modelling were performed using a wide variety of model concepts. These included stochastic continuum, discrete feature network and channel network models formulated in 3D, which also solved the flow problem. In addition, two 'single channel' approaches (Posiva Streamtube and LaSAR extended to the block scale) were employed. A common basis for transport was formulated. The difference between the approaches was found in how heterogeneity is accounted for, both in terms of number of different types of immobile zones included
DECOVALEX III PROJECT. Modelling of FEBEX In-Situ Test. Task1 Final Report
Energy Technology Data Exchange (ETDEWEB)
Alonso, E.E.; Alcoverro, J. [Univ. Politecnica de Catalunya, Barcelona (Spain)] (comps.)
2005-02-15
Task 1 of DECOVALEX III was conceived as a benchmark exercise supported by all field and laboratory data generated during the performance of the FEBEX experiment, designed to study thermo-hydro-mechanical and thermo-hydro-geochemical processes in the buffer and rock in the near field. The task was defined as a series of three successive blind prediction exercises (Parts A, B and C), which cover the behaviour of both the rock and the bentonite barrier. Research teams participating in the FEBEX task were given, for each of the three parts, a set of field and laboratory data theoretically sufficient to generate a proper model, and were asked to submit predictions, at given locations and times, for some of the measured variables. The merits and limitations of different modelling approaches were thereby established. The teams could perform additional calculations once the actual 'solution' was disclosed. Final calculations represented the best approximation that a given team could provide, always within the general time constraints imposed by the General DECOVALEX III Organization. This report presents the work performed for Task 1. It contains the case definitions and evaluations of modelling results for Parts A, B and C, and the overall evaluation of the work performed. The report is completed by a CD-ROM containing a set of final reports provided by the modelling teams participating in each of the three parts defined. These reports provide the details necessary to better understand the nature of the blind or final predictions included in this report. The report closes with a set of conclusions, which provides a summary of the main findings and highlights the lessons learned, some of which are summarized below. The best predictions of the water inflow into the excavated tunnel are found when the hydrogeological model is properly calibrated on the basis of other known flow measurements in the same area. The particular idealization of the rock mass (equivalent
DECOVALEX III PROJECT. Modelling of FEBEX In-Situ Test. Task1 Final Report
International Nuclear Information System (INIS)
Alonso, E.E.; Alcoverro, J.
2005-02-01
Task 1 of DECOVALEX III was conceived as a benchmark exercise supported by all field and laboratory data generated during the performance of the FEBEX experiment, designed to study thermo-hydro-mechanical and thermo-hydro-geochemical processes in the buffer and rock in the near field. The task was defined as a series of three successive blind prediction exercises (Parts A, B and C), which cover the behaviour of both the rock and the bentonite barrier. Research teams participating in the FEBEX task were given, for each of the three parts, a set of field and laboratory data theoretically sufficient to generate a proper model, and were asked to submit predictions, at given locations and times, for some of the measured variables. The merits and limitations of different modelling approaches were thereby established. The teams could perform additional calculations once the actual 'solution' was disclosed. Final calculations represented the best approximation that a given team could provide, always within the general time constraints imposed by the General DECOVALEX III Organization. This report presents the work performed for Task 1. It contains the case definitions and evaluations of modelling results for Parts A, B and C, and the overall evaluation of the work performed. The report is completed by a CD-ROM containing a set of final reports provided by the modelling teams participating in each of the three parts defined. These reports provide the details necessary to better understand the nature of the blind or final predictions included in this report. The report closes with a set of conclusions, which provides a summary of the main findings and highlights the lessons learned, some of which are summarized below. The best predictions of the water inflow into the excavated tunnel are found when the hydrogeological model is properly calibrated on the basis of other known flow measurements in the same area. The particular idealization of the rock mass (equivalent porous media
Energy Technology Data Exchange (ETDEWEB)
Zhang, Ye [Univ. of Wyoming, Laramie, WY (United States)
2018-01-17
The critical component of a risk assessment study in evaluating GCS is an analysis of uncertainty in CO2 modeling. In such analyses, direct numerical simulation of CO2 flow and leakage requires many time-consuming model runs. Alternatively, analytical methods have been developed which allow fast and efficient estimation of CO2 storage and leakage, although they employ restrictive assumptions on formation rock and fluid properties. In this study, an intermediate approach is proposed based on the Design of Experiment and Response Surface methodology, which consists of using a limited number of numerical simulations to estimate a prediction outcome as a combination of the most influential uncertain site properties. The methodology can be implemented within a Monte Carlo framework to efficiently assess parameter and prediction uncertainty while honoring the accuracy of numerical simulations. The choice of uncertain properties is flexible and can include geologic parameters that influence reservoir heterogeneity, engineering parameters that influence gas trapping and migration, and reactive parameters that influence the extent of fluid/rock reactions. The method was tested and verified on modeling long-term CO2 flow, non-isothermal heat transport, and CO2 dissolution storage by coupling two-phase flow with explicit miscibility calculation, using an accurate equation of state that gives rise to convective mixing of formation brine variably saturated with CO2. All simulations were performed using three-dimensional high-resolution models including a target deep saline aquifer, overlying caprock, and a shallow aquifer. To evaluate the uncertainty in representing reservoir permeability, the sediment hierarchy of a heterogeneous digital stratigraphy was mapped to create multiple irregularly shaped stratigraphic models of decreasing geologic resolution: heterogeneous (reference), lithofacies, depositional environment, and a (homogeneous) geologic formation. To ensure model
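The design-of-experiment / response-surface workflow described above can be sketched as follows. NumPy is assumed, a cheap analytic function stands in for the numerical simulator, and the two "site properties" and their ranges are purely illustrative:

```python
# Sketch of the DoE / response-surface idea: run a handful of expensive
# simulations at design points, fit a polynomial surrogate, then sample
# the cheap surrogate Monte Carlo style.  The "simulator" is a stand-in.
import numpy as np

def simulator(perm, inj_rate):
    """Stand-in for a CO2 flow simulation (illustrative physics only)."""
    return 3.0 * perm + 0.5 * inj_rate ** 2

# design of experiment: corner points plus a centre point in 2D
design = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
y = np.array([simulator(p, q) for p, q in design])

# fit response surface y ~ c0 + c1*p + c2*q + c3*q^2 by least squares
X = np.column_stack([np.ones(len(design)), design[:, 0],
                     design[:, 1], design[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Monte Carlo on the surrogate instead of the expensive simulator
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(10_000, 2))
Xs = np.column_stack([np.ones(len(samples)), samples[:, 0],
                      samples[:, 1], samples[:, 1] ** 2])
pred = Xs @ coef                      # 10,000 predictions for ~5 sim runs
```

Five simulator calls support ten thousand Monte Carlo evaluations, which is the cost argument the abstract makes; in practice the design points and basis functions would come from a formal experimental design.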
FINAL REPORT:Observation and Simulations of Transport of Molecules and Ions Across Model Membranes
Energy Technology Data Exchange (ETDEWEB)
MURAD, SOHAIL [University of Illinois at Chicago]; JAMESON, CYNTHIA J [University of Illinois at Chicago]
2013-10-22
During this new grant we developed a robust methodology for investigating a wide range of properties of phospholipid bilayers. The approach developed is unique because, despite using periodic boundary conditions, we can simulate an entire experiment or process in detail. For example, we can follow the entire permeation process in a lipid membrane. This includes transport from the bulk aqueous phase to the lipid surface; permeation into the lipid; transport inside the lipid; and transport out of the lipid to the bulk aqueous phase again. We studied the transport of small gases both in the lipid itself and in model protein channels. In addition, we have examined the transport of nanocrystals through the lipid membrane, with the main goal of understanding the mechanical behavior of lipids under stress, including water and ion leakage and lipid flip-flop. Finally, we have also examined in detail the deformation of lipids under the influence of external fields, both mechanical and electrostatic (currently in progress). The important observations and conclusions from our studies are described in the main text of the report.
An integrated approach to permeability modeling using micro-models
Energy Technology Data Exchange (ETDEWEB)
Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)
2008-10-15
An important factor in predicting the performance of steam-assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modelling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modelling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; the extended power-law formalism (EPLF); and the application of micro-modelling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results; there was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for the laminated and brecciated facies of the McMurray oil sands, and the model results were in good agreement with the experimental data. 8 refs., 17 figs.
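The abstract does not spell out the extended power-law formalism; the classical power-law average for a binary sand/shale mixture, which the EPLF generalizes, looks like the sketch below. All values are hypothetical:

```python
# Power-law average for a binary sand/shale mixture, the kind of relation
# the extended power-law formalism (EPLF) builds on (values hypothetical,
# in millidarcies).  Exponent w = +1 gives the arithmetic bound (flow
# along laminations); w = -1 gives the harmonic bound (flow across them).
def power_law_keff(v_shale, k_shale, k_sand, w):
    return (v_shale * k_shale ** w + (1.0 - v_shale) * k_sand ** w) ** (1.0 / w)

k_sh, k_sd = 0.001, 1000.0                       # shale vs clean-sand perm
along = power_law_keff(0.3, k_sh, k_sd, 1.0)     # arithmetic bound, ~700 mD
across = power_law_keff(0.3, k_sh, k_sd, -1.0)   # harmonic bound, ~0.003 mD
```

The five-orders-of-magnitude spread between the two bounds at 30% shale is exactly why laminated facies need a calibrated exponent (or the extended formalism) rather than a single average.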
Weller, J M; Henning, M; Civil, N; Lavery, L; Boyd, M J; Jolly, B
2013-09-01
When evaluating assessments, the impact on learning is often overlooked. Approaches to learning can be deep, surface or strategic. To provide insights into exam quality, we investigated the learning approaches taken by trainees preparing for the Australian and New Zealand College of Anaesthetists (ANZCA) Final Exam. The revised two-factor Study Process Questionnaire (R-SPQ-2F) was modified and validated for this context and was administered to ANZCA advanced trainees. Additional questions were asked about perceived value for anaesthetic practice, study time and approaches to learning for each exam component. Overall, 236 of 690 trainees responded (34%). Responses indicated both deep and surface approaches to learning, with a clear preponderance of deep approaches. The anaesthetic viva was valued most highly and the multiple-choice question component the least. Despite this, respondents spent the most time studying for the multiple-choice questions. The traditionally low pass rate for the short-answer questions could not be explained by limited study time, perceived lack of value or study approaches. Written responses suggested that preparation for the multiple-choice questions was characterised by a surface approach, with rote memorisation of past questions. Minimal reference was made to the ANZCA syllabus as a guide for learning. These findings indicate that, although trainees found the exam generally relevant to practice and adopted predominantly deep learning approaches, there was considerable variation between the four components. These results provide data with which to review the existing ANZCA Final Exam, and comparative data for future studies of revisions to the ANZCA curriculum and exam process.
Analytical approach to chromatic correction in the final focus system of circular colliders
Directory of Open Access Journals (Sweden)
Yunhai Cai
2016-11-01
Full Text Available A conventional final focus system in particle accelerators is systematically analyzed. We find simple relations between the parameters of the two focus modules in the final telescope. Using these relations, we derive the chromatic Courant-Snyder parameters for the telescope. The parameters scale approximately as (L^{*}/β_{y}^{*})δ, where L^{*} is the distance from the interaction point to the first quadrupole, β_{y}^{*} the vertical beta function at the interaction point, and δ the relative momentum deviation. Most importantly, we show how to compensate the chromaticity order by order in δ by a traditional correction module flanked by an asymmetric pair of harmonic multipoles. The method enables a circular Higgs collider with 2% momentum aperture and illuminates a path forward to 4% in the future.
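The quoted scaling can be motivated by a standard thin-lens estimate (a textbook argument, not reproduced from the paper): the final quadrupole, of focal length f ≈ L*, receives a chromatic focusing error Δ(1/f) ≈ −δ/L*, and the vertical beta function at its location is β_q ≈ L*²/β_y*, so the chromatic amplitude behaves as

```latex
\[
  W_y \;\sim\; \tfrac{1}{2}\,\beta_q \left|\Delta\!\left(\tfrac{1}{f}\right)\right|
  \;\approx\; \tfrac{1}{2}\,\frac{L^{*2}}{\beta_y^{*}}\cdot\frac{\delta}{L^{*}}
  \;=\; \frac{L^{*}}{2\,\beta_y^{*}}\,\delta ,
\]
```

consistent, up to the order-one prefactor, with the (L^{*}/β_{y}^{*})δ scaling quoted in the abstract.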
Risk communication: a mental models approach
National Research Council Canada - National Science Library
Morgan, M. Granger (Millett Granger)
2002-01-01
... information about risks. The procedure uses approaches from risk and decision analysis to identify the most relevant information; it also uses approaches from psychology and communication theory to ensure that its message is understood. This book is written in nontechnical terms, designed to make the approach feasible for anyone willing to try it. It is illustrat...
A Systems Approach to Bio-Oil Stabilization - Final Technical Report
Energy Technology Data Exchange (ETDEWEB)
Brown, Robert C; Meyer, Terrence; Fox, Rodney; Submramaniam, Shankar; Shanks, Brent; Smith, Ryan G
2011-12-23
CFD model at all flow speeds. This study shows that fully-resolved direct numerical simulation (DNS) is successful in calculating the filter efficiency at all speeds. Aldehydes and acids are thought to play key roles in the stability of bio-oils, so the catalytic stabilization work focused on whether a reaction approach could be employed that simultaneously addresses these two types of molecules in bio-oil. Our approach to post-treatment was simultaneous hydrogenation and esterification using a bifunctional metal/acid heterogeneous catalyst, in which reactive aldehydes were reduced to alcohols, creating a high enough alcohol concentration that the carboxylic acids could be esterified.
A Multi-Model Approach for System Diagnosis
DEFF Research Database (Denmark)
Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur
2007-01-01
A multi-model approach for system diagnosis is presented in this paper. Its relation to fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one is best. It is an active approach, i.e. an auxiliary input is applied to the system. The multi-model approach is applied to a wind turbine system.
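The core of the multi-model test (apply an auxiliary input, compare the measured response against each pre-described model, select the best match) can be sketched as follows. The three models and the numbers are invented for illustration, not taken from the paper:

```python
# Toy sketch of multi-model diagnosis: an auxiliary input u is applied,
# and the pre-described model whose predicted response is closest to the
# measurement is selected.  The gains below are invented stand-ins for
# nominal and faulty system models.
models = {
    "nominal":        lambda u: 2.0 * u,
    "sensor fault":   lambda u: 0.5 * u,
    "actuator fault": lambda u: 0.0,
}

def diagnose(u_aux, y_measured):
    """Return the name of the model with the smallest output residual."""
    residuals = {name: abs(m(u_aux) - y_measured)
                 for name, m in models.items()}
    return min(residuals, key=residuals.get)
```

The "active" aspect is that u_aux is chosen by the diagnoser, so it can be designed to make the candidate models' responses maximally distinguishable.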
A Workflow-Oriented Approach To Propagation Models In Heliophysics
Directory of Open Access Journals (Sweden)
Gabriele Pierantoni
2014-01-01
Full Text Available The Sun is responsible for the eruption of billions of tons of plasma and the generation of near light-speed particles that propagate throughout the solar system and beyond. If directed towards Earth, these events can be damaging to our technological infrastructure. Hence there is an effort to understand the cause of the eruptive events and how they propagate from Sun to Earth. However, the physics governing their propagation is not well understood, so there is a need to develop a theoretical description of their propagation, known as a Propagation Model, in order to predict when they may impact Earth. It is often difficult to define a single propagation model that correctly describes the physics of solar eruptive events, and even more difficult to implement models capable of catering for all these complexities and to validate them using real observational data. In this paper, we envisage that workflows offer both a theoretical and practical framework for a novel approach to propagation models. We define a mathematical framework that aims at encompassing the different modalities with which workflows can be used, and provide a set of generic building blocks written in the TAVERNA workflow language that users can use to build their own propagation models. Finally we test both the theoretical model and the composite building blocks of the workflow with a real Science Use Case that was discussed during the 4th CDAW (Coordinated Data Analysis Workshop) event held by the HELIO project. We show that generic workflow building blocks can be used to construct a propagation model that successfully describes the transit of solar eruptive events toward Earth and predicts a correct Earth-impact time.
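The composable building-block idea translates naturally into any language with first-class functions. The sketch below uses Python purely as an analogy for the TAVERNA blocks; the block names and the constant-speed transit estimate are assumptions, not taken from the paper:

```python
# Python analogy for workflow building blocks: each block maps an event
# description (a dict) to an updated one, and a propagation model is just
# a composition of blocks.  The blocks here are illustrative stand-ins.
from functools import reduce

def detect(event):
    """Stand-in for an event-detection block."""
    return {**event, "detected": True}

def propagate(event):
    """Stand-in propagation block: constant-speed Sun-Earth transit."""
    au_km = 1.496e8                              # 1 AU in kilometres
    hours = au_km / event["speed_km_s"] / 3600.0
    return {**event, "transit_hours": hours}

def compose(*blocks):
    """Chain blocks left to right into a single workflow."""
    return lambda event: reduce(lambda e, b: b(e), blocks, event)

model = compose(detect, propagate)
result = model({"speed_km_s": 1000.0})           # a fast CME-like event
```

Because each block has the same event-in/event-out signature, alternative propagation physics can be swapped in without touching the rest of the workflow, which is the reusability argument the paper makes for its generic blocks.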
International Nuclear Information System (INIS)
Tentner, A.M.; Parma, E.; Wei, T.; Wigeland, R.
2010-01-01
An important goal of the US DOE reactor development program is to conceptualize advanced safety design features for a demonstration Sodium Fast Reactor (SFR). The treatment of severe accidents is one of the key safety issues in the design approach for advanced SFR systems. It is necessary to develop an in-depth understanding of the risk of severe accidents for the SFR so that appropriate risk management measures can be implemented early in the design process. This report presents the results of a review of the SFR features and phenomena that directly influence the sequence of events during a postulated severe accident. The report identifies the safety features used or proposed for various SFR designs in the US and worldwide for the prevention and/or mitigation of Core Disruptive Accidents (CDA). The report provides an overview of the current SFR safety approaches and the role of severe accidents. Mutual understanding of these design features and safety approaches is necessary for future collaborations between the US and its international partners as part of the GEN IV program. The report also reviews the basis for an integrated safety approach to severe accidents for the SFR that reflects the safety design knowledge gained in the US during the Advanced Liquid Metal Reactor (ALMR) and Integral Fast Reactor (IFR) programs. This approach relies on inherent reactor and plant safety performance characteristics to provide additional safety margins. The goal of this approach is to prevent development of severe accident conditions, even in the event of initiators with safety system failures previously recognized to lead directly to reactor damage.
Energy Technology Data Exchange (ETDEWEB)
Tentner, A. M.; Parma, E.; Wei, T.; Wigeland, R.; Nuclear Engineering Division; SNL; INL
2010-03-01
An important goal of the US DOE reactor development program is to conceptualize advanced safety design features for a demonstration Sodium Fast Reactor (SFR). The treatment of severe accidents is one of the key safety issues in the design approach for advanced SFR systems. It is necessary to develop an in-depth understanding of the risk of severe accidents for the SFR so that appropriate risk management measures can be implemented early in the design process. This report presents the results of a review of the SFR features and phenomena that directly influence the sequence of events during a postulated severe accident. The report identifies the safety features used or proposed for various SFR designs in the US and worldwide for the prevention and/or mitigation of Core Disruptive Accidents (CDA). The report provides an overview of the current SFR safety approaches and the role of severe accidents. Mutual understanding of these design features and safety approaches is necessary for future collaborations between the US and its international partners as part of the GEN IV program. The report also reviews the basis for an integrated safety approach to severe accidents for the SFR that reflects the safety design knowledge gained in the US during the Advanced Liquid Metal Reactor (ALMR) and Integral Fast Reactor (IFR) programs. This approach relies on inherent reactor and plant safety performance characteristics to provide additional safety margins. The goal of this approach is to prevent development of severe accident conditions, even in the event of initiators with safety system failures previously recognized to lead directly to reactor damage.
Investigation of tt in the full hadronic final state at CDF with a neural network approach
Sidoti, A; Busetto, G; Castro, A; Dusini, S; Lazzizzera, I; Wyss, J
2001-01-01
In this work we present the results of a neural network (NN) approach to the measurement of the tt production cross-section and top mass in the all-hadronic channel, analyzing data collected at the Collider Detector at Fermilab (CDF) experiment. We have used a hardware implementation of a feedforward neural network, TOTEM, the product of a collaboration of INFN (Istituto Nazionale Fisica Nucleare)-IRST (Istituto per la Ricerca Scientifica e Tecnologica)-University of Trento, Italy. Particular attention has been paid to the evaluation of the systematics specifically related to the NN approach. The results are consistent with those obtained at CDF by conventional data selection techniques. (38 refs).
300 Area dangerous waste tank management system: Compliance plan approach. Final report
International Nuclear Information System (INIS)
1996-03-01
In its December 5, 1989 letter to DOE-Richland (DOE-RL) Operations, the Washington State Department of Ecology requested that DOE-RL prepare "a plan evaluating alternatives for storage and/or treatment of hazardous waste in the 300 Area...". This document, prepared in response to that letter, presents the proposed approach to bringing the 300 Area into compliance with the federal Resource Conservation and Recovery Act and Washington State's Chapter 173-303 WAC, Dangerous Waste Regulations. It also contains 10 appendices which were developed as bases for preparing the compliance plan approach. It refers to the Radioactive Liquid Waste System facilities and to the radioactive mixed waste
International Nuclear Information System (INIS)
Vladimir M Zamansky; Mark S. Sheldon; Vitali V. Lissianski; Peter M. Maly; David K. Moyeda; Antonio Marquez; W. Randall Seeker
2000-01-01
high efficiency of biomass in reburning are low fuel-N content and high content of alkali metals in ash. These results indicate that the efficiency of biomass as a reburning fuel may be predicted based on its ultimate, proximate, and ash analyses. The results of experimental and kinetic modeling studies were utilized in applying a validated methodology for reburning system design to biomass reburning in a typical coal-fired boiler. Based on the trends in biomass reburning performance and the characteristics of the boiler under study, a preliminary process design for biomass reburning was developed. Physical flow models were applied to specific injection parameters and operating scenarios, to assess the mixing performance of reburning fuel and overfire air jets which is of paramount importance in achieving target NO(sub x) control performance. The two preliminary cases studied showed potential as candidate reburning designs, and demonstrated that similar mixing performance could be achieved in operation with different quantities of reburning fuel. Based upon this preliminary evaluation, EER has determined that reburning and advanced reburning technologies can be successfully applied using biomass. Pilot-scale studies on biomass reburning conducted by EER have indicated that biomass is an excellent reburning fuel. This generic design study provides a template approach for future demonstrations in specific installations
Vector-model-supported approach in prostate plan optimization
International Nuclear Information System (INIS)
Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi
2017-01-01
Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration
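The case-retrieval step described in this abstract can be sketched as a nearest-neighbor search over plan feature vectors. The features and values below are invented for illustration, not the paper's actual DICOM-derived quantities:

```python
import math

def distance(u, v):
    """Euclidean distance between two (pre-normalized) feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def retrieve_reference(test_vec, database):
    """Return the stored case whose feature vector is closest to the test case."""
    return min(database, key=lambda case: distance(case["features"], test_vec))

# Hypothetical normalized features, e.g. target volume and organ-overlap fractions
database = [
    {"id": "case-01", "features": [0.62, 0.31, 0.12]},
    {"id": "case-02", "features": [0.95, 0.18, 0.25]},
]
best = retrieve_reference([0.60, 0.30, 0.11], database)
print(best["id"])  # -> case-01; its planning parameters would seed the optimization
```

In practice the feature scales would be normalized so that no single feature dominates the distance.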
Vector-model-supported approach in prostate plan optimization
Energy Technology Data Exchange (ETDEWEB)
Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)
2017-07-01
Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration
Learner-Centered Instruction (LCI): Volume 7. Evaluation of the LCI Approach. Final Report.
Pieper, William J.; And Others
An evaluation of the learner-centered instruction (LCI) approach to training was conducted by comparing the LCI F-111A weapons control systems mechanic/technician course with the conventional Air Force course for the same Air Force specialty code (AFSC) on the following dimensions: job performance of course graduates, man-hour and dollar costs of…
Computational and Game-Theoretic Approaches for Modeling Bounded Rationality
L. Waltman (Ludo)
2011-01-01
This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic
International Nuclear Information System (INIS)
Drake, R.L.; McNaughton, D.J.; Huang, C.
1979-08-01
Models that are available for the analysis of airborne pollutants are summarized. In addition, recommendations are given concerning the use of particular models to aid in particular air quality decision making processes. The air quality models are characterized in terms of time and space scales, steady state or time dependent processes, reference frames, reaction mechanisms, treatment of turbulence and topography, and model uncertainty. Using these characteristics, the models are classified in the following manner: simple deterministic models, such as air pollution indices, simple area source models and rollback models; statistical models, such as averaging time models, time series analysis and multivariate analysis; local plume and puff models; box and multibox models; finite difference or grid models; particle models; physical models, such as wind tunnels and liquid flumes; regional models; and global models
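Of the model classes listed above, the rollback model is simple enough to state directly: concentration above background is assumed proportional to emissions, so allowable emissions follow from the ratio of the background-corrected standard to the background-corrected current concentration. A minimal sketch (numbers are illustrative only):

```python
def rollback_allowable_emissions(e_now, c_now, c_std, c_background=0.0):
    """Proportional rollback: concentration above background scales linearly
    with emissions, so the allowable emission rate is the current rate scaled
    by the ratio of background-corrected standard to current concentration."""
    return e_now * (c_std - c_background) / (c_now - c_background)

# If current emissions of 100 t/day give 80 ug/m3 over a 10 ug/m3 background,
# meeting a 45 ug/m3 standard allows 100 * (45-10)/(80-10) = 50 t/day.
print(rollback_allowable_emissions(100.0, 80.0, 45.0, 10.0))  # -> 50.0
```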
A Dynamic Approach to Modeling Dependence Between Human Failure Events
Energy Technology Data Exchange (ETDEWEB)
Boring, Ronald Laurids [Idaho National Laboratory
2015-09-01
In practice, most HRA methods use direct dependence from THERP, the notion that error begets error: one human failure event (HFE) may increase the likelihood of subsequent HFEs. In this paper, we approach dependence from a simulation perspective in which the effects of human errors are dynamically modeled. There are three key concepts that play into this modeling: (1) Errors are driven by performance shaping factors (PSFs). In this context, the error propagation is not a result of the presence of an HFE yielding overall increases in subsequent HFEs. Rather, it is shared PSFs that cause dependence. (2) PSFs have qualities of lag and latency. These two qualities are not currently considered in HRA methods that use PSFs. Yet, to model the effects of PSFs, it is not simply a matter of identifying the discrete effects of a particular PSF on performance. The effects of PSFs must be considered temporally, as the PSFs will have a range of effects across the event sequence. (3) Finally, there is the concept of error spilling. When PSFs are activated, they not only have temporal effects but also lateral effects on other PSFs, leading to emergent errors. This paper presents the framework for tying together these dynamic dependence concepts.
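A toy illustration of the lag/latency idea, with an invented multiplier curve whose parameters are not taken from any published HRA method: a PSF activated by one event raises the human error probability (HEP) of later tasks, and the effect decays over time, so it is the shared PSF rather than the earlier HFE itself that creates dependence.

```python
import math

def psf_multiplier(t, t_activate, peak=10.0, lag=2.0, decay=5.0):
    """Illustrative lag/decay curve for a PSF: no effect until `lag` time units
    after activation, then a peak HEP multiplier decaying back toward 1
    (nominal). All parameter values are invented for this sketch."""
    dt = t - t_activate - lag
    if dt < 0:
        return 1.0
    return 1.0 + (peak - 1.0) * math.exp(-dt / decay)

nominal_hep = 1e-3
# Two subsequent tasks at t=3 and t=20 after a stressor activates at t=0:
hep_early = nominal_hep * psf_multiplier(3.0, 0.0)
hep_late = nominal_hep * psf_multiplier(20.0, 0.0)
print(hep_early > hep_late > nominal_hep)  # -> True: the shared PSF drives dependence
```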
A robust Bayesian approach to modeling epistemic uncertainty in common-cause failure models
International Nuclear Information System (INIS)
Troffaes, Matthias C.M.; Walter, Gero; Kelly, Dana
2014-01-01
In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model
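The imprecise-Dirichlet update for a single alpha-factor reduces to simple arithmetic: with learning parameter s and prior expectation bounded between t_lower and t_upper, the posterior expectation after observing n_k events among n_total lies between (s·t_lower + n_k)/(s + n_total) and (s·t_upper + n_k)/(s + n_total). A sketch with illustrative numbers:

```python
def posterior_alpha_bounds(n_k, n_total, t_lower, t_upper, s):
    """Imprecise-Dirichlet update for one alpha-factor: prior lower/upper
    expectations and learning parameter s give posterior lower/upper
    expectations after observing n_k events out of n_total."""
    lower = (s * t_lower + n_k) / (s + n_total)
    upper = (s * t_upper + n_k) / (s + n_total)
    return lower, upper

# Hypothetical elicitation: prior expectation for alpha_2 between 0.02 and 0.20,
# s = 5 (within the 1-10 range the paper finds reasonable), and 3 of 50
# observed failure events involving two components:
lo, hi = posterior_alpha_bounds(3, 50, 0.02, 0.20, 5.0)
print(round(lo, 4), round(hi, 4))
```

As n_total grows, the lower and upper expectations converge, reflecting how the model learns from data at a rate controlled by s.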
A Discrete Monetary Economic Growth Model with the MIU Approach
Directory of Open Access Journals (Sweden)
Wei-Bin Zhang
2008-01-01
This paper proposes an alternative approach to economic growth with money. The production side is the same as in the Solow model, the Ramsey model, and the Tobin model, but we treat consumer behavior differently from the traditional approaches. The model is influenced by the money-in-the-utility (MIU) approach in monetary economics. It provides a mechanism of endogenous saving, which the Solow model lacks, and avoids the assumption of adding up utility over a period of time, upon which the Ramsey approach is based.
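The flavor of such a discrete growth recursion can be shown with a toy model; this is a generic sketch under invented parameters, with a fixed wealth-holding propensity standing in for the MIU-derived saving rule, not Zhang's actual equations:

```python
def simulate(k0, alpha=0.3, A=1.0, delta=0.05, lam=0.2, periods=500):
    """Toy discrete growth recursion: Cobb-Douglas output A*k^alpha, a fixed
    propensity `lam` to carry output into next-period wealth (a stand-in for
    the endogenous MIU saving rule), and depreciation rate delta."""
    k = k0
    for _ in range(periods):
        y = A * k ** alpha
        k = lam * y + (1.0 - delta) * k
    return k

# The steady state solves delta*k = lam*A*k^alpha, i.e. k* = (lam*A/delta)^(1/(1-alpha));
# iterating from k0 = 1 converges to that value.
print(round(simulate(1.0), 3))
```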
Search for the standard model Higgs boson in tau final states
Abazov, V.M.; et al., [Unknown; Ancu, L.S.; de Jong, S.J.; Filthaut, F.; Galea, C.F.; Hegeman, J.G.; Houben, P.; Meijer, M.M.; Svoisky, P.; van den Berg, P.J.; van Leeuwen, W.M.
2009-01-01
We present a search for the standard model Higgs boson using hadronically decaying tau leptons, in 1 fb⁻¹ of data collected with the D0 detector at the Fermilab Tevatron pp̄ collider. We select two final states: τ± plus missing transverse energy and b jets, and τ⁺τ⁻ plus
EPA announced the availability of the final report, An Exploratory Study: Assessment of Modeled Dioxin Exposure in Ceramic Art Studios. This report investigates the potential dioxin exposure to artists/hobbyists who use ball clay to make pottery and related products. Derm...
International Nuclear Information System (INIS)
1979-01-01
Analytical procedures were refined for the Structural Assessment Approach for assessing the Material Control and Accounting systems at facilities that contain special nuclear material. Requirements were established for an efficient, feasible algorithm to be used in evaluating system performance measures that involve the probability of detection. Algorithm requirements to calculate the probability of detection for a given type of adversary and the target set are described
Mathematical Modelling Approach in Mathematics Education
Arseven, Ayla
2015-01-01
The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of modeling is especially important for examinations such as PISA, which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…
Rival approaches to mathematical modelling in immunology
Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.
2007-08-01
In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.
A hybrid agent-based approach for modeling microbiological systems.
Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing
2008-11-21
Models for systems biology commonly adopt differential-equation or agent-based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the multi-agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10³ cells and 1.2×10⁶ molecules. The model produces cell migration patterns that are comparable to laboratory observations.
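A minimal sketch of the hybrid representation on a 1-D domain, with invented rates: the attractant is a per-grid-cell quantity updated by a discretized diffusion/decay equation, while each biological cell is an agent following a gradient rule.

```python
import random

random.seed(0)

N = 20
field = [float(i) for i in range(N)]           # attractant quantity per grid cell
cells = [random.randrange(N) for _ in range(50)]  # agent positions

def step(cells, field, d=0.1, decay=0.01):
    """One hybrid step: explicit diffusion + first-order decay for the
    molecular quantity, then a greedy gradient move for each agent."""
    new = field[:]
    for i in range(1, len(field) - 1):
        lap = field[i - 1] - 2 * field[i] + field[i + 1]
        new[i] = field[i] + d * lap - decay * field[i]
    moved = []
    for x in cells:
        left = field[x - 1] if x > 0 else -1.0
        right = field[x + 1] if x < len(field) - 1 else -1.0
        moved.append(x - 1 if left > right else min(x + 1, len(field) - 1))
    return moved, new

for _ in range(30):
    cells, field = step(cells, field)
print(sum(cells) / len(cells) > N / 2)  # -> True: agents drift toward high attractant
```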
Energy Technology Data Exchange (ETDEWEB)
Knio, Omar [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science
2017-05-05
The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.
Numerical modelling approach for mine backfill
Indian Academy of Sciences (India)
Muhammad Zaka Emad
2017-07-24
… conditions. This paper discusses a numerical modelling strategy for modelling mine backfill material. The … placed in an ore pass that leads the ore to the ore bin and crusher, from … 1 year, depending on the mine plan.
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in the current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However these biomedical problems are inherently complex with a myriad of influencing factors, which strongly complicates the model building and validation process. This book wants to address four main issues related to the building and validation of computational models of biomedical processes: Modeling establishment under uncertainty Model selection and parameter fitting Sensitivity analysis and model adaptation Model predictions under uncertainty In each of the abovementioned areas, the book discusses a number of key-techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
OILMAP: A global approach to spill modeling
International Nuclear Information System (INIS)
Spaulding, M.L.; Howlett, E.; Anderson, E.; Jayko, K.
1992-01-01
OILMAP is an oil spill model system suitable for use in both rapid response mode and long-range contingency planning. It was developed for a personal computer and employs full-color graphics to enter data, set up spill scenarios, and view model predictions. The major components of OILMAP include environmental data entry and viewing capabilities, the oil spill models, and model prediction display capabilities. Graphic routines are provided for entering wind data, currents, and any type of geographically referenced data. Several modes of the spill model are available. The surface trajectory mode is intended for quick spill response. The weathering model includes the spreading, evaporation, entrainment, emulsification, and shoreline interaction of oil. The stochastic and receptor models simulate a large number of trajectories from a single site for generating probability statistics. Each model and the algorithms they use are described. Several additional capabilities are planned for OILMAP, including simulation of tactical spill response and subsurface oil transport. 8 refs
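The stochastic mode described above can be sketched as a Monte Carlo ensemble of trajectories released from one site, accumulating hit probabilities on a grid; the drift and diffusion numbers here are illustrative and not OILMAP's actual algorithms:

```python
import random

random.seed(42)

def run_trajectories(n_traj=2000, steps=100, drift=(0.55, 0.15), spread=0.3):
    """Random-walk trajectory ensemble: per-step mean drift plus Gaussian
    turbulent spread; endpoints are binned into 10-unit grid cells and the
    per-cell hit fraction estimates the contact probability."""
    hits = {}
    for _ in range(n_traj):
        x = y = 0.0
        for _ in range(steps):
            x += drift[0] + random.gauss(0, spread)
            y += drift[1] + random.gauss(0, spread)
        cell = (int(x // 10), int(y // 10))
        hits[cell] = hits.get(cell, 0) + 1
    return {c: k / n_traj for c, k in hits.items()}

probs = run_trajectories()
# Most probability mass lands near the mean endpoint (55, 15), i.e. cell (5, 1)
print(max(probs, key=probs.get))
```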
Relaxed memory models: an operational approach
Boudol , Gérard; Petri , Gustavo
2009-01-01
Memory models define an interface between programs written in some language and their implementation, determining which behaviour the memory (and thus a program) is allowed to have in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race free code has a standard semantics. Traditionally, memory models are defined axiomatically, setting constraints on the order in which memory operations are allow...
Do recommender systems benefit users? a modeling approach
Yeung, Chi Ho
2016-04-01
Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
Modeling composting kinetics: A review of approaches
Hamelers, H.V.M.
2004-01-01
Composting kinetics modeling is necessary to design and operate composting facilities that comply with strict market demands and tight environmental legislation. Current composting kinetics modeling can be characterized as inductive, i.e. the data are the starting point of the modeling process and
Conformally invariant models: A new approach
International Nuclear Information System (INIS)
Fradkin, E.S.; Palchik, M.Ya.; Zaikin, V.N.
1996-02-01
Two mathematical models of quantum field theory in D dimensions are analyzed: a model of a charged scalar field defined by two generations of secondary fields in spaces of even dimension D ≥ 4, and a model of a neutral scalar field defined by two generations of secondary fields in two-dimensional space. 6 refs
Low-pressure approach to the formation and study of exciplex systems. Final report
International Nuclear Information System (INIS)
Sanzone, G.
1981-06-01
Under this contract, the following goals were set. (1) Development and construction of an experimental system for the study of the kinetics of excimers, and demonstration of the validity of the low-pressure approach to such studies. The apparatus was to consist of the following: (a) a cluster-molecular-beam source of van der Waals dimers and higher oligomers; (b) a modulated-beam mass spectrometer; (c) a low-energy electron beam for the production of excimers; (d) a vacuum-ultraviolet to visible detection and photon-counting system to monitor excimer emission; (e) a flash-excited tunable laser for studies of resonant self-absorptions. (2) Form Ar₂ in its van der Waals ground state. (3) Produce Ar₂* by electron bombardment of Ar₂. (4) Perform fluorescence and photon absorption studies of Ar₂*. At the end of the contract period, goals 1 and 2 had been met; experiments 3 and 4 had been designed
Comparison of three types of models for the prediction of final academic achievement
Directory of Open Access Journals (Sweden)
Silvana Gasar
2002-12-01
For efficient prevention of inappropriate secondary school choices, and thereby of academic failure, school counselors need a tool for predicting an individual pupil's final academic achievement. Using data mining techniques on a database of pupils, together with expert modeling, we developed several models for predicting final academic achievement in an individual high school educational program. For data mining, we used statistical analyses, clustering, and two machine learning methods: classification decision trees and hierarchical decision models. Using the expert system shell DEX, an expert system based on a hierarchical multi-attribute decision model was developed manually. All the models were validated and evaluated from the viewpoint of their applicability. The predictive accuracy of the DEX models and the decision trees was equal and very satisfactory, matching the predictive accuracy of an experienced counselor. Given the efficiency of and difficulties in developing the models, and the relatively rapid changes in our education system, we propose that decision trees be used in the further development of predictive models.
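A single-split decision stump is the simplest member of the classification-tree family the study used; the attributes and pupil data below are invented for illustration:

```python
def learn_stump(rows, labels):
    """Exhaustively pick the single attribute/threshold split that best
    separates the binary outcome labels (1 = successful final achievement)."""
    best = None
    for f in range(len(rows[0])):
        for t in sorted(set(r[f] for r in rows)):
            pred = [1 if r[f] >= t else 0 for r in rows]
            acc = sum(p == y for p, y in zip(pred, labels)) / len(labels)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best  # (training accuracy, attribute index, threshold)

# Invented columns: [prior GPA, entrance-test score]
rows = [[2.1, 40], [2.8, 55], [3.4, 62], [3.9, 75], [2.5, 45], [3.6, 70]]
labels = [0, 0, 1, 1, 0, 1]
acc, feat, thr = learn_stump(rows, labels)
print(acc, feat, thr)  # -> 1.0 0 3.4 (split on prior GPA >= 3.4)
```

A full decision tree simply recurses this split selection on each resulting subset.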
A systemic approach to modelling of radiobiological effects
International Nuclear Information System (INIS)
Obaturov, G.M.
1988-01-01
Basic principles of the systemic approach to modelling of the radiobiological effects at different levels of cell organization have been formulated. The methodology is proposed for theoretical modelling of the effects at these levels
Energy Technology Data Exchange (ETDEWEB)
Quinn, John
2009-11-30
Work related to this project introduced the idea of an effective monopole strength Q* that acted as the effective angular momentum of the lowest shell of composite fermions (CF). This allowed us to predict the angular momentum of the lowest band of energy states for any value of the applied magnetic field simply by determining N_QP, the number of quasielectrons (QE) or quasiholes (QH) in a partially filled CF shell, and adding the angular momenta of the N_QP fermion excitations. The approach reported treated the filled CF level as a vacuum state which could support QE and QH excitations. Numerical diagonalization of small systems allowed us to determine the angular momenta, the energies, and the pair interaction energies of these elementary excitations. The spectra of low-energy states could then be evaluated in a Fermi-liquid-like picture, treating the much smaller number of quasiparticles and their interactions instead of the larger system of N electrons with Coulomb interactions.
Energy Technology Data Exchange (ETDEWEB)
Fox, C.D. [Fort William First Nation, Thunder Bay, ON (Canada)
2006-03-13
This report presented a comprehensive approach to energy conservation programming for the Fort William First Nation, located in Thunder Bay, Ontario. The report outlined the historical context of the relationship between the Canadian government and Aboriginal people. The Aboriginal community in Ontario was described with reference to the difference between the First Nations population, Metis, and Inuit. Statistics on the Aboriginal population in Ontario were broken down. Different Aboriginal organizations, as well as organizations serving Aboriginal peoples, were identified and described. The report also described the political process and administrative protocol for energy conservation and energy efficiency. Energy conservation in the Aboriginal community was also explained. Lastly, the report provided several recommendations related to awareness and education; translation; incentives; delivery mechanisms; and pilot projects. The report concluded with an agreement to hold a provincial conference in Toronto on the issues raised in the report, and envisioned an Aboriginal unit within the Bureau of Conservation of the Ontario Power Authority to plan, develop, implement, manage and monitor the deliverables resulting from the report.
Serpentinization reaction pathways: implications for modeling approach
Energy Technology Data Exchange (ETDEWEB)
Janecky, D.R.
1986-01-01
Experimental seawater-peridotite reaction pathways to form serpentinites at 300 °C, 500 bars, can be accurately modeled using the EQ3/6 codes in conjunction with thermodynamic and kinetic data from the literature and unpublished compilations. These models provide both confirmation of experimental interpretations and more detailed insight into hydrothermal reaction processes within the oceanic crust. The accuracy of these models depends on careful evaluation of the aqueous speciation model, use of mineral compositions that closely reproduce compositions in the experiments, and definition of realistic reactive components in terms of composition, thermodynamic data, and reaction rates.
Consumer preference models: fuzzy theory approach
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
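The linguistic variables of such a fuzzy preference model are typically encoded with membership functions; a sketch using triangular memberships and invented price terms:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic terms for a "price" variable (values invented):
terms = {
    "cheap": (0, 0, 50),
    "moderate": (30, 60, 90),
    "expensive": (70, 120, 120),
}

def fuzzify(price):
    """Degree of membership of a crisp price in each linguistic term."""
    return {name: round(tri(price, *abc), 3) for name, abc in terms.items()}

print(fuzzify(45))  # -> {'cheap': 0.1, 'moderate': 0.5, 'expensive': 0.0}
```

An individual-level preference model would then combine such memberships across attributes with fuzzy rules or operators.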
A visual approach for modeling spatiotemporal relations
R.L. Guimarães (Rodrigo); C.S.S. Neto; L.F.G. Soares
2008-01-01
Textual programming languages have proven to be difficult to learn and to use effectively for many people. For this sake, visual tools can be useful to abstract the complexity of such textual languages, minimizing the specification efforts. In this paper we present a visual approach for
PRODUCT TRIAL PROCESSING (PTP): A MODEL APPROACH ...
African Journals Online (AJOL)
This study is a theoretical approach to the consumer's processing of product trial, and equally explored ... consumer's first usage experience with a company's brand or product that is most important in determining ... product, what it is really marketing is the expected ..... confidence, thus there is a positive relationship between ...
Three dimensional global modeling of atmospheric CO2. Final technical report
International Nuclear Information System (INIS)
Fung, I.; Hansen, J.; Rind, D.
1983-01-01
A modeling effort has been initiated to study the prospects of extracting information on carbon dioxide sources and sinks from observed CO2 variations. The approach uses a three-dimensional global transport model, based on winds from a 3-D general circulation model (GCM), to advect CO2 noninteractively, i.e., as a tracer, with specified sources and sinks of CO2 at the surface. This report identifies the 3-D model employed in this study and discusses biosphere, ocean and fossil fuel sources and sinks. Some preliminary model results are presented. 14 figures
Nonlinear Modeling of the PEMFC Based On NNARX Approach
Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo
2015-01-01
The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system whose structure is difficult to estimate correctly with traditional linear modeling approaches. For this reason, this paper presents a nonlinear model of the PEMFC using the Neural Network Auto-regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy...
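As a rough illustration of the NNARX idea, the sketch below builds lagged regression vectors from past outputs and inputs and fits a one-hidden-layer MLP to a toy nonlinear system standing in for the PEMFC; the lag orders, network size, "plant" dynamics, and training settings are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def narx_regressor(y, u, na=2, nb=2):
    """Stack [y(t-1)..y(t-na), u(t-1)..u(t-nb)] as the regression vector for y(t)."""
    start = max(na, nb)
    X = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
         for t in range(start, len(y))]
    return np.array(X), y[start:]

# Toy nonlinear dynamic "plant" standing in for the PEMFC (made-up dynamics).
u = rng.uniform(-1.0, 1.0, 500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.1 * y[t - 2] + 0.4 * np.tanh(u[t - 1]) + 0.05 * u[t - 2]

X, target = narx_regressor(y, u)

# One-hidden-layer MLP (the "NN" part of NNARX) trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (X.shape[1], 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, 8); b2 = 0.0
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - target
    dh = np.outer(err, W2) * (1.0 - h ** 2)   # backprop through tanh
    W2 -= 0.05 * h.T @ err / len(err)
    b2 -= 0.05 * err.mean()
    W1 -= 0.05 * X.T @ dh / len(err)
    b1 -= 0.05 * dh.mean(axis=0)

# one-step-ahead prediction error after training
rmse = np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - target) ** 2))
```

After training, the one-step-ahead RMSE should be well below the standard deviation of the target, i.e. the network beats the trivial predict-the-mean baseline.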
Development of a Conservative Model Validation Approach for Reliable Analysis
2015-01-01
CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982, Development of a Conservative Model Validation Approach for Reliable Analysis. ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account
Comparison of two novel approaches to model fibre reinforced concrete
Radtke, F.K.F.; Simone, A.; Sluys, L.J.
2009-01-01
We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity
Energy Technology Data Exchange (ETDEWEB)
Lovley, Derek R.
2012-10-31
This project successfully accomplished its goal of coupling genome-scale metabolic models with hydrological and geochemical models to predict the activity of subsurface microorganisms during uranium bioremediation. Furthermore, it was demonstrated how this modeling approach can be used to develop new strategies to optimize bioremediation. The approach of coupling genome-scale metabolic models with reactive transport modeling is now well enough established that it has been adopted by other DOE investigators studying uranium bioremediation. The basic principles developed during our studies will be applicable to much broader investigations of microbial activities, not only for other types of bioremediation but also for microbial metabolism in a diversity of environments. This approach has the potential to make an important contribution to predicting the impact of environmental perturbations on the cycling of carbon and other biogeochemical cycles.
Validated Models for Radiation Response and Signal Generation in Scintillators: Final Report
Energy Technology Data Exchange (ETDEWEB)
Kerisit, Sebastien N.; Gao, Fei; Xie, YuLong; Campbell, Luke W.; Van Ginhoven, Renee M.; Wang, Zhiguo; Prange, Micah P.; Wu, Dangxin
2014-12-01
This Final Report presents work carried out at Pacific Northwest National Laboratory (PNNL) under the project entitled “Validated Models for Radiation Response and Signal Generation in Scintillators” (Project number: PL10-Scin-theor-PD2Jf) and led by Drs. Fei Gao and Sebastien N. Kerisit. This project was divided into four tasks: 1) electronic response functions (ab initio data model); 2) electron-hole yield, variance, and spatial distribution; 3) ab initio calculations of information carrier properties; and 4) transport of electron-hole pairs and scintillation efficiency. Detailed information on the results obtained in each of the four tasks is provided in this Final Report. Furthermore, published peer-reviewed articles based on the work carried out under this project are included in the Appendix. This work was supported by the National Nuclear Security Administration, Office of Nuclear Nonproliferation Research and Development (DNN R&D/NA-22), of the U.S. Department of Energy (DOE).
Modeling thrombin generation: plasma composition based approach.
Brummel-Ziedins, Kathleen E; Everse, Stephen J; Mann, Kenneth G; Orfeo, Thomas
2014-01-01
Thrombin has multiple functions in blood coagulation and its regulation is central to maintaining the balance between hemorrhage and thrombosis. Empirical and computational methods that capture thrombin generation can provide advancements to current clinical screening of the hemostatic balance at the level of the individual. In any individual, procoagulant and anticoagulant factor levels together act to generate a unique coagulation phenotype (net balance) that is reflective of the sum of its developmental, environmental, genetic, nutritional and pharmacological influences. Defining such thrombin phenotypes may provide a means to track disease progression pre-crisis. In this review we briefly describe thrombin function, methods for assessing thrombin dynamics as a phenotypic marker, computationally derived thrombin phenotypes versus determined clinical phenotypes, the boundaries of normal range thrombin generation using plasma composition based approaches and the feasibility of these approaches for predicting risk.
Final Technical Report -- Bridging the PSI Knowledge Gap: A Multiscale Approach
Energy Technology Data Exchange (ETDEWEB)
Whyte, Dennis [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
2014-12-12
The Plasma Surface Interactions (PSI) Science Center formed by the grant undertook a multidisciplinary set of studies on the complex interface between the plasma and solid states of matter. The strategy of the center was to combine and integrate the experimental, diagnostic and modeling toolkits from multiple institutions towards specific PSI problems. In this way the Center could tackle integrated science issues which were not addressable by single institutions, as well as evolve the underlying science of the PSI in a more general way than just for fusion applications. The overall strategy proved very successful. The research results and highlights of the MIT portion of the Center are primarily described. A particular highlight is the study of tungsten nano-tendril growth in the presence of helium plasmas. The Center research provided valuable new insights into the mechanisms controlling the nano-tendrils by developing coupled modeling and in situ diagnostic methods which could be directly compared. For example, the role of helium accumulation in tungsten surface distortion was followed with uniquely developed in situ helium concentration diagnostics. These depth-profiled, time-resolved helium concentration measurements continue to challenge the numerical models of nano-tendrils. The Center team also combined its expertise on tungsten nano-tendrils to demonstrate for the first time the growth of the tendrils in a fusion environment on the Alcator C-Mod fusion experiment, thus having significant impact on the broader fusion research effort. A new form of isolated nano-tendril “columns” was identified which is now being used to understand the underlying mechanisms controlling tendril growth. The Center also advanced PSI science on a broader front with a particular emphasis on developing a wide range of in situ PSI diagnostic tools at the DIONISOS facility at MIT. For example the strong suppression of sputtering by the certain combination of light
A simple approach to modeling ductile failure.
Energy Technology Data Exchange (ETDEWEB)
Wellman, Gerald William
2012-06-01
Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.
ICFD modeling of final settlers - developing consistent and effective simulation model structures
DEFF Research Database (Denmark)
Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham
CFD concept. The case of secondary settling tanks (SSTs) is used to demonstrate the methodological steps using the validated CFD model with the hindered-transient-compression settling velocity model by (10). Factor screening and Latin hypercube sampling (LHS) are used to degenerate a 2-D axi-symmetrical CFD...... of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessment of modelling the onset of transient and compression settling. Results suggest that the iCFD model developed...... the feed-layer. These scenarios were inspired by literature (1; 2; 9). As for the D0--iCFD model, values of SSRE obtained are below 1 with an average SSRE=0.206. The simulation model thus can predict the solids distribution inside the tank with satisfactory accuracy. Averaged relative errors of 8.1 %, 3...
Analysis of Final Energy Demand by Sector in Malaysia using MAED Model
International Nuclear Information System (INIS)
Kumar, M.; Muhammed Zulfakar Mohd Zolkaffly; Alawiah Musa
2011-01-01
Energy supply security is important in ensuring a long-term supply to fulfill the growing energy demand. This paper presents the use of the IAEA energy planning tool, the Model for Analysis of Energy Demand (MAED), to analyze, simulate and compare final energy demand in five different sectors in Malaysia under a set of assumptions, bounds and restrictions; the outcome can be used for planning of future energy supply. (author)
1993-1994 Final technical report for establishing the SECME Model in the District of Columbia
Energy Technology Data Exchange (ETDEWEB)
Vickers, R.G.
1995-12-31
This is the final report for a program to establish the SECME Model in the District of Columbia. This program has seen the development of a partnership between the District of Columbia Public Schools, the University of the District of Columbia, the Department of Energy, and SECME. This partnership has demonstrated positive achievement in mathematics and science education and learning in students within the District of Columbia.
1993-1994 Final technical report for establishing the SECME Model in the District of Columbia
International Nuclear Information System (INIS)
Vickers, R.G.
1995-01-01
This is the final report for a program to establish the SECME Model in the District of Columbia. This program has seen the development of a partnership between the District of Columbia Public Schools, the University of the District of Columbia, the Department of Energy, and SECME. This partnership has demonstrated positive achievement in mathematics and science education and learning in students within the District of Columbia
A new approach for modeling composite materials
Alcaraz de la Osa, R.; Moreno, F.; Saiz, J. M.
2013-03-01
The increasing use of composite materials is due to the ability they offer to tailor materials for special purposes, with applications evolving day by day. This is why predicting the properties of these systems from their constituents, or phases, has become so important. However, assigning macroscopic optical properties to these materials from the bulk properties of their constituents is not a straightforward task. In this research, we present a spectral analysis of typical three-dimensional random composite nanostructures using an Extension of the Discrete Dipole Approximation (E-DDA code), comparing different approaches and emphasizing the influences of the optical properties of the constituents and their concentration. In particular, we propose a new approach that preserves the individual nature of the constituents while introducing a variation in the optical properties of each discrete element that is driven by the surrounding medium. The results obtained with this new approach compare more favorably with experiment than those of previous approaches. We have also applied it to a non-conventional material composed of a metamaterial embedded in a dielectric matrix. Our version of the Discrete Dipole Approximation code, the E-DDA code, has been formulated specifically to tackle this kind of problem, including materials with either magnetic or tensor properties.
Energy Technology Data Exchange (ETDEWEB)
McFarlane, Karis J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-10-28
Boreal peatlands contain large amounts of old carbon, protected by anaerobic and cold conditions. Climate change could result in favorable conditions for the microbial decomposition and release of this old peat carbon as CO2 or CH4 back into the atmosphere. Our goal was to test the potential for this positive biological feedback to climate change at SPRUCE (Spruce and Peatland Response Under Climatic and Environmental Change), a manipulation experiment funded by DOE and occurring in a forested bog in Minnesota. Taking advantage of LLNL’s capabilities and expertise in chemical and isotopic signatures we found that carbon emissions from peat were dominated by recently fixed photosynthates, even after short-term experimental warming. We also found that subsurface hydrologic transport was surprisingly rapid at SPRUCE, supplying microbes with young dissolved organic carbon (DOC). We also identified which microbes oxidize CH4 to CO2 at SPRUCE and found that the most active of these also fix N2 (which means they can utilize atmospheric N, making it accessible for other microbes and plants). These results reflect important interactions between hydrology, carbon cycling, and nitrogen cycling present at the bog and relevant to interpreting experimental results and modeling the wetland response to experimental treatments. LLNL involvement at SPRUCE continues through collaborations and a small contract with ORNL, the lead lab for the SPRUCE experiment.
An Integrated Approach to Modeling Evacuation Behavior
2011-02-01
A spate of recent hurricanes and other natural disasters have drawn a lot of attention to the evacuation decision of individuals. Here we focus on evacuation models that incorporate two economic phenomena that seem to be increasingly important in exp...
Infectious disease modeling a hybrid system approach
Liu, Xinzhi
2017-01-01
This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.
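The switched-parameter idea can be illustrated with a minimal term-time forced SIR model in which the contact rate jumps between a school-term value and a vacation value; the parameter values and switching rule below are invented for illustration and are not taken from the book.

```python
import numpy as np

def switched_sir(beta_term, beta_vac, gamma=0.25, days=365, dt=0.1,
                 term_days=200, I0=0.01):
    """Forward-Euler integration of an SIR model with a switched contact rate.

    The population is normalized (S + I + R = 1); beta switches between the
    school-term and vacation values once per year. All numbers are toy values.
    """
    S, I = 1.0 - I0, I0
    traj = []
    for step in range(int(days / dt)):
        day = step * dt
        beta = beta_term if (day % 365.0) < term_days else beta_vac
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        S, I = S + dt * dS, I + dt * dI
        traj.append(I)
    return np.array(traj)

# Term-time R0 = beta/gamma = 2.4 > 1: an epidemic breaks out and then recedes.
epidemic = switched_sir(beta_term=0.6, beta_vac=0.2)
# beta < gamma in both regimes: the infection is eradicated monotonically.
eradicated = switched_sir(beta_term=0.2, beta_vac=0.15)
```

In the first run the infected fraction rises well above its initial value before declining; in the second it only decays, matching the eradication-versus-persistence dichotomy the book analyzes.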
On Combining Language Models: Oracle Approach
National Research Council Canada - National Science Library
Hacioglu, Kadri; Ward, Wayne
2001-01-01
In this paper, we address the problem of combining several language models (LMs). We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle...
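A toy sketch of the two interpolation schemes and a per-word oracle, using made-up unigram probabilities over a three-word vocabulary (the LMs and corpora in the paper are of course far larger):

```python
# Two toy unigram "language models" over the same vocabulary (invented numbers).
p_lm1 = {"the": 0.5, "cat": 0.3, "sat": 0.2}
p_lm2 = {"the": 0.4, "cat": 0.1, "sat": 0.5}

def linear_interp(w, lam=0.5):
    """Linear interpolation: a convex combination of the two probabilities."""
    return lam * p_lm1[w] + (1.0 - lam) * p_lm2[w]

def loglinear_interp(w, lam=0.5):
    """Log-linear interpolation: combine in the log domain, then renormalize."""
    score = lambda v: p_lm1[v] ** lam * p_lm2[v] ** (1.0 - lam)
    return score(w) / sum(score(v) for v in p_lm1)

def oracle(w):
    """Per-word oracle: pick whichever single LM assigns the higher probability."""
    return max(p_lm1[w], p_lm2[w])
```

The oracle is not a proper (normalized) distribution, which is exactly why it upper-bounds what any fixed interpolation can achieve on a given word.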
Advanced language modeling approaches, case study: Expert search
Hiemstra, Djoerd
2008-01-01
This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the
Approaches to modelling hydrology and ecosystem interactions
Silberstein, Richard P.
2014-05-01
As the pressures of industry, agriculture and mining on groundwater resources increase, there is a burgeoning unmet need to be able to capture these multiple, direct and indirect stresses in a formal framework that will enable better assessment of impact scenarios. While there are many catchment hydrological models, and there are some models that represent ecological states and change (e.g. FLAMES, Liedloff and Cook, 2007), these have not been linked in any deterministic or substantive way. Without such coupled eco-hydrological models, quantitative assessments of impacts from water use intensification on water-dependent ecosystems under changing climate are difficult, if not impossible. The concept would include facility for direct and indirect water-related stresses that may develop around mining and well operations; climate stresses, such as rainfall and temperature; biological stresses, such as diseases and invasive species; and competition, such as encroachment from other competing land uses. An indirect water impact could be, for example, that a change in groundwater conditions has an impact on the stream flow regime, and hence on aquatic ecosystems. This paper reviews previous work examining models combining ecology and hydrology with a view to developing a conceptual framework linking a biophysically defensible model that combines ecosystem function with hydrology. The objective is to develop a model capable of representing the cumulative impact of multiple stresses on water resources and associated ecosystem function.
Constructing a justice model based on Sen's capability approach
Yüksel, Sevgi
2008-01-01
The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative
An ontology-based approach for modelling architectural styles
Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm
2007-01-01
The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...
Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches
Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia
2017-10-01
With the increasing level of complexity and automation in the area of automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to cope with tyre temperatures and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches with respect to further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, focus is also placed on computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements from a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.
A final state interaction model for K and eta decay into three pions
International Nuclear Information System (INIS)
Angus, A.G.
1973-07-01
The Khuri-Treiman model is adapted in a relativistic formalism, with the electromagnetic mass differences of the pions in the final state taken into account, to produce new predictions for the relative decay rates and the slope parameters of the four reactions K→3π and the two reactions eta→3π. The pion-pion interaction is investigated in terms of the N/D method as well as the normal pure pole approximations for the N functions. The Khuri-Treiman equations are solved for the best solutions from both the pure pole and the mixed pole and cut models. (author)
Mathematical modelling a case studies approach
Illner, Reinhard; McCollum, Samantha; Roode, Thea van
2004-01-01
Mathematical modelling is a subject without boundaries. It is the means by which mathematics becomes useful to virtually any subject. Moreover, modelling has been and continues to be a driving force for the development of mathematics itself. This book explains the process of modelling real situations to obtain mathematical problems that can be analyzed, thus solving the original problem. The presentation is in the form of case studies, which are developed much as they would be in true applications. In many cases, an initial model is created, then modified along the way. Some cases are familiar, such as the evaluation of an annuity. Others are unique, such as the fascinating situation in which an engineer, armed only with a slide rule, had 24 hours to compute whether a valve would hold when a temporary rock plug was removed from a water tunnel. Each chapter ends with a set of exercises and some suggestions for class projects. Some projects are extensive, as with the explorations of the predator-prey model; oth...
The simplified models approach to constraining supersymmetry
Energy Technology Data Exchange (ETDEWEB)
Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)
2015-07-01
The interpretation of the experimental results at the LHC is model dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Models Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful to overcome this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and confront the theoretical predictions against the experimental results. The achievements of both tools, as well as their strengths and limitations, are presented here for the NSUSY scenario.
Lightweight approach to model traceability in a CASE tool
Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita
2017-07-01
The term "model-driven" is not at all a new buzzword within the ranks of the system development community. Nevertheless, the ever-increasing complexity of model-driven approaches keeps fueling discussions around this paradigm and pushes researchers to develop new and more effective approaches to system development. With this increasing complexity, model traceability, and model management as a whole, become indispensable activities of the model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.
New approaches for modeling type Ia supernovae
International Nuclear Information System (INIS)
Zingale, Michael; Almgren, Ann S.; Bell, John B.; Day, Marcus S.; Rendleman, Charles A.; Woosley, Stan
2007-01-01
Type Ia supernovae (SNe Ia) are the largest thermonuclear explosions in the Universe. Their light output can be seen across great distances and has led to the discovery that the expansion rate of the Universe is accelerating. Despite the significance of SNe Ia, there are still a large number of uncertainties in current theoretical models. Computational modeling offers the promise to help answer the outstanding questions. However, even with today's supercomputers, such calculations are extremely challenging because of the wide range of length and timescales. In this paper, we discuss several new algorithms for simulations of SNe Ia and demonstrate some of their successes
Chancroid transmission dynamics: a mathematical modeling approach.
Bhunu, C P; Mushayabasa, S
2011-12-01
Mathematical models have long been used to better understand disease transmission dynamics and how to effectively control them. Here, a chancroid infection model is presented and analyzed. The disease-free equilibrium is shown to be globally asymptotically stable when the reproduction number is less than unity. High levels of treatment are shown to reduce the reproduction number suggesting that treatment has the potential to control chancroid infections in any given community. This result is also supported by numerical simulations which show a decline in chancroid cases whenever the reproduction number is less than unity.
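The threshold behavior described above can be sketched with a generic SIS-type model in which treatment adds to the recovery rate, lowering the reproduction number R0 = beta/(gamma + tau); the equations and parameter values here are an illustrative stand-in, not the paper's actual system.

```python
# Illustrative SIS-type dynamics: I' = beta*(1-I)*I - (gamma + tau)*I,
# where beta is the transmission rate, gamma the natural recovery rate,
# and tau the treatment rate. All values are invented for illustration.
def reproduction_number(beta, gamma, tau):
    return beta / (gamma + tau)

def simulate(beta, gamma, tau, I0=0.05, days=2000, dt=0.05):
    """Forward-Euler integration of the infected fraction I."""
    I = I0
    for _ in range(int(days / dt)):
        I += dt * (beta * (1.0 - I) * I - (gamma + tau) * I)
    return I

# Low treatment: R0 = 2 > 1, infection settles at the endemic level 1 - 1/R0.
low_treatment = simulate(beta=0.3, gamma=0.1, tau=0.05)
# High treatment: R0 < 1, infection dies out.
high_treatment = simulate(beta=0.3, gamma=0.1, tau=0.35)
```

Raising the treatment rate pushes R0 below unity and the simulated prevalence collapses to zero, mirroring the paper's conclusion that treatment can control the infection.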
Fuzzy Investment Portfolio Selection Models Based on Interval Analysis Approach
Directory of Open Access Journals (Sweden)
Haifeng Guo
2012-01-01
This paper employs fuzzy set theory to address the unintuitive nature of the Markowitz mean-variance (MV) portfolio model and extend it to a fuzzy investment portfolio selection model. Our model establishes intervals for expected returns and risk preference, which can take into account investors' different investment appetites and thus can find the optimal resolution for each interval. In the empirical part, we test this model on Chinese stock investments and find that it can fulfill the objectives of different kinds of investors. Finally, investment risk can be decreased when we add an investment limit to each stock in the portfolio, which indicates our model is useful in practice.
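A drastically simplified sketch of the interval idea: each expected return is an interval rather than a point estimate, and an appetite parameter selects a point inside it. The asset names, numbers, and argmax rule are invented for illustration, and the variance side of the MV model is ignored here.

```python
# Hypothetical expected-return intervals [low, high] for three assets.
returns = {"A": (0.02, 0.08), "B": (0.01, 0.12), "C": (0.03, 0.05)}

def point_return(interval, alpha):
    """Pick a point in the interval: alpha=0 is pessimistic, alpha=1 optimistic."""
    lo, hi = interval
    return lo + alpha * (hi - lo)

def best_asset(alpha):
    """Asset with the highest point return for a given investor appetite."""
    return max(returns, key=lambda k: point_return(returns[k], alpha))
```

A pessimistic investor (alpha = 0) ranks by interval lower bounds and picks the safest asset, while an optimistic one (alpha = 1) ranks by upper bounds and picks the asset with the highest potential, showing how one interval model serves different investor objectives.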
A kinetic approach to magnetospheric modeling
International Nuclear Information System (INIS)
Whipple, E.C. Jr.
1979-01-01
The earth's magnetosphere is caused by the interaction between the flowing solar wind and the earth's magnetic dipole, with the distorted magnetic field in the outer parts of the magnetosphere due to the current systems resulting from this interaction. It is surprising that even the conceptually simple problem of the collisionless interaction of a flowing plasma with a dipole magnetic field has not been solved. A kinetic approach is essential if one is to take into account the dispersion of particles with different energies and pitch angles and the fact that particles on different trajectories have different histories and may come from different sources. Solving the interaction problem involves finding the various types of possible trajectories, populating them with particles appropriately, and then treating the electric and magnetic fields self-consistently with the resulting particle densities and currents. This approach is illustrated by formulating a procedure for solving the collisionless interaction problem on open field lines in the case of a slowly flowing magnetized plasma interacting with a magnetic dipole
A kinetic approach to magnetospheric modeling
Whipple, E. C., Jr.
1979-01-01
The earth's magnetosphere is caused by the interaction between the flowing solar wind and the earth's magnetic dipole, with the distorted magnetic field in the outer parts of the magnetosphere due to the current systems resulting from this interaction. It is surprising that even the conceptually simple problem of the collisionless interaction of a flowing plasma with a dipole magnetic field has not been solved. A kinetic approach is essential if one is to take into account the dispersion of particles with different energies and pitch angles and the fact that particles on different trajectories have different histories and may come from different sources. Solving the interaction problem involves finding the various types of possible trajectories, populating them with particles appropriately, and then treating the electric and magnetic fields self-consistently with the resulting particle densities and currents. This approach is illustrated by formulating a procedure for solving the collisionless interaction problem on open field lines in the case of a slowly flowing magnetized plasma interacting with a magnetic dipole.
Fractal approach to computer-analytical modelling of tree crown
International Nuclear Information System (INIS)
Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.
1993-09-01
In this paper we discuss three approaches to the modeling of tree crown development: experimental (i.e. regression-based), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these approaches is that a tree can be regarded as a fractal object, a collection of self-similar parts that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model against experimental data. The different stages of the above-mentioned approaches are described. Experimental data for spruce, a description of the computer modeling system and a variant of the computer model are presented. (author). 9 refs, 4 figs
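The fractal measure the authors use as the link between crown-growth and light models is, at its simplest, a box-counting dimension. A minimal sketch, with an invented point set and box sizes rather than the spruce data from the paper:

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the fractal (box-counting) dimension of a 2-D point set.

    Counts the number of occupied boxes N(s) at each box size s and fits
    log N(s) ~ -D log s; the slope magnitude D is the dimension estimate.
    """
    counts = []
    for s in box_sizes:
        # Map each point to its box index and count distinct occupied boxes
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        counts.append(len(boxes))
    # Linear fit in log-log space; the slope is negative, so negate it
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Sanity check: points along a straight segment should give dimension ~1
t = np.linspace(0.0, 1.0, 2000)
line = np.column_stack([t, t])
d = box_counting_dimension(line, box_sizes=[0.5, 0.25, 0.125, 0.0625, 0.03125])
```

A crown silhouette digitized to points would be fed in the same way; its estimate falling strictly between 1 and 2 is what justifies treating the crown as intermediate between a surface and a volume.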
An interdisciplinary approach to modeling tritium transfer into the environment
International Nuclear Information System (INIS)
Galeriu, D; Melintescu, A.
2005-01-01
More robust radiological assessment models are required to support the safety case for the nuclear industry. Heavy water reactors, fuel processing plants, radiopharmaceutical factories, and the future fusion reactor all have large tritium loads. While of low probability, large accidental tritium releases cannot be ignored. For Romania, which uses CANDU 600 reactors for nuclear energy, tritium is the radionuclide of national concern. Tritium enters directly into the life cycle in many physicochemical forms. Tritiated water (HTO) is leaked from most nuclear installations but is partially converted into organically bound tritium (OBT) through plant and animal metabolic processes. Hydrogen and carbon are elemental components of major nutrients and animal tissues, and their radioisotopes must be modeled differently from those of most other radionuclides. Tritium transfer from atmosphere to plant and its conversion into organically bound tritium strongly depend on plant characteristics, season, and weather conditions. In order to cope with this large variability and avoid expensive calibration experiments, we developed a model using knowledge of plant physiology, agrometeorology, soil science, hydrology, and climatology. The transfer of tritiated water to plants was modeled with a resistance approach that includes sparse canopies. The canopy resistance was modeled using the Jarvis-Calvet approach, modified to make direct use of the canopy photosynthesis rate. The crop growth model WOFOST supplied the photosynthesis rate both for the canopy resistance and for the formation of organically bound tritium. Using this formalism, the tritium transfer parameters were directly linked to processes and parameters known from the agricultural sciences. Without any calibration, model predictions for tritium in wheat agreed with experimental data to within about a factor of two. The model was also tested on rice and soybean and can be applied to various plants and environmental conditions. For sparse canopy, the model used coupled
A novel approach to modeling atmospheric convection
Goodman, A.
2016-12-01
The inadequate representation of clouds continues to be a large source of uncertainty in the projections from global climate models (GCMs). With continuous advances in computational power, however, the ability for GCMs to explicitly resolve cumulus convection will soon be realized. For this purpose, Jung and Arakawa (2008) proposed the Vector Vorticity Model (VVM), in which vorticity is the predicted variable instead of momentum. This has the advantage of eliminating the pressure gradient force within the framework of an anelastic system. However, the VVM was designed for use on a planar quadrilateral grid, making it unsuitable for implementation in global models discretized on the sphere. Here we have proposed a modification to the VVM where instead the curl of the horizontal vorticity is the primary predicted variable. This allows us to maintain the benefits of the original VVM while working within the constraints of a non-quadrilateral mesh. We found that our proposed model produced results from a warm bubble simulation that were consistent with the VVM. Further improvements that can be made to the VVM are also discussed.
INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...
Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optimums. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
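The two ingredients such an IBM needs at each time step, cumulative thermal exposure and a refuge-use decision, can be sketched minimally. The 20 °C threshold and the decision rule below are illustrative placeholders, not HexSim parameters from the model described above:

```python
def cumulative_exposure(temps, threshold=20.0):
    """Degree-hours above a thermal threshold: a simple proxy for the
    cumulative thermal exposure an upstream migrant accrues (hourly
    temperatures assumed)."""
    return sum(max(t - threshold, 0.0) for t in temps)

def choose_habitat(mainstem_temp, refuge_temp, threshold=20.0):
    """Minimal behavioral rule: hold in a cold-water refuge whenever the
    mainstem exceeds the thermal threshold and the refuge is cooler."""
    if mainstem_temp > threshold and refuge_temp < mainstem_temp:
        return "refuge"
    return "mainstem"

exposure = cumulative_exposure([18.0, 21.0, 23.0, 19.0])   # 1 + 3 degree-hours
site = choose_habitat(mainstem_temp=22.5, refuge_temp=16.0)
```

In a full spatial IBM this rule sits inside the decision tree, and population-scale refuge use emerges from many individuals applying it against their local simulated temperatures.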
A new approach to model mixed hydrates
Czech Academy of Sciences Publication Activity Database
Hielscher, S.; Vinš, Václav; Jäger, A.; Hrubý, Jan; Breitkopf, C.; Span, R.
2018-01-01
Roč. 459, March (2018), s. 170-185 ISSN 0378-3812 R&D Projects: GA ČR(CZ) GA17-08218S Institutional support: RVO:61388998 Keywords : gas hydrate * mixture * modeling Subject RIV: BJ - Thermodynamics Impact factor: 2.473, year: 2016 https://www.sciencedirect.com/science/article/pii/S0378381217304983
Energy and development : A modelling approach
van Ruijven, B.J.|info:eu-repo/dai/nl/304834521
2008-01-01
Rapid economic growth of developing countries like India and China implies that these countries are becoming important actors in the global energy system. Examples of this impact are the present-day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore
Modeling Approaches for Describing Microbial Population Heterogeneity
DEFF Research Database (Denmark)
Lencastre Fernandes, Rita
environmental conditions. Three cases are presented and discussed in this thesis. Common to all is the use of S. cerevisiae as model organism, and the use of cell size and cell cycle position as single-cell descriptors. The first case focuses on the experimental and mathematical description of a yeast...
Modelling of air quality for Winter and Summer episodes in Switzerland. Final report
Energy Technology Data Exchange (ETDEWEB)
Andreani-Aksoyoglu, S.; Keller, J.; Barmpadimos, L.; Oderbolz, D.; Tinguely, M.; Prevot, A. [Paul Scherrer Institute (PSI), Laboratory of Atmospheric Chemistry, Villigen (Switzerland); Alfarra, R. [University of Manchester, Manchester (United Kingdom); Sandradewi, J. [Jisca Sandradewi, Hoexter (Germany)
2009-05-15
This final report, issued by the General Energy Research Department and its Laboratory of Atmospheric Chemistry at the Paul Scherrer Institute (PSI), presents the results obtained from the modelling of regional air quality for three episodes: January-February 2006, June 2006 and January 2007. The calculations focus on particulate matter concentrations, as well as on ozone levels in summer. The model results were compared with the aerosol data collected by an Aerosol Mass Spectrometer (AMS), which was operated during all three episodes, as well as with data from other air quality monitoring programmes. The air quality model used in this study is described, and the results obtained for various types of locations - rural, city, high-altitude and near-motorway - are presented and discussed.
How is the Current Nano/Microscopic Knowledge Implemented in Model Approaches?
International Nuclear Information System (INIS)
Rotenberg, Benjamin
2013-01-01
The recent development of experimental techniques has opened new opportunities and challenges for the modelling and simulation of clay materials on various scales. In this communication, several aspects of the interaction between experimental and modelling approaches are presented and discussed. What levels of modelling are available, depending on the target property, and what experimental input do they require? How can experimental information be used to validate models? What knowledge can modelling on different scales bring about the physical properties of clays? Finally, what can we do when experimental information is not available? Models implement the current nano/microscopic knowledge by using experimental input, taking advantage of multi-scale approaches, and providing data or insights complementary to experiments. Future work will greatly benefit from recent experimental developments, in particular 3D imaging on intermediate scales, and should also address other properties, e.g. mechanical or thermal properties. (authors)
Integration models: multicultural and liberal approaches confronted
Janicki, Wojciech
2012-01-01
European societies have been shaped by their Christian past, an upsurge in international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition and to the reinforced concept of integrating immigrants into host societies. This paper discusses the rise of identity politics in the context of both individual rights and the integration of European societies.
Modelling thermal plume impacts - Kalpakkam approach
International Nuclear Information System (INIS)
Rao, T.S.; Anup Kumar, B.; Narasimhan, S.V.
2002-01-01
A good understanding of temperature patterns in the receiving waters is essential to quantify the heat dissipated by thermal plumes originating from coastal power plants. The seasonal temperature profiles of the Kalpakkam coast near the Madras Atomic Power Station (MAPS) thermal outfall site were determined and analysed. It is observed that the seasonal current reversal in the near-shore zone is one of the major mechanisms for the transport of effluents away from the point of mixing. To further refine our understanding of the mixing and dilution processes, it is necessary to numerically simulate the coastal ocean processes by parameterising the key factors concerned. In this paper, we outline the experimental approach to achieve this objective. (author)
Continued development of modeling tools and theory for rf heating. Final report
International Nuclear Information System (INIS)
Smithe, D.N.
1998-01-01
The work performed during the grant was reported long before this date, specifically in: (1) the grant's annual performance report for 1991, MRC/WDC-R-277; (2) the published AIP Conference Proceedings No. 244, Radio Frequency Power in Plasmas, Charleston, SC, 1991, "Evaluation of Wave Dispersion, Mode-Conversion, and Damping for ECRH with Exact Relativistic Corrections," by D.N. Smithe and P.L. Colestock; and (3) an unpublished paper entitled "Temperature Anisotropy and Rotation Upgrades to the ICRF Modules in SNAP and TRANSP", presented at the 1992 ICRF Modeling and Theory Workshop at the Princeton Plasma Physics Laboratory. This final report contains copies of item (1). The specifics of the grant's final months' activities, which to the author's recollection have never been reported to the DOE, are as follows. The original grant, which was to terminate August 15, 1991, was extended without additional funds to October 31, 1992. The primary reason for the extension was to permit attendance at the 1992 ICRF Modeling and Theory Workshop at the Princeton Plasma Physics Laboratory (PPPL), which was finally held August 17-18, 1992, after having been rescheduled several times during the summer of 1992. The body of this report contains copies of the 1991 annual report, which gives a detailed discussion of the work accomplished.
Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach
Energy Technology Data Exchange (ETDEWEB)
Liao, James C. [Univ. of California, Los Angeles, CA (United States)
2016-10-01
Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.
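The core idea, many kinetic parameter sets anchored to one shared reference state, can be sketched for a single Michaelis-Menten reaction. The reference flux, steady-state concentration, and parameter ranges below are invented for illustration, not values from the project:

```python
import numpy as np

rng = np.random.default_rng(0)

V_REF = 1.0   # reference steady-state flux the ensemble must reproduce (assumed)
S_SS = 2.0    # assumed steady-state substrate concentration

def flux(vmax, km, s=S_SS):
    """Michaelis-Menten rate law for one reaction."""
    return vmax * s / (km + s)

# Draw candidate kinetic parameters and keep only those consistent with the
# anchored steady-state flux (within a tolerance): the essence of ensemble
# modeling is retaining every parameter set that matches the reference state,
# then screening the ensemble with further data.
candidates = [(rng.uniform(0.5, 5.0), rng.uniform(0.1, 10.0)) for _ in range(5000)]
ensemble = [(vmax, km) for vmax, km in candidates
            if abs(flux(vmax, km) - V_REF) < 0.05]
```

Each retained (vmax, km) pair is an equally plausible kinetic model of the same observed state; perturbation experiments then discriminate among them.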
Nuclear security assessment with Markov model approach
International Nuclear Information System (INIS)
Suzuki, Mitsutoshi; Terao, Norichika
2013-01-01
Nuclear security risk assessment with a Markov model based on random events is performed to explore an evaluation methodology for physical protection in nuclear facilities. Because security incidents are initiated by malicious and intentional acts, expert judgment and Bayesian updating are used to estimate scenario and initiation likelihoods, and it is assumed that a Markov model derived from a stochastic process can be applied to the incident sequence. Both an unauthorized intrusion as a Design Basis Threat (DBT) and a stand-off attack as beyond-DBT are assumed for hypothetical facilities, and the performance of physical protection as well as the mitigation and minimization of consequences are investigated to develop the assessment methodology in a semi-quantitative manner. It is shown that cooperation between the facility operator and the security authority is important to respond to beyond-DBT incidents. (author)
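A minimal version of such an incident-sequence model is an absorbing Markov chain. The states and transition probabilities below are illustrative placeholders, not assessed security values:

```python
import numpy as np

# States: 0 = intrusion initiated, 1 = detected & neutralized (absorbing),
# 2 = target reached (absorbing). Probabilities are made up for the sketch.
P = np.array([
    [0.0, 0.8, 0.2],   # from "intrusion": neutralized w.p. 0.8, success w.p. 0.2
    [0.0, 1.0, 0.0],   # absorbing
    [0.0, 0.0, 1.0],   # absorbing
])

def absorption_distribution(P, start=0, steps=50):
    """Propagate the state distribution; with absorbing states it converges
    to the absorption probabilities, i.e. the chance each outcome occurs."""
    v = np.zeros(P.shape[0])
    v[start] = 1.0
    for _ in range(steps):
        v = v @ P
    return v

dist = absorption_distribution(P)   # dist[2] is the incident-success probability
```

Expert judgment enters through the entries of P, and Bayesian updating would revise them as incident evidence accumulates.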
An Approach for Modeling Supplier Resilience
2016-04-30
interests include resilience modeling of supply chains, reliability engineering, and metaheuristic optimization. [m.hosseini@ou.edu] Abstract...be availability, or the extent to which the products produced by the supply chain are available for use (measured as a ratio of uptime to total time...of the use of the product). Available systems are important in many industries, particularly in the Department of Defense, where weapons systems
Tumour resistance to cisplatin: a modelling approach
International Nuclear Information System (INIS)
Marcu, L; Bezak, E; Olver, I; Doorn, T van
2005-01-01
Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatments. When modelling cisplatin as a single agent, the properties of cisplatin only have to be taken into account, reducing the number of assumptions that are considered in the generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which eventually can overcome tumour regression leading to treatment failure
Tumour resistance to cisplatin: a modelling approach
Energy Technology Data Exchange (ETDEWEB)
Marcu, L [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia); Bezak, E [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia); Olver, I [Faculty of Medicine, University of Adelaide, North Terrace, SA 5000 (Australia); Doorn, T van [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia)
2005-01-07
Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatments. When modelling cisplatin as a single agent, the properties of cisplatin only have to be taken into account, reducing the number of assumptions that are considered in the generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which eventually can overcome tumour regression leading to treatment failure.
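The cisplatin resistance factor defined in the abstract is a simple ratio of surviving cells. A toy geometric-kill survival model makes it concrete; the per-cycle kill fractions are invented, and this stands in for the paper's Monte Carlo tumour, not reproduces it:

```python
def surviving_cells(n0, kill_per_cycle, cycles):
    """Cells surviving repeated cisplatin cycles, assuming a fixed fraction
    killed per cycle (illustrative simplification)."""
    return n0 * (1.0 - kill_per_cycle) ** cycles

def cisplatin_resistance_factor(n_resistant, n_sensitive):
    """CRF as defined in the abstract: surviving resistant cells divided by
    surviving sensitive cells after the same treatment time."""
    return n_resistant / n_sensitive

n0 = 1e6
res = surviving_cells(n0, kill_per_cycle=0.30, cycles=6)   # resistant: lower kill
sen = surviving_cells(n0, kill_per_cycle=0.60, cycles=6)   # sensitive: higher kill
crf = cisplatin_resistance_factor(res, sen)
```

Even a modest per-cycle kill difference compounds over cycles, which is the cumulative character of drug resistance the paper highlights.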
ISM Approach to Model Offshore Outsourcing Risks
Directory of Open Access Journals (Sweden)
Sunand Kumar
2014-07-01
Full Text Available In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But, as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks that affect the performance of offshore outsourcing. To this effect, the authors have identified various risks through an extensive review of the literature. From this information, an integrated model of the risks affecting offshore outsourcing is developed using interpretive structural modelling (ISM), and the structural relationships between these risks are modeled. Further, MICMAC analysis is carried out to analyze the driving power and dependence of the risks, which shall help managers to identify and classify important criteria and to reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
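The MICMAC step reduces to row and column sums of the ISM reachability matrix. The binary matrix below is a made-up example over the five risks named in the abstract, constructed so the driver set matches the paper's conclusion; it is not the authors' data:

```python
import numpy as np

# Reachability matrix: 1 means the row risk influences the column risk
# (directly or transitively). Values are illustrative only.
risks = ["political", "cultural", "regulatory", "opportunistic", "structural"]
R = np.array([
    [1, 1, 1, 1, 1],
    [0, 1, 0, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 1],
])

driving_power = R.sum(axis=1)   # how many risks each risk reaches
dependence = R.sum(axis=0)      # how many risks reach each risk

# MICMAC: strong drivers combine high driving power with low dependence
drivers = [r for r, dp, de in zip(risks, driving_power, dependence)
           if dp >= 3 and de <= 2]
```

With these illustrative entries, political and cultural risks come out as the drivers, mirroring the qualitative result reported above.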
Remote sensing approach to structural modelling
International Nuclear Information System (INIS)
El Ghawaby, M.A.
1989-01-01
Remote sensing techniques are quite dependable tools in investigating geologic problems, especially those related to structural aspects. Landsat imagery provides discrimination between rock units, detection of large-scale structures such as folds and faults, as well as small-scale fabric elements such as foliation and banding. In order to fulfil the aim of geologic application of remote sensing, some essential survey maps should be prepared from the images prior to the structural interpretation: land-use, landform, drainage-pattern, lithological-unit and structural-lineament maps. Afterwards, field verification should lead to the interpretation of a comprehensive structural model of the study area to apply to the target problem. To deduce such a model, the interpreter may go through two ways of analysis: the direct and the indirect methods. The direct one is needed in cases where the resources or targets are controlled by an obvious or exposed structural element or pattern. The indirect way is necessary for areas where the target is governed by a complicated structural pattern. Some case histories of structural modelling methods applied successfully to the exploration of radioactive minerals, iron deposits and groundwater aquifers in Egypt are presented. Progress in imagery, enhancement and the integration of remote sensing data with other geophysical and geochemical data allow a geologic interpretation to be carried out that is better than that achieved with either of the individual data sets. 9 refs
A moving approach for the Vector Hysteron Model
Energy Technology Data Exchange (ETDEWEB)
Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)
2016-04-01
A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. Using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, a real improvement with respect to the previous approach.
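The elementary building block a hysteron model assembles (in the VHM, into a vector distribution) is the rectangular relay operator. A scalar sketch with invented switching thresholds:

```python
def hysteron(h_sequence, alpha=-0.5, beta=0.5, state=-1):
    """Rectangular hysteresis operator (relay): switches up when the applied
    field exceeds beta, down when it falls below alpha, and otherwise
    remembers its previous state. Thresholds here are illustrative."""
    out = []
    for h in h_sequence:
        if h >= beta:
            state = 1
        elif h <= alpha:
            state = -1
        out.append(state)
    return out

# A field swing up and back down leaves different outputs at h = 0:
# that asymmetry is the hysteresis memory.
outputs = hysteron([0.0, 1.0, 0.0, -1.0, 0.0])
```

A full model weights many such relays over a distribution of (alpha, beta) pairs; the moving approach described above makes that distribution depend on the magnetization state itself.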
Meson dynamics beyond the quark model: a study of final state interactions
International Nuclear Information System (INIS)
Au, K.L.; Pennington, M.R.; Morgan, D.
1986-09-01
A scalar glueball is predicted in the 1 GeV mass region. The present analysis is concerned with experimental evidence for such a state. Recent high-statistics results on central dimeson production at the ISR enable the authors to perform an extensive new coupled-channel analysis of I = 0 S-wave ππ and KK-bar final states. This unambiguously reveals three resonances in the 1 GeV region - S1(991), S2(988) and ε(900) - where the naive quark model expects just two. These new features are discussed, including how they may be confirmed experimentally and their present interpretation. The S1(991) is a plausible candidate for the scalar glueball. Other production reactions leading to the same final states (heavy flavour decays and γγ reactions) are examined. (author)
Model-Assisted Estimation of Tropical Forest Biomass Change: A Comparison of Approaches
Directory of Open Access Journals (Sweden)
Nikolai Knapp
2018-05-01
Monitoring of changes in forest biomass requires accurate transfer functions between remote-sensing-derived changes in canopy height (ΔH) and the actual changes in aboveground biomass (ΔAGB). Different approaches can be used to accomplish this task: direct approaches link ΔH directly to ΔAGB, while indirect approaches are based on deriving AGB stock estimates for two points in time and calculating the difference. In some studies, direct approaches led to more accurate estimations; in others, indirect approaches did. It is unknown how each approach performs under different conditions and over the full range of possible changes. Here, we used a forest model (FORMIND) to generate a large dataset (>28,000 ha) of natural and disturbed forest stands over time. Remote sensing of forest height was simulated on these stands to derive canopy height models for each time step. Three approaches for estimating ΔAGB were compared: (i) the direct approach; (ii) the indirect approach; and (iii) an enhanced direct approach (dir+tex), using ΔH in combination with canopy texture. Total prediction accuracies of the three approaches, measured as root mean squared errors (RMSE), were RMSE_direct = 18.7 t ha−1, RMSE_indirect = 12.6 t ha−1 and RMSE_dir+tex = 12.4 t ha−1. Further analyses revealed height-dependent biases in the ΔAGB estimates of the direct approach, which did not occur with the other approaches. Finally, the three approaches were applied to radar-derived (TanDEM-X) canopy height changes on Barro Colorado Island (Panama). The study demonstrates the potential of forest modeling for improving the interpretation of changes observed in remote sensing data and for comparing different methodologies.
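The two estimation routes compared in the study differ only in where the transfer function is applied. A sketch with invented allometric coefficients, not the FORMIND-calibrated functions from the article:

```python
def agb_from_height(h, a=0.5, b=1.6):
    """Stock model: AGB (t/ha) as a power law of canopy height (m).
    Coefficients a and b are illustrative placeholders."""
    return a * h ** b

def delta_agb_indirect(h1, h2):
    """Indirect approach: difference of two stock estimates."""
    return agb_from_height(h2) - agb_from_height(h1)

def delta_agb_direct(h1, h2, c=1.2):
    """Direct approach: height change mapped straight to biomass change
    through a single linear transfer coefficient (also a placeholder)."""
    return c * (h2 - h1)

d_ind = delta_agb_indirect(20.0, 25.0)
d_dir = delta_agb_direct(20.0, 25.0)
```

Because the stock model is nonlinear in height while the direct transfer is not, the two estimates diverge as a function of the starting height, which is exactly the height-dependent bias the analysis above uncovers for the direct approach.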
A model independent search for new physics in final states containing leptons at the D0 experiment
International Nuclear Information System (INIS)
Piper, Joel Michael
2009-01-01
The standard model is known to be the low energy limit of a more general theory. Several consequences of the standard model point to a strong probability of new physics becoming experimentally visible in high energy collisions of a few TeV, resulting in high momentum objects. The specific signatures of these collisions are topics of much debate. Rather than choosing a specific signature, this analysis broadly searches the data, preferring breadth over sensitivity. In searching for new physics, several different approaches are used. These include the comparison of data with standard model background expectation in overall number of events, comparisons of distributions of many kinematic variables, and finally comparisons on the tails of distributions that sum the momenta of the objects in an event. With 1.07 fb−1 at the D0 experiment, we find no evidence of physics beyond the standard model. Several discrepancies from the standard model were found, but none of these provide a compelling case for new physics.
A model independent search for new physics in final states containing leptons at the D0 experiment
Energy Technology Data Exchange (ETDEWEB)
Piper, Joel Michael [Michigan State Univ., East Lansing, MI (United States)
2009-01-01
The standard model is known to be the low energy limit of a more general theory. Several consequences of the standard model point to a strong probability of new physics becoming experimentally visible in high energy collisions of a few TeV, resulting in high momentum objects. The specific signatures of these collisions are topics of much debate. Rather than choosing a specific signature, this analysis broadly searches the data, preferring breadth over sensitivity. In searching for new physics, several different approaches are used. These include the comparison of data with standard model background expectation in overall number of events, comparisons of distributions of many kinematic variables, and finally comparisons on the tails of distributions that sum the momenta of the objects in an event. With 1.07 fb^{-1} at the D0 experiment, we find no evidence of physics beyond the standard model. Several discrepancies from the standard model were found, but none of these provide a compelling case for new physics.
Engineering approach to modeling of piled systems
International Nuclear Information System (INIS)
Coombs, R.F.; Silva, M.A.G. da
1980-01-01
Available methods of analysis of piled systems subjected to dynamic excitation invade areas of mathematics usually beyond the reach of a practising engineer. A simple technique that avoids that conflict is proposed, at least for preliminary studies, and its application, compared with other methods, is shown to be satisfactory. A corrective factor for parameters currently used to represent transmitting boundaries is derived for a finite strip that models an infinite layer. The influence of internal damping on the dynamic stiffness of the layer and on radiation damping is analysed. (Author) [pt
Jackiw-Pi model: A superfield approach
Gupta, Saurabh
2014-12-01
We derive the off-shell nilpotent and absolutely anticommuting Becchi-Rouet-Stora-Tyutin (BRST) as well as anti-BRST transformations s_(a)b corresponding to the Yang-Mills gauge transformations of the 3D Jackiw-Pi model by exploiting the "augmented" superfield formalism. We also show that the Curci-Ferrari restriction, which is a hallmark of non-Abelian 1-form gauge theories, emerges naturally within this formalism and plays an instrumental role in the proof of the absolute anticommutativity of s_(a)b.
Applied Regression Modeling A Business Approach
Pardoe, Iain
2012-01-01
An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital for modeling the relationship between a response variable and one or more predictor variables, as well as for predicting a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a
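The book's core tool, fitting a response to predictors by ordinary least squares and predicting from the fit, takes only a few lines. The data below are synthetic, not an example from the text:

```python
import numpy as np

# Synthetic one-predictor data with an exact linear relationship y = 3 + 2x
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 3.0 + 2.0 * x

# Design matrix with an intercept column; beta = [intercept, slope]
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Prediction of a response value given a new predictor value
y_new = beta[0] + beta[1] * 6.0
```

With real, noisy business data the fitted coefficients would only approximate the underlying relationship, and the book's material on uncertainty quantifies how closely.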
International Nuclear Information System (INIS)
Vermeul, Vince R; Cole, Charles R; Bergeron, Marcel P; Thorne, Paul D; Wurstner, Signe K
2001-01-01
The baseline three-dimensional transient inverse model for the estimation of site-wide scale flow parameters, including their uncertainties, using data on the transient behavior of the unconfined aquifer system over the entire historical period of Hanford operations, has been modified to account for the effects of basalt intercommunication between the Hanford unconfined aquifer and the underlying upper basalt confined aquifer. Both the baseline and alternative conceptual models (ACM-1) considered only the groundwater flow component and corresponding observational data in the 3-D transient inverse calibration efforts. Subsequent efforts will examine both groundwater flow and transport. Comparisons of goodness of fit measures and parameter estimation results for the ACM-1 transient inverse calibrated model with those from previous site-wide groundwater modeling efforts illustrate that the new 3-D transient inverse model approach will strengthen the technical defensibility of the final model(s) and provide the ability to incorporate uncertainty in predictions related to both conceptual model and parameter uncertainty.
Modeling Saturn's Inner Plasmasphere: Cassini's Closest Approach
Moore, L.; Mendillo, M.
2005-05-01
Ion densities from the three-dimensional Saturn-Thermosphere-Ionosphere-Model (STIM, Moore et al., 2004) are extended above the plasma exobase using the formalism of Pierrard and Lemaire (1996, 1998), which evaluates the balance of gravitational, centrifugal and electric forces on the plasma. The parameter space of low-energy ionospheric contributions to Saturn's plasmasphere is explored by comparing results that span the observed extremes of plasma temperature, 650 K to 1700 K, and a range of velocity distributions, Lorentzian (or Kappa) to Maxwellian. Calculations are made for plasma densities along the path of the Cassini spacecraft's orbital insertion on 1 July 2004. These calculations neglect any ring or satellite sources of plasma, which are most likely minor contributors at 1.3 Saturn radii. Modeled densities will be compared with Cassini measurements as they become available. Moore, L.E., M. Mendillo, I.C.F. Mueller-Wodarg, and D.L. Murr, Icarus, 172, 503-520, 2004. Pierrard, V. and J. Lemaire, J. Geophys. Res., 101, 7923-7934, 1996. Pierrard, V. and J. Lemaire, J. Geophys. Res., 103, 4117, 1998.
Keyring models: An approach to steerability
Miller, Carl A.; Colbeck, Roger; Shi, Yaoyun
2018-02-01
If a measurement is made on one half of a bipartite system, then, conditioned on the outcome, the other half has a new reduced state. If these reduced states defy classical explanation—that is, if shared randomness cannot produce these reduced states for all possible measurements—the bipartite state is said to be steerable. Determining which states are steerable is a challenging problem even for low dimensions. In the case of two-qubit systems, a criterion is known for T-states (that is, those with maximally mixed marginals) under projective measurements. In the current work, we introduce the concept of keyring models—a special class of local hidden state models. When the measurements made correspond to real projectors, these allow us to study steerability beyond T-states. Using keyring models, we completely solve the steering problem for real projective measurements when the state arises from mixing a pure two-qubit state with uniform noise. We also give a partial solution in the case when the uniform noise is replaced by independent depolarizing channels.
Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches
Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem
2014-01-01
Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…
A Behavioral Approach to Linear Exact Modeling
Antoulas, A. C.; Willems, J. C.
1993-01-01
The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both
A modular approach to numerical human body modeling
Forbes, P.A.; Griotto, G.; Rooij, L. van
2007-01-01
The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body
Fast algorithms for transport models. Final report, June 1, 1993--May 31, 1994
International Nuclear Information System (INIS)
Manteuffel, T.
1994-12-01
The focus of this project is the study of multigrid and multilevel algorithms for the numerical solution of Boltzmann models of the transport of neutral and charged particles. In previous work a fast multigrid algorithm was developed for the numerical solution of the Boltzmann model of neutral particle transport in slab geometry assuming isotropic scattering. The new algorithm is extremely fast in the thick diffusion limit; the multigrid V-cycle convergence factor approaches zero as the mean free path between collisions approaches zero, independent of the mesh. Also, a fast multilevel method was developed for the numerical solution of the Boltzmann model of charged particle transport in the thick Fokker-Planck limit for slab geometry. Parallel implementations were developed for both algorithms.
Energy Technology Data Exchange (ETDEWEB)
Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy
2008-09-01
Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model
A Networks Approach to Modeling Enzymatic Reactions.
Imhof, P
2016-01-01
Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved, and the complex chemical and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps, a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape, consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes. © 2016 Elsevier Inc. All rights reserved.
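The pathway-ranking idea in the abstract above can be sketched with a toy transition network. The intermediates (I1-I3) and barrier values below are invented for illustration, and Dijkstra's algorithm stands in for whatever graph search a given study actually uses to rank routes:

```python
import heapq

# Toy transition network of enzymatic reaction intermediates. Edge weights are
# hypothetical activation barriers (kcal/mol) for individual chemical steps;
# Dijkstra's algorithm finds the pathway with the lowest summed barrier.
def lowest_barrier_path(edges, start, goal):
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical intermediates I1-I3 connecting reactant (R) and product (P).
edges = [("R", "I1", 12.0), ("R", "I2", 18.0), ("I1", "I3", 9.0),
         ("I2", "P", 6.0), ("I1", "P", 22.0), ("I3", "P", 5.0)]
cost, path = lowest_barrier_path(edges, "R", "P")
print(path, cost)   # → ['R', 'I2', 'P'] 24.0
```

Alternative routes (e.g. R-I1-I3-P at 26.0) can be enumerated similarly, which mirrors the abstract's point that competing mechanisms are read directly off the graph.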
Carbonate rock depositional models: A microfacies approach
Energy Technology Data Exchange (ETDEWEB)
Carozzi, A.V.
1988-01-01
Carbonate rocks contain more than 50% by weight carbonate minerals such as calcite, dolomite, and siderite. Understanding how these rocks form can lead to more efficient methods of petroleum exploration. Microfacies analysis techniques can be used as a method of predicting models of sedimentation for carbonate rocks. Microfacies in carbonate rocks can be seen clearly only in thin sections under a microscope. Thin-section analysis of carbonate rocks is a tool that can be used to understand depositional environments, the diagenetic evolution of carbonate rocks, and the formation of porosity and permeability in carbonate rocks. Microfacies analysis techniques are applied to understanding the origin and formation of carbonate ramps, carbonate platforms, and carbonate slopes and basins. This book will be of interest to students and professionals concerned with the disciplines of sedimentary petrology, sedimentology, petroleum geology, and paleontology.
Final-year diagnostic radiography students' perception of role models within the profession.
Conway, Alinya; Lewis, Sarah; Robinson, John
2008-01-01
Within a clinical education setting, the value of role models and prescribed mentors can be seen as an important influence in shaping the student's future as a diagnostic radiographer. A study was undertaken to create a new understanding of how diagnostic radiography students perceive role models and professional behavior in the workforce. The study aimed to determine the impact of clinical education in determining modeling expectations, role model identification and attributes, and the integration of academic education and "hands-on" clinical practice in preparing diagnostic radiography students to enter the workplace. Thirteen final-year (third-year) diagnostic radiography students completed an hour-long interview regarding their experiences and perceptions of role models while on clinical placement. The key concepts that emerged illustrated that students gravitate toward radiographers who enjoy sharing practical experiences with students and are good communicators. Unique to diagnostic radiography, students made distinctions about the presence of role models in private versus public service delivery. This study gives insight to clinical educators in diagnostic radiography and wider allied health into how students perceive role models, interact with preceptors, and combine real-life experiences with formal learning.
International Nuclear Information System (INIS)
Marseguerra, Marzio; Zio, Enrico; Bianchi, Mauro
2004-01-01
In this paper, we propose a general fuzzy inference approach to building a model of hazardous road transport that relates given traffic, weather, and vehicle-speed conditions to the accident rate. The development of the model is discussed in detail, and its validation is provided with reference to literature data regarding the transport of spent nuclear fuel to its final confinement repository
Risk prediction model: Statistical and artificial neural network approach
Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim
2017-04-01
Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, and the development and validation process of, risk prediction models. A qualitative review of the aims, methods, and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. Currently, however, only limited published literature discusses which approach is more accurate for risk prediction model development.
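As a minimal illustration of the "statistical approach" contrasted in the abstract above, here is a toy logistic-regression risk model fitted by stochastic gradient descent; the risk factors and labels are entirely synthetic, not from any of the reviewed studies:

```python
import math

# A logistic-regression risk model, the classic statistical baseline against
# which neural-network risk models are usually compared. Fitted here by
# per-sample gradient descent on synthetic, linearly separable data.
def fit_logistic(X, y, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted risk probability
            g = p - yi                        # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0

# Synthetic data: two "risk factors"; label 1 whenever their sum exceeds 1.
X = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.3], [0.8, 0.9], [0.2, 0.1], [0.7, 0.6]]
y = [0, 1, 0, 1, 0, 1]
w, b = fit_logistic(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
print(acc)   # → 1.0
```

A neural-network variant would replace the single linear score `z` with one or more hidden layers; the validation step (held-out accuracy, calibration) is the same for both approaches.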
International Nuclear Information System (INIS)
Wanne, Toivo; Johansson, Erik; Potyondy, David
2004-02-01
SKB is planning to perform a large-scale pillar stability experiment called APSE (Aespoe Pillar Stability Experiment) at Aespoe HRL. The study is focused on understanding and controlling progressive rock failure in hard crystalline rock and damage caused by high stresses. The elastic thermo-mechanical modeling was carried out in three dimensions, because of the complex test geometry and in-situ stress tensor, using the finite-difference modeling software FLAC3D. Cracking and damage formation were modeled in the area of interest (the pillar between two large-scale holes) in two dimensions using the Particle Flow Code (PFC), which is based on particle mechanics. FLAC and PFC were coupled to minimize the computer resources and the computing time. According to the modeling, the initial temperature rises from 15 deg C to about 65 deg C in the pillar area during the heating period of 120 days. The rising temperature induces thermal-expansion stresses in the pillar area, and after 120 days of heating the stresses have increased by about 33%, from the excavation-induced maximum stress of 150 MPa to 200 MPa at the end of the heating period. The FLAC3D model identified the regions where the crack initiation stress was exceeded; these extended about two meters down the hole wall and can be considered the areas where damage may occur during the in-situ test. When the other hole is pressurized with a 0.8 MPa confining pressure, about 5 MPa more stress is needed to damage the rock than without confining pressure, which makes the damaged area somewhat smaller. High compressive stresses, in addition to some tensile stresses, might induce acoustic emission (AE) activity in the upper part of the hole from the very beginning of the test; these are thus potential areas where AE activity may be detected. Acoustic emissions will be monitored during the test execution. The 2D coupled PFC-FLAC modeling indicated that
A dual model approach to ground water recovery trench design
International Nuclear Information System (INIS)
Clodfelter, C.L.; Crouch, M.S.
1992-01-01
The design of trenches for contaminated ground water recovery must consider several variables. This paper presents a dual-model approach for effectively recovering contaminated ground water migrating toward a trench by advection. The approach involves an analytical model to determine the vertical influence of the trench and a numerical flow model to determine the capture zone within the trench and the surrounding aquifer. The analytical model is utilized by varying trench dimensions and head values to design a trench which meets the remediation criteria. The numerical flow model is utilized to select the type of backfill and location of sumps within the trench. The dual-model approach can be used to design a recovery trench which effectively captures advective migration of contaminants in the vertical and horizontal planes
Mathematical model of regional economy development based on the final result of labor resources
Zaitseva, Irina; Malafeev, Oleg; Strekopytov, Sergei; Bondarenko, Galina; Lovyannikov, Denis
2018-04-01
This article presents a mathematical model of regional economy development based on the final result of labor resources. The solution of a regional development-planning problem is considered for a long-term planning horizon, taking into account the beginning and the end of the planned period. The challenge is to find the distribution of investments among the main and additional branches of the regional economy that provides a simultaneous transition of all major sectors of the regional economy from the given initial condition to the predetermined final state.
Final technical report for DE-SC00012633 AToM (Advanced Tokamak Modeling)
Energy Technology Data Exchange (ETDEWEB)
Holland, Christopher [Univ. of California, San Diego, CA (United States); Orlov, Dmitri [Univ. of California, San Diego, CA (United States); Izzo, Valerie [Univ. of California, San Diego, CA (United States)
2018-02-05
This final report for the AToM project documents contributions from University of California, San Diego researchers over the period of 9/1/2014 – 8/31/2017. The primary focus of these efforts was on performing validation studies of core tokamak transport models using the OMFIT framework, including development of OMFIT workflow scripts. Additional work was performed to develop tools for use of the nonlinear magnetohydrodynamics code NIMROD in OMFIT, and its use in the study of runaway electron dynamics in tokamak disruptions.
Uzikov, Yu N
2001-01-01
Experimental data on the pp → pnπ⁺ reaction measured in an exclusive two-arm experiment at 800 MeV show a narrow peak arising from the strong proton-neutron final-state interaction. It was claimed, within the framework of a certain model, that this peak contained up to a 25% spin-singlet final-state contribution. By comparing the data with those of pp → dπ⁺ in a largely model-independent way, it is here demonstrated that at all the angles measured the whole of the peak could be explained as being due to spin-triplet final states, with the spin-singlet contribution being at most a few percent. Good qualitative agreement with the measured proton analysing power is also found within this approach.
Virtuous organization: A structural equation modeling approach
Directory of Open Access Journals (Sweden)
Majid Zamahani
2013-02-01
For years, the idea of virtue was unfavorable among researchers: virtues were traditionally considered culture-specific and relativistic, and were associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have only recently been taken seriously by organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resources, structure and processes, care for community, and the virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees of Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and 211 valid responses were received. Our results reveal that all five variables have positive and significant impacts on the virtuous organization. Among the five variables, organizational culture has the largest direct impact (0.80) and human resources have the largest total impact (0.844) on the virtuous organization.
A systemic approach for modeling soil functions
Vogel, Hans-Jörg; Bartke, Stephan; Daedlow, Katrin; Helming, Katharina; Kögel-Knabner, Ingrid; Lang, Birgit; Rabot, Eva; Russell, David; Stößel, Bastian; Weller, Ulrich; Wiesmeier, Martin; Wollschläger, Ute
2018-03-01
The central importance of soil for the functioning of terrestrial systems is increasingly recognized. Critically relevant for water quality, climate control, nutrient cycling and biodiversity, soil provides more functions than just the basis for agricultural production. Nowadays, soil is increasingly under pressure as a limited resource for the production of food, energy and raw materials. This has led to an increasing demand for concepts assessing soil functions so that they can be adequately considered in decision-making aimed at sustainable soil management. The various soil science disciplines have progressively developed highly sophisticated methods to explore the multitude of physical, chemical and biological processes in soil. It is not obvious, however, how the steadily improving insight into soil processes may contribute to the evaluation of soil functions. Here, we present a new systemic modeling framework that allows for a consistent coupling between reductionist yet observable indicators for soil functions and detailed process understanding. It is based on the mechanistic relationships between soil functional attributes, each explained by a network of interacting processes as derived from scientific evidence. The non-linear character of these interactions produces stability and resilience of soil with respect to functional characteristics. We anticipate that this new conceptual framework will integrate the various soil science disciplines and help identify important future research questions at the interface between disciplines. It allows the overwhelming complexity of soil systems to be adequately coped with and paves the way for steadily improving our capability to assess soil functions based on scientific understanding.
Modeling of phase equilibria with CPA using the homomorph approach
DEFF Research Database (Denmark)
Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios
2011-01-01
For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...
Modular Modelling and Simulation Approach - Applied to Refrigeration Systems
DEFF Research Database (Denmark)
Sørensen, Kresten Kjær; Stoustrup, Jakob
2008-01-01
This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...
A Constructive Neural-Network Approach to Modeling Psychological Development
Shultz, Thomas R.
2012-01-01
This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…
The Intersystem Model of Psychotherapy: An Integrated Systems Treatment Approach
Weeks, Gerald R.; Cross, Chad L.
2004-01-01
This article introduces the intersystem model of psychotherapy and discusses its utility as a truly integrative and comprehensive approach. The foundation of this conceptually complex approach comes from dialectic metatheory; hence, its derivation requires an understanding of both foundational and integrational constructs. The article provides a…
Bystander Approaches: Empowering Students to Model Ethical Sexual Behavior
Lynch, Annette; Fleming, Wm. Michael
2005-01-01
Sexual violence on college campuses is well documented. Prevention education has emerged as an alternative to victim- and perpetrator-oriented approaches used in the past. One sexual violence prevention education approach focuses on educating and empowering the bystander to become a point of ethical intervention. In this model, bystanders to…
Modelling road accidents: An approach using structural time series
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
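The local-level building block of such a structural time series model can be sketched with a scalar Kalman filter. The toy monthly accident counts and variance values below are illustrative, not taken from the paper, and the full model there also includes a seasonal component and estimates the variances before computing AIC:

```python
import math

# Local-level structural model: y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.
# A scalar Kalman filter gives the filtered level and the log-likelihood,
# from which AIC can be computed for model comparison.
def local_level_filter(y, var_eps, var_eta):
    a, p = y[0], 1e7             # diffuse initialization of level and variance
    loglik = 0.0
    filtered = []
    for t, obs in enumerate(y):
        f = p + var_eps           # one-step prediction variance
        v = obs - a               # innovation (prediction error)
        k = p / f                 # Kalman gain
        a = a + k * v             # updated level estimate
        p = p * (1.0 - k) + var_eta
        filtered.append(a)
        if t > 0:                 # drop the diffuse first step from the likelihood
            loglik += -0.5 * (math.log(2.0 * math.pi * f) + v * v / f)
    return filtered, loglik

y = [120, 118, 125, 130, 128, 135, 140, 138]   # toy monthly accident counts
filtered, loglik = local_level_filter(y, var_eps=25.0, var_eta=4.0)
aic = 2 * 2 - 2 * loglik        # AIC with 2 estimated variance parameters
print(round(filtered[-1], 1), round(aic, 1))
```

Competing specifications (e.g. adding a seasonal component) would each yield their own log-likelihood, and the paper's selection rule, smallest AIC and prediction error variance, picks among them.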
Numerical approaches to expansion process modeling
Directory of Open Access Journals (Sweden)
G. V. Alekseev
2017-01-01
Forage production is currently undergoing a period of intensive renovation and introduction of advanced technologies and equipment. Methods such as barley toasting, grain extrusion, steaming and flattening of grain, fluidized-bed explosion, infrared treatment of cereals and legumes followed by flattening, and one- or two-stage granulation of purified whole grain without humidification in matrix presses are used ever more often. These methods require special apparatuses, machines, and auxiliary equipment designed on the basis of mathematical models compiled by different methods. In roasting, simulating the heat fields arising in the working chamber provides conditions under which a portion of the starch decomposes to monosaccharides, making the grain sweetish, although protein denaturation somewhat reduces protein digestibility and amino acid availability. Grain is roasted mainly for young animals, to accustom them to eating feed at an early age, to stimulate the secretory activity of digestion, and to promote development of the masticatory muscles. In addition, the high temperature is detrimental to bacterial contamination and various fungi, which largely prevents diseases of the gastrointestinal tract. This method has found wide application directly on farms. Legumes such as peas, soy, lupine, and lentils are also used in animal feeding. These feeds are preliminarily ground and then boiled for about an hour or steamed for 30-40 minutes in the feed mill. Such processing inactivates the anti-nutrients that reduce the feeds' effectiveness. After processing, legumes are used as protein supplements in an amount of 25-30% of the total nutritional value of the diet. Only grain of good quality should be boiled or steamed; poor-quality grain that has been stored for a long time and damaged by pathogenic microflora is subject to
Modelling and Generating Ajax Applications : A Model-Driven Approach
Gharavi, V.; Mesbah, A.; Van Deursen, A.
2008-01-01
Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008. AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction
Windfield and trajectory models for tornado-propelled objects. Final report
International Nuclear Information System (INIS)
Redmann, G.H.; Radbill, J.R.; Marte, J.E.; Dergarabedian, P.; Fendell, F.E.
1983-03-01
This is the final report of a three-phased research project to develop a six-degree-of-freedom mathematical model to predict the trajectories of tornado-propelled objects. The model is based on the meteorological, aerodynamic, and dynamic processes that govern the trajectories of missiles in a tornadic windfield. The aerodynamic coefficients for the postulated missiles were obtained from full-scale wind tunnel tests on a 12-inch pipe and car and from drop tests. Rocket sled tests were run whereby the 12-inch pipe and car were injected into a worst-case tornado windfield in order to verify the trajectory model. To simplify and facilitate the use of the trajectory model for design applications without having to run the computer program, this report gives the trajectory data for NRC-postulated missiles in tables based on given variables of initial conditions of injection and tornado windfield. Complete descriptions of the tornado windfield and trajectory models are presented. The trajectory model computer program is also included for those desiring to perform trajectory or sensitivity analyses beyond those included in the report or for those wishing to examine other missiles and use other variables
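A greatly simplified sketch of the kind of computation such a trajectory model performs: a 3-DOF point mass with drag in a uniform horizontal wind, integrated by explicit Euler. The report's actual model is 6-DOF, uses wind-tunnel-derived aerodynamic coefficients, and embeds the missile in a tornado windfield; every number below is made up for illustration:

```python
import math

# Illustrative point-mass trajectory with aerodynamic drag. Drag accelerates
# the object toward the local wind velocity; gravity pulls it down.
RHO, G = 1.225, 9.81           # air density (kg/m^3), gravity (m/s^2)

def trajectory(mass, cd_area, wind, dt=0.01, t_end=5.0):
    """Integrate until the object reaches the ground or t_end elapses."""
    x, z = 0.0, 30.0            # injected 30 m above ground (hypothetical)
    vx, vz = 0.0, 0.0
    t = 0.0
    while z > 0.0 and t < t_end:
        relx, relz = wind[0] - vx, wind[1] - vz   # air velocity relative to object
        speed = math.hypot(relx, relz)
        fd = 0.5 * RHO * cd_area * speed          # drag force magnitude / rel speed
        ax = fd * relx / mass
        az = fd * relz / mass - G
        vx += ax * dt
        vz += az * dt
        x += vx * dt
        z += vz * dt
        t += dt
    return x, max(z, 0.0), t

# Made-up missile: 100 kg, 0.5 m^2 drag area, in a 60 m/s horizontal wind.
rng, height, t = trajectory(mass=100.0, cd_area=0.5, wind=(60.0, 0.0))
print(rng, height, t)
```

The report's tabulated trajectories replace this loop with six coupled equations of motion (translation plus rotation) and a rotating tornado windfield evaluated at each position.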
Understanding Gulf War Illness: An Integrative Modeling Approach
2017-10-01
using a novel mathematical model. The computational biology approach will enable the consortium to quickly identify targets of dysfunction and find… Milestones: develop computer/mathematical paradigms for evaluation of treatment strategies (months 12-30, 50% complete); develop pilot clinical trials on the basis of animal studies (months 24-36, 60% complete) … with the goal of testing chemical treatments. The immune and autonomic biomarkers will be tested using a computational modeling approach allowing for a
The Role of Participatory Modeling in Landscape Approaches to Reconcile Conservation and Development
Directory of Open Access Journals (Sweden)
Marieke Sandker
2010-06-01
Conservation organizations are increasingly turning to landscape approaches to achieve a balance between conservation and development goals. We use six case studies in Africa and Asia to explore the role of participatory modeling with stakeholders as one of the steps towards implementing a landscape approach. The modeling was enthusiastically embraced by some stakeholders and led to impact in some cases. Different stakeholders valued the modeling exercise differently. Noteworthy was the difference between those stakeholders connected to the policy process and scientists; the presence of the former in the modeling activities is key to achieving policy impacts, and the latter were most critical of participatory modeling. Valued aspects of the modeling included stimulating cross-sector strategic thinking, and helping participants to confront the real drivers of change and to recognize trade-offs. The modeling was generally considered to be successful in building shared understanding of issues. This understanding was gained mainly in the discussions held in the process of building the model rather than in the model outputs. The model itself reflects but a few of the main elements of the usually rich discussions that preceded its finalization. Problems emerged when models became too complex. Key lessons for participatory modeling are the need for good facilitation in order to maintain a balance between "models as stories" and technical modeling, and the importance of inviting the appropriate stakeholders to achieve impact.
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
Data Analysis A Model Comparison Approach, Second Edition
Judd, Charles M; Ryan, Carey S
2008-01-01
This completely rewritten classic text features many new examples, insights and topics, including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques.
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows switching smoothly or abruptly between different polynomial regression models. The model parameters are estimated by the maximum likelihood method, performed by a dedicated Expectation-Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed, comparing it with two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
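The core idea, polynomial regimes weighted by a smooth logistic process over time, can be sketched as follows. This is a simplified illustration, not the authors' implementation: the EM/IRLS estimation is replaced by a grid search over the switch location, and all names and values are hypothetical.

```python
import numpy as np

# Toy series: two linear regimes with an abrupt switch at t = 0.5, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
true_switch = 0.5
y = np.where(t < true_switch, 1.0 + 2.0 * t, 3.0 - 1.0 * t) + rng.normal(0, 0.05, t.size)

def logistic_weights(t, center, steepness=50.0):
    """Smooth probability of being in regime 2 as a function of time."""
    w2 = 1.0 / (1.0 + np.exp(-steepness * (t - center)))
    return 1.0 - w2, w2

# Grid search over the switch location (a crude stand-in for the IRLS update of
# the hidden-process parameters); weighted polynomial fits play the role of the M step.
best = None
for center in np.linspace(0.1, 0.9, 81):
    w1, w2 = logistic_weights(t, center)
    p1 = np.polyfit(t, y, 1, w=np.sqrt(w1))   # regime-1 polynomial, weighted LS
    p2 = np.polyfit(t, y, 1, w=np.sqrt(w2))   # regime-2 polynomial, weighted LS
    resid = y - (w1 * np.polyval(p1, t) + w2 * np.polyval(p2, t))
    sse = np.sum(resid ** 2)
    if best is None or sse < best[0]:
        best = (sse, center, p1, p2)

sse, center, p1, p2 = best
print(f"estimated switch time: {center:.2f}")
```

The logistic steepness controls whether the transition between regimes looks smooth or abrupt, which is the flexibility the abstract highlights.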
EPA announced the availability of the final report, An Approach to Using Toxicogenomic Data in U.S. EPA Human Health Risk Assessments: A Dibutyl Phthalate Case Study. This report outlines an approach to evaluate genomic data for use in risk assessment and a case study to ...
Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato.
Tran, Dinh T; Hertog, Maarten L A T M; Tran, Thi L H; Quyen, Nguyen T; Van de Poel, Bram; Mata, Clara I; Nicolaï, Bart M
2017-01-01
In this study, the aim is to develop a population model-based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. "Savior") was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of physiological maturation. This model was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model-based approach to the optimization of harvest date and harvest frequency with regard to the economic value of the crop is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages, meeting the stringent retail demands for homogeneous high-quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model-based harvest optimization approach can be easily transferred to other fruit and vegetable crops, improving the homogeneity of postharvest product streams.
A novel approach to modeling and diagnosing the cardiovascular system
Energy Technology Data Exchange (ETDEWEB)
Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)
1995-07-01
A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
Råde, Anders
2014-01-01
This study concerns different final thesis models in the research on teacher education in Europe and their orientation towards the academy and the teaching profession. In scientific journals, 33 articles support the occurrence of three models: the portfolio model, with a mainly teaching-professional orientation; the thesis model, with a mainly…
Tests of the Standard Model with multi-boson final states at the ATLAS Detector
Gonella, Giulia; The ATLAS collaboration
2018-01-01
Measurements of the cross sections of the production of pairs of electroweak gauge bosons at the LHC constitute stringent tests of the electroweak sector of the Standard Model and provide a model-independent means to search for new physics at the TeV scale. The ATLAS collaboration has performed detailed measurements of integrated and differential cross sections of the production of heavy di-boson pairs in fully-leptonic and semi-leptonic final states at a centre-of-mass energy of 13 TeV. The results are compared to predictions and provide constraints on new physics by setting limits on anomalous triple gauge couplings. Some analyses in this area will be reviewed and their main results summarised.
Comparison of source-term calculations using the AREST and SYVAC-Vault models: [Final report
International Nuclear Information System (INIS)
Apted, M.J.; Engel, D.W.; Garisto, N.C.; LeNeveu, D.M.
1988-07-01
A comparison of the calculated radionuclide release from a waste package in a geologic repository has been performed using the verified SYVAC-Vault Model and the AREST Model. The purpose of this comparison is to further establish the credibility of these codes for predictive performance assessment and to identify improvements that may be required. A reference case for a Canadian conceptual design with spent fuel as the waste form was chosen to make an initial comparison. The results from the two models were in good agreement, including peak release rates, time to reach peak release, and long-term release rates. Differences in results from the two models are attributed to differences in computational approaches. Studies of the effects of sorption, convective flow, distributed containment failure, and precipitation are identified as key areas for further comparisons and are currently in progress. 11 refs., 3 figs., 5 tabs
Synthesis of industrial applications of local approach to fracture models
International Nuclear Information System (INIS)
Eripret, C.
1993-03-01
This report gathers different applications of local approach to fracture models in various industrial configurations, such as nuclear pressure vessel steel, cast duplex stainless steels, or primary circuit welds such as bimetallic welds. Because the models are developed on the basis of microstructural observations, damage mechanism analyses, and the fracture process, the local approach to fracture proves able to solve problems where classical fracture mechanics concepts fail. Therefore, the local approach appears to be a powerful tool, which complements the standard fracture criteria used in the nuclear industry by exhibiting where and why those classical concepts become invalid. (author). 1 tab., 18 figs., 25 refs
Mathematical models for therapeutic approaches to control HIV disease transmission
Roy, Priti Kumar
2015-01-01
The book discusses different therapeutic approaches based on different mathematical models to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence, and also tries to examine the same issue from different angles, from various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models of HIV/AIDS. It also guides research scientists working in the periphery of mathematical modeling, helping them to explore a hypothetical method by examining its consequences in the form of a mathematical model and making some scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...
International Nuclear Information System (INIS)
Spiessl, Sabine; Becker, Dirk-Alexander
2017-06-01
Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation
A probabilistic multi objective CLSC model with Genetic algorithm-ε_Constraint approach
Directory of Open Access Journals (Sweden)
Alireza TaheriMoghadam
2014-05-01
In this paper, an uncertain multi-objective closed-loop supply chain model is developed. The first objective function maximizes the total profit. The second objective function minimizes the use of raw materials; in other words, it maximizes the amount of remanufacturing and recycling. A genetic algorithm is used for optimization, and the epsilon-constraint method is used to find the Pareto-optimal front. Finally, a numerical example is solved with the proposed approach and the performance of the model is evaluated at different problem sizes. The results show that this approach is effective and useful for managerial decisions.
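The epsilon-constraint idea can be sketched on a toy two-objective problem: maximize one objective subject to a lower bound on the other, sweeping the bound to trace the Pareto front. This is illustrative only; random search stands in for the paper's genetic algorithm, and the objective functions are invented.

```python
import random

# Toy conflicting objectives over x in [0, 1]: profit f1(x) = x rises with x,
# recycling f2(x) = 1 - x**2 falls with x, so no single x maximizes both.
def f1(x): return x
def f2(x): return 1.0 - x * x

random.seed(0)
candidates = [random.random() for _ in range(5000)]  # stand-in for a GA population

# epsilon-constraint: maximize f1 subject to f2 >= eps, sweeping eps.
pareto = []
for i in range(10):
    eps = i / 10.0
    feasible = [x for x in candidates if f2(x) >= eps]  # constraint on objective 2
    best = max(feasible, key=f1)                        # optimize objective 1
    pareto.append((eps, f1(best), f2(best)))

for eps, profit, recycling in pareto:
    print(f"eps={eps:.1f}  profit={profit:.3f}  recycling={recycling:.3f}")
```

Because raising eps only shrinks the feasible set, the attained profit is non-increasing along the sweep, which is exactly the trade-off curve a decision maker inspects.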
de Lusignan, Simon; Cashman, Josephine; Poh, Norman; Michalakidis, Georgios; Mason, Aaron; Desombre, Terry; Krause, Paul
2012-01-01
Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but rarely reported in the biomedical literature; and no generic approaches have been published as to how to link heterogeneous health data. Literature review, followed by a consensus process to define how requirements for research using multiple data sources might be modeled. We have developed a requirements analysis: i-ScheDULEs - The first components of the modeling process are indexing and the creation of a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models, using business process modeling notation (BPMN). These requirements and their associated models should become part of research study protocols.
A Model-Driven Approach for Telecommunications Network Services Definition
Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.
The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific for service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure the support for collaborative work and for checking properties on models.
Energy Technology Data Exchange (ETDEWEB)
Balmain, Allan [University of California, San Francisco; Song, Ihn Young [University of California, San Francisco
2013-05-15
The ultimate goal of this project is to identify the combinations of genetic variants that confer an individual's susceptibility to the effects of low dose (0.1 Gy) gamma-radiation, in particular with regard to tumor development. In contrast to the known effects of high dose radiation in cancer induction, the responses to low dose radiation (defined as 0.1 Gy or less) are much less well understood, and have been proposed to involve a protective anti-tumor effect in some in vivo scientific models. These conflicting results confound attempts to develop predictive models of the risk of exposure to low dose radiation, particularly when combined with the strong effects of inherited genetic variants on both radiation effects and cancer susceptibility. We have used a Systems Genetics approach in mice that combines genetic background analysis with responses to low and high dose radiation, in order to develop insights that will allow us to reconcile these disparate observations. Using this comprehensive approach we have analyzed normal tissue gene expression (in this case the skin and thymus), together with the changes that take place in this gene expression architecture a) in response to low or high- dose radiation and b) during tumor development. Additionally, we have demonstrated that using our expression analysis approach in our genetically heterogeneous/defined radiation-induced tumor mouse models can uniquely identify genes and pathways relevant to human T-ALL, and uncover interactions between common genetic variants of genes which may lead to tumor susceptibility.
Modeling of delays in PKPD: classical approaches and a tutorial for delay differential equations.
Koch, Gilbert; Krzyzanski, Wojciech; Pérez-Ruixo, Juan Jose; Schropp, Johannes
2014-08-01
In pharmacokinetics/pharmacodynamics (PKPD) the measured response is often delayed relative to drug administration, individuals in a population have a certain lifespan until they maturate, or the change of biomarkers does not immediately affect the primary endpoint. The classical approach in PKPD is to apply transit compartment models (TCM) based on ordinary differential equations to handle such delays. However, an alternative approach to dealing with delays is delay differential equations (DDEs). DDEs offer additional flexibility and properties, realize more complex dynamics and can be used in combination with TCMs. We introduce several delay-based PKPD models and investigate mathematical properties of general DDE-based models, which serve as subunits in order to build larger PKPD models. Finally, we review current PKPD software with respect to the implementation of DDEs for PKPD analysis.
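The DDE idea can be illustrated with a minimal sketch: a response compartment driven by a delayed drug concentration, dR/dt = k_in·C(t − τ) − k_out·R(t), integrated with a fixed-step Euler scheme. All parameter values and the concentration function are hypothetical, chosen only for illustration, and a production PKPD analysis would use a proper DDE solver.

```python
import numpy as np

# Hypothetical indirect-response model with a delayed drive:
#   dR/dt = k_in * C(t - tau) - k_out * R(t)
k_in, k_out, tau = 1.0, 0.5, 2.0
dt, t_end = 0.01, 20.0
n = int(t_end / dt)

def conc(t):
    """Toy drug concentration: exponential decay after a bolus at t = 0."""
    return np.exp(-0.3 * t) if t >= 0 else 0.0   # zero history before dosing

R = np.zeros(n + 1)
for i in range(n):
    t = i * dt
    # Euler step; the delayed term conc(t - tau) is the DDE ingredient
    R[i + 1] = R[i] + dt * (k_in * conc(t - tau) - k_out * R[i])

print(f"peak response {R.max():.3f} at t = {R.argmax() * dt:.2f}")
```

The response stays at zero until the delayed input arrives at t = τ and peaks later still, the kind of lag that a TCM would approximate with a chain of compartments.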
An approach for activity-based DEVS model specification
DEFF Research Database (Denmark)
Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram
2016-01-01
Creation of DEVS models has been advanced through Model Driven Architecture and its frameworks. The overarching role of the frameworks has been to help develop model specifications in a disciplined fashion. Frameworks can provide intermediary layers between the higher level mathematical models...... and their corresponding software specifications from both structural and behavioral aspects. Unlike structural modeling, developing models to specify behavior of systems is known to be harder and more complex, particularly when operations with non-trivial control schemes are required. In this paper, we propose specifying...... activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...
Modelling diversity in building occupant behaviour: a novel statistical approach
DEFF Research Database (Denmark)
Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm
2016-01-01
We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...
Sensitivity analysis approaches applied to systems biology models.
Zi, Z
2011-11-01
With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
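The local/global distinction can be made concrete on a hypothetical three-parameter model (not from the review): local sensitivities from central finite differences at a nominal point, and a simple global measure from standardized regression coefficients over Monte Carlo samples.

```python
import numpy as np

# Hypothetical model standing in for a systems-biology output:
#   y = x1^2 + 2*x2 + 0.1*x3
def model(x):
    return x[..., 0] ** 2 + 2.0 * x[..., 1] + 0.1 * x[..., 2]

# Local sensitivity: central finite-difference derivatives at a nominal point.
x0 = np.array([1.0, 1.0, 1.0])
h = 1e-6
local = np.array([
    (model(x0 + h * e) - model(x0 - h * e)) / (2 * h)
    for e in np.eye(3)
])
print("local sensitivities:", local)

# Global sensitivity: standardized regression coefficients (SRC) from
# Monte Carlo samples spanning large parameter variations.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 2.0, size=(10_000, 3))
y = model(X)
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)
src = coef[1:] * X.std(axis=0) / y.std()
print("standardized regression coefficients:", src)
```

Both measures flag x3 as unimportant here; for strongly nonlinear models the two rankings can disagree, which is why the review treats the choice of approach as a modelling decision in its own right.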
A qualitative evaluation approach for energy system modelling frameworks
DEFF Research Database (Denmark)
Wiese, Frauke; Hilpert, Simon; Kaldemeyer, Cord
2018-01-01
properties define how useful it is in regard to the existing challenges. For energy system models, evaluation methods exist, but we argue that many decisions upon properties are rather made on the model generator or framework level. Thus, this paper presents a qualitative approach to evaluate frameworks...
Modeling Alaska boreal forests with a controlled trend surface approach
Mo Zhou; Jingjing Liang
2012-01-01
A Controlled Trend Surface approach was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and makes it possible to improve the predictive capability of
Towards modeling future energy infrastructures - the ELECTRA system engineering approach
DEFF Research Database (Denmark)
Uslar, Mathias; Heussen, Kai
2016-01-01
of the IEC 62559 use case template as well as needed changes to cope particularly with the aspects of controller conflicts and Greenfield technology modeling. From the original envisioned use of the standards, we show a possible transfer on how to properly deal with a Greenfield approach when modeling....
A Model-Driven Approach to e-Course Management
Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana
2018-01-01
This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…
Energy Technology Data Exchange (ETDEWEB)
Strout, Michelle [Colorado State Univ., Fort Collins, CO (United States)
2015-08-15
Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable the orthogonal specification of many implementation details such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication-heavy portions of many scientific simulations. The ability to orthogonally manipulate the implementation of such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.
A study of multidimensional modeling approaches for data warehouse
Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani
2016-08-01
Data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for decision making process. However, the development of data warehouse is a difficult and complex process especially in its conceptual design (multidimensional modeling). Thus, there have been various approaches proposed to overcome the difficulty. This study surveys and compares the approaches of multidimensional modeling and highlights the issues, trend and solution proposed to date. The contribution is on the state of the art of the multidimensional modeling design.
Gray-box modelling approach for description of storage tunnel
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Jacob
1999-01-01
The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a promising new methodology for giving an on-line state description of sewer systems...... of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control and implemented...
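The gray-box idea, a simple deterministic balance plus a stochastic residual fitted from on-line data, can be sketched as follows. This is an illustrative toy, not the paper's tunnel model; all rates and noise parameters are invented.

```python
import numpy as np

# Gray-box sketch: tunnel volume follows a deterministic mass balance
#   V[t+1] = V[t] + inflow[t+1] - pump_rate,
# while unmodelled dynamics are captured by an AR(1) stochastic residual.
rng = np.random.default_rng(2)
n = 500
inflow = 5.0 + rng.normal(0, 0.5, n)   # measured inflow series (toy units)
pump_true = 4.0                        # unknown pump capacity to recover

# Synthesize "measured" volumes with autocorrelated model error
resid = np.zeros(n)
for t in range(1, n):
    resid[t] = 0.8 * resid[t - 1] + rng.normal(0, 0.2)
V = np.cumsum(inflow - pump_true) + resid

# Estimate the pump capacity from the deterministic part of the balance
dV = np.diff(V)
pump_est = np.mean(inflow[1:] - dV)
print(f"estimated pump rate: {pump_est:.2f}")
```

In the paper's setting, inspecting the residuals of such a fit across rain events is what reveals a malfunctioning pump: the deterministic balance stops explaining the data.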
Meta-analysis a structural equation modeling approach
Cheung, Mike W-L
2015-01-01
Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo
2013-11-26
... notice to enrollees about the result of any final internal adverse benefit determination, their external... OFFICE OF PERSONNEL MANAGEMENT Submission for Review: Request for External Review (3206-NEW); Model Notice of Final Internal Adverse Benefit Determination and Case Intake Form AGENCY: U.S. Office of...
A New Approach for Magneto-Static Hysteresis Behavioral Modeling
DEFF Research Database (Denmark)
Astorino, Antonio; Swaminathan, Madhavan; Antonini, Giulio
2016-01-01
in this paper is based on simple functions, which do not require calculus to be involved, thus assuring a very good efficiency in the algorithm. In addition, the proposed method enables initial magnetization curves, symmetric loops, minor loops, normal curves, and reversal curves of any order to be reproduced......, as demonstrated through the pertinent results provided in this paper. A model example based on the proposed modeling technique is also introduced and used as inductor core, in order to simulate an LR series circuit. Finally, the model ability to emulate hysteretic inductors is proved by the satisfactory agreement...
A Specific N=2 Supersymmetric Quantum Mechanical Model: Supervariable Approach
Directory of Open Access Journals (Sweden)
Aradhya Shukla
2017-01-01
By exploiting the supersymmetric invariant restrictions on the chiral and antichiral supervariables, we derive the off-shell nilpotent symmetry transformations for a specific (0+1)-dimensional N=2 supersymmetric quantum mechanical model, which is considered on a (1, 2)-dimensional supermanifold (parametrized by a bosonic variable t and a pair of Grassmannian variables (θ, θ¯)). We also provide the geometrical meaning of the symmetry transformations. Finally, we show that this specific N=2 SUSY quantum mechanical model is a model for Hodge theory.
Numerical modeling of axi-symmetrical cold forging process by "Pseudo Inverse Approach"
Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.
2011-05-01
The incremental approach is widely used for forging process modeling; it gives good strain and stress estimation, but it is time consuming. A fast Inverse Approach (IA) has been developed for axi-symmetric cold forging modeling [1-2]. This approach makes maximum use of the knowledge of the final part's shape, and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA has proved very useful for tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimation, because the loading history is neglected. A new approach called the "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al. [3] for sheet forming modeling; it keeps the IA's advantages but gives good stress estimation by taking the loading history into consideration. Our aim in this paper is to adapt the PIA to cold forging modeling. The main developments in the PIA are summarized as follows: a few intermediate configurations are generated for the given tool positions to account for the deformation history; the strain increment is calculated by the inverse method between the previous and current configurations; and an incremental algorithm of plastic integration is used in the PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for cold forging process modeling.
Study of GMSB models with photon final states using the ATLAS detector
Energy Technology Data Exchange (ETDEWEB)
Terwort, Mark
2009-11-30
Models with gauge mediated supersymmetry breaking (GMSB) provide a possible mechanism to mediate supersymmetry breaking to the electroweak scale. In these models the lightest supersymmetric particle is the gravitino, while the next-to-lightest supersymmetric particle is either the lightest neutralino or a slepton. In the former case, final states with large missing transverse energy from the gravitinos, multiple jets, and two hard photons are expected in pp collisions at the LHC. Depending on the lifetime of the neutralino, the photons might not point back to the interaction vertex, which requires dedicated search strategies. Additionally, this feature can be used to measure the neutralino lifetime using either the timing information from the electromagnetic calorimeter or the reconstructed photon direction. Together with the measurements of kinematic endpoints in invariant mass distributions, the lifetime can be used as input for fits of the GMSB model and for the determination of the underlying parameters. The signal selection and the discovery potential for GMSB models with photons in the final state are discussed using simulated data of the ATLAS detector. In addition, the measurement of supersymmetric particle masses and of the neutralino lifetime as well as the results of the global GMSB fits are presented. (orig.)
Learning the Task Management Space of an Aircraft Approach Model
Krall, Joseph; Menzies, Tim; Davies, Misty
2014-01-01
Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.
A novel approach of modeling continuous dark hydrogen fermentation.
Alexandropoulou, Maria; Antonopoulou, Georgia; Lyberatos, Gerasimos
2018-02-01
In this study a novel modeling approach for describing fermentative hydrogen production in a continuous stirred tank reactor (CSTR) was developed, using the Aquasim modeling platform. This model accounts for the key metabolic reactions taking place in a fermentative hydrogen producing reactor, using fixed stoichiometry but different reaction rates. Biomass yields are determined based on bioenergetics. The model is capable of describing very well the variation in the distribution of metabolic products for a wide range of hydraulic retention times (HRT). The modeling approach is demonstrated using the experimental data obtained from a CSTR, fed with food industry waste (FIW), operating at different HRTs. The kinetic parameters were estimated through fitting to the experimental results. Hydrogen and total biogas production rates were predicted very well by the model, validating the basic assumptions regarding the implicated stoichiometric biochemical reactions and their kinetic rates.
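The interplay of dilution rate, substrate consumption, and gas yield that such a model captures can be illustrated with a minimal sketch: a single-substrate CSTR with Monod growth kinetics and a fixed hydrogen yield, integrated to steady state. The single-substrate form and every parameter value here are invented placeholders, not the stoichiometry, kinetics, or data used in the paper.

```python
import numpy as np

# Illustrative CSTR balances with Monod kinetics; all parameter values and
# the single-substrate kinetic form are invented, not taken from the paper.
mu_max, Ks = 0.3, 2.0        # max growth rate (1/h), half-saturation (g/L)
Y_xs, Y_hs = 0.1, 1.5        # biomass yield (g/g), H2 yield (L per g substrate)
S_in, HRT = 20.0, 12.0       # feed substrate (g/L), hydraulic retention time (h)
D = 1.0 / HRT                # dilution rate (1/h)

S, X, dt = S_in, 0.1, 0.01   # initial substrate and biomass; Euler step (h)
for _ in range(int(500 / dt)):            # integrate to steady state
    mu = mu_max * S / (Ks + S)            # Monod specific growth rate
    dS = D * (S_in - S) - mu * X / Y_xs   # inflow - washout - consumption
    dX = (mu - D) * X                     # growth - washout
    S += dt * dS
    X += dt * dX

h2_rate = Y_hs * D * (S_in - S)  # volumetric H2 production rate (L/L/h)
```

At steady state the specific growth rate equals the dilution rate, so the residual substrate depends on the HRT; sweeping `HRT` reproduces the qualitative shift in product rates that the abstract describes.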
An integrated modeling approach to age invariant face recognition
Alvi, Fahad Bashir; Pears, Russel
2015-03-01
This research study proposes a novel method for face recognition based on anthropometric features, making use of an integrated approach comprising a global model and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns, while a global model captures the general aging patterns in the database. We introduce a de-aging factor that de-ages each individual in the database test and training sets. We used the k-nearest-neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-NET database for checking the results of our technique and achieved a 65 percent Rank-1 identification rate.
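The global/personalized split described above can be sketched with k-nearest-neighbor regression on synthetic data. The features, their ranges, and the age relation below are all invented for illustration; they are not the anthropometric measurements or FG-NET data from the study.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for anthropometric features (e.g. facial distance
# ratios); the feature set and the age relation are hypothetical.
X = rng.uniform(0.5, 1.5, size=(300, 4))
age = 20 + 30 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 300)

# "Global model": one k-NN regressor over the whole database.
global_model = KNeighborsRegressor(n_neighbors=5).fit(X, age)

# A "personalized model" would restrict the neighbor search to one
# individual's own images; here we mimic that with a subset of rows.
personal_idx = np.arange(30)
personal_model = KNeighborsRegressor(n_neighbors=3).fit(
    X[personal_idx], age[personal_idx])

pred = global_model.predict(X[:10])
```

In the study's pipeline, predictions like these would feed the de-aging step and the per-feature voting; this sketch only shows the neighbor-based modeling core.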
On a model-based approach to radiation protection
International Nuclear Information System (INIS)
Waligorski, M.P.R.
2002-01-01
There is a preoccupation with linearity and absorbed dose as the basic quantifiers of radiation hazard. An alternative is the fluence approach, whereby radiation hazard may be evaluated, at least in principle, via an appropriate action cross section. In order to compare these approaches, it may be useful to discuss them as quantitative descriptors of survival and transformation-like endpoints in cell cultures in vitro - a system thought to be relevant to modelling radiation hazard. If absorbed dose is used to quantify these biological endpoints, then non-linear dose-effect relations have to be described, and, e.g. after doses of densely ionising radiation, dose-correction factors as high as 20 are required. In the fluence approach only exponential effect-fluence relationships can be readily described. Neither approach alone exhausts the scope of experimentally observed dependencies of effect on dose or fluence. Two-component models, incorporating a suitable mixture of the two approaches, are required. An example of such a model is the cellular track structure theory developed by Katz over thirty years ago. The practical consequences of modelling radiation hazard using this mixed two-component approach are discussed. (author)
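The two-component idea, an exponential effect-fluence term combined with a shouldered effect-dose term, can be written down in a few lines. The formulation below is Katz-style in spirit only, not his exact track-structure theory, and every parameter value is invented.

```python
import numpy as np

def survival(D, frac_f=0.5, sigma=0.8, D0=2.0, m=3):
    """Illustrative two-component survival model (not Katz's exact
    formulation; all parameter values are hypothetical). An exponential
    'ion-kill' term driven by fluence is multiplied by a shouldered
    multi-target 'gamma-kill' term driven by absorbed dose."""
    F = frac_f * D                               # fluence proxy, proportional to dose
    ion_kill = np.exp(-sigma * F)                # exponential effect-fluence relation
    gamma_kill = 1 - (1 - np.exp(-D / D0))**m    # multi-target dose response
    return ion_kill * gamma_kill

D = np.linspace(0.0, 10.0, 50)
S = survival(D)
```

Neither factor alone reproduces both the shoulder at low dose and the exponential tail; their product does, which is the point the abstract makes about mixing the dose and fluence descriptions.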
Directory of Open Access Journals (Sweden)
Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B.; Robinson, Terry E.; Khamassi, Mehdi
2014-02-01
Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in ...
A computational approach to compare regression modelling strategies in prediction research.
Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H
2016-08-25
It is often unclear which approach to fitting, assessing, and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set, and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 % to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
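The core of an a priori strategy comparison, scoring candidate strategies on the development data before committing to one, can be sketched as follows. This is not the authors' likelihood-based wrapper framework; it uses synthetic data, two illustrative strategies, and the cross-validated Brier score as the comparison metric.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import cross_val_predict

# Synthetic development data standing in for a clinical data set.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)

# Two of many possible modelling strategies: near-unpenalized logistic
# regression vs. a heavily ridge-shrunken one (illustrative choices only).
strategies = {
    "plain": LogisticRegression(C=1e6, max_iter=1000),
    "shrunk": LogisticRegression(C=0.05, max_iter=1000),
}

# A priori comparison: cross-validated Brier score on the development data,
# computed before any strategy is selected as final.
brier = {}
for name, model in strategies.items():
    p = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    brier[name] = brier_score_loss(y, p)
```

Which strategy wins depends on the data at hand, which is exactly the data-dependence the paper reports; rerunning on a different random data set can flip the ranking.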
Energy Technology Data Exchange (ETDEWEB)
NONE
2012-06-15
This report is the final report in a series of six reports detailing the findings from the Cowichan Valley Energy Mapping and Modelling project that was carried out from April of 2011 to March of 2012 by Ea Energy Analyses in conjunction with Geographic Resource Analysis and Science (GRAS). The driving force behind the Integrated Energy Mapping and Analysis project was the identification and analysis of a suite of pathways that the Cowichan Valley Regional District (CVRD) can utilise to increase its energy resilience, as well as reduce energy consumption and GHG emissions, with a primary focus on the residential sector. The mapping and analysis undertaken will support provincial energy and GHG reduction targets, and the suite of pathways outlined will address a CVRD internal target that calls for 75% of the region's energy within the residential sector to come from locally sourced renewables by 2050. The target has been developed as a mechanism to meet resilience and climate action targets. The maps and findings produced are to be integrated as part of a regional policy framework currently under development. The present report is the final report; it presents a summary of the findings of project tasks 1-5 and provides a set of recommendations to the CVRD based on the work done and with an eye towards the next steps in the energy planning process of the CVRD. (LN)
EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physi...
Modeling gene expression measurement error: a quasi-likelihood approach
Directory of Open Access Journals (Sweden)
Strimmer Korbinian
2003-03-01
Background: Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information, whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results: Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions: The quasi-likelihood framework provides a simple and versatile approach to analyzing gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also ...
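The "postulated quadratic variance structure" at the heart of this approach, Var(y) = a + b·mu², can be demonstrated on simulated replicated intensities. This sketch only fits the variance function by a simple moment-based regression; it is not the extended quasi-likelihood estimator itself, and the values of `a` and `b` are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate replicated gene intensities obeying the postulated quadratic
# variance structure Var(y) = a + b * mu^2 (a and b are invented).
a, b = 0.5, 0.01
mu = rng.uniform(10, 1000, 200)                  # true mean per gene
y = rng.normal(mu, np.sqrt(a + b * mu**2), size=(8, 200))  # 8 replicates

m = y.mean(axis=0)                               # per-gene sample mean
v = y.var(axis=0, ddof=1)                        # per-gene sample variance

# Moment-based recovery of the variance function: regress v on [1, m^2].
A = np.column_stack([np.ones_like(m), m**2])
coef, *_ = np.linalg.lstsq(A, v, rcond=None)
b_hat = coef[1]                                  # estimate of b
```

A quasi-likelihood analysis would plug this variance function into the estimating equations for the calibration parameters; the point here is only that the quadratic mean-variance relation is recoverable from the data, which is the partial knowledge the method relies on.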
Energy Technology Data Exchange (ETDEWEB)
Breitschopf, Barbara [Fraunhofer-Institut fuer System- und Innovationsforschung (ISI), Karlsruhe (Germany); Nathani, Carsten; Resch, Gustav
2011-11-15
... full picture of the impacts of RE deployment on the total economy, covering all economic activities such as production, services and consumption (industries, households). To obtain the number of additional jobs caused by RE deployment, they compare a situation without RE (baseline or counterfactual) to a situation under strong RE deployment. In a second step, we characterize the studies inter alia by their scope, activities and impacts, and show the relevant positive and negative effects that are included in gross or net impact assessment studies. The effects are briefly described in Table 0-1. While gross studies mainly include the positive effects listed here, net studies in general include both positive and negative effects. Third, we distinguish between methodological approaches for assessing impacts. We observe that the more effects are incorporated in the approach, the more data are needed, the more complex and demanding the methodological approach becomes, and the more the impacts capture effects of and in the whole economy, representing net impacts. A simple approach requires only a few data and allows simple questions concerning the impact on the RE industry to be answered, representing gross impacts. We identify six main approaches, three for gross and three for net impacts; they are depicted in Figure 0-2. The methodological approaches are characterized by the effects they capture, the complexity of the model and the additional data requirements (besides data on RE investments, capacities and generation), as well as by their depicted impacts reflecting economic comprehensiveness. A detailed overview of the diverse studies in table form is given in the Annex to this report. Finally, we suggest elaborating guidelines for the simple EF approach, the gross IO-modelling approach and the net IO-modelling approach. The first approach enables policy makers to do a quick assessment of gross effects, while the second is a more sophisticated approach for gross effects. The third approach builds on the gross IO ...
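The gross IO-modelling step, translating RE-driven final demand into direct plus supply-chain employment via the Leontief inverse, fits in a few lines. The two-sector coefficient matrix, employment coefficients, and demand vector below are all invented for illustration.

```python
import numpy as np

# Hypothetical 2-sector technical coefficient matrix and employment
# coefficients (jobs per unit of output); all numbers are invented.
A = np.array([[0.1, 0.2],
              [0.3, 0.1]])
emp_coef = np.array([5.0, 8.0])

# Final demand triggered by RE deployment (e.g. equipment manufacturing
# and installation services), again purely illustrative.
y = np.array([1.0, 0.5])

# Gross output including indirect (supply-chain) requirements:
# x = (I - A)^{-1} y, the Leontief inverse applied to final demand.
x = np.linalg.solve(np.eye(2) - A, y)

direct_jobs = emp_coef @ y     # jobs from the RE demand alone
gross_jobs = emp_coef @ x      # direct + indirect jobs
```

A net IO analysis would additionally subtract the jobs displaced in the counterfactual (e.g. conventional energy supply and budget effects), which is why it needs far more data than this gross calculation.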
Directory of Open Access Journals (Sweden)
H. C. Winsemius
2006-01-01
Variations of water stocks in the upper Zambezi river basin have been determined by two different hydrological modelling approaches. The purpose was to provide preliminary terrestrial storage estimates in the upper Zambezi, which will be compared with estimates derived from the Gravity Recovery And Climate Experiment (GRACE) in a future study. The first modelling approach is GIS-based, distributed and conceptual (STREAM). The second approach uses Lumped Elementary Watersheds identified and modelled conceptually (LEW). The STREAM model structure has been assessed using GLUE (Generalized Likelihood Uncertainty Estimation) a posteriori to determine parameter identifiability. The LEW approach could, in addition, be tested for model structure, because the computational efforts of LEW are low. Both models are threshold models, where the non-linear behaviour of the Zambezi river basin is explained by a combination of thresholds and linear reservoirs. The models were forced by time series of gauged and interpolated rainfall. Where available, runoff station data were used to calibrate the models. Ungauged watersheds were generally given the same parameter sets as their neighbouring calibrated watersheds. It appeared that the LEW model structure could be improved by applying GLUE iteratively. Eventually, this led to better identifiability of parameters and consequently a better model structure than the STREAM model; hence, the final model structure obtained better represents the true hydrology. After calibration, both models show a comparable efficiency in representing discharge. However, the LEW model shows a far greater storage amplitude than the STREAM model. This emphasizes the storage uncertainty related to hydrological modelling in data-scarce environments such as the Zambezi river basin. It underlines the need and potential for independent observations of terrestrial storage to enhance our understanding and modelling capacity of the hydrological processes. GRACE ...
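The GLUE procedure used here, Monte Carlo sampling of parameters, scoring each run with a likelihood measure, and keeping the "behavioral" runs above a threshold, can be sketched with a toy single-parameter linear-reservoir model. The model, forcing, threshold, and all numbers are invented; only the GLUE workflow itself matches the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(k, P):
    """Toy single-parameter linear-reservoir rainfall-runoff model."""
    S, Q = 0.0, []
    for p in P:
        S += p                  # rainfall fills the reservoir
        q = k * S               # linear outflow
        S -= q
        Q.append(q)
    return np.array(Q)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

P = rng.gamma(2.0, 1.0, 200)                         # synthetic rainfall forcing
Q_obs = simulate(0.3, P) + rng.normal(0, 0.05, 200)  # "observed" runoff

ks = rng.uniform(0.01, 0.99, 2000)                   # Monte Carlo parameter draws
L = np.array([nse(simulate(k, P), Q_obs) for k in ks])
behavioral = ks[L > 0.7]                             # behavioral threshold
lo, hi = np.percentile(behavioral, [2.5, 97.5])      # uncertainty bounds on k
```

A wide behavioral range signals poor identifiability; applying GLUE iteratively, as done for the LEW model, means revising the model structure until the behavioral parameter ranges tighten.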
Towards the final BSA modeling for the accelerator-driven BNCT facility at INFN LNL
Energy Technology Data Exchange (ETDEWEB)
Ceballos, C. [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, 5ta y 30, Miramar, Playa, Ciudad Habana (Cuba); Esposito, J., E-mail: juan.esposito@lnl.infn.it [INFN, Laboratori Nazionali di Legnaro (LNL), via dell'Università, 2, I-35020 Legnaro (PD) (Italy); Agosteo, S. [Politecnico di Milano, Dipartimento di Energia, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)] [INFN, Sezione di Milano, via Celoria 16, 20133 Milano (Italy)]; Colautti, P.; Conte, V.; Moro, D. [INFN, Laboratori Nazionali di Legnaro (LNL), via dell'Università, 2, I-35020 Legnaro (PD) (Italy); Pola, A. [Politecnico di Milano, Dipartimento di Energia, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)] [INFN, Sezione di Milano, via Celoria 16, 20133 Milano (Italy)]
2011-12-15
Some remarkable advances have been made in recent years on the SPES-BNCT project of the Istituto Nazionale di Fisica Nucleare (INFN) towards the development of the accelerator-driven thermal neutron beam facility at the Legnaro National Laboratories (LNL), aimed at the BNCT experimental treatment of extended skin melanoma. The compact neutron source will be produced via the {sup 9}Be(p,xn) reactions, using the 5 MeV, 30 mA beam driven by the RFQ accelerator, whose modules' construction has recently been completed, impinging on a thick beryllium target prototype already available. The final Beam Shaping Assembly (BSA) modeling, using both the neutron converter and the new, detailed Be(p,xn) neutron yield spectra at 5 MeV recently measured at the CN Van de Graaff accelerator at LNL, is summarized here.
Modeling of integrated environmental control systems for coal-fired power plants. Final report
Energy Technology Data Exchange (ETDEWEB)
Rubin, E.S.; Salmento, J.S.; Frey, H.C.; Abu-Baker, A.; Berkenpas, M.
1991-05-01
The Integrated Environmental Control Model (IECM) was designed to permit the systematic evaluation of environmental control options for pulverized coal-fired (PC) power plants. Of special interest was the ability to compare the performance and cost of advanced pollution control systems to "conventional" technologies for the control of particulate, SO{sub 2} and NO{sub x}. Of importance also was the ability to consider pre-combustion, combustion and post-combustion control methods employed alone or in combination to meet tough air pollution emission standards. Finally, the ability to conduct probabilistic analyses is a unique capability of the IECM. Key results are characterized as distribution functions rather than as single deterministic values. (VC)
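Characterizing results as distribution functions rather than point values amounts to Monte Carlo propagation of input uncertainty through the cost model. The sketch below illustrates the idea on a single annualized-cost line; the distributions, cost components, and numbers are invented and are not the IECM's actual performance or cost models.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical uncertain inputs for one control technology's annualized
# cost (illustrative distributions and numbers, not from the IECM).
capital = rng.triangular(180.0, 200.0, 240.0, n)   # capital cost ($/kW)
fixed_om = rng.uniform(8.0, 12.0, n)               # fixed O&M ($/kW-yr)
crf = 0.11                                         # capital recovery factor

# Propagate each sampled input set through the cost relation.
annual_cost = crf * capital + fixed_om             # $/kW-yr, one value per draw
p5, p50, p95 = np.percentile(annual_cost, [5, 50, 95])
```

Reporting the 5th/50th/95th percentiles (or the full empirical CDF) instead of a single deterministic estimate is what distinguishes a probabilistic comparison of control technologies from a conventional one.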
Final Report. Fumex-III. Improvement of Models Used for Fuel Behaviour Simulation
International Nuclear Information System (INIS)
Kulacsy, Katalin
2013-01-01
The FUMEX-III coordinated research programme organised by the IAEA was the first FUMEX exercise in which AEKI (Hungarian Academy of Sciences KFKI Atomic Energy Research Institute) took part with the partial support of Paks NPP. The aim of the participation was to test the code FUROM developed at AEKI against not only measurements but also other fuel behaviour simulation codes, to share and discuss modelling experience and issues, and to establish acquaintance with fuel modellers in other countries. Among the numerous cases proposed for the programme, AEKI chose to simulate normal operation up to high burn-up and ramp tests, with special interest in VVER rods and PWR rods with annular pellets. The US PWR 16x16, the SPC RE GINNA, the Kola3-MIR, the IFA-519.9 cases and the AREVA idealised rod were thus selected. The present Final Report gives a short description of the FUROM models relevant to the selected cases, presents the results for the 5 cases and summarises the conclusions of the FUMEX-III programme. The input parameters used for the simulations can be found in the Appendix at the end of the Report. Observations concerning the IFPE datasets are collected for each dataset in their respective Sections for possible use in the IFPE database. (author)
A review of function modeling: Approaches and applications
Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.
2008-01-01
This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research fields of artificial intelligence, design theory, and maintenance are discussed. In this discussion the goals are to highlight the features of various classical approaches in relation to FM, to delin...
Top-down approach to unified supergravity models
International Nuclear Information System (INIS)
Hempfling, R.
1994-03-01
We introduce a new approach for studying unified supergravity models. In this approach all the parameters of the grand unified theory (GUT) are fixed by imposing the corresponding number of low energy observables. This determines the remaining particle spectrum whose dependence on the low energy observables can now be investigated. We also include some SUSY threshold corrections that have previously been neglected. In particular the SUSY threshold corrections to the fermion masses can have a significant impact on the Yukawa coupling unification. (orig.)
Intelligent Transportation and Evacuation Planning A Modeling-Based Approach
Naser, Arab
2012-01-01
Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricanes Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...
Multi-model approach to characterize human handwriting motion.
Chihi, I; Abdelkrim, A; Benrejeb, M
2016-02-01
This paper deals with the characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography (EMG) signals. In this work, an experimental approach was used to record the coordinates of a pen tip moving in the (x, y) plane, together with the EMG signals, during the act of handwriting. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between the predicted results and the recorded data.
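The Recursive Least Squares step, updating each sub-model's parameter vector sample by sample, can be sketched generically. The regressors below are random stand-ins; the paper's actual sub-model structure built from EMG and pen-tip signals is not reproduced, and the "true" parameters are invented for the demonstration.

```python
import numpy as np

def rls(phis, ys, n, lam=0.99):
    """Recursive Least Squares with forgetting factor lam:
    standard gain/covariance update, theta <- theta + K * (y - phi.theta)."""
    theta = np.zeros(n)
    P = 1e4 * np.eye(n)                    # large initial covariance
    for phi, y in zip(phis, ys):
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

rng = np.random.default_rng(0)
true_theta = np.array([1.5, -0.7, 0.5])    # hypothetical sub-model parameters
phis = rng.normal(size=(500, 3))           # regressors (stand-ins for lagged
                                           # EMG / pen-trajectory samples)
ys = phis @ true_theta + rng.normal(0, 0.01, 500)
est = rls(phis, ys, 3)
```

The forgetting factor lets the estimate track slow parameter drift, which is useful when the writer's muscle activity characteristics change over the course of a recording; with lam = 1 the update reduces to ordinary recursive least squares.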
International Nuclear Information System (INIS)
Sanderson, Allen R.; Johnson, Christopher R.
2007-01-01
During the third and final period of this grant, our goal was to refine the algorithmic approaches used to detect and visualize magnetic islands and their corresponding null points within both the NIMROD and M3D data sets. We refined our geometric approach, which gave greater confidence in the accuracy of the Poincaré plots created. The final results are best demonstrated through Figures 2-6 attached to the report. Technical details of this work were reported in both the physics and visualization communities. The algorithms used to analyze the magnetic field lines and detect magnetic islands have been packaged into a library and were used within the SCIRun Problem Solving Environment, which is being used by members of the CEMM for visualization. In addition, the library interface was developed so that it could be used by both the NIMROD and M3D codes directly, allowing fusion scientists to perform this analysis while their simulations are actively running. The use of the library for analysis and visualization was not limited to the CEMM SciDAC; other groups, such as the SciDAC for the Simulation of Wave Interactions with Magnetohydrodynamics using the Silo code, have used the tools for the analysis of their simulations (Figure 1). Though the funding of this project has concluded, there is still much work to be performed on this analysis. The techniques developed are fast and robust when not in the presence of chaos. Magnetic field lines near the separatrices, where chaos is most often present, can be difficult to analyze, yet these are the field lines of greatest interest. We believe that investigating and developing techniques based on time-frequency analysis may hold some promise. Two other issues that need to be addressed are the ability to automatically search for the magnetic islands and the ability to track the development of the magnetic islands over time. Our initial effort into automatically searching for the islands did not prove as
Wave Resource Characterization Using an Unstructured Grid Modeling Approach
Directory of Open Access Journals (Sweden)
Wei-Cheng Wu
2018-03-01
Full Text Available This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization, using the unstructured grid Simulating WAve Nearshore (SWAN) model coupled with a nested grid WAVEWATCH III® (WWIII) model. The flexibility of models with various spatial resolutions and the effects of open boundary conditions simulated by a nested grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured grid-modeling approach for flexible model resolution and good model skill in simulating the six wave resource parameters recommended by the International Electrotechnical Commission in comparison to the observed data in Year 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the ST2 physics package's ability to predict wave power density for large waves, which is important for wave resource assessment, load calculation of devices, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than with the ST2 physics package. This study demonstrated that the unstructured grid wave modeling approach, driven by regional nested grid WWIII outputs along with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, i.e., O(10^2 km).
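One of the wave resource parameters mentioned above, the omnidirectional wave energy flux (wave power density), has a simple deep-water closed form. A minimal sketch follows; the deep-water approximation, the constants, and the example sea state are generic assumptions, not values from the study.

```python
import math

RHO = 1025.0  # seawater density, kg/m^3 (assumed)
G = 9.81      # gravitational acceleration, m/s^2

def wave_power_deep(hm0, te):
    """Omnidirectional wave power per unit crest length (W/m) in the
    deep-water approximation: J = rho * g^2 * Hm0^2 * Te / (64 * pi),
    with significant wave height Hm0 (m) and energy period Te (s)."""
    return RHO * G ** 2 * hm0 ** 2 * te / (64.0 * math.pi)
```

For example, a sea state with Hm0 = 2 m and Te = 8 s carries roughly 15.7 kW per metre of wave crest, consistent with the common rule of thumb J ≈ 0.5·Hm0²·Te kW/m.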
Directory of Open Access Journals (Sweden)
Ali Moeini
2015-01-01
Full Text Available With the growth of e-commerce, websites play an essential role in business success. Therefore, many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of evaluation models make it difficult to integrate them into a single comprehensive model. In this paper a quantitative method has been used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of the models, and the new model takes its validity from 93 previous models and the systematic quantitative approach.
Smeared crack modelling approach for corrosion-induced concrete damage
DEFF Research Database (Denmark)
Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik
2017-01-01
In this paper a smeared crack modelling approach is used to simulate corrosion-induced damage in reinforced concrete. The presented modelling approach utilizes a thermal analogy to mimic the expansive nature of solid corrosion products, while taking into account the penetration of corrosion products into the surrounding concrete, non-uniform precipitation of corrosion products, and creep. To demonstrate the applicability of the presented modelling approach, numerical predictions in terms of corrosion-induced deformations as well as formation and propagation of micro- and macrocracks were......-induced damage phenomena in reinforced concrete. Moreover, good agreements were also found between experimental and numerical data for corrosion-induced deformations along the circumference of the reinforcement....
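The thermal analogy mentioned in the abstract maps rust expansion onto a fictitious temperature rise applied to the elements around the rebar. A hypothetical sketch of that mapping follows; the rust-to-steel volume ratio, band width, and expansion coefficient are assumed illustrative values, not parameters from the paper.

```python
def equivalent_thermal_load(x_corr, r_vol=2.0, h_band=0.005, alpha=1.0e-5):
    """Fictitious temperature rise reproducing rust expansion.

    x_corr : corroded steel depth (m)
    r_vol  : rust-to-steel volume ratio (assumed, often taken ~2)
    h_band : width of the expanding element band (m, assumed)
    alpha  : thermal expansion coefficient assigned to the band (1/K)

    Free radial expansion delta = x_corr * (r_vol - 1); the imposed
    strain eps = delta / h_band is then produced thermally via
    dT = eps / alpha, so a standard thermal solver generates the
    corrosion-induced pressure on the surrounding concrete.
    """
    delta = x_corr * (r_vol - 1.0)   # unrestrained rust expansion (m)
    eps = delta / h_band             # imposed expansive strain (-)
    return eps / alpha               # fictitious temperature rise (K)
```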
A model-data based systems approach to process intensification
DEFF Research Database (Denmark)
Gani, Rafiqul
. Their developments, however, are largely due to experiment-based trial-and-error approaches and, while they do not require validation, they can be time-consuming and resource-intensive. Also, one may ask: can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based...
METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT
Directory of Open Access Journals (Sweden)
Gorbenkova Elena Vladimirovna
2017-10-01
Full Text Available Subject: the paper describes the research results on validation of a rural settlement development model. The basic methods and approaches for solving the problem of assessing the efficiency of urban and rural settlement development are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements, as well as the authors' method for assessing the level of agro-town development, the systems/factors that are necessary for the sustainable development of a rural settlement are identified. Results: we created a rural development model which consists of five major systems that include critical factors essential for achieving the sustainable development of a settlement system: an ecological system, an economic system, an administrative system, an anthropogenic (physical) system and a social system (supra-structure). The methodological approaches for creating an evaluation model of rural settlement development were revealed; the basic motivating factors that provide interrelations of the systems were determined; and the critical factors for each subsystem were identified and substantiated. Such an approach is justified by the composition of tasks for territorial planning at the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for sustainable rural development and can also become the basis of
An algebraic approach to modeling in software engineering
International Nuclear Information System (INIS)
Loegel, C.J.; Ravishankar, C.V.
1993-09-01
Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe ''computer science'' objects like abstract data types, but in practice software errors are often caused because ''real-world'' objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form
A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks
Mohan, Arvind; Gaitonde, Datta
2017-11-01
Reduced Order Modeling (ROM) can be used as a surrogate for prohibitively expensive simulations to model flow behavior over long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial intelligence based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.
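The POD step that precedes the LSTM can be sketched with a plain SVD; the forecasting stage itself is omitted here. The snapshot layout (space along rows, time along columns) and the mode count are assumptions for illustration, not details from the paper.

```python
import numpy as np

def pod(snapshots, n_modes):
    """POD of a snapshot matrix (n_points x n_times) via SVD.

    Returns the temporal mean, the leading spatial modes
    (n_points x n_modes), and their time coefficients
    (n_times x n_modes). In a deep-learning ROM, a sequence model
    such as an LSTM would be trained to forecast the coefficient
    sequences for future timesteps.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = U[:, :n_modes]
    coeffs = (snapshots - mean).T @ modes  # projection onto the modes
    return mean, modes, coeffs
```

Reconstruction is `mean + modes @ coeffs.T`; if the retained modes capture the flow's energy, this recovers the snapshots to within truncation error.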
International Nuclear Information System (INIS)
Envair, J.H.; Ekstrom, P.
1995-11-01
Wavelets are elementary mathematical functions used to construct, transform, and analyze higher functions and observational data. This report describes the results of an exploratory research effort to evaluate wavelet applications for numerically integrating differential equations associated with air pollution transport and conversion models. It is intended to provide a primer on wavelets, and specifically outlines the use of wavelets in a model that addresses derivative-evaluation and boundary-condition problems. Several factors complicate the use of wavelets for integrating differential equations. First, an enormous range of different wavelet types exists, making the choice of wavelet family for a given application challenging. Moreover, in contrast to the Fourier series, the functional derivatives necessary for numerical approximation are difficult to evaluate and consolidate in terms of wavelet expansions, introducing appreciable complexity into any attempt at wavelet-based integration. On the positive side, wavelet techniques do hold promise for effectively interfacing plume and other subgrid-scale phenomena in grid models. Moreover, workable techniques for derivative evaluation and simulation of boundary features appear feasible. Wavelet use may provide a viable, advantageous option for numerically integrating model equations describing fields on all scales of time and distance, especially where inhomogeneous fields exist, and provide a computationally efficient method of focusing on high-variability regions. The potential for wavelets to conduct integrations totally in transform space contrasts with Fourier-based approaches, which essentially preclude such treatments whenever nonlinear chemical processes occur in the modeled system
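As a concrete example of the elementary functions discussed above, a single level of the orthonormal Haar transform, the simplest wavelet family, splits a signal into smooth and detail coefficients and is exactly invertible. This is an illustrative sketch, not code from the report.

```python
import numpy as np

def haar_step(a):
    """One level of the orthonormal Haar wavelet transform.

    Splits an even-length signal into smooth (scaling) and detail
    (wavelet) coefficients; the detail channel localizes
    high-variability regions such as sharp gradients.
    """
    a = np.asarray(a, dtype=float)
    s = (a[0::2] + a[1::2]) / np.sqrt(2.0)  # smooth coefficients
    d = (a[0::2] - a[1::2]) / np.sqrt(2.0)  # detail coefficients
    return s, d

def haar_inverse(s, d):
    """Exact inverse of haar_step."""
    out = np.empty(2 * len(s))
    out[0::2] = (s + d) / np.sqrt(2.0)
    out[1::2] = (s - d) / np.sqrt(2.0)
    return out
```

Because the transform is orthonormal, energy is preserved, which is one reason wavelet coefficients can be used to focus computation on high-variability regions of a modelled field.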
A Cluster-based Approach Towards Detecting and Modeling Network Dictionary Attacks
Directory of Open Access Journals (Sweden)
A. Tajari Siahmarzkooh
2016-12-01
Full Text Available In this paper, we provide an approach to detect network dictionary attacks using a data set collected as flows, from which a clustered graph is derived. These flows provide an aggregated view of the network traffic in which the packets exchanged in the network are considered, so that more internally connected nodes are clustered together. We show that dictionary attacks can be detected through parameters such as the number and weight of clusters in time series and their evolution over time. Additionally, a Markov model based on the average weight of clusters is also created. Finally, by means of our suggested model, we demonstrate that artificial clusters of the flows are created for normal and malicious traffic. The results of the proposed approach on the CAIDA 2007 data set suggest a high accuracy for the model and, therefore, it provides a proper method for detecting dictionary attacks.
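The Markov model over cluster weights can be sketched by binning the average weight into discrete states and counting transitions between consecutive time windows. The state encoding below is a hypothetical simplification for illustration, not the paper's exact construction.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a first-order Markov transition matrix from a
    sequence of discrete states (e.g. binned average cluster weights
    observed in successive time windows)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row = counts.sum(axis=1, keepdims=True)
    row[row == 0] = 1.0  # avoid division by zero for unseen states
    return counts / row
```

A dictionary attack would then show up as transition probabilities that deviate from the matrix estimated on normal traffic.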
Towards a 3d Spatial Urban Energy Modelling Approach
Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.
2013-09-01
Today's need to reduce the environmental impact of energy use imposes dramatic changes on energy infrastructure and existing demand patterns (e.g. buildings) corresponding to their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments thus require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess the solar potential and heat energy demand of residential buildings, which enables cities to target building refurbishment potentials. Distributed energy systems require innovative modelling techniques where individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, in which heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On one hand, energy system models can be enhanced with high-resolution data from 3D city models and their semantic relations; furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones) to provide an integrated approach. On the other hand, 3D models can benefit from a more detailed system description of energy infrastructure, representing dynamic phenomena and high-resolution models of energy use at component level. The proposed modelling strategies
Modelling of ductile and cleavage fracture by local approach
International Nuclear Information System (INIS)
Samal, M.K.; Dutta, B.K.; Kushwaha, H.S.
2000-08-01
This report describes the modelling of ductile and cleavage fracture processes by local approach. It is now well known that the conventional fracture mechanics method based on single-parameter criteria is not adequate to model fracture processes, because of the effects of flaw size and geometry, and of loading type and rate, on the fracture resistance behaviour of a structure. Hence, it is questionable to use the same fracture resistance curves as determined from standard tests in the analysis of real-life components, because all the above effects are present there. So, there is a need for a method in which the parameters used for the analysis are true material properties, i.e. independent of geometry and size. One solution to the above problem is the use of local approaches. These approaches have been extensively studied and applied to different materials (including SA33 Gr.6) in this report. Each method has been studied and reported in a separate section. This report is divided into five sections. Section-I gives a brief review of the fundamentals of the fracture process. Section-II deals with modelling of ductile fracture by locally uncoupled models. In this section, the critical cavity growth parameters of the different models have been determined for the primary heat transport (PHT) piping material of the Indian pressurised heavy water reactor (PHWR). A comparative study has been done among the different models, and the dependency of the critical parameters on the stress triaxiality factor has also been studied. It is observed that Rice and Tracey's model is the most suitable one, but its parameters are not fully independent of the triaxiality factor. For this purpose, a modification to Rice and Tracey's model is suggested in Section-III. Section-IV deals with modelling of the ductile fracture process by locally coupled models. Section-V deals with modelling of the cleavage fracture process by Beremin's model, which is based on Weibull's
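For reference, the Rice and Tracey cavity growth law discussed above has the classical form dR/R = 0.283·exp(1.5·T)·dε_eq, where T is the stress triaxiality factor and ε_eq the equivalent plastic strain; at constant triaxiality it integrates in closed form. This is a sketch of the textbook law, not of the modified version proposed in the report.

```python
import math

def rice_tracey_growth(eps_eq, triaxiality, alpha=0.283):
    """Cavity growth ratio R/R0 from the Rice-Tracey law
    dR/R = alpha * exp(1.5 * T) * d(eps_eq), integrated at
    constant triaxiality T:  R/R0 = exp(alpha * exp(1.5*T) * eps_eq).
    Ductile failure is assumed when R/R0 reaches a critical value."""
    return math.exp(alpha * math.exp(1.5 * triaxiality) * eps_eq)
```

The strong exponential dependence on T is exactly why the critical parameters determined in the report vary with the stress triaxiality factor.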
Atomistic approach for modeling metal-semiconductor interfaces
DEFF Research Database (Denmark)
Stradi, Daniele; Martinez, Umberto; Blom, Anders
2016-01-01
We present a general framework for simulating interfaces using an atomistic approach based on density functional theory and non-equilibrium Green's functions. The method includes all the relevant ingredients, such as doping and an accurate value of the semiconductor band gap, required to model realistic metal-semiconductor interfaces, and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping and bias modify the Schottky barrier, and how finite-size models (the slab approach) are unable to describe these interfaces...
Next-Gen^{3}: Sequencing, Modeling, and Advanced Biofuels - Final Technical Report
Energy Technology Data Exchange (ETDEWEB)
Zengler, Karsten [Univ. of California, San Diego, CA (United States). Dept. of Pediatrics; Palsson, Bernhard [Univ. of California, San Diego, CA (United States). Dept. of Bioengineering; Lewis, Nathan [Univ. of California, San Diego, CA (United States). Dept. of Pediatrics
2017-12-27
Successful, scalable implementation of biofuels is dependent on the efficient and near-complete utilization of diverse biomass sources. One approach is to utilize the large recalcitrant biomass fraction (or any organic waste stream) through the thermochemical conversion of organic compounds to syngas, a mixture of carbon monoxide (CO), carbon dioxide (CO_{2}), and hydrogen (H_{2}), which can subsequently be metabolized by acetogenic microorganisms to produce next-gen biofuels. The goal of this proposal was to advance the development of the acetogen Clostridium ljungdahlii as a chassis organism for next-gen biofuel production from cheap, renewable sources and to detail the interconnectivity of metabolism, energy conservation, and regulation of acetogens using next-gen sequencing and next-gen modeling. To achieve this goal we characterized the optimization of carbon and energy utilization through differential translational efficiency in C. ljungdahlii. Furthermore, we reconstructed a next-generation model of all major cellular processes, such as macromolecular synthesis and transcriptional regulation, and deployed this model to predict proteome allocation, overflow metabolism, and metal requirements in this model acetogen. In addition we explored the evolutionary significance of tRNA operon structure using the next-gen model and determined the optimal operon structure for bioproduction. Our study substantially enhanced the knowledge base for chemolithoautotrophs and their potential for advanced biofuel production. It provides next-generation modeling capability, offers innovative tools for genome-scale engineering, and provides novel methods to utilize next-generation models for the design of tunable systems that produce commodity chemicals from inexpensive sources.
A new approach towards image based virtual 3D city modeling by using close range photogrammetry
Singh, S. P.; Jain, K.; Mandla, V. R.
2014-05-01
A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution available to create a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model was then transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries
Systems and context modeling approach to requirements analysis
Ahuja, Amrit; Muralikrishna, G.; Patwari, Puneet; Subhrojyoti, C.; Swaminathan, N.; Vin, Harrick
2014-08-01
Ensuring completeness and correctness of the requirements for a complex system such as the SKA is challenging. Current system engineering practice includes developing a stakeholder needs definition, a concept of operations, and defining system requirements in terms of use cases and requirements statements. We present a method that enhances this current practice into a collection of system models with mutual consistency relationships. These include stakeholder goals, needs definition and system-of-interest models, together with a context model that participates in the consistency relationships among these models. We illustrate this approach by using it to analyze the SKA system requirements.
An approach to multiscale modelling with graph grammars.
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-09-01
Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
Chan, Jennifer S K
2016-05-01
Dropouts are common in longitudinal studies. If the dropout probability depends on the missing observations at or after dropout, this type of dropout is called informative (or nonignorable) dropout (ID). Failure to accommodate such a dropout mechanism in the model will bias the parameter estimates. We propose a conditional autoregressive model for longitudinal binary data with an ID model such that the probabilities of positive outcomes as well as the dropout indicator in each occasion are logit linear in some covariates and outcomes. This model, adopting a marginal model for outcomes and a conditional model for dropouts, is called a selection model. To allow for heterogeneity and clustering effects, the outcome model is extended to incorporate mixture and random effects. Lastly, the model is further extended to a novel model that models the outcome and dropout jointly such that their dependency is formulated through an odds ratio function. Parameters are estimated by a Bayesian approach implemented using the user-friendly Bayesian software WinBUGS. A methadone clinic dataset is analyzed to illustrate the proposed models. Results show that the treatment time effect is still significant but weaker after allowing for an ID process in the data. Finally, the effect of dropout on parameter estimates is evaluated through simulation studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
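The logit-linear dropout sub-model can be sketched directly. The covariate structure and coefficient values below are hypothetical placeholders, not estimates from the methadone data.

```python
import math

def logistic(z):
    """Inverse logit link."""
    return 1.0 / (1.0 + math.exp(-z))

def dropout_prob(x, y_current, g0=-2.0, g1=0.5, g2=1.5):
    """Informative-dropout model: logit P(drop) = g0 + g1*x + g2*y,
    where x is a covariate and y is the (possibly unobserved) binary
    outcome at the current occasion. A nonzero g2 ties dropout to the
    missing outcome itself, which is what makes the mechanism
    nonignorable. All coefficients here are hypothetical."""
    return logistic(g0 + g1 * x + g2 * y_current)
```

In a selection model, this expression is multiplied into the likelihood alongside the marginal outcome model, so that ignoring it (setting g2 = 0 when it is not) biases the outcome parameters.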
A robust quantitative near infrared modeling approach for blend monitoring.
Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A
2018-01-30
This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch size blend run. Both models demonstrated similar performance. The small-scale method strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
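The calibration idea can be sketched with simulated two-component spectra and an ordinary least-squares fit standing in for the latent-variable models (e.g. PLS) typically used for NIR blend data. All spectra and concentrations below are synthetic, invented for illustration.

```python
import numpy as np

def fit_calibration(spectra, conc):
    """Fit a linear calibration vector b such that conc ~ spectra @ b.
    An MLR stand-in for the multivariate (e.g. PLS) models usually
    built from NIR blend spectra."""
    b, *_ = np.linalg.lstsq(spectra, conc, rcond=None)
    return b

def predict(spectra, b):
    """Predict component concentration from measured spectra."""
    return spectra @ b
```

In practice the calibration set would come from the compressed gram-scale mixtures, and the fitted model would then be applied on-line to spectra collected during the blend run.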
Scalable Nonlinear Solvers for Fully Implicit Coupled Nuclear Fuel Modeling. Final Report
International Nuclear Information System (INIS)
Cai, Xiao-Chuan; Yang, Chao; Pernice, Michael
2014-01-01
The focus of the project is on the development and customization of some highly scalable domain decomposition based preconditioning techniques for the numerical solution of nonlinear, coupled systems of partial differential equations (PDEs) arising from nuclear fuel simulations. These high-order PDEs represent multiple interacting physical fields (for example, heat conduction, oxygen transport, solid deformation), each modeled by a certain type of Cahn-Hilliard and/or Allen-Cahn equations. Most existing approaches involve a careful splitting of the fields and the use of field-by-field iterations to obtain a solution of the coupled problem. Such approaches have many advantages, such as ease of implementation since only single-field solvers are needed, but also exhibit disadvantages. For example, certain nonlinear interactions between the fields may not be fully captured, and for unsteady problems, stable time integration schemes are difficult to design. In addition, when implemented on large-scale parallel computers, the sequential nature of the field-by-field iterations substantially reduces the parallel efficiency. To overcome the disadvantages, fully coupled approaches have been investigated in order to obtain full physics simulations.
Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling
Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.
2016-01-01
The "interpretation through synthesis" approach to analyzing face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have the ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly dependent on the training sets and inherently on the genera...
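The controllable PCA model at the heart of AAMs can be sketched with a plain SVD: fit a linear basis from training vectors, then synthesize new instances from low-dimensional parameters. This is an illustrative sketch on generic vectors, not face data or the paper's Deep Boltzmann Machine replacement.

```python
import numpy as np

def fit_pca(X, n_comp):
    """Fit a PCA appearance model from training data X
    (n_samples x n_features): returns the mean vector and the
    leading principal components (rows of Vt, orthonormal)."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_comp]

def synthesize(mean, comps, params):
    """Generate an instance from model parameters: the
    'interpretation through synthesis' step, where analysis amounts
    to finding params whose synthesis matches an observed image."""
    return mean + params @ comps
```

Given an observation x, its parameters are the projection `(x - mean) @ comps.T`; synthesis from those parameters reconstructs x up to the energy discarded by truncation.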
Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo
2018-02-01
Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging, and in consequence these processes are relatively unknown and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches were based on some kind of threshold: either a CH4 pore water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) threshold. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes. Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied by an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and the model results (single horizontally homogeneous peat column). The approach should be favoured over the two other more widely used ebullition modelling approaches, and researchers are encouraged to implement it into their CH4 emission models.
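The volume-threshold (EBG) idea favoured by the study can be sketched as a simple accumulate-and-release rule: free-phase gas builds up in a peat layer, and the excess above a threshold volume fraction leaves as an ebullition flux. The parameter names, values and time stepping below are illustrative only, not taken from the paper.

```python
# Hypothetical sketch of a volume-threshold (EBG-style) ebullition rule.
# Gas produced each step accumulates; the excess above a threshold
# volume fraction is released as an ebullition flux.

def ebullition_step(gas_volume_frac, production, threshold=0.10):
    """Advance one time step; return (new_volume_frac, ebullition_flux)."""
    gas_volume_frac += production                  # CH4 gas produced this step
    flux = max(0.0, gas_volume_frac - threshold)   # excess above threshold is released
    return gas_volume_frac - flux, flux

v, fluxes = 0.0, []
for t in range(10):
    v, f = ebullition_step(v, production=0.03)
    fluxes.append(f)
```

Note how the rule produces the bursty temporal pattern the abstract emphasises: fluxes stay at zero while gas accumulates, then release begins once the threshold is crossed.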
Software sensors based on the grey-box modelling approach
DEFF Research Database (Denmark)
Carstensen, J.; Harremoës, P.; Strube, Rune
1996-01-01
In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on......-box model for the specific dynamics is identified. Similarly, an on-line software sensor for detecting the occurrence of backwater phenomena can be developed by comparing the dynamics of a flow measurement with a nearby level measurement. For treatment plants it is found that grey-box models applied to on......-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey...
Bianchi VI0 and III models: self-similar approach
International Nuclear Information System (INIS)
Belinchon, Jose Antonio
2009-01-01
We study several cosmological models with Bianchi VI0 and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other studied models we find that the behaviours of G and Λ are related: if G behaves as a growing time function then Λ is a positive decreasing time function, but if G is decreasing then Λ is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no SS solution for a string model with time-varying constants.
Environmental Radiation Effects on Mammals A Dynamical Modeling Approach
Smirnova, Olga A
2010-01-01
This text is devoted to the theoretical studies of radiation effects on mammals. It uses the framework of developed deterministic mathematical models to investigate the effects of both acute and chronic irradiation in a wide range of doses and dose rates on vital body systems including hematopoiesis, small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of the system and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of employment of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as the properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...
Software package r3t. Model for transport and retention in porous media. Final report
International Nuclear Information System (INIS)
Fein, E.
2004-01-01
In long-term safety analyses for final repositories for hazardous wastes in deep geological formations, the impact on the biosphere due to the potential release of hazardous materials is assessed for relevant scenarios. The model for migration of wastes from repositories to man is divided into three almost independent parts: the near field, the geosphere, and the biosphere. With the development of r3t, the feasibility to model the pollutant transport through the geosphere for porous or equivalent porous media in large, three-dimensional, and complex regions is established. Furthermore, one has at present the ability to consider all relevant retention and interaction effects which are important for long-term safety analyses. These are equilibrium sorption, kinetically controlled sorption, diffusion into immobile pore waters, and precipitation. The processes of complexing, colloidal transport and matrix diffusion may be considered at least approximately by skilful choice of parameters. Speciation is not part of the very recently developed computer code r3t. With r3t it is possible to assess the potential dilution and the barrier impact of the overburden close to reality.
Use on non-conjugate prior distributions in compound failure models. Final technical report
International Nuclear Information System (INIS)
Shultis, J.K.; Johnson, D.E.; Milliken, G.A.; Eckhoff, N.D.
1981-12-01
Several theoretical and computational techniques are presented for compound failure models in which the failure rate or failure probability for a class of components is considered to be a random variable. Both the failure-on-demand and failure-rate situations are considered. Ten different prior families are presented for describing the variation or uncertainty of the failure parameter. Methods considered for estimating values for the prior parameters from a given set of failure data are (1) matching data moments to those of the prior distribution, (2) matching data moments to those of the compound marginal distribution, and (3) the marginal maximum likelihood method. Numerical methods for computing the parameter estimators for all ten prior families are presented, as well as methods for obtaining estimates of the variances and covariances of the parameter estimators. It is shown that various confidence, probability, and tolerance intervals can be evaluated. Finally, to test the resulting failure models against the given failure data, generalized chi-square and Kolmogorov-Smirnov goodness-of-fit tests are proposed, together with a test to eliminate outliers from the failure data. Computer codes based on the results presented here have been prepared and are presented in a companion report.
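Method (1) above, matching data moments to those of the prior, can be illustrated for the failure-on-demand case with a beta prior Beta(a, b), whose mean is a/(a+b) and variance ab/((a+b)^2(a+b+1)). The sketch below equates the sample mean and variance of observed failure fractions to those prior moments; the failure data are invented for the example.

```python
# Moment matching of observed failure-on-demand fractions to a beta prior.
# From mean m and variance s2 of a Beta(a, b): a + b = m(1-m)/s2 - 1,
# then a = m(a+b) and b = (1-m)(a+b).

def beta_moment_match(fractions):
    n = len(fractions)
    m = sum(fractions) / n
    s2 = sum((p - m) ** 2 for p in fractions) / (n - 1)  # sample variance
    common = m * (1 - m) / s2 - 1                        # equals a + b
    return m * common, (1 - m) * common                  # (a, b)

# invented failure fractions k_i / n_i for five component classes
a, b = beta_moment_match([0.02, 0.05, 0.03, 0.08, 0.04])
```

By construction the fitted prior reproduces the sample mean exactly, since a/(a+b) = m.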
A new approach to Naturalness in SUSY models
Ghilencea, D M
2013-01-01
We review recent results that provide a new approach to the old problem of naturalness in supersymmetric models, without relying on subjective definitions for the fine-tuning associated with fixing the EW scale (to its measured value) in the presence of quantum corrections. The approach can address in a model-independent way many questions related to this problem. The results show that naturalness and its measure (fine-tuning) are an intrinsic part of the likelihood to fit the data that includes the EW scale. One important consequence is that the additional constraint of fixing the EW scale, usually not imposed in the data fits of the models, impacts on their overall likelihood to fit the data (or chi^2/ndf, ndf: number of degrees of freedom). This has negative implications for the viability of currently popular supersymmetric extensions of the Standard Model.
Model selection and inference a practical information-theoretic approach
Burnham, Kenneth P
1998-01-01
This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
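The AIC idea summarised above can be shown in a few lines: for Gaussian least-squares models, AIC reduces (up to an additive constant) to n·log(RSS/n) + 2k, where k is the number of parameters. The toy data and the two candidate models below are invented for illustration.

```python
# Minimal AIC-based model selection: compare a constant-mean model (k=1)
# against a straight-line model (k=2) on nearly linear toy data, using
# the Gaussian least-squares form AIC = n*log(RSS/n) + 2k.
import math

def aic(rss, n, k):
    return n * math.log(rss / n) + 2 * k

xs = list(range(10))
ys = [2.0 * x + 1.0 + (0.1 if x % 2 else -0.1) for x in xs]  # line + small noise
n = len(xs)

# Model 1: constant mean
mean_y = sum(ys) / n
rss1 = sum((y - mean_y) ** 2 for y in ys)

# Model 2: ordinary least-squares line
mean_x = sum(xs) / n
beta = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
       sum((x - mean_x) ** 2 for x in xs)
alpha = mean_y - beta * mean_x
rss2 = sum((y - (alpha + beta * x)) ** 2 for x, y in zip(xs, ys))

best = min([(aic(rss1, n, 1), "constant"), (aic(rss2, n, 2), "line")])
```

The extra parameter of the line model is penalised by the +2k term, but its far smaller residual sum of squares wins out, as Kullback-Leibler-based selection intends.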
Merits of a Scenario Approach in Dredge Plume Modelling
DEFF Research Database (Denmark)
Pedersen, Claus; Chu, Amy Ling Chu; Hjelmager Jensen, Jacob
2011-01-01
Dredge plume modelling is a key tool for quantification of potential impacts to inform the EIA process. There are, however, significant uncertainties associated with the modelling at the EIA stage when both dredging methodology and schedule are likely to be a guess at best as the dredging...... contractor would rarely have been appointed. Simulation of a few variations of an assumed full dredge period programme will generally not provide a good representation of the overall environmental risks associated with the programme. An alternative dredge plume modelling strategy that attempts to encapsulate...... uncertainties associated with preliminary dredging programmes by using a scenario-based modelling approach is presented. The approach establishes a set of representative and conservative scenarios for key factors controlling the spill and plume dispersion and simulates all combinations of e.g. dredge, climatic...
2014-05-01
The U.S. Environmental Protection Agency's (EPA) newest emissions model, MOtor Vehicle Emission Simulator (MOVES), uses a disaggregate approach that enables the users of the model to create and use local drive schedules (drive cycles) in order ...
Tseng, W. L.; Johnson, R. E.; Tucker, O. J.; Perry, M. E.; Ip, W. H.
2017-12-01
During the Cassini Grand Finale mission, the spacecraft has, for the first time, made in situ measurements of Saturn's upper atmosphere and its rings, providing critical information for understanding the coupling dynamics between the main rings and the Saturnian system. The ring atmosphere is the source of neutrals (i.e., O2, H2, H; Tseng et al., 2010; 2013a), which is primarily generated by photolytic decomposition of water ice (Johnson et al., 2006), and of plasma (i.e., O2+ and H2+; Tseng et al., 2011) in the Saturnian magnetosphere. In addition, the main rings interact strongly with Saturn's atmosphere and ionosphere (i.e., as a source of oxygen into Saturn's upper atmosphere and/or the "ring rain" in O'Donoghue et al., 2013). Furthermore, the near-ring plasma environment is complicated by the neutrals from both the seasonally dependent ring atmosphere and the Enceladus torus (Tseng et al., 2013b), and, possibly, by small grains from the main and tenuous F and G rings (Johnson et al. 2017). The data now coming from the Cassini Grand Finale mission already shed light on the dominant physics and chemistry in this region of Saturn's magnetosphere, for example, the presence of carbonaceous material from meteorite impacts in the main rings and the fact that each gas species has a similar distribution in the ring atmosphere. We will revisit the details of our ring atmosphere/ionosphere model to study issues such as the source mechanism for the organic material and the neutral-grain-plasma interaction processes.
Regularization of quantum gravity in the matrix model approach
International Nuclear Information System (INIS)
Ueda, Haruhiko
1991-02-01
We study the divergence problem of the partition function in the matrix model approach to two-dimensional quantum gravity. We propose a new model V(φ) = 1/2 Tr φ^2 + (g_4/N) Tr φ^4 + (g'/N^4)(Tr φ^4)^2 and show that in the sphere case it has no divergence problem and the critical exponent is that of pure gravity. (author)
PASSENGER TRAFFIC MOVEMENT MODELLING BY THE CELLULAR-AUTOMAT APPROACH
Directory of Open Access Journals (Sweden)
T. Mikhaylovskaya
2009-01-01
Full Text Available The mathematical model of passenger traffic movement developed on the basis of the cellular-automaton approach is considered. A program realization of the cellular-automaton model of pedestrian stream movement in pedestrian subways in the presence of obstacles and at subway structure narrowings is presented. The optimum distances between the obstacles and the angle of subway structure narrowing that provide safe pedestrian stream movement and prevent traffic congestion are determined.
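The cellular-automaton idea behind such models can be sketched in miniature: space is discretised into cells that are free, occupied by a pedestrian, or blocked by an obstacle, and a local update rule moves pedestrians forward when the next cell is free. The one-dimensional corridor and rule below are a toy illustration, not the model of the paper.

```python
# Toy cellular automaton of a pedestrian stream in a corridor.
# Cells: 0 = free, 1 = pedestrian, 2 = obstacle. Each update sweeps from
# the rightmost pedestrian backwards; a pedestrian advances one cell
# if the cell ahead is free.

def step(corridor):
    new = corridor[:]
    for i in range(len(corridor) - 2, -1, -1):   # rightmost pedestrians move first
        if corridor[i] == 1 and new[i + 1] == 0:
            new[i], new[i + 1] = 0, 1
    return new

state = [1, 1, 0, 2, 0, 0]   # two pedestrians approaching an obstacle at cell 3
for _ in range(4):
    state = step(state)
```

After a few updates the pedestrians jam against the obstacle, illustrating how congestion emerges from purely local rules, which is exactly the kind of effect used to tune obstacle spacing in such models.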
The Generalised Ecosystem Modelling Approach in Radiological Assessment
International Nuclear Information System (INIS)
Klos, Richard
2008-03-01
An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment
Reduced modeling of signal transduction – a modular approach
Directory of Open Access Journals (Sweden)
Ederer Michael
2007-09-01
Full Text Available Abstract Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows the model to be dissected into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable. Additionally, the method provides very good
A nonlinear complementarity approach for the national energy modeling system
International Nuclear Information System (INIS)
Gabriel, S.A.; Kydes, A.S.
1995-01-01
The National Energy Modeling System (NEMS) is a large-scale mathematical model that computes equilibrium fuel prices and quantities in the U.S. energy sector. At present, to generate these equilibrium values, NEMS sequentially solves a collection of linear programs and nonlinear equations. The NEMS solution procedure then incorporates the solutions of these linear programs and nonlinear equations in a nonlinear Gauss-Seidel approach. The authors describe how the current version of NEMS can be formulated as a particular nonlinear complementarity problem (NCP), thereby possibly avoiding current convergence problems. In addition, they show that the NCP format is equally valid for a more general form of NEMS. They also describe several promising approaches for solving the NCP form of NEMS based on recent Newton type methods for general NCPs. These approaches share the feature of needing to solve their direction-finding subproblems only approximately. Hence, they can effectively exploit the sparsity inherent in the NEMS NCP
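The complementarity format referred to above asks for x >= 0 with F(x) >= 0 and x·F(x) = 0. As a toy illustration (a simple projection iteration, not the Newton-type methods the abstract discusses), a tiny two-commodity "excess supply" map can be solved as an NCP; the map F and all coefficients are invented for the example.

```python
# Toy nonlinear complementarity problem (NCP): find x >= 0 such that
# F(x) >= 0 and x . F(x) = 0, via the projection iteration
# x <- max(0, x - t*F(x)), which converges for this small monotone map.

def solve_ncp(F, x, t=0.1, iters=2000):
    for _ in range(iters):
        fx = F(x)
        x = [max(0.0, xi - t * fi) for xi, fi in zip(x, fx)]
    return x

def F(x):
    # linear "excess supply" map F(x) = M x + q, M symmetric positive definite
    return [2.0 * x[0] + 0.5 * x[1] - 4.0,
            0.5 * x[0] + 1.0 * x[1] + 1.0]

x = solve_ncp(F, [1.0, 1.0])
```

At the solution the first component clears its market (F1 = 0, x0 > 0) while the second is inactive (x1 = 0, F2 > 0), which is precisely the complementary-slackness structure an equilibrium formulation of NEMS would exploit.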
Non-frontal Model Based Approach to Forensic Face Recognition
Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan
2012-01-01
In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance
A Behavioral Decision Making Modeling Approach Towards Hedging Services
Pennings, J.M.E.; Candel, M.J.J.M.; Egelkraut, T.M.
2003-01-01
This paper takes a behavioral approach toward the market for hedging services. A behavioral decision-making model is developed that provides insight into how and why owner-managers decide the way they do regarding hedging services. Insight into those choice processes reveals information needed by
Export of microplastics from land to sea. A modelling approach
Siegfried, Max; Koelmans, A.A.; Besseling, E.; Kroeze, C.
2017-01-01
Quantifying the transport of plastic debris from river to sea is crucial for assessing the risks of plastic debris to human health and the environment. We present a global modelling approach to analyse the composition and quantity of point-source microplastic fluxes from European rivers to the sea.
The Bipolar Approach: A Model for Interdisciplinary Art History Courses.
Calabrese, John A.
1993-01-01
Describes a college level art history course based on the opposing concepts of Classicism and Romanticism. Contends that all creative work, such as film or architecture, can be categorized according to this bipolar model. Includes suggestions for objects to study and recommends this approach for art education at all education levels. (CFR)
Teaching Modeling with Partial Differential Equations: Several Successful Approaches
Myers, Joseph; Trubatch, David; Winkel, Brian
2008-01-01
We discuss the introduction and teaching of partial differential equations (heat and wave equations) via modeling physical phenomena, using a new approach that encompasses constructing difference equations and implementing these in a spreadsheet, numerically solving the partial differential equations using the numerical differential equation…
A review of function modeling : Approaches and applications
Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.
2008-01-01
This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research
A novel Monte Carlo approach to hybrid local volatility models
A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.
Model-independent approach for dark matter phenomenology
Indian Academy of Sciences (India)
We have studied the phenomenology of dark matter at the ILC and cosmic positron experiments based on model-independent approach. We have found a strong correlation between dark matter signatures at the ILC and those in the indirect detection experiments of dark matter. Once the dark matter is discovered in the ...
The variational approach to the Glashow-Weinberg-Salam model
International Nuclear Information System (INIS)
Manka, R.; Sladkowski, J.
1987-01-01
The variational approach to the Glashow-Weinberg-Salam model, based on canonical quantization, is presented. It is shown that taking into consideration the Becchi-Rouet-Stora symmetry leads to the correct, temperature-dependent, effective potential. This generalization of the Weinberg-Coleman potential leads to a phase transition of the first kind
Methodological Approach for Modeling of Multienzyme in-pot Processes
DEFF Research Database (Denmark)
Andrade Santacoloma, Paloma de Gracia; Roman Martinez, Alicia; Sin, Gürkan
2011-01-01
This paper presents a methodological approach for modeling multi-enzyme in-pot processes. The methodology is exemplified stepwise through the bi-enzymatic production of N-acetyl-D-neuraminic acid (Neu5Ac) from N-acetyl-D-glucosamine (GlcNAc). In this case study, sensitivity analysis is also used ...
An Approach to Quality Estimation in Model-Based Development
DEFF Research Database (Denmark)
Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter
2004-01-01
We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...
EXTENDED MODEL OF COMPETITIVENESS THROUGH APPLICATION OF NEW APPROACH DIRECTIVES
Directory of Open Access Journals (Sweden)
Slavko Arsovski
2009-03-01
Full Text Available The basic subject of this work is a model of the impact of the new approach directives on product quality and safety and on the competitiveness of our companies. The work presents a working hypothesis based on experts' experience, given that the infrastructure for applying the new approach directives has not been examined until now: it is not known which products or industries of Serbia are covered by the new approach directives and the CE mark, nor what the effects of using the CE mark are. This work should indicate existing reserves in quality and product safety, the level of possible competitiveness improvement, and the potential for increased profit from meeting the requirements of the new approach directives.
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The
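The weighted additive model used to aggregate participants' consequence estimates can be sketched directly: each management alternative gets a normalised score on each objective, and objective weights combine them into a single decision score. The alternatives, scores and weights below are invented for illustration, not taken from the H. banksii case study.

```python
# Sketch of a weighted additive value model: decision score =
# sum over objectives of (objective weight * normalised consequence score).

weights = {"ecological": 0.5, "social": 0.3, "economic": 0.2}

# invented consequence scores per alternative, normalised to [0, 1]
alternatives = {
    "no action":       {"ecological": 0.1, "social": 1.0, "economic": 1.0},
    "signage":         {"ecological": 0.4, "social": 0.9, "economic": 0.8},
    "partial closure": {"ecological": 0.8, "social": 0.5, "economic": 0.5},
    "full closure":    {"ecological": 1.0, "social": 0.1, "economic": 0.2},
}

def decision_score(scores):
    return sum(weights[obj] * s for obj, s in scores.items())

best = max(alternatives, key=lambda a: decision_score(alternatives[a]))
```

The trade-off the abstract describes is visible in the numbers: the ecologically best alternative (full closure) does not win once social and economic objectives are weighted in.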
Accurate phenotyping: Reconciling approaches through Bayesian model averaging.
Directory of Open Access Journals (Sweden)
Carla Chia-Ming Chen
Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder-an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
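The averaging step described above can be sketched in miniature: each clustering method assigns every individual a membership probability for the "affected" phenotype, and these are combined with posterior model weights before thresholding. All weights and probabilities below are invented for illustration.

```python
# Minimal sketch of Bayesian model averaging over two phenotype clusterings:
# per-individual P(affected) from each model, weighted by assumed posterior
# model probabilities, then thresholded into a consensus phenotype.

model_weights = {"latent_class": 0.6, "grade_of_membership": 0.4}

# invented P(affected) for four individuals under each model
p_affected = {
    "latent_class":        [0.9, 0.2, 0.7, 0.1],
    "grade_of_membership": [0.8, 0.4, 0.5, 0.2],
}

averaged = [
    sum(model_weights[m] * p_affected[m][i] for m in model_weights)
    for i in range(4)
]
phenotype = [1 if p > 0.5 else 0 for p in averaged]
```

A consensus phenotype of this kind is what would then feed into the linkage analyses, rather than the output of either clustering method alone.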
An Alternative Approach to the Extended Drude Model
Gantzler, N. J.; Dordevic, S. V.
2018-05-01
The original Drude model, proposed over a hundred years ago, is still used today for the analysis of optical properties of solids. Within this model, both the plasma frequency and quasiparticle scattering rate are constant, which makes the model rather inflexible. In order to circumvent this problem, the so-called extended Drude model was proposed, which allowed for the frequency dependence of both the quasiparticle scattering rate and the effective mass. In this work we will explore an alternative approach to the extended Drude model. Here, one also assumes that the quasiparticle scattering rate is frequency dependent; however, instead of the effective mass, the plasma frequency becomes frequency-dependent. This alternative model is applied to the high Tc superconductor Bi2Sr2CaCu2O8+δ (Bi2212) with Tc = 92 K, and the results are compared and contrasted with the ones obtained from the conventional extended Drude model. The results point to several advantages of this alternative approach to the extended Drude model.
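For reference, the conventional extended Drude extraction that the alternative approach is compared against inverts the measured complex conductivity at fixed plasma frequency: 1/tau(w) = (wp^2/4π) Re[1/σ(w)] and m*(w)/m = -(wp^2/4π) Im[1/σ(w)]/w. The sketch below applies this to a plain Drude conductivity (for which the extraction must return a constant scattering rate and unit mass enhancement); the units and numbers are illustrative only.

```python
# Conventional extended Drude analysis: extract the frequency-dependent
# scattering rate 1/tau(w) and mass enhancement m*(w)/m from a complex
# optical conductivity sigma(w), at fixed plasma frequency wp.
import math

def extended_drude(sigma, w, wp):
    inv_sigma = 1.0 / sigma
    scatter = wp**2 / (4 * math.pi) * inv_sigma.real          # 1/tau(w)
    mass = -(wp**2 / (4 * math.pi)) * inv_sigma.imag / w      # m*(w)/m
    return scatter, mass

wp, gamma = 10000.0, 100.0                        # illustrative, cm^-1
w = 50.0
sigma = (wp**2 / (4 * math.pi)) / (gamma - 1j * w)  # plain Drude conductivity
rate, mass = extended_drude(sigma, w, wp)
```

A constant 1/tau and m*/m = 1 confirm the self-consistency of the extraction; for a real material like Bi2212 both quantities acquire frequency dependence, which is where the two parameterisations (frequency-dependent mass versus frequency-dependent plasma frequency) diverge.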
Multiphysics modeling using COMSOL a first principles approach
Pryor, Roger W
2011-01-01
Multiphysics Modeling Using COMSOL rapidly introduces the senior level undergraduate, graduate or professional scientist or engineer to the art and science of computerized modeling for physical systems and devices. It offers a step-by-step modeling methodology through examples that are linked to the Fundamental Laws of Physics through a First Principles Analysis approach. The text explores a breadth of multiphysics models in coordinate systems that range from 1D to 3D and introduces the readers to the numerical analysis modeling techniques employed in the COMSOL Multiphysics software. After readers have built and run the examples, they will have a much firmer understanding of the concepts, skills, and benefits acquired from the use of computerized modeling techniques to solve their current technological problems and to explore new areas of application for their particular technological areas of interest.
Evaluation of Workflow Management Systems - A Meta Model Approach
Directory of Open Access Journals (Sweden)
Michael Rosemann
1998-11-01
Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction to the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users specify their requirements for a workflow management system.
Generalised additive modelling approach to the fermentation process of glutamate.
Liu, Chun-Bo; Li, Yun; Pan, Feng; Shi, Zhong-Ping
2011-03-01
In this work, generalised additive models (GAMs) were used for the first time to model the fermentation of glutamate (Glu). It was found that three fermentation parameters, fermentation time (T), dissolved oxygen (DO) and oxygen uptake rate (OUR), could capture 97% of the variance in the production of Glu during the fermentation process through a GAM model calibrated using online data from 15 fermentation experiments. This model was applied to investigate the individual and combined effects of T, DO and OUR on the production of Glu, and conditions to optimize the fermentation process were proposed based on a simulation study with this model. The results suggest that the production of Glu can reach a high level by controlling DO and OUR at the proposed optimal levels during the fermentation process. The GAM approach therefore provides an alternative way to model and optimize the fermentation process of Glu. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
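An additive model of this kind can be sketched with the classic backfitting algorithm. The toy numpy implementation below uses synthetic data and a crude running-mean smoother; it is an illustration of the GAM idea, not the authors' calibrated fermentation model (real work would use a package such as mgcv or pygam).

```python
import numpy as np

def smooth(x, r, window=21):
    """Crude running-mean smoother of residuals r against predictor x."""
    order = np.argsort(x)
    rs = r[order]
    fs = np.convolve(rs, np.ones(window) / window, mode="same")
    out = np.empty_like(fs)
    out[order] = fs
    return out - out.mean()          # centre each component

def backfit_gam(X, y, iters=20):
    """Fit y ~ alpha + sum_j f_j(X[:, j]) by backfitting: cycle over
    predictors, smoothing the partial residuals against each one."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(iters):
        for j in range(p):
            partial = y - alpha - f.sum(axis=1) + f[:, j]
            f[:, j] = smooth(X[:, j], partial)
    return alpha, f

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 500)
alpha, f = backfit_gam(X, y)
pred = alpha + f.sum(axis=1)
```

The fitted components f[:, j] can then be plotted against each predictor to read off individual effects, which is how a GAM supports the kind of effect analysis described in the abstract.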
Energy Technology Data Exchange (ETDEWEB)
Li, Haiyan [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Huang, Yunbao, E-mail: Huangyblhy@gmail.com [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Jiang, Shaoen, E-mail: Jiangshn@vip.sina.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Jing, Longfei, E-mail: scmyking_2008@163.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Tianxuan, Huang; Ding, Yongkun [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China)
2015-11-15
Highlights: • A unified modeling approach for physical experiment design is presented. • Any laser facility can be flexibly defined and included with two scripts. • Complex targets and laser beams can be parametrically modeled for optimization. • Automatic mapping of laser beam energy facilitates target shape optimization. - Abstract: Physical experiment design and optimization is essential for laser-driven inertial confinement fusion due to the high cost of each shot. However, available codes can design and evaluate only limited experiments with simple structures or shapes on a few laser facilities, and targets are usually defined by programming, which makes complex-shape target design and optimization on arbitrary laser facilities difficult. A unified modeling approach for physical experiment design and optimization on any laser facility is presented in this paper. Its core ideas are: (1) any laser facility can be flexibly defined and included with two scripts, (2) complex-shape targets and laser beams can be parametrically modeled based on features, (3) an automatic scheme for mapping laser beam energy onto the discrete mesh elements of targets enables targets or laser beams to be optimized without any additional interactive modeling or programming, and (4) computational algorithms are additionally presented to efficiently evaluate radiation symmetry on the target. Finally, examples are demonstrated to validate the significance of this unified modeling approach for physical experiment design and optimization in laser-driven inertial confinement fusion.
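The energy-mapping idea in point (3) can be illustrated with a highly simplified sketch: deposit a beam's power onto the discrete surface elements of a toy spherical target, weighting each element by the cosine of the local incidence angle. The geometry and weighting below are assumptions for illustration, not the paper's actual mapping scheme.

```python
import numpy as np

def deposit_beam(centers, normals, areas, beam_dir, power):
    """Distribute a beam's power over mesh elements in proportion to
    (cosine of incidence angle) x (element area); elements facing away
    from the beam receive nothing."""
    cosines = np.clip(-(normals @ beam_dir), 0.0, None)
    weights = cosines * areas
    total = weights.sum()
    return power * weights / total if total > 0 else np.zeros(len(areas))

# Toy mesh: element centers sampled on a unit sphere, outward normals
# equal to the centers, equal element areas.
rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 3))
centers = pts / np.linalg.norm(pts, axis=1, keepdims=True)
normals = centers
areas = np.full(200, 4 * np.pi / 200)
flux = deposit_beam(centers, normals, areas,
                    beam_dir=np.array([0.0, 0.0, -1.0]), power=1.0)
```

Because the deposition is a pure function of the parametric mesh, re-meshing or re-aiming a beam updates the energy map automatically, which is the property the abstract highlights for optimization.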
International Nuclear Information System (INIS)
Mok, Chin Man; Doughty, Christine; Zhang, Keni; Pruess, Karsten; Kiureghian, Armen; Zhang, Miao; Kaback, Dawn
2010-01-01
A new computer code, CALRELTOUGH, which uses reliability methods to incorporate parameter sensitivity and uncertainty analysis into subsurface flow and transport models, was developed by Geomatrix Consultants, Inc. in collaboration with Lawrence Berkeley National Laboratory and the University of California at Berkeley. The CALREL reliability code was developed at the University of California at Berkeley for geotechnical applications, and the TOUGH family of codes was developed at Lawrence Berkeley National Laboratory for subsurface flow and transport applications. The integration of the two codes provides a new approach to dealing with uncertainties in flow and transport modeling of the subsurface, such as those associated with hydrogeologic parameters, boundary conditions, and initial conditions, using data from site characterization and monitoring for conditioning. The new code enables computation of the reliability of a system and of the components that make up the system, instead of calculating the complete probability distributions of model predictions at all locations and times. The new CALRELTOUGH code has tremendous potential to advance subsurface understanding for a variety of applications including subsurface energy storage, nuclear waste disposal, carbon sequestration, extraction of natural resources, and environmental remediation. The new code was tested on a carbon sequestration problem as part of the Phase I project. Phase II was not awarded.
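Reliability methods of the kind CALREL implements are typified by the first-order reliability method (FORM). The sketch below computes the Hasofer-Lind reliability index, the shortest distance from the origin of standard-normal space to the limit-state surface, for a hypothetical linear limit state; it is a generic illustration, not a CALRELTOUGH example.

```python
import numpy as np
from scipy import optimize, stats

def form_beta(g, n, x0=None):
    """Hasofer-Lind reliability index: beta = min ||u|| subject to
    g(u) = 0 in standard-normal space. FORM then approximates the
    failure probability as Phi(-beta)."""
    x0 = np.ones(n) if x0 is None else x0
    res = optimize.minimize(lambda u: u @ u, x0,
                            constraints=[{"type": "eq", "fun": g}])
    beta = np.sqrt(res.fun)
    return beta, stats.norm.cdf(-beta)

# Linear limit state g(u) = 3 - u1 - u2: analytically beta = 3/sqrt(2).
beta, pf = form_beta(lambda u: 3.0 - u[0] - u[1], n=2)
```

For this linear case the index matches the closed-form value |b|/||a||; for nonlinear limit states the same optimization gives the first-order approximation that makes reliability analysis far cheaper than full distribution calculations.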
Toward the Development of a Cold Regions Regional-Scale Hydrologic Model, Final Project Report
Energy Technology Data Exchange (ETDEWEB)
Hinzman, Larry D [Univ. of Alaska, Fairbanks, AK (United States); Bolton, William Robert [Univ. of Alaska, Fairbanks, AK (United States); Young-Robertson, Jessica (Cable) [Univ. of Alaska, Fairbanks, AK (United States)
2018-01-02
This project improves meso-scale hydrologic modeling in the boreal forest by: (1) demonstrating the importance of capturing the heterogeneity of the landscape using small scale datasets for parameterization for both small and large basins; (2) demonstrating that in drier parts of the landscape and as the boreal forest dries with climate change, modeling approaches must consider the sensitivity of simulations to soil hydraulic parameters - such as residual water content - that are usually held constant. Thus, variability / flexibility in residual water content must be considered for accurate simulation of hydrologic processes in the boreal forest; (3) demonstrating that assessing climate change impacts on boreal forest hydrology through multiple model integration must account for direct effects of climate change (temperature and precipitation), and indirect effects from climate impacts on landscape characteristics (permafrost and vegetation distribution). Simulations demonstrated that climate change will increase runoff, but will increase ET to a greater extent and result in a drying of the landscape; and (4) vegetation plays a significant role in boreal hydrologic processes in permafrost free areas that have deciduous trees. This landscape type results in a decoupling of ET and precipitation, a tight coupling of ET and temperature, low runoff, and overall soil drying.
Validation of Slosh Modeling Approach Using STAR-CCM+
Benson, David J.; Ng, Wanyi
2018-01-01
Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft, or there may be an unexpected loss of science observation time due to longer slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimentally, empirically, and analytically derived results. The geometries examined are a bare right-cylinder tank and a right cylinder with a single ring baffle.
Feedback structure based entropy approach for multiple-model estimation
Institute of Scientific and Technical Information of China (English)
Shen-tu Han; Xue Anke; Guo Yunfei
2013-01-01
The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with a particle filter (PF) and the challenge match algorithm are used to achieve sub-optimal solutions, a trade-off between practicality and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.
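The entropy criterion can be illustrated minimally: compute the Shannon entropy of the posterior mode probabilities, and prune low-probability models to shrink the active model set. This is a crude stand-in for the paper's ME-VSMM machinery; the probabilities and threshold below are hypothetical.

```python
import numpy as np

def mode_entropy(probs):
    """Shannon entropy (nats) of the posterior mode probabilities."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def prune_model_set(probs, threshold=0.05):
    """Keep only modes whose posterior probability exceeds a threshold,
    then renormalise -- a crude sketch of model-set adaptation."""
    p = np.asarray(probs, dtype=float)
    keep = np.flatnonzero(p > threshold)
    return keep, p[keep] / p[keep].sum()

p = [0.60, 0.30, 0.06, 0.03, 0.01]   # hypothetical mode posteriors
keep, p_new = prune_model_set(p)
```

Feeding the filtered posteriors back each cycle, as the abstract describes, lets the model set shrink toward the modes the data actually support, which is what the falling entropy reflects.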
Polynomial Chaos Expansion Approach to Interest Rate Models
Directory of Open Access Journals (Sweden)
Luca Di Persio
2015-01-01
Full Text Available The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamics of certain engineering models and their related simulations. In the present paper, we use the PCE approach to analyze some equity and interest rate models. In particular, we consider models based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
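For the Geometric Brownian Motion case, the PCE is known in closed form: the terminal value S_T = S0·exp((r − σ²/2)T + σ√T·ξ) with ξ ~ N(0,1) has probabilists'-Hermite coefficients c_n = S0·e^{rT}(σ√T)^n/n!, by the Hermite generating function. The sketch below (illustrative parameters) truncates this expansion and compares it with exact sampling.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

# PCE of geometric Brownian motion in probabilists' Hermite polynomials
# He_n(xi); exact coefficients c_n = S0*exp(r*T)*(s*sqrt(T))**n / n!.
S0, r, s, T = 1.0, 0.05, 0.2, 1.0
a = s * np.sqrt(T)
N = 8                                            # truncation order
c = np.array([S0 * np.exp(r * T) * a**n / factorial(n) for n in range(N + 1)])

rng = np.random.default_rng(42)
xi = rng.standard_normal(200_000)
exact = S0 * np.exp((r - 0.5 * s**2) * T + a * xi)   # exact samples of S_T
pce = hermeval(xi, c)                                # truncated PCE samples

mean_exact = S0 * np.exp(r * T)     # analytic mean E[S_T]
mean_pce = c[0]                     # PCE mean is the 0th coefficient
```

A key efficiency point of PCE is visible here: moments come directly from the coefficients (the mean is c_0, the variance is the sum of c_n² n!), with no sampling error, whereas Monte Carlo needs many samples for the same accuracy.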
Common modelling approaches for training simulators for nuclear power plants
International Nuclear Information System (INIS)
1990-02-01
Training simulators for nuclear power plant operating staff have gained increasing importance over the last twenty years. One of the recommendations of the 1983 IAEA Specialists' Meeting on Nuclear Power Plant Training Simulators in Helsinki was to organize a Co-ordinated Research Programme (CRP) on some aspects of training simulators. The goal statement was: ''To establish and maintain a common approach to modelling for nuclear training simulators based on defined training requirements''. Before adopting this goal statement, the participants considered many alternatives for defining the common aspects of training simulator models, such as the programming language used, the nature of the simulator computer system, the size of the simulation computers, and the scope of simulation. The participants agreed that the training requirements defined the need for a simulator, the scope of models and hence the type of computer complex required, and the criteria for fidelity and verification, and that they were therefore the most appropriate basis for the commonality of modelling approaches. It should be noted that the Co-ordinated Research Programme was restricted, for a variety of reasons, to considering only a few aspects of training simulators. This report reflects these limitations and covers only the topics considered within the scope of the programme. The information in this document is intended as an aid for operating organizations to identify possible modelling approaches for training simulators for nuclear power plants. 33 refs
Research on Turbofan Engine Model above Idle State Based on NARX Modeling Approach
Yu, Bing; Shu, Wenjun
2017-03-01
A nonlinear model for a turbofan engine above idle state based on NARX is studied. First, data sets for the JT9D engine are obtained via simulation from an existing model. Then, a nonlinear modeling scheme based on NARX is proposed and several models with different parameters are built from these data sets. Finally, simulations are carried out to verify the precision and dynamic performance of the models; the results show that the NARX model reflects the dynamic characteristics of the turbofan engine with high accuracy.
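A NARX model predicts the current output from lagged outputs and inputs through a nonlinear regressor. A minimal identification can be sketched with polynomial lag features and least squares; the toy system below is a hypothetical illustration, not the JT9D engine model used in the paper.

```python
import numpy as np

def narx_features(y, u, ny=2, nu=2):
    """Build NARX regressors [1, y(k-1..k-ny), u(k-1..k-nu)] plus their
    squares; returns the regressor matrix and the matching targets."""
    k0 = max(ny, nu)
    rows = []
    for k in range(k0, len(y)):
        lags = np.concatenate([y[k - ny:k][::-1], u[k - nu:k][::-1]])
        rows.append(np.concatenate([[1.0], lags, lags**2]))
    return np.array(rows), y[k0:]

# Identify a toy nonlinear system from input/output data.
rng = np.random.default_rng(3)
u = rng.uniform(-1, 1, 400)
y = np.zeros(400)
for k in range(2, 400):
    y[k] = 0.5 * y[k - 1] - 0.1 * y[k - 2] ** 2 + 0.8 * u[k - 1]

Phi, target = narx_features(y, u)
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
pred = Phi @ theta
```

Because the toy system lies exactly in the span of the chosen lag features, one-step-ahead prediction is essentially exact here; for a real engine, the lag orders and nonlinear terms become the design parameters the abstract refers to.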
An approach to ductile fracture resistance modelling in pipeline steels
Energy Technology Data Exchange (ETDEWEB)
Pussegoda, L.N.; Fredj, A. [BMT Fleet Technology Ltd., Kanata (Canada)
2009-07-01
Ductile fracture resistance studies of high grade steels in the pipeline industry often included analyses of the crack tip opening angle (CTOA) parameter using 3-point bend steel specimens. The CTOA is a function of specimen ligament size in high grade materials. Other resistance measurements may include steady state fracture propagation energy, critical fracture strain, and the adoption of damage mechanisms. Modelling approaches for crack propagation were discussed in this abstract. Tension tests were used to calibrate damage model parameters. Results from the tests were then applied to the crack propagation in a 3-point bend specimen using modern 1980 vintage steels. Limitations and approaches to overcome the difficulties associated with crack propagation modelling were discussed.
High dimensions - a new approach to fermionic lattice models
International Nuclear Information System (INIS)
Vollhardt, D.
1991-01-01
The limit of high spatial dimensions d, which is well-established in the theory of classical and localized spin models, is shown to be a fruitful approach also to itinerant fermion systems, such as the Hubbard model and the periodic Anderson model. Many investigations which are prohibitively difficult in finite dimensions become tractable in d=∞. At the same time, essential features of systems in d=3 and even lower dimensions are very well described by the results obtained in d=∞. A wide range of applications of this new concept (e.g., in perturbation theory, Fermi liquid theory, variational approaches, exact results, etc.) is discussed and the state-of-the-art is reviewed. (orig.)
Sigma model approach to the heterotic string theory
International Nuclear Information System (INIS)
Sen, A.
1985-09-01
Relation between the equations of motion for the massless fields in the heterotic string theory, and the conformal invariance of the sigma model describing the propagation of the heterotic string in arbitrary background massless fields is discussed. It is emphasized that this sigma model contains complete information about the string theory. Finally, we discuss the extension of the Hull-Witten proof of local gauge and Lorentz invariance of the sigma-model to higher order in α', and the modification of the transformation laws of the antisymmetric tensor field under these symmetries. Presence of anomaly in the naive N = 1/2 supersymmetry transformation is also pointed out in this context. 12 refs
A fuzzy approach for modelling radionuclide in lake system
International Nuclear Information System (INIS)
Desai, H.K.; Christian, R.A.; Banerjee, J.; Patra, A.K.
2013-01-01
Radioactive liquid waste is generated during operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low-level liquid waste is diluted and then discharged into the nearby water body through the blowdown water discharge line, as per standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. An attempt was made to predict the concentration of 3H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under this condition, a fuzzy rule-based approach was adopted to develop a model that could predict the 3H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, in which the input parameters were water flow from the lake and the 3H concentration at the discharge point, and the output was the 3H concentration at Ratania Regulator. These data points were generated by multiple regression analysis of the original data. Using the same methodology, a further hundred data points were generated for validation of the model and compared against the predicted output of the fuzzy rule-based approach. The root mean square error of the model was 1.95, showing that the fuzzy model imitates the natural ecosystem well. -- Highlights: • Uncommon approach (fuzzy rule base) to modelling radionuclide dispersion in a lake. • Predicts 3H released from Kakrapar Atomic Power Station at a point of human exposure. • RMSE of the fuzzy model is 1.95, indicating it imitates the natural ecosystem well
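A fuzzy rule-based predictor of the kind described can be sketched as a tiny Mamdani-style system: triangular membership functions on the two inputs (flow and discharge-point concentration), rule firing by fuzzy AND (min), and weighted-average defuzzification. All membership breakpoints and rule consequents below are hypothetical illustrations, not the paper's calibrated rules.

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b (a < b < c)."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def predict_conc(flow, c_discharge):
    """Toy rule base relating lake water flow and the concentration at
    the discharge point to the concentration at a downstream regulator
    (all numbers hypothetical)."""
    flow_low = tri(flow, 0.0, 20.0, 60.0)
    flow_high = tri(flow, 40.0, 80.0, 120.0)
    c_low = tri(c_discharge, 0.0, 2.0, 6.0)
    c_high = tri(c_discharge, 4.0, 8.0, 12.0)
    # (firing strength, crisp consequent) pairs; min acts as fuzzy AND
    rules = [(min(flow_low, c_low), 0.5),
             (min(flow_low, c_high), 4.0),
             (min(flow_high, c_low), 0.2),
             (min(flow_high, c_high), 2.0)]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

c_pred = predict_conc(flow=30.0, c_discharge=6.0)
```

With low flow and high discharge concentration, only the second rule fires fully, so the prediction equals that rule's consequent; intermediate inputs blend several rules, which is what lets a rule base stand in for a mechanistic dispersion model when data are scarce.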
Modeling energy fluxes in heterogeneous landscapes employing a mosaic approach
Klein, Christian; Thieme, Christoph; Priesack, Eckart
2015-04-01
Recent studies show that uncertainties in regional and global climate and weather simulations are partly due to inadequate descriptions of the energy flux exchanges between the land surface and the atmosphere. One major shortcoming is the limitation of the grid-cell resolution, which is recommended to be at least about 3x3 km² in most models due to limitations in the model physics. To represent each individual grid cell, most models select one dominant soil type and one dominant land use type. This resolution, however, is often too coarse in regions where the spatial diversity of soil and land use types is high, e.g. in Central Europe. An elegant method to avoid this shortcoming of grid-cell resolution is the so-called mosaic approach, which is part of the recently developed ecosystem model framework Expert-N 5.0. The aim of this study was to analyze the impact of the characteristics of two managed fields, planted with winter wheat and potato, on the near-surface soil moisture and on the near-surface energy flux exchanges at the soil-plant-atmosphere interface. The simulated energy fluxes were compared with eddy flux tower measurements between the respective fields at the research farm Scheyern, north-west of Munich, Germany. To perform these simulations, we coupled the ecosystem model Expert-N 5.0 to an analytical footprint model. The coupled model system can calculate the mixing ratio of the surface energy fluxes at a given point within one grid cell (in this case at the flux tower between the two fields). This approach accounts for the differences between the two soil types, land use managements, and canopy properties due to footprint size dynamics. Our preliminary simulation results show that a mosaic approach can improve the modeling and analysis of energy fluxes when the land surface is heterogeneous. In this case our applied method is a promising approach for extending weather and climate models on the regional and global scale.
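The footprint-weighted mixing of per-field fluxes reduces to simple arithmetic once the footprint weights are known. The weights and flux values below are hypothetical; in the study, an analytical footprint model derives the weights from wind direction and atmospheric stability.

```python
import numpy as np

def tower_flux(fluxes, footprint_weights):
    """Flux seen at the tower approximated as a footprint-weighted
    mixture of the per-field simulated fluxes (mosaic approach sketch)."""
    w = np.asarray(footprint_weights, dtype=float)
    w = w / w.sum()                      # weights must sum to one
    return float(np.dot(w, fluxes))

# Hypothetical latent heat fluxes (W m^-2) simulated for the two fields:
le_wheat, le_potato = 180.0, 120.0
le_tower = tower_flux([le_wheat, le_potato], footprint_weights=[0.6, 0.4])
```

The mixed value always lies between the per-field fluxes, and shifts toward whichever field dominates the footprint as the wind turns.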
DEFF Research Database (Denmark)
Simonsen, Kent Inge; Kristensen, Lars Michael
2013-01-01
Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model...... is beneficial both in terms of reduced total modelling effort and confidence that the verification results are valid also for the implementation model. In this paper we introduce the concept of a descriptive specification model and an approach based on refining a descriptive model to target both verification...... how this model can be refined to target both verification and implementation....
Energy Technology Data Exchange (ETDEWEB)
Wessel, Silvia [Ballard Materials Products; Harvey, David [Ballard Materials Products
2013-06-28
The durability of PEM fuel cells is a primary requirement for large-scale commercialization of these power systems in transportation and stationary market applications, which target operational lifetimes of 5,000 hours and 40,000 hours by 2015, respectively. Key degradation modes contributing to fuel cell lifetime limitations have been largely associated with the platinum-based cathode catalyst layer. Furthermore, as fuel cells are driven to low-cost materials and lower catalyst loadings in order to meet the cost targets for commercialization, catalyst durability has become even more important. While over the past few years significant progress has been made in identifying the underlying causes of fuel cell degradation and key parameters that greatly influence the degradation rates, many gaps in knowledge of the driving mechanisms still exist; in particular, the acceleration of the mechanisms due to different structural compositions and under different fuel cell conditions remains an area not well understood. The focus of this project was to address catalyst durability by using a dual-path approach that coupled an extensive range of experimental analysis and testing with a multi-scale modeling approach. With this, the major technical areas/issues of catalyst and catalyst layer performance and durability that were addressed are:
1. Catalyst and catalyst layer degradation mechanisms (Pt dissolution, agglomeration, Pt loss, e.g. Pt in the membrane, carbon oxidation and/or corrosion).
   a. Driving force for the different degradation mechanisms.
   b. Relationships between MEA performance, catalyst and catalyst layer degradation and operational conditions, catalyst layer composition, and structure.
2. Materials properties.
   a. Changes in catalyst, catalyst layer, and MEA materials properties due to degradation.
3. Catalyst performance.
   a. Relationships between catalyst structural changes and performance.
   b. Stability of the three-phase boundary and its effect on
Model-Driven Approach for Body Area Network Application Development.
Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata
2016-05-12
This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain though transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables to obtain the adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and re-generation process for the concrete BAN application.
Final report of MoReMO 2011-2012. Modelling resilience for maintenance and outage
International Nuclear Information System (INIS)
Gotcheva, N.; Macchi, L.; Oedewald, P.; Eitrheim, M.H.R.; Axelsson, C.; Reiman, T.; Pietikaeinen, E.
2013-04-01
The project Modelling Resilience for Maintenance and Outage (MoReMO) represents a two-year joint effort by VTT Technical Research Centre of Finland, the Institute for Energy Technology (IFE, Norway) and Vattenfall (Sweden) to develop and test new approaches for safety management. The overall goal of the project was to present concepts on how resilience can be operationalized and built in a safety-critical, socio-technical context. The project also aimed at providing guidance for other organizations that strive to develop and improve their safety performance in a business-driven industry. We applied four approaches in different case studies: Organisational Core Task modelling (OCT), the Functional Resonance Analysis Method (FRAM), Efficiency Thoroughness Trade-Off (ETTO) analysis, and Work Practice and Culture Characterisation. During 2011 and 2012 the MoReMO project team collected data through field observations, interviews, workshops, and document analysis on work practices and adjustments in maintenance and outage in Nordic NPPs. The project consisted of two sub-studies, one focused on identifying and assessing adjustments and supporting resilient work practices in maintenance activities, the other on handling performance trade-offs in maintenance and outage, as follows: A. Adjustments in maintenance work in Nordic nuclear power plants (VTT and Vattenfall). B. Handling performance trade-offs - the support of adaptive capacities (IFE and Vattenfall). The historical perspective of maintenance and outage management (Chapter 1.1) was provided by Vattenfall. Together, the two sub-studies provided valuable insights for understanding the rationale behind work practices and adjustments, their effects on resilience, and the balance between flexibility and reliability. (Author)
Unraveling the Mechanisms of Manual Therapy: Modeling an Approach.
Bialosky, Joel E; Beneciuk, Jason M; Bishop, Mark D; Coronado, Rogelio A; Penza, Charles W; Simon, Corey B; George, Steven Z
2018-01-01
Synopsis Manual therapy interventions are popular among individual health care providers and their patients; however, systematic reviews do not strongly support their effectiveness. Small treatment effect sizes of manual therapy interventions may result from a "one-size-fits-all" approach to treatment. Mechanistic-based treatment approaches to manual therapy offer an intriguing alternative for identifying patients likely to respond to manual therapy. However, the current lack of knowledge of the mechanisms through which manual therapy interventions inhibit pain limits such an approach. The nature of manual therapy interventions further confounds such an approach, as the related mechanisms are likely a complex interaction of factors related to the patient, the provider, and the environment in which the intervention occurs. Therefore, a model to guide both study design and the interpretation of findings is necessary. We have previously proposed a model suggesting that the mechanical force from a manual therapy intervention results in systemic neurophysiological responses leading to pain inhibition. In this clinical commentary, we provide a narrative appraisal of the model and recommendations to advance the study of manual therapy mechanisms. J Orthop Sports Phys Ther 2018;48(1):8-18. doi:10.2519/jospt.2018.7476.
Site-conditions map for Portugal based on VS measurements: methodology and final model
Vilanova, Susana; Narciso, João; Carvalho, João; Lopes, Isabel; Quinta Ferreira, Mario; Moura, Rui; Borges, José; Nemser, Eliza; Pinto, carlos
2017-04-01
In this paper we present a statistically significant site-condition model for Portugal based on shear-wave velocity (VS) data and surface geology. We also evaluate the performance of commonly used Vs30 proxies based on exogenous data and analyze the implications of using those proxies for calculating site amplification in seismic hazard assessment. The dataset contains 161 Vs profiles acquired in Portugal in the context of research projects, technical reports, academic theses and academic papers. The methodologies involved in characterizing the Vs structure at the sites in the database include seismic refraction, multichannel analysis of seismic waves and refraction microtremor. Invasive measurements were performed in selected locations in order to compare the Vs profiles obtained from invasive and non-invasive techniques. In general there was good agreement between the Vs30 subsurface structures obtained from the different methodologies. The database flat-file includes information on Vs30, surface geology at 1:50.000 and 1:500.000 scales, and elevation and topographic slope based on the SRTM30 topographic dataset. The procedure used to develop the site-conditions map is a three-step process: defining a preliminary set of geological units based on the literature, performing statistical tests to assess whether or not the differences in the distributions of Vs30 are statistically significant, and merging the geological units accordingly. The dataset was, to some extent, affected by clustering and/or preferential sampling, and therefore a declustering algorithm was applied. The final model includes three geological units: 1) Igneous, metamorphic and old (Paleogene and Mesozoic) sedimentary rocks; 2) Neogene and Pleistocene formations, and 3) Holocene formations. The evaluation of proxies indicates that although geological analogues and topographic slope are in general unbiased, the latter shows significant bias for particular geological units and
Polynomial fuzzy model-based approach for underactuated surface vessels
DEFF Research Database (Denmark)
Khooban, Mohammad Hassan; Vafamand, Navid; Dragicevic, Tomislav
2018-01-01
The main goal of this study is to introduce a new polynomial fuzzy model-based structure for a class of marine systems with non-linear and polynomial dynamics. The suggested technique relies on a polynomial Takagi–Sugeno (T–S) fuzzy modelling, a polynomial dynamic parallel distributed compensation...... surface vessel (USV). Additionally, in order to overcome the USV control challenges, including the USV un-modelled dynamics, complex nonlinear dynamics, external disturbances and parameter uncertainties, the polynomial fuzzy model representation is adopted. Moreover, the USV-based control structure...... and a sum-of-squares (SOS) decomposition. The new proposed approach is a generalisation of the standard T–S fuzzy models and linear matrix inequality which indicated its effectiveness in decreasing the tracking time and increasing the efficiency of the robust tracking control problem for an underactuated...
Bayesian approach to errors-in-variables in regression models
Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad
2017-05-01
In many applications and experiments, data sets are contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, causes misleading statistical inference and analysis. Therefore, our goal is to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying a Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
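Not the paper's Bayesian machinery, but a quick numerical illustration (invented data, pure NumPy) of why uncorrected covariate error misleads inference in Poisson regression: the maximum-likelihood slope estimated from the mismeasured surrogate is attenuated toward zero relative to the slope estimated from the true exposure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(0.0, 1.0, n)             # unobserved true exposure
w = x + rng.normal(0.0, 0.7, n)         # observed surrogate, measured with error
y = rng.poisson(np.exp(0.2 + 0.5 * x))  # Poisson outcome, true slope 0.5

def mle_slope(cov, y):
    """Maximise the Poisson log-likelihood over a coarse (alpha, beta) grid."""
    best_b, best_ll = 0.0, -np.inf
    for a in np.linspace(-0.5, 0.5, 21):
        for b in np.linspace(0.0, 1.0, 101):
            eta = a + b * cov
            ll = np.sum(y * eta - np.exp(eta))  # log-likelihood up to a constant
            if ll > best_ll:
                best_ll, best_b = ll, b
    return best_b

b_true = mle_slope(x, y)   # slope using the true exposure: near 0.5
b_naive = mle_slope(w, y)  # slope using the surrogate: attenuated toward zero
```

The gap between `b_true` and `b_naive` is the bias an EIV treatment, Bayesian or otherwise, is designed to remove.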
A hidden Markov model approach to neuron firing patterns.
Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G
1996-11-01
Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
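The likelihood computation underlying the maximum-likelihood fit can be sketched with the standard scaled forward algorithm. The sketch below assumes exponentially distributed interspike intervals per state (a stand-in for whichever state-specific densities the fitted model uses); all parameter values are invented.

```python
import numpy as np

def hmm_loglik(intervals, pi0, A, rates):
    """Scaled forward algorithm: log-likelihood of an interspike-interval
    sequence under an HMM whose state k emits Exp(rates[k]) intervals."""
    emis = rates * np.exp(-np.outer(intervals, rates))  # (T, K) interval densities
    alpha = pi0 * emis[0]
    logl = 0.0
    for t in range(1, len(intervals)):
        c = alpha.sum()                  # scaling constant, accumulated in log space
        logl += np.log(c)
        alpha = ((alpha / c) @ A) * emis[t]
    return logl + np.log(alpha.sum())

# Simulate a 2-state train: a fast-firing and a slow-firing state (invented numbers).
rng = np.random.default_rng(1)
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # state transition probabilities
rates = np.array([10.0, 1.0])            # per-state firing rates
state, seq = 0, []
for _ in range(500):
    seq.append(rng.exponential(1.0 / rates[state]))
    state = rng.choice(2, p=A[state])
seq = np.asarray(seq)

pi0 = np.array([0.5, 0.5])
ll_true = hmm_loglik(seq, pi0, A, rates)                  # generating parameters
ll_wrong = hmm_loglik(seq, pi0, A, np.array([3.0, 3.0]))  # states collapsed to one rate
```

Maximizing this log-likelihood over the transition matrix and emission parameters, for varying numbers of states, is what lets the fitted model distinguish the two-state and four-state regimes described in the abstract.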
Pressure sintering and creep deformation: a joint modeling approach
International Nuclear Information System (INIS)
Notis, M.R.
1979-10-01
Work related to microchemical and microstructural aspects of the joint modeling of pressure sintering and creep in ceramic oxides is reported. Quantitative techniques were developed for the microchemical analysis of ceramic oxides and for the examination of impurity segregation effects in polycrystalline ceramic materials, including fundamental absorption corrections for the oxygen anion species as a function of foil thickness. The evolution of microstructure during the transition from intermediate-stage to final-stage densification during hot pressing of cobalt oxide was studied, along with preliminary studies of doped oxides. This work shows promise for using time-integrated microstructural effects to elucidate the role of impurities in the sintering of ceramic materials.
Zuhdi, Ubaidillah
2014-03-01
The purpose of this study is to analyze the impacts of final demand changes on the total output of Japanese Information and Communication Technologies (ICT) sectors in future time. The study employs one of the analysis tools of Input-Output (IO) analysis, the demand-pull IO quantity model, to achieve this purpose. Three final demand changes are used in this study, namely (1) export, (2) import, and (3) outside-households consumption changes. The study focuses on the "pure change" condition, in which final demand changes appear only in the analyzed sectors. The results show that the export and outside-households consumption changes give positive impacts, while the import change gives a negative impact.
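The demand-pull quantity model can be sketched in a few lines. With technical-coefficient matrix A, total output satisfies x = (I − A)⁻¹ f, so a final-demand change Δf maps to an output change Δx = (I − A)⁻¹ Δf. The 3-sector matrix and the demand figures below are invented for illustration, not taken from Japan's IO tables; sector 2 stands in for an ICT sector.

```python
import numpy as np

# Hypothetical technical-coefficient matrix (column j = inputs per unit of sector j output).
A = np.array([[0.10, 0.05, 0.02],
              [0.08, 0.15, 0.10],
              [0.03, 0.07, 0.05]])
f = np.array([100.0, 200.0, 150.0])   # baseline final demand
df = np.array([0.0, 20.0, 0.0])       # "pure change": export rise in the ICT sector only

L = np.linalg.inv(np.eye(3) - A)      # Leontief inverse
x0 = L @ f                            # baseline total output
x1 = L @ (f + df)                     # total output after the demand change
dx = x1 - x0                          # output impact, equal to L @ df
```

Because the Leontief inverse is nonnegative, a positive export or consumption change raises output in every sector, while an import change (a negative entry in Δf) lowers it, matching the abstract's finding.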
Practical modeling approaches for geological storage of carbon dioxide.
Celia, Michael A; Nordbotten, Jan M
2009-01-01
The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, and the overall mathematical description of the complete system becomes very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to modeling geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.
A fuzzy approach for modelling radionuclide in lake system.
Desai, H K; Christian, R A; Banerjee, J; Patra, A K
2013-10-01
Radioactive liquid waste is generated during operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low-level liquid waste is diluted and then discharged into the nearby water body through the blowdown water discharge line, as per standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. An attempt was made to predict the concentration of (3)H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under this condition, a fuzzy rule-based approach was adopted to develop a model that could predict the (3)H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, in which the input parameters were the water flow from the lake and the (3)H concentration at the discharge point; the output was the (3)H concentration at Ratania Regulator. These data points were generated by multiple regression analysis of the original data. Using the same methodology, a hundred data points were generated for validation of the model and compared against the output predicted by the fuzzy rule-based approach. The root mean square error of the model came out to be 1.95, indicating good agreement between the fuzzy model and the natural ecosystem. Copyright © 2013 Elsevier Ltd. All rights reserved.
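The rule-based inference step can be sketched as a zero-order Sugeno system with two inputs (flow and discharge-point concentration) and crisp rule consequents. The membership ranges and consequent values below are invented for illustration; they are not the station's calibrated rules.

```python
import numpy as np

def low(x, span):   # membership of "low": linear shoulder falling over [0, span]
    return float(np.clip(1.0 - x / span, 0.0, 1.0))

def high(x, span):  # membership of "high": linear shoulder rising over [0, span]
    return float(np.clip(x / span, 0.0, 1.0))

def tritium_at_regulator(flow, conc_discharge):
    """Zero-order Sugeno inference: rule strength = min of antecedent
    memberships; output = strength-weighted average of crisp consequents."""
    rules = [  # (strength, hypothetical consequent concentration at the regulator)
        (min(low(flow, 100), low(conc_discharge, 20)),   1.0),
        (min(low(flow, 100), high(conc_discharge, 20)),  8.0),
        (min(high(flow, 100), low(conc_discharge, 20)),  0.5),
        (min(high(flow, 100), high(conc_discharge, 20)), 4.0),
    ]
    w = sum(s for s, _ in rules)
    return sum(s * v for s, v in rules) / w if w else 0.0
```

A real rule base of this form would be tuned against the three hundred regression-generated points and scored on the hundred validation points, as the abstract describes.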
Energy Technology Data Exchange (ETDEWEB)
Hall, Alex [University of California, Los Angeles, CA (United States). Joint Institute for Regional Earth System Science and Engineering
2013-07-24
the mostly dry mountain-breeze circulations force an additional component that results in semi-diurnal variations near the coast. A series of numerical tests, however, reveal sensitivity of the simulations to the choice of vertical grid, limiting the possibility of solid quantitative statements on the amplitudes and phases of the diurnal and semidiurnal components across the domain. According to our experiments, the combination of the Mellor-Yamada-Nakanishi-Niino (MYNN) boundary layer scheme and the WSM6 microphysics scheme performs best. For that combination, mean cloud cover, liquid water path, and cloud depth are fairly well simulated, while mean cloud top height remains too low in comparison to observations. Both microphysics and boundary layer schemes contribute to the spread in liquid water path and cloud depth, although the microphysics contribution is slightly more prominent. Boundary layer schemes are the primary contributors to cloud top height, degree of adiabaticity, and cloud cover. Cloud top height is closely related to surface fluxes and boundary layer structure. Thus, our study infers that an appropriate tuning of cloud top height would likely improve the low-cloud representation in the model. Finally, we show that entrainment governs the degree of adiabaticity, while boundary layer decoupling is a control on cloud cover. In the intercomparison study using WRF single-column model experiments, most parameterizations show poor agreement of the vertical boundary layer structure when compared with large-eddy simulation models. We also implement a new Total-Energy/Mass-Flux boundary layer scheme into the WRF model and evaluate its ability to simulate both stratocumulus and shallow cumulus clouds. Result comparisons against large-eddy simulation show that this advanced parameterization based on the new Eddy-Diffusivity/Mass-Flux approach provides a better performance than other boundary layer parameterizations.
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
Directory of Open Access Journals (Sweden)
W. Bastiaan Kleijn
2005-06-01
Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
A modal approach to modeling spatially distributed vibration energy dissipation.
Energy Technology Data Exchange (ETDEWEB)
Segalman, Daniel Joseph
2010-08-01
The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.
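The postulate above, linear modes preserved and the nonlinearity confined to each modal coordinate's dissipation, can be caricatured with a single modal oscillator whose damping force follows a power law in velocity. This is a generic stand-in chosen for illustration, not the report's specific constitutive form; all parameter values are invented.

```python
import numpy as np

def modal_decay(omega=2 * np.pi, c=0.05, p=1.5, q0=1.0, dt=1e-3, T=10.0):
    """Single modal coordinate with power-law dissipation:
    q'' + c*|q'|**p * sign(q') + omega**2 * q = 0,
    integrated by semi-implicit Euler. Returns the successive displacement peaks."""
    q, v = q0, 0.0
    peaks, prev_v = [], v
    for _ in range(int(T / dt)):
        a = -omega**2 * q - c * np.sign(v) * abs(v)**p
        v += a * dt
        q += v * dt
        if prev_v > 0 >= v and q > 0:   # velocity crosses + to - at a displacement peak
            peaks.append(q)
        prev_v = v
    return peaks

peaks = modal_decay()   # amplitude-dependent damping: peaks shrink every cycle
```

The measurable signature this reproduces is exactly the one the abstract cites: the mode shape and frequency stay essentially fixed while the effective damping varies with amplitude.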
Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model
Directory of Open Access Journals (Sweden)
Neha Gupta
2013-12-01
In the present paper, we consider the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and discuss two different models. In the first model the reliabilities of the subsystems are considered as different objectives. In the second model the cost and time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective Nonlinear Programming Problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership functions of each objective function, transform them into equivalent linear membership functions by first-order Taylor series, and finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.
Energy Technology Data Exchange (ETDEWEB)
Walton, W.C.; Voorhees, M.L.; Prickett, T.A.
1980-05-23
This technical memorandum was prepared to: (1) describe a typical basalt radionuclide repository site, (2) describe geologic and hydrologic processes associated with regional radionuclide transport in basalts, (3) define the parameters required to model regional radionuclide transport from a basalt repository site, and (4) develop a "conceptual model" of radionuclide transport from a basalt repository site. In a general hydrological sense, basalts may be described as layered sequences of aquifers and aquitards. The Columbia River Basalt, centered near the semi-arid Pasco Basin, is considered by many to be typical basalt repository host rock. Detailed descriptions of the flow system, including flow velocities within high-low hydraulic conductivity sequences, are not possible with existing data. However, according to theory, waste-transport routes are ultimately towards the Columbia River, and the lengths of flow paths from the repository to the biosphere may be relatively short. There are many physical, chemical, thermal, and nuclear processes with associated parameters that together determine the possible pattern of radionuclide migration in basalts and surrounding formations. Brief process descriptions and associated parameter lists are provided. Emphasis has been placed on the use of the distribution coefficient in simulating ion exchange. The distribution coefficient approach is limited because it takes into account only relatively fast mass transfer processes. In general, knowledge of hydrogeochemical processes is primitive.
International Nuclear Information System (INIS)
Mahmood, N.; Burney, S.M.A.
2017-01-01
Everything in this world is encapsulated by the fence of space and time. Our daily life activities are closely linked with other objects in our vicinity; therefore, a strong relationship exists between our current location, time (past, present and future) and the events through which we move, which in turn affect our activities in life. Ontology development and its integration with databases are vital for a true understanding of complex systems involving both spatial and temporal dimensions. In this paper we propose a conceptual framework for building a spatio-temporal database model based on an ontological approach. We use the relational data model for modelling spatio-temporal data content and present our methodology for capturing spatio-temporal ontological aspects and transforming them into a spatio-temporal database model. We illustrate the implementation of our conceptual model through a case study of cultivated land parcels used for agriculture, exhibiting the spatio-temporal behaviour of agricultural land and related entities. Moreover, the framework provides a generic approach for designing spatio-temporal databases based on ontology. The proposed model is capable of capturing the ontological and, to some extent, epistemological commitments, building a spatio-temporal ontology and transforming it into a spatio-temporal data model. Finally, we highlight existing and future research challenges. (author)
A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.
Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C
2017-07-01
Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.
A Bayesian Approach for Structural Learning with Hidden Markov Models
Directory of Open Access Journals (Sweden)
Cen Li
2002-01-01
Hidden Markov models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, in other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism is available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
A Novel Approach to Implement Takagi-Sugeno Fuzzy Models.
Chang, Chia-Wen; Tao, Chin-Wang
2017-09-01
This paper proposes new algorithms based on the fuzzy c-regression model algorithm for Takagi-Sugeno (T-S) fuzzy modeling of complex nonlinear systems. The fuzzy c-regression state model (FCRSM) algorithm is a T-S fuzzy model in which the functional antecedent and the state-space-model-type consequent are considered with the available input-output data. The antecedent and consequent forms of the proposed FCRSM offer two main advantages: first, the FCRSM has a low computational load because only one input variable is considered in the antecedent part; second, the unknown system can be modeled in not only polynomial form but also state-space form. Moreover, the FCRSM can be extended to the FCRSM-ND and FCRSM-Free algorithms. The FCRSM-ND algorithm finds the T-S fuzzy state-space model of a nonlinear system when the input-output data cannot be precollected and an assumed effective controller is available. In practical applications, the mathematical model of the controller may be hard to obtain. In this case, an online tuning algorithm, FCRSM-Free, is designed such that the parameters of a T-S fuzzy controller and the T-S fuzzy state model of an unknown system can be tuned online simultaneously. Four numerical simulations are given to demonstrate the effectiveness of the proposed approach.
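A minimal sketch of the fuzzy c-regression idea underlying the FCRSM family (plain linear consequents and invented data, not the paper's state-space formulation): each local model is refit by membership-weighted least squares, and the memberships are in turn driven by each model's squared residual.

```python
import numpy as np

def fcrm(x, y, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-regression: fit c local linear models y ~ a*x + b with
    memberships proportional to inverse powers of each model's squared residual."""
    rng = np.random.default_rng(seed)
    n = len(x)
    U = rng.dirichlet(np.ones(c), size=n)      # (n, c) fuzzy memberships
    X = np.column_stack([x, np.ones(n)])       # design matrix [x, 1]
    coefs = np.zeros((c, 2))
    for _ in range(iters):
        for k in range(c):                     # membership-weighted least squares
            w = U[:, k] ** m
            coefs[k] = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        E = (y[:, None] - X @ coefs.T) ** 2 + 1e-12   # (n, c) squared residuals
        U = E ** (-1.0 / (m - 1.0))            # standard fuzzy c-means-style update
        U /= U.sum(axis=1, keepdims=True)
    return coefs, U

# Sanity check on a single noisy line y = 3x - 2: both regressors should recover it.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 200)
y = 3.0 * x - 2.0 + 0.05 * rng.normal(size=200)
coefs, U = fcrm(x, y)
```

On data mixing several linear regimes, the same loop partitions the points softly among the local models, which is the clustering step the T-S antecedents are then built from.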
Approach to Organizational Structure Modelling in Construction Companies
Directory of Open Access Journals (Sweden)
Ilin Igor V.
2016-01-01
An effective management system is one of the key factors of business success nowadays. Construction companies usually have a portfolio of independent projects running at the same time. It is therefore reasonable to take into account the project orientation of this kind of business when designing a construction company's management system, whose main components are the business process system and the organizational structure. The paper describes a management structure design approach based on the project-oriented nature of construction projects, and proposes a model of the organizational structure for a construction company. Applying the proposed approach makes it possible to assign responsibilities effectively within the organizational structure of construction projects, and thus to shorten the time for project allocation and to ensure smoother execution. A practical case of using the approach is also provided in the paper.
Energy Technology Data Exchange (ETDEWEB)
Peter J. Mucha
2007-08-30
Suspensions of solid particles in liquids appear in numerous applications, from environmental settings like river silt, to industrial systems of solids transport and water treatment, and biological flows such as blood flow. Despite their importance, much remains unexplained about these complicated systems. Mucha's research aims to improve understanding of basic properties of suspensions through a program of simulating model interacting particle systems with critical evaluation of proposed continuum equations, in close collaboration with experimentalists. Natural to this approach, the original proposal centered around collaboration with studies already conducted in various experimental groups. However, as was detailed in the 2004 progress report, following the first year of this award, a number of the questions from the original proposal were necessarily redirected towards other specific goals because of changes in the research programs of the proposed experimental collaborators. Nevertheless, the modified project goals and the results that followed from those goals maintain close alignment with the main themes of the original proposal, improving efficient simulation and macroscopic modeling of sedimenting and colloidal suspensions. In particular, the main investigations covered under this award have included: (1) Sedimentation instabilities, including the sedimentation analogue of the Rayleigh-Taylor instability (for heavy, particle-laden fluid over lighter, clear fluid). (2) Ageing dynamics of colloidal suspensions at concentrations above the glass transition, using simplified interactions. (3) Stochastic reconstruction of velocity-field dependence for particle image velocimetry (PIV). (4) Stochastic modeling of the near-wall bias in 'nano-PIV'. (5) Distributed Lagrange multiplier simulation of the 'internal splash' of a particle falling through a stable stratified interface. (6) Fundamental study of velocity fluctuations in sedimentation
A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins
Gronewold, A.; Alameddine, I.; Anderson, R. M.
2009-12-01
States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program, as well as those addressing coastal population dynamics and sea level rise. Our approach has several advantages, including the propagation of parameter uncertainty through a nonparametric probability distribution which avoids common pitfalls of fitting parameters and model error structure to a predetermined parametric distribution function. In addition, by explicitly acknowledging correlation between model parameters (and reflecting those correlations in our predictive model) our model yields relatively efficient prediction intervals (unlike those in the current literature which are often unnecessarily large, and may lead to overly-conservative management actions). Finally, our model helps improve understanding of the rainfall-runoff process by identifying model parameters (and associated catchment attributes) which are most sensitive to current and future land use change patterns. Disclaimer: Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.
A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model
DEFF Research Database (Denmark)
Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen
2007-01-01
This paper presents a decision support system for assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertaintie...
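As a small illustration of the AHP step mentioned above (with an invented 3-criterion pairwise comparison matrix, not the CBA-DK calibration): the criterion weights are the normalized principal eigenvector of the comparison matrix, and the consistency index checks that the judgments are close to transitive.

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons among three decision criteria:
# entry (i, j) says how strongly criterion i is preferred over criterion j.
P = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

vals, vecs = np.linalg.eig(P)
k = np.argmax(vals.real)
w = vecs[:, k].real
w = w / w.sum()                        # AHP priority weights, summing to 1
ci = (vals.real[k] - 3) / (3 - 1)      # consistency index; small means coherent judgments
```

In a composite approach like COSIMA, weights obtained this way score the non-monetized criteria that sit alongside the CBA-DK cost-benefit results.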
2017-01-03
This final rule implements three new Medicare Parts A and B episode payment models, a Cardiac Rehabilitation (CR) Incentive Payment model and modifications to the existing Comprehensive Care for Joint Replacement model under section 1115A of the Social Security Act. Acute care hospitals in certain selected geographic areas will participate in retrospective episode payment models targeting care for Medicare fee-for-service beneficiaries receiving services during acute myocardial infarction, coronary artery bypass graft, and surgical hip/femur fracture treatment episodes. All related care within 90 days of hospital discharge will be included in the episode of care. We believe these models will further our goals of improving the efficiency and quality of care for Medicare beneficiaries receiving care for these common clinical conditions and procedures.
Application of declarative modeling approaches for external events
International Nuclear Information System (INIS)
Anoba, R.C.
2005-01-01
Probabilistic Safety Assessments (PSAs) are increasingly being used as a tool for supporting the acceptability of design, procurement, construction, operation, and maintenance activities at Nuclear Power Plants. Since the issuance of Generic Letter 88-20 and subsequent IPE/IPEEE assessments, the NRC has issued several Regulatory Guides such as RG 1.174 to describe the use of PSA in risk-informed regulation activities. Most PSAs have the capability to address internal events including internal floods. As more demands are placed on using the PSA to support risk-informed applications, there has been a growing need to integrate other external events (Seismic, Fire, etc.) into the logic models. Most external events involve spatial dependencies and usually impact the logic models at the component level. Therefore, manual insertion of external event impacts into a complex integrated fault tree model may be too cumbersome for routine uses of the PSA. Within the past year, a declarative modeling approach has been developed to automate the injection of external events into the PSA. The intent of this paper is to introduce the concept of declarative modeling in the context of external event applications. A declarative modeling approach involves the definition of rules for injection of external event impacts into the fault tree logic. A software tool such as EPRI's XInit program can be used to interpret the pre-defined rules and automatically inject external event elements into the PSA. The injection process can easily be repeated, as required, to address plant changes, sensitivity issues, changes in boundary conditions, etc. External event elements may include fire initiating events, seismic initiating events, seismic fragilities, fire-induced hot short events, special human failure events, etc. This approach has been applied at a number of US nuclear power plants including a nuclear power plant in Romania. (authors)
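The declarative idea described above, rules that match component attributes (such as location) and inject external-event basic events into the fault tree logic, can be sketched in miniature. The rule format and all identifiers are hypothetical, not EPRI XInit syntax:

```python
# Toy fault tree: each OR gate maps to its list of basic events.
fault_tree = {
    "PUMP-A-FAILS": ["PUMP-A-MECH"],
    "VALVE-B-FAILS": ["VALVE-B-MECH"],
}
# Spatial attribute used by the rules (external events are spatial).
locations = {"PUMP-A-FAILS": "ROOM-101", "VALVE-B-FAILS": "ROOM-202"}

rules = [
    # (predicate on the component's location, basic event to inject)
    (lambda loc: loc == "ROOM-101", "FIRE-ROOM-101"),
    (lambda loc: True, "SEISMIC-FRAGILITY"),  # seismic affects all rooms
]

def inject(tree, locs, rule_set):
    """Apply every rule to every gate; the membership check makes the
    injection repeatable, as the abstract requires for plant changes."""
    for gate, events in tree.items():
        for predicate, ext_event in rule_set:
            if predicate(locs[gate]) and ext_event not in events:
                events.append(ext_event)
    return tree

inject(fault_tree, locations, rules)
print(fault_tree)
```

Because the rules, not the analyst, decide where each external-event element lands, re-running the injection after a plant change or sensitivity study is a one-line operation.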
Understanding complex urban systems multidisciplinary approaches to modeling
Gurr, Jens; Schmidt, J
2014-01-01
Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...
A Variational Approach to the Modeling of MIMO Systems
Directory of Open Access Journals (Sweden)
Jraifi A
2007-01-01
Full Text Available Motivated by the study of the optimization of the quality of service for multiple input multiple output (MIMO) systems in 3G (third generation), we develop a method for modeling the MIMO channel. This method, which uses a statistical approach, is based on a variational form of the usual channel equation. The proposed equation is given by with scalar variable . The minimum distance of received vectors is used as the random variable to model the MIMO channel. This variable is of crucial importance for the performance of the transmission system, as it captures the degree of interference between neighboring vectors. We then use this approach to compute numerically the total probability of error with respect to the signal-to-noise ratio (SNR) and then predict the number of antennas. By fixing the SNR to a specific value, we extract information on the optimal number of MIMO antennas.
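The minimum-distance random variable described above can be sampled by Monte Carlo. This sketch uses a 2x2 Rayleigh channel with BPSK symbols, all assumptions of this illustration rather than the paper's variational formulation:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
nt, nr = 2, 2                     # transmit / receive antennas (assumed)
symbols = np.array([1.0, -1.0])   # BPSK per antenna

# All 2^nt candidate transmit vectors
tx = np.array(list(itertools.product(symbols, repeat=nt)))

def min_received_distance(H):
    """Minimum pairwise distance between noiseless received vectors H @ s.
    Small values mean neighboring received vectors interfere strongly."""
    rx = tx @ H.T
    d = np.linalg.norm(rx[:, None, :] - rx[None, :, :], axis=-1)
    return d[np.triu_indices(len(rx), k=1)].min()

# Sample Rayleigh-like channels; the empirical distribution of the
# minimum distance is the channel model of the statistical approach.
dists = [min_received_distance(rng.normal(size=(nr, nt)) / np.sqrt(2))
         for _ in range(2000)]
print(np.mean(dists))
```

From the sampled distribution one could then estimate an error probability at a given SNR and repeat for different antenna counts.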
On quantum approach to modeling of plasmon photovoltaic effect
DEFF Research Database (Denmark)
Kluczyk, Katarzyna; David, Christin; Jacak, Witold Aleksander
2017-01-01
Surface plasmons in metallic nanostructures including metallically nanomodified solar cells are conventionally studied and modeled by application of the Mie approach to plasmons or by the finite element solution of differential Maxwell equations with imposed boundary and material constraints (e...... to the semiconductor solar cell mediated by surface plasmons in metallic nanoparticles deposited on the top of the battery. In addition, short-ranged electron-electron interaction in metals is discussed in the framework of the semiclassical hydrodynamic model. The significance of the related quantum corrections......-aided photovoltaic phenomena. Quantum corrections considerably improve both the Mie and COMSOL approaches in this case. We present the semiclassical random phase approximation description of plasmons in metallic nanoparticles and apply the quantum Fermi golden rule scheme to assess the sunlight energy transfer...
Innovation Networks New Approaches in Modelling and Analyzing
Pyka, Andreas
2009-01-01
The science of graphs and networks has become by now a well-established tool for modelling and analyzing a variety of systems with a large number of interacting components. Starting from the physical sciences, applications have spread rapidly to the natural and social sciences, as well as to economics, and are now further extended, in this volume, to the concept of innovations, viewed broadly. In an abstract, systems-theoretical approach, innovation can be understood as a critical event which destabilizes the current state of the system, and results in a new process of self-organization leading to a new stable state. The contributions to this anthology address different aspects of the relationship between innovation and networks. The various chapters incorporate approaches in evolutionary economics, agent-based modeling, social network analysis and econophysics and explore the epistemic tension between insights into economics and society-related processes, and the insights into new forms of complex dynamics.
Hypercompetitive Environments: An Agent-based model approach
Dias, Manuel; Araújo, Tanya
Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both for individual firms and for management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows for modeling business strategies from a bottom-up perspective (understood as resulting from repeated, local interactions of economic agents) without disregarding the consequences of the business strategies themselves for the individual behavior of enterprises and the emergence of interaction patterns between firms and management environments. Agent-based models are the leading approach in this effort.
Energy Technology Data Exchange (ETDEWEB)
Rai, Varun [Univ. of Texas, Austin, TX (United States)
2016-08-15
This project sought to enable electric utilities in Texas to accelerate diffusion of residential solar photovoltaics (PV) by systematically identifying and targeting existing barriers to PV adoption. A core goal of the project was to develop an integrated research framework that combines survey research, econometric modeling, financial modeling, and implementation and evaluation of pilot projects to study the PV diffusion system. This project considered PV diffusion as an emergent system, with attention to the interactions between the constituent parts of the PV socio-technical system including: economics of individual decision-making; peer and social influences; behavioral responses; and information and transaction costs. We also conducted two pilot projects, which have yielded new insights into behavioral and informational aspects of PV adoption. Finally, this project has produced robust and generalizable results that will provide deeper insights into the technology-diffusion process that will be applicable for the design of utility programs for other technologies such as home-energy management systems and plug-in electric vehicles. When we started this project in 2013 there was little systematic research on characterizing the decision-making process of households interested in adopting PV. This project was designed to fill that research gap by analyzing the PV adoption process from the consumers' decision-making perspective and with the objective of systematically identifying and addressing the barriers that consumers face in the adoption of PV. The two key components of that decision-making process are consumers' evaluation of: (i) uncertainties and non-monetary costs associated with the technology and (ii) the direct monetary cost-benefit. This project used an integrated approach to study both the non-monetary and the monetary components of the consumer decision-making process.
Modelling transport energy demand: A socio-technical approach
International Nuclear Information System (INIS)
Anable, Jillian; Brand, Christian; Tran, Martino; Eyre, Nick
2012-01-01
Despite an emerging consensus that societal energy consumption and related emissions are not only influenced by technical efficiency but also by lifestyles and socio-cultural factors, few attempts have been made to operationalise these insights in models of energy demand. This paper addresses that gap by presenting a scenario exercise using an integrated suite of sectoral and whole systems models to explore potential energy pathways in the UK transport sector. Techno-economic driven scenarios are contrasted with one in which social change is strongly influenced by concerns about energy use, the environment and well-being. The ‘what if’ Lifestyle scenario reveals a future in which distance travelled by car is reduced by 74% by 2050 and final energy demand from transport is halved compared to the reference case. Despite the more rapid uptake of electric vehicles and the larger share of electricity in final energy demand, it shows a future where electricity decarbonisation could be delayed. The paper illustrates the key trade-off between the more aggressive pursuit of purely technological fixes and demand reduction in the transport sector and concludes there are strong arguments for pursuing both demand and supply side solutions in the pursuit of emissions reduction and energy security.
Modeling fabrication of nuclear components: An integrative approach
Energy Technology Data Exchange (ETDEWEB)
Hench, K.W.
1996-08-01
Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex have presented challenges for Los Alamos. One such challenge is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
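The first stage described above, a quadratic assignment problem (QAP) solved by an evolutionary heuristic, can be sketched in a few lines. The flow and distance matrices and the simple swap-mutation scheme are illustrative stand-ins for the dissertation's exposure data and heuristic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # operations / candidate locations

# Hypothetical data: flow[i][j] = exposure-weighted traffic between
# operations i and j; dist[a][b] = distance between locations a and b.
flow = rng.integers(0, 10, (n, n)); flow = (flow + flow.T) // 2
dist = rng.integers(1, 10, (n, n)); dist = (dist + dist.T) // 2
np.fill_diagonal(flow, 0); np.fill_diagonal(dist, 0)

def cost(perm):
    # QAP objective: total exposure-weighted travel under layout `perm`
    return sum(flow[i, j] * dist[perm[i], perm[j]]
               for i in range(n) for j in range(n))

# Evolutionary heuristic: mutate a population of layouts by swaps,
# keep the best half each generation.
pop = [rng.permutation(n) for _ in range(20)]
for _ in range(200):
    children = []
    for p in pop:
        c = p.copy()
        i, j = rng.choice(n, 2, replace=False)
        c[i], c[j] = c[j], c[i]          # swap mutation
        children.append(c)
    pop = sorted(pop + children, key=cost)[:20]

best = pop[0]
print(best, cost(best))
```

The surviving layouts would then feed the second-stage simulation model as candidate facility configurations.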
Injury prevention risk communication: A mental models approach
DEFF Research Database (Denmark)
Austin, Laurel Cecelia; Fischhoff, Baruch
2012-01-01
fail to see risks, do not make use of available protective interventions or misjudge the effectiveness of protective measures. If these misunderstandings can be reduced through context-appropriate risk communications, then their improved mental models may help people to engage more effectively...... and create an expert model of the risk situation, interviewing lay people to elicit their comparable mental models, and developing and evaluating communication interventions designed to close the gaps between lay people and experts. This paper reviews the theory and method behind this research stream...... interventions on the most critical opportunities to reduce risks. That research often seeks to identify the ‘mental models’ that underlie individuals' interpretations of their circumstances and the outcomes of possible actions. In the context of injury prevention, a mental models approach would ask why people...
A nonlinear optimal control approach to stabilization of a macroeconomic development model
Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.
2017-11-01
A nonlinear optimal (H-infinity) control approach is proposed for the problem of stabilization of the dynamics of a macroeconomic development model that is known as the Grossman-Helpman model of endogenous product cycles. The dynamics of the macroeconomic development model is divided into two parts. The first one describes economic activities in a developed country and the second part describes variation of economic activities in a country under development which tries to modify its production so as to serve the needs of the developed country. The article shows that through control of the macroeconomic model of the developed country, one can finally control the dynamics of the economy in the country under development. The control method through which this is achieved is the nonlinear H-infinity control. The macroeconomic model for the country under development undergoes approximate linearization around a temporary operating point. This is defined at each time instant by the present value of the system's state vector and the last value of the control input vector that was exerted on it. The linearization is based on Taylor series expansion and the computation of the associated Jacobian matrices. For the linearized model an H-infinity feedback controller is computed. The controller's gain is calculated by solving an algebraic Riccati equation at each iteration of the control method. The asymptotic stability of the control approach is proven through Lyapunov analysis. This assures that the state variables of the macroeconomic model of the country under development will finally converge to the designated reference values.
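The core numerical step described above, solving an algebraic Riccati equation for the feedback gain of a linearized model, can be sketched as follows. The 2-state Jacobians are hypothetical, and as a simplified stand-in for the H-infinity ARE we solve the standard LQR Riccati equation (the H-infinity version adds a disturbance-attenuation term):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical Jacobians A, B from the Taylor expansion at the
# current operating point (state vector, last control input).
A = np.array([[0.1, 0.4],
              [0.0, -0.2]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)   # state weighting
R = np.eye(1)   # control weighting

# Solves A'P + PA - P B R^-1 B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # feedback gain, u = -K x
Acl = A - B @ K
print(np.linalg.eigvals(Acl))     # closed-loop poles, left half-plane
```

In the paper's scheme this solve is repeated at every iteration as the operating point, and hence the Jacobians, are updated.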
Variational approach to thermal masses in compactified models
Energy Technology Data Exchange (ETDEWEB)
Dominici, Daniele [Dipartimento di Fisica e Astronomia Università di Firenze and INFN - Sezione di Firenze,Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Roditi, Itzhak [Centro Brasileiro de Pesquisas Físicas - CBPF/MCT,Rua Dr. Xavier Sigaud 150, 22290-180, Rio de Janeiro, RJ (Brazil)
2015-08-20
We investigate by means of a variational approach the effective potential of a 5D U(1) scalar model at finite temperature and compactified on S{sup 1} and S{sup 1}/Z{sub 2} as well as the corresponding 4D model obtained through a trivial dimensional reduction. We are particularly interested in the behavior of the thermal masses of the scalar field with respect to the Wilson line phase and the results obtained are compared with those coming from a one-loop effective potential calculation. We also explore the nature of the phase transition.
Surrogate based approaches to parameter inference in ocean models
Knio, Omar
2016-01-06
This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
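The workflow described above, build a cheap surrogate of the model response from a few sampled runs, then update the uncertain input with MCMC against the surrogate, can be sketched on a toy problem. The one-parameter "ocean model", the polynomial regression (standing in for a non-intrusive spectral projection), and the random-walk Metropolis sampler are all assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expensive model": response as a function of a drag coefficient c.
def model(c):
    return c**2 + 0.5 * c

# 1. Non-intrusive surrogate from a handful of sampled runs.
c_train = np.linspace(0.0, 1.0, 8)
coeffs = np.polyfit(c_train, model(c_train), deg=3)
surrogate = lambda c: np.polyval(coeffs, c)

# 2. Bayesian update of c via random-walk Metropolis on the surrogate.
c_true, sigma = 0.6, 0.05
obs = model(c_true) + rng.normal(0, sigma)   # one noisy observation

def log_post(c):
    if not 0.0 <= c <= 1.0:                  # uniform prior on [0, 1]
        return -np.inf
    return -0.5 * ((obs - surrogate(c)) / sigma) ** 2

chain, c = [], 0.5
lp = log_post(c)
for _ in range(5000):
    prop = c + rng.normal(0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c, lp = prop, lp_prop
    chain.append(c)

print(np.mean(chain[1000:]))   # posterior mean, near c_true
```

Every posterior evaluation hits the surrogate rather than the expensive model, which is what makes MCMC affordable in this setting; the adjoint-based alternative mentioned in the talk would instead optimize the same posterior.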
Risk Modeling Approaches in Terms of Volatility Banking Transactions
Directory of Open Access Journals (Sweden)
Angelica Cucşa (Stratulat)
2016-01-01
Full Text Available The inseparability of risk and banking activity has been evident ever since banking systems emerged; the importance of the topic is equally present today and in the future development of the banking sector. Banking sector development takes place within the constraints imposed by the nature and number of existing risks and of those that may arise, which serve to limit the risk of banking activity. We intend to develop approaches to analysing risk through mathematical models, including a model for the Romanian capital market based on 10 actively traded picks that will test investor reaction under controlled and uncontrolled conditions of risk aggregated with harmonised factors.
The experimental and shell model approach to 100Sn
International Nuclear Information System (INIS)
Grawe, H.; Maier, K.H.; Fitzgerald, J.B.; Heese, J.; Spohr, K.; Schubart, R.; Gorska, M.; Rejmund, M.
1995-01-01
The present status of the experimental approach to 100 Sn and its shell model structure is given. New developments in experimental techniques, such as low background isomer spectroscopy and charged particle detection in 4π, are surveyed. Based on recent experimental data, shell model calculations are used to predict the structure of the single- and two-nucleon neighbours of 100 Sn. The results are compared to the systematics of Coulomb energies and spin-orbit splitting and discussed with respect to future experiments. (author). 51 refs, 11 figs, 1 tab
THE SIGNAL APPROACH TO MODELLING THE BALANCE OF PAYMENT CRISIS
Directory of Open Access Journals (Sweden)
O. Chernyak
2016-12-01
Full Text Available The paper presents a synthesis of theoretical models of balance-of-payments crises and investigates the most effective ways to model such crises in Ukraine. For the mathematical formalization of balance-of-payments crises, a comparative analysis of the effectiveness of different calculation methods for the Exchange Market Pressure Index was performed. A set of indicators that signal a growing likelihood of a balance-of-payments crisis was defined using the signal approach. With the help of a minimization function, threshold values of the indicators were selected, the crossing of which signals an increased probability of a balance-of-payments crisis.
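The signal approach named above can be sketched for one common calculation method of the Exchange Market Pressure Index: precision-weight the exchange-rate, reserve, and interest-rate components, then flag periods where the index crosses a threshold. All data and the specific weighting and threshold choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 120  # months of hypothetical data
de = rng.normal(0, 1.0, T)   # % change in exchange rate
dr = rng.normal(0, 1.0, T)   # % change in FX reserves
di = rng.normal(0, 1.0, T)   # change in interest rate differential
de[100:105] += 5.0           # injected "crisis": currency depreciates
dr[100:105] -= 5.0           # ... while reserves are drained

# Precision-weighted EMP index (one common calculation method):
w = lambda x: 1.0 / x.std()
emp = w(de) * de - w(dr) * dr + w(di) * di

# Signal approach: flag months where EMP crosses mean + 1.5 sd
threshold = emp.mean() + 1.5 * emp.std()
signals = np.flatnonzero(emp > threshold)
print(signals)
```

In the signal methodology the threshold would be tuned by minimizing a noise-to-signal ratio over past crises rather than fixed at 1.5 standard deviations.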
An interdisciplinary approach for earthquake modelling and forecasting
Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.
2016-12-01
Earthquakes are among the most serious disasters and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
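The conditional intensity just described, background rate plus a self-exciting term over past earthquakes plus an external term over past non-seismic observations, can be written down directly. The exponential kernels and all parameter and event values are illustrative:

```python
import numpy as np

# lambda(t) = mu + sum over past quakes (self-exciting)
#               + sum over past anomalies (external excitation)
mu = 0.1                      # background rate (events/day)
alpha_s, beta_s = 0.5, 0.8    # self-exciting kernel parameters
alpha_x, beta_x = 0.3, 0.5    # external (non-catalog) kernel parameters

quake_times = np.array([2.0, 5.0, 5.5])   # past seismic events (days)
anomaly_times = np.array([4.0, 6.0])      # past precursory anomalies

def intensity(t):
    past_q = quake_times[quake_times < t]
    past_x = anomaly_times[anomaly_times < t]
    self_term = np.sum(alpha_s * np.exp(-beta_s * (t - past_q)))
    ext_term = np.sum(alpha_x * np.exp(-beta_x * (t - past_x)))
    return mu + self_term + ext_term

print(intensity(7.0))
```

Both excitation terms decay toward the background rate mu, so recent earthquakes and recent anomalies temporarily raise the instantaneous event rate.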
Modeling workforce demand in North Dakota: a System Dynamics approach
Muminova, Adiba
2015-01-01
This study investigates the dynamics behind the workforce demand and attempts to predict the potential effects of future changes in oil prices on workforce demand in North Dakota. The study attempts to join System Dynamics and Input-Output models in order to overcome shortcomings in both of the approaches and gain a more complete understanding of the issue of workforce demand. A system dynamics simulation of workforce demand within different economic sector...
A self-consistent first-principle based approach to model carrier mobility in organic materials
International Nuclear Information System (INIS)
Meded, Velimir; Friederich, Pascal; Symalla, Franz; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang
2015-01-01
Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model by using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details in the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material specific hopping rates, as well as the on-site energies using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory varying over several orders of magnitude in the mobility without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow, explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.
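A Marcus-type hopping rate is a standard ingredient of such multi-scale mobility workflows, linking the extracted couplings, reorganization energies, and disordered on-site energies to site-to-site rates. The formula below is the textbook Marcus expression with illustrative values; the paper's rates come from its self-consistent embedding, not from these numbers:

```python
import numpy as np

HBAR = 6.582e-16   # reduced Planck constant, eV*s
KB = 8.617e-5      # Boltzmann constant, eV/K

def marcus_rate(J, lam, dE, T=300.0):
    """Hopping rate between two sites.
    J: electronic coupling (eV), lam: reorganization energy (eV),
    dE: site-energy difference (eV) due to disorder."""
    kt = KB * T
    pref = (2 * np.pi / HBAR) * J**2 / np.sqrt(4 * np.pi * lam * kt)
    return pref * np.exp(-(dE + lam) ** 2 / (4 * lam * kt))

# Disorder suppresses transport: uphill hops are exponentially slower.
print(marcus_rate(1e-3, 0.2, 0.0), marcus_rate(1e-3, 0.2, 0.1))
```

Feeding such rates into a percolation fit or a kinetic Monte Carlo run is then what yields the macroscopic mobility the abstract compares against experiment.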
Aespoe Pillar Stability Experiment. Final 2D coupled thermo-mechanical modelling
Energy Technology Data Exchange (ETDEWEB)
Fredriksson, Anders; Staub, Isabelle; Outters, Nils [Golder Associates AB, Uppsala (Sweden)
2004-02-01
A site scale Pillar Stability Experiment is planned in the Aespoe Hard Rock Laboratory. One of the experiment's aims is to demonstrate the possibilities of predicting spalling in the fractured rock mass. In order to investigate the probability and conditions for spalling in the pillar, 'prior to experiment' numerical simulations have been undertaken. This report presents the results obtained from 2D coupled thermo-mechanical numerical simulations that have been done with the Finite Element based programme JobFem. The 2D numerical simulations were conducted at two different depth levels, 0.5 and 1.5 m below tunnel floor. The in situ stresses have been confirmed with convergence measurements during the excavation of the tunnel. After updating the mechanical and thermal properties of the rock mass the final simulations have been undertaken. According to the modelling results the temperature in the pillar will increase from the initial 15.2 deg C up to 58 deg C after 120 days of heating. Based on these numerical simulations and on the thermally induced stresses, the total stresses are expected to exceed 210 MPa at the border of the pillar for the level at 0.5 m below tunnel floor and might reach 180-182 MPa for the level at 1.5 m below tunnel floor. The stresses are slightly higher at the border of the confined hole. Based on these results and on the rock mechanical properties, the Crack Initiation Stress is exceeded at the border of the pillar already after the excavation phase. These results also illustrate that the Crack Damage Stress is exceeded only for the level at 0.5 m below tunnel floor and after at least 80 days of heating. The interpretation of the results shows that the required level of stress for spalling can be reached in the pillar.
CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach
International Nuclear Information System (INIS)
Mimouni, S.; Mechitoua, N.; Foissac, A.; Hassanaly, M.; Ouraou, M.
2011-01-01
The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces overcome the surface tension force and the droplets slide over the wall and form a liquid film. This approach allows taking into account simultaneously the mechanical drift between the droplet and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena on the walls. As concerns the homogeneous approach, the motion of the liquid film due to the gravitational forces is neglected, as well as the volume occupied by the liquid. Both condensation models and compressible procedures are validated and compared to experimental data provided by the TOSQAN ISP47 experiment (IRSN Saclay). Computational results compare favorably with experimental data, particularly for the Helium and steam volume fractions.
Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach
International Nuclear Information System (INIS)
2014-12-01
In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the
A multi-model ensemble approach to seabed mapping
Diesing, Markus; Stephens, David
2015-06-01
Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in fields such as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km² of seabed in the North Sea, with the aim of deriving accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes, and their prediction accuracy was assessed with an independent set of samples. The three and five best performing models were combined into classifier ensembles. Both ensembles led to increased prediction accuracy compared to the best performing single classifier. The improvements were, however, not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, the five-model ensemble did perform significantly better than three of its five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is that the agreement in predicted substrate class between the individual models of the ensemble can be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
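The agreement-based confidence measure described above can be sketched as a simple majority vote over the component classifiers. The substrate labels and the five-model setup below are illustrative, not the study's actual classes or trained models:

```python
from collections import Counter

def ensemble_predict(predictions):
    """Combine per-classifier substrate predictions by majority vote.

    predictions: one predicted class label per component classifier.
    Returns (winning_class, confidence), where confidence is the
    fraction of models that agree with the winner, i.e. the
    agreement-based confidence measure.
    """
    counts = Counter(predictions)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(predictions)

# Five-model ensemble voting on a single seabed grid cell:
label, confidence = ensemble_predict(["sand", "sand", "mud", "sand", "gravel"])
# label is "sand"; confidence is 3/5 = 0.6
```

Mapped over every grid cell, the confidence value gives the spatially explicit agreement surface the abstract refers to.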
A Model-Driven Approach for 3D Modeling of Pylon from Airborne LiDAR Data
Directory of Open Access Journals (Sweden)
Qingquan Li
2015-09-01
Full Text Available Automatically reconstructing a three-dimensional model of a pylon from LiDAR (Light Detection And Ranging) point clouds is one of the key techniques for the facilities management GIS of a nationwide high-voltage transmission smart grid. This paper presents a model-driven three-dimensional pylon modeling (MD3DM) method using airborne LiDAR data. We start by constructing a parametric model of the pylon, based on its actual structure and the characteristics of the point cloud data. In this model, a pylon is divided into three parts: pylon legs, pylon body and pylon head. The modeling approach mainly consists of four steps. Firstly, point clouds of individual pylons are detected and segmented automatically from massive high-voltage transmission corridor point clouds. Secondly, an individual pylon is divided into three relatively simple parts so that different parts can be reconstructed with different strategies. Its position and direction are extracted by contour analysis of the pylon body at this stage. Thirdly, the geometric features of the pylon head are extracted, from which the head type is derived with an SVM (Support Vector Machine) classifier. After that, the head is constructed by retrieving the corresponding model from a pre-built model library. Finally, the body is modeled by fitting the point cloud to planes. Experimental results on several point cloud data sets from the China Southern high-voltage transmission grid, from Yunnan Province to Guangdong Province, show that the proposed approach can achieve automatic three-dimensional modeling of pylons effectively.
Modeling AEC—New Approaches to Study Rare Genetic Disorders
Koch, Peter J.; Dinella, Jason; Fete, Mary; Siegfried, Elaine C.; Koster, Maranke I.
2015-01-01
Ankyloblepharon-ectodermal defects-cleft lip/palate (AEC) syndrome is a rare monogenetic disorder that is characterized by severe abnormalities in ectoderm-derived tissues, such as skin and its appendages. A major cause of morbidity among affected infants is severe and chronic skin erosions. Currently, supportive care is the only available treatment option for AEC patients. Mutations in TP63, a gene that encodes key regulators of epidermal development, are the genetic cause of AEC. However, it is currently not clear how mutations in TP63 lead to the various defects seen in the patients’ skin. In this review, we will discuss current knowledge of the AEC disease mechanism obtained by studying patient tissue and genetically engineered mouse models designed to mimic aspects of the disorder. We will then focus on new approaches to model AEC, including the use of patient cells and stem cell technology to replicate the disease in a human tissue culture model. The latter approach will advance our understanding of the disease and will allow for the development of new in vitro systems to identify drugs for the treatment of skin erosions in AEC patients. Further, the use of stem cell technology, in particular induced pluripotent stem cells (iPSC), will enable researchers to develop new therapeutic approaches to treat the disease using the patient’s own cells (autologous keratinocyte transplantation) after correction of the disease-causing mutations. PMID:24665072
Policy harmonized approach for the EU agricultural sector modelling
Directory of Open Access Journals (Sweden)
G. SALPUTRA
2008-12-01
Full Text Available The policy harmonized (PH) approach allows for the quantitative assessment of the impact of various elements of EU CAP direct support schemes, where the production effects of direct payments are accounted for through reaction prices formed by producer prices and policy price add-ons. Using the AGMEMOD model, the impacts of two possible EU agricultural policy scenarios upon beef production have been analysed: full decoupling with a switch from the historical to the regional Single Payment scheme, or alternatively with re-distribution of country direct payment envelopes via introduction of an EU-wide flat area payment. The PH approach, by systematizing and harmonizing the management and use of policy data, ensures that projected differential policy impacts arising from changes in common EU policies reflect the likely actual differential impact, as opposed to differences in how common policies are implemented within analytical models. In the second section of the paper the AGMEMOD model's structure is explained. The policy harmonized evaluation method is presented in the third section. Results from an application of the PH approach are presented and discussed in the paper's penultimate section, while section 5 concludes.
An Adaptive Agent-Based Model of Homing Pigeons: A Genetic Algorithm Approach
Directory of Open Access Journals (Sweden)
Francis Oloo
2017-01-01
Full Text Available Conventionally, agent-based modelling approaches start from a conceptual model capturing the theoretical understanding of the systems of interest. Simulation outcomes are then used “at the end” to validate the conceptual understanding. In today’s data-rich era, there are suggestions that models should be data-driven. Data-driven workflows are common in mathematical models. However, their application to agent-based models is still in its infancy. Integration of real-time sensor data into modelling workflows opens up the possibility of comparing simulations against real data during the model run. Calibration and validation procedures thus become automated processes that are iteratively executed during the simulation. We hypothesize that incorporation of real-time sensor data into agent-based models improves the predictive ability of such models, and in particular that such integration results in increasingly well-calibrated model parameters and rule sets. In this contribution, we explore this question by implementing a flocking model that evolves in real time. Specifically, we use a genetic algorithm approach to simulate representative parameters that describe the flight routes of homing pigeons. The navigation parameters of the pigeons are simulated and dynamically evaluated against emulated GPS sensor data streams and optimised based on the fitness of candidate parameters. As a result, the model was able to accurately simulate the relative turn angles and step distances of homing pigeons. Further, the optimised parameters could replicate loops, which are common patterns in the flight tracks of homing pigeons. Finally, the use of genetic algorithms in this study allowed for simultaneous data-driven optimization and sensitivity analysis.
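A minimal genetic-algorithm calibration loop of the kind described, with selection, crossover and mutation over candidate navigation parameters, might look as follows. The parameter ranges, the fitness based only on step distance, and the emulated observations are illustrative assumptions, not the study's actual setup:

```python
import random

def fitness(params, observed_steps):
    """Negative mean absolute error between a candidate's step distance
    and emulated GPS step distances (the turn-angle gene is carried
    along but not scored in this toy fitness)."""
    _turn, step = params
    return -sum(abs(step - o) for o in observed_steps) / len(observed_steps)

def evolve(observed_steps, pop_size=30, generations=50, seed=1):
    rng = random.Random(seed)
    # Each candidate: (relative turn angle in degrees, step distance in metres)
    pop = [(rng.uniform(-180, 180), rng.uniform(0, 100)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, observed_steps), reverse=True)
        parents = pop[: pop_size // 2]            # selection: keep the fittest half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)         # crossover: average two parents
            children.append(((a[0] + b[0]) / 2 + rng.gauss(0, 5),   # mutation
                             (a[1] + b[1]) / 2 + rng.gauss(0, 2)))
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, observed_steps))

best = evolve([50.0, 52.0, 48.0, 51.0])   # emulated step distances (metres)
```

In a real-time workflow, `observed_steps` would be refreshed from the sensor stream each generation, so calibration runs continuously during the simulation.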
Expanding Model Independent Approaches for Measuring the CKM angle $\\gamma$ at LHCb
Prouve, Claire
2017-01-01
Model independent approaches to measuring the CKM angle $\gamma$ in $B\rightarrow DK$ decays at LHCb are explored. In particular, we consider the case where the $D$ meson decays into a final state with four hadrons. Using four-body final states such as $\pi^+ \pi^- \pi^+ \pi^-$, $K^+ \pi^- \pi^+ \pi^-$ and $K^+ K^- \pi^+ \pi^-$ in addition to traditional two- and three-body states has the potential to significantly improve the overall constraint on $\gamma$. There is a significant systematic uncertainty associated with modelling the complex phase of the $D$ decay amplitude across the five-dimensional phase space of the four-body decay. It is therefore important to replace these model-dependent quantities with model-independent parameters as input for the $\gamma$ measurement. These model independent parameters have been measured using quantum-correlated $\psi(3770) \rightarrow D^0 \overline{D^0}$ decays collected by the CLEO-c experiment, and, for $D\rightarrow K^+ \pi^- \pi^+ \pi^-$, with $D^0-\overline{D^0...
Final Report - Modeling the Physics of Damage Cluster Formation in a Cellular Environment
International Nuclear Information System (INIS)
L.H. Toburen, Principal Investigator; J.L. Shinpaugh; M. Dingfelder; and G. Lapicki; Co-Investigators
2007-01-01
interactions in biomolecules are particularly relevant. And third, we worked with Monte Carlo modelers to incorporate these data into their codes, both for testing the sensitivity of results to the different input data and for direct tests of modeling results. We were particularly interested in how the molecular make-up of the medium influences the sensitivity of the Monte Carlo models of electron transport, and in the quality of the interaction cross sections used as the input database. This approach helps link the underlying physics to the observed biological responses.
Data and Dynamics Driven Approaches for Modelling and Forecasting the Red Sea Chlorophyll
Dreano, Denis
2017-01-01
concentration and have practical applications for fisheries operation and harmful algal bloom monitoring. Modelling approaches can be divided between physics-driven (dynamical) approaches and data-driven (statistical) approaches. Dynamical models are based
A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model
Energy Technology Data Exchange (ETDEWEB)
Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-11
This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and the implementation of adaptation strategies to extreme weather. In the literature there are mainly two approaches to model hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a purely stochastic approach.
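A purely stochastic track generator of this kind can be sketched as a correlated random walk whose heading-change and step-length distributions would, in practice, be fitted to historical records; the distribution parameters below are placeholders, not SynHurG's:

```python
import math
import random

def synthetic_track(start, steps, rng):
    """Generate one synthetic cyclone track as a correlated random walk.

    start: (latitude, longitude) of genesis; steps: number of 6-hour
    intervals. The heading persists with small Gaussian perturbations,
    and step lengths are Gaussian in degrees of lat/lon (a deliberate
    simplification that ignores great-circle geometry)."""
    lat, lon = start
    heading = rng.uniform(0.0, 360.0)
    track = [(lat, lon)]
    for _ in range(steps):
        heading += rng.gauss(0.0, 15.0)        # persistence in direction
        step = max(0.0, rng.gauss(0.5, 0.1))   # degrees per interval
        lat += step * math.cos(math.radians(heading))
        lon += step * math.sin(math.radians(heading))
        track.append((lat, lon))
    return track

rng = random.Random(7)
track = synthetic_track((15.0, -50.0), steps=20, rng=rng)   # 21 track points
```

Running the generator many times yields an ensemble of tracks from which exceedance probabilities for risk evaluation can be estimated.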
Object-Oriented Approach to Modeling Units of Pneumatic Systems
Directory of Open Access Journals (Sweden)
Yu. V. Kyurdzhiev
2014-01-01
Full Text Available The article shows the relevance of object-oriented programming approaches to modeling pneumatic units (PU). Based on the analysis of the calculation schemes of pneumatic system aggregates, two basic objects were highlighted: a flow cavity and a material point. Basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with flows of mass. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions, implemented using object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base class. These classes implement the models of a flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations for the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e. calculation of the method's coefficients. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional class list. An iterator loop initiates the integration tact of all the objects in the list. Every fourth iteration makes the transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the proposed method offers easy enhancement, code reuse, and high reliability
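The object scheme described, base-class objects each performing a Runge-Kutta integration tact on request from a unidirectional list, might be sketched as follows; the pressure law and all coefficients are invented for illustration and do not reproduce the article's equations:

```python
class Unit:
    """Base class for design-scheme objects; each performs its own
    integration tact on request."""
    def derivative(self):
        raise NotImplementedError
    def rk4_step(self, dt):
        # Fourth-order Runge-Kutta tact for dp/dt = f(p). With the toy
        # constant derivative below, all four slopes coincide, but the
        # structure mirrors the per-object tact described in the article.
        k1 = self.derivative()
        k2 = self.derivative()
        k3 = self.derivative()
        k4 = self.derivative()
        self.p += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

class Cavity(Unit):
    """Flow cavity whose pressure changes with a fixed net inflow."""
    def __init__(self, pressure, inflow):
        self.p, self.inflow = pressure, inflow
    def derivative(self):
        return self.inflow

# Unidirectional list of scheme objects; the iterator loop runs one tact
# for every object on each pass, four passes per full integration step.
units = [Cavity(100.0, 5.0), Cavity(80.0, -2.0)]
for _ in range(4):
    for u in units:
        u.rk4_step(0.01)
# units[0].p ≈ 100.2, units[1].p ≈ 79.92
```

New element types (piston, spring, bellows, ...) would be added as further subclasses of `Unit`, which is the enhancement and code-reuse benefit the authors claim.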
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
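The core SROM idea above, replacing the random input with a small set of weighted samples so that only independent deterministic model calls are needed, can be sketched as below. The sample locations and probabilities are supplied directly here, whereas in the SROM literature they are themselves chosen by an optimization matching the input's statistics:

```python
def srom_propagate(model, samples, probabilities):
    """Propagate uncertainty with a stochastic reduced order model (SROM):
    the random input is represented by a few weighted samples, and the
    deterministic model is called once per sample. Returns the SROM
    estimates of the output mean and variance."""
    outputs = [model(x) for x in samples]
    mean = sum(p * y for p, y in zip(probabilities, outputs))
    var = sum(p * (y - mean) ** 2 for p, y in zip(probabilities, outputs))
    return mean, var

# Illustrative toy: uncertain stiffness k -> compliance 1/k of a structure,
# approximated by a three-sample SROM of the stiffness distribution.
mean, var = srom_propagate(lambda k: 1.0 / k,
                           samples=[0.9, 1.0, 1.1],
                           probabilities=[0.25, 0.5, 0.25])
```

Three deterministic calls stand in for the thousands a Monte Carlo estimate would need, which is the computational saving the abstract emphasizes.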
A New Approach of Modeling an Ultra-Super-Critical Power Plant for Performance Improvement
Directory of Open Access Journals (Sweden)
Guolian Hou
2016-04-01
Full Text Available A suitable model of the coordinated control system (CCS) with high accuracy and simple structure is essential for the design of advanced controllers that can improve the efficiency of an ultra-super-critical (USC) power plant. Therefore, to meet the demand for plant performance improvement, an improved T-S fuzzy model identification approach is proposed in this paper. Firstly, an improved entropy clustering algorithm is applied to identify the premise parameters; it automatically determines the number of clusters and the initial cluster centers by introducing the concepts of a decision-making constant and a threshold. Then, a learning algorithm is used to modify the initial cluster centers, and a new structure of the consequent part is discussed: the incremental data around each cluster center are used to identify the local linear model through a weighted recursive least-squares algorithm. Finally, the proposed approach is employed to model the CCS of a 1000 MW USC once-through boiler power plant using on-site measured data. Simulation results show that the T-S fuzzy model built in this paper is accurate enough to reflect the dynamic performance of the CCS and can be treated as a foundation model for the overall optimizing control of the USC power plant.
Directory of Open Access Journals (Sweden)
Yaolin Lin
2018-06-01
Full Text Available Thermal load and indoor comfort level are two important building performance indicators, rapid predictions of which can help significantly reduce the computation time during design optimization. In this paper, a three-step approach is used to develop and evaluate prediction models. Firstly, the Latin Hypercube Sampling Method (LHSM) is used to generate a representative 19-dimensional design database, and DesignBuilder is then used to obtain the thermal load and discomfort degree hours through simulation. Secondly, samples from the database are used to develop and validate seven prediction models, using data mining approaches including multilinear regression (MLR), chi-square automatic interaction detector (CHAID), exhaustive CHAID (ECHAID), back-propagation neural network (BPNN), radial basis function network (RBFN), classification and regression trees (CART), and support vector machines (SVM). It is found that the MLR and BPNN models outperform the others in the prediction of thermal load, with average absolute error of less than 1.19%, and the BPNN model is the best at predicting discomfort degree hours, with 0.62% average absolute error. Finally, two hybrid models, MLR + BPNN and MLR-BPNN, are developed. The MLR-BPNN models are found to be the best prediction models, with average absolute error of 0.82% in thermal load and 0.59% in discomfort degree hours.
Modeling drug- and chemical- induced hepatotoxicity with systems biology approaches
Directory of Open Access Journals (Sweden)
Sudin eBhattacharya
2012-12-01
Full Text Available We provide an overview of computational systems biology approaches as applied to the study of chemical- and drug-induced toxicity. The concept of ‘toxicity pathways’ is described in the context of the 2007 US National Academies of Science report, Toxicity Testing in the 21st Century: A Vision and A Strategy. Pathway mapping and modeling based on network biology concepts are a key component of the vision laid out in this report for a more biologically based analysis of dose-response behavior and the safety of chemicals and drugs. We focus on toxicity of the liver (hepatotoxicity) – a complex phenotypic response with contributions from a number of different cell types and biological processes. We describe three case studies of complementary multi-scale computational modeling approaches to understand perturbation of toxicity pathways in the human liver as a result of exposure to environmental contaminants and specific drugs. One approach involves development of a spatial, multicellular virtual tissue model of the liver lobule that combines molecular circuits in individual hepatocytes with cell-cell interactions and blood-mediated transport of toxicants through hepatic sinusoids, to enable quantitative, mechanistic prediction of the hepatic dose-response for activation of the AhR toxicity pathway. Simultaneously, methods are being developed to extract quantitative maps of intracellular signaling and transcriptional regulatory networks perturbed by environmental contaminants, using a combination of gene expression and genome-wide protein-DNA interaction data. A predictive physiological model (DILIsym™) to understand drug-induced liver injury (DILI), the most common adverse event leading to termination of clinical development programs and regulatory actions on drugs, is also described. The model initially focuses on reactive metabolite-induced DILI in response to administration of acetaminophen, and spans multiple biological scales.
Multiscale modeling of alloy solidification using a database approach
Tan, Lijian; Zabaras, Nicholas
2007-11-01
A two-scale model based on a database approach is presented to investigate alloy solidification. Appropriate assumptions are introduced to describe the behavior of macroscopic temperature, macroscopic concentration, liquid volume fraction and microstructure features. These assumptions lead to a macroscale model with two unknown functions: liquid volume fraction and microstructure features. These functions are computed using information from microscale solutions of selected problems. This work addresses the selection of sample problems relevant to the interested problem and the utilization of data from the microscale solution of the selected sample problems. A computationally efficient model, which is different from the microscale and macroscale models, is utilized to find relevant sample problems. In this work, the computationally efficient model is a sharp interface solidification model of a pure material. Similarities between the sample problems and the problem of interest are explored by assuming that the liquid volume fraction and microstructure features are functions of solution features extracted from the solution of the computationally efficient model. The solution features of the computationally efficient model are selected as the interface velocity and thermal gradient in the liquid at the time the sharp solid-liquid interface passes through. An analytical solution of the computationally efficient model is utilized to select sample problems relevant to solution features obtained at any location of the domain of the problem of interest. The microscale solution of selected sample problems is then utilized to evaluate the two unknown functions (liquid volume fraction and microstructure features) in the macroscale model. The temperature solution of the macroscale model is further used to improve the estimation of the liquid volume fraction and microstructure features. Interpolation is utilized in the feature space to greatly reduce the number of required
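The final interpolation step, evaluating the unknown functions at arbitrary locations from the database of microscale results indexed by the two solution features, can be sketched with inverse-distance weighting (one plausible scheme; the paper's actual interpolation method is not specified in this excerpt, and the database entries are illustrative):

```python
def interpolate_feature_space(database, query, power=2):
    """Inverse-distance interpolation in the feature space of
    (interface velocity, thermal gradient).

    database: maps sample-problem features to a microscale result,
    e.g. liquid volume fraction; query: features extracted at one
    location of the problem of interest."""
    num = den = 0.0
    for (v, g), fraction in database.items():
        d2 = (v - query[0]) ** 2 + (g - query[1]) ** 2
        if d2 == 0.0:
            return fraction            # exact hit on a sample problem
        w = 1.0 / d2 ** (power / 2)
        num += w * fraction
        den += w
    return num / den

# Illustrative microscale database: features -> liquid volume fraction
db = {(1.0, 10.0): 0.2, (2.0, 20.0): 0.5, (3.0, 30.0): 0.8}
exact = interpolate_feature_space(db, (2.0, 20.0))   # 0.5, an exact sample
between = interpolate_feature_space(db, (1.5, 15.0))  # blended from neighbours
```

Because only a handful of sample problems need microscale solutions, interpolating in feature space is what keeps the two-scale model affordable.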
Modelling an industrial anaerobic granular reactor using a multi-scale approach
DEFF Research Database (Denmark)
Feldman, Hannah; Flores Alsina, Xavier; Ramin, Pedram
2017-01-01
The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within...... the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark...... simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally...
Analytic Model Predictive Control of Uncertain Nonlinear Systems: A Fuzzy Adaptive Approach
Directory of Open Access Journals (Sweden)
Xiuyan Peng
2015-01-01
Full Text Available A fuzzy adaptive analytic model predictive control method is proposed in this paper for a class of uncertain nonlinear systems. Specifically, by invoking standard results on the Moore-Penrose matrix inverse, the unmatched problem that commonly exists in the input and output dimensions of systems is first solved. Then, building on the analytic model predictive control law combined with a fuzzy adaptive approach, the fuzzy adaptive predictive controller synthesis for the underlying systems is developed. To further reduce the impact of the fuzzy approximation error on the system and improve its robustness, a robust compensation term is introduced. It is shown that by applying the fuzzy adaptive analytic model predictive controller the rudder roll stabilization system is ultimately uniformly bounded stabilized in the H-infinity sense. Finally, simulation results demonstrate the effectiveness of the proposed method.
Modelling of capital requirements in the energy sector: capital market access. Final memorandum
Energy Technology Data Exchange (ETDEWEB)
1978-04-01
Formal modelling techniques for analyzing the capital requirements of energy industries have been examined at DOE. A survey has been undertaken of a number of models which forecast energy-sector capital requirements or which detail the interactions of the energy sector and the economy. Models are identified which can be useful as prototypes for some portion of DOE's modelling needs. The models are examined to determine any useful data bases which could serve as inputs to an original DOE model. A selected group of models that comply with the stated capabilities are examined. The data sources used by these models are covered and a catalog of the relevant data bases is provided. The models covered are: capital markets and capital availability models (Fossil 1, Bankers Trust Co., DRI Macro Model); models of physical capital requirements (Bechtel Supply Planning Model, ICF Oil and Gas Model and Coal Model, Stanford Research Institute National Energy Model); macroeconomic forecasting models with input-output analysis capabilities (Wharton Annual Long-Term Forecasting Model, Brookhaven/University of Illinois Model, Hudson-Jorgenson/Brookhaven Model); utility models (MIT Regional Electricity Model-Baughman Joskow, Teknekron Electric Utility Simulation Model); and others (DRI Energy Model, DRI/Zimmerman Coal Model, and Oak Ridge Residential Energy Use Model).
Overview of the FEP analysis approach to model development
International Nuclear Information System (INIS)
Bailey, L.
1998-01-01
This report heads a suite of documents describing the Nirex model development programme. The programme is designed to provide a clear audit trail from the identification of significant features, events and processes (FEPs) to the models and modelling processes employed within a detailed safety assessment. A five-stage approach has been adopted, which provides a systematic framework for addressing uncertainty and for the documentation of all modelling decisions and assumptions. The five stages are as follows: Stage 1: FEP Analysis - compilation and structuring of a FEP database; Stage 2: Scenario and Conceptual Model Development; Stage 3: Mathematical Model Development; Stage 4: Software Development; Stage 5: Confidence Building. This report describes the development and structuring of a FEP database as a Master Directed Diagram (MDD) and explains how this may be used to identify different modelling scenarios, based upon the identification of scenario-defining FEPs. The methodology describes how the possible evolution of a repository system can be addressed in terms of a base scenario, a broad and reasonable representation of the 'natural' evolution of the system, and a number of variant scenarios representing the effects of probabilistic events and processes. The MDD has been used to identify conceptual models to represent the base scenario, and the interactions between these conceptual models have been systematically reviewed using a matrix diagram technique. This has led to the identification of modelling requirements for the base scenario, against which existing assessment software capabilities have been reviewed. A mechanism for combining probabilistic scenario-defining FEPs to construct multi-FEP variant scenarios has been proposed and trialled using the concept of a 'timeline', a defined sequence of events from which consequences can be assessed. An iterative approach, based on conservative modelling principles, has been proposed for the evaluation of
A multi-model approach to X-ray pulsars
Directory of Open Access Journals (Sweden)
Schönherr G.
2014-01-01
Full Text Available The emission characteristics of X-ray pulsars are governed by magnetospheric accretion within the Alfvén radius, leading to a direct coupling of accretion column properties and interactions at the magnetosphere. The complexity of the physical processes governing the formation of radiation within the accreted, strongly magnetized plasma has led to several sophisticated theoretical modelling efforts over the last decade, dedicated to either the formation of the broad-band continuum, the formation of cyclotron resonance scattering features (CRSFs), or the formation of pulse profiles. While these individual approaches are powerful in themselves, they quickly reach their limits when aiming at a quantitative comparison to observational data. Too many fundamental parameters describing the formation of the accretion columns and the systems' overall geometry are unconstrained, different models are often based on different fundamental assumptions, and everything is intertwined in the observed, highly phase-dependent spectra and energy-dependent pulse profiles. To name just one example: the (phase-variable) line width of the CRSFs is highly dependent on the plasma temperature, the existence of B-field gradients (geometry), and the observation angle, parameters which, in turn, drive the continuum radiation and are driven by the overall two-pole geometry for the light bending model, respectively. This renders a parallel assessment of all available spectral and timing information by a compatible across-models approach indispensable. In a collaboration of theoreticians and observers, we have been working on a model unification project over the last years, bringing together theoretical calculations of the Comptonized continuum, Monte Carlo simulations and radiation transfer calculations of CRSFs, as well as a General Relativity (GR) light bending model for ray tracing of the incident emission pattern from both magnetic poles. The ultimate goal is to implement a
Energy Technology Data Exchange (ETDEWEB)
Huber, George W.; Upadhye, Aniruddha A.; Ford, David M.; Bhatia, Surita R.; Badger, Phillip C.
2012-10-19
This University of Massachusetts, Amherst project, "Fast Pyrolysis Oil Stabilization: An Integrated Catalytic and Membrane Approach for Improved Bio-oils", started on 1 February 2009 and finished on 31 August 2011. The project consisted of the following tasks: Task 1.0: Char Removal by Membrane Separation Technology. The presence of char particles in bio-oil causes problems in storage and end use. Currently there is no well-established technology to remove char particles less than 10 microns in size. This study focused on the application of a liquid-phase microfiltration process to remove char particles from bio-oil down to slightly sub-micron levels. Tubular ceramic membranes of nominal pore sizes 0.5 and 0.8 µm were employed to carry out the microfiltration, which was conducted in cross-flow mode at temperatures ranging from 38 to 45 °C and at three different trans-membrane pressures varying from 1 to 3 bar. The results demonstrated the removal of the major quantity of char particles with a significant reduction in the overall ash content of the bio-oil. The results clearly showed that the cake-formation mechanism of fouling is predominant in this process. Task 2.0: Acid Removal by Membrane Separation Technology. The feasibility of removing small organic acids from the aqueous fraction of fast pyrolysis bio-oils using nanofiltration (NF) and reverse osmosis (RO) membranes was studied. Experiments were carried out with single-solute solutions of acetic acid and glucose, binary-solute solutions containing both acetic acid and glucose, and a model aqueous fraction of bio-oil (AFBO). Retention factors above 90% for glucose and below 0% for acetic acid were observed at feed pressures near 40 bar for single and binary solutions, so their separation in the model AFBO was expected to be feasible. However, all of the membranes were irreversibly damaged when experiments were conducted with the model AFBO, due to the presence of guaiacol in the feed solution. Experiments
A DYNAMICAL SYSTEM APPROACH IN MODELING TECHNOLOGY TRANSFER
Directory of Open Access Journals (Sweden)
Hennie Husniah
2016-05-01
Full Text Available In this paper we discuss a mathematical model of two-party technology transfer from a leader to a follower. The model is reconstructed via a dynamical-system approach from the known standard Raz and Assa model, and we reach some important conclusions that were not discussed for the original model. The model assumes that, in the absence of technology transfer from the leader, both the leader and the follower can grow independently, each with a known upper limit of development. We obtain a rich mathematical structure for the steady-state solutions of the model. We discuss a special situation in which the upper limit of technological development of the follower is higher than that of the leader, but the leader has started implementing the technology earlier than the follower. In this case we show that a paradox can appear whenever the transfer rate is sufficiently high: the follower becomes unable to reach its original upper limit of technological development. We propose a new model with increased realism, in which any technology transfer rate can only have a positive effect, accelerating the follower's growth toward its original upper limit of development.
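The leader-follower dynamics described above can be sketched as a pair of coupled logistic equations with a transfer term. The functional form and all parameter values below are our illustrative assumptions, not the paper's exact reconstruction of the Raz-Assa model:

```python
def simulate(L0, F0, rL, rF, KL, KF, gamma, dt=0.01, T=100.0):
    """Toy two-party technology-transfer dynamics (illustrative only).

    Leader:   dL/dt = rL * L * (1 - L/KL)
    Follower: dF/dt = rF * F * (1 - F/KF) + gamma * (L - F)

    With gamma > 0 and L below F, the transfer term drags the follower
    toward the leader's level rather than its own capacity KF.
    """
    n = int(T / dt)
    L, F = L0, F0
    for _ in range(n):  # forward-Euler integration
        dL = rL * L * (1 - L / KL)
        dF = rF * F * (1 - F / KF) + gamma * (L - F)
        L, F = L + dt * dL, F + dt * dF
    return L, F
```

With KL = 1 and KF = 2, a strong transfer rate (gamma = 5) pins the follower's steady state near the leader's limit (about 1.05 in this toy), well below its stand-alone limit of 2, which illustrates the paradox; with gamma = 0 the follower reaches KF = 2.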
Predicting the emission from an incineration plant - a modelling approach
International Nuclear Information System (INIS)
Rohyiza Baan
2004-01-01
The emissions from the combustion of Municipal Solid Waste (MSW) have become an important issue in incineration technology. Under unstable combustion conditions, undesirable compounds such as CO, SO2, NOx, PM10 and dioxins form and become sources of pollution in the atmosphere. The impact of emissions on criteria air pollutant concentrations can be measured directly with ambient air monitoring equipment or predicted using dispersion modelling. The literature shows that the complicated atmospheric processes that occur in nature can be described using mathematical models. This paper highlights the air dispersion model as a tool to relate and simulate the release and dispersion of air pollutants in the atmosphere. The technique is based on a programming approach to develop a ground-level concentration air dispersion model using the Gaussian plume equation with Pasquill stability classes. This model is useful for studying the consequences of various air pollutant sources and for estimating the amount of pollutants released into the air from existing emission sources. With this model, the difference between actual measured conditions and the model's predictions was found to be about 5%. (Author)
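The ground-level concentration calculation described above can be sketched with the standard Gaussian plume equation for an elevated point source with full ground reflection. Here the Pasquill-Gifford dispersion parameters σy and σz are supplied directly (in practice they are computed from the stability class and downwind distance via empirical correlations); all numbers in the usage below are illustrative:

```python
import numpy as np

def ground_level_concentration(Q, u, y, H, sigma_y, sigma_z):
    """Ground-level concentration (g/m^3) from an elevated point source,
    Gaussian plume with full ground reflection:

        C(x, y, 0) = Q / (pi * u * sigma_y * sigma_z)
                     * exp(-y^2 / (2 sigma_y^2))
                     * exp(-H^2 / (2 sigma_z^2))

    Q: emission rate (g/s); u: wind speed (m/s); y: crosswind offset (m);
    H: effective stack height (m); sigma_y, sigma_z: Pasquill-Gifford
    dispersion parameters (m) evaluated at the downwind distance of
    interest.
    """
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2)))
```

As expected physically, the predicted concentration peaks on the plume centerline (y = 0) and falls as the effective stack height increases.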
Modeling human diseases: an education in interactions and interdisciplinary approaches
Directory of Open Access Journals (Sweden)
Leonard Zon
2016-06-01
Full Text Available Traditionally, most investigators in the biomedical arena exploit one model system in the course of their careers. Occasionally, an investigator will switch models. The selection of a suitable model system is a crucial step in research design. Factors to consider include the accuracy of the model as a reflection of the human disease under investigation, the numbers of animals needed and ease of husbandry, its physiology and developmental biology, and the ability to apply genetics and harness the model for drug discovery. In my lab, we have primarily used the zebrafish but combined it with other animal models and provided a framework for others to consider the application of developmental biology for therapeutic discovery. Our interdisciplinary approach has led to many insights into human diseases and to the advancement of candidate drugs to clinical trials. Here, I draw on my experiences to highlight the importance of combining multiple models, establishing infrastructure and genetic tools, forming collaborations, and interfacing with the medical community for successful translation of basic findings to the clinic.
International Nuclear Information System (INIS)
Renn, Ortwin; Ruddat, Michael; Sautter, Alexander
2007-01-01
The central aim of the BfS research project titled 'Operationalization of the risk sovereignty model with special consideration of lifestyle and value approaches as a basis for risk communication in the field of radiation protection' was the identification of suitable measures to enhance the degree of risk sovereignty of the German population with regard to radiation risks (mobile telephony, nuclear power, ultraviolet radiation and X-rays). This requires a measuring instrument for empirically capturing the prevailing degree of risk sovereignty with regard to radiation risks in the whole population or in certain subgroups. In the first two phases of the project, suitable instruments for the construct 'risk sovereignty' were developed. Furthermore, a value typology for identifying different groups of persons, as well as independent variables likely to influence risk sovereignty (information behavior, communication or participation intentions), were included in the study. The empirical research is divided into a quantitative and a qualitative inquiry. Based on the empirical studies, a guidance document was developed to improve people's cognitive capability to build up risk sovereignty, particularly in relation to radiation. For the three types of respondents, different strategies were recommended, taking into account their needs and information-seeking behavior.
DEFF Research Database (Denmark)
Wahby, Mostafa; Hofstadler, Daniel Nicolas; Heinrich, Mary Katherine
2016-01-01
approach where task performance is determined by monitoring the plant's reaction. First, we perform initial plant experiments with simple, predetermined controllers. Then we use image-sampling data as a model of the dynamics of the plant-tip xy position. Second, we use this approach to evolve robot controllers...... in simulation. The task is to make the plant approach three predetermined, distinct points in an xy-plane. Finally, we test the evolved controllers in real plant experiments and find that we cross the reality gap successfully. We briefly describe how we have extended from the plant tip to many points on the plant......
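Evolving controllers against a simulated plant, as described above, can be illustrated with a minimal (1+1) evolution strategy. The toy tip dynamics, single target point, and controller parameterization below are our own stand-ins for the paper's setup, not its actual simulator:

```python
import random

def simulate_tip(ctrl, steps=50):
    """Toy 'plant tip' simulation: the controller gains (kx, ky) nudge
    the tip toward a fixed target each step; returns squared error."""
    x, y = 0.0, 0.0
    tx, ty = 1.0, 2.0  # one predetermined target point (hypothetical)
    for _ in range(steps):
        x += ctrl[0] * (tx - x)
        y += ctrl[1] * (ty - y)
    return (x - tx) ** 2 + (y - ty) ** 2

def evolve(generations=300, sigma=0.05, seed=1):
    """(1+1)-evolution strategy: mutate the gains, keep the child only
    if it performs no worse than the parent."""
    rng = random.Random(seed)
    parent = [rng.uniform(0.0, 0.1), rng.uniform(0.0, 0.1)]
    best = simulate_tip(parent)
    for _ in range(generations):
        child = [max(0.0, p + rng.gauss(0.0, sigma)) for p in parent]
        err = simulate_tip(child)
        if err <= best:
            parent, best = child, err
    return parent, best
```

Selection only ever accepts non-worsening mutations, so the error decreases monotonically as the gains drift toward values that let the simulated tip converge on the target.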
Supplementary Material for: A global sensitivity analysis approach for morphogenesis models
Boas, Sonja
2015-01-01
Abstract Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operating mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
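Variance-based global sensitivity analysis of a black-box model can be sketched with a crude Saltelli-type estimator of first-order Sobol indices. This is a generic illustration of the technique, not the workflow's actual implementation:

```python
import numpy as np

def sobol_first_order(model, k, N=10000, seed=0):
    """Saltelli-type estimator of first-order Sobol indices for a model
    with k independent U(0,1) inputs. `model` maps an (N, k) array of
    input samples to an (N,) array of outputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((N, k))          # two independent sample matrices
    B = rng.random((N, k))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]         # A with column i taken from B
        # First-order index: V_i ~ mean(fB * (f(ABi) - fA))
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S
```

For the additive test model Y = 4·X1 + 2·X2 with independent uniform inputs, the analytic indices are S1 = 16/20 = 0.8 and S2 = 4/20 = 0.2, which the Monte Carlo estimate recovers to within sampling error.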
A novel approach to multihazard modeling and simulation.
Smith, Silas W; Portelli, Ian; Narzisi, Giuseppe; Nelson, Lewis S; Menges, Fabian; Rekow, E Dianne; Mincer, Joshua S; Mishra, Bhubaneswar; Goldfrank, Lewis R
2009-06-01
To develop and apply a novel modeling approach to support medical and public health disaster planning and response using a sarin release scenario in a metropolitan environment. An agent-based disaster simulation model was developed incorporating the principles of dose response, surge response, and psychosocial characteristics superimposed on topographically accurate geographic information system architecture. The modeling scenarios involved passive and active releases of sarin in multiple transportation hubs in a metropolitan city. Parameters evaluated included emergency medical services, hospital surge capacity (including implementation of disaster plan), and behavioral and psychosocial characteristics of the victims. In passive sarin release scenarios of 5 to 15 L, mortality increased nonlinearly from 0.13% to 8.69%, reaching 55.4% with active dispersion, reflecting higher initial doses. Cumulative mortality rates from releases in 1 to 3 major transportation hubs similarly increased nonlinearly as a function of dose and systemic stress. The increase in mortality rate was most pronounced in the 80% to 100% emergency department occupancy range, analogous to the previously observed queuing phenomenon. Effective implementation of hospital disaster plans decreased mortality and injury severity. Decreasing ambulance response time and increasing available responding units reduced mortality among potentially salvageable patients. Adverse psychosocial characteristics (excess worry and low compliance) increased demands on health care resources. Transfer to alternative urban sites was possible. An agent-based modeling approach provides a mechanism to assess complex individual and systemwide effects in rare events.
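The queuing phenomenon observed in the 80% to 100% emergency department occupancy range has a classical analytic counterpart. A sketch using the Erlang-C formula (an M/M/c idealization chosen by us for illustration, not part of the agent-based model itself) shows how waiting times explode nonlinearly as occupancy approaches 100%:

```python
import math

def erlang_c(c, rho):
    """Probability that an arriving patient must wait in an M/M/c queue
    with c servers (e.g., treatment bays) at utilization rho < 1."""
    a = c * rho  # offered load in Erlangs
    below = sum(a**k / math.factorial(k) for k in range(c))
    full = a**c / (math.factorial(c) * (1.0 - rho))
    return full / (below + full)

def mean_wait(c, rho, service_time=1.0):
    """Mean waiting time, in units of the mean service time."""
    return erlang_c(c, rho) * service_time / (c * (1.0 - rho))
```

For a 10-server unit, raising utilization from 80% to 95% multiplies the mean wait roughly eightfold, mirroring the steep mortality increase the simulation found in that occupancy band.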
A comprehensive approach to age-dependent dosimetric modeling
International Nuclear Information System (INIS)
Leggett, R.W.; Cristy, M.; Eckerman, K.F.
1986-01-01
In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks.
A comprehensive approach to age-dependent dosimetric modeling
International Nuclear Information System (INIS)
Leggett, R.W.; Cristy, M.; Eckerman, K.F.
1987-01-01
In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper a comprehensive approach to age-dependent dosimetric modeling is discussed in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks. 16 refs.; 3 figs.; 1 table
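The kind of age-specific dose-rate calculation advocated here can be caricatured in a few lines: retained activity decays with an age-dependent effective rate, and the dose rate scales inversely with the organ mass at the current age. All functional forms and numbers below are hypothetical placeholders, not ICRP parameter values:

```python
import numpy as np

def dose_rate(t_days, intake_age_yr, lam_phys, bio_halflife_fn, mass_fn):
    """Illustrative age-dependent dose-rate kernel (arbitrary units).

    Activity is retained as exp(-(lam_phys + lam_bio) * t), where the
    biological decay rate comes from an age-dependent half-life, and
    the dose rate divides by the organ mass at age (intake_age + t).
    Both bio_halflife_fn and mass_fn are caller-supplied placeholders.
    """
    age_yr = intake_age_yr + t_days / 365.25
    lam_bio = np.log(2.0) / bio_halflife_fn(age_yr)  # per day
    retention = np.exp(-(lam_phys + lam_bio) * t_days)
    return retention / mass_fn(age_yr)
```

Even this toy reproduces the qualitative point of the paper: for the same intake, the initial dose rate is higher at younger ages because the organ mass is smaller, so adult-worker models misstate doses to children.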
Micromechanical modeling and inverse identification of damage using cohesive approaches
International Nuclear Information System (INIS)
Blal, Nawfal
2013-01-01
In this study a micromechanical model is proposed for a collection of cohesive zone models embedded between each pair of elements of a standard cohesive-volumetric finite element method. An equivalent 'matrix-inclusions' composite is proposed as a representation of the cohesive-volumetric discretization. The overall behaviour is obtained using homogenization approaches (the Hashin-Shtrikman scheme and the Ponte Castaneda approach). The derived model applies to elastic, brittle and ductile materials; it holds whatever the loading triaxiality and the shape of the cohesive law, and leads to direct relationships between the overall material properties, the local cohesive parameters, and the mesh density. First, rigorous bounds on the normal and tangential cohesive stiffnesses are obtained, leading to suitable control of the artificial elastic compliance inherent to intrinsic cohesive models. Second, theoretical criteria on damageable and ductile cohesive parameters are established (cohesive peak stress, critical separation, cohesive failure energy, ...). These criteria allow a practical calibration of the cohesive zone parameters as functions of the overall material properties and the mesh length. The main interest of such a calibration is its promising capacity to yield a mesh-insensitive overall response during surface damage. (author) [fr
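The need to control the artificial elastic compliance of intrinsic cohesive models can be illustrated with a one-dimensional springs-in-series argument, a deliberately simplified stand-in for the paper's micromechanical bounds:

```python
def effective_modulus(E, K, h):
    """Effective 1-D modulus of bulk material (modulus E) with an
    intrinsic cohesive interface (initial stiffness K) inserted every
    mesh length h, treated as springs in series:
        1/E_eff = 1/E + 1/(K*h)."""
    return 1.0 / (1.0 / E + 1.0 / (K * h))

def min_cohesive_stiffness(E, h, tol=0.01):
    """Minimum cohesive stiffness K so the effective modulus degrades
    by at most `tol`: requiring E_eff >= (1 - tol)*E in the series
    model gives K >= (1 - tol)*E / (tol*h)."""
    return (1.0 - tol) * E / (tol * h)
```

The sketch makes the trade-off explicit: finer meshes (smaller h) demand proportionally stiffer cohesive elements to keep the artificial stiffness loss below a given tolerance, which is exactly the kind of mesh-density dependence the paper's calibration criteria formalize.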
Renormalization group approach to a p-wave superconducting model
International Nuclear Information System (INIS)
Continentino, Mucio A.; Deus, Fernanda; Caldas, Heron
2014-01-01
We present in this work an exact renormalization group (RG) treatment of a one-dimensional p-wave superconductor. The model, proposed by Kitaev, consists of a chain of spinless fermions with a p-wave gap. It is a paradigmatic model of great current interest, since it presents a weak-pairing superconducting phase that hosts Majorana fermions at the ends of the chain; these are predicted to be useful for quantum computation. The RG allows one to obtain the phase diagram of the model and to study the quantum phase transition from the weak- to the strong-pairing phase. It yields the attractors of these phases and the critical exponents of the weak- to strong-pairing transition. We show that the weak-pairing phase of the model is governed by a chaotic attractor, which is non-trivial in both its topological and RG properties. In the strong-pairing phase the RG flow is towards a conventional strong-coupling fixed point. Finally, we propose an alternative way of obtaining p-wave superconductivity in a one-dimensional system without spin–orbit interaction.
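The two phases can also be checked numerically by diagonalizing the Bogoliubov-de Gennes (BdG) matrix of a finite open Kitaev chain: in the weak-pairing (topological) phase |μ| < 2t the smallest excitation energy is exponentially small in the chain length, signalling the end Majorana modes, while in the strong-pairing phase it is of order the bulk gap. This is a direct-diagonalization cross-check, not the paper's RG treatment:

```python
import numpy as np

def kitaev_bdg(N, t=1.0, delta=1.0, mu=0.5):
    """BdG matrix of an open Kitaev chain of N sites,
    H = sum_j [ -mu c+_j c_j - t (c+_j c_{j+1} + h.c.)
                + delta (c_j c_{j+1} + h.c.) ],
    written in the basis (c_1..c_N, c+_1..c+_N)."""
    h = -mu * np.eye(N)
    for j in range(N - 1):
        h[j, j + 1] = h[j + 1, j] = -t
    d = np.zeros((N, N))                 # antisymmetric pairing block
    for j in range(N - 1):
        d[j, j + 1], d[j + 1, j] = delta, -delta
    return np.block([[h, d], [-d, -h]])

def min_excitation(N=40, t=1.0, delta=1.0, mu=0.5):
    """Smallest |eigenvalue| of the BdG matrix: ~0 (Majorana end modes)
    for |mu| < 2t, of order the bulk gap otherwise."""
    return np.min(np.abs(np.linalg.eigvalsh(kitaev_bdg(N, t, delta, mu))))
```

For N = 40, t = Δ = 1, the minimum excitation is numerically zero at μ = 0.5 (topological) but of order one at μ = 3 (trivial), consistent with the weak-to-strong pairing transition at |μ| = 2t.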
Improving stability of prediction models based on correlated omics data by using network approaches.
Directory of Open Access Journals (Sweden)
Renaud Tissier
Full Text Available Building prediction models based on complex omics datasets such as transcriptomics, proteomics and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems: prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome (DILGOM) study, and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell line pharmacogenomics dataset.
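The three-step strategy (network construction, clustering into modules, module-aware prediction) can be sketched as follows. We use a plain correlation distance, average-linkage hierarchical clustering, and module-mean summaries fed into ridge regression as simple stand-ins for the paper's weighted correlation networks, Gaussian graphical models, and group-specific penalties:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def module_features(X, n_modules):
    """Steps 1-2: build a correlation network (distance = 1 - |r|),
    cluster features hierarchically, and summarize each module by the
    mean of its member features."""
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
    labels = fcluster(Z, t=n_modules, criterion="maxclust")
    F = np.column_stack([X[:, labels == m].mean(axis=1)
                         for m in np.unique(labels)])
    return F, labels

def ridge_fit(F, y, alpha=1.0):
    """Step 3: closed-form ridge regression on the module summaries."""
    A = F.T @ F + alpha * np.eye(F.shape[1])
    return np.linalg.solve(A, F.T @ y)
```

Because each module collapses a block of correlated features into one summary, the downstream regression no longer has to arbitrate between near-collinear predictors, which is the source of instability the paper targets.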
A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model
Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue
2015-10-01
In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
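The re-initialization rule at the heart of the piecewise approach is simple to state in code: the reference state is reset to the analysis, and the perturbed state to the analysis plus the modeled perturbation. The toy driver below (scalar states, caller-supplied step function) is our illustrative sketch, not the shallow-water code:

```python
def reinitialize(reference, perturbed, analysis):
    """Piecewise re-initialization step: anchor the reference state to
    the analysis and carry the modeled perturbation (perturbed minus
    reference) on top of it, so the sensitivity signal is preserved
    while accumulated model error is flushed."""
    return analysis, analysis + (perturbed - reference)

def piecewise_run(step, x_ref, x_pert, analyses, k):
    """Integrate both states with `step`, re-initializing from the
    analysis sequence at the end of every subinterval of k steps."""
    for n, analysis in enumerate(analyses):
        x_ref, x_pert = step(x_ref), step(x_pert)
        if (n + 1) % k == 0:
            x_ref, x_pert = reinitialize(x_ref, x_pert, analysis)
    return x_ref, x_pert
```

The key invariant is that re-initialization leaves the modeled sensitivity (perturbed minus reference) unchanged while replacing the drifting reference trajectory with analysis data, which is exactly why shorter subintervals and more accurate analyses improve the result.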
Numerical modelling of carbonate platforms and reefs: approaches and opportunities
Energy Technology Data Exchange (ETDEWEB)
Dalmasso, H.; Montaggioni, L.F.; Floquet, M. [Universite de Provence, Marseille (France). Centre de Sedimentologie-Palaeontologie; Bosence, D. [Royal Holloway University of London, Egham (United Kingdom). Dept. of Geology
2001-07-01
This paper compares different computing procedures that have been utilized in simulating shallow-water carbonate platform development. Based on our geological knowledge we can usually give a rather accurate qualitative description of the mechanisms controlling geological phenomena. Further description requires the use of computer stratigraphic simulation models that allow quantitative evaluation and understanding of the complex interactions of sedimentary depositional carbonate systems. The roles of modelling include: (1) encouraging accuracy and precision in data collection and process interpretation (Watney et al., 1999); (2) providing a means to quantitatively test interpretations concerning the control of various mechanisms on producing sedimentary packages; (3) predicting or extrapolating results into areas of limited control; (4) gaining new insights regarding the interaction of parameters; (5) helping focus future studies to resolve specific problems. This paper addresses two main questions, namely: (1) What are the advantages and disadvantages of various types of models? (2) How well do models perform? In this paper we compare and discuss the application of six numerical models: CARBONATE (Bosence and Waltham, 1990), FUZZIM (Nordlund, 1999), CARBPLAT (Bosscher, 1992), DYNACARB (Li et al., 1993), PHIL (Bowman, 1997) and SEDPAK (Kendall et al., 1991). The comparison, testing and evaluation of these models allow one to gain a better knowledge and understanding of the controlling parameters of carbonate platform development, which are necessary for modelling. Evaluating numerical models, critically comparing results from models using different approaches, and pushing experimental tests to their limits provide an effective vehicle to improve and develop new numerical models. A main feature of this paper is to closely compare the performance of two numerical models: a forward model (CARBONATE) and a fuzzy logic model (FUZZIM). These two models use common
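Forward models of the kind compared here typically rest on a depth-dependent carbonate production law of light-saturation type, G = G_max·tanh(I0·e^(-k·z)/Ik), as used in CARBPLAT (Bosscher, 1992). A minimal sketch with illustrative (not calibrated) parameter values, coupling that law to subsidence in a toy platform model:

```python
import math

def carbonate_growth(depth, G_max=0.001, I0=2000.0, Ik=300.0, k=0.1):
    """Depth-dependent carbonate production rate (m/yr) following the
    light-saturation law G = G_max * tanh(I0 * exp(-k*z) / Ik), with
    depth z in metres. Parameter values are illustrative defaults."""
    return G_max * math.tanh(I0 * math.exp(-k * depth) / Ik)

def evolve_platform(depth0, subsidence=0.0005, years=50000, dt=1.0):
    """Toy forward model: water depth deepens by subsidence (m/yr) and
    shallows by carbonate accumulation; depth cannot go negative."""
    z = depth0
    for _ in range(int(years / dt)):
        z = max(0.0, z + dt * (subsidence - carbonate_growth(z)))
    return z
```

The toy reproduces the familiar keep-up/give-up dichotomy: a platform starting in shallow water outpaces subsidence and builds to sea level, while one starting below the photic zone produces too little carbonate and drowns.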