A novel approach to modeling unstable EOR displacements. Final report
Energy Technology Data Exchange (ETDEWEB)
Peters, E.J.
1994-04-01
Most enhanced oil recovery schemes involve the displacement of a more dense and more viscous oil by a less dense and less viscous fluid in a heterogeneous porous medium. The interaction of heterogeneity with the several competing forces, namely, viscous, capillary, gravitational, and dispersive forces, can conspire to make the displacements unstable and difficult to model and predict. The objective of this research was to develop a systematic methodology for modeling unstable fluid displacements in heterogeneous media. Flow visualization experiments were conducted using X-ray computed tomography imaging and a video imaging workstation to gain insights into the dynamics of unstable displacements, acquire detailed quantitative experimental image data for calibrating numerical models of unstable displacements, and image and characterize heterogeneities in laboratory cores geostatistically. High-resolution numerical models modified for use on vector-architecture supercomputers were used to replicate the image data. Geostatistical models of reservoir heterogeneity were incorporated in order to study the interaction of hydrodynamic instability and heterogeneity in reservoir displacements. Finally, a systematic methodology for matching the experimental data with the numerical models and scaling the laboratory results to other systems was developed. The result is a new method for predicting the performance of unstable EOR displacements in the field based on small-scale displacements in the laboratory. The methodology is general and can be applied to forecast the performance of most processes that involve fluid flow and transport in porous media. Therefore, this research should be of interest to those involved in forecasting the performance of enhanced oil recovery processes and the spreading of contaminants in heterogeneous aquifers.
Voyager Approaches Final Frontier
2003-01-01
An artist's concept illustrates the positions of the Voyager spacecraft in relation to structures formed around our Sun by the solar wind. Also illustrated is the termination shock, a violent region the spacecraft must pass through before reaching the outer limits of the solar system. At the termination shock, the supersonic solar wind abruptly slows from an average speed of 400 kilometers per second to less than 100 kilometers per second (900,000 to less than 225,000 miles per hour). Beyond the termination shock is the solar system's final frontier, the heliosheath, a vast region where the turbulent and hot solar wind is compressed as it presses outward against the interstellar wind that is beyond the heliopause. A bow shock likely forms as the interstellar wind approaches and is deflected around the heliosphere, forcing it into a teardrop-shaped structure with a long, comet-like tail. The exact location of the termination shock is unknown, and it originally was thought to be closer to the Sun than Voyager 1 currently is. As Voyager 1 cruised ever farther from the Sun, it confirmed that all the planets are inside an immense bubble blown by the solar wind and that the termination shock was much more distant.
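The rounded mph figures quoted above can be checked with a quick unit conversion (a sketch; the only assumption beyond the caption is the standard factor of 1.609344 km per mile):

```python
# Convert the solar wind speeds quoted above from km/s to mph.
KM_PER_MILE = 1.609344  # standard international mile

def km_s_to_mph(v_km_s: float) -> float:
    """Convert a speed in kilometers per second to miles per hour."""
    return v_km_s * 3600.0 / KM_PER_MILE

fast = km_s_to_mph(400.0)  # upstream supersonic solar wind
slow = km_s_to_mph(100.0)  # downstream of the termination shock
print(f"{fast:,.0f} mph, {slow:,.0f} mph")
```

The 400 km/s figure comes out just under 900,000 mph and the 100 km/s figure just under 225,000 mph, consistent with the rounded values in the caption.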
Pestle, Ruth
A pilot project implemented a role-model approach to job transition for disadvantaged cooperative home economics students in Tulsa and Oklahoma City, Oklahoma. From 1974 through 1976, 21 students in four urban high schools were matched with role models on the job. Sixteen of these students retained their jobs. The matches included many different…
Approaches for scalable modeling and emulation of cyber systems : LDRD final report.
Energy Technology Data Exchange (ETDEWEB)
Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Don W.
2009-09-01
The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.
Energy Technology Data Exchange (ETDEWEB)
Ian Sue Wing
2006-04-18
The research supported by this award pursued three lines of inquiry: (1) The construction of dynamic general equilibrium models to simulate the accumulation and substitution of knowledge, which has resulted in the preparation and submission of several papers: (a) A submitted pedagogic paper which clarifies the structure and operation of computable general equilibrium (CGE) models (C.2), and a review article in press which develops a taxonomy for understanding the representation of technical change in economic and engineering models for climate policy analysis (B.3). (b) A paper which models knowledge directly as a homogeneous factor, and demonstrates that inter-sectoral reallocation of knowledge is the key margin of adjustment which enables induced technical change to lower the costs of climate policy (C.1). (c) An empirical paper which estimates the contribution of embodied knowledge to aggregate energy intensity in the U.S. (C.3), followed by a companion article which embeds these results within a CGE model to understand the degree to which autonomous energy efficiency improvement (AEEI) is attributable to technical change as opposed to sub-sectoral shifts in industrial composition (C.4) (d) Finally, ongoing theoretical work to characterize the precursors and implications of the response of innovation to emission limits (E.2). (2) Data development and simulation modeling to understand how the characteristics of discrete energy supply technologies determine their succession in response to emission limits when they are embedded within a general equilibrium framework. This work has produced two peer-reviewed articles which are currently in press (B.1 and B.2). (3) Empirical investigation of trade as an avenue for the transmission of technological change to developing countries, and its implications for leakage, which has resulted in an econometric study which is being revised for submission to a journal (E.1). As work commenced on this topic, the U.S. withdrawal
New Approaches to Final Cooling
Neuffer, David
2015-01-01
A high-energy muon collider scenario requires a "final cooling" system that reduces transverse emittance by a factor of ~10 while allowing longitudinal emittance increase. The baseline approach has low-energy transverse cooling within high-field solenoids, with strong longitudinal heating. This approach and its recent simulation are discussed. Alternative approaches which more explicitly include emittance exchange are also presented. Round-to-flat beam transform, transverse slicing, and longitudinal bunch coalescence are possible components of the alternative approach. A more explicit understanding of solenoidal cooling beam dynamics is introduced.
New Approaches to Final Cooling
Energy Technology Data Exchange (ETDEWEB)
Neuffer, David [Fermilab
2014-11-10
A high-energy muon collider scenario requires a “final cooling” system that reduces transverse emittances by a factor of ~10 while allowing longitudinal emittance increase. The baseline approach has low-energy transverse cooling within high-field solenoids, with strong longitudinal heating. This approach and its recent simulation are discussed. Alternative approaches which more explicitly include emittance exchange are also presented. Round-to-flat beam transform, transverse slicing, and longitudinal bunch coalescence are possible components of the alternative approach. A more explicit understanding of solenoidal cooling beam dynamics is introduced.
Energy Technology Data Exchange (ETDEWEB)
Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith
2008-09-01
The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
Energy Technology Data Exchange (ETDEWEB)
Stengel, D N; Luenberger, D G; Larson, R E; Cline, T B
1979-02-01
A new approach to modeling and analysis of systems is presented that exploits the underlying structure of the system. The development of the approach focuses on a new modeling form, called 'descriptor variable' systems, that was first introduced in this research. Key concepts concerning the classification and solution of descriptor-variable systems are identified, and theories are presented for the linear case, the time-invariant linear case, and the nonlinear case. Several standard systems notions are demonstrated to have interesting interpretations when analyzed via descriptor-variable theory. The approach developed also focuses on the optimization of large-scale systems. Descriptor variable models are convenient representations of subsystems in an interconnected network, and optimization of these models via dynamic programming is described. A general procedure for the optimization of large-scale systems, called spatial dynamic programming, is presented where the optimization is spatially decomposed in the way standard dynamic programming temporally decomposes the optimization of dynamical systems. Applications of this approach to large-scale economic markets and power systems are discussed.
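The defining feature of the descriptor-variable form can be written explicitly (a sketch of the standard linear time-invariant case; the notation is assumed, not taken from the report):

```latex
% Linear time-invariant descriptor-variable system (sketch).
% E may be singular; when E = I this reduces to an ordinary
% state-space model \dot{x} = A x + B u.
E\,\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t)
```

The possible singularity of $E$ is what lets these models encode algebraic interconnection constraints alongside the dynamics, which is why they are convenient representations of subsystems in an interconnected network.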
Energy Technology Data Exchange (ETDEWEB)
Bechtle, Philip; Desch, Klaus; Leininger, Jonas [University of Bonn (Germany)
2013-07-01
The ongoing search for a supersymmetric extension of the Standard Model (SM) is expected to be sped up by the use of Simplified Models rather than fully fledged ones like the MSSM. Simplified Models in general come with the advantage of a smaller parameter space, while in the presented case the term refers to particular model-independent supersymmetric decay chains. The benefit of this approach is the versatility it offers: one can look for these decay chains independently of any model. Starting from an RPV-MSSM-motivated choice of decay chains involving 4 leptons, the method of reducing the parameters towards a model structure that is as simple as possible is described. Moreover, possible final parametrizations and a resulting parameter grid are presented and discussed. Based on these grids, setting cross section limits on these decay chains, and thus on the SUSY models in which they are realized, is a future prospect.
Matheny, David L.
This study attempted to determine if the "comparative advantages" debate case is a legitimate and logical approach to affirmative case construction in college and high school debate. The study population totaled 25 high school debate directors and 40 college and university debate directors. Four tape-recorded debates, in which an affirmative team…
Heavy-to-Light Form Factors in the Final Hadron Large Energy Limit Covariant Quark Model Approach
Charles, J; Oliver, L; Pène, O; Raynal, J C
1999-01-01
We prove the full covariance of the heavy-to-light weak current matrix elements based on the Bakamjian-Thomas construction of relativistic quark models, in the heavy mass limit for the parent hadron and the large energy limit for the daughter one. Moreover, this quark model representation of the heavy-to-light form factors fulfills the general relations that were recently argued to hold in the corresponding limit of QCD, namely that there are only three independent form factors describing the B -> pi (rho) matrix elements, as well as the factorized scaling law sqrt(M)z(E) of the form factors with respect to the heavy mass M and large energy E. These results constitute another good property of the quark models à la Bakamjian-Thomas, which were previously shown to exhibit covariance and Isgur-Wise scaling in the heavy-to-heavy case.
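The factorized scaling law mentioned above can be displayed explicitly (a sketch; $F$ stands for any of the three independent $B \to \pi(\rho)$ form factors, and the function $z(E)$ follows the abstract's notation):

```latex
% Heavy-mass and large-energy limit of a heavy-to-light form factor
% (schematic): the dependence on the heavy parent mass M and on the
% energy E of the light daughter hadron factorizes.
F(M, E) \;\sim\; \sqrt{M}\; z(E), \qquad M \to \infty,\; E \to \infty
```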
A Lexical Approach to the Remediation of Final Sound Omissions
Davis, Marilyn; Ferrier, E. E.
1973-01-01
The hypothesis that a language training (vocabulary building) or lexical approach to the remediation of final sound omissions may be an effective method of therapy was tested with a 6-year-old trainable mentally retarded boy. (Author/GW)
Lyons, Thomas M.; Knight, Glen A.
A model project was conducted to demonstrate how Chrysler, in partnership with the education community and the government, could provide technical training to enable displaced workers to contribute to the "H-Body" car launch, to improve their job skills, and to enhance their future employability. The training was conducted on a pilot basis for 2…
Energy Technology Data Exchange (ETDEWEB)
Dr. Tarasankar DebRoy
2009-12-11
In recent years, applications of numerical heat transfer and fluid flow models of fusion welding have resulted in improved understanding of both the welding processes and welded materials. They have been used to accurately calculate thermal cycles and fusion zone geometry in many cases. Here we report the following three major advancements from this project. First, we show how microstructures, grain size distribution and topology of welds of several important engineering alloys can be computed starting from better understanding of the fusion welding process through numerical heat transfer and fluid flow calculations. Second, we provide conclusive proof that the reliability of numerical heat transfer and fluid flow calculations can be significantly improved by optimizing several uncertain model parameters. Third, we demonstrate how the numerical heat transfer and fluid flow models can be combined with a suitable global optimization program such as a genetic algorithm for the tailoring of weld attributes such as attaining a specified weld geometry or a weld thermal cycle. The results of the project have been published in many papers, and a listing of these is included together with a list of the graduate theses that resulted from this project. The work supported by the DOE award has resulted in several important national and international awards. A listing of these awards and the status of the graduate students are also presented in this report.
Energy Technology Data Exchange (ETDEWEB)
Curtis, Peter [The Ohio State Univ., Columbus, OH (United States); Bohrer, Gil [The Ohio State Univ., Columbus, OH (United States); Gough, Christopher [Virginia Commonwealth Univ., Richmond, VA (United States); Nadelhoffer, Knute [Univ. of Michigan, Ann Arbor, MI (United States)
2015-03-12
At the University of Michigan Biological Station (UMBS) AmeriFlux sites (US-UMB and US-UMd), long-term C cycling measurements and a novel ecosystem-scale experiment are revealing physical, biological, and ecological mechanisms driving long-term trajectories of C cycling, providing new data for improving modeling forecasts of C storage in eastern forests. Our findings provide support for previously untested hypotheses that stand-level structural and biological properties constrain long-term trajectories of C storage, and that remotely sensed canopy structural parameters can substantially improve model forecasts of forest C storage. Through the Forest Accelerated Succession ExperimenT (FASET), we are directly testing the hypothesis that forest C storage will increase due to increasing structural and biological complexity of the emerging tree communities. Support from this project, 2011-2014, enabled us to incorporate novel physical and ecological mechanisms into ecological, meteorological, and hydrological models to improve forecasts of future forest C storage in response to disturbance, succession, and current and long-term climate variation.
SEISMIC MODELING ENGINES PHASE 1 FINAL REPORT
Energy Technology Data Exchange (ETDEWEB)
BRUCE P. MARION
2006-02-09
Seismic modeling is a core component of petroleum exploration and production today. Potential applications include modeling the influence of dip on anisotropic migration; source/receiver placement in deviated-well three-dimensional surveys for vertical seismic profiling (VSP); and the generation of realistic data sets for testing contractor-supplied migration algorithms or for interpreting AVO (amplitude variation with offset) responses. This project was designed to extend the use of a finite-difference modeling package, developed at Lawrence Berkeley Laboratories, to the advanced applications needed by industry. The approach included a realistic, easy-to-use 2-D modeling package for the desktop of the practicing geophysicist. The feasibility of providing a wide-ranging set of seismic modeling engines was fully demonstrated in Phase I. The technical focus was on adding variable gridding in both the horizontal and vertical directions, incorporating attenuation, improving absorbing boundary conditions and adding the optional coefficient finite difference methods.
Temperature Buffer Test. Final THM modelling
Energy Technology Data Exchange (ETDEWEB)
Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)
2012-01-15
The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to
Dispersive approaches for three-particle final state interaction
Energy Technology Data Exchange (ETDEWEB)
Guo, Peng; Szczepaniak, Adam P. [Indiana University, Physics Department, Bloomington, IN (United States); Indiana University, Center For Exploration of Energy and Matter, Bloomington, IN (United States); Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Danilkin, I.V. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States)
2015-10-15
In this work, we present different representations of the Khuri-Treiman equation and discuss advantages and disadvantages of each representation. In particular we focus on the inversion technique proposed by Pasquier, which, even though developed a long time ago, has not been used in modern analyses of data on three particle decays. We apply the method to a toy model and compare the sensitivity of this and alternative solution methods to the left-hand cut contribution. We also discuss the meaning and applicability of Watson's theorem when three particles in final states are involved. (orig.)
Functional Approach to Quantum Decoherence and the Classical Final Limit
Castagnino, M A; Castagnino, Mario; Laura, Roberto
2000-01-01
For a wide set of quantum systems it is demonstrated that the quantum regime can be considered as the transient phase while the final classical statistical regime is a permanent state. A basis where exact matrix decoherence appears for these final states is found. The relation with the decoherence of histories formalism is studied. A set of final intrinsically consistent histories is found.
Modeling of GE Appliances: Final Presentation
Energy Technology Data Exchange (ETDEWEB)
Fuller, Jason C.; Vyakaranam, Bharat; Leistritz, Sean M.; Parker, Graham B.
2013-01-31
This report is the final in a series of three reports funded by the U.S. Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) in collaboration with GE Appliances through a Cooperative Research and Development Agreement (CRADA) to describe the potential of GE Appliances’ DR-enabled appliances to provide benefits to the utility grid.
Innovative design approaches for large wind turbine blades : final report.
Energy Technology Data Exchange (ETDEWEB)
2004-05-01
The goal of the Blade System Design Study (BSDS) was investigation and evaluation of design and manufacturing issues for wind turbine blades in the one to ten megawatt size range. A series of analysis tasks were completed in support of the design effort. We began with a parametric scaling study to assess blade structure using current technology. This was followed by an economic study of the cost to manufacture, transport and install large blades. Subsequently we identified several innovative design approaches that showed potential for overcoming fundamental physical and manufacturing constraints. The final stage of the project was used to develop several preliminary 50m blade designs. The key design impacts identified in this study are: (1) blade cross-sections, (2) alternative materials, (3) IEC design class, and (4) root attachment. The results show that thick blade cross-sections can provide a large reduction in blade weight, while maintaining high aerodynamic performance. Increasing blade thickness for inboard sections is a key method for improving structural efficiency and reducing blade weight. Carbon/glass hybrid blades were found to provide good improvements in blade weight, stiffness, and deflection when used in the main structural elements of the blade. The addition of carbon resulted in modest cost increases and provided significant benefits, particularly with respect to deflection. The change in design loads between IEC classes is quite significant. Optimized blades should be designed for each IEC design class. A significant portion of blade weight is related to the root buildup and metal hardware for typical root attachment designs. The results show that increasing the number of blade fasteners has a positive effect on total weight, because it reduces the required root laminate thickness.
Final Project Report Load Modeling Transmission Research
Energy Technology Data Exchange (ETDEWEB)
Lesieutre, Bernard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bravo, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yinger, Robert [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chassin, Dave [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Huang, Henry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lu, Ning [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hiskens, Ian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Venkataramanan, Giri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-03-31
The research presented in this report primarily focuses on improving power system load models to better represent their impact on system behavior. The previous standard load model fails to capture the delayed voltage recovery events that are observed in the Southwest and elsewhere. These events are attributed to stalled air conditioner units after a fault. To gain a better understanding of their role in these events and to guide modeling efforts, typical air conditioner units were tested in laboratories. Using data obtained from these extensive tests, new load models were developed to match air conditioner behavior. An air conditioner model is incorporated in the new WECC composite load model. These models are used in dynamic studies of the West and can impact power transfer limits for California. Unit-level and system-level solutions are proposed as potential solutions to the delayed voltage recovery problem.
Final model of multicriterion evaluation of animal welfare
DEFF Research Database (Denmark)
Bonde, Marianne; Botreau, R; Bracke, MBM
One major objective of Welfare Quality® is to propose harmonized methods for the overall assessment of animal welfare on farm and at slaughter that are science based and meet societal concerns. Welfare is a multidimensional concept and its assessment requires measures of different aspects. Welfare Quality® proposes a formal evaluation model whereby the data on animals or their environment are transformed into value scores that reflect compliance with 12 subcriteria and 4 criteria of good welfare. Each animal unit is then allocated to one of four categories: excellent welfare, enhanced welfare, acceptable welfare and not classified. This evaluation model is tuned according to the views of experts from animal and social sciences, and stakeholders.
Unit dose sampling and final product performance: an alternative approach.
Geoffroy, J M; Leblond, D; Poska, R; Brinker, D; Hsu, A
2001-08-01
This article documents a proposed plan for validation testing of content uniformity for final blends and finished solid oral dosage forms (SODFs). The testing logic and statistical justification of the plan are presented. The plan provides good assurance that a passing lot will perform well against the USP tablet content uniformity test. The operating characteristics of the test and the probability of needing to test for blend sampling bias are reported. A case study is presented.
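For orientation, the USP content uniformity criterion referenced above can be sketched as an acceptance-value calculation (a simplified illustration, not the validated plan from the article; k = 2.4 and the limit L1 = 15.0 are the standard USP <905> values for a first stage of 10 units, and the assay data are made up):

```python
import statistics

def acceptance_value(assays, m_low=98.5, m_high=101.5, k=2.4):
    """Simplified USP <905>-style acceptance value for 10 dosage units.

    assays: individual results in percent of label claim.
    k = 2.4 is the acceptability constant for n = 10.
    """
    xbar = statistics.mean(assays)
    s = statistics.stdev(assays)  # sample standard deviation
    # The reference value M depends on where the sample mean falls.
    if xbar < m_low:
        m_ref = m_low
    elif xbar > m_high:
        m_ref = m_high
    else:
        m_ref = xbar
    return abs(m_ref - xbar) + k * s

units = [99.2, 101.0, 100.4, 98.7, 100.1,
         99.8, 101.3, 100.6, 99.5, 100.0]
print(acceptance_value(units) <= 15.0)  # first-stage pass if AV <= L1
```

Validation plans of the kind the article proposes are benchmarked by how often a blend that passes them would also pass a criterion like this one.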
EPA announced the availability of the final report, Metabolically Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates. This report provides a revised approach for calculating an individual's ventilation rate directly from their oxygen c...
Learning Approaches - Final Report Sub-Project 4
DEFF Research Database (Denmark)
Dirckinck-Holmfeld, Lone; Rodríguez Illera, José Luis; Escofet, Anna
2007-01-01
.4 (in Spanish), and deliverable 4.5 (in Spanish), which are attached as Annex 1, 2, 3 and 4. Deliverable 4.1 provides a conceptual framework that has inspired the learning approaches in ELAC. The deliverable presents an overview of the overall approach and methodology used within the project, followed by a presentation of learning approaches, and the identification of pedagogic concepts and tools applied in e-Learning. The deliverable moreover has a list of produced working papers and articles from partners within the ELAC project with relevance for the deliverable. Deliverable 4.2 focuses on establishing an experimental infrastructure; Open-source software, Moodle as the learning management system and virtual learning environment (VLE); Pedagogical considerations on the selection of an open source virtual learning environment; Testing of pedagogical concepts and tools; Conceptual framework...
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human probl...
The MICOR hadronization model with final state interactions
Csizmadia, P
2002-01-01
Final state interactions on the hadron spectra obtained from the MIcroscopic COalescence Rehadronization (MICOR) model are investigated. MICOR generates baryon and meson resonances in an out-of-equilibrium distribution, directly from quark matter. At the next step, resonances decay into stable hadrons by the JETSET event generator. The final state interactions are simulated using a hadronic cascade, with initial momentum distributions given by MICOR. For the initial space distributions, two simple models are applied and compared. (12 refs).
Combined approach to the inverse protein folding problem. Final report
Energy Technology Data Exchange (ETDEWEB)
Ruben A. Abagyan
2000-06-01
The main scientific contribution of the project ''Combined approach to the inverse protein folding problem'', submitted in 1996 and funded by the Department of Energy in 1997, is the formulation and development of the multilink recognition method for identification of functional and structural homologues of newly discovered genes. This idea became very popular after the authors first announced it and used it to predict the threading targets for the CASP2 competition (Critical Assessment of Structure Prediction).
New approaches for high-efficiency solar cells. Final report
Energy Technology Data Exchange (ETDEWEB)
Bedair, S M; El-Masry, N A [North Carolina State Univ., Raleigh, NC (United States)
1997-12-01
This report summarizes the activities carried out in this subcontract. These activities cover, first, the atomic layer epitaxy (ALE) growth of GaAs, AlGaAs and InGaP at fairly low growth temperatures. This was followed by using ALE to achieve the high levels of both n-type and p-type doping required for tunnel junctions (Tj) in cascade solar cell structures. The authors then studied the properties of AlGaAs/InGaP and AlGaAs/GaAs tunnel junctions and their performance at different growth conditions. This is followed by the use of these tunnel junctions in stacked solar cell structures. The effect of these tunnel junctions on the performance of stacked solar cells was studied at different temperatures and different solar fluences. Finally, the authors studied the effect of different types of back surface fields (BSF) on both p/n and n/p GaInP solar cell structures, and their potential for window layer applications. Parts of these activities were carried out in close cooperation with Dr. Mike Timmons of the Research Triangle Institute.
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
...... on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating...... of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model...
Model Construct Based Enterprise Model Architecture and Its Modeling Approach
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
In order to support enterprise integration, a model construct based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of a reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model construct based enterprise model architecture and modeling approach are practical and efficient.
Decomposition approach to model smart suspension struts
Song, Xubin
2008-10-01
Modeling and simulation studies are the starting point for engineering design and development, especially for developing vehicle control systems. This paper presents a methodology for building models of smart struts for vehicle suspension control development. The modeling approach is based on decomposition of the testing data. Per the strut functions, the data is dissected according to both control and physical variables. The data sets are then characterized to represent different aspects of the strut working behaviors. Next, different mathematical equations can be built and optimized to best fit the corresponding data sets, respectively. In this way, model optimization is facilitated compared with a traditional approach that seeks one globally optimal set of parameters for a complicated nonlinear model from a series of testing data. Finally, two struts are introduced as examples for this modeling study: magneto-rheological (MR) dampers and compressible fluid (CF) based struts. The model validation shows that this methodology can truly capture the macro-behaviors of these struts.
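The data-decomposition idea above can be sketched on synthetic damper data: partition the measurements by the control variable (here, coil current), then fit a separate low-order force-velocity model to each subset instead of one global nonlinear fit. The force map, parameter values, and current levels below are illustrative assumptions, not the paper's test data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical strut test data, dissected by the control variable (coil
# current): one data set per current level, as in the decomposition approach.
def measure(current, n=200):
    v = rng.uniform(-1.0, 1.0, n)                        # piston velocity
    f = (200 + 600 * current) * np.tanh(3 * v) + 50 * v  # assumed force map
    return v, f + rng.normal(0.0, 5.0, n)                # measurement noise

# Fit a separate polynomial force-velocity model to each data set, rather
# than optimizing one global nonlinear model over all parameters at once.
submodels = {}
for current in (0.0, 0.5, 1.0):
    v, f = measure(current)
    submodels[current] = np.polyfit(v, f, deg=5)

# Each submodel should reproduce its own operating regime well.
v_test = np.linspace(-1.0, 1.0, 50)
f_true = (200 + 600 * 1.0) * np.tanh(3 * v_test) + 50 * v_test
f_pred = np.polyval(submodels[1.0], v_test)
print(np.max(np.abs(f_pred - f_true)))  # modest relative to the ~800 N range
```

Because each subproblem is a small linear least-squares fit, each piece is cheap and well-conditioned, which is the practical benefit the abstract claims over a single global nonlinear optimization.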
Hydraulic Modeling of Lock Approaches
2016-08-01
...the guidewall design changed from a solid wall to one on pilings in which water was allowed to flow through and/or under the wall ...magnitudes and directions at lock approaches for open river conditions. The meshes were developed using the Surface-water Modeling System.
LP Approach to Statistical Modeling
Mukhopadhyay, Subhadeep; Parzen, Emanuel
2014-01-01
We present an approach to statistical data modeling and exploratory data analysis called `LP Statistical Data Science.' It aims to generalize and unify traditional and novel statistical measures, methods, and exploratory tools. This article outlines fundamental concepts along with real-data examples to illustrate how the `LP Statistical Algorithm' can systematically tackle different varieties of data types, data patterns, and data structures under a coherent theoretical framework. A fundament...
Calculation of extreme wind atlases using mesoscale modeling. Final report
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Badger, Jake
This is the final report of the project PSO-10240 "Calculation of extreme wind atlases using mesoscale modeling". The overall objective is to improve the estimation of extreme winds by developing and applying new methodologies to confront the many weaknesses in the current methodologies, as explained in Section 2. The focus has been on developing a number of new methodologies through numerical modeling and statistical modeling.
Modeling for fairness: A Rawlsian approach.
Diekmann, Sven; Zwart, Sjoerd D
2014-06-01
In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus is inspired by Rawls' overlapping consensus. The overlapping design consensus is a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders' interests are given due and equal attention. For reaching such fair agreement, we apply Rawls' original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders' beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement on decisions, as in most other stakeholder approaches; it is also an agreement on their justification, and this justification is consistent with each stakeholder's beliefs. In support of fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that reaching an overlapping design consensus requires communication about the properties of, and the values related to, a model.
Energy Technology Data Exchange (ETDEWEB)
Gibson, S. I.
2000-06-01
This is a final report describing the results of the research funded by the DOE Energy Biosciences Program grant entitled ''A Molecular-Genetic Approach to Studying Source-Sink Interactions in Arabidopsis thaliana''.
Final Report on the Fuel Saving Effectiveness of Various Driver Feedback Approaches
Energy Technology Data Exchange (ETDEWEB)
Gonder, J.; Earleywine, M.; Sparks, W.
2011-03-01
This final report quantifies the fuel-savings opportunities from specific driving behavior changes, identifies factors that influence drivers' receptiveness to adopting fuel-saving behaviors, and assesses various driver feedback approaches.
Scientific Theories, Models and the Semantic Approach
Directory of Open Access Journals (Sweden)
Décio Krause
2007-12-01
Full Text Available According to the semantic view, a theory is characterized by a class of models. In this paper, we examine critically some of the assumptions that underlie this approach. First, we recall that models are models of something. Thus we cannot leave completely aside the axiomatization of the theories under consideration, nor can we ignore the metamathematics used to elaborate these models, for changes in the metamathematics often impose restrictions on the resulting models. Second, based on a parallel between van Fraassen’s modal interpretation of quantum mechanics and Skolem’s relativism regarding set-theoretic concepts, we introduce a distinction between relative and absolute concepts in the context of the models of a scientific theory. And we discuss the significance of that distinction. Finally, by focusing on contemporary particle physics, we raise the question: since there is no generally accepted unification of the parts of the standard model (namely, QED and QCD), we have no theory, in the usual sense of the term. This poses a difficulty: if there is no theory, how can we speak of its models? What are the latter models of? We conclude by noting that it is unclear that the semantic view can be applied to contemporary physical theories.
Approaches to Modeling of Recrystallization
Directory of Open Access Journals (Sweden)
Håkan Hallberg
2011-10-01
Full Text Available Control of the material microstructure in terms of the grain size is a key component in tailoring material properties of metals and alloys and in creating functionally graded materials. To exert this control, reliable and efficient modeling and simulation of the recrystallization process whereby the grain size evolves is vital. The present contribution is a review paper, summarizing the current status of various approaches to modeling grain refinement due to recrystallization. The underlying mechanisms of recrystallization are briefly recollected and different simulation methods are discussed. Analytical and empirical models, continuum mechanical models and discrete methods as well as phase field, vertex and level set models of recrystallization will be considered. Such numerical methods have been reviewed previously, but with the present focus on recrystallization modeling and with a rapidly increasing amount of related publications, an updated review is called for. Advantages and disadvantages of the different methods are discussed in terms of applicability, underlying assumptions, physical relevance, implementation issues and computational efficiency.
Model validation studies of solar systems, Phase III. Final report
Energy Technology Data Exchange (ETDEWEB)
Lantz, L.J.; Winn, C.B.
1978-12-01
Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.
Photovoltaic subsystem marketing and distribution model: programming manual. Final report
Energy Technology Data Exchange (ETDEWEB)
1982-07-01
Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.
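The three-submodel structure described above can be sketched as a small cost calculation. Everything here is a hypothetical illustration of that structure (the rates, indices, and function names are assumptions, not the documented M and D model's actual equations):

```python
# Hypothetical sketch of the three-submodel cost structure described above.
def marketing_cost(factory_cost, rate=0.12):
    # Marketing submodel: selling cost as a fraction of factory cost.
    return rate * factory_cost

def distribution_cost(distance_km, per_unit_km=0.002):
    # Distribution submodel: transport cost proportional to distance.
    return per_unit_km * distance_km

def financial_adjustment(cost, inflation_index=1.0, regional_index=1.0):
    # Financial submodel: scale nominal M&D costs for inflation and region.
    return cost * inflation_index * regional_index

def delivered_cost(factory_cost, distance_km,
                   inflation_index=1.0, regional_index=1.0):
    # Cost of the product at the final customer.
    md = marketing_cost(factory_cost) + distribution_cost(distance_km)
    return factory_cost + financial_adjustment(md, inflation_index,
                                               regional_index)

# A $100/unit collector shipped 1500 km, with 10% cumulative inflation,
# in a region 5% above the national average cost level:
print(delivered_cost(100.0, 1500.0, 1.10, 1.05))
```

Separating the marketing, distribution, and financial pieces mirrors the report's structure: each submodel can be re-estimated (e.g., new regional indices) without touching the others.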
Organic acid modeling and model validation: Workshop summary. Final report
Energy Technology Data Exchange (ETDEWEB)
Sullivan, T.J.; Eilers, J.M.
1992-08-14
A workshop was held in Corvallis, Oregon on April 9--10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled ``Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources.`` The workshop was attended by a team of internationally-recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically-selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.
THE QUANTITATIVE MODEL OF THE FINALIZATIONS IN MEN’S COMPETITIVE HANDBALL AND THEIR EFFICIENCY
Directory of Open Access Journals (Sweden)
Eftene Alexandru
2009-10-01
Full Text Available In these epistemic steps, we approach a competitive performance behavior model built from a quantitative analysis of data collected from the official International Handball Federation protocols on the performance of the first four teams of the World Men's Handball Championship - Croatia 2009, during the semifinals and finals. This model is part of the integrative (global) model of the handball game, which will be gradually investigated in the following research. I have started the construction of this model from the premise that the finalization represents the essence of the game. The components of our model, in prioritized order: shot at the goal from 9 m - 15 p; shot at the goal from 6 m - 12 p; shot at the goal from 7 m - 12 p; fast break shot at the goal - 11.5 p; wing shot at the goal - 8.5 p; penetration shot at the goal - 7 p;
Gauge-invariant approach to meson photoproduction including the final-state interaction
Haberzettl, H; Krewald, S
2006-01-01
A fully gauge-invariant (pseudoscalar) meson photoproduction amplitude off a nucleon, including the final-state interaction, is derived. The approach, based on a comprehensive field-theoretical formalism developed earlier by one of the authors, replaces certain dynamical features of the full interaction current by phenomenological auxiliary contact currents. A procedure is outlined that allows for a systematic improvement of this approximation. The feasibility of the approach is illustrated by applying it to both neutral and charged pion photoproduction.
Exploitation of parallelism in climate models. Final report
Energy Technology Data Exchange (ETDEWEB)
Baer, Ferdinand; Tribbia, Joseph J.; Williamson, David L.
2001-02-05
This final report includes details on the research accomplished by the grant entitled 'Exploitation of Parallelism in Climate Models' to the University of Maryland. The purpose of the grant was to shed light on (a) how to reconfigure the atmospheric prediction equations such that the time iteration process could be compressed by use of MPP architecture; (b) how to develop local subgrid scale models which can provide time and space dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics; and (c) how to capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. In the process of addressing these issues, we created parallel algorithms with spectral accuracy; we developed a process for concurrent climate simulations; we established suitable model reconstructions to speed up computation; we identified and tested optimum realization statistics; we undertook a number of parameterization studies to better understand model physics; and we studied the impact of subgrid scale motions and their parameterization in atmospheric models.
Bugaev, Edgar; Petkov, Valery
2007-01-01
Possibilities of an experimental search for gamma-ray bursts from primordial black hole (PBH) evaporations in space are reconsidered. It is argued that the corresponding constraints which can be obtained in experiments with cosmic ray detectors strongly depend on the theoretical approach used to describe the PBH evaporation process. Predictions of several theoretical models for gamma-ray spectra from the final stages of PBH life (integrated over time) are given.
Directory of Open Access Journals (Sweden)
Sri Darwati
2012-01-01
Full Text Available The main problem of landfill management in Indonesia is the difficulty in getting a location for Final Processing Sites (FPS due to limited land and high land prices. Besides, about 95% of existing landfills are uncontrolled dumping sites, which could potentially lead to water, soil and air pollution. Based on data from the Ministry of Environment (2010, The Act of the Republic of Indonesia Number 18 Year 2008 Concerning Solid Waste Management prohibits open dumping at final processing sites, and upon its ratification the Local Governments have to convert open dump sites into controlled or sanitary landfills. The Research Institute for Human Settlements has been conducting multi-year research related to the rehabilitation of dumpsites toward sustainable landfills. The research methods are literature reviews, experiments, laboratory analysis and field observations. A pilot model of dumpsite rehabilitation was carried out in 2010 at the Final Processing Site at Cikundul in Sukabumi City, consisting of (1 landfill mining, (2 construction of landfill cells in a former mining area with a semi-aerobic landfill and an anaerobic landfill, and (3 landfill operations using decomposed material from landfill mining as soil cover. The purpose of the study is to develop a sustainable approach to landfill management and rehabilitation through landfill mining and implementation of a semi-aerobic landfill. Findings in the construction of landfill mining indicate that (1 the construction of landfill mining is constrained by leachate trapped in the waste pile, so the leachate needs to be pumped to leachate treatment installations, (2 the volume of waste excavation is enlarged by the high plastic content, about 26%, in landfills, and (3 the potency of decomposed materials from landfill mining is 40–83% for landfill operations or greening. The performance of the landfill systems shows that the leachate quality of the semi-aerobic landfill tends to be lower
Validation of Modeling Flow Approaching Navigation Locks
2013-08-01
[Excerpt residue from the report's figure list: Figure 9. Tools and instrumentation, bracket attached to rail. Figure 10. Tools and instrumentation, direction vernier. Figure 11. Plan A lock approach, upstream approach. Numerical model ...]
An Adaptive Approach to Schema Classification for Data Warehouse Modeling
Institute of Scientific and Technical Information of China (English)
Hong-Ding Wang; Yun-Hai Tong; Shao-Hua Tan; Shi-Wei Tang; Dong-Qing Yang; Guo-Hui Sun
2007-01-01
Data warehouse (DW) modeling is a complicated task, involving both knowledge of business processes and familiarity with operational information systems structure and behavior. Existing DW modeling techniques suffer from the following major drawbacks: the data-driven approach requires high levels of expertise and neglects the requirements of end users, while the demand-driven approach lacks enterprise-wide vision and disregards existing models of the underlying operational systems. In order to make up for those shortcomings, a method of classifying schema elements for DW modeling is proposed in this paper. We first put forward vector space models for subjects and schema elements, then present an adaptive approach with self-tuning theory to construct context vectors of subjects, and finally classify the source schema elements into the different subjects of the DW automatically. Benefiting from the result of the schema element classification, designers can model and construct a DW more easily.
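The vector-space classification step described above can be sketched with term vectors and cosine similarity. The subjects, vocabulary, and element names below are hypothetical stand-ins, and this sketch omits the paper's adaptive self-tuning of context vectors:

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term vector for a subject description or schema element.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(element, subject_vectors):
    # Assign a source schema element to its most similar DW subject.
    scores = {s: cosine(vectorize(element), v)
              for s, v in subject_vectors.items()}
    return max(scores, key=scores.get)

# Hypothetical subject context vectors built from descriptive terms.
subjects = {
    "sales": vectorize("order customer product quantity price invoice"),
    "inventory": vectorize("warehouse stock product shelf quantity"),
}
print(classify("customer order price", subjects))    # -> sales
print(classify("warehouse stock shelf", subjects))   # -> inventory
```

In the paper's adaptive variant, the subject context vectors would be tuned iteratively as elements are classified rather than fixed up front as here.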
Model Mapping Approach Based on Ontology Semantics
Directory of Open Access Journals (Sweden)
Jinkui Hou
2013-09-01
Full Text Available The mapping relations between different models are the foundation for model transformation in model-driven software development. On the basis of ontology semantics, model mappings between different levels are classified by using the structural semantics of modeling languages. The general definition process for mapping relations is explored, and principles of structure mapping are proposed subsequently. The approach is further illustrated by the mapping relations from a class model of an object-oriented modeling language to C programming code. The application research shows that the approach provides theoretical guidance for the realization of model mapping, and thus effective support for model-driven software development.
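The class-model-to-C mapping mentioned above can be sketched minimally. The class model, its representation, and the single mapping rule (class with typed attributes to C struct) are hypothetical illustrations, not the paper's formal ontology-based mapping:

```python
# Hypothetical class model: a name plus (attribute, C type) pairs.
class_model = {
    "name": "Point",
    "attributes": [("x", "double"), ("y", "double")],
}

def to_c_struct(cls):
    # One structure-mapping rule: each class maps to a C typedef struct,
    # each attribute to a struct field of the corresponding C type.
    fields = "\n".join(f"    {ctype} {name};"
                       for name, ctype in cls["attributes"])
    return f"typedef struct {{\n{fields}\n}} {cls['name']};"

print(to_c_struct(class_model))
```

A full mapping would add rules for associations, inheritance, and operations; the point of the sketch is that each rule relates one source-model construct to one target-language construct.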
An optimization approach to kinetic model reduction for combustion chemistry
Lebiedz, Dirk
2013-01-01
Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on optimization of trajectories and show its applicability to realistic combustion models. Like most model reduction methods, it identifies points on a slow invariant manifold based on time scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t...
Learning Action Models: Qualitative Approach
Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite
Center for Extended Magnetohydrodynamics Modeling - Final Technical Report
Energy Technology Data Exchange (ETDEWEB)
Parker, Scott [Univ. of Colorado, Boulder, CO (United States)
2016-02-14
This project funding supported approximately 74 percent of a Ph.D. graduate student, not including costs of travel and supplies. We had a highly successful research project including the development of a second-order implicit electromagnetic kinetic ion hybrid model [Cheng 2013, Sturdevant 2016], direct comparisons with the extended MHD NIMROD code and kinetic simulation [Schnack 2013], modeling of slab tearing modes using the fully kinetic ion hybrid model and, finally, modeling global tearing modes in cylindrical geometry using gyrokinetic simulation [Chen 2015, Chen 2016]. We developed an electromagnetic second-order implicit kinetic ion fluid electron hybrid model [Cheng 2013]. As a first step, we assumed isothermal electrons, but have included drift-kinetic electrons in similar models [Chen 2011]. We used this simulation to study the nonlinear evolution of the tearing mode in slab geometry, including nonlinear evolution and saturation [Cheng 2013]. Later, we compared this model directly to extended MHD calculations using the NIMROD code [Schnack 2013]. In this study, we investigated the ion-temperature-gradient instability with an extended MHD code for the first time and got reasonable agreement with the kinetic calculation in terms of linear frequency, growth rate and mode structure. We then extended this model to include orbit averaging and sub-cycling of the ions and compared directly to gyrokinetic theory [Sturdevant 2016]. This work was highlighted in an Invited Talk at the International Conference on the Numerical Simulation of Plasmas in 2015. The orbit averaging sub-cycling multi-scale algorithm is amenable to hybrid architectures with GPUs or math co-processors. Additionally, our participation in the Center for Extended Magnetohydrodynamics motivated our research on developing the capability for gyrokinetic simulation to model a global tearing mode. We did this in cylindrical geometry where the results could be benchmarked with existing eigenmode
Learning Actions Models: Qualitative Approach
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power—they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning...
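The finite-identifiability result for deterministic actions can be illustrated by learning-by-elimination over a finite hypothesis space. The toy setting below (states are truth values of one proposition, an action is any function on them, so there are four hypotheses) is an illustrative stand-in, not the paper's event-model construction:

```python
# Toy hypothesis space: deterministic actions over states {0, 1}
# (the truth value of a single proposition p). Names are illustrative.
hypotheses = {name: dict(zip((0, 1), outs))
              for name, outs in {
                  "skip":    (0, 1),   # leaves p unchanged
                  "set_p":   (1, 1),   # makes p true
                  "unset_p": (0, 0),   # makes p false
                  "toggle":  (1, 0),   # flips p
              }.items()}

def learn(observations):
    """Keep only the hypotheses consistent with every observed
    (state before, state after) transition."""
    consistent = dict(hypotheses)
    for before, after in observations:
        consistent = {n: h for n, h in consistent.items()
                      if h[before] == after}
    return consistent

# Observing (0 -> 1) and (1 -> 0) conclusively identifies "toggle":
print(sorted(learn([(0, 1), (1, 0)])))   # ['toggle']
# A single observation (0 -> 1) still leaves two candidates:
print(sorted(learn([(0, 1)])))           # ['set_p', 'toggle']
```

Because the hypothesis space is finite and each hypothesis is a function, finitely many observations suffice to shrink the consistent set to a singleton, which is the intuition behind finite identifiability of deterministic actions.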
Geometrical approach to fluid models
Kuvshinov, B. N.; Schep, T. J.
1997-01-01
Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical
Model based feature fusion approach
Schwering, P.B.W.
2001-01-01
In recent years different sensor data fusion approaches have been analyzed and evaluated in the field of mine detection. In various studies comparisons have been made between different techniques. Although claims can be made for advantages for using certain techniques, until now there has been no si
A model-driven approach to information security compliance
Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena
2017-06-01
The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining the assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the base for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
Global energy modeling - A biophysical approach
Energy Technology Data Exchange (ETDEWEB)
Dale, Michael
2010-09-15
This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
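The dynamic EROI function with technological learning described above can be sketched with a classic experience curve: each doubling of cumulative output cuts energy inputs by a fixed fraction, raising EROI until a physical ceiling is reached. The functional form and every parameter value below are illustrative assumptions, not the paper's calibration:

```python
import math

def eroi(cumulative_output, initial_eroi=10.0, learning_rate=0.2,
         physical_ceiling=40.0):
    """Illustrative dynamic EROI: improves with cumulative production
    (learning-by-doing) but saturates at a ceiling set by resource
    quality and thermodynamic limits."""
    # Experience curve: each doubling of cumulative output reduces
    # energy inputs by `learning_rate`, so EROI grows by 1/(1 - rate).
    doublings = math.log2(max(cumulative_output, 1.0))
    gain = (1.0 / (1.0 - learning_rate)) ** doublings
    return min(initial_eroi * gain, physical_ceiling)

for q in (1, 4, 64, 4096):
    print(q, round(eroi(q), 1))   # EROI rises with output, then caps at 40
```

Embedding such a function in a supply model makes EROI endogenous: early deployment of a renewable source is energetically expensive, and the net energy yield improves as cumulative experience accumulates, which is the dynamic the scenario analysis explores.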
A POMDP approach to Affective Dialogue Modeling
Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob; Keller, E.; Marinaro, M.; Bratanic, M.
2007-01-01
We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's affective state and to act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process approach with observations composed of the observed user's
The chronic diseases modelling approach
Hoogenveen RT; Hollander AEM de; Genugten MLL van; CCM
1998-01-01
A mathematical model structure is described that can be used to simulate the changes of the Dutch public health state over time. The model is based on the concept of demographic and epidemiologic processes (events) and is mathematically based on the lifetable method. The population is divided over s
Learning Actions Models: Qualitative Approach
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2015-01-01
…identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning…
A Unified Approach to Modeling and Programming
DEFF Research Database (Denmark)
Madsen, Ole Lehrmann; Møller-Pedersen, Birger
2010-01-01
SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of a separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we…
Szekeres models: a covariant approach
Apostolopoulos, Pantelis S
2016-01-01
We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly which satisfies a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field $E_{ab}$. In addition, the notions of the Apparent and Absolute Apparent Horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express the Sachs optical equations in covariant form and to analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with observational data.
Matrix Model Approach to Cosmology
Chaney, A; Stern, A
2015-01-01
We perform a systematic search for rotationally invariant cosmological solutions to matrix models, or more specifically the bosonic sector of Lorentzian IKKT-type matrix models, in dimensions $d$ less than ten, specifically $d=3$ and $d=5$. After taking a continuum (or commutative) limit they yield $d-1$ dimensional space-time surfaces, with an attached Poisson structure, which can be associated with closed, open or static cosmologies. For $d=3$, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a matrix resolution of cosmological singularities. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the $d=3$ soluti...
A new approach to adaptive data models
Directory of Open Access Journals (Sweden)
Ion LUNGU
2016-12-01
Full Text Available Over the last decade, there has been a substantial increase in the volume and complexity of the data we collect, store and process. We are now aware of the increasing demand for real-time data processing in every continuous business process that evolves within the organization. We witness a shift from a traditional static data approach to a more adaptive model approach. This article aims to extend understanding in the field of data models used in information systems by examining how an adaptive data model approach for managing business processes can help organizations adapt on the fly and build dynamic capabilities to react in a dynamic environment.
Energy Technology Data Exchange (ETDEWEB)
Kessinger, Glen Frank; Nelson, Lee Orville; Grandy, Jon Drue; Zuck, Larry Douglas; Kong, Peter Chuen Sun; Anderson, Gail
1999-08-01
The purpose of LDRD #2349, Characterize and Model Final Waste Formulations and Offgas Solids from Thermal Treatment Processes, was to develop a set of tools that would allow the user, based on the chemical composition of a waste stream to be immobilized, to predict the durability (leach behavior) of the final waste form and the phase assemblages present in it. The objectives of the project were: investigation, testing and selection of a thermochemical code; development of an auxiliary thermochemical database; synthesis of materials for leach testing; collection of leach data; use of leach data for leach model development; and thermochemical modeling. The progress toward completion of these objectives and a discussion of the work that needs to be completed to arrive at a logical finishing point for this project are presented.
Modeling software behavior a craftsman's approach
Jorgensen, Paul C
2009-01-01
A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models-focusing on what the software is, rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. Based on the popular and continually evolving course on requirements specification models taught by the auth
Model-based approach for elevator performance estimation
Esteban, E.; Salgado, O.; Iturrospe, A.; Isasa, I.
2016-02-01
In this paper, a dynamic model for an elevator installation is presented in the state space domain. The model comprises both the mechanical and the electrical subsystems, including the electrical machine and a closed-loop field oriented control. The proposed model is employed for monitoring the condition of the elevator installation. The adopted model-based approach for monitoring employs the Kalman filter as an observer. A Kalman observer estimates the elevator car acceleration, which determines the elevator ride quality, based solely on the machine control signature and the encoder signal. Finally, five elevator key performance indicators are calculated based on the estimated car acceleration. The proposed procedure is experimentally evaluated by comparing the key performance indicators calculated from the estimated car acceleration with the values obtained from actual acceleration measurements in a test bench. The procedure is also compared with a sliding mode observer.
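As an illustration of fixed-gain state observation, the sketch below estimates acceleration from noisy position (encoder-like) samples with an alpha-beta-gamma filter, a steady-state form of the Kalman filter. The gains and the constant-acceleration model are assumptions for illustration, not the paper's actual observer design:

```python
def abg_filter(positions, dt, alpha=0.5, beta=0.4, gamma=0.1):
    """Estimate position, velocity and acceleration from position samples
    using a fixed-gain alpha-beta-gamma filter (a steady-state Kalman
    filter for a constant-acceleration motion model)."""
    x = positions[0]  # position estimate
    v = 0.0           # velocity estimate
    a = 0.0           # acceleration estimate
    estimates = []
    for z in positions[1:]:
        # Predict one step ahead with the constant-acceleration model.
        x_pred = x + v * dt + 0.5 * a * dt * dt
        v_pred = v + a * dt
        # Correct each state with the measurement residual.
        r = z - x_pred
        x = x_pred + alpha * r
        v = v_pred + (beta / dt) * r
        a = a + (2.0 * gamma / (dt * dt)) * r
        estimates.append((x, v, a))
    return estimates

# Demo: positions sampled from a constant 2 m/s^2 acceleration.
dt = 0.1
traj = [0.5 * 2.0 * (i * dt) ** 2 for i in range(301)]
print(round(abg_filter(traj, dt)[-1][2], 3))  # converges toward the true 2.0
```

A full Kalman filter would additionally propagate an error covariance and compute time-varying gains from the process and measurement noise models.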
Current approaches to gene regulatory network modelling
Directory of Open Access Journals (Sweden)
Brazma Alvis
2007-09-01
Full Text Available Abstract Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
Liemohn, Wendell
The final report and addendum document a 2-year study which examined the efficacy of a rhythmic training program to improve rhythmic skill in children with mild mental retardation or learning disabilities (year 1) or hearing impairments (year 2). The study was based on the writings of the Soviet neuropsychologist A. R. Luria suggesting the value…
Model Oriented Approach for Industrial Software Development
Directory of Open Access Journals (Sweden)
P. D. Drobintsev
2015-01-01
Full Text Available The article considers the specifics of a model-oriented approach to software development based on the usage of Model Driven Architecture (MDA), Model Driven Software Development (MDSD) and Model Driven Development (MDD) technologies. Benefits of using this approach in the software development industry are described. The main emphasis is put on system design, automated code generation for large systems, verification, proof of system properties and reduction of bug density. Drawbacks of the approach are also considered. The approach proposed in the article is specific to the development of industrial software systems. These systems are characterized by different levels of abstraction used in the modeling and code development phases. The approach allows the model to be detailed down to the level of system code while preserving the verified model semantics and providing checking of the whole detailed model. Steps for translating abstract data structures (including transactions, signals and their parameters) into the data structures used in the detailed system implementation are presented. The grammar of a language for specifying rules that transform abstract model data structures into the detailed data structures of the real system is also described. The results of applying the proposed method in an industrial technology are shown. The article is published in the authors' wording.
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Energy Technology Data Exchange (ETDEWEB)
1988-12-15
This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.
A Set Theoretical Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...
Modeling diffuse pollution with a distributed approach.
León, L F; Soulis, E D; Kouwen, N; Farquhar, G J
2002-01-01
The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale level.
MODULAR APPROACH WITH ROUGH DECISION MODELS
Directory of Open Access Journals (Sweden)
Ahmed T. Shawky
2012-09-01
Full Text Available Decision models which adopt rough set theory have been used effectively in many real-world applications. However, rough decision models suffer from high computational complexity when dealing with datasets of huge size. In this research we propose a new rough decision model that allows making decisions based on a modularity mechanism. According to the proposed approach, large-size datasets can be divided into arbitrary moderate-size datasets, and then a group of rough decision models can be built as separate decision modules. The overall model decision is computed as the consensus decision of all decision modules through some aggregation technique. This approach provides a flexible and quick way of extracting decision rules from large-size information tables using rough decision models.
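The modular scheme can be sketched as follows. Majority voting stands in for the unspecified aggregation technique, and the function names are hypothetical:

```python
from collections import Counter

def split_into_modules(dataset, module_size):
    """Partition a large dataset into moderate-size chunks, one per module."""
    return [dataset[i:i + module_size] for i in range(0, len(dataset), module_size)]

def consensus_decision(module_decisions):
    """Aggregate per-module decisions by simple majority vote
    (a stand-in for the paper's aggregation technique)."""
    votes = Counter(module_decisions)
    return votes.most_common(1)[0][0]

# Hypothetical example: three rough-set modules classify a new case.
print(consensus_decision(["approve", "reject", "approve"]))  # -> approve
```

In the full scheme each module would be a rough decision model trained on its own chunk; only the final vote is shown here.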
Modeling approach suitable for energy system
Energy Technology Data Exchange (ETDEWEB)
Goetschel, D. V.
1979-01-01
Recently increased attention has been placed on optimization problems related to the determination and analysis of operating strategies for energy systems. Presented in this paper is a nonlinear model that can be used in the formulation of certain energy-conversion systems-modeling problems. The model lends itself nicely to solution approaches based on nonlinear-programming algorithms and, in particular, to those methods falling into the class of variable metric algorithms for nonlinearly constrained optimization.
A computational language approach to modeling prose recall in schizophrenia.
Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita
2014-06-01
Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.
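A minimal version of n-gram-based recall scoring might look like this (illustrative only; the paper's language-sequence model is more elaborate):

```python
def ngrams(tokens, n):
    """Set of n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_recall_score(source, recall, n=2):
    """Fraction of the source's n-grams reproduced in the recall."""
    src = ngrams(source.lower().split(), n)
    rec = ngrams(recall.lower().split(), n)
    return len(src & rec) / len(src) if src else 0.0

# Hypothetical prose-recall example: 2 of 5 source bigrams are reproduced.
print(ngram_recall_score("the boy went to the store",
                         "the boy went home"))  # -> 0.4
```

A semantic feature in the spirit of Latent Semantic Analysis would instead compare the recall and source in a reduced vector space rather than by exact word-sequence overlap.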
Towards a whole-cell modeling approach for synthetic biology
Purcell, Oliver; Jain, Bonny; Karr, Jonathan R.; Covert, Markus W.; Lu, Timothy K.
2013-06-01
Despite rapid advances over the last decade, synthetic biology lacks the predictive tools needed to enable rational design. Unlike established engineering disciplines, the engineering of synthetic gene circuits still relies heavily on experimental trial-and-error, a time-consuming and inefficient process that slows down the biological design cycle. This reliance on experimental tuning is because current modeling approaches are unable to make reliable predictions about the in vivo behavior of synthetic circuits. A major reason for this lack of predictability is that current models view circuits in isolation, ignoring the vast number of complex cellular processes that impinge on the dynamics of the synthetic circuit and vice versa. To address this problem, we present a modeling approach for the design of synthetic circuits in the context of cellular networks. Using the recently published whole-cell model of Mycoplasma genitalium, we examined the effect of adding genes into the host genome. We also investigated how codon usage correlates with gene expression and find agreement with existing experimental results. Finally, we successfully implemented a synthetic Goodwin oscillator in the whole-cell model. We provide an updated software framework for the whole-cell model that lays the foundation for the integration of whole-cell models with synthetic gene circuit models. This software framework is made freely available to the community to enable future extensions. We envision that this approach will be critical to transforming the field of synthetic biology into a rational and predictive engineering discipline.
Stormwater infiltration trenches: a conceptual modelling approach.
Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare
2009-01-01
In recent years, limitations of traditional urban drainage schemes have been pointed out, and new approaches are being developed that introduce more natural methods for retaining and/or disposing of stormwater. These mitigation measures are generally called Best Management Practices or Sustainable Urban Drainage Systems, and they include practices such as infiltration and storage tanks intended to reduce the peak flow and retain part of the polluting components. The introduction of such practices in urban drainage systems entails an upgrade of existing modelling frameworks in order to evaluate their efficiency in mitigating the impact of urban drainage systems on receiving water bodies. While storage tank modelling approaches are quite well documented in the literature, gaps remain for infiltration facilities, mainly owing to the complexity of the physical processes involved. In this study, a simplified conceptual modelling approach for the simulation of infiltration trenches is presented. The model enables assessment of the performance of infiltration trenches. The main goal is to develop a model that can be employed for the assessment of the mitigation efficiency of infiltration trenches in an integrated urban drainage context. Particular care was given to the simulation of infiltration structures considering the performance reduction due to clogging phenomena. The proposed model has been compared with other simplified modelling approaches and with a physically based model adopted as a benchmark. The model performed better than the other approaches considering both unclogged facilities and the effect of clogging. On the basis of a long-term simulation of six years of rain data, the performance and the effectiveness of an infiltration trench measure are assessed. The study confirmed the important role played by the clogging phenomenon in such infiltration structures.
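A conceptual trench model of this kind can be sketched as a storage volume drained by an infiltration rate that decays over time. The first-order clogging law and all parameter values are illustrative assumptions, not the paper's formulation:

```python
def simulate_trench(inflow, capacity, infil_rate_0, clog_factor=0.999):
    """Conceptual infiltration-trench water balance per time step:
    add inflow, infiltrate up to the current rate, spill any excess
    above capacity, then reduce the infiltration rate to mimic clogging."""
    storage, infil_rate = 0.0, infil_rate_0
    overflow_total = 0.0
    for q_in in inflow:
        storage += q_in
        infiltrated = min(storage, infil_rate)
        storage -= infiltrated
        if storage > capacity:              # excess bypasses the trench
            overflow_total += storage - capacity
            storage = capacity
        infil_rate *= clog_factor           # gradual performance reduction
    return storage, overflow_total

# Hypothetical run: constant inflow, no clogging, trench eventually spills.
print(simulate_trench([5] * 10, capacity=20, infil_rate_0=2,
                      clog_factor=1.0))  # -> (20, 10.0)
```

With `clog_factor` below 1, repeated runs show the overflow volume growing as the trench clogs, which is the effect the paper quantifies over six years of rain data.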
Benchmarking novel approaches for modelling species range dynamics.
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E
2016-08-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results affirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches…
Challenges in structural approaches to cell modeling.
Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A
2016-07-31
Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. Copyright © 2016 Elsevier Ltd. All rights reserved.
Connectivity of channelized reservoirs: a modelling approach
Energy Technology Data Exchange (ETDEWEB)
Larue, David K. [ChevronTexaco, Bakersfield, CA (United States); Hovadik, Joseph [ChevronTexaco, San Ramon, CA (United States)
2006-07-01
Connectivity represents one of the fundamental properties of a reservoir that directly affects recovery. If a portion of the reservoir is not connected to a well, it cannot be drained. Geobody or sandbody connectivity is defined as the percentage of the reservoir that is connected, and reservoir connectivity is defined as the percentage of the reservoir that is connected to wells. Previous studies have mostly considered mathematical, physical and engineering aspects of connectivity. In the current study, the stratigraphy of connectivity is characterized using simple, 3D geostatistical models. Based on these modelling studies, stratigraphic connectivity is good, usually greater than 90%, if the net:gross ratio, or sand fraction, is greater than about 30%. At net:gross values less than 30%, there is a rapid diminishment of connectivity as a function of net:gross. This behaviour of connectivity as a function of net:gross defines a characteristic 'S-curve', in which the connectivity is high for net:gross values above 30%, then diminishes rapidly and approaches 0. Well configuration factors that can influence reservoir connectivity are well density, well orientation (vertical or horizontal; horizontal parallel to channels or perpendicular) and length of completion zones. Reservoir connectivity as a function of net:gross can be improved by several factors: presence of overbank sandy facies, deposition of channels in a channel belt, deposition of channels with high width/thickness ratios, and deposition of channels during variable floodplain aggradation rates. Connectivity can be reduced substantially in two-dimensional reservoirs, in map view or in cross-section, by volume support effects and by stratigraphic heterogeneities. It is well known that in two dimensions, the cascade zone for the 'S-curve' of net:gross plotted against connectivity occurs at about 60% net:gross. Generalizing this knowledge, any time that a reservoir can be regarded as …
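The 'S-curve' of connectivity against net:gross can be reproduced qualitatively with a Monte Carlo percolation sketch. The uncorrelated random sand placement below is a crude stand-in for the paper's geostatistical models; the ~30% threshold it exhibits is the classical 3D site-percolation transition:

```python
import random
from collections import deque

def largest_geobody_fraction(n, p, seed=0):
    """Fraction of sand cells belonging to the largest face-connected
    geobody on an n*n*n grid with sand fraction (net:gross) p."""
    rng = random.Random(seed)
    sand = {(i, j, k) for i in range(n) for j in range(n) for k in range(n)
            if rng.random() < p}
    seen, best = set(), 0
    for cell in sand:
        if cell in seen:
            continue
        # Flood-fill one geobody with breadth-first search.
        queue, size = deque([cell]), 0
        seen.add(cell)
        while queue:
            i, j, k = queue.popleft()
            size += 1
            for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (i + di, j + dj, k + dk)
                if nb in sand and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, size)
    return best / len(sand) if sand else 0.0

for p in (0.2, 0.3, 0.4, 0.6):
    print(p, round(largest_geobody_fraction(20, p), 2))
```

Below the threshold the largest geobody captures only a small fraction of the sand; above it, connectivity rapidly approaches 100%, mirroring the S-curve described in the abstract.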
Building Water Models, A Different Approach
Izadi, Saeed; Onufriev, Alexey V
2014-01-01
Simplified, classical models of water are an integral part of atomistic molecular simulations, especially in biology and chemistry where hydration effects are critical. Yet, despite several decades of effort, these models are still far from perfect. Presented here is an alternative approach to constructing point charge water models - currently, the most commonly used type. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than symmetry. Instead, we optimize the distribution of point charges to best describe the "electrostatics" of the water molecule, which is key to many unusual properties of liquid water. The search for the optimal charge distribution is performed in 2D parameter space of key lowest multipole moments of the model, to find best fit to a small set of bulk water properties at room temperature. A virtually exhaustive search is enabled via analytical equations that relate the charge distribution to the multipole moments. The resulting "optimal"...
Towards new approaches in phenological modelling
Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas
2014-05-01
Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirements of plants is already considered in these approaches (semi-mechanistic models). What limits a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution to this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop such models, changes in metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. The suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
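The growing degree-day concept attributed to Reaumur reduces to a simple temperature sum. The base temperature of 5 °C and the example values below are illustrative assumptions:

```python
def growing_degree_days(daily_min, daily_max, base=5.0):
    """Accumulate growing degree-days: the daily mean temperature's
    excess over a base temperature, summed over days (Reaumur-style
    temperature sum; negative excesses contribute zero)."""
    total = 0.0
    for tmin, tmax in zip(daily_min, daily_max):
        mean = (tmin + tmax) / 2.0
        total += max(mean - base, 0.0)
    return total

# Hypothetical week of spring temperatures (degrees C):
print(growing_degree_days([2, 4, 6, 5, 8, 10, 9],
                          [10, 12, 15, 14, 18, 20, 17]))  # -> 40.0
```

In forcing models of the kind discussed in the abstract, a phenological stage is predicted to occur once this sum reaches a species-specific threshold.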
Computational modelling of final covers for uranium mill tailings impoundments.
Leoni, Guilherme Luís Menegassi; Almeida, Márcio de Souza Soares; Fernandes, Horst Monken
2004-07-05
To properly design a final cover for uranium mill tailings impoundments, the designer must find an effective geotechnical solution that addresses the potential radiological and non-radiological impacts and prevents geochemical processes from occurring within the tailings. This paper presents a computer-based method for evaluating the performance of engineered final covers for the remediation of uranium mill tailings impoundments. Three hypothetical final covers were taken from the scientific literature to investigate the proposed method: (i) a compacted clay liner (CCL); (ii) a composite liner (CL); and (iii) a capillary barrier (CB). The processes investigated were: (i) the saturated hydraulic flux; (ii) the unsaturated hydraulic flux (exclusively for the capillary barrier); and (iii) the radon exhalation to the atmosphere. The computer programs utilised for the analyses were: (i) Hydrologic Evaluation of Landfill Performance (HELP); (ii) SEEP/W; and (iii) RADON. The site considered for the research presented herein was the uranium mill tailings impoundment located at the Brazilian city of Poços de Caldas, in the Minas Gerais State.
Energy Technology Data Exchange (ETDEWEB)
Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.
1998-12-01
The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.
Modelling Coagulation Systems: A Stochastic Approach
Ryazanov, V V
2011-01-01
A general stochastic approach to the description of coagulating aerosol systems is developed. Arbitrary mesoscopic quantities (the number of aerosol clusters, their sizes, etc.) can be taken as the object of description. The birth-and-death formalism for the number of clusters can be regarded as a particular case of the generalized storage model. An application of the storage model to the number of monomers in a cluster is discussed.
A Multiple Model Approach to Modeling Based on LPF Algorithm
Institute of Scientific and Technical Information of China (English)
(author not listed)
2001-01-01
Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple-model approach to modeling nonlinear systems is presented. The basic idea is to find, in a large historical set of system input-output data, the subsets that match the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which together realize accurate modeling of the global system. Simulation results, compared against other methods, show good performance: the estimation is simple, effective, and reliable.
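A minimal sketch of the model-on-demand idea, assuming a one-dimensional system and a tricube-weighted local linear fit; the paper's LPF algorithm may differ in weighting, polynomial order, and neighbour selection:

```python
def local_linear_fit(x_data, y_data, x_query, k=5):
    """Model-on-demand sketch: pick the k samples nearest the working
    point, fit a local line by tricube-weighted least squares, and
    evaluate it at the query.  Names and the 1-D setting are
    illustrative; the real LPF operates on general input-output data."""
    # Select the k nearest neighbours of the working point.
    nearest = sorted(zip(x_data, y_data), key=lambda p: abs(p[0] - x_query))[:k]
    h = max(abs(x - x_query) for x, _ in nearest) or 1.0   # local bandwidth
    # Tricube weights; small epsilon keeps the farthest weight nonzero.
    w = [(1 - (abs(x - x_query) / h) ** 3) ** 3 + 1e-9 for x, _ in nearest]
    # Weighted least squares for y = a + b*(x - x_query); a is the estimate.
    sw = sum(w)
    sx = sum(wi * (x - x_query) for wi, (x, _) in zip(w, nearest))
    sy = sum(wi * y for wi, (_, y) in zip(w, nearest))
    sxx = sum(wi * (x - x_query) ** 2 for wi, (x, _) in zip(w, nearest))
    sxy = sum(wi * (x - x_query) * y for wi, (x, y) in zip(w, nearest))
    det = sw * sxx - sx * sx
    if det == 0:                 # all neighbours sit at the query point
        return sy / sw
    return (sxx * sy - sx * sxy) / det

xs = [i / 10 for i in range(50)]
ys = [x * x for x in xs]         # "unknown-structure" system: y = x^2
print(round(local_linear_fit(xs, ys, 2.0), 3))
```

With the working point at x = 2, the local line recovers y ≈ 4 despite the global nonlinearity, which is the whole point of building models only where they are needed.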
Towards a Multiscale Approach to Cybersecurity Modeling
Energy Technology Data Exchange (ETDEWEB)
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of "multiscale" graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
Post-16 Biology--Some Model Approaches?
Lock, Roger
1997-01-01
Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)
Improvement of procedures for evaluating photochemical models. Final report
Energy Technology Data Exchange (ETDEWEB)
Tesche, T.W.; Lurmann, F.R.; Roth, P.M.; Georgopoulos, P.; Seinfeld, J.H.
1990-08-01
The study establishes a set of procedures that should be used by all groups evaluating the performance of a photochemical model application. A set of ten numerical measures is recommended for evaluating a photochemical model's accuracy in predicting ozone concentrations. Nine graphical methods and six investigative simulations are also recommended to give additional insight into model performance. Standards are presented that each modeling study should try to meet. To complement the operational model evaluation procedures, several diagnostic procedures are suggested. The sensitivity of the model to uncertainties in hydrocarbon emission rates and speciation, and in other parameters, should be assessed. Uncertainty bounds of key input variables and parameters can be propagated through the model to provide estimated uncertainties in the ozone predictions. Comparisons between measurements and predictions of species other than ozone will help ensure that the model is predicting the right ozone for the right reasons. Plotting concentration residuals (differences) against a variety of variables may give insight into the reasons for poor model performance. Mass flux and balance calculations can identify the relative importance of emissions and transport. The study also identifies testing a model's response to emission changes as the most important research need. Another important area is testing the emissions inventory.
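Two numerical measures commonly used in this kind of operational evaluation, mean normalized bias and mean normalized gross error, can be computed as below. The report defines its own set of ten measures; these two functions, and the handling of zero observations, are shown only as illustrations:

```python
def mean_normalized_bias(predicted, observed):
    """Mean normalized bias (MNB): average of (pred - obs)/obs over
    paired ozone values.  Pairs with a zero observation are dropped,
    a simple stand-in for the cutoff rules real protocols specify."""
    pairs = [(p, o) for p, o in zip(predicted, observed) if o > 0]
    return sum((p - o) / o for p, o in pairs) / len(pairs)

def mean_normalized_gross_error(predicted, observed):
    """Mean normalized gross error (MNGE): average of |pred - obs|/obs."""
    pairs = [(p, o) for p, o in zip(predicted, observed) if o > 0]
    return sum(abs(p - o) / o for p, o in pairs) / len(pairs)

obs = [80.0, 100.0, 120.0, 90.0]     # hourly ozone, ppb (illustrative)
pred = [88.0, 95.0, 132.0, 90.0]
print(mean_normalized_bias(pred, obs))
print(mean_normalized_gross_error(pred, obs))
```

Bias measures systematic over- or under-prediction (errors of opposite sign cancel), while gross error measures overall scatter; a model can have near-zero bias and still fail a gross-error standard.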
Using Sub Skills to Model and Estimate Final Skill Level
Directory of Open Access Journals (Sweden)
Hadi Moradi
2013-04-01
Skill level estimation is very important since it allows an instructor, whether a human or an artificial instructor in an intelligent tutoring system, to predict the level of a student and adjust the learning materials accordingly. In this paper, a new approach based on 1-NN (first nearest neighbor) classification is introduced to determine the skill level of a student from the pattern of skill levels learned over time in the same course. Data collected over several years are used to determine four clusters of skill level: expert, good, average, and bad. The advantage of the proposed approach is its capability to adjust the levels over time based on the new data received each year. Furthermore, it can estimate the skill level after only a few homework or project assignments, and can therefore help an instructor to better conduct the class. The proposed approach has been implemented and tested on an introductory computer programming course, and the results demonstrate the validity of the approach.
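A minimal sketch of the 1-NN classification step, assuming cluster centroids of early-assignment scores are already available from past years; the labels and centroid values below are invented for illustration, not the paper's fitted clusters:

```python
def classify_skill(history, student_scores):
    """1-NN sketch: `history` maps a label ('expert', 'good', 'average',
    'bad') to a representative sequence of assignment scores (e.g.
    cluster centroids from past years).  The current student's partial
    score sequence is matched to the nearest prototype by Euclidean
    distance over the assignments seen so far."""
    n = len(student_scores)
    def dist(proto):
        return sum((p - s) ** 2 for p, s in zip(proto[:n], student_scores)) ** 0.5
    return min(history, key=lambda label: dist(history[label]))

# Hypothetical centroids over the first four homework assignments.
centroids = {
    "expert":  [95, 96, 97, 95],
    "good":    [85, 86, 88, 87],
    "average": [70, 72, 75, 74],
    "bad":     [50, 55, 52, 58],
}
# Estimate after only the first two homework scores.
print(classify_skill(centroids, [84, 88]))
```

Because the comparison uses only the scores seen so far, the same prototypes support early estimation after a few assignments, and the prototypes themselves can be re-fit each year as new data arrive.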
A relaxation-based approach to damage modeling
Junker, Philipp; Schwarz, Stephan; Makowski, Jerzy; Hackl, Klaus
2017-01-01
Material models that include softening effects due to, for example, damage and localization share the problem of ill-posed boundary value problems that yield mesh-dependent finite element results. It is thus necessary to apply regularization techniques that couple the local behavior, described for example by internal variables, at a spatial level. This can be done by taking account of the gradient of the internal variable, which yields mesh-independent finite element results. In this paper, we present a new approach to damage modeling that does not use common field functions, inclusion of gradients, or complex integration techniques: appropriate modifications of the relaxed (condensed) energy offer the same advantages as other methods, but with much less numerical effort. We start with the theoretical derivation and then discuss the numerical treatment. Finally, we present finite element results that demonstrate empirically how the new approach works.
Final Report for Integrated Multiscale Modeling of Molecular Computing Devices
Energy Technology Data Exchange (ETDEWEB)
Glotzer, Sharon C.
2013-08-28
In collaboration with researchers at Vanderbilt University, North Carolina State University, Princeton University, and Oak Ridge National Laboratory, we developed multiscale modeling and simulation methods capable of modeling the synthesis, assembly, and operation of molecular electronics devices. Our role in this project included the development of coarse-grained molecular and mesoscale models and simulation methods capable of simulating the assembly of millions of organic conducting molecules and other molecular components into nanowires, crossbars, and other organized patterns.
Cupola modeling research: Phase 2 (Year one), Final report
Energy Technology Data Exchange (ETDEWEB)
1991-11-20
The objective was to develop a mathematical model of the cupola furnace (cast iron production) for use in on-line and off-line process control and optimization. In Phase I, the general structure of the heat transfer, fluid flow, and chemical models was laid out, providing reasonable descriptions of cupola behavior with a one-dimensional representation. Work was also initiated on a two-dimensional model. Phase II focused on perfecting the one-dimensional model. The contributions include those from MIT, the University of Michigan, and GM.
Heat transfer modeling an inductive approach
Sidebotham, George
2015-01-01
This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but in a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp of the models behind heat transfer. Developed from over twenty-five years of lecture notes used to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering disciplines.
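The "1-node lumped model" mentioned above has the classic closed-form solution sketched below; the material and convection numbers are illustrative, not drawn from the book:

```python
import math

def lumped_node_temperature(t, T0, T_inf, h, A, m, c):
    """Closed-form solution of the 1-node lumped model
        m*c * dT/dt = -h*A * (T - T_inf),
    i.e. T(t) = T_inf + (T0 - T_inf) * exp(-t/tau), tau = m*c/(h*A).
    Valid when the Biot number h*L/k is small (roughly < 0.1), so the
    body can be treated as a single node at uniform temperature."""
    tau = m * c / (h * A)             # thermal time constant [s]
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

# Illustrative numbers: a small steel part cooling in air.
T = lumped_node_temperature(t=60.0, T0=200.0, T_inf=25.0,
                            h=25.0, A=0.01, m=0.1, c=500.0)
print(round(T, 1))
```

This is exactly the "less-is-more" pattern the text advocates: the one-node answer comes for free, and numerical discretization is only warranted once the Biot-number check fails.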
Descriptive documentation for New Mexico electricity econometric final demand model
Energy Technology Data Exchange (ETDEWEB)
Baxter, J.D.; Ben-David, S.
1981-01-01
A mathematical model is developed for computing consumption and residential electric power demands for New Mexico. Factors considered in developing the model included: the number of electric utility customers; past consumption data; household devices using electric power and their energy efficiencies; climatic conditions; and power costs. (LCL)
Great Plains ASPEN model development: Phosam section. Final topical report
Energy Technology Data Exchange (ETDEWEB)
Stern, S S; Kirman, J J
1985-02-01
An ASPEN model has been developed of the PHOSAM Section, Section 4600, of the Great Plains Gasification Plant. The bases for this model are the process description given in Section 6.18 of the Great Plains Project Management Plan and the Lummus Phosam Schematic Process Flow Diagram, Dwg. No. SKD-7102-IM-O. The ASPEN model contains the complete set of components that are assumed to be in the gasifier effluent. The model is primarily a flowsheet simulation that gives the material and energy balances and equipment duties for a given set of process conditions. It cannot fully predict the changes in process conditions that would result from load changes on equipment of fixed size, as a rating model would. The model can be used to simulate the steady-state operation of the plant at or near design conditions, or to design other PHOSAM units. Because of the limited amount of process information available, several major process assumptions had to be made in developing the flowsheet model. The patent literature was consulted to establish the ammonia concentration in the circulating fluid. Case studies were made with the ammonia content of the feed 25% higher and 25% lower than the base feed. Results of these runs show slightly lower recoveries of ammonia with less ammonia in the feed. As expected, the duties of the Stripper and Fractionator reboilers were higher with more ammonia in the feed. 63 references.
Power Management and Distribution (PMAD) Model Development: Final Report
Metcalf, Kenneth J.
2011-01-01
Power management and distribution (PMAD) models were developed in the early 1990s to model candidate architectures for various Space Exploration Initiative (SEI) missions. They were used to generate "ballpark" component mass estimates to support conceptual PMAD system design studies. The initial set of models was provided to NASA Lewis Research Center (since renamed Glenn Research Center) in 1992. They were developed to estimate the characteristics of power conditioning components predicted to be available in the 2005 timeframe. Early-1990s component and device designs and material technologies were projected forward to the 2005 timeframe, and algorithms reflecting those design and material improvements were incorporated into the models to generate mass, volume, and efficiency estimates for circa-2005 components. The models are now about ten years old, and NASA GRC requested a review of them to determine whether they should be updated to bring them into agreement with current performance projections or to incorporate unforeseen design or technology advances. This report documents the results of this review, along with the updated power conditioning models and the new transmission line models generated to estimate post-2005 PMAD system masses and sizes. This effort continues the expansion and enhancement of a library of PMAD models developed to allow system designers to assess future power system architectures and distribution techniques quickly and consistently.
Regional forecasting with global atmospheric models; Final report
Energy Technology Data Exchange (ETDEWEB)
Crowley, T.J.; Smith, N.R. [Applied Research Corp., College Station, TX (United States)
1994-05-01
The purpose of the project was to conduct model simulations of past and future climate change with respect to the proposed Yucca Mtn. repository. The authors report on three main topics. The first is boundary conditions for paleo-hindcast studies; these conditions are necessary for conducting three to four model simulations and have been prepared for future runs. The second is (a) comparison of the atmospheric general circulation model (GCM) with observations and with other GCMs, and (b) development of a better precipitation database for the Yucca Mtn. region for comparisons with models; these tasks have been completed. The third is preliminary assessment of future climate change. Energy balance model (EBM) simulations suggest that the greenhouse effect will likely dominate climate change at Yucca Mtn. for the next 10,000 years. The EBM study should improve the rational choice of GCM CO2 scenarios for future climate change.
Popularity Modeling for Mobile Apps: A Sequential Approach.
Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong
2015-07-01
The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
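At the core of any HMM-based popularity model is the forward algorithm for evaluating observation sequences. A minimal discrete-HMM sketch follows; the states, observation buckets, and probabilities are invented stand-ins, not the PHMM's actual states or parameters:

```python
def forward_likelihood(obs, start, trans, emit):
    """Forward algorithm for a discrete HMM: returns P(obs | model).
    `start` gives initial state probabilities, `trans` the state
    transition matrix, `emit` the per-state observation probabilities."""
    states = list(start)
    # alpha[s] = P(observations so far, current state = s)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[r] * trans[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Two hidden popularity states and coarse chart-rank observations.
start = {"rising": 0.6, "fading": 0.4}
trans = {"rising": {"rising": 0.8, "fading": 0.2},
         "fading": {"rising": 0.3, "fading": 0.7}}
emit = {"rising": {"top": 0.7, "mid": 0.3},
        "fading": {"top": 0.2, "mid": 0.8}}
print(forward_likelihood(["top", "top", "mid"], start, trans, emit))
```

In an application such as ranking-fraud detection, sequences with very low likelihood under the fitted model are the anomalous candidates; the PHMM additionally handles heterogeneous observations (rankings, ratings, reviews) rather than a single discrete symbol.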
A Bayesian Shrinkage Approach for AMMI Models.
da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio
2015-01-01
Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criteria for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible interval for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, and resulted in models being selected that were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and cross validation based on leave-one-out. This characteristic allowed more parsimonious models to be chosen and more GEI pattern retained on the first two components. The resulting model chosen by posterior distribution of singular value was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible interval for AMMI biplot plus the choice of AMMI model based on direct posterior
Conceptual modelling approach of mechanical products based on functional surface
Institute of Scientific and Technical Information of China (English)
(author not listed)
2007-01-01
A modelling framework based on functional surfaces is presented to support the conceptual design of mechanical products. The framework organizes product information in an abstract, multilevel manner. It consists of two mapping processes: function decomposition and form reconstitution. A stable mapping from function to form (function - functional surface - form) is realized by taking the functional surface as the middle layer. This greatly reduces the combinatorial explosion that can occur during function decomposition and form reconstitution. Finally, CAD tools are developed, and an auto-bender machine is used to demonstrate the proposed approach.
Alligator Rivers Analogue project. Hydrogeological modelling. Final Report - Volume 6
Energy Technology Data Exchange (ETDEWEB)
Townley, L.R.; Trefry, M.G.; Barr, A.D. [CSIRO Div of Water Resources, PO Wembley, WA (Australia); Braumiller, S. [Univ of Arizona, Tucson, AZ (United States). Dept of Hydrology and Water Resources; Kawanishi, M. [Central Research Institute of Electric Power Industry, Abiko-Shi, Chiba-Ken (Japan)] [and others
1992-12-31
This volume describes the hydrogeological modelling carried out as part of the Alligator Rivers Analogue Project. Hydrogeology has played a key integrating role in the Project, largely because water movement is believed to have controlled the evolution of the Koongarra uranium orebody and therefore affects field observations of all types at all scales. The aquifer testing described here uses the concept of transmissivity in its interpretation of aquifer response to pumping. The concept of an aquifer, a layer transmitting significant quantities of water in a mainly horizontal direction, seems hard to accept in an environment as heterogeneous as that at Koongarra, but modelling of aquifers both in one dimension and in two dimensions in plan has contributed significantly to our understanding of the site. A one-dimensional model with three layers (often described as a quasi-two-dimensional model) was applied to flow between the Fault and Koongarra Creek. Being transient, this model was able to show that reverse flows can indeed occur back towards the Fault, but only if there is distributed recharge over the orebody as well as a mechanism for the Fault, or a region near the Fault, to remove water from the simulated cross-section. The model also showed clearly that the response of the three-layered system, consisting of a highly weathered zone, a fractured transmissive zone, and a less conductive lower schist zone, is governed mainly by the transmissivity and storage coefficient of the middle layer; the storage coefficient of the upper layer has little effect. A two-dimensional model in plan used a description of anisotropy to show that reverse flows can also occur even without a conducting Fault. Modelling of a three-dimensional region using discrete fractures showed that it is certainly possible to simulate systems like that observed at Koongarra, but that large amounts of data are probably needed to obtain realistic descriptions of the fracture networks. Inverse modelling
Modeling Results For the ITER Cryogenic Fore Pump. Final Report
Energy Technology Data Exchange (ETDEWEB)
Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)
2014-03-31
A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore, the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model's development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.
Multiscale Model Approach for Magnetization Dynamics Simulations
De Lucia, Andrea; Tretiakov, Oleg A; Kläui, Mathias
2016-01-01
Simulations of magnetization dynamics in a multiscale environment enable rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within them, in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate spin wave transmission across regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with f...
Continuum modeling an approach through practical examples
Muntean, Adrian
2015-01-01
This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws, and (3) modeling of boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media, or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced with nonstandard rheological topics like those typically arising in the food industry.
A Multivariate Approach to Functional Neuro Modeling
DEFF Research Database (Denmark)
Mørch, Niels J.S.
1998-01-01
This Ph.D. thesis, A Multivariate Approach to Functional Neuro Modeling, deals with the analysis and modeling of data from functional neuro imaging experiments. A multivariate dataset description is provided which facilitates efficient representation of typical datasets and, more importantly...... and overall conditions governing the functional experiment, via associated micro- and macroscopic variables. The description facilitates an efficient microscopic re-representation, as well as a handle on the link between brain and behavior; the latter is achieved by hypothesizing variations in the micro...... a generalization theoretical framework centered around measures of model generalization error. - Only few, if any, examples of the application of generalization theory to functional neuro modeling currently exist in the literature. - Exemplification of the proposed generalization theoretical framework...
Interfacial Fluid Mechanics A Mathematical Modeling Approach
Ajaev, Vladimir S
2012-01-01
Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: Discusses mathematical models in the context of actual applications such as electrowetting Includes unique material on fluid flow near structured surfaces and phase change phenomena Shows readers how to solve modeling problems related to microscale multiphase flows Interfacial Fluid Me...
Dental Health Care Models of Southwest Cultures. Final Report.
Pettibone, Timothy J.; Solis, Enrique, Jr.
The major goal of this research was the development and validation of cultural models of dental health practices. The specific objectives were to determine if 3 cultural groups (American Indians, Mexican Americans, and Anglo Americans) differ in the dental health hygiene indices, characteristics, psychological factors, or social factors; to…
Operational Model for Career Development and Vocational Preparation. Final Report.
Upton, Anne L.; Barrett, Samuel L.
Three California State Department units (vocational education, pupil personnel services, and career education) and two school districts (Fremont Unified and Huntington Beach Union High) established a consortium to develop demonstration sites for model career development and vocational preparation systems and staff development programs. The…
Efforts - Final technical report on task 4. Physical modelling validation
DEFF Research Database (Denmark)
Andreasen, Jan Lasson; Olsson, David Dam; Christensen, T. W.
The present report documents the work carried out by DTU in Task 4, Physical modelling - validation, of the Brite/Euram project No. BE96-3340, contract No. BRPR-CT97-0398, entitled Enhanced Framework for forging design using reliable three-dimensional simulation (EFFORTS). The report...
Ambient Weather Model Research and Development: Final Report.
Energy Technology Data Exchange (ETDEWEB)
Walker, Stel Nathan; Wade, John Edward
1990-08-31
Ratings for Bonneville Power Administration (BPA) transmission lines are based upon the IEEE Standard for Calculation of Bare Overhead Conductor Temperatures and Ampacity under Steady-State Conditions (1985). This steady-state model is very sensitive to the ambient weather conditions of temperature and wind speed. The model does not account for wind yaw, turbulence, or conductor roughness, as proposed by Davis (1976) for a real-time rating system. The objectives of this research have been to (1) determine how conservative the present rating system is for typical ambient weather conditions, (2) develop a probability-based methodology, (3) compile available weather data into a compatible format, and (4) apply the rating methodology to a hypothetical line. The potential benefit of this research is the ability to rate transmission lines statistically, which will allow BPA to take advantage of any unknown thermal capacity. The present deterministic weather model is conservative overall, and studies suggest a refined model will uncover additional unknown capacity. 14 refs., 40 figs., 7 tabs.
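The steady-state rating described above balances Joule heating against convective and radiative cooling minus solar gain. The sketch below solves that balance for current; the film coefficient is a crude placeholder rather than the IEEE standard's full convection correlations, and the conductor numbers are only indicative:

```python
import math

def steady_state_ampacity(T_c, T_a, wind, D, R_ac, q_solar, emissivity=0.5):
    """Very simplified steady-state conductor rating in the spirit of
    the IEEE heat-balance standard: solve I^2 * R = q_conv + q_rad - q_solar
    for the current I (per metre of conductor)."""
    sigma = 5.67e-8                        # Stefan-Boltzmann [W/m^2 K^4]
    h = 4.0 + 4.0 * math.sqrt(wind)        # crude film coefficient, assumed
    area = math.pi * D                     # surface area per metre [m^2/m]
    q_conv = h * area * (T_c - T_a)        # convective loss [W/m]
    q_rad = emissivity * sigma * area * (
        (T_c + 273.15) ** 4 - (T_a + 273.15) ** 4)   # radiative loss [W/m]
    q_net = q_conv + q_rad - q_solar       # heat the current may dissipate
    return math.sqrt(max(q_net, 0.0) / R_ac)

# Drake-like conductor: 75 degC limit, 25 degC air, 0.6 m/s wind.
print(round(steady_state_ampacity(T_c=75.0, T_a=25.0, wind=0.6,
                                  D=0.0281, R_ac=8.7e-5, q_solar=25.0)))
```

The sensitivity the report highlights is visible directly in the balance: the convective term scales with both wind speed and the conductor-air temperature difference, so small changes in assumed ambient conditions move the rating substantially.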
Systematic approach to MIS model creation
Directory of Open Access Journals (Sweden)
Macura Perica
2004-01-01
Full Text Available In this paper, by applying the basic principles of the general theory of systems (the systems approach), we formulate a model of a marketing information system. The research is based on the main characteristics of the systems approach and of the marketing system. The informational basis for managing the marketing system, i.e. the marketing instruments, is presented by listing the information most important for decision making under each individual marketing mix instrument. In the projected model of the marketing information system, the information listed in this way forms the basis for establishing databases (databases of product, price, distribution, and promotion). The paper gives the basic preconditions for the formulation and functioning of the model. The model is presented by explicating the elements of its structure (environment, databases, operators, information system analysts, decision makers - managers; i.e. input, process, output, feedback, and the relations between these elements) which are necessary for its optimal functioning. In addition, the basic elements for implementing the model in a business system are given, as well as the conditions for its efficient functioning and development.
Computational modeling of drug-resistant bacteria. Final report
Energy Technology Data Exchange (ETDEWEB)
MacDougall, Preston [Middle Tennessee State Univ., Murfreesboro, TN (United States)
2015-03-12
Initial proposal summary: The evolution of antibiotic-resistant mutants among bacteria (superbugs) is a persistent and growing threat to public health. In many ways, we are engaged in a war with these microorganisms, where the corresponding arms race involves chemical weapons and biological targets. Just as advances in microelectronics, imaging technology and feature recognition software have turned conventional munitions into smart bombs, the long-term objectives of this proposal are to develop highly effective antibiotics using next-generation biomolecular modeling capabilities in tandem with novel subatomic feature detection software. Using model compounds and targets, our design methodology will be validated with correspondingly ultra-high resolution structure-determination methods at premier DOE facilities (single-crystal X-ray diffraction at Argonne National Laboratory, and neutron diffraction at Oak Ridge National Laboratory). The objectives and accomplishments are summarized.
Final Report: Center for Programming Models for Scalable Parallel Computing
Energy Technology Data Exchange (ETDEWEB)
Mellor-Crummey, John [William Marsh Rice University
2011-09-13
As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts centered on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This involved working with the teams that provide the infrastructure on which CAF relies, implementing new language and runtime features, producing an open-source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.
Advanced geothermal hydraulics model -- Phase 1 final report, Part 2
Energy Technology Data Exchange (ETDEWEB)
W. Zheng; J. Fu; W. C. Maurer
1999-07-01
An advanced geothermal well hydraulics model (GEODRIL) is being developed to accurately calculate bottom-hole conditions in hot geothermal wells. In Phase 1, real-time monitoring and other improvements were added to GEODRIL. In Phase 2, GEODRIL will be integrated into Marconi's Intelligent Drilling Monitor (IDM), which will use artificial intelligence to detect lost circulation, fluid influxes and other circulation problems in geothermal wells. This software platform has the potential to significantly reduce geothermal drilling costs.
Advanced numerical modelling of a fire. Final report
Energy Technology Data Exchange (ETDEWEB)
Heikkilae, L.; Keski-Rahkonen, O. [VTT Building Technology, Espoo (Finland)
1996-03-01
Experience and probabilistic risk assessments show that fires present a major hazard in a nuclear power plant (NPP). The PALOME project (1988-92) improved the quality of numerical simulation of fires to make it a useful tool for fire safety analysis. Some of the most advanced zone-model fire simulation codes were acquired. The performance of the codes was studied through literature and personal interviews in earlier studies, and the BRI2 code from the Japanese Building Research Institute was selected for further use. In the PALOME 2 project this work was continued. Information obtained from large-scale fire tests at the German HDR facility allowed reliable prediction of the rate of heat release and was used for code validation. The BRI2 code was validated particularly by participation in the CEC standard problem 'Prediction of effects caused by a cable fire experiment within the HDR-facility'. Participation in the development of a new field-model code, SOFIE, specifically for fire applications, as a British-Swedish-Finnish cooperation, was one of the goals of the project. The SOFIE code was implemented at VTT and the first results of validation simulations were obtained. Well-instrumented fire tests on electronic cabinets were carried out to determine source terms for simulation of room fires and to estimate fire spread to adjacent cabinets. The particular aim of this study was to measure the rate of heat release from a fire in an electronic cabinet. From the three tests, differing mainly in the amount of the fire load, data were obtained for source terms in numerical modelling of fires in rooms containing electronic cabinets. On the basis of these tests a simple natural ventilation model was also derived. (19 refs.).
Energy Technology Data Exchange (ETDEWEB)
Pakrasi, Himadri [Washington Univ., St. Louis, MO (United States)
2016-09-01
The overall objective of this project was to use a systems biology approach to evaluate the potential of a number of cyanobacterial strains for photobiological production of advanced biofuels and/or their chemical precursors. Cyanobacteria are oxygen-evolving photosynthetic prokaryotes. Among them, certain unicellular species such as Cyanothece can also fix N₂, a process that is exquisitely sensitive to oxygen. To accommodate such incompatible processes in a single cell, Cyanothece produces oxygen during the day and creates an O₂-limited intracellular environment during the night to perform O₂-sensitive processes such as N₂ fixation. Thus, Cyanothece cells are natural bioreactors for the storage of captured solar energy with subsequent utilization at a different time during a diurnal cycle. Our studies included the identification of a novel, fast-growing, mixotrophic, transformable cyanobacterium. This strain has been sequenced and will be made available to the community. In addition, we have developed genome-scale models for a family of cyanobacteria to assess their metabolic repertoire. Furthermore, we developed a method for rapid construction of metabolic models using multiple annotation sources and a metabolic model of a related organism. This method will allow rapid annotation and screening of potential phenotypes based on the newly available genome sequences of many organisms.
A global sensitivity analysis approach for morphogenesis models
Boas, Sonja E. M.
2015-11-21
Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
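The variance-based flavor of global sensitivity analysis described above can be sketched in a few lines with a pick-freeze estimator of first-order Sobol indices. This is a generic illustration on a toy model whose indices are known analytically, not the CPM workflow from the paper; the model function is invented.

```python
import numpy as np

def model(x):
    # Toy stand-in for an expensive 'black-box' morphogenesis model:
    # the output depends weakly on x1 and strongly on x2.
    return x[:, 0] + 2.0 * x[:, 1]

rng = np.random.default_rng(42)
n, d = 200_000, 2
A = rng.uniform(0.0, 1.0, (n, d))
B = rng.uniform(0.0, 1.0, (n, d))
yA, yB = model(A), model(B)

# Pick-freeze estimator of first-order Sobol indices:
# S_i = Var(E[Y|X_i]) / Var(Y), estimated from C_i = B with column i from A.
S = []
for i in range(d):
    C = B.copy()
    C[:, i] = A[:, i]
    yC = model(C)
    S.append((np.mean(yA * yC) - np.mean(yA) * np.mean(yB)) / np.var(yA))

print([round(s, 2) for s in S])  # analytically S1 = 1/5, S2 = 4/5 for this model
```

For the linear toy model, Var(Y) = (1 + 4)/12, so the estimator should recover S1 ≈ 0.2 and S2 ≈ 0.8; the same machinery applies unchanged when `model` is an expensive simulation.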
Regularization of turbulence - a comprehensive modeling approach
Geurts, B. J.
2011-12-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations, in which one aims to capture only the primary features of a flow at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, is reviewed. Its application to multiphase turbulence is illustrated, in which a basic regularization principle is enforced to approximate momentum and scalar transport in a physically consistent way. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling for turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred is given on the basis of homogeneous isotropic turbulence.
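As a concrete illustration of the regularization principle mentioned above, the Leray model replaces the advecting velocity in the Navier-Stokes equations by a smoothed (filtered) field. A sketch of the standard formulation, with the filter kernel and width left unspecified:

```latex
% Navier-Stokes:
%   \partial_t u + (u \cdot \nabla) u = -\nabla p + \nu \nabla^2 u,
%   \qquad \nabla \cdot u = 0
% Leray regularization: advect with the filtered velocity \bar{u}
\partial_t u + (\bar{u} \cdot \nabla) u
  = -\nabla p + \nu \nabla^2 u,
\qquad \nabla \cdot u = 0,
\qquad \bar{u} = G_\alpha * u
```

Here $G_\alpha$ denotes a spatial filter of width $\alpha$; smoothing only the advecting velocity tames the small scales while retaining the transported field, which is the sense in which the closure is "enforced physically consistently".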
Exploring New Physics Beyond the Standard Model: Final Technical Report
Energy Technology Data Exchange (ETDEWEB)
Wang, Liantao [Univ. of Chicago, IL (United States)
2016-10-17
This grant, for 2015 to 2016, supported research in theoretical High Energy Physics. The supported research focused mainly on the energy frontier, but it also has connections to both the cosmic and intensity frontiers. Lian-Tao Wang (PI) focused mainly on signals of new physics at colliders. The period 2015-2016, covered by this grant, was an exciting one for digesting the influx of LHC data, understanding its meaning, and using it to refine strategies for deeper exploration. The PI proposed new methods of searching for new physics at the LHC, such as searches for compressed stops. He also investigated in detail the signals of composite Higgs models, focusing on spin-1 composite resonances in the di-boson channel, and considered di-photon final states as a probe of such models. He also contributed to formulating search strategies for dark matter at the LHC, resulting in two documents with recommendations. The PI has also been active in studying the physics potential of future colliders, including Higgs factories and 100 TeV pp colliders. He gave a comprehensive overview of the physics potential of the high-energy proton collider and outlined its luminosity targets, and studied the use of lepton colliders to probe the fermionic Higgs portal and bottom-quark couplings to the Z boson.
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and obtaining many measurements is difficult or very costly; the lack of data can then be addressed by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, an area containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs, improving characteristics such as the roof surfaces and consequently leading to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model, showing similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
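The per-pixel core of such a merge can be sketched as a Gaussian Bayesian update: each DSM contributes a height estimate with a variance, and the posterior is their precision-weighted combination. This is a simplified illustration, not the authors' full model (which also uses entropy-based smoothness priors on roofs); the grids and variances below are invented.

```python
import numpy as np

def merge_dsms(z1, var1, z2, var2):
    """Per-pixel Bayesian fusion of two DSMs with Gaussian height errors.

    The posterior mean is the precision-weighted average of the inputs;
    the posterior variance is the inverse of the summed precisions.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    z_post = (w1 * z1 + w2 * z2) / (w1 + w2)
    var_post = 1.0 / (w1 + w2)
    return z_post, var_post

# Two toy 2x2 height grids (metres); DSM 1 is assumed the more accurate one.
z1 = np.array([[10.0, 12.0], [11.0, 13.0]])
z2 = np.array([[10.8, 12.8], [11.8, 13.8]])
z, v = merge_dsms(z1, 1.0, z2, 4.0)
print(z[0, 0], v)  # 10.16 0.8
```

The fused variance (0.8) is below either input variance, which is the formal sense in which merging "improves the quality" of the DSMs wherever the error models hold.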
[Model for introducing or revitalizing the final monograph].
Saupe, Rosita; Wendhausen, Agueda Lenita Pereira; Machado, Heloisa Beatriz
2004-01-01
The requirement set by the Curricular Guidelines for a Course Conclusion Work in the nursing area is an innovation for the majority of courses in Brazil. Only a few courses have introduced this type of study as a result of their forward-looking vision. This requirement has demanded an effort from the universities to ensure that these studies do not merely represent an academic exercise, but also serve as an institutional quality indicator and a possible contribution to the solution of social problems. Our proposed model includes: defining lines of research, grouping researchers by area of interest, organizing research groups and centers, defining the preferred types of studies, planning operational agendas, following up on their introduction, and encouraging their publication.
Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach
Directory of Open Access Journals (Sweden)
Alistair McNair Senior
2016-01-01
Full Text Available Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition. Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
Energy Technology Data Exchange (ETDEWEB)
Hardy, Robert Douglas; Holcomb, David Joseph; Gettemy, Glen L.; Fossum, Arlo Frederick; Rivas, Raul R.; Bronowski, David R.; Preece, Dale S.
2004-10-01
The purpose of the present work is to increase our understanding of which properties of geomaterials most influence the penetration process, with the goal of improving our predictive ability. Two primary approaches were followed: development of a realistic constitutive model for geomaterials, and design of an experimental approach to study penetration from the target's point of view. A realistic constitutive model, with parameters based on measurable properties, can be used for sensitivity analysis to determine the properties that are most important in influencing the penetration process. An immense literature is devoted to the problem of predicting penetration into geomaterials or similar man-made materials such as concrete. Various formulations use an analytic or, more commonly, numerical solution for spherical or cylindrical cavity expansion as a sort of Green's function to establish the forces acting on a penetrator. This approach has had considerable success in modeling the behavior of penetrators, both as to path and depth of penetration. However, the approach is not well adapted to the problem of understanding what is happening to the material being penetrated. Without a picture of the stress and strain state imposed on the highly deformed target material, it is not easy to determine which properties of the target are important in influencing the penetration process. We developed an experimental arrangement that allows greater control of the deformation than is possible in actual penetrator tests, yet approximates the deformation processes imposed by a penetrator. Using explosive line charges placed in a central borehole, we loaded cylindrical specimens in a manner equivalent to an increment of penetration, allowing the measurement of the associated strains and accelerations and the retrieval of specimens from the more-or-less intact cylinder. Results show clearly that the deformation zone is highly concentrated
A new approach for Bayesian model averaging
Institute of Scientific and Technical Information of China (English)
TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun
2012-01-01
Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that the BMA weights must sum to one, and then use a limited-memory quasi-Newton algorithm to solve the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to that of the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
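The weight-constraint issue can be made concrete with a small sketch: below, the BMA mixture log-likelihood is maximized with SciPy's L-BFGS-B (a limited-memory quasi-Newton method), with weights kept positive and normalized via a softmax reparameterization. Note this differs from the paper, which instead modifies the likelihood to drop the sum-to-one constraint, so treat it as an illustrative variant on invented toy data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
truth = rng.normal(0.0, 1.0, n)                   # observations
f = np.stack([truth + rng.normal(0, 0.3, n),      # ensemble member 1: accurate
              truth + rng.normal(1.0, 1.0, n)])   # ensemble member 2: biased, noisy

def neg_log_lik(params):
    theta, log_sigma = params[:2], params[2]
    w = np.exp(theta) / np.exp(theta).sum()       # softmax: w > 0, sums to 1
    sigma = np.exp(log_sigma)
    # BMA predictive density: sum_k w_k * N(y; f_k, sigma^2)
    dens = w[:, None] * np.exp(-0.5 * ((truth - f) / sigma) ** 2) / (
        sigma * np.sqrt(2 * np.pi))
    return -np.log(dens.sum(axis=0)).sum()

res = minimize(neg_log_lik, x0=np.zeros(3), method="L-BFGS-B")
w = np.exp(res.x[:2]) / np.exp(res.x[:2]).sum()
print(w.round(3))  # the accurate ensemble member should receive most weight
```

The EM alternative would iterate responsibility and weight updates instead; the quasi-Newton route simply treats the (reparameterized) likelihood as an unconstrained optimization problem.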
Final model independent result of DAMA/LIBRA-phase1
Energy Technology Data Exchange (ETDEWEB)
Bernabei, R.; D'Angelo, S.; Di Marco, A. [Universita di Roma "Tor Vergata", Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma "Tor Vergata", Rome (Italy); Belli, P. [INFN, sez. Roma "Tor Vergata", Rome (Italy); Cappella, F.; D'Angelo, A.; Prosperi, D. [Universita di Roma "La Sapienza", Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy); Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy); Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China); Incicchitti, A. [INFN, sez. Roma, Rome (Italy); Montecchia, F. [INFN, sez. Roma "Tor Vergata", Rome (Italy); Universita di Roma "Tor Vergata", Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy); Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)
2013-12-15
The results obtained with the total exposure of 1.04 ton x yr collected by DAMA/LIBRA-phase1, deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton x yr exposure), are presented. The DAMA/LIBRA-phase1 data give evidence for the presence of Dark Matter (DM) particles in the galactic halo, on the basis of the exploited model-independent DM annual modulation signature, using a highly radio-pure NaI(Tl) target, at 7.5σ C.L. Including also the first-generation DAMA/NaI experiment (cumulative exposure 1.33 ton x yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is (0.0112±0.0012) cpd/kg/keV; the measured phase is (144±7) days and the measured period is (0.998±0.002) yr, values well in agreement with those expected for DM particles. No systematic effect or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)
Community Earth System Model (CESM) Tutorial 2016 Final Report
Energy Technology Data Exchange (ETDEWEB)
Lamarque, Jean-Francois [Univ. Corporation for Atmospheric Research (UCAR) and National Center for Atmospheric Research (NCAR) and Climate and Global Dynamics Laboratory (CGD), Boulder, CO (United States)
2017-05-09
For the 2016 tutorial, NCAR/CGD requested a total budget of $70,000, split equally between DOE and NSF. The funds were used to support student participation (travel, lodging, per diem, etc.). Lecture and practical-session support was primarily provided by local participants at no additional cost. The seventh annual Community Earth System Model (CESM) tutorial for students and early career scientists was held 8-12 August 2016. As has been the case over the last few years, the event was extremely successful, and demand exceeded the number of available places. There was continued interest, in support of the NSF's EaSM Infrastructure awards, in training those awardees in the application of CESM. Based on suggestions from previous tutorial participants, the 2016 tutorial again provided a direct connection to Yellowstone for each individual participant (rather than for pairs), and used the NCAR Mesa Library. The 2016 tutorial included lectures on simulating the climate system and practical sessions on running CESM, modifying components, and analyzing data, targeted at the graduate student level. In addition, specific "Application" talks were introduced this year to give participants in-depth knowledge of specific aspects of CESM.
Huet, Michael; Jacobs, David M.; Camachon, Cyril; Missenard, Olivier; Gray, Rob; Montagne, Gilles
2011-01-01
The present study reports two experiments in which a total of 20 participants without prior flight experience practiced the final approach phase in a fixed-base simulator. All participants received self-controlled concurrent feedback during 180 practice trials. Experiment 1 shows that participants learn more quickly under variable practice…
Development of a risk-analysis model. Final report
Energy Technology Data Exchange (ETDEWEB)
1979-10-01
This report consists of a main body, which provides a presentation of risk analysis and its general and specific application to the needs of the Office of Buildings and Community Systems of the Department of Energy (BCS/DOE), and several case studies employing the risk-analysis model developed. The highlights include a discussion of how risk analysis is currently used in the private, regulated, and public sectors and how this methodology can be employed to meet the policy-analysis needs of BCS/DOE. After a review of the primary methodologies available for risk analysis, it was determined that Monte Carlo simulation techniques provide the greatest degree of visibility into uncertainty in the decision-making process. Although the data-collection requirements can be demanding, the benefits, when compared to other methods, are substantial. The data-collection problem can be significantly reduced, without sacrificing proprietary-information rights, if prior arrangements are made with RD&D contractors to provide responses to reasonable requests for base-case data. A total of three case studies were performed on BCS technologies: a gas-fired heat pump; a 1000 ton/day anaerobic digestion plant; and a district heating and cooling system. The three case studies, plus the risk-analysis methodology, were issued as separate reports. It is concluded, based on the overall research into risk analysis and the case-study experience, that the risk-analysis methodology has significant potential as a policy-evaluation tool within BCS.
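The Monte Carlo approach the report favors can be sketched generically: propagate uncertainty in a few input assumptions through a simple economic model and read decision risk off the output distribution. The cost model and distributions below are invented placeholders, not figures from the case studies.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Uncertain inputs for a hypothetical district heating and cooling project:
capital = rng.triangular(8e6, 1e7, 1.5e7, n)   # capital cost, $ (min, mode, max)
annual_saving = rng.normal(1.2e6, 2e5, n)      # yearly energy savings, $
lifetime = 20                                  # project lifetime, years
rate = 0.08                                    # discount rate

# Net present value of each simulated outcome (savings as a level annuity).
annuity = (1 - (1 + rate) ** -lifetime) / rate
npv = annual_saving * annuity - capital

p_loss = (npv < 0).mean()                      # probability of negative NPV
p10, p50, p90 = np.percentile(npv, [10, 50, 90])
print(f"P(loss) = {p_loss:.2f}, median NPV = {p50:,.0f}")
```

The output distribution, rather than a single point estimate, is what gives the decision maker "visibility into uncertainty": a technology can have a positive expected NPV and still carry a substantial probability of loss.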
Final Report for Harvesting a New Wind Crop: Innovative Economic Approaches for Rural America
Energy Technology Data Exchange (ETDEWEB)
Susan Innis; Randy Udall; Project Officer - Keith Bennett
2005-09-30
Final Report for "Harvesting a New Wind Crop: Innovative Economic Approaches for Rural America": This project helped stimulate wind development by rural electric cooperatives and municipal utilities in Colorado. To date, most wind power development in the United States has been driven by large investor-owned utilities serving major metropolitan areas. To meet the 5% by 2020 goal of the Wind Powering America program, the 2,000 municipal utilities and 900 rural electric cooperatives in the country must get involved in wind power development. Public power typically serves rural and suburban areas and can play a role in revitalizing communities by tapping into the economic development potential of wind power. One barrier to the involvement of public power in wind development has been the perception that wind power is more expensive than other generation sources. This project focused on two ways to reduce the costs of wind power to make it more attractive to public power entities. The first was to develop a revenue stream from the sale of green tags. By selling green tags to entities that voluntarily support wind power, rural coops and munis can effectively reduce their cost of wind power. Western Resource Advocates (WRA) and the Community Office for Resource Efficiency (CORE) worked with Lamar Light and Power and the Arkansas River Power Authority to develop a strategy to use green tags to help finance their wind project. These utilities are now selling their green tags to Community Energy, Inc., an independent for-profit marketer who in turn sells the tags to consumers around Colorado. The Lamar tags allow the University of Colorado-Boulder, the City of Boulder, NREL and other businesses to support wind power development and make the claim that they are "wind-powered". This urban-rural partnership is an important development for the state of Colorado's rural communities
Evertson, Carolyn M.; And Others
A summary is presented of the final report, "Effective Classroom Management and Instruction: An Exploration of Models." The final report presents a set of linked investigations of the effects of training teachers in effective classroom management practices in a series of school-based workshops. Four purposes were addressed by the study: (1) to…
AN AUTOMATIC APPROACH TO BOX & JENKINS MODELLING
MARCELO KRIEGER
1983-01-01
Despite wide recognition of the quality of the forecasts obtained by applying an ARIMA model to univariate time series, the approach is not widely used because of the lack of automatic, computerized procedures. In this work this problem is discussed and an algorithm is proposed.
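A minimal version of such an automatic procedure, restricted to the autoregressive part of the Box & Jenkins family, can be sketched as an information-criterion search: fit AR(p) by least squares for each candidate order and keep the order with the best BIC. This is a generic illustration of automated order selection, not the algorithm proposed in the paper.

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model; returns (coefficients, residual variance)."""
    # Column i holds lag i+1 of the series, aligned with target y[p:].
    X = np.column_stack([y[p - i - 1: len(y) - i - 1] for i in range(p)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    return coef, resid.var()

def select_order(y, max_p=6):
    """Pick the AR order minimizing BIC = n*log(s^2) + p*log(n)."""
    n = len(y)
    best = None
    for p in range(1, max_p + 1):
        _, s2 = fit_ar(y, p)
        bic = n * np.log(s2) + p * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, p)
    return best[1]

# Simulate an AR(2) series and let the procedure recover its order.
rng = np.random.default_rng(3)
y = np.zeros(2000)
for t in range(2, len(y)):
    y[t] = 0.6 * y[t - 1] - 0.4 * y[t - 2] + rng.normal()
print(select_order(y))
```

A full automatic Box & Jenkins procedure would also search differencing and moving-average orders and check residual diagnostics; the BIC-driven loop above is the skeleton such algorithms share.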
Modeling in transport phenomena a conceptual approach
Tosun, Ismail
2007-01-01
Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented, so that students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to
Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.
Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J
2016-01-01
Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
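As a toy illustration of the pipeline the abstract describes (agent-based contests feeding a social network analysis), the sketch below simulates pairwise dominance contests and ranks agents by a simple network metric, out-degree in the win network. The contest rule and all names are illustrative assumptions, not the authors' nutritionally explicit model.

```python
import random

def simulate_contests(strengths, n_rounds=2000, seed=1):
    """Pairwise contests: agent i beats agent j with probability
    s_i / (s_i + s_j); returns a win-count adjacency matrix (directed network)."""
    rng = random.Random(seed)
    n = len(strengths)
    wins = [[0] * n for _ in range(n)]
    for _ in range(n_rounds):
        i, j = rng.sample(range(n), 2)
        p_i = strengths[i] / (strengths[i] + strengths[j])
        winner, loser = (i, j) if rng.random() < p_i else (j, i)
        wins[winner][loser] += 1
    return wins

def dominance_ranking(wins):
    """Rank agents by total wins (out-degree), a simple dominance metric."""
    totals = [sum(row) for row in wins]
    return sorted(range(len(wins)), key=lambda a: -totals[a])
```

Richer metrics (e.g., eigenvector centrality or David's score) can be computed on the same adjacency matrix, which is the point of coupling the ABM output to network analysis.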
Pedagogic process modeling: Humanistic-integrative approach
Directory of Open Access Journals (Sweden)
Boritko Nikolaj M.
2007-01-01
The paper deals with some current problems of modeling the dynamics of the subject-features development of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logic and regularity of the development of the process; discreteness (stageability) indicates qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage and an algorithm for developing an integrative model of it are offered. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and realistic prediction of pedagogical phenomena.
Nuclear level density: Shell-model approach
Sen'kov, Roman; Zelevinsky, Vladimir
2016-06-01
Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.
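The "combinatorics of a Fermi gas" baseline mentioned above is conventionally captured by the Bethe formula, rho(E) = sqrt(pi) / (12 * a^(1/4) * E^(5/4)) * exp(2*sqrt(a*E)), where a is the level-density parameter. A minimal sketch (parameter values illustrative, no pairing or shell corrections included):

```python
import math

def fermi_gas_level_density(energy_mev, a):
    """Bethe Fermi-gas level density (levels per MeV):
    rho(E) = sqrt(pi) / (12 * a**(1/4) * E**(5/4)) * exp(2 * sqrt(a * E)).
    'a' is the level-density parameter in 1/MeV; no back-shift or pairing
    correction is applied in this simplified sketch."""
    return (math.sqrt(math.pi) / (12.0 * a ** 0.25 * energy_mev ** 1.25)
            * math.exp(2.0 * math.sqrt(a * energy_mev)))
```

The shell-model moments method described in the abstract replaces this smooth phenomenological curve with a density computed from the actual Hamiltonian, which is why its result can differ from mean-field combinatorics.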
Modeling Social Annotation: a Bayesian Approach
Plangprasopchok, Anon
2008-01-01
Collaborative tagging systems, such as del.icio.us, CiteULike, and others, allow users to annotate objects, e.g., Web pages or scientific papers, with descriptive labels called tags. The social annotations, contributed by thousands of users, can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. Traditional text inference methods do not make best use of socially-generated data, since they do not take into account variations in individual users' perspectives and vocabulary. In a previous work, we introduced a simple probabilistic model that takes interests of individual annotators into account in order to find hidden topics of annotated objects. Unfortunately, our proposed approach had a number of shortcomings, including overfitting, local maxima and the requirement to specify values for some parameters. In this paper we address these shortcomings in two ways. First, we extend the model to a fully Bayesian framework. Second, we describe an infinite ver...
2012-01-01
The main problem of landfill management in Indonesia is the difficulty of finding locations for Final Processing Sites (FPS) due to limited land and high land prices. Besides, about 95% of existing landfills are uncontrolled dumping sites, which could potentially lead to water, soil and air pollution. Based on data from the Ministry of Environment (2010), the Act of the Republic of Indonesia Number 18 Year 2008 Concerning Solid Waste Management prohibits open dumping at final processing sit...
US Fish and Wildlife Service, Department of the Interior — The final report for the Native Prairie Adaptive Management Database Archive and Competing Model Linkage project covers activities during FY2014. The overall goal of...
Building Energy Modeling: A Data-Driven Approach
Cui, Can
Buildings consume nearly 50% of the total energy in the United States, which drives the need to develop high-fidelity models for building energy systems. Extensive methods and techniques have been developed, studied, and applied to building energy simulation and forecasting, while most work has focused on developing dedicated modeling approaches for generic buildings. In this study, an integrated, computationally efficient and high-fidelity building energy modeling framework is proposed, concentrating on a generalized modeling approach for various types of buildings. First, a number of data-driven simulation models are reviewed and assessed on various types of computationally expensive simulation problems. Motivated by the conclusion that no model outperforms the others when amortized over diverse problems, a meta-learning based recommendation system for data-driven simulation modeling is proposed. To test the feasibility of the proposed framework on building energy systems, an extended application of the recommendation system for short-term building energy forecasting is deployed on various buildings. Finally, a Kalman filter-based data fusion technique is incorporated into the building recommendation system for on-line energy forecasting. Data fusion enables model calibration to update the state estimation in real time, which filters out noise and renders a more accurate energy forecast. The framework is composed of two modules: an off-line model recommendation module and an on-line model calibration module. Specifically, the off-line model recommendation module includes 6 widely used data-driven simulation models, which are ranked by the meta-learning recommendation system for off-line energy modeling on a given building scenario. Only a selective set of building physical and operational characteristic features is needed to complete the recommendation task. The on-line calibration module effectively addresses system uncertainties, where data fusion on
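The on-line calibration step described above rests on the standard scalar Kalman filter recursion. The following minimal sketch (a random-walk state model with illustrative noise parameters, not the dissertation's implementation) shows how a prior forecast is corrected by a stream of noisy measurements:

```python
def kalman_update(x_est, p_est, z, r, q=0.0):
    """One scalar Kalman step: predict under a random-walk model with
    process-noise variance q, then correct with measurement z (variance r)."""
    p_pred = p_est + q            # predict step inflates uncertainty
    k = p_pred / (p_pred + r)     # Kalman gain: trust measurement vs. forecast
    x_new = x_est + k * (z - x_est)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

def fuse_series(forecast0, var0, measurements, r, q=0.5):
    """Sequentially fuse a prior energy forecast with metered observations."""
    x, p = forecast0, var0
    for z in measurements:
        x, p = kalman_update(x, p, z, r, q)
    return x, p
```

With accurate meters (small r) the gain approaches 1 and the calibrated state tracks the measurements; with noisy meters the forecast is trusted more, which is the noise-filtering behavior the abstract describes.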
Final Report for Bio-Inspired Approaches to Moving-Target Defense Strategies
Energy Technology Data Exchange (ETDEWEB)
Fink, Glenn A.; Oehmen, Christopher S.
2012-09-01
This report records the work and contributions of the NITRD-funded Bio-Inspired Approaches to Moving-Target Defense Strategies project performed by Pacific Northwest National Laboratory under the technical guidance of the National Security Agency’s R6 division. The project has incorporated a number of bio-inspired cyber defensive technologies within an elastic framework provided by the Digital Ants. This project has created the first scalable, real-world prototype of the Digital Ants Framework (DAF)[11] and integrated five technologies into this flexible, decentralized framework: (1) Ant-Based Cyber Defense (ABCD), (2) Behavioral Indicators, (3) Bioinformatic Classification, (4) Moving-Target Reconfiguration, and (5) Ambient Collaboration. The DAF can be used operationally to decentralize many such data-intensive applications that normally rely on collection of large amounts of data in a central repository. In this work, we have shown how these component applications may be decentralized and may perform analysis at the edge. Operationally, this will enable analytics to scale far beyond current limitations while not suffering from the bandwidth or computational limitations of centralized analysis. This effort has advanced the R6 Cyber Security research program to secure digital infrastructures by developing a dynamic means to adaptively defend complex cyber systems. We hope that this work will benefit our client's efforts in both system behavior modeling and cyber security, to the overall benefit of the nation.
Implicit moral evaluations: A multinomial modeling approach.
Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael
2017-01-01
Implicit moral evaluations (i.e., immediate, unintentional assessments of the wrongness of actions or persons) play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure, the Moral Categorization Task, and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
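A multinomial (processing-tree) model of this kind expresses each response probability as a sum of branch products over latent processes. The sketch below uses a generic process-dissociation-style tree with parameters I (Intentional Judgment), U (Unintentional Judgment) and B (Response Bias); it is an illustrative structure for exposition, not necessarily the exact tree estimated in the paper.

```python
def p_wrong(I, U, B, action_is_wrong):
    """Probability of responding 'wrong' in a generic processing tree:
    with probability I the intentional judgment succeeds and drives the
    response; otherwise, with probability U, the unintentional evaluation
    of the prime produces a 'wrong' response; otherwise the response is a
    biased guess ('wrong' with probability B). Illustrative tree only."""
    intended = 1.0 if action_is_wrong else 0.0
    return I * intended + (1 - I) * (U + (1 - U) * B)
```

Fitting such a model means choosing (I, U, B) to match the observed response frequencies across trial types, which is how the component processes are separated from a single behavioral measure.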
Multicomponent Equilibrium Models for Testing Geothermometry Approaches
Energy Technology Data Exchange (ETDEWEB)
Carl D. Palmer; Robert W. Smith; Travis L. McLing
2013-02-01
Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlations between water temperature and composition, or on thermodynamic calculations based on a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several “forward” geochemical models using The Geochemist’s Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and the equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperatures during exploration and early development.
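Two of the quantities discussed above are easy to make concrete: a classical solute geothermometer and the saturation index. The sketch below uses Fournier's widely quoted quartz (no steam loss) equation; the coefficients are quoted from the literature as an illustration and should be verified against a reference before any real use.

```python
import math

def quartz_temperature_c(silica_mg_per_kg):
    """Fournier quartz geothermometer (no steam loss):
    T(degC) = 1309 / (5.19 - log10(C)) - 273.15, C = dissolved SiO2 in mg/kg.
    Coefficients from the literature; illustrative only."""
    return 1309.0 / (5.19 - math.log10(silica_mg_per_kg)) - 273.15

def saturation_index(ion_activity_product, equilibrium_constant):
    """SI = log10(Q/K): 0 at equilibrium, > 0 supersaturated, < 0 undersaturated.
    The multicomponent approach scans SI(T) for many minerals and looks for
    the temperature at which a cluster of minerals converges on SI = 0."""
    return math.log10(ion_activity_product / equilibrium_constant)
```

The contrast is the point of the abstract: the first function uses one constituent, while the saturation-index approach uses the full analysis and a thermodynamic database evaluated over a range of temperatures.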
A semiparametric approach to physiological flow models.
Verotta, D; Sheiner, L B; Ebling, W F; Stanski, D R
1989-08-01
By regarding sampled tissues in a physiological model as linear subsystems, the usual advantages of flow models are preserved while mitigating two of their disadvantages: (i) the need for assumptions regarding intratissue kinetics, and (ii) the need to simultaneously fit data from several tissues. To apply the linear systems approach, both arterial blood and (interesting) tissue drug concentrations must be measured. The body is modeled as having an arterial compartment (A) distributing drug to different linear subsystems (tissues), connected in a specific way by blood flow. The response (CA, with dimensions of concentration) of A is measured. Tissues receive input from A (and optionally from other tissues), and send output to the outside or to other parts of the body. The response (CT, the total amount of drug in the tissue (T) divided by the volume of T) of, for example, the T-th such tissue is also observed. From linear systems theory, CT can be expressed as the convolution of CA with a disposition function, F(t) (with dimensions 1/time). The function F(t) depends on the (unknown) structure of T, but has certain other constant properties: the integral ∫0∞ F(t) dt is the steady-state ratio of CT to CA, and the point F(0) is the clearance rate of drug from A to T divided by the volume of T. A formula for the clearance rate of drug from T to outside T can be derived. To estimate F(t) empirically, and thus mitigate disadvantage (i), we suggest that, first, a nonparametric (or parametric) function be fitted to the CA data, yielding predicted values ĈA, and, second, the convolution integral of ĈA with F(t) be fitted to the CT data using a deconvolution method. By so doing, each tissue's data are analyzed separately, thus mitigating disadvantage (ii). A method for system simulation is also proposed. The results of applying the approach to simulated data and to real thiopental data are reported.
Agribusiness model approach to territorial food development
Directory of Open Access Journals (Sweden)
Murcia Hector Horacio
2011-04-01
Several research efforts have connected the academic program of Agricultural Business Management at the University De La Salle (Bogota D.C.) to the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered as a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and productivity of agricultural items. It takes into account the guidelines of the United Nations “Millennium Development Goals” and the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and the methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the “new rurality”.
Development of generalised model for grate combustion of biomass. Final report
Energy Technology Data Exchange (ETDEWEB)
Rosendahl, L.
2007-02-15
This project has been divided into two main parts, one of which has focused on modelling and one on designing and constructing a grate-fired biomass test rig. The modelling effort has been defined due to a need for improved knowledge of the transport and conversion processes within the bed layer, for two reasons: 1) to improve emission understanding and reduction measures and 2) to improve boundary conditions for CFD-based furnace modelling. The selected approach has been based on a diffusion coefficient formulation, where conservation equations for the concentration of fuel are solved in a spatially resolved grid, much in the same manner as in a finite volume CFD code. Within this porous layer of fuel, gas flows according to the Ergun equation. The diffusion coefficient links the properties of the fuel to the grate type and vibration mode, and is determined for each combination of fuel, grate and vibration mode. In this work, 3 grates have been tested as well as 4 types of fuel: drinking straw, wood beads, straw pellets and wood pellets. Although much useful information and knowledge has been obtained on transport processes in fuel layers, the model has proved to be less than perfect, and the recommendation is not to continue along this path. New visual data on the motion of straw on vibrating grates indicate that a diffusion-governed motion does not represent the transport very well. Furthermore, it is very difficult to obtain the diffusion coefficient in places other than the surface layer of the grate, and it is not likely that this is representative of the motion within the layer. Finally, as the model complexity grows, model turnover time increases to a level where it is comparable to that of the full furnace model. In order to proceed and address the goals of the first paragraph, it is recommended to return to either a walking column approach or some other, relatively simple method of prediction, and combine this with a form of randomness, to mimic the
Evaluating face trustworthiness: a model based approach.
Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N
2008-06-01
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter area extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic: strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.
Modeling middle and final flush effects of urban runoff pollution in an urbanizing catchment
Qin, Hua-peng; He, Kang-mao; Fu, Guangtao
2016-03-01
In the current literature, the first flush effect of urban runoff pollution has been studied and reported extensively. However, the effects of middle and final flushes on pollutant wash-off have not been given much attention. In addition, few previous studies have discussed the suitability of the widely used exponential wash-off model for describing the middle or final flush processes. In this paper, the Shiyan River catchment, a typical rapidly urbanizing catchment in China, is chosen as a study area to analyze the effects of first, middle and final flushes based on monitored hydrographs and pollutographs. In order to simulate the middle and final flush processes observed in storm events, a new, realistically simple, parsimonious model (named the logistic wash-off model) is developed under the assumption that the surface pollutant load available for wash-off increases with cumulative runoff volume following a logistic curve. The popular exponential wash-off model and the newly developed model are used and compared in simulating the flush processes in storm events. The results indicate that all three types of pollutant flushing are observed in the experiment; however, the first flush effect is weak, while the middle and final flush effects are substantial. The exponential model has performed well in simulating the first flush process but failed to simulate the middle and final flush processes well. In contrast, the logistic wash-off model has effectively simulated all three types of pollutant flush and, in particular, has performed better in simulating the middle and final flush processes than the exponential model.
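The contrast between the two wash-off shapes can be sketched directly. The classic exponential model front-loads removal in cumulative runoff volume V, producing a first flush, while a logistic curve delays most removal to the middle and final stages. The exact functional form and parameters of the paper's logistic wash-off model may differ from this illustration; the sketch only contrasts the shapes.

```python
import math

def exponential_washoff(load0, k, cum_volume):
    """Cumulative mass washed off under the classic exponential model:
    W(V) = L0 * (1 - exp(-k * V)). Removal is front-loaded (first flush)."""
    return load0 * (1.0 - math.exp(-k * cum_volume))

def logistic_washoff(load0, rate, v_mid, cum_volume):
    """Sketch of a logistic wash-off curve: cumulative removal follows an
    S-curve in cumulative runoff, so most mass leaves around V = v_mid,
    i.e., in the middle/final stages of the event."""
    return load0 / (1.0 + math.exp(-rate * (cum_volume - v_mid)))
```

Both curves approach the total load L0 for large V; the difference is where along the hydrograph the bulk of the mass is delivered, which is exactly what distinguishes first from middle and final flushes.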
A Systems Approach to C-130E Aircrew Transitional Training. Final Report.
Valverde, Horace H.; Burkett, Bob P.
The report describes the development and evaluation of a Tactical Air Command (TAC) C-130E transitional aircrew training program based on a systems approach. The systems approach to training emphasizes the importance of specifying objectives derived from a task analysis of the aircrew member's job. A training program was prepared to develop…
Approaches and models of intercultural education
Directory of Open Access Journals (Sweden)
Iván Manuel Sánchez Fontalvo
2013-10-01
Building an intercultural society requires awareness in all social spheres, among which education plays a central role. This role is transcendental, since education must foster spaces that form people with the virtues and capacities to live together in multicultural contexts and social diversities (sometimes unequal) in an increasingly globalized and interconnected world, and to develop shared feelings of civic belonging to neighborhood, city, region and country, enabling concern and critical judgement toward marginalization, poverty, misery and the inequitable distribution of wealth, the causes of structural violence, while at the same time wanting to work for the welfare and transformation of these scenarios. Given these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.
The INTELLIGENT RuleTutor: A Structured Approach to Intelligent Tutoring. Final Report.
Scandura, Alice B.
This final report describes a general purpose system for developing intelligent tutors based on the Structural Learning Theory. The report opens with a discussion of the rules and related constructs that underlie cognitive constructs in all structural learning theories. The remainder of the text provides: (1) an introduction to the Structural…
This final report uses biological data collected by four states in wadeable rivers and streams to examine the components of state and tribal bioassessment and biomonitoring programs that may be vulnerable to climate change. The study investigates the potential to identify biologi...
MODEL-BASED PERFORMANCE EVALUATION APPROACH FOR MOBILE AGENT SYSTEMS
Institute of Scientific and Technical Information of China (English)
Li Xin; Mi Zhengkun; Meng Xudong
2004-01-01
Claimed as the next-generation programming paradigm, mobile agent technology has attracted extensive interest in recent years. However, up to now, limited research effort has been devoted to the performance study of mobile agent systems, and most of this research focuses on agent behavior analysis, with the result that the models are hard to apply to mobile agent systems. To bridge the gap, a new performance evaluation model derived from the operation mechanisms of mobile agent platforms is proposed. The design of companion simulation software, which can provide system performance measures such as the response time of the platform to a mobile agent, is discussed in detail. The determination of model parameters is then investigated. Finally, a comparison is made between the model-based simulation results and the measured real performance of mobile agent systems. The results show that the proposed model and the designed software are effective in evaluating the performance characteristics of mobile agent systems. The proposed approach can also be considered as the basis of performance analysis for large systems composed of multiple mobile agent platforms.
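As a generic illustration of a model-based response-time estimate (not the platform-specific model the paper derives), a single mobile agent platform can be approximated as an M/M/1 queue: agents arrive at rate lambda and are served at rate mu, giving a mean response time R = 1 / (mu - lambda).

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda).
    The system is stable only when lambda < mu; an illustrative stand-in
    for a platform-level performance model."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)
```

A real platform model would refine this with the platform's actual service mechanisms (marshalling, migration, dispatch), which is what the paper's operation-mechanism-derived model aims to capture.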
THE DEVELOPMENT AND TESTING OF AN EVALUATION MODEL FOR VOCATIONAL PILOT PROGRAMS. FINAL REPORT.
TUCKMAN, BRUCE W.
THE OBJECTIVES OF THE PROJECT WERE (1) TO DEVELOP AN EVALUATION MODEL IN THE FORM OF A HOW-TO-DO-IT MANUAL WHICH OUTLINES PROCEDURES FOR OBTAINING IMMEDIATE INFORMATION REGARDING THE DEGREE TO WHICH A PILOT PROGRAM ACHIEVES ITS STATED FINAL OBJECTIVES, (2) TO EVALUATE THIS MODEL BY USING IT TO EVALUATE TWO ONGOING PILOT PROGRAMS, AND (3) TO…
Partners Plus: Families and Caregivers in Partnerships. Model Demonstration. Final Report.
Garland, Corrine W.; Frank, Adrienne; Ownby, Lisa L.
This final report discusses the activities and outcomes of Partners Plus: Families and Caregivers in Partnerships, a model demonstration project designed to expand respite care options for families of children (birth to 8 years old) with disabilities. The program uses a natural and family-centered model that involves families in the design,…
A Bayesian modeling approach for generalized semiparametric structural equation models.
Song, Xin-Yuan; Lu, Zhao-Hua; Cai, Jing-Heng; Ip, Edward Hak-Sing
2013-10-01
In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types-continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.
A Systems Approach to Bio-Oil Stabilization - Final Technical Report
Energy Technology Data Exchange (ETDEWEB)
Brown, Robert C; Meyer, Terrence; Fox, Rodney; Submramaniam, Shankar; Shanks, Brent; Smith, Ryan G
2011-12-23
CFD model at all flow speeds. This study shows that fully-resolved direct numerical simulation (DNS) is successful in calculating the filter efficiency at all speeds. Aldehydes and acids are thought to play key roles in the stability of bio-oils, so the catalytic stabilization work focused on whether a reaction approach could be employed that simultaneously addressed these two types of molecules in bio-oil. Our approach to post-treatment was simultaneous hydrogenation and esterification using a bifunctional metal/acidic heterogeneous catalyst, in which reactive aldehydes were reduced to alcohols, creating a high enough alcohol concentration so that the carboxylic acids could be esterified.
Vibration Stabilization of a Mechanical Model of an X-Band Linear Collider Final Focus Magnet
Energy Technology Data Exchange (ETDEWEB)
Frisch, Josef; Chang, Allison; Decker, Valentin; Doyle, Eric; Eriksson, Leif; Hendrickson, Linda; Himel, Thomas; Markiewicz, Thomas; Partridge, Richard; Seryi, Andrei; /SLAC
2006-09-28
The small beam sizes at the interaction point of an X-band linear collider require mechanical stabilization of the final focus magnets at the nanometer level. While passive systems provide adequate performance at many potential sites, active mechanical stabilization is useful if the natural or cultural ground vibration is higher than expected. A mechanical model of a room-temperature linear collider final focus magnet has been constructed and actively stabilized with an accelerometer-based system.
A quark model calculation of γγ→ππ including final-state interactions
Blundell, H G; Hay, G; Swanson, E
2000-01-01
A quark model calculation of the processes γγ→π+π− and γγ→π0π0 is performed. At tree level, only charged pions couple to the initial-state photons, so neutral pions are not expected in the final state. However, a small but significant neutral-pion cross section is observed. We demonstrate that this may be accounted for by a rotation in isospin space induced by final-state interactions.
Final report of the TRUE Block Scale project. 3. Modelling of flow and transport
Energy Technology Data Exchange (ETDEWEB)
Poteri, Antti [VTT Processes, Helsinki (Finland); Billaux, Daniel [Itasca Consultants SA, Ecully (France); Dershowitz, William [Golder Associates Inc., Redmond, WA (United States); Gomez-Hernandez, J. Jaime [Univ. Politecnica de Valencia (Spain). Dept. of Hydraulic and Environmental Engineering; Cvetkovic, Vladimir [Royal Inst. of Tech., Stockholm (Sweden). Div. of Water Resources Engineering; Hautojaervi, Aimo [Posiva Oy, Olkiluoto (Finland); Holton, David [Serco Assurance, Harwell (United Kingdom); Medina, Agustin [UPC, Barcelona (Spain); Winberg, Anders (ed.) [Conterra AB, Uppsala (Sweden)
2002-12-01
A series of tracer experiments was performed as part of the TRUE Block Scale experiment over length scales ranging from 10 to 100 m. The in situ experimentation was preceded by a comprehensive iterative characterisation campaign - the results from one borehole were used to update descriptive models and provide the basis for continued characterisation. Apart from core drilling, various types of laboratory investigations, core logging, borehole TV imaging and various types of hydraulic tests (single-hole and cross-hole) were performed. Based on the characterisation data, a hydrostructural model of the investigated rock volume was constructed, including deterministic structures, a stochastic background fracture population, and their material properties. In addition, a generic microstructural conceptual model of the investigated structures was developed. Tracer tests with radioactive sorbing tracers performed in three flow paths were preceded by various pre-tests, including tracer dilution tests, which were used to select suitable configurations of tracer injection and pumping in the established borehole array. The in situ experimentation was also preceded by the formulation of basic questions and associated hypotheses to be addressed by the tracer tests and the subsequent evaluation. The hypotheses addressed the validity of the hydrostructural model, the effects of heterogeneity, and block scale retention. Model predictions and subsequent evaluation modelling were performed using a wide variety of model concepts. These included stochastic continuum, discrete feature network and channel network models formulated in 3D, which also solved the flow problem. In addition, two 'single channel' approaches (Posiva Streamtube and LaSAR extended to the block scale) were employed. A common basis for transport was formulated. The difference between the approaches was found in how heterogeneity is accounted for, both in terms of number of different types of immobile zones
FINAL REPORT: Observation and Simulations of Transport of Molecules and Ions Across Model Membranes
Energy Technology Data Exchange (ETDEWEB)
MURAD, SOHAIL [University of Illinois at Chicago; JAMESON, CYNTHIA J [University of Illinois at Chicago
2013-10-22
During this grant we developed a robust methodology for investigating a wide range of properties of phospholipid bilayers. The approach developed is unique because, despite using periodic boundary conditions, we can simulate an entire experiment or process in detail. For example, we can follow the entire permeation process in a lipid membrane. This includes transport from the bulk aqueous phase to the lipid surface; permeation into the lipid; transport inside the lipid; and transport out of the lipid to the bulk aqueous phase again. We studied the transport of small gases both in the lipid itself and in model protein channels. In addition, we have examined the transport of nanocrystals through the lipid membrane, with the main goal of understanding the mechanical behavior of lipids under stress, including water and ion leakage and lipid flip-flop. Finally, we have also examined in detail the deformation of lipids under the influence of external fields, both mechanical and electrostatic (currently in progress). The important observations and conclusions from our studies are described in the main text of the report.
DECOVALEX III PROJECT. Modelling of FEBEX In-Situ Test. Task1 Final Report
Energy Technology Data Exchange (ETDEWEB)
Alonso, E.E.; Alcoverro, J. [Univ. Politecnica de Catalunya, Barcelona (Spain)] (comps.)
2005-02-15
Task 1 of DECOVALEX III was conceived as a benchmark exercise supported by all field and laboratory data generated during the performance of the FEBEX experiment, designed to study thermo-hydro-mechanical and thermo-hydro-geochemical processes of the buffer and rock in the near field. The task was defined as a series of three successive blind prediction exercises (Parts A, B and C), which cover the behaviour of both the rock and the bentonite barrier. Research teams participating in the FEBEX task were given, for each of the three parts, a set of field and laboratory data theoretically sufficient to generate a proper model and were asked to submit predictions, at given locations and times, for some of the measured variables. The merits and limitations of different modelling approaches were thereby established. The teams could perform additional calculations once the actual 'solution' was disclosed. Final calculations represented the best approximation that a given team could provide, always within the general time constraints imposed by the General DECOVALEX III Organization. This report presents the work performed for Task 1. It contains the case definitions and evaluations of modelling results for Parts A, B and C, and the overall evaluation of the work performed. The report is completed by a CD-ROM containing a set of final reports provided by the modelling teams participating in each of the three parts defined. These reports provide the necessary details to better understand the nature of the blind or final predictions included in this report. The report closes with a set of conclusions, which provides a summary of the main findings and highlights the lessons learned, some of which are summarized below. The best predictions of the water inflow into the excavated tunnel are found when the hydrogeological model is properly calibrated on the basis of other known flow measurements in the same area. The particular idealization of the rock mass (equivalent
Lithium battery aging model based on Dakin's degradation approach
Baghdadi, Issam; Briat, Olivier; Delétage, Jean-Yves; Gyan, Philippe; Vinassa, Jean-Michel
2016-09-01
This paper proposes and validates a calendar and power cycling aging model for two different lithium battery technologies. The model development is based on data from the earlier SIMCAL and SIMSTOCK projects, in which the effect of battery state of charge, temperature and current magnitude on aging was studied for a large panel of battery chemistries. In this work, the data are analyzed using Dakin's degradation approach: the logarithms of the battery capacity fade and of the resistance increase evolve linearly over aging. The slopes identified from these straight lines correspond to battery aging rates. Thus, an expression for the battery aging rate as a function of the aging factors was deduced and found to be governed by Eyring's law. The proposed model simulates the capacity fade and resistance increase as functions of the influencing aging factors. Its expansion using a Taylor series was consistent with semi-empirical models based on the square root of time, which are widely studied in the literature. Finally, the influence of current magnitude and temperature on aging was simulated. Interestingly, the aging rate increases sharply with decreasing temperature over the range -5 °C to 25 °C and with increasing temperature over the range 25 °C to 60 °C.
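One simple reading of the log-linear degradation law with an Eyring/Arrhenius-type rate can be sketched in a few lines. This is an illustrative toy, not the paper's fitted model; the parameter values A and EA are invented for demonstration.

```python
import math

# Illustrative constants (NOT fitted to SIMCAL/SIMSTOCK data).
A = 2.0e5    # pre-exponential factor, 1/day
EA = 5.0e4   # apparent activation energy, J/mol
R = 8.314    # gas constant, J/(mol K)

def aging_rate(temp_k):
    """Eyring/Arrhenius-type aging rate as a function of temperature."""
    return A * math.exp(-EA / (R * temp_k))

def capacity(t_days, temp_k, q0=1.0):
    """Dakin-style sketch: log(capacity) evolves linearly with time,
    i.e. capacity decays exponentially at the aging rate."""
    return q0 * math.exp(-aging_rate(temp_k) * t_days)
```

A fitted model would additionally make the rate depend on state of charge and current magnitude, as the abstract describes.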
Energy Technology Data Exchange (ETDEWEB)
Tentner, A. M.; Parma, E.; Wei, T.; Wigeland, R.; Nuclear Engineering Division; SNL; INL
2010-03-01
An important goal of the US DOE reactor development program is to conceptualize advanced safety design features for a demonstration Sodium Fast Reactor (SFR). The treatment of severe accidents is one of the key safety issues in the design approach for advanced SFR systems. It is necessary to develop an in-depth understanding of the risk of severe accidents for the SFR so that appropriate risk management measures can be implemented early in the design process. This report presents the results of a review of the SFR features and phenomena that directly influence the sequence of events during a postulated severe accident. The report identifies the safety features used or proposed for various SFR designs in the US and worldwide for the prevention and/or mitigation of Core Disruptive Accidents (CDA). The report provides an overview of the current SFR safety approaches and the role of severe accidents. Mutual understanding of these design features and safety approaches is necessary for future collaborations between the US and its international partners as part of the GEN IV program. The report also reviews the basis for an integrated safety approach to severe accidents for the SFR that reflects the safety design knowledge gained in the US during the Advanced Liquid Metal Reactor (ALMR) and Integral Fast Reactor (IFR) programs. This approach relies on inherent reactor and plant safety performance characteristics to provide additional safety margins. The goal of this approach is to prevent development of severe accident conditions, even in the event of initiators with safety system failures previously recognized to lead directly to reactor damage.
An integrated approach to permeability modeling using micro-models
Energy Technology Data Exchange (ETDEWEB)
Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)
2008-10-15
An important factor in predicting the performance of steam-assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship difficult include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modelling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modelling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; the extended power-law formalism (EPLF); and the application of micro-modelling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results, with close agreement between the model and the simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of the McMurray oil sands, and the resulting models were in good agreement with the experimental data. 8 refs., 17 figs.
Institute of Scientific and Technical Information of China (English)
ZHOU Wen; PENG Xin-jun; LIU Xiang; YAN Zheng-lou; WANG Yi-fei
2008-01-01
In this paper, we develop a modified accelerated stochastic simulation method for chemically reacting systems, called the "final all possible steps" (FAPS) method, which obtains reliable statistics of all species at any time during the time course with fewer simulation runs. Moreover, the FAPS method can be incorporated into leap methods, which makes the simulation of larger systems more efficient. Numerical results indicate that the proposed methods can be applied to a wide range of chemically reacting systems with a high level of precision and offer a significant improvement in efficiency over existing methods.
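The FAPS bookkeeping itself is not detailed in the abstract; the baseline it accelerates is the exact stochastic simulation algorithm (SSA). For orientation, here is a minimal Gillespie direct-method sketch for a single isomerization reaction A -> B (the reaction, rate constant, and seed are illustrative assumptions, not from the paper):

```python
import math
import random

def ssa_isomerization(n_a, k, t_end, seed=1):
    """Gillespie direct method for the reaction A -> B with rate
    constant k. Returns the number of A molecules left at t_end."""
    rng = random.Random(seed)
    t = 0.0
    while n_a > 0:
        total_propensity = k * n_a
        # exponentially distributed waiting time to the next firing
        tau = -math.log(1.0 - rng.random()) / total_propensity
        if t + tau > t_end:
            break          # next firing would occur after t_end
        t += tau
        n_a -= 1           # one A molecule converts to B
    return n_a
```

Collecting statistics at a fixed time requires repeating such runs many times; methods like FAPS aim to reduce that repetition cost.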
A Systems Approach to Bio-Oil Stabilization - Final Technical Report
Energy Technology Data Exchange (ETDEWEB)
Brown, Robert C; Meyer, Terrence; Fox, Rodney; Subramaniam, Shankar; Shanks, Brent; Smith, Ryan G
2011-12-23
The objective of this project is to develop practical, cost-effective methods for stabilizing biomass-derived fast pyrolysis oil for at least six months of storage under ambient conditions. The U.S. Department of Energy has targeted three strategies for stabilizing bio-oils: (1) reducing the oxygen content of the organic compounds comprising pyrolysis oil; (2) removing carboxylic acid groups such that the total acid number (TAN) of the pyrolysis oil is dramatically reduced; and (3) reducing the charcoal content, which contains alkali metals known to catalyze reactions that increase the viscosity of bio-oil. Alkali and alkaline earth metals (AAEM) are known to catalyze decomposition reactions of biomass carbohydrates that produce light oxygenates, which destabilize the resulting bio-oil. Methods envisioned to prevent the AAEM from reacting with the biomass carbohydrates include washing the AAEM out of the biomass with water or dilute acid, or infusing an acid catalyst to passivate the AAEM. Infusion of acids into the feedstock to convert all of the AAEM to salts that are stable at pyrolysis temperatures proved to be a much more economically feasible process. Our results from pyrolyzing acid-infused biomass showed increases in the yield of anhydrosugars of greater than 300% while greatly reducing the yield of the light oxygenates that are known to destabilize bio-oil. Particulate matter can interfere with combustion or catalytic processing of either syngas or bio-oil. It is also thought to catalyze the polymerization of bio-oil, which increases the viscosity of bio-oil over time. High-temperature baghouses, ceramic candle filters, and moving bed granular filters have been variously suggested for syngas cleaning at elevated temperatures. High-temperature filtration of bio-oil vapors has also been suggested by the National Renewable Energy Laboratory, although there remain technical challenges to this approach. The fast pyrolysis of biomass yields three main organic
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be a...
Shiver, Thrisha G.
A Nonequivalent Control Group Design, an analytical evaluation technique, was used to assess the effectiveness of an individualized inservice approach for vocational teachers of disadvantaged learners in Pennsylvania. The experimental group consisted of vocational teachers volunteering to participate in an individualized inservice program; the…
A Workflow-Oriented Approach To Propagation Models In Heliophysics
Directory of Open Access Journals (Sweden)
Gabriele Pierantoni
2014-01-01
The Sun is responsible for the eruption of billions of tons of plasma and the generation of near light-speed particles that propagate throughout the solar system and beyond. If directed towards Earth, these events can be damaging to our technological infrastructure. Hence there is an effort to understand the cause of the eruptive events and how they propagate from Sun to Earth. However, the physics governing their propagation is not well understood, so there is a need to develop a theoretical description of their propagation, known as a Propagation Model, in order to predict when they may impact Earth. It is often difficult to define a single propagation model that correctly describes the physics of solar eruptive events, and even more difficult to implement models capable of catering for all these complexities and to validate them using real observational data. In this paper, we envisage that workflows offer both a theoretical and practical framework for a novel approach to propagation models. We define a mathematical framework that aims at encompassing the different modalities with which workflows can be used, and provide a set of generic building blocks written in the TAVERNA workflow language that users can use to build their own propagation models. Finally we test both the theoretical model and the composite building blocks of the workflow with a real Science Use Case that was discussed during the 4th CDAW (Coordinated Data Analysis Workshop) event held by the HELIO project. We show that generic workflow building blocks can be used to construct a propagation model that successfully describes the transit of solar eruptive events toward Earth and predicts a correct Earth-impact time.
Gu, Fei; Wu, Hao
2016-09-01
The specifications of state space models for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under normality and nonnormality conditions. To cope with the nonnormality conditions, robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed.
Beyond the detection of students´ mental models. An integrative representational approach
Directory of Open Access Journals (Sweden)
Ileana Maria Greca
2002-01-01
In this paper we initially discuss some limitations of the mental model theoretical framework for research in science education. Then, after an analysis of Vergnaud's conceptual fields theory, we propose an approach that, by integrating elements of both theoretical frameworks, could provide a better understanding of some cognitive processes involved in the learning of scientific concepts. Finally, we suggest possible implications of this approach for science teaching as well as for research in this area.
A complete kinematics approach to study multi-particle final state reactions
Energy Technology Data Exchange (ETDEWEB)
Alcorta, M. [Instituto de Estructura de la Materia, CSIC, Serrano 113 bis, Madrid E-28006 (Spain)], E-mail: alcorta@iem.cfmac.csic.es; Kirsebom, O. [Department of Physics and Astronomy, University of Aarhus, DK-8000 Aarhus C (Denmark); Borge, M.J.G. [Instituto de Estructura de la Materia, CSIC, Serrano 113 bis, Madrid E-28006 (Spain); Fynbo, H.O.U.; Riisager, K. [Department of Physics and Astronomy, University of Aarhus, DK-8000 Aarhus C (Denmark); Tengblad, O. [Instituto de Estructura de la Materia, CSIC, Serrano 113 bis, Madrid E-28006 (Spain)
2009-07-01
Detection in complete kinematics using highly segmented detectors can provide detailed information on the structure of excited nuclear states and their decay mechanisms. The detection of final states consisting of several particles gives rise to many challenges. We present here new techniques that allow for the extraction of physics from the many open channels. In particular this complete kinematic analysis technique has been applied to data from low-energy, high Q-value, light-ion reactions {sup 10}B({sup 3}He,p{alpha}{alpha}{alpha}), {sup 11}B({sup 3}He,d{alpha}{alpha}{alpha}), and {sup 7}Li({sup 3}He,p{alpha}{alpha})n, all of them performed at the CMAM tandem accelerator in Madrid.
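A basic ingredient of any complete-kinematics analysis is momentum conservation, used both to check a reconstructed event and to infer one undetected particle, such as the neutron in the {sup 7}Li({sup 3}He,p{alpha}{alpha})n channel. A minimal sketch (the function name and vector layout are assumptions for illustration, not from the paper):

```python
def missing_momentum(p_beam, detected):
    """Infer the momentum vector of a single undetected particle from
    conservation: p_missing = p_beam - sum of detected momenta.
    Momenta are 3-tuples in any consistent unit; the target is assumed
    at rest so the beam carries all initial momentum."""
    return tuple(pb - sum(p[i] for p in detected)
                 for i, pb in enumerate(p_beam))
```

With the missing momentum in hand, energy balance against the known Q-value provides the consistency check that lets the analysis separate the many open channels.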
A Dynamic Approach to Modeling Dependence Between Human Failure Events
Energy Technology Data Exchange (ETDEWEB)
Boring, Ronald Laurids [Idaho National Laboratory
2015-09-01
In practice, most HRA methods use direct dependence from THERP: the notion that error begets error, and that one human failure event (HFE) may increase the likelihood of subsequent HFEs. In this paper, we approach dependence from a simulation perspective in which the effects of human errors are dynamically modeled. There are three key concepts that play into this modeling: (1) Errors are driven by performance shaping factors (PSFs). In this context, error propagation is not a result of the presence of an HFE yielding overall increases in subsequent HFEs; rather, it is shared PSFs that cause dependence. (2) PSFs have qualities of lag and latency. These two qualities are not currently considered in HRA methods that use PSFs. Yet, to model the effects of PSFs, it is not simply a matter of identifying the discrete effects of a particular PSF on performance. The effects of PSFs must be considered temporally, as the PSFs will have a range of effects across the event sequence. (3) Finally, there is the concept of error spilling. When PSFs are activated, they not only have temporal effects but also lateral effects on other PSFs, leading to emergent errors. This paper presents the framework for tying together these dynamic dependence concepts.
A computational toy model for shallow landslides: Molecular dynamics approach
Martelloni, Gianluca; Bagnoli, Franco; Massaro, Emanuele
2013-09-01
The aim of this paper is to propose a 2D computational algorithm for modeling the triggering and propagation of shallow landslides caused by rainfall. We use a molecular dynamics (MD) approach, similar to the discrete element method (DEM), which is suitable for modeling granular material and for following the trajectory of a single particle, so as to identify its dynamical properties. We consider that the triggering of shallow landslides is caused by the decrease of the static friction along the sliding surface due to water infiltration by rainfall. The triggering is thus governed by the two following conditions: (a) a threshold speed of the particles, and (b) a condition on the static friction between the particles and the slope surface, based on the Mohr-Coulomb failure criterion. The latter static condition is the one used in geotechnical models to estimate the possibility of landslide triggering. The interaction force between particles is modeled, in the absence of experimental data, by means of a potential similar to the Lennard-Jones one. Viscosity is also introduced in the model, and for a large range of values of the model's parameters we observe a characteristic velocity pattern, with acceleration increments, typical of real landslides. The results of the simulations are quite promising: the energy and triggering-time distributions of local avalanches follow power laws, analogous to the observed Gutenberg-Richter and Omori power law distributions for earthquakes. Finally, it is possible to apply the method of the inverse surface displacement velocity [4] for predicting the failure time.
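The two ingredients named above, a Lennard-Jones-like interparticle force and a Mohr-Coulomb check on static friction, can be sketched as follows. Function names and parameter values are illustrative stand-ins, not the paper's calibrated model:

```python
import math

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Magnitude of a Lennard-Jones-type force between two particles
    at distance r (positive = repulsive, negative = attractive)."""
    return 24.0 * epsilon * (2.0 * (sigma / r)**13 - (sigma / r)**7) / sigma

def slips(shear_stress, normal_stress, cohesion, phi_deg):
    """Mohr-Coulomb failure criterion: slip occurs once the shear
    stress exceeds cohesion + normal stress * tan(friction angle)."""
    return shear_stress > cohesion + normal_stress * math.tan(math.radians(phi_deg))
```

In the paper's setting, rainfall infiltration effectively lowers the friction angle (or cohesion) over time, so particles that initially satisfy the criterion eventually fail and start to move.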
Energy Technology Data Exchange (ETDEWEB)
Knio, Omar [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science
2017-05-05
The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.
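The agree-at-the-interfaces idea can be illustrated with a classical overlapping Schwarz iteration on a tiny 1D Laplace problem. This deterministic sketch omits the project's probabilistic description and fault model; the grid size and names are invented:

```python
def solve_subdomain(left, right, n):
    """Exact 1D Laplace solve on n grid points: linear between BCs."""
    return [left + (right - left) * i / (n - 1) for i in range(n)]

def schwarz_laplace_1d(u_left, u_right, iters=50):
    """Overlapping Schwarz sketch on global points 0..5.
    Subdomain A = points 0..3, subdomain B = points 2..5; each is
    solved with a guessed (initially uncertain) interface value, and
    the guesses are updated until the overlapping solutions agree."""
    guess_a_right = 0.0   # A's boundary value at global point 3
    guess_b_left = 0.0    # B's boundary value at global point 2
    for _ in range(iters):
        a = solve_subdomain(u_left, guess_a_right, 4)
        b = solve_subdomain(guess_b_left, u_right, 4)
        guess_a_right = b[1]   # B's value at global point 3
        guess_b_left = a[2]    # A's value at global point 2
    # average the overlap to form the global estimate
    return a[:2] + [(a[2] + b[0]) / 2, (a[3] + b[1]) / 2] + b[2:]
```

Each subdomain solve here is independent, which is exactly the property the project exploits for scalability; the probabilistic version would carry a distribution over the interface values rather than a single guess.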
Energy Technology Data Exchange (ETDEWEB)
Allgood, G.O.; Dress, W.B.; Kercel, S.W.
1999-06-01
The objective of this research, and subsequent testing, was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor is based on the physics of the phenomena, capturing the salient features of the process dynamics. The test methodology and approach were developed to make the cavitation features the dominant effect in the process and collected signatures. This would allow the accurate characterization of the salient cavitation features at different operational states. By developing such an abstraction, these attributes can be used as a general diagnostic for a system or any of its components. In this study, the particular focus will be pumps. As many as 90% of pump failures are catastrophic. They seem to be operating normally and fail abruptly without warning. This is true whether the failure is sudden hardware damage requiring repair, such as a gasket failure, or a transition into an undesired operating mode, such as cavitation. This means that conventional diagnostic methods fail to predict 90% of incipient failures and that in addressing this problem, model-based methods can add value where it is actually needed.
Brito, David; Campuzano, F. J.; Sobrinho, J.; Fernandes, R.; Neves, R.
2015-12-01
River discharges and loads are essential inputs to coastal seas, and thus to coastal seas modelling, and their properties are the result of all activities and policies carried out inland. For these reasons the main rivers have been the object of intense monitoring programs, which have generated an important amount of historical data. Due to the decline of the Portuguese hydrometric network, and in order to quantify and forecast surface water streamflow and nutrients to coastal areas, the MOHID Land model was applied to the Western Iberia Region with a 2 km horizontal resolution and to the Iberian Peninsula with a 10 km horizontal resolution. The domains were populated with land use and soil properties and forced with existing meteorological models. This approach also permits understanding how the flows and loads are generated and forecasting their values, which is of utmost importance for coastal ocean and estuarine forecasts. The final purpose of the implementation is to obtain fresh water quantity and quality estimates that can be used to support management decisions in the watershed and reservoirs, and also in estuaries and coastal areas. A process-oriented model such as MOHID Land is essential for this type of simulation, as the model is independent of the number of river catchments. In this work, the MOHID Land model equations and parameterisations are described, and an innovative methodology for watershed modelling is presented and validated for a large international river, the Tagus, and the largest national river of Portugal, the Mondego. Precipitation, streamflow and nutrient modelling results for these two rivers were compared with observations near their coastal outlets in order to evaluate the model's capacity to represent the main watershed trends. Finally, an annual budget of fresh water and nutrients transported by the main twenty-five rivers discharging on the Portuguese coast is presented.
Cellular approach to agricultural genetics. Final report, December 1, 1974-November 30, 1977
Energy Technology Data Exchange (ETDEWEB)
Carlson, P.S.
1977-01-01
This project was focused upon identifying how genetic manipulation of higher plant cells cultured in vitro could provide new methods for crop improvement. Work has taken three major directions: (1) attempts to define conditions for the in vitro culture and subsequent regeneration of entire plants from somatic cells of important crop species, (2) attempts to define novel selection schemes to allow recovery of a wide range of mutant phenotypes, and novel genetic techniques to transfer this genetic information, and (3) a direct interaction with plant breeders in attempting to make use of these in vitro techniques in ongoing crop improvement programs. We have accomplished routine regeneration of entire plants from established long-term in vitro callus cultures of wheat and tomato, and from suspension cultures of barley and potato. We have defined conditions for the isolation of temperature-sensitive variants of plant cells and for the recovery of mutants altered in tissue-specific characteristics. We have utilized two-dimensional gel electrophoresis to characterize the range of polypeptides present in cells cultured in vitro and to observe their alterations in variant cell lines. We have begun the process of defining, using heterotic hybrids, which biochemical and physiological traits of whole plants limit final yield characteristics. We have defined conditions for chromosome isolation from plants, and initially characterized a higher plant-bacterial recombinant DNA system. We are interacting directly with breeders and individuals involved in plant improvement in projects involving cereal grains, grain legumes, potatoes and other field and horticultural crops.
EPA announced the availability of the final report, An Exploratory Study: Assessment of Modeled Dioxin Exposure in Ceramic Art Studios. This report investigates the potential dioxin exposure to artists/hobbyists who use ball clay to make pottery and related products. Derm...
Search for the standard model Higgs boson in tau final states
Abazov, V.M.; et al., [Unknown; Ancu, L.S.; de Jong, S.J.; Filthaut, F.; Galea, C.F.; Hegeman, J.G.; Houben, P.; Meijer, M.M.; Svoisky, P.; van den Berg, P.J.; van Leeuwen, W.M.
2009-01-01
We present a search for the standard model Higgs boson using hadronically decaying tau leptons, in 1 fb(-1) of data collected with the D0 detector at the Fermilab Tevatron p(p)over-bar collider. We select two final states: tau(+/-) plus missing transverse energy and b jets, and tau(+)tau(-) plus jet
Mathematically guided approaches to distinguish models of periodic patterning
2015-01-01
How periodic patterns are generated is an open question. A number of mechanisms have been proposed – most famously, Turing's reaction-diffusion model. However, many theoretical and experimental studies focus on the Turing mechanism while ignoring other possible mechanisms. Here, we use a general model of periodic patterning to show that different types of mechanism (molecular, cellular, mechanical) can generate qualitatively similar final patterns. Observation of final patterns is therefore n...
Do recommender systems benefit users? A modeling approach
Yeung, Chi Ho
2016-04-01
Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
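The "equivalent to random draws" regime described above is easy to reproduce in a toy simulation. The model below is a deliberate simplification with invented names, not the paper's model:

```python
import random

def match_fraction(n_steps, p_follow, preferred, recommended, seed=7):
    """Toy purchase model: with probability p_follow the user buys the
    recommended item; otherwise an item drawn from his or her own
    preferred set. Returns the fraction of purchases that match the
    user's preferences."""
    rng = random.Random(seed)
    prefs = list(preferred)
    hits = 0
    for _ in range(n_steps):
        if rng.random() < p_follow:
            item = recommended        # system's (fixed) recommendation
        else:
            item = rng.choice(prefs)  # user's own taste
        hits += item in preferred
    return hits / n_steps
```

When the recommendation ignores user preferences and the user always follows it, the match fraction collapses; a recommender trained on sufficient preference data would instead push the followed purchases back toward the preferred set.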
Final Report on Models, Periodic Progress, Report No D1.3, Globeman21, ESPRIT 26509
DEFF Research Database (Denmark)
Pedersen, Jens Dahl; Tølle, Martin; Vesterager, Johan
1999-01-01
This deliverable D1.3 is the third and final deliverable of WP1 - Global Manufacturing Concept of the European part of the Globeman21 project. The report essentially presents the final models on generic Extended Enterprise Management (EEM) and generic Product Life Cycle Management (PLCM), a collection of practical experiences, and requirements for enhanced methods and tools for modelling. First, the deliverable outlines the GM21 understanding regarding extended enterprises and virtual enterprises. This is done by extracting and synthesising the essence of key definitions and concepts ... enterprise context. Through the set of recommended activities for individual life cycle phases of respectively the network entity, the VE entity, and the product entity of the Extended Enterprise Framework, the methodology supports realisation of extended enterprises. The time sequenced planning...
Energy Technology Data Exchange (ETDEWEB)
Quinn, John
2009-11-30
Work related to this project introduced the idea of an effective monopole strength Q* that acts as the effective angular momentum of the lowest shell of composite fermions (CF). This allowed us to predict the angular momentum of the lowest band of energy states for any value of the applied magnetic field simply by determining N_QP, the number of quasielectrons (QE) or quasiholes (QH) in a partially filled CF shell, and adding the angular momenta of the N_QP fermion excitations. The approach reported treated the filled CF level as a vacuum state that can support QE and QH excitations. Numerical diagonalization of small systems allowed us to determine the angular momenta, the energies, and the pair interaction energies of these elementary excitations. The spectra of low-energy states could then be evaluated in a Fermi-liquid-like picture, treating the much smaller number of quasiparticles and their interactions instead of the larger system of N electrons with Coulomb interactions.
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
Energy Technology Data Exchange (ETDEWEB)
Breitschopf, Barbara [Fraunhofer-Institut fuer System- und Innovationsforschung (ISI), Karlsruhe (Germany); Nathani, Carsten; Resch, Gustav
2011-11-15
full picture of the impacts of RE deployment on the total economy - covering all economic activities like production, services and consumption (industries, households). To get the number of additional jobs caused by RE deployment, they compare a situation without RE (baseline or counterfactual) to a situation under strong RE deployment. In a second step, we characterize the studies inter alia by their scope, activities and impacts and show the relevant positive and negative effects that are included in gross or net impact assessment studies. The effects are briefly described in Table 0-1. While gross studies mainly include the positive effects listed here, net studies in general include positive and negative effects. Third, we distinguish between methodological approaches for assessing impacts. We observe that the more effects are incorporated in an approach, the more data are needed, the more complex and demanding the methodological approach becomes, and the more the impacts capture effects of and in the whole economy - representing net impacts. A simple approach requires little data and can answer simple questions concerning the impact on the RE industry - representing gross impacts. We identify six main approaches, three for gross and three for net impacts. They are depicted in Figure 0-2. The methodological approaches are characterized by the effects captured, the complexity of the model and the additional data requirements (besides data on RE investments, capacities and generation), as well as by their depicted impacts reflecting the economic comprehensiveness. A detailed overview of the diverse studies in table form is given in the Annex to this report. Finally, we suggest elaborating guidelines for the simple EF-approach, the gross IO-modelling approach and the net IO-modelling approach. The first approach enables policy makers to do a quick assessment of gross effects, while the second is a more sophisticated approach for gross effects.
The third approach builds on the gross IO
Final Report on Models, Periodic Progress, Report No D1.3, Globeman21, ESPRIT 26509
DEFF Research Database (Denmark)
Pedersen, Jens Dahl; Tølle, Martin; Vesterager, Johan
1999-01-01
This deliverable D1.3 is the third and final deliverable of WP1 - Global Manufacturing Concept of the European part of the Globeman21 project. The report essentially presents the final models on generic Extended Enterprise Management (EEM) and generic Product Life Cycle Management (PLCM) ... accomplished during the GM21EU project. Especially the Extended Enterprise Framework should be accentuated as a key concept of WP1. Building on ISO/DIS 15704 GERAM, the framework constitutes a generic reference model for extended enterprises and hereunder for EEM and PLCM issues. The Extended Enterprise ... classification schema for work preparation in extended enterprises. The classification schema is applied as support for a mapping of the EEM and PLCM pilot projects onto the Extended Enterprise Framework. By mapping the industrial pilot work onto a common reference model, experiences have been collected from GM21...
King, Lewis M.; And Others
This paper discusses the Fanon Center's Restoration Model, an "exemplary education" paradigm that seeks to institute a new educational ideology and alternative educational approach based on a "new humanism." The basis of this new humanism is the synthesis of analytic, affective, and sensate ways of discovery and knowing. The model places equal…
ALREST High Fidelity Modeling Program Approach
2011-05-18
Gases and Mixtures of Redlich-Kwong and Peng-Robinson Fluids; Assumed-PDF Model based on the k-ε-g Model in the NASA/LaRC Vulcan code; Level Set model... • Potential Attractiveness Of Liquid Hydrocarbon Engines For Boost Applications • Propensity Of Hydrocarbon Engines For Combustion Instability • Air
Comparison of three types of models for the prediction of final academic achievement
Directory of Open Access Journals (Sweden)
Silvana Gasar
2002-12-01
Full Text Available For efficient prevention of inappropriate secondary school choices, and thereby of academic failure, school counselors need a tool for predicting an individual pupil's final academic achievement. Using data mining techniques on a database of pupils' records together with expert modeling, we developed several models for the prediction of final academic achievement in an individual high school educational program. For data mining, we used statistical analyses, clustering and two machine learning methods: classification decision trees and hierarchical decision models. Using the expert system shell DEX, an expert system based on a hierarchical multi-attribute decision model was developed manually. All the models were validated and evaluated from the viewpoint of their applicability. The predictive accuracy of the DEX models and the decision trees was equal and very satisfying, as it reached the predictive accuracy of an experienced counselor. Considering the efficiency of and difficulties in developing models, and the relatively rapid changes in our education system, we propose that decision trees be used in the further development of predictive models.
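A classification decision tree of the kind used for such predictions can be sketched in a few lines (an illustrative ID3-style toy, not the study's actual model; the pupil attributes and labels below are invented):

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    counts = {}
    for l in labels:
        counts[l] = counts.get(l, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def majority(labels):
    return max(set(labels), key=labels.count)

def build_tree(rows, labels, attrs):
    """Recursively split on the attribute with the largest information gain."""
    if len(set(labels)) == 1 or not attrs:
        return majority(labels)
    def gain(a):
        rem = 0.0
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            rem += len(sub) / len(labels) * entropy(sub)
        return entropy(labels) - rem
    best = max(attrs, key=gain)
    node = {"attr": best, "branches": {}}
    for v in set(r[best] for r in rows):
        sub_rows = [r for r in rows if r[best] == v]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == v]
        node["branches"][v] = build_tree(sub_rows, sub_labels,
                                         [a for a in attrs if a != best])
    return node

def predict(tree, row, default="pass"):
    while isinstance(tree, dict):
        tree = tree["branches"].get(row.get(tree["attr"]), default)
    return tree

# Invented toy data: prior-GPA band and effort level vs. final outcome.
rows = [
    {"gpa": "high", "effort": "high"},
    {"gpa": "high", "effort": "low"},
    {"gpa": "low",  "effort": "high"},
    {"gpa": "low",  "effort": "low"},
]
labels = ["pass", "pass", "pass", "fail"]
tree = build_tree(rows, labels, ["gpa", "effort"])
```

The learned tree mirrors what a counselor might infer: low prior achievement combined with low effort predicts failure, everything else predicts passing.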
A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems
Directory of Open Access Journals (Sweden)
Lenardo C. Silva
2015-10-01
Full Text Available Medical Cyber-Physical Systems (MCPS are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.
Final Report: Transport and its regulation in Marine Microorganisms: A Genomic Based Approach
Energy Technology Data Exchange (ETDEWEB)
Brian Palenik; Bianca Brahamsha; Ian Paulsen
2009-09-03
This grant funded the analysis and annotation of the genomes of Synechococcus and Ostreococcus, major marine primary producers. Particular attention was paid to the analysis of transporters using state of the art bioinformatics analyses. During the analysis of the Synechococcus genome, some of the components of the unique bacterial swimming apparatus of one species of Synechococcus (Clade III, strain WH8102) were determined and these included transporters, novel giant proteins and glycosyltransferases. This grant funded the analysis of gene expression in Synechococcus using whole genome microarrays. These analyses revealed the strategies by which marine cyanobacteria respond to environmental conditions such as the absence of phosphorus, a common limiting nutrient, and the interaction of Synechococcus with other microbes. These analyses will help develop models of gene regulation in cyanobacteria and thus help predict their responses to changes in environmental conditions.
LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL
National Research Council Canada - National Science Library
Eser ÖRDEM
2013-01-01
Abstract This study intends to propose Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings...
A model-based multisensor data fusion knowledge management approach
Straub, Jeremy
2014-06-01
A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data are considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.
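The mapping of sensor data onto model elements that support or refute a thesis can be sketched as follows (a hypothetical minimal interpretation of the approach, not the paper's algorithms; the element names, mapper predicates, and weighting scheme are invented for illustration):

```python
def fuse(observations, mappers, weights):
    """Return per-element support in [-1, 1] and an overall thesis verdict.

    Each mapper examines one observation and returns +1 (supports the
    element), -1 (refutes it), or 0 (irrelevant). Element supports are
    averaged and then combined with weights into a thesis-level score.
    """
    support = {e: 0.0 for e in weights}
    counts = {e: 0 for e in weights}
    for obs in observations:
        for element, mapper in mappers.items():
            verdict = mapper(obs)
            if verdict:
                support[element] += verdict
                counts[element] += 1
    for e in support:
        if counts[e]:
            support[e] /= counts[e]
    score = sum(weights[e] * support[e] for e in weights)
    return support, score > 0.0

# Invented thesis: "the sampled surface is volcanic".
mappers = {
    "high_temperature": lambda o: (1 if o["temp"] > 100 else -1) if "temp" in o else 0,
    "basalt_spectrum":  lambda o: 1 if o.get("spectrum") == "basalt" else 0,
}
weights = {"high_temperature": 0.5, "basalt_spectrum": 0.5}
obs = [{"temp": 300}, {"spectrum": "basalt"}, {"temp": 150}]
support, thesis_holds = fuse(obs, mappers, weights)
```

Because every relevant observation supports its element, both element supports reach 1.0 and the weighted score declares the thesis true; a contradictory temperature reading would pull the verdict back toward refutation.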
Comparison of two novel approaches to model fibre reinforced concrete
Radtke, F.K.F.; Simone, A.; Sluys, L.J.
2009-01-01
We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity f
Final Technical Report -- Bridging the PSI Knowledge Gap: A Multiscale Approach
Energy Technology Data Exchange (ETDEWEB)
Whyte, Dennis [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
2014-12-12
The Plasma Surface Interactions (PSI) Science Center formed by the grant undertook a multidisciplinary set of studies on the complex interface between the plasma and solid states of matter. The strategy of the center was to combine and integrate the experimental, diagnostic and modeling toolkits from multiple institutions towards specific PSI problems. In this way the Center could tackle integrated science issues that were not addressable by single institutions, as well as advance the underlying science of the PSI more generally than just for fusion applications. The overall strategy proved very successful. The research results and highlights of the MIT portion of the Center are primarily described. A particular highlight is the study of tungsten nano-tendril growth in the presence of helium plasmas. The Center research provided valuable new insights into the mechanisms controlling the nano-tendrils by developing coupled modeling and in situ diagnostic methods that could be directly compared. For example, the role of helium accumulation in tungsten surface distortion was followed with uniquely developed in situ helium-concentration diagnostics. These depth-profiled, time-resolved helium concentration measurements continue to challenge the numerical models of nano-tendrils. The Center team also combined its expertise on tungsten nano-tendrils to demonstrate for the first time the growth of the tendrils in a fusion environment on the Alcator C-Mod fusion experiment, thus having significant impact on the broader fusion research effort. A new form of isolated nano-tendril “columns” was identified, which is now being used to understand the underlying mechanisms controlling tendril growth. The Center also advanced PSI science on a broader front, with a particular emphasis on developing a wide range of in situ PSI diagnostic tools at the DIONISOS facility at MIT. For example, the strong suppression of sputtering by the certain combination of light
Modelling the World Wool Market: A Hybrid Approach
2007-01-01
We present a model of the world wool market that merges two modelling traditions: the partial-equilibrium commodity-specific approach and the computable general-equilibrium approach. The model captures the multistage nature of the wool production system, and the heterogeneous nature of raw wool, processed wool and wool garments. It also captures the important wool producing and consuming regions of the world. We illustrate the utility of the model by estimating the effects of tariff barriers o...
An algebraic approach to the Hubbard model
de Leeuw, Marius
2015-01-01
We study the algebraic structure of an integrable Hubbard-Shastry type lattice model associated with the centrally extended su(2|2) superalgebra. This superalgebra underlies Beisert's AdS/CFT worldsheet R-matrix and Shastry's R-matrix. The considered model specializes to the one-dimensional Hubbard model in a certain limit. We demonstrate that Yangian symmetries of the R-matrix specialize to the Yangian symmetry of the Hubbard model found by Korepin and Uglov. Moreover, we show that the Hubbard model Hamiltonian has an algebraic interpretation as the so-called secret symmetry. We also discuss Yangian symmetries of the A and B models introduced by Frolov and Quinn.
Numerical modelling approach for mine backfill
Indian Academy of Sciences (India)
MUHAMMAD ZAKA EMAD
2017-09-01
Numerical modelling is broadly used for assessing complex scenarios in underground mines, including mining sequence and blast-induced vibrations from production blasting. Sublevel stoping mining methods with delayed backfill are extensively used to exploit steeply dipping ore bodies by Canadian hard-rock metal mines. Mine backfill is an important constituent of the mining process. Numerical modelling of mine backfill material needs special attention, as the numerical model must behave realistically and in accordance with the site conditions. This paper discusses a numerical modelling strategy for mine backfill material. The modelling strategy is studied using a case study mine from the Canadian mining industry. In the end, results of a numerical model parametric study are shown and discussed.
Regularization of turbulence - a comprehensive modeling approach
Geurts, Bernard J.
2011-01-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fl
Measuring equilibrium models: a multivariate approach
Directory of Open Access Journals (Sweden)
Nadji RAHMANIA
2011-04-01
Full Text Available This paper presents a multivariate methodology for obtaining measures of unobserved macroeconomic variables. The procedure used is the multivariate Hodrick-Prescott filter, which depends on smoothing parameters. The choice of these parameters is crucial. Our approach is based on consistent estimators of these parameters, depending only on the observed data.
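For reference, the univariate Hodrick-Prescott trend, the building block behind the multivariate filter discussed here, can be computed directly by solving the first-order conditions (a generic textbook sketch, not the authors' estimator; `lam` is the usual smoothing parameter):

```python
def hp_trend(y, lam=1600.0):
    """Univariate Hodrick-Prescott trend: solve (I + lam * D'D) tau = y,
    where D is the (n-2) x n second-difference matrix."""
    n = len(y)
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 2):
        d = [0.0] * n
        d[k], d[k + 1], d[k + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            for j in range(n):
                A[i][j] += lam * d[i] * d[j]
    # Gaussian elimination with partial pivoting
    b = list(map(float, y))
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    # Back substitution
    tau = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * tau[j] for j in range(i + 1, n))
        tau[i] = (b[i] - s) / A[i][i]
    return tau

y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
trend = hp_trend(y, lam=1600.0)
```

A useful sanity check on any implementation: a perfectly linear series has zero second differences, so the filter returns it unchanged regardless of the smoothing parameter.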
Energy Technology Data Exchange (ETDEWEB)
McFarlane, Karis J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-10-28
Boreal peatlands contain large amounts of old carbon, protected by anaerobic and cold conditions. Climate change could result in favorable conditions for the microbial decomposition and release of this old peat carbon as CO2 or CH4 back into the atmosphere. Our goal was to test the potential for this positive biological feedback to climate change at SPRUCE (Spruce and Peatland Response Under Climatic and Environmental Change), a manipulation experiment funded by DOE and occurring in a forested bog in Minnesota. Taking advantage of LLNL’s capabilities and expertise in chemical and isotopic signatures we found that carbon emissions from peat were dominated by recently fixed photosynthates, even after short-term experimental warming. We also found that subsurface hydrologic transport was surprisingly rapid at SPRUCE, supplying microbes with young dissolved organic carbon (DOC). We also identified which microbes oxidize CH4 to CO2 at SPRUCE and found that the most active of these also fix N2 (which means they can utilize atmospheric N, making it accessible for other microbes and plants). These results reflect important interactions between hydrology, carbon cycling, and nitrogen cycling present at the bog and relevant to interpreting experimental results and modeling the wetland response to experimental treatments. LLNL involvement at SPRUCE continues through collaborations and a small contract with ORNL, the lead lab for the SPRUCE experiment.
A graphical approach to analogue behavioural modelling
Moser, Vincent; Nussbaum, Pascal; Amann, Hans-Peter; Astier, Luc; Pellandini, Fausto
2007-01-01
In order to master the growing complexity of analogue electronic systems, modelling and simulation of analogue hardware at various levels is absolutely necessary. This paper presents an original modelling method based on the graphical description of analogue electronic functional blocks. This method is intended to be automated and integrated into a design framework: specialists create behavioural models of existing functional blocks, that can then be used through high-level selection and spec...
A geometrical approach to structural change modeling
Stijepic, Denis
2013-01-01
We propose a model for studying the dynamics of economic structures. The model is based on qualitative information regarding structural dynamics, in particular, (a) the information on the geometrical properties of trajectories (and their domains) which are studied in structural change theory and (b) the empirical information from stylized facts of structural change. We show that structural change is path-dependent in this model and use this fact to restrict the number of future structural cha...
Unitarity of black hole evaporation in final-state projection models
Lloyd, Seth; Preskill, John
2014-08-01
Almheiri et al. have emphasized that otherwise reasonable beliefs about black hole evaporation are incompatible with the monogamy of quantum entanglement, a general property of quantum mechanics. We investigate the final-state projection model of black hole evaporation proposed by Horowitz and Maldacena, pointing out that this model admits cloning of quantum states and polygamous entanglement, allowing unitarity of the evaporation process to be reconciled with smoothness of the black hole event horizon. Though the model seems to require carefully tuned dynamics to ensure exact unitarity of the black hole S-matrix, for a generic final-state boundary condition the deviations from unitarity are exponentially small in the black hole entropy; furthermore observers inside black holes need not detect any deviations from standard quantum mechanics. Though measurements performed inside old black holes could potentially produce causality-violating phenomena, the computational complexity of decoding the Hawking radiation may render the causality violation unobservable. Final-state projection models illustrate how inviolable principles of standard quantum mechanics might be circumvented in a theory of quantum gravity.
Consumer preference models: fuzzy theory approach
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
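A fuzzy-set preference model of this kind can be sketched with triangular membership functions for linguistic variables (an illustrative toy, not the authors' model; the attribute terms and breakpoints below are invented):

```python
def tri(a, b, c):
    """Triangular membership function for a linguistic term:
    0 outside (a, c), peaking at 1 when x == b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Invented linguistic terms: price in dollars, quality on a 0-10 scale.
cheap = tri(-1.0, 0.0, 50.0)   # fully "cheap" at $0, no longer cheap at $50
good  = tri(5.0, 10.0, 15.0)   # fully "good" at quality 10

def preference(price, quality):
    # Fuzzy AND (min) of the two linguistic judgements
    return min(cheap(price), good(quality))

score_a = preference(20.0, 9.0)   # cheap-ish and quite good
score_b = preference(45.0, 6.0)   # barely cheap, barely good
```

Products can then be ranked by their fuzzy preference score, playing the role that part-worth utilities play in a conjoint model.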
Energy Technology Data Exchange (ETDEWEB)
Semmens, D.
1987-12-01
3-D gravity modeling was done in the area of the Ennis hot spring in an attempt to determine the structure controlling the spring. The modeling was done in a two-step process: 1) The topography was modeled by modeling the valley fill from the highest elevation in the modeling area down to some elevation below the lowest station elevation, using Talwani and Ewing's (1960) method of modeling with vertically-stacked, horizontal, n-sided polygons. Once the gravity contributions of the valley fill included in this "topographic model" were calculated, they were removed from the original gravity data. 2) The remaining valley fill was modeled using blocks, where the 3-D algorithm for modeling with blocks results from integrating the gravity formula in the X and Z directions and approximating the integration in the Y direction using a quadrature formula. Finally, an inverse 3-D gravity modeling program was written to automatically adjust the bedrock topography output from this two-step modeling process. The gravity data calculated from the adjusted bedrock topography, output from the inverse modeling program, should match the observed gravity data within the error of the survey. 43 refs., 40 figs., 9 tabs.
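The forward step of such block-based gravity modeling can be illustrated with a crude point-mass approximation (a simplified stand-in for the Talwani-polygon and quadrature-based block integration described above; the block geometry and density are invented):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_blocks(station, blocks):
    """Vertical gravity at a station from rectangular blocks, each
    approximated as a point mass at its centre. Coordinates in metres,
    z positive downward; blocks are (cx, cy, cz, dx, dy, dz, rho)."""
    sx, sy, sz = station
    total = 0.0
    for (cx, cy, cz, dx, dy, dz, rho) in blocks:
        m = rho * dx * dy * dz          # block mass from density contrast
        rx, ry, rz = cx - sx, cy - sy, cz - sz
        r = (rx * rx + ry * ry + rz * rz) ** 0.5
        total += G * m * rz / r ** 3    # vertical component of attraction
    return total

# One 10 m cube of 2000 kg/m^3 density contrast, centred 100 m below origin.
blocks = [(0.0, 0.0, 100.0, 10.0, 10.0, 10.0, 2000.0)]
g_near = gz_blocks((0.0, 0.0, 0.0), blocks)    # station directly above
g_far  = gz_blocks((500.0, 0.0, 0.0), blocks)  # station 500 m away
```

An inversion program like the one in the report would repeatedly run such a forward calculation, adjusting the block geometry until the computed gravity matches observations within survey error.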
A New Approach for Magneto-Static Hysteresis Behavioral Modeling
DEFF Research Database (Denmark)
Astorino, Antonio; Swaminathan, Madhavan; Antonini, Giulio
2016-01-01
In this paper, a new behavioral modeling approach for magneto-static hysteresis is presented. Many accurate models are currently available, but none of them seems able to correctly reproduce all the possible B-H paths at low computational cost. By contrast, the approach proposed ... achieved when comparing the measured and simulated results.
Modeling quasi-static poroelastic propagation using an asymptotic approach
Energy Technology Data Exchange (ETDEWEB)
Vasco, D.W.
2007-11-01
solution. Unfortunately, analytic solutions are only available for highly idealized conditions, such as a uniform (Rudnicki 1986) or one-dimensional (Simon et al. 1984; Gajo & Mongiovi 1995; Wang & Kumpel 2003) medium. In this paper I derive an asymptotic, semi-analytic solution for coupled deformation and flow. The approach is similar to trajectory- or ray-based methods used to model elastic and electromagnetic wave propagation (Aki & Richards 1980; Kline & Kay 1979; Kravtsov & Orlov 1990; Keller & Lewis 1995) and, more recently, diffusive propagation (Virieux et al. 1994; Vasco et al. 2000; Shapiro et al. 2002; Vasco 2007). The asymptotic solution is valid in the presence of smoothly-varying, heterogeneous flow properties. The situation I am modeling is that of a formation with heterogeneous flow properties and uniform mechanical properties. The boundaries of the layer may vary arbitrarily and can define discontinuities in both flow and mechanical properties. Thus, using the techniques presented here, it is possible to model a stack of irregular layers with differing mechanical properties. Within each layer the hydraulic conductivity and porosity can vary smoothly but with an arbitrarily large magnitude. The advantages of this approach are that it produces explicit, semi-analytic expressions for the arrival time and amplitude of the Biot slow and fast waves, expressions which are valid in a medium with heterogeneous properties. As shown here, the semi-analytic expressions provide insight into the nature of pressure and deformation signals recorded at an observation point. Finally, the technique requires considerably fewer computer resources than does a fully numerical treatment.
Energy Technology Data Exchange (ETDEWEB)
Lovley, Derek R.
2012-10-31
This project successfully accomplished its goal of coupling genome-scale metabolic models with hydrological and geochemical models to predict the activity of subsurface microorganisms during uranium bioremediation. Furthermore, it was demonstrated how this modeling approach can be used to develop new strategies to optimize bioremediation. The approach of coupling genome-scale metabolic models with reactive transport modeling is now well enough established that it has been adopted by other DOE investigators studying uranium bioremediation. Furthermore, the basic principles developed during our studies will be applicable to much broader investigations of microbial activities, not only for other types of bioremediation, but microbial metabolism in diversity of environments. This approach has the potential to make an important contribution to predicting the impact of environmental perturbations on the cycling of carbon and other biogeochemical cycles.
Nucleon Spin Content in a Relativistic Quark Potential Model Approach
Institute of Scientific and Technical Information of China (English)
DONG YuBing; FENG QingGuo
2002-01-01
Based on a relativistic quark model approach with an effective potential U(r) = (a_c/2)(1 + γ^0)r^2, the spin content of the nucleon is investigated. Pseudo-scalar interaction between quarks and Goldstone bosons is employed to calculate the couplings between the Goldstone bosons and the nucleon. Different approaches to dealing with the center-of-mass correction in the relativistic quark potential model approach are discussed.
Validated Models for Radiation Response and Signal Generation in Scintillators: Final Report
Energy Technology Data Exchange (ETDEWEB)
Kerisit, Sebastien N.; Gao, Fei; Xie, YuLong; Campbell, Luke W.; Van Ginhoven, Renee M.; Wang, Zhiguo; Prange, Micah P.; Wu, Dangxin
2014-12-01
This Final Report presents work carried out at Pacific Northwest National Laboratory (PNNL) under the project entitled “Validated Models for Radiation Response and Signal Generation in Scintillators” (Project number: PL10-Scin-theor-PD2Jf) and led by Drs. Fei Gao and Sebastien N. Kerisit. This project was divided into four tasks: 1) electronic response functions (ab initio data model); 2) electron-hole yield, variance, and spatial distribution; 3) ab initio calculations of information carrier properties; and 4) transport of electron-hole pairs and scintillation efficiency. Detailed information on the results obtained in each of the four tasks is provided in this Final Report. Furthermore, published peer-reviewed articles based on the work carried out under this project are included in the Appendix. This work was supported by the National Nuclear Security Administration, Office of Nuclear Nonproliferation Research and Development (DNN R&D/NA-22), of the U.S. Department of Energy (DOE).
Search for the standard model Higgs boson in tau final states.
Abazov, V M; Abbott, B; Abolins, M; Acharya, B S; Adams, M; Adams, T; Aguilo, E; Ahsan, M; Alexeev, G D; Alkhazov, G; Alton, A; Alverson, G; Alves, G A; Ancu, L S; Andeen, T; Anzelc, M S; Aoki, M; Arnoud, Y; Arov, M; Arthaud, M; Askew, A; Asman, B; Atramentov, O; Avila, C; Backusmayes, J; Badaud, F; Bagby, L; Baldin, B; Bandurin, D V; Banerjee, S; Barberis, E; Barfuss, A-F; Bargassa, P; Baringer, P; Barreto, J; Bartlett, J F; Bassler, U; Bauer, D; Beale, S; Bean, A; Begalli, M; Begel, M; Belanger-Champagne, C; Bellantoni, L; Bellavance, A; Benitez, J A; Beri, S B; Bernardi, G; Bernhard, R; Bertram, I; Besançon, M; Beuselinck, R; Bezzubov, V A; Bhat, P C; Bhatnagar, V; Blazey, G; Blessing, S; Bloom, K; Boehnlein, A; Boline, D; Bolton, T A; Boos, E E; Borissov, G; Bose, T; Brandt, A; Brock, R; Brooijmans, G; Bross, A; Brown, D; Bu, X B; Buchholz, D; Buehler, M; Buescher, V; Bunichev, V; Burdin, S; Burnett, T H; Buszello, C P; Calfayan, P; Calpas, B; Calvet, S; Cammin, J; Carrasco-Lizarraga, M A; Carrera, E; Carvalho, W; Casey, B C K; Castilla-Valdez, H; Chakrabarti, S; Chakraborty, D; Chan, K M; Chandra, A; Cheu, E; Cho, D K; Choi, S; Choudhary, B; Christoudias, T; Cihangir, S; Claes, D; Clutter, J; Cooke, M; Cooper, W E; Corcoran, M; Couderc, F; Cousinou, M-C; Crépé-Renaudin, S; Cuplov, V; Cutts, D; Cwiok, M; Das, A; Davies, G; De, K; de Jong, S J; De La Cruz-Burelo, E; DeVaughan, K; Déliot, F; Demarteau, M; Demina, R; Denisov, D; Denisov, S P; Desai, S; Diehl, H T; Diesburg, M; Dominguez, A; Dorland, T; Dubey, A; Dudko, L V; Duflot, L; Duggan, D; Duperrin, A; Dutt, S; Dyshkant, A; Eads, M; Edmunds, D; Ellison, J; Elvira, V D; Enari, Y; Eno, S; Ermolov, P; Escalier, M; Evans, H; Evdokimov, A; Evdokimov, V N; Facini, G; Ferapontov, A V; Ferbel, T; Fiedler, F; Filthaut, F; Fisher, W; Fisk, H E; Fortner, M; Fox, H; Fu, S; Fuess, S; Gadfort, T; Galea, C F; Garcia-Bellido, A; Gavrilov, V; Gay, P; Geist, W; Geng, W; Gerber, C E; Gershtein, Y; Gillberg, D; Ginther, G; 
Gómez, B; Goussiou, A; Grannis, P D; Greder, S; Greenlee, H; Greenwood, Z D; Gregores, E M; Grenier, G; Gris, Ph; Grivaz, J-F; Grohsjean, A; Grünendahl, S; Grünewald, M W; Guo, F; Guo, J; Gutierrez, G; Gutierrez, P; Haas, A; Hadley, N J; Haefner, P; Hagopian, S; Haley, J; Hall, I; Hall, R E; Han, L; Harder, K; Harel, A; Hauptman, J M; Hays, J; Hebbeker, T; Hedin, D; Hegeman, J G; Heinson, A P; Heintz, U; Hensel, C; Heredia-De La Cruz, I; Herner, K; Hesketh, G; Hildreth, M D; Hirosky, R; Hoang, T; Hobbs, J D; Hoeneisen, B; Hohlfeld, M; Hossain, S; Houben, P; Hu, Y; Hubacek, Z; Huske, N; Hynek, V; Iashvili, I; Illingworth, R; Ito, A S; Jabeen, S; Jaffré, M; Jain, S; Jakobs, K; Jamin, D; Jarvis, C; Jesik, R; Johns, K; Johnson, C; Johnson, M; Johnston, D; Jonckheere, A; Jonsson, P; Juste, A; Kajfasz, E; Karmanov, D; Kasper, P A; Katsanos, I; Kaushik, V; Kehoe, R; Kermiche, S; Khalatyan, N; Khanov, A; Kharchilava, A; Kharzheev, Y N; Khatidze, D; Kim, T J; Kirby, M H; Kirsch, M; Klima, B; Kohli, J M; Konrath, J-P; Kozelov, A V; Kraus, J; Kuhl, T; Kumar, A; Kupco, A; Kurca, T; Kuzmin, V A; Kvita, J; Lacroix, F; Lam, D; Lammers, S; Landsberg, G; Lebrun, P; Lee, W M; Leflat, A; Lellouch, J; Li, J; Li, L; Li, Q Z; Lietti, S M; Lim, J K; Lincoln, D; Linnemann, J; Lipaev, V V; Lipton, R; Liu, Y; Liu, Z; Lobodenko, A; Lokajicek, M; Love, P; Lubatti, H J; Luna-Garcia, R; Lyon, A L; Maciel, A K A; Mackin, D; Mättig, P; Magerkurth, A; Mal, P K; Malbouisson, H B; Malik, S; Malyshev, V L; Maravin, Y; Martin, B; McCarthy, R; McGivern, C L; Meijer, M M; Melnitchouk, A; Mendoza, L; Menezes, D; Mercadante, P G; Merkin, M; Merritt, K W; Meyer, A; Meyer, J; Mitrevski, J; Mommsen, R K; Mondal, N K; Moore, R W; Moulik, T; Muanza, G S; Mulhearn, M; Mundal, O; Mundim, L; Nagy, E; Naimuddin, M; Narain, M; Neal, H A; Negret, J P; Neustroev, P; Nilsen, H; Nogima, H; Novaes, S F; Nunnemann, T; Obrant, G; Ochando, C; Onoprienko, D; Orduna, J; Oshima, N; Osman, N; Osta, J; Otec, R; Otero Y Garzón, 
G J; Owen, M; Padilla, M; Padley, P; Pangilinan, M; Parashar, N; Park, S-J; Park, S K; Parsons, J; Partridge, R; Parua, N; Patwa, A; Pawloski, G; Penning, B; Perfilov, M; Peters, K; Peters, Y; Pétroff, P; Piegaia, R; Piper, J; Pleier, M-A; Podesta-Lerma, P L M; Podstavkov, V M; Pogorelov, Y; Pol, M-E; Polozov, P; Popov, A V; Potter, C; Prado da Silva, W L; Protopopescu, S; Qian, J; Quadt, A; Quinn, B; Rakitine, A; Rangel, M S; Ranjan, K; Ratoff, P N; Renkel, P; Rich, P; Rijssenbeek, M; Ripp-Baudot, I; Rizatdinova, F; Robinson, S; Rodrigues, R F; Rominsky, M; Royon, C; Rubinov, P; Ruchti, R; Safronov, G; Sajot, G; Sánchez-Hernández, A; Sanders, M P; Sanghi, B; Savage, G; Sawyer, L; Scanlon, T; Schaile, D; Schamberger, R D; Scheglov, Y; Schellman, H; Schliephake, T; Schlobohm, S; Schwanenberger, C; Schwienhorst, R; Sekaric, J; Severini, H; Shabalina, E; Shamim, M; Shary, V; Shchukin, A A; Shivpuri, R K; Siccardi, V; Simak, V; Sirotenko, V; Skubic, P; Slattery, P; Smirnov, D; Snow, G R; Snow, J; Snyder, S; Söldner-Rembold, S; Sonnenschein, L; Sopczak, A; Sosebee, M; Soustruznik, K; Spurlock, B; Stark, J; Stolin, V; Stoyanova, D A; Strandberg, J; Strandberg, S; Strang, M A; Strauss, E; Strauss, M; Ströhmer, R; Strom, D; Stutte, L; Sumowidagdo, S; Svoisky, P; Takahashi, M; Tanasijczuk, A; Taylor, W; Tiller, B; Tissandier, F; Titov, M; Tokmenin, V V; Torchiani, I; Tsybychev, D; Tuchming, B; Tully, C; Tuts, P M; Unalan, R; Uvarov, L; Uvarov, S; Uzunyan, S; Vachon, B; van den Berg, P J; Van Kooten, R; van Leeuwen, W M; Varelas, N; Varnes, E W; Vasilyev, I A; Verdier, P; Vertogradov, L S; Verzocchi, M; Vilanova, D; Vint, P; Vokac, P; Voutilainen, M; Wagner, R; Wahl, H D; Wang, M H L S; Warchol, J; Watts, G; Wayne, M; Weber, G; Weber, M; Welty-Rieger, L; Wenger, A; Wetstein, M; White, A; Wicke, D; Williams, M R J; Wilson, G W; Wimpenny, S J; Wobisch, M; Wood, D R; Wyatt, T R; Xie, Y; Xu, C; Yacoob, S; Yamada, R; Yang, W-C; Yasuda, T; Yatsunenko, Y A; Ye, Z; Yin, H; Yip, K; 
Yoo, H D; Youn, S W; Yu, J; Zeitnitz, C; Zelitch, S; Zhao, T; Zhou, B; Zhu, J; Zielinski, M; Zieminska, D; Zivkovic, L; Zutshi, V; Zverev, E G
2009-06-26
We present a search for the standard model Higgs boson using hadronically decaying tau leptons, in 1 fb^{-1} of data collected with the D0 detector at the Fermilab Tevatron $p\bar{p}$ collider. We select two final states: tau^{+/-} plus missing transverse energy and b jets, and tau^+ tau^- plus jets. These final states are sensitive to a combination of associated W/Z boson plus Higgs boson, vector boson fusion, and gluon-gluon fusion production processes. The observed ratio of the combined limit on the Higgs production cross section at the 95% C.L. to the standard model expectation is 29 for a Higgs boson mass of 115 GeV.
Architectural considerations for agent-based national scale policy models : LDRD final report.
Energy Technology Data Exchange (ETDEWEB)
Backus, George A.; Strip, David R.
2007-09-01
The need to anticipate the consequences of policy decisions becomes ever more important as the magnitude of the potential consequences grows. The multiplicity of connections between the components of society and the economy makes intuitive assessments extremely unreliable. Agent-based modeling has the potential to be a powerful tool in modeling policy impacts. The direct mapping between agents and elements of society and the economy simplifies the mapping of real-world functions into the world of computational assessment. Our modeling initiative is motivated by the desire to facilitate informed public debate on alternative policies for how we, as a nation, provide healthcare to our population. We explore the implications of this motivation on the design and implementation of a model. We discuss the choice of an agent-based modeling approach and contrast it to micro-simulation and systems dynamics approaches.
A simple approach to modeling ductile failure.
Energy Technology Data Exchange (ETDEWEB)
Wellman, Gerald William
2012-06-01
Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.
An approach for activity-based DEVS model specification
DEFF Research Database (Denmark)
Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram
2016-01-01
activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...
Advanced language modeling approaches, case study: Expert search
Hiemstra, Djoerd
2008-01-01
This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative
3D Multiscale Integrated Modeling Approach of Complex Rock Mass Structures
Directory of Open Access Journals (Sweden)
Mingchao Li
2014-01-01
Based on abundant geological data of different regions and different scales in hydraulic engineering, a new approach to integrated 3D modeling at the engineering scale and the statistical scale was put forward, considering the complex relationships among geological structures, discontinuities, and hydraulic structures. For engineering-scale geological structures, the 3D rock mass model of the study region was built by the exact-match modeling method and a reliability analysis technique. For statistical-scale jointed rock mass, a random network simulation modeling method was realized, including the Baecher structure plane model, Monte Carlo simulation, and dynamic checking of random discontinuities, and the corresponding software program was developed. Finally, the refined model was reconstructed by integrating the engineering-scale model of rock structures, the statistical-scale model of the discontinuity network, and the hydraulic structures model. It has been applied to a practical hydraulic project and provides the model basis for the analysis of hydraulic rock mass structures.
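The statistical-scale step above (a Baecher-type structure plane model sampled by Monte Carlo) can be sketched in a few lines. The disc geometry, the uniform orientation law, and the lognormal radius distribution below are illustrative assumptions, not the calibrated distributions used in the study.

```python
import random

def sample_joint_network(density, volume, seed=42):
    """Monte Carlo sample of a Baecher-type disc network: a Poisson number
    of circular joints with uniform centers and orientations and lognormal
    radii (all distribution choices here are illustrative assumptions)."""
    random.seed(seed)
    mean_n = density * volume
    # Poisson-distributed joint count, generated by summing exponential gaps
    n, t = 0, random.expovariate(1.0)
    while t < mean_n:
        n += 1
        t += random.expovariate(1.0)
    side = volume ** (1.0 / 3.0)  # cubic study region
    joints = []
    for _ in range(n):
        center = tuple(random.uniform(0.0, side) for _ in range(3))
        dip_direction = random.uniform(0.0, 360.0)   # degrees
        dip = random.uniform(0.0, 90.0)              # degrees
        radius = random.lognormvariate(0.5, 0.3)     # meters, illustrative
        joints.append((center, dip_direction, dip, radius))
    return joints

joints = sample_joint_network(density=0.01, volume=1000.0)
```

Each sampled joint could then be checked dynamically against the engineering-scale model, as the abstract describes.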
1993-1994 Final technical report for establishing the SECME Model in the District of Columbia
Energy Technology Data Exchange (ETDEWEB)
Vickers, R.G.
1995-12-31
This is the final report for a program to establish the SECME Model in the District of Columbia. This program has seen the development of a partnership between the District of Columbia Public Schools, the University of the District of Columbia, the Department of Energy, and SECME. This partnership has demonstrated positive achievement in mathematics and science education and learning in students within the District of Columbia.
Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R
2012-08-01
A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true covariates and zero spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5-point decrease to 0.1-point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model; the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three
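The core of the single-objective genetic algorithm can be illustrated on a toy covariate-selection problem. The binary encoding and elitist loop below are a minimal sketch; the score function is an invented stand-in for a NONMEM run returning an information criterion, not the actual fitness evaluation used in the study.

```python
import random

def ga_select(score, n_cov, pop_size=20, gens=30, seed=3):
    """Single-objective genetic algorithm over binary covariate-inclusion
    vectors, minimizing a model-fitness score such as AIC."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_cov)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=score)                     # best individuals first
        survivors = pop[:pop_size // 2]         # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_cov)
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < 0.3:           # bit-flip mutation
                child[random.randrange(n_cov)] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=score)

# toy score: covariates 0 and 2 are "true"; every included covariate
# pays a complexity penalty (values are invented for the example)
def toy_aic(mask):
    return -10 * mask[0] - 8 * mask[2] + 3 * sum(mask)

best = ga_select(toy_aic, n_cov=5)
```

With this score, the optimum includes only covariates 0 and 2; the stepwise and Lasso comparisons in the abstract would each be run against the same simulated truth.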
Random matrix model approach to chiral symmetry
Verbaarschot, J J M
1996-01-01
We review the application of random matrix theory (RMT) to chiral symmetry in QCD. Starting from the general philosophy of RMT we introduce a chiral random matrix model with the global symmetries of QCD. Exact results are obtained for universal properties of the Dirac spectrum: i) finite volume corrections to the valence quark mass dependence of the chiral condensate, and ii) microscopic fluctuations of Dirac spectra. Comparisons with lattice QCD simulations are made. Most notably, the variance of the number of levels in an interval containing $n$ levels on average is suppressed by a factor $(\log n)/\pi^2 n$. An extension of the random matrix model to nonzero temperatures and chemical potential provides us with a schematic model of the chiral phase transition. In particular, this elucidates the nature of the quenched approximation at nonzero chemical potential.
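Spelling out the quoted suppression as a one-line consequence of the abstract's statement: for statistically independent (Poisson-distributed) levels the variance of the number of levels in such an interval is simply $n$, so the suppression factor yields the logarithmic number variance characteristic of random matrix spectra,

```latex
\Sigma^2(n) \;=\; n \cdot \frac{\log n}{\pi^2 n} \;=\; \frac{\log n}{\pi^2}.
```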
Machine Learning Approaches for Modeling Spammer Behavior
Islam, Md Saiful; Islam, Md Rafiqul
2010-01-01
Spam is commonly known as unsolicited or unwanted email messages on the Internet, posing a potential threat to Internet security. Users spend a valuable amount of time deleting spam emails. More importantly, ever-increasing spam emails occupy server storage space and consume network bandwidth. Keyword-based spam email filtering strategies will eventually be less successful at modeling spammer behavior as spammers constantly change their tricks to circumvent these filters. The evasive tactics that spammers use are patterns, and these patterns can be modeled to combat spam. This paper investigates the possibilities of modeling spammer behavioral patterns with well-known classification algorithms such as the Naïve Bayesian classifier (Naïve Bayes), Decision Tree Induction (DTI) and Support Vector Machines (SVMs). Preliminary experimental results demonstrate a promising detection rate of around 92%, a considerable enhancement in performance compared to similar spammer behavior modeling research.
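The first of these classifiers can be sketched from scratch: a multinomial Naïve Bayes filter with Laplace smoothing over token counts. The toy corpus and tokenization below are invented for the example and stand in for real email features.

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a multinomial Naive Bayes spam filter.
    docs: list of (tokens, label) pairs with label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}
    n_docs = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        n_docs[label] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, n_docs, vocab

def classify(tokens, counts, n_docs, vocab):
    """Return the label maximizing the log posterior, with add-one smoothing."""
    total = sum(n_docs.values())
    best, best_lp = None, float("-inf")
    for label in ("spam", "ham"):
        lp = math.log(n_docs[label] / total)                 # log prior
        denom = sum(counts[label].values()) + len(vocab)     # smoothed total
        for t in tokens:
            lp += math.log((counts[label][t] + 1) / denom)   # log likelihood
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# toy corpus (hypothetical tokens)
train = [
    (["win", "cash", "now"], "spam"),
    (["free", "cash", "prize"], "spam"),
    (["meeting", "tomorrow", "agenda"], "ham"),
    (["project", "meeting", "notes"], "ham"),
]
model = train_nb(train)
print(classify(["free", "cash"], *model))  # prints "spam"
```

The behavioral-pattern features the paper studies would replace the raw word tokens here.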
Infectious disease modeling a hybrid system approach
Liu, Xinzhi
2017-01-01
This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.
Second Quantization Approach to Stochastic Epidemic Models
Mondaini, Leonardo
2015-01-01
We show how the standard field theoretical language based on creation and annihilation operators may be used for a straightforward derivation of closed master equations describing the population dynamics of multivariate stochastic epidemic models. In order to do that, we introduce an SIR-inspired stochastic model for hepatitis C virus epidemic, from which we obtain the time evolution of the mean number of susceptible, infected, recovered and chronically infected individuals in a population whose total size is allowed to change.
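Master equations of the kind derived above are commonly simulated exactly with the Gillespie algorithm. The sketch below does this for the plain SIR core only (the chronic-infection compartment of the hepatitis C model is omitted, and the rate constants are illustrative).

```python
import random

def gillespie_sir(S, I, R, beta, gamma, t_max, seed=1):
    """Exact stochastic SIR trajectory via the Gillespie algorithm.
    Event rates: infection beta*S*I/N, recovery gamma*I."""
    random.seed(seed)
    N = S + I + R
    t = 0.0
    while t < t_max and I > 0:
        a_inf = beta * S * I / N
        a_rec = gamma * I
        a_tot = a_inf + a_rec
        t += random.expovariate(a_tot)          # time to next event
        if random.random() < a_inf / a_tot:     # infection event
            S, I = S - 1, I + 1
        else:                                   # recovery event
            I, R = I - 1, R + 1
    return S, I, R

final_state = gillespie_sir(990, 10, 0, beta=0.3, gamma=0.1, t_max=1000.0)
```

Averaging many such trajectories recovers the mean compartment sizes whose time evolution the abstract derives in closed form.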
DEFF Research Database (Denmark)
Jiang, Tao; Sin, Gürkan; Spanjers, Henri
2009-01-01
Activated sludge models (ASM) have been developed and largely applied in conventional activated sludge (CAS) systems. The applicability of ASM to model membrane bioreactors (MBR) and the differences in modeling approaches have not been studied in detail. A laboratory-scale MBR was modeled using ASM...... to the inhibition effect of soluble microbial products (SMP) at elevated concentration. Second, a greater biomass affinity to oxygen and ammonium was found, which was probably related to smaller MBR sludge flocs. Finally, the membrane throughput during membrane backwashing/relaxation can be normalized...
Wickramarachchi, P. N.; Kawamoto, K.; Hamamoto, S.; Nagamori, M.; Moldrup, P.; Komatsu, T.
2011-12-01
for ka than Dp for both fractions. We suggest this is because compaction creates well-aligned macropore networks that are available for gas transport through the porous material. Then, the well-known predictive models, the water-induced linear reduction (WLR) model for Dp and the reference point law (RPL) model for ka, were modified with reference-point measurements (dry conditions), and the model parameters correlated linearly with dry bulk density values for both fractions of landfill final cover soil.
Dispersion modeling approaches for near road
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of applications. For example, such models can be useful for evaluating the mitigation potential of roadside barriers in reducing near-road exposures and their associated adverse health effects. Two databases, a tracer field study and a wind tunnel study, provide measurements used in the development and/or validation of algorithms to simulate dispersion in the presence of noise barriers. The tracer field study was performed in Idaho Falls, ID, USA with a 6-m noise barrier and a finite line source in a variety of atmospheric conditions. The second study was performed in the meteorological wind tunnel at the US EPA and simulated line sources at different distances from a model noise barrier to capture the effect on emissions from individual lanes of traffic. In both cases, velocity and concentration measurements characterized the effect of the barrier on dispersion. This paper presents comparisons, against the two datasets, of the barrier algorithms implemented in two different dispersion models: US EPA’s R-LINE (a research dispersion modelling tool under development by the US EPA’s Office of Research and Development) and CERC’s ADMS model (ADMS-Urban). In R-LINE the physical features reveal
Flipped models in Trinification: A Comprehensive Approach
Rodríguez, Oscar; Ponce, William A; Rojas, Eduardo
2016-01-01
By considering the 3-3-1 and the left-right symmetric models as low energy effective theories of the trinification group, alternative versions of these models are found. The new neutral gauge bosons in the universal 3-3-1 model and its flipped versions are considered, as are the left-right symmetric model and its two flipped variants. For these models, the couplings of the $Z'$ bosons to the standard model fermions are reported. The explicit form of the null space of the vector boson mass matrix for an arbitrary Higgs tensor and gauge group is also presented. In the general framework of the trinification gauge group, and by using the LHC experimental results and EW precision data, limits on the $Z'$ mass and the mixing angle between $Z$ and the new gauge bosons $Z'$ are imposed. The general results call for very small mixing angles, of order $10^{-3}$ radians, and $M_{Z'}$ > 2.5 TeV.
Hubbard Model Approach to X-ray Spectroscopy
Ahmed, Towfiq
We have implemented a Hubbard model based first-principles approach for real-space calculations of x-ray spectroscopy, which allows one to study the excited-state electronic structure of correlated systems. Theoretical understanding of many electronic features in d and f electron systems remains beyond the scope of conventional density functional theory (DFT). In this work our main effort is to go beyond the local density approximation (LDA) by incorporating the Hubbard model within the real-space multiple-scattering Green's function (RSGF) formalism. Historically, the first theoretical description of correlated systems was published by Nevill Mott and others in 1937. They realized that the insulating gap and antiferromagnetism in the transition metal oxides are mainly caused by the strong on-site Coulomb interaction of the localized unfilled 3d orbitals. Even with the recent progress of first-principles methods (e.g., DFT) and model Hamiltonian approaches (e.g., the Hubbard-Anderson model), the electronic description of many of these systems remains a non-trivial combination of both. X-ray absorption near edge spectra (XANES) and x-ray emission spectra (XES) are very powerful spectroscopic probes for many electronic features near the Fermi energy (EF), which are caused by the on-site Coulomb interaction of localized electrons. In this work we focus on three different cases of many-body effects due to the interaction of localized d electrons. Here, for the first time, we have applied the Hubbard model in the RSGF formalism to the calculation of x-ray spectra of Mott insulators (e.g., NiO and MnO). Secondly, we have implemented in our RSGF approach a doping-dependent self-energy that was constructed from a single-band Hubbard model for the overdoped high-$T_c$ cuprate La$_{2-x}$Sr$_x$CuO$_4$. Finally our RSGF calculation of XANES is calculated with the spectral function from Lee and Hedin's charge transfer satellite model. For all these cases our
Lightweight approach to model traceability in a CASE tool
Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita
2017-07-01
The term "model-driven" is by no means a new buzzword within the ranks of the system development community. Nevertheless, the ever-increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers to develop new and more effective approaches to system development. With this increasing complexity, model traceability, and model management as a whole, become indispensable activities of the model-driven system development process. The main goal of this paper is to present the conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.
Approaching models of nursing from a postmodernist perspective.
Lister, P
1991-02-01
This paper explores some questions about the use of models of nursing. These questions make various assumptions about the nature of models of nursing, in general and in particular. Underlying these assumptions are various philosophical positions which are explored through an introduction to postmodernist approaches in philosophical criticism. To illustrate these approaches, a critique of the Roper et al. model is developed, and more general attitudes towards models of nursing are examined. It is suggested that postmodernism offers a challenge to many of the assumptions implicit in models of nursing, and that a greater awareness of these assumptions should lead to nursing care being better informed where such models are in use.
Manufacturing Excellence Approach to Business Performance Model
Directory of Open Access Journals (Sweden)
Jesus Cruz Alvarez
2015-03-01
Six Sigma, lean manufacturing, total quality management, quality control, and quality function deployment are the fundamental set of tools for enhancing productivity in organizations. Some research outlines the benefit of each tool in the particular context of a firm's productivity, but not in the broader context of a firm's competitiveness, which is achieved through business performance. The aim of this theoretical research paper is to contribute to this end and to propose a manufacturing excellence approach that links productivity tools to the broader context of business performance.
Energy Technology Data Exchange (ETDEWEB)
Demkowicz, Michael [Texas A & M Univ., College Station, TX (United States); Schuh, Christopher [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
2016-08-29
This is the final report on project DE-SC0008926. The goal of this project was to create capabilities for constructing, analyzing, and modeling experimental databases of the crystallographic characters and physical properties of thousands of individual grain boundaries (GBs) in polycrystalline metals. This project focused on gallium permeation through aluminum (Al) GBs and hydrogen uptake into nickel (Ni) GBs as model problems. This report summarizes the work done within the duration of this project (including the original three-year award and the subsequent one-year renewal), i.e. from August 1, 2012 until April 30, 2016.
A Bayesian Model Committee Approach to Forecasting Global Solar Radiation
Lauret, Philippe; Muselli, Marc; David, Mathieu; Diagne, Hadja; Voyant, Cyril
2012-01-01
This paper proposes a rather new modelling approach in the realm of solar radiation forecasting. In this work, two forecasting models, Autoregressive Moving Average (ARMA) and Neural Network (NN) models, are combined to form a model committee. Bayesian inference is used to assign a probability to each model in the committee. Hence, each model's predictions are weighted by its respective probability. The models are fitted to one year of hourly Global Horizontal Irradiance (GHI) measurements. Another year (the test set) is used for making genuine one hour ahead (h+1) out-of-sample forecast comparisons. The proposed approach is benchmarked against the persistence model. First results show an improvement brought by this approach.
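The committee idea can be sketched with a standard approximation in which each model's posterior probability is proportional to exp(-BIC/2); the paper's actual weighting comes from full Bayesian inference, and the forecast and criterion values below are invented for illustration.

```python
import math

def committee(preds, bics):
    """Combine point forecasts with weights proportional to exp(-BIC/2),
    a common large-sample approximation to posterior model probabilities."""
    ws = [math.exp(-0.5 * (b - min(bics))) for b in bics]  # shift for stability
    total = sum(ws)
    ws = [w / total for w in ws]
    return sum(w * p for w, p in zip(ws, preds)), ws

# two hypothetical h+1 GHI forecasts (W/m^2) from an ARMA and an NN model,
# with invented in-sample BIC values
forecast, weights = committee([410.0, 430.0], [152.3, 150.1])
```

The combined forecast lies between the two members, pulled toward the model with the better (lower) criterion value.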
MDA based-approach for UML Models Complete Comparison
Chaouni, Samia Benabdellah; Mouline, Salma
2011-01-01
If a modeling task is distributed, it will frequently be necessary to integrate models developed by different team members. Problems occur in the model integration step, and particularly in the comparison phase of the integration. This issue has been discussed in several domains and for various kinds of models. However, previous approaches have not correctly handled the semantic comparison. In the current paper, we provide an MDA-based approach for comparing UML models. We develop a hybrid approach which takes into account syntactic, semantic and structural comparison aspects. For this purpose, we use a domain ontology as well as other resources such as dictionaries. We propose a decision support system which permits the user to validate (or not) the correspondences extracted in the comparison phase. For the implementation, we propose an extension of the generic correspondence metamodel AMW in order to transform UML models to the correspondence model.
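The hybrid scoring idea (syntactic + semantic + structural) can be sketched for a pair of model elements. The weights, the synonym table, and the neighbor-overlap measure below are illustrative assumptions, not the paper's calibrated comparison rules.

```python
from difflib import SequenceMatcher

def element_similarity(a, b, w_syn=0.5, w_sem=0.3, w_str=0.2, synonyms=()):
    """Blend syntactic, semantic, and structural similarity of two model
    elements given as {"name": str, "neighbors": list of names}."""
    a_name, b_name = a["name"].lower(), b["name"].lower()
    # syntactic: string similarity of element names
    syn = SequenceMatcher(None, a_name, b_name).ratio()
    # semantic: boost known synonym pairs (stand-in for an ontology lookup)
    sem = 1.0 if (a_name, b_name) in synonyms else syn
    # structural: Jaccard overlap of connected elements
    n_a, n_b = set(a["neighbors"]), set(b["neighbors"])
    strc = len(n_a & n_b) / len(n_a | n_b) if n_a | n_b else 1.0
    return w_syn * syn + w_sem * sem + w_str * strc

a = {"name": "Client", "neighbors": ["Order"]}
b = {"name": "Customer", "neighbors": ["Order"]}
score = element_similarity(a, b, synonyms={("client", "customer")})
```

A decision-support front end, as described above, would then present pairs above some threshold to the user for validation.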
A consortium approach to glass furnace modeling.
Energy Technology Data Exchange (ETDEWEB)
Chang, S.-L.; Golchert, B.; Petrick, M.
1999-04-20
Using computational fluid dynamics to model a glass furnace is a difficult task for any one glass company, laboratory, or university to accomplish. The task of building a computational model of the furnace requires knowledge and experience in modeling two dissimilar regimes (the combustion space and the liquid glass bath), along with the skill necessary to couple these two regimes. Also, a detailed set of experimental data is needed in order to evaluate the output of the code to ensure that the code is providing proper results. Since all these diverse skills are not present in any one research institution, a consortium was formed between Argonne National Laboratory, Purdue University, Mississippi State University, and five glass companies in order to marshal these skills into one three-year program. The objective of this program is to develop a fully coupled, validated simulation of a glass melting furnace that may be used by industry to optimize the performance of existing furnaces.
Mixture modeling approach to flow cytometry data.
Boedigheimer, Michael J; Ferbas, John
2008-05-01
Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.
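A one-dimensional, two-component EM fit conveys the flavor of a mixture-model alternative to manual gating. Real cytometry data are multivariate and the initialization below is a simplistic assumption; this is a sketch of the general technique, not the validated procedure of the paper.

```python
import math

def em_gmm_1d(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization."""
    mu = [min(xs), max(xs)]        # crude initialization at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: per-point component responsibilities
        resp = []
        for x in xs:
            ps = [pi[k] / math.sqrt(2 * math.pi * var[k])
                  * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: update mixing weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return pi, mu, var

# two well-separated toy "populations" standing in for fluorescence values
xs = [0.1, -0.2, 0.3, 0.0, 9.8, 10.1, 10.3, 10.0]
pi, mu, var = em_gmm_1d(xs)
```

Points are then assigned to components by their responsibilities, replacing an operator-drawn gate with an objective rule.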
BUSINESS MODEL IN ELECTRICITY INDUSTRY USING BUSINESS MODEL CANVAS APPROACH; THE CASE OF PT. XYZ
National Research Council Canada - National Science Library
Wicaksono, Achmad Arief; Syarief, Rizal; Suparno, Ono
2017-01-01
.... This study aims to identify company's business model using Business Model Canvas approach, formulate business development strategy alternatives, and determine the prioritized business development...
Dispersion modeling approaches for near road
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of app...
and Models: A Self-Similar Approach
Directory of Open Access Journals (Sweden)
José Antonio Belinchón
2013-01-01
equations (FEs) admit self-similar solutions. The methods employed allow us to obtain general results that are valid not only for the FRW metric, but also for all the Bianchi types as well as for the Kantowski-Sachs model (under the self-similarity hypothesis and the power-law hypothesis for the scale factors).
Nonperturbative approach to the modified statistical model
Energy Technology Data Exchange (ETDEWEB)
Magdy, M.A.; Bekmezci, A.; Sever, R. [Middle East Technical Univ., Ankara (Turkey)
1993-12-01
The modified form of the statistical model is used without making any perturbation. The mass spectra of the lowest S, P and D levels of the (Q{bar Q}) and the non-self-conjugate (Q{bar q}) mesons are studied with the Song-Lin potential. The authors' results are in good agreement with the experimental and theoretical findings.
System Behavior Models: A Survey of Approaches
2016-06-01
Modelling of air quality for Winter and Summer episodes in Switzerland. Final report
Energy Technology Data Exchange (ETDEWEB)
Andreani-Aksoyoglu, S.; Keller, J.; Barmpadimos, L.; Oderbolz, D.; Tinguely, M.; Prevot, A. [Paul Scherrer Institute (PSI), Laboratory of Atmospheric Chemistry, Villigen (Switzerland); Alfarra, R. [University of Manchester, Manchester (United Kingdom); Sandradewi, J. [Jisca Sandradewi, Hoexter (Germany)
2009-05-15
This final report, issued by the General Energy Research Department and its Laboratory of Atmospheric Chemistry at the Paul Scherrer Institute (PSI), presents the results obtained from modelling regional air quality for three episodes: January-February 2006, June 2006 and January 2007. The calculations focus on particulate matter concentrations, as well as on ozone levels in summer. The model results were compared with the aerosol data collected by an Aerosol Mass Spectrometer (AMS), which was operated during all three episodes, as well as with data from other air quality monitoring programs. The air quality models used in this study are described, and the results obtained for various types of locations (rural, city, high-altitude and near-motorway) are presented and discussed.
A moving approach for the Vector Hysteron Model
Energy Technology Data Exchange (ETDEWEB)
Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)
2016-04-01
A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. Using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, offering a real improvement with respect to the previous approach.
Integration models: multicultural and liberal approaches confronted
Janicki, Wojciech
2012-01-01
European societies have been shaped by their Christian past, an upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition, and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.
A model independent search for new physics in final states containing leptons at the D0 experiment
Energy Technology Data Exchange (ETDEWEB)
Piper, Joel Michael [Michigan State Univ., East Lansing, MI (United States)
2009-01-01
The standard model is known to be the low energy limit of a more general theory. Several consequences of the standard model point to a strong probability of new physics becoming experimentally visible in high energy collisions of a few TeV, resulting in high momentum objects. The specific signatures of these collisions are topics of much debate. Rather than choosing a specific signature, this analysis broadly searches the data, preferring breadth over sensitivity. In searching for new physics, several different approaches are used. These include the comparison of data with standard model background expectation in overall number of events, comparisons of distributions of many kinematic variables, and finally comparisons on the tails of distributions that sum the momenta of the objects in an event. With 1.07 fb^{-1} at the D0 experiment, we find no evidence of physics beyond the standard model. Several discrepancies from the standard model were found, but none of these provide a compelling case for new physics.
2013-11-26
... Review (3206-NEW); Model Notice of Final Internal Adverse Benefit Determination and Case Intake Form... collection comprises two forms: (1) Model Notice of Final Internal Adverse Benefit Determination, and (2... benefit determinations is accepting requests for external review from MSP enrollees. In addition...
ISM Approach to Model Offshore Outsourcing Risks
Directory of Open Access Journals (Sweden)
Sunand Kumar
2014-07-01
In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks which affect the performance of offshore outsourcing. To this effect, the authors have identified various risks through an extant review of the literature. From this information, an integrated model of the risks affecting offshore outsourcing is developed using interpretive structural modelling (ISM), and the structural relationships between these risks are modeled. Further, MICMAC analysis is performed to analyze the driving power and dependence of the risks, which shall be helpful to managers in identifying and classifying important criteria and in revealing the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
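As a concrete illustration of the MICMAC step, driving power is the row sum and dependence the column sum of the final reachability matrix produced by ISM. The 4-risk matrix below is invented for illustration; the paper's actual matrix covers more risks and is derived from expert judgment.

```python
# Hypothetical ISM final reachability matrix (entry [i][j] = 1 means
# risk i drives risk j, including transitive links and self-loops).
risks = ["political", "cultural", "compliance", "structural"]
R = [[1, 1, 1, 1],   # political drives every other risk
     [0, 1, 1, 1],   # cultural
     [0, 0, 1, 0],   # compliance
     [0, 0, 1, 1]]   # structural

driving = {r: sum(R[i]) for i, r in enumerate(risks)}
dependence = {r: sum(row[j] for row in R) for j, r in enumerate(risks)}

# MICMAC "driver" quadrant: high driving power, low dependence
drivers = [r for r in risks
           if driving[r] > len(risks) / 2 and dependence[r] <= len(risks) / 2]
```

With this toy matrix the driver quadrant contains the political and cultural risks, mirroring the paper's conclusion.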
Quantum Machine and SR Approach: a Unified Model
Garola, Claudio; Pykacz, Jaroslav; Sozzo, Sandro
2005-01-01
The Geneva-Brussels approach to quantum mechanics (QM) and the semantic realism (SR) nonstandard interpretation of QM exhibit some common features and some deep conceptual differences. We discuss in this paper two elementary models provided in the two approaches as intuitive supports to general reasonings and as a proof of consistency of general assumptions, and show that Aerts' quantum machine can be embodied into a macroscopic version of the microscopic SR model, overcoming the seeming incompatibility between the two models. This result provides some hints for the construction of a unified perspective in which the two approaches can be properly placed.
A generalized quarter car modelling approach with frame flexibility and other nonlocal effects
Indian Academy of Sciences (India)
HUSAIN KANCHWALA; ANINDYA CHATTERJEE
2017-07-01
Quarter-car models are popular, simple, unidirectional in kinematics and enable quicker computation than full-car models. However, they account neither for the three other wheels and their suspensions, nor for the frame's flexibility, mass distribution and damping. Here we propose a generalized quarter-car modelling approach, incorporating both the frame and the other wheels' ground contacts. Our approach is linear, uses Laplace transforms, involves vertical motions of key points of interest and has intermediate complexity with improved realism. Our model uses baseline suspension parameters and responses to step force inputs at suspension-attachment locations on the frame. Subsequently, new suspension parameters and unsprung mass compliance parameters can be incorporated, for which relevant formulas are given. The final expression for the transfer function, between ground displacement and body point response, is approximated using model order reduction. A simple Matlab code is provided that enables quick parametric studies. Finally, a parametric study and wheel hop analysis are performed for a realistic numerical example. Frequency and time domain responses obtained show clearly the effects of the other wheels, which are outside the scope of usual quarter-car models. The displacements obtained from our model are compared against those of the usual quarter-car model and show ways in which the quarter-car model's predictions include errors that can be reduced in our approach. In summary, our approach has intermediate complexity between that of a full-car model and a quarter-car model, and offers correspondingly intermediate detail and realism.
Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach
Energy Technology Data Exchange (ETDEWEB)
Liao, James C. [Univ. of California, Los Angeles, CA (United States)
2016-10-01
Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.
Control design approaches for nonlinear systems using multiple models
Institute of Scientific and Technical Information of China (English)
Junyong ZHAI; Shumin FEI; Feipeng DA
2007-01-01
It is difficult to realize control for some complex nonlinear systems operated in different operating regions. Based on developing local models for the different operating regions of the process, a novel algorithm using multiple models is proposed. It utilizes a dynamic model bank to establish multiple local models, whose membership functions are defined according to their respective regions. The nonlinear system is then approximated by a weighted combination of the local models. The stability of the nonlinear system is proven. Finally, simulations are given to demonstrate the validity of the proposed method.
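The weighted-combination step can be sketched as follows; the two local linear models, their operating-region centres and the Gaussian membership functions are invented for illustration, not taken from the paper.

```python
import math

def membership(x, centre, width=1.0):
    """Gaussian membership of operating point x in a local region."""
    return math.exp(-((x - centre) / width) ** 2)

# Hypothetical local linear models (slope, intercept, region centre),
# each valid near its own operating point.
local_models = [(1.0, 0.0, 0.0),
                (3.0, -8.0, 4.0)]

def multi_model(x):
    """Blend local model outputs with normalised membership weights."""
    ws = [membership(x, c) for _, _, c in local_models]
    ys = [a * x + b for a, b, _ in local_models]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

y0 = multi_model(0.0)   # first local model dominates here
y4 = multi_model(4.0)   # second local model dominates here
```

Near each region centre the blended output reduces to the corresponding local model, while between centres the weights interpolate smoothly.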
A modular approach to numerical human body modeling
Forbes, P.A.; Griotto, G.; Rooij, L. van
2007-01-01
The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body mod
A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING
ANTOULAS, AC; WILLEMS, JC
1993-01-01
The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both re
A market model for stochastic smile: a conditional density approach
Zilber, A.
2005-01-01
The purpose of this paper is to introduce a new approach that allows the construction of no-arbitrage market models for implied volatility surfaces (in other words, stochastic smile models). That is to say, the idea presented here allows us to model prices of liquidly traded vanilla options as separate
Thermoplasmonics modeling: A Green's function approach
Baffou, Guillaume; Quidant, Romain; Girard, Christian
2010-10-01
We extend the discrete dipole approximation (DDA) and the Green's dyadic tensor (GDT) methods, previously dedicated to all-optical simulations, to investigate the thermodynamics of illuminated plasmonic nanostructures. This extension is based on the use of the thermal Green's function and an original algorithm that we named Laplace matrix inversion. It allows for the computation of the steady-state temperature distribution throughout plasmonic systems. This hybrid photothermal numerical method is suited to investigate arbitrarily complex structures. It can take into account the presence of a dielectric planar substrate and is simple to implement in any DDA or GDT code. Using this numerical framework, different applications are discussed, such as thermal collective effects in nanoparticle assemblies, the influence of a substrate on the temperature distribution, and the heat generation in a plasmonic nanoantenna. This numerical approach appears particularly suited for new applications in physics, chemistry, and biology such as plasmon-induced nanochemistry and catalysis, nanofluidics, photothermal cancer therapy, or phase-transition control at the nanoscale.
Coupling approaches used in atmospheric entry models
Gritsevich, M. I.
2012-09-01
While a planet orbits the Sun, it is subject to impact by smaller objects, ranging from tiny dust particles and space debris to much larger asteroids and comets. Such collisions have taken place frequently over geological time and played an important role in the evolution of planets and the development of life on the Earth. Though the search for near-Earth objects addresses one of the main points of the Asteroid and Comet Hazard, one should not underestimate the useful information to be gleaned from smaller atmospheric encounters, known as meteors or fireballs. Not only do these events help determine the linkages between meteorites and their parent bodies; due to their relative regularity they provide a good statistical basis for analysis. For successful cases with found meteorites, the detailed atmospheric path record is an excellent tool to test and improve existing entry models, assuring the robustness of their implementation. There are many more important scientific questions meteoroids help us to answer, among them: Where do these objects come from, and what are their origins, physical properties and chemical composition? What are the shapes and bulk densities of the space objects which fully ablate in the atmosphere and do not reach the planetary surface? Which values are directly measured and which are initially assumed as input to various models? How to couple both fragmentation and ablation effects in the model, taking the real size distribution of fragments into account? How to specify and speed up the recovery of a recently fallen meteorite, without letting weathering affect the samples too much? How big is the pre-atmospheric projectile to terminal body ratio in terms of mass and volume? Which exact parameters besides the initial mass define this ratio? More generally, how does an entering object affect Earth's atmosphere and (if applicable) Earth's surface? How to predict these impact consequences based on atmospheric trajectory data? How to describe atmospheric entry
Applied Regression Modeling A Business Approach
Pardoe, Iain
2012-01-01
An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a
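The basic technique the book covers, fitting a response to a predictor by ordinary least squares, can be written in a few lines; the spend-versus-sales numbers below are made up for illustration.

```python
def ols(xs, ys):
    """Simple linear regression by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# hypothetical data: advertising spend (x) vs. sales (y)
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
slope, intercept = ols(x, y)
pred = intercept + slope * 6   # point prediction for a new predictor value
```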
Bayesian Approach to Neuro-Rough Models for Modelling HIV
Marwala, Tshilidzi
2007-01-01
This paper proposes a new neuro-rough model for modelling the risk of HIV from demographic data. The model is formulated using a Bayesian framework and trained using the Markov chain Monte Carlo method and the Metropolis criterion. When the model was tested to estimate the risk of HIV infection given the demographic data, it was found to give an accuracy of 62%, as opposed to 58% obtained from a Bayesian-formulated rough set model trained using the Markov chain Monte Carlo method and 62% obtained from a Bayesian-formulated multi-layered perceptron (MLP) model trained using hybrid Monte Carlo. The proposed model is able to combine the accuracy of the Bayesian MLP model and the transparency of the Bayesian rough set model.
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high resolution (0.5-meter) digital terrain model in GIS. To illustrate the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is of the order of 10^6 times shorter than that of the original highly...
2012-01-01
1572 Final Report, Sky Research, Inc., January 2012. ... This model-based approach has the desirable traits (1) that it permits the use of objective statistical criteria, like the Akaike Information Criterion
Propagating Linear Waves in Convectively Unstable Stellar Models: a Perturbative Approach
Papini, Emanuele; Birch, Aaron C
2013-01-01
Linear time-domain simulations of acoustic oscillations are unstable in the stellar convection zone. To overcome this problem it is customary to compute the oscillations of a stabilized background stellar model. The stabilization, however, affects the result. Here we propose to use a perturbative approach (running the simulation twice) to approximately recover the acoustic wave field, while preserving seismic reciprocity. To test the method we considered a 1D standard solar model. We found that the mode frequencies of the (unstable) standard solar model are well approximated by the perturbative approach to within 1 μHz for low-degree modes with frequencies near 3 mHz. We also show that the perturbative approach is appropriate for correcting rotational-frequency kernels. Finally, we comment that the method can be generalized to wave propagation in 3D magnetized stellar interiors because magnetic fields have stabilizing effects on convection.
Continuous Molecular Fields Approach Applied to Structure-Activity Modeling
Baskin, Igor I
2013-01-01
The Method of Continuous Molecular Fields is a universal approach to predict various properties of chemical compounds, in which molecules are represented by means of continuous fields (such as electrostatic, steric, electron density functions, etc). The essence of the proposed approach consists in performing statistical analysis of functional molecular data by means of joint application of kernel machine learning methods and special kernels which compare molecules by computing overlap integrals of their molecular fields. This approach is an alternative to traditional methods of building 3D structure-activity and structure-property models based on the use of fixed sets of molecular descriptors. The methodology of the approach is described in this chapter, followed by its application to building regression 3D-QSAR models and conducting virtual screening based on one-class classification models. The main directions of the further development of this approach are outlined at the end of the chapter.
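The overlap-integral kernel at the heart of the approach has a closed form when the fields are sums of Gaussians: for two atom-centred Gaussians exp(-a|r-A|^2) and exp(-b|r-B|^2), the 3D overlap is (pi/(a+b))^(3/2) exp(-ab/(a+b)|A-B|^2). The sketch below sums this over atom pairs; the coordinates and width exponents are invented, and a real continuous-molecular-fields kernel would use physically meaningful fields (electrostatic, steric, etc.).

```python
import math

def field_overlap(mol_a, mol_b):
    """Kernel = analytic overlap integral of two sums of 3D Gaussians.
    Each atom is (x, y, z, width exponent)."""
    k = 0.0
    for ax, ay, az, a in mol_a:
        for bx, by, bz, b in mol_b:
            d2 = (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
            k += (math.pi / (a + b)) ** 1.5 * math.exp(-a * b / (a + b) * d2)
    return k

# two identical hypothetical "molecules" of two atoms each
mol1 = [(0.0, 0.0, 0.0, 1.0), (1.5, 0.0, 0.0, 1.0)]
mol2 = [(0.0, 0.0, 0.0, 1.0), (1.5, 0.0, 0.0, 1.0)]
k11 = field_overlap(mol1, mol1)
k12 = field_overlap(mol1, mol2)   # equals k11 for identical molecules
```

Such a kernel can then be plugged into any kernel machine (ridge regression, SVM, one-class classifiers) in place of descriptor-based similarity.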
Monte Carlo path sampling approach to modeling aeolian sediment transport
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed, where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains feeds back on itself and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear, especially over complex landscapes. Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient
On modeling approach for embedded real-time software simulation testing
Institute of Scientific and Technical Information of China (English)
Yin Yongfeng; Liu Bin; Zhong Deming; Jiang Tongrain
2009-01-01
Modeling technology has been introduced into the software testing field. However, how to carry out test modeling effectively remains difficult. Based on a combination of simulation modeling technology and embedded real-time software testing methods, the process of simulation test modeling is studied first. Then, the supporting environment for simulation test modeling is put forward. Furthermore, an approach to embedded real-time software simulation test modeling is presented, covering the modeling of cross-linked equipment of the system under test (SUT), test cases, test scheduling, and testing system services. Finally, the formalized description and execution system of the test models are given, with which real-time, closed-loop, and automated system testing for embedded real-time software can be realized.
The initial and final state of SNe Ia from the single degenerate model
Institute of Scientific and Technical Information of China (English)
(author not listed)
2010-01-01
Although type Ia supernovae (SNe Ia) show their importance in many astrophysical fields, the nature of the progenitors of SNe Ia is still unclear. At present, the single degenerate (SD) model is presented as a very likely progenitor model. Following the comprehensive SD model developed by Meng & Yang (2010), we show the initial and final state of the progenitor systems of SNe Ia in an orbital period vs. secondary mass (log Pi, M2i) plane. Our results may explain the location of some supersoft X-ray sources and recurrent novae in the (log Pi, M2i) plane, and be helpful to judge whether an SD system is a potential progenitor system of SNe Ia, as well as to simulate the interaction between SN ejecta and its companion.
A forward modeling approach for interpreting impeller flow logs.
Parker, Alison H; West, L Jared; Odling, Noelle E; Bown, Richard T
2010-01-01
A rigorous and practical approach for interpretation of impeller flow log data to determine vertical variations in hydraulic conductivity is presented and applied to two well logs from a Chalk aquifer in England. Impeller flow logging involves measuring vertical flow speed in a pumped well and using changes in flow with depth to infer the locations and magnitudes of inflows into the well. However, the measured flow logs are typically noisy, which leads to spurious hydraulic conductivity values where simplistic interpretation approaches are applied. In this study, a new method for interpretation is presented, which first defines a series of physical models for hydraulic conductivity variation with depth and then fits the models to the data, using a regression technique. Some of the models will be rejected as they are physically unrealistic. The best model is then selected from the remaining models using a maximum likelihood approach. This balances model complexity against fit, for example, using Akaike's Information Criterion.
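The model-selection step can be illustrated with AIC for least-squares fits, where (up to an additive constant) AIC = n ln(RSS/n) + 2k for n data points and k parameters. The residual sums of squares below are invented to show the complexity penalty at work; they are not the paper's numbers.

```python
import math

def aic(rss, n, k):
    """Akaike's Information Criterion for a Gaussian least-squares fit
    (additive constants dropped)."""
    return n * math.log(rss / n) + 2 * k

# hypothetical fits of two conductivity profiles to one flow log:
# the complex model fits slightly better but pays for its parameters
n = 50
rss_simple, rss_complex = 4.0, 3.6
candidates = [("simple", aic(rss_simple, n, 2)),
              ("complex", aic(rss_complex, n, 6))]
best = min(candidates, key=lambda t: t[1])[0]
```

Here the marginal improvement in fit does not justify four extra parameters, so the simpler conductivity profile is selected.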
Energy Technology Data Exchange (ETDEWEB)
Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy
2008-09-01
Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model
A Networks Approach to Modeling Enzymatic Reactions.
Imhof, P
2016-01-01
Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved and the complex, chemical, and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes.
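The graph query the authors describe, identifying the energetically most favorable pathway, can be illustrated with a lowest-total-barrier search on a tiny transition network; the states, connectivity and barrier heights below are invented for illustration.

```python
import heapq

# Hypothetical transition network: edges carry barrier heights.
edges = {"reactant": [("I1", 5.0), ("I2", 12.0)],
         "I1": [("I3", 8.0)],
         "I2": [("product", 2.0)],
         "I3": [("product", 3.0)]}

def best_path(start, goal):
    """Dijkstra search for the path minimising the summed barriers."""
    heap = [(0.0, start, [start])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in edges.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

cost, path = best_path("reactant", "product")
```

Summing barriers is one possible scoring choice; ranking paths by their single highest barrier is another common criterion on such networks.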
Genetic Algorithm Approaches to Prebiotic Chemistry Modeling
Lohn, Jason; Colombano, Silvano
1997-01-01
We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
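A minimal GA of the kind described can be sketched as follows; here the "desired dynamical behavior" is replaced by a toy fitness that scores how closely a candidate reaction set matches a fixed target mask, purely for illustration (the paper's fitness evaluates simulated network dynamics).

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical "good" reaction set

def fitness(ind):
    """Toy fitness: agreement with the target mask (stand-in for
    scoring the reaction network's simulated dynamics)."""
    return sum(1 for a, b in zip(ind, TARGET) if a == b)

def evolve(pop_size=40, gens=60, pm=0.05):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]               # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))  # one-point crossover
            child = [g ^ (random.random() < pm)     # per-bit mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```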
Modeling Approaches for Describing Microbial Population Heterogeneity
DEFF Research Database (Denmark)
Lencastre Fernandes, Rita
Although microbial populations are typically described by averaged properties, individual cells present a certain degree of variability. Indeed, initially clonal microbial populations develop into heterogeneous populations, even when growing in a homogeneous environment. A heterogeneous microbial … to predict distributions of certain population properties including particle size, mass or volume, and molecular weight. Similarly, PBM allow for a mathematical description of distributed cell properties within microbial populations. Cell total protein content distributions (a measure of cell mass) have been … ethanol and biomass throughout the reactor. This work has proven that the integration of CFD and population balance models, for describing the growth of a microbial population in a spatially heterogeneous reactor, is feasible, and that valuable insight on the interplay between flow and the dynamics …
Hamiltonian approach to hybrid plasma models
Tronci, Cesare
2010-01-01
The Hamiltonian structures of several hybrid kinetic-fluid models are identified explicitly, upon considering collisionless Vlasov dynamics for the hot particles interacting with a bulk fluid. After presenting different pressure-coupling schemes for an ordinary fluid interacting with a hot gas, the paper extends the treatment to account for a fluid plasma interacting with an energetic ion species. Both current-coupling and pressure-coupling MHD schemes are treated extensively. In particular, pressure-coupling schemes are shown to require a transport-like term in the Vlasov kinetic equation, in order for the Hamiltonian structure to be preserved. The last part of the paper is devoted to studying the more general case of an energetic ion species interacting with a neutralizing electron background (hybrid Hall-MHD). Circulation laws and Casimir functionals are presented explicitly in each case.
Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2
Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.
2008-01-01
This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an unsegmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.
Final-year diagnostic radiography students' perception of role models within the profession.
Conway, Alinya; Lewis, Sarah; Robinson, John
2008-01-01
Within a clinical education setting, the value of role models and prescribed mentors can be seen as an important influence in shaping the student's future as a diagnostic radiographer. A study was undertaken to create a new understanding of how diagnostic radiography students perceive role models and professional behavior in the workforce. The study aimed to determine the impact of clinical education in determining modeling expectations, role model identification and attributes, and the integration of academic education and "hands-on" clinical practice in preparing diagnostic radiography students to enter the workplace. Thirteen final-year (third-year) diagnostic radiography students completed an hour-long interview regarding their experiences and perceptions of role models while on clinical placement. The key concepts that emerged illustrated that students gravitate toward radiographers who enjoy sharing practical experiences with students and are good communicators. Unique to diagnostic radiography, students made distinctions about the presence of role models in private versus public service delivery. This study gives insight to clinical educators in diagnostic radiography and wider allied health into how students perceive role models, interact with preceptors, and combine real-life experiences with formal learning.
Modeling of phase equilibria with CPA using the homomorph approach
DEFF Research Database (Denmark)
Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios
2011-01-01
For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...
A Constructive Neural-Network Approach to Modeling Psychological Development
Shultz, Thomas R.
2012-01-01
This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…
Modular Modelling and Simulation Approach - Applied to Refrigeration Systems
DEFF Research Database (Denmark)
Sørensen, Kresten Kjær; Stoustrup, Jakob
2008-01-01
This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...
Pattern-based approach for logical traffic isolation forensic modelling
CSIR Research Space (South Africa)
Dlamini, I
2009-08-01
Full Text Available The use of design patterns usually changes the approach of software design and makes software development relatively easy. This paper extends work on a forensic model for Logical Traffic Isolation (LTI) based on Differentiated Services (Diff...
A semantic-web approach for modeling computing infrastructures
M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat
2013-01-01
This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology independent descriptions of computing infrastructures, including the physical resources as w
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
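The contrast between the two estimators can be made concrete on a toy binomial problem: the maximum likelihood estimate of a DCS probability is a single number, while a gridded posterior under a flat prior yields a credible interval directly. The data and the flat prior are assumptions for illustration; the paper's models and likelihoods are far richer.

```python
import math

# Toy data: outcomes of n dives, 1 = decompression sickness observed.
outcomes = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
k, n = sum(outcomes), len(outcomes)

# Maximum likelihood: a single best point estimate of the DCS probability p.
p_mle = k / n

# Bayesian: treat p as a random variable with a flat prior; evaluate the
# (unnormalized) posterior on a grid and normalize it into a distribution.
grid = [i / 1000 for i in range(1, 1000)]
post = [p**k * (1 - p)**(n - k) for p in grid]
total = sum(post)
post = [w / total for w in post]

# 95% central credible interval read off the posterior CDF.
cdf, lo, hi = 0.0, None, None
for p, w in zip(grid, post):
    cdf += w
    if lo is None and cdf >= 0.025:
        lo = p
    if hi is None and cdf >= 0.975:
        hi = p
print(p_mle, (lo, hi))
```

The credible interval makes the "probability that the parameter lies in a range" statement directly, which is the point the abstract draws about multi-peaked or complex likelihoods.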
ICFD modeling of final settlers - developing consistent and effective simulation model structures
DEFF Research Database (Denmark)
Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham
… analysis exercises is kept to a minimum (4). Consequently, detailed information related to, for instance, design boundaries may be ignored, and their effects may only be accounted for through calibration of model parameters used as catch-alls, and by arbitrary amendments of structural uncertainty … of (6). Further details are shown in (5). Results and discussion: Factor screening is carried out by imposing statistically designed moderate (under-loaded) and extreme (under-, critical and overloaded) operational boundary conditions on the 2-D CFD SST model (8). Results obtained …
Modelling road accidents: An approach using structural time series
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
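The smallest-AIC selection step can be sketched as follows. The monthly series is synthetic (not the Malaysian accident data), and the two candidate models, a constant level and a level plus linear slope fitted by closed-form OLS, are deliberately simple stand-ins for the structural time series components; Gaussian AIC up to a constant is n ln(RSS/n) + 2k.

```python
import math

# Monthly accident counts (synthetic illustration, not the paper's data).
y = [210, 215, 222, 230, 238, 244, 251, 259, 266, 272, 280, 287]
n = len(y)
t = list(range(n))

def aic(rss, k):
    # Gaussian AIC up to an additive constant: n ln(RSS/n) + 2k
    return n * math.log(rss / n) + 2 * k

# Model 1: constant level only.
mean_y = sum(y) / n
rss1 = sum((v - mean_y) ** 2 for v in y)

# Model 2: level + linear slope (closed-form OLS estimates).
tb = sum(t) / n
slope = sum((ti - tb) * (vi - mean_y) for ti, vi in zip(t, y)) / sum((ti - tb) ** 2 for ti in t)
intercept = mean_y - slope * tb
rss2 = sum((vi - (intercept + slope * ti)) ** 2 for ti, vi in zip(t, y))

# Pick the specification with the smallest AIC, as in the paper's stepwise search.
best = min([("level", aic(rss1, 1)), ("level+trend", aic(rss2, 2))], key=lambda m: m[1])
print(best[0])
```

On this trending series the slope term earns its extra parameter; a full treatment would add seasonal components and compare prediction error variance as well.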
Functional state modelling approach validation for yeast and bacteria cultivations
Roeva, Olympia; Pencheva, Tania
2014-01-01
In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This fact is a strong structure model verification of the functional state modelling theory not only for a set of yeast cultivations, but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach.
Cater, Christopher; Xiao, Xinran; Goldberg, Robert K.; Kohlman, Lee W.
2015-01-01
A combined experimental and analytical approach was performed for characterizing and modeling triaxially braided composites with a modified subcell modeling strategy. Tensile coupon tests were conducted on a [0deg/60deg/-60deg] braided composite at angles of 0deg, 30deg, 45deg, 60deg and 90deg relative to the axial tow of the braid. It was found that measured coupon strength varied significantly with the angle of the applied load and each coupon direction exhibited unique final failures. The subcell modeling approach implemented into the finite element software LS-DYNA was used to simulate the various tensile coupon test angles. The modeling approach was successful in predicting both the coupon strength and reported failure mode for the 0deg, 30deg and 60deg loading directions. The model over-predicted the strength in the 90deg direction; however, the experimental results show a strong influence of free edge effects on damage initiation and failure. In the absence of these local free edge effects, the subcell modeling approach showed promise as a viable and computationally efficient analysis tool for triaxially braided composite structures. Future work will focus on validation of the approach for predicting the impact response of the braided composite against flat panel impact tests.
Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato
Directory of Open Access Journals (Sweden)
Maarten L. A. T. M. Hertog
2017-04-01
Full Text Available In this study, the aim is to develop a population model based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. “Savior” was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of their physiological maturation. This model was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model based approach to the optimization of harvest date and harvest frequency with regard to economic value of the crop as such is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages meeting the stringent retail demands for homogeneous high quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model based harvest optimisation approach can be easily transferred to other fruit and vegetable crops improving homogeneity of the postharvest product streams.
Molecular Modeling Approach to Cardiovascular Disease Targetting
Directory of Open Access Journals (Sweden)
Chandra Sekhar Akula,
2010-05-01
Full Text Available Cardiovascular disease, including stroke, is the leading cause of illness and death in India. A number of studies have shown that inflammation of blood vessels is one of the major factors that increase the incidence of heart diseases, including arteriosclerosis (clogging of the arteries), stroke and myocardial infarction (heart attack). Studies have associated obesity and other components of the metabolic syndrome, cardiovascular risk factors, with low-grade inflammation. Furthermore, some findings suggest that drugs commonly prescribed to lower cholesterol also reduce this inflammation, suggesting an additional beneficial effect of the statins. The recent development of angiotensin II (AngII) receptor antagonists has significantly improved the tolerability profile of this group of drugs while maintaining high clinical efficacy. ACE2 is expressed predominantly in the endothelium and in renal tubular epithelium, and it thus may be an important new cardiovascular target. In the present study we modeled the structure of ACE and designed an inhibitor using ArgusLab, and the validation of the drug molecule was done based on QSAR properties and CAChe for this protein through CADD.
Virtuous organization: A structural equation modeling approach
Directory of Open Access Journals (Sweden)
Majid Zamahani
2013-02-01
Full Text Available For years, the idea of virtue was unfavorable among researchers and virtues were traditionally considered as culture-specific, relativistic and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have recently been considered seriously among organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resource, structure and processes, care for community and virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees at Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and a total of 211 valid responses were received. Our results reveal that all five variables have positive and significant impacts on virtuous organization. Among the five variables, organizational culture has the largest direct impact (0.80) and human resource has the largest total impact (0.844) on virtuous organization.
Data Analysis A Model Comparison Approach, Second Edition
Judd, Charles M; Ryan, Carey S
2008-01-01
This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T
Energy Technology Data Exchange (ETDEWEB)
Wanne, Toivo; Johansson, Erik; Potyondy, David [Saanio and Riekkola Oy, Helsinki (Finland)
2004-02-01
SKB is planning to perform a large-scale pillar stability experiment called APSE (Aespoe Pillar Stability Experiment) at the Aespoe HRL. The study is focused on understanding and controlling progressive rock failure in hard crystalline rock and damage caused by high stresses. The elastic thermo-mechanical modeling was carried out in three dimensions, because of the complex test geometry and in-situ stress tensor, using the finite-difference modeling software FLAC3D. Cracking and damage formation were modeled in the area of interest (the pillar between two large-scale holes) in two dimensions using the Particle Flow Code (PFC), which is based on particle mechanics. FLAC and PFC were coupled to minimize the computer resources and the computing time. According to the modeling, the initial temperature rises from 15 deg C to about 65 deg C in the pillar area during the heating period of 120 days. The rising temperature induces thermal-expansion stresses in the pillar area, and after 120 days of heating the stresses have increased by about 33%, from the excavation-induced maximum stress of 150 MPa to 200 MPa at the end of the heating period. The FLAC3D model identified regions where the crack initiation stress had been exceeded; these extended about two meters down the hole wall and can be considered the areas where damage may occur during the in-situ test. When the other hole is pressurized with a 0.8 MPa confining pressure, about 5 MPa more stress is needed to damage the rock than without confining pressure, which makes the damaged area somewhat smaller. High compressive stresses, in addition to some tensile stresses, might induce AE (acoustic emission) activity in the upper part of the hole from the very beginning of the test; these are thus potential areas where AE activity may be detected. Acoustic emissions will be monitored during test execution. The 2D coupled PFC-FLAC modeling indicated that
A Model and Questionnaire of Language Identity in Iran: A Structural Equation Modelling Approach
Khatib, Mohammad; Rezaei, Saeed
2013-01-01
This study consisted of three main phases including the development of a hypothesised model of language identity in Iran, developing and validating a questionnaire based on this model and finally testing the model based on the questionnaire data. In the first phase of this research, a hypothesised model of language identity in Iran was developed…
Energy Technology Data Exchange (ETDEWEB)
Balmain, Allan [University of California, San Francisco; Song, Ihn Young [University of California, San Francisco
2013-05-15
The ultimate goal of this project is to identify the combinations of genetic variants that confer an individual's susceptibility to the effects of low dose (0.1 Gy) gamma-radiation, in particular with regard to tumor development. In contrast to the known effects of high dose radiation in cancer induction, the responses to low dose radiation (defined as 0.1 Gy or less) are much less well understood, and have been proposed to involve a protective anti-tumor effect in some in vivo scientific models. These conflicting results confound attempts to develop predictive models of the risk of exposure to low dose radiation, particularly when combined with the strong effects of inherited genetic variants on both radiation effects and cancer susceptibility. We have used a Systems Genetics approach in mice that combines genetic background analysis with responses to low and high dose radiation, in order to develop insights that will allow us to reconcile these disparate observations. Using this comprehensive approach we have analyzed normal tissue gene expression (in this case the skin and thymus), together with the changes that take place in this gene expression architecture a) in response to low or high- dose radiation and b) during tumor development. Additionally, we have demonstrated that using our expression analysis approach in our genetically heterogeneous/defined radiation-induced tumor mouse models can uniquely identify genes and pathways relevant to human T-ALL, and uncover interactions between common genetic variants of genes which may lead to tumor susceptibility.
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equations based Bayesian approach, and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates per approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch, leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences.
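A naive version of the least squares approach the authors benchmark can be sketched for the immigration-death model: simulate one stochastic realization with the Gillespie algorithm, then grid-search the deterministic mean solution x(t) = a/b + (x0 - a/b) e^(-bt) for the best-fitting (a, b). All parameter values, grids, and settings below are illustrative assumptions, not the paper's setup.

```python
import math
import random

random.seed(0)

# Immigration-death process: X -> X+1 at rate a, X -> X-1 at rate b*X.
def gillespie(a, b, x0, t_end, dt_grid):
    """One exact stochastic realization, recorded on a regular time grid."""
    t, x, out = 0.0, x0, []
    grid = [i * dt_grid for i in range(int(t_end / dt_grid) + 1)]
    gi = 0
    while gi < len(grid):
        total = a + b * x
        t_next = t + random.expovariate(total)   # time of the next event
        while gi < len(grid) and grid[gi] <= t_next:
            out.append((grid[gi], x))            # state holds until t_next
            gi += 1
        t = t_next
        if random.random() < a / total:          # immigration vs death
            x += 1
        else:
            x -= 1
    return out

def ode_mean(a, b, x0, t):
    # Deterministic mean of the process: dx/dt = a - b*x.
    return a / b + (x0 - a / b) * math.exp(-b * t)

data = gillespie(a=10.0, b=0.5, x0=0, t_end=20.0, dt_grid=0.5)

# Naive least squares: grid search over (a, b) against the ODE mean.
best = min(
    ((a / 10, b / 10) for a in range(50, 151, 5) for b in range(2, 11)),
    key=lambda p: sum((x - ode_mean(p[0], p[1], 0, t)) ** 2 for t, x in data),
)
print(best)
```

Because a single realization fluctuates around the mean, the recovered (a, b) will scatter around the true (10.0, 0.5); repeating this over many realizations, as the paper does with 1000, is what makes the estimator comparison meaningful.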
Modelling and Generating Ajax Applications: A Model-Driven Approach
Gharavi, V.; Mesbah, A.; Van Deursen, A.
2008-01-01
Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008 AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction betw
A novel approach to modeling and diagnosing the cardiovascular system
Energy Technology Data Exchange (ETDEWEB)
Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)
1995-07-01
A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
Råde, Anders
2014-01-01
This study concerns different final thesis models in the research on teacher education in Europe and their orientation towards the academy and the teaching profession. In scientific journals, 33 articles support the occurrence of three models: the portfolio model, with a mainly teaching-professional orientation; the thesis model, with a mainly…
Mathematical models for therapeutic approaches to control HIV disease transmission
Roy, Priti Kumar
2015-01-01
The book discusses different therapeutic approaches based on different mathematical models to control the HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate the deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence and also tries to seek viewpoints of the same issue from different angles with various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models on HIV/AIDS. It also guides research scientists, working in the periphery of mathematical modeling, and helps them to explore a hypothetical method by examining its consequences in the form of a mathematical modelling and making some scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...
Asteroid modeling for testing spacecraft approach and landing.
Martin, Iain; Parkes, Steve; Dunstan, Martin; Rowell, Nick
2014-01-01
Spacecraft exploration of asteroids presents autonomous-navigation challenges that can be aided by virtual models to test and develop guidance and hazard-avoidance systems. Researchers have extended and applied graphics techniques to create high-resolution asteroid models to simulate cameras and other spacecraft sensors approaching and descending toward asteroids. A scalable model structure with evenly spaced vertices simplifies terrain modeling, avoids distortion at the poles, and enables triangle-strip definition for efficient rendering. To create the base asteroid models, this approach uses two-phase Poisson faulting and Perlin noise. It creates realistic asteroid surfaces by adding both crater models adapted from lunar terrain simulation and multiresolution boulders. The researchers evaluated the virtual asteroids by comparing them with real asteroid images, examining the slope distributions, and applying a surface-relative feature-tracking algorithm to the models.
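The two-phase faulting idea can be sketched in one dimension, with a ring of vertices standing in for the spherical vertex mesh (a simplifying assumption; counts and amplitudes are invented): each random fault raises one arc and lowers the complementary arc, and a second pass of many small faults adds fine detail on top of the large-scale relief.

```python
import random

random.seed(3)

# 1-D ring of vertices standing in for the evenly spaced spherical mesh.
N = 256
height = [0.0] * N

def fault_pass(n_faults, amplitude):
    """Poisson faulting: each fault lifts a random arc and sinks the rest."""
    for _ in range(n_faults):
        start = random.randrange(N)
        length = random.randrange(N // 4, 3 * N // 4)
        for i in range(start, start + length):
            height[i % N] += amplitude           # uplifted side of the fault
        for i in range(start + length, start + N):
            height[i % N] -= amplitude           # downthrown side
 
fault_pass(200, 1.0)   # phase 1: large-scale relief
fault_pass(800, 0.1)   # phase 2: fine surface detail
print(min(height), max(height))
```

On the real models this runs over sphere vertices with great-circle faults, and Perlin noise, crater templates, and multiresolution boulders are layered on afterwards.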
Heuristic approaches to models and modeling in systems biology
MacLeod, Miles
2016-01-01
Prediction and control sufficient for reliable medical and other interventions are prominent aims of modeling in systems biology. The short-term attainment of these goals has played a strong role in projecting the importance and value of the field. In this paper I identify the standard models must m
A Model Management Approach for Co-Simulation Model Evaluation
Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno
2011-01-01
Simulating formal models is a common means for validating the correctness of the system design and reduce the time-to-market. In most of the embedded control system design, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software
Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M
2012-03-01
With increasing popularity, growth curve modeling is more and more often considered the first choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of the Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.
Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin
2017-02-01
The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed.
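A multinomial logit ultimately feeds per-alternative utilities through a softmax; a nonlinear predictor simply changes how those utilities are built. The sketch below uses hypothetical coefficients and a quadratic term to illustrate a non-monotonic relationship of the kind the paper models; it is not the authors' fitted model:

```python
import math

def softmax(utilities):
    """Convert per-alternative utilities into choice probabilities."""
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

def severity_probs(traffic_volume, intercepts, beta_linear, beta_quadratic):
    """MNL probabilities over (fatal, injury, PDO) with a nonlinear
    (quadratic) predictor in traffic volume -- illustrative only."""
    utilities = [
        a + b1 * traffic_volume + b2 * traffic_volume ** 2
        for a, b1, b2 in zip(intercepts, beta_linear, beta_quadratic)
    ]
    return softmax(utilities)

# Hypothetical coefficients; PDO is the reference with zero coefficients.
p = severity_probs(
    traffic_volume=0.8,
    intercepts=[-2.0, -0.5, 0.0],
    beta_linear=[0.4, 0.3, 0.0],
    beta_quadratic=[-0.2, -0.1, 0.0],
)
print([round(x, 3) for x in p])  # probabilities summing to 1
```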
Search for the standard model Higgs boson in tau lepton final states
Energy Technology Data Exchange (ETDEWEB)
Abazov, Victor Mukhamedovich; et al.
2012-08-01
We present a search for the standard model Higgs boson in final states with an electron or muon and a hadronically decaying tau lepton, in association with zero, one, or two or more jets, using data corresponding to an integrated luminosity of up to 7.3 fb{sup -1} collected with the D0 detector at the Fermilab Tevatron collider. The analysis is sensitive to Higgs boson production via gluon-gluon fusion, associated vector boson production, and vector boson fusion, and to Higgs boson decays to tau lepton pairs or W boson pairs. We set 95% C.L. upper limits on the cross section times branching ratio relative to the standard model prediction: the observed (expected) limits are 14 (22) at a Higgs boson mass of 115 GeV and 7.7 (6.8) at 165 GeV.
Final Results on Modeling the Spectrum of Ammonia 2ν_2 and ν_4 States
Yu, Shanshan; Pearson, John; Amano, Takayoshi; Pirali, Olivier
2016-06-01
At this symposium in 2013, we reported our preliminary results on modeling the spectrum of the ammonia 2ν_2 and ν_4 states (see Paper TB09 in 2013). This presentation reports the final results of our extensive experimental measurements and data analysis for the 2ν_2 and ν_4 inversion-rotation and vibrational transitions. We measured 159 new transition frequencies with microwave precision and assigned 1680 new ones from existing Fourier transform spectra recorded at Synchrotron SOLEIL. The newly assigned data significantly expand the range of assigned quantum numbers. Combined with all the previously published high-resolution data, the 2ν_2 and ν_4 states are reproduced to 1.3σ using a global model. We will discuss the types of transitions included in our global analysis, and fit statistics for data sets from the individual experimental works.
A New Detection Approach Based on the Maximum Entropy Model
Institute of Scientific and Technical Information of China (English)
DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua
2006-01-01
The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data representation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results are reported. The receiver operating characteristic (ROC) curve analysis approach was utilized to analyze the experimental results. The analysis shows that the proposed approach is comparable to those based on support vector machines (SVM) and outperforms those based on C4.5 and naive Bayes classifiers. According to the overall evaluation result, the proposed approach is slightly better than those based on SVM.
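The minimal-entropy partitioning step used for attribute discretization can be illustrated on a toy numeric feature; the connection durations and labels below are invented for illustration, not drawn from KDD CUP 1999:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_binary_split(values, labels):
    """Find the cut point on a numeric attribute that minimizes the
    weighted entropy of the two resulting partitions."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_cut, best_h = None, float("inf")
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # cannot cut between identical attribute values
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        h = (len(left) * entropy(left) + len(right) * entropy(right)) / n
        if h < best_h:
            best_h = h
            best_cut = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best_cut, best_h

# Toy connection-duration feature with normal/attack labels:
durations = [0.1, 0.2, 0.3, 5.0, 6.0, 7.0]
labels = ["normal", "normal", "normal", "attack", "attack", "attack"]
cut, h = best_binary_split(durations, labels)
print(cut, h)  # a perfectly separating cut has weighted entropy 0.0
```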
Fuaad, Norain Farhana Ahmad; Nopiah, Zulkifli Mohd; Tawil, Norgainy Mohd; Othman, Haliza; Asshaari, Izamarlina; Osman, Mohd Hanif; Ismail, Nur Arzilah
2014-06-01
In engineering studies and research, mathematics is one of the main elements used to express physical, chemical and engineering laws. It is therefore essential for engineering students to have strong knowledge of the fundamentals of mathematics in order to apply that knowledge to real-life issues. However, previous results of the Mathematics Pre-Test show that engineering students lack fundamental knowledge in certain topics in mathematics. Due to this, apart from making improvements in the methods of teaching and learning, studies on the construction of questions (items) should also be emphasized. The purpose of this study is to assist lecturers in the process of item development, to monitor the separation of items based on Bloom's Taxonomy, and to measure the reliability of the items themselves using the Rasch Measurement Model as a tool. Using the Rasch Measurement Model, the final exam questions of Engineering Mathematics II (Linear Algebra) for semester 2, session 2012/2013 were analysed, and the results provide details on the extent to which the content of each item provides useful information about students' ability. This study reveals that the items used in the Engineering Mathematics II (Linear Algebra) final exam are well constructed, but the separation of the items raises concern and needs further attention, as there is a big gap between items at several levels of Bloom's cognitive skill.
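Underlying a Rasch analysis is a simple logistic relation between person ability and item difficulty on a common logit scale; the sketch below uses hypothetical logit values, not estimates from the exam analysed in the study:

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch model: probability that a person with the given ability
    answers an item of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Hypothetical logit-scale difficulties from easy (-1.5) to hard (+2.0),
# evaluated for a student of average ability (0.0).
difficulties = [-1.5, 0.0, 2.0]
probs = [rasch_probability(0.0, b) for b in difficulties]
print([round(p, 3) for p in probs])  # [0.818, 0.5, 0.119]
```

A large gap between adjacent item difficulties on this scale, the "item separation" concern raised in the abstract, means no item discriminates well among students whose abilities fall inside the gap.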
LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL
Directory of Open Access Journals (Sweden)
Eser ÖRDEM
2013-06-01
Abstract: This study proposes the Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings. The model was created by the researcher as a result of studies carried out in applied linguistics (Hill, 2000) and memory (Murphy, 2004). Since one of the main problems of foreign language learners is retrieving what they have learnt, Lewis (1998) and Wray (2008) assume that the lexical approach offers an alternative explanation for solving this problem. Unlike the grammar translation method, this approach supports the idea that language is composed not of general grammar but of strings of words and word combinations. In addition, the lexical approach posits that each word has its own grammatical properties, and therefore each dictionary is a potential grammar book. Foreign language learners can learn to use collocations, a basic principle of the Lexical Approach, and thus increase their level of retention. The concept of the retrieval clue (Murphy, 2004) is considered the main element in this collocational study model, because the main purpose of the model is to boost fluency and help learners gain native-like accuracy while producing the target language. Keywords: Foreign language teaching, lexical approach, collocations, retrieval clue
A Model-Driven Approach for Telecommunications Network Services Definition
Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.
The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure support for collaborative work and for checking properties on models.
Transferring Multi-Scale Approaches from 3d City Modeling to Ifc-Based Tunnel Modeling
Borrmann, A.; Kolbe, T. H.; Donaubauer, A.; Steuer, H.; Jubierre, J. R.
2013-09-01
A multi-scale representation of the built environment is required to provide information with the adequate level of detail (LoD) for different use cases and objectives. This applies not only to the visualization of city and building models, but in particular to their use in the context of planning and analysis tasks. While in the field of Geographic Information Systems, the handling of multi-scale representations is well established and understood, no formal approaches for incorporating multi-scale methods exist in the field of Building Information Modeling (BIM) so far. However, these concepts are much needed to better support highly dynamic planning processes that make use of very rough information about the facility under design in the early stages and provide increasingly detailed and fine-grained information in later stages. To meet these demands, this paper presents a comprehensive concept for incorporating multi-scale representations with infrastructural building information models, with a particular focus on the representation of shield tunnels. Based on a detailed analysis of the data modeling methods used in CityGML for capturing multiscale representations and the requirements present in the context of infrastructure planning projects, we discuss potential extensions to the BIM data model Industry Foundation Classes (IFC). Particular emphasis is put on providing means for preserving the consistency of the representation across the different Levels-of-Detail (LoD). To this end we make use of a procedural geometry description which makes it possible to define explicit dependencies between geometric entities on different LoDs. The modification of an object on a coarse level consequently results in an automated update of all dependent objects on the finer levels. Finally we discuss the transformation of the IFC-based multi-scale tunnel model into a CityGML compliant tunnel representation.
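The consistency-preservation idea, where editing a coarse-LoD object triggers an automated update of all dependent finer-LoD geometry through procedural dependencies, can be sketched with a minimal dependency graph; the class and parameter names below are invented for illustration and are not IFC or CityGML entities:

```python
class TunnelElement:
    """A model element whose geometry at a finer LoD is derived
    procedurally from the parameters of a coarser-LoD parent."""

    def __init__(self, name, derive=None, parent=None):
        self.name = name
        self.derive = derive        # function: parent params -> own params
        self.children = []
        self.params = {}
        if parent is not None:
            parent.children.append(self)

    def update(self, params):
        """Set this element's parameters and propagate the change
        to all dependent elements on finer LoDs."""
        self.params = params
        for child in self.children:
            child.update(child.derive(params))

# LoD1: tunnel axis with an outer radius; LoD2: lining ring derived from it.
axis = TunnelElement("axis_lod1")
ring = TunnelElement(
    "lining_lod2",
    derive=lambda p: {"inner_radius": p["outer_radius"] - p["lining_thickness"]},
    parent=axis,
)
axis.update({"outer_radius": 5.0, "lining_thickness": 0.5})
print(ring.params)  # {'inner_radius': 4.5}
```

Editing the coarse axis parameters once re-derives every dependent fine-level object, which is the consistency mechanism the paper argues for.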
Numerical modeling of axi-symmetrical cold forging process by ``Pseudo Inverse Approach''
Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.
2011-05-01
The incremental approach is widely used for forging process modeling; it gives good strain and stress estimation, but it is time consuming. A fast Inverse Approach (IA) has been developed for axi-symmetric cold forging modeling [1-2]. This approach makes maximum use of the knowledge of the final part's shape, and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA has proved very useful for tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimation because the loading history is neglected. A new approach called the "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al. [3] for sheet forming modeling; it keeps the IA's advantages but gives good stress estimation by taking the loading history into consideration. Our aim in this paper is to adapt the PIA to cold forging modeling. The main developments in the PIA are summarized as follows: a few intermediate configurations are generated for the given tool positions to account for the deformation history; the strain increment is calculated by the inverse method between the previous and current configurations. An incremental algorithm of plastic integration is used in the PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for cold forging process modeling.
Child human model development: a hybrid validation approach
Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.
2008-01-01
The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a
Modeling Alaska boreal forests with a controlled trend surface approach
Mo Zhou; Jingjing Liang
2012-01-01
An approach of Controlled Trend Surface was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...
Teaching Service Modelling to a Mixed Class: An Integrated Approach
Deng, Jeremiah D.; Purvis, Martin K.
2015-01-01
Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…
Gray-box modelling approach for description of storage tunnel
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Jacob
1999-01-01
The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems...
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of models
Hybrid continuum-atomistic approach to model electrokinetics in nanofluidics
Energy Technology Data Exchange (ETDEWEB)
Amani, Ehsan, E-mail: eamani@aut.ac.ir; Movahed, Saeid, E-mail: smovahed@aut.ac.ir
2016-06-07
In this study, for the first time, a hybrid continuum-atomistic model is proposed for electrokinetics (electroosmosis and electrophoresis) through nanochannels. Although continuum-based methods are accurate enough to model fluid flow and electric potential in nanofluidics (in dimensions larger than 4 nm), the ionic concentration in nanochannels is too low for the continuum assumption to be valid. On the other hand, non-continuum approaches are too time-consuming and are therefore limited, in practice, to simple geometries. Here, to obtain an efficient hybrid continuum-atomistic method of modelling the electrokinetics in nanochannels, the fluid flow and electric potential are computed based on the continuum hypothesis, coupled with an atomistic Lagrangian approach for the ionic transport. The results of the model are compared to and validated by the results of the molecular dynamics technique for a couple of case studies. Then, the influences of bulk ionic concentration, external electric field, size of nanochannel, and surface electric charge on the electrokinetic flow and ionic mass transfer are investigated carefully. The hybrid continuum-atomistic method is a promising approach to model more complicated geometries and investigate more details of the electrokinetics in nanofluidics. - Highlights: • A hybrid continuum-atomistic model is proposed for electrokinetics in nanochannels. • The model is validated by molecular dynamics. • This is a promising approach to model more complicated geometries and physics.
Modelling diversity in building occupant behaviour: a novel statistical approach
DEFF Research Database (Denmark)
Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm
2016-01-01
We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...
Asteroid fragmentation approaches for modeling atmospheric energy deposition
Register, Paul J.; Mathias, Donovan L.; Wheeler, Lorien F.
2017-03-01
During asteroid entry, energy is deposited in the atmosphere through thermal ablation and momentum-loss due to aerodynamic drag. Analytic models of asteroid entry and breakup physics are used to compute the energy deposition, which can then be compared against measured light curves and used to estimate ground damage due to airburst events. This work assesses and compares energy deposition results from four existing approaches to asteroid breakup modeling, and presents a new model that combines key elements of those approaches. The existing approaches considered include a liquid drop or "pancake" model where the object is treated as a single deforming body, and a set of discrete fragment models where the object breaks progressively into individual fragments. The new model incorporates both independent fragments and aggregate debris clouds to represent a broader range of fragmentation behaviors and reproduce more detailed light curve features. All five models are used to estimate the energy deposition rate versus altitude for the Chelyabinsk meteor impact, and results are compared with an observationally derived energy deposition curve. Comparisons show that four of the five approaches are able to match the overall observed energy deposition profile, but the features of the combined model are needed to better replicate both the primary and secondary peaks of the Chelyabinsk curve.
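The single-body ("liquid drop" or "pancake") end of the modeling spectrum compared in the paper reduces to a small ODE system: aerodynamic deceleration plus thermal ablation, with the energy deposited per unit altitude read off from the kinetic-energy loss. The sketch below is a crude Euler integration with illustrative parameter values, not the paper's fitted Chelyabinsk inputs:

```python
import math

def entry_energy_deposition(m0=1.2e7, v0=19e3, h0=60e3, angle_deg=18.0,
                            density=3300.0, cd=1.0, ch=0.1, q=8e6, dt=0.01):
    """Single-body entry: drag deceleration and ablation, Euler-stepped.
    Returns (altitude, dE/dh) pairs; all parameter values illustrative."""
    rho0, scale_h = 1.225, 7160.0            # exponential atmosphere
    theta = math.radians(angle_deg)
    m, v, h = m0, v0, h0
    deposition = []
    while h > 0 and m > 0.01 * m0 and v > 500.0:
        rho_a = rho0 * math.exp(-h / scale_h)
        radius = (3.0 * m / (4.0 * math.pi * density)) ** (1.0 / 3.0)
        area = math.pi * radius ** 2
        e_before = 0.5 * m * v * v
        dv = -cd * rho_a * area * v * v / (2.0 * m) * dt   # drag
        dm = -ch * rho_a * area * v ** 3 / (2.0 * q) * dt  # ablation
        dh = -v * math.sin(theta) * dt
        m, v, h = m + dm, v + dv, h + dh
        deposition.append((h, (e_before - 0.5 * m * v * v) / -dh))
    return deposition

profile = entry_energy_deposition()
peak_alt, peak_rate = max(profile, key=lambda p: p[1])
print(f"peak deposition near {peak_alt / 1000:.0f} km altitude")
```

Fragmentation models differ from this sketch precisely in how `area` evolves once the dynamic pressure exceeds the body's strength, which is what produces the secondary peaks discussed for Chelyabinsk.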
A Bayesian Approach for Analyzing Longitudinal Structural Equation Models
Song, Xin-Yuan; Lu, Zhao-Hua; Hser, Yih-Ing; Lee, Sik-Yum
2011-01-01
This article considers a Bayesian approach for analyzing a longitudinal 2-level nonlinear structural equation model with covariates, and mixed continuous and ordered categorical variables. The first-level model is formulated for measures taken at each time point nested within individuals for investigating their characteristics that are dynamically…
An Empirical-Mathematical Modelling Approach to Upper Secondary Physics
Angell, Carl; Kind, Per Morten; Henriksen, Ellen K.; Guttersrud, Oystein
2008-01-01
In this paper we describe a teaching approach focusing on modelling in physics, emphasizing scientific reasoning based on empirical data and using the notion of multiple representations of physical phenomena as a framework. We describe modelling activities from a project (PHYS 21) and relate some experiences from implementation of the modelling…
An Alternative Approach for Nonlinear Latent Variable Models
Mooijaart, Ab; Bentler, Peter M.
2010-01-01
In the last decades there has been an increasing interest in nonlinear latent variable models. Since the seminal paper of Kenny and Judd, several methods have been proposed for dealing with these kinds of models. This article introduces an alternative approach. The methodology involves fitting some third-order moments in addition to the means and…
Energy Technology Data Exchange (ETDEWEB)
Breitschopf, Barbara [Fraunhofer-Institut fuer System- und Innovationsforschung (ISI), Karlsruhe (Germany); Nathani, Carsten; Resch, Gustav
2011-11-15
full picture of the impacts of RE deployment on the total economy, covering all economic activities such as production, services and consumption (industries, households). To get the number of additional jobs caused by RE deployment, these studies compare a situation without RE (baseline or counterfactual) to a situation with strong RE deployment. In a second step, we characterize the studies by, inter alia, their scope, activities and impacts, and show the relevant positive and negative effects that are included in gross or net impact assessment studies. The effects are briefly described in Table 0-1. While gross studies mainly include the positive effects listed here, net studies in general include both positive and negative effects. Third, we distinguish between methodological approaches for assessing impacts. We observe that the more effects are incorporated in an approach, the more data are needed, the more complex and demanding the methodological approach becomes, and the more the impacts capture effects of and in the whole economy, representing net impacts. A simple approach requires few data and allows answering simple questions concerning the impact on the RE industry, representing gross impacts. We identify six main approaches, three for gross and three for net impacts. They are depicted in Figure 0-2. The methodological approaches are characterized by the effects they capture, the complexity of the model and the additional data requirements (besides data on RE investments, capacities and generation), as well as by their depicted impacts reflecting economic comprehensiveness. A detailed overview of the diverse studies in table form is given in the Annex to this report. Finally, we suggest elaborating guidelines for the simple EF approach, the gross IO-modelling approach and the net IO-modelling approach. The first approach enables policy makers to do a quick assessment of gross effects, while the second is a more sophisticated approach for gross effects.
The third approach builds on the gross IO
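The gross IO-modelling approach discussed above rests on the Leontief quantity model: extra final demand is translated into gross output via the inverse of (I - A), and output into jobs via employment intensities. The sketch below uses a made-up three-sector coefficient matrix and employment intensities, not data from any of the surveyed studies:

```python
import numpy as np

# Illustrative 3-sector economy: technical coefficients A[i, j] = input
# from sector i needed per unit output of sector j (invented numbers).
A = np.array([
    [0.10, 0.20, 0.05],   # manufacturing
    [0.15, 0.10, 0.10],   # services
    [0.05, 0.05, 0.02],   # energy
])
employment_per_output = np.array([4.0, 6.0, 2.0])  # jobs per unit output

def gross_employment_effect(final_demand):
    """Leontief quantity model: x = (I - A)^-1 d, then jobs = e . x."""
    x = np.linalg.solve(np.eye(3) - A, final_demand)
    return employment_per_output @ x

# Additional final demand from a hypothetical RE investment, by sector:
extra_demand = np.array([10.0, 3.0, 1.0])
jobs = gross_employment_effect(extra_demand)
print(round(jobs, 1))
```

A net assessment would additionally subtract the jobs displaced in the counterfactual (e.g. conventional generation and crowded-out consumption), which is why it needs a far richer model than this gross calculation.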
Energy Technology Data Exchange (ETDEWEB)
Strout, Michelle [Colorado State Univ., Fort Collins, CO (United States)
2015-08-15
Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable orthogonal specification of many implementation details such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication-heavy portions of many scientific simulations. The ability to orthogonally manipulate the implementation for such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.
A multilevel approach to modeling of porous bioceramics
Mikushina, Valentina A.; Sidorenko, Yury N.
2015-10-01
The paper is devoted to a discussion of the principles of multiscale modelling of heterogeneous materials. The specificity of the approach considered is the use of a geometrical model of the composite's representative volume, which must be generated taking the material's reinforcement structure into account. Within the framework of such a model, different physical processes which influence the effective mechanical properties of the composite may be considered, in particular the process of damage accumulation. It is shown that this approach can be used to predict the composite's macroscopic ultimate strength. As an example, the particular problem of studying the mechanical properties of a biocomposite representing a porous ceramic matrix filled with cortical bone tissue is discussed.
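For a two-phase composite such as the bone-filled porous ceramic described, the simplest homogenization estimates are the classical Voigt and Reuss bounds, which bracket any effective modulus obtained from a detailed representative-volume simulation. The moduli and volume fraction below are hypothetical, not measured values from the paper:

```python
def voigt_modulus(e1, e2, v1):
    """Upper (Voigt, iso-strain) bound on the effective Young's modulus."""
    return v1 * e1 + (1 - v1) * e2

def reuss_modulus(e1, e2, v1):
    """Lower (Reuss, iso-stress) bound on the effective Young's modulus."""
    return 1.0 / (v1 / e1 + (1 - v1) / e2)

# Hypothetical moduli in GPa: ceramic matrix 120, cortical bone filler 18,
# with 30% of the volume occupied by bone-filled pores.
upper = voigt_modulus(120.0, 18.0, 0.7)
lower = reuss_modulus(120.0, 18.0, 0.7)
print(lower, upper)  # any homogenized estimate should fall between these
```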
Gray-box modelling approach for description of storage tunnel
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Jacob
1999-01-01
The model in the present paper provides on-line information on overflow volumes, pumping capacities, and remaining storage capacities. A linear overflow relation is found, differing significantly from the traditional deterministic modeling approach. The linearity of the formulas is explained by the inertia of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control.
A study of multidimensional modeling approaches for data warehouse
Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani
2016-08-01
A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art review of multidimensional modeling design.
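The multidimensional (star schema) design at the heart of such conceptual models can be made concrete with a minimal fact-plus-dimensions layout; the tables, columns and figures below are invented for illustration:

```python
import sqlite3

# A minimal star schema: one fact table keyed to two dimension tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_time    (time_id INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (time_id INTEGER, product_id INTEGER, amount REAL);
    INSERT INTO dim_time VALUES (1, 2015), (2, 2016);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'music');
    INSERT INTO fact_sales VALUES (1, 1, 100.0), (1, 2, 50.0), (2, 1, 70.0);
""")

# A typical decision-support query: roll the measure up along dimensions.
rows = con.execute("""
    SELECT t.year, p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_time t    ON f.time_id = t.time_id
    JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY t.year, p.category
    ORDER BY t.year, p.category
""").fetchall()
print(rows)  # [(2015, 'books', 100.0), (2015, 'music', 50.0), (2016, 'books', 70.0)]
```

The conceptual-design question the survey addresses is exactly which facts, dimensions and hierarchies such a schema should contain, before any physical table is created.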
Meta-analysis a structural equation modeling approach
Cheung, Mike W-L
2015-01-01
Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Energy Technology Data Exchange (ETDEWEB)
Miller, Gregory H. [Univ. of California, Davis, CA (United States); Forest, Gregory [Univ. of California, Davis, CA (United States)
2014-05-01
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
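The microscopic level described, a system of stochastic differential equations, can be illustrated with a one-dimensional Euler-Maruyama integration of a single spring-like connector. This is a drastically simplified stand-in for the constrained bead-rod system in the report, with arbitrary parameters:

```python
import math
import random

def dumbbell_trajectory(steps=200_000, dt=1e-3, relaxation=1.0, seed=42):
    """Euler-Maruyama time-stepping of the 1-D Ornstein-Uhlenbeck SDE
    dq = -q/(2*relaxation) dt + dW, returning the time-averaged <q^2>."""
    rng = random.Random(seed)
    q = 0.0
    second_moment = 0.0
    for _ in range(steps):
        q += -(q / (2.0 * relaxation)) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        second_moment += q * q
    return second_moment / steps

est = dumbbell_trajectory()
print(est)  # time-averaged <q^2>; the stationary value equals `relaxation`
```

In the actual multiscale scheme, ensembles of such trajectories (or the equivalent Fokker-Planck moments) supply the polymeric stress that the continuum-level finite volume solver consumes.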
New approach to determine common weights in DEA efficiency evaluation model
Institute of Scientific and Technical Information of China (English)
Feng Yang; Chenchen Yang; Liang Liang; Shaofu Du
2010-01-01
Data envelopment analysis (DEA) is a mathematical programming approach to appraise the relative efficiencies of peer decision-making units (DMUs), and it is widely used in ranking DMUs. However, almost all DEA-related ranking approaches are based on self-evaluation efficiencies. In other words, each DMU chooses the weights it prefers most, so the resulting efficiencies are not suitable for use as ranking criteria. Therefore this paper proposes a new approach to determine a bundle of common weights in the DEA efficiency evaluation model by introducing a multi-objective integer program. The paper also gives the solving process of this multi-objective integer program, and the solution is proven to be a Pareto efficient solution. The solving process ensures that the obtained common weight bundle is acceptable to a great number of DMUs. Finally a numerical example is given to demonstrate the approach.
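For contrast with the common-weights proposal, the standard self-evaluation CCR efficiencies that the paper criticizes can be computed with one small linear program per DMU (multiplier form). The data below are a textbook-style toy example, not the paper's numerical example:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 DMUs, 2 inputs (columns of X), 1 identical output.
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [5.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])

def ccr_efficiency(k):
    """Self-evaluated CCR efficiency of DMU k (multiplier form):
    max u.y_k  s.t.  v.x_k = 1,  u.y_j - v.x_j <= 0,  u, v >= 0."""
    n_in, n_out = X.shape[1], Y.shape[1]
    # Decision variables: [v (input weights), u (output weights)].
    c = np.concatenate([np.zeros(n_in), -Y[k]])     # linprog minimizes
    A_ub = np.hstack([-X, Y])                       # u.y_j - v.x_j <= 0
    b_ub = np.zeros(X.shape[0])
    A_eq = np.concatenate([X[k], np.zeros(n_out)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_in + n_out))
    return -res.fun

scores = [round(ccr_efficiency(k), 3) for k in range(4)]
print(scores)  # each self-evaluated score lies in (0, 1]
```

Because each DMU here picks its own most favorable (v, u), two DMUs tie at 1.0 and cannot be ranked against each other, which is precisely the deficiency a common-weight bundle is meant to remove.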
Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B; Robinson, Terry E; Khamassi, Mehdi
2014-02-01
Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity, have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in computational
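The Model-Free component discussed here is driven by reward-prediction errors of the kind phasic dopamine is classically thought to report; a minimal tabular TD(0) sketch over one CS-to-US sequence (timing and parameters invented, not the paper's model) shows how value propagates back from the food reward to the lever:

```python
def td_learning(rewards, alpha=0.1, gamma=0.9, episodes=200):
    """Tabular TD(0) over one fixed CS -> US sequence: each update is
    driven by a reward-prediction error (RPE)."""
    n = len(rewards)
    values = [0.0] * n
    for _ in range(episodes):
        for t in range(n):
            next_value = values[t + 1] if t + 1 < n else 0.0
            delta = rewards[t] + gamma * next_value - values[t]  # RPE
            values[t] += alpha * delta
    return values

# Lever (CS) presented at t = 0 and 1; food (US) delivered at t = 2.
values = td_learning([0.0, 0.0, 1.0])
print([round(v, 2) for v in values])  # [0.81, 0.9, 1.0]
```

The paper's contribution is that such a system alone cannot reproduce goal-tracking; only its combination with a Model-Based system, plus factored stimulus representations, captures the sign-tracker/goal-tracker split.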
Directory of Open Access Journals (Sweden)
Florian Lesaint
2014-02-01
Full Text Available Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and the associated dopamine activity has questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in
Mirabolghasemi, M.; Prodanovic, M.; DiCarlo, D. A.
2014-12-01
Filtration is relevant to many disciplines, from colloid transport in environmental engineering to formation damage in petroleum engineering. In this study we compare the results of novel numerical modeling of the filtration phenomenon at the pore scale with complementary experimental observations at the laboratory scale, and discuss how the results of the comparison can be used to improve macroscale filtration models for different porous media. The water suspension contains glass beads of 200 micron diameter and flows through a packing of 1 mm diameter glass beads; the main filtration mechanism is thus straining and jamming of particles. The numerical model simulates the flow of suspension through a realistic 3D structure of an imaged, disordered sphere pack, which acts as the filter medium. Particle capture through size exclusion and jamming is modeled via a coupled Discrete Element Method (DEM) and Computational Fluid Dynamics (CFD) approach. The coupled CFD-DEM approach is capable of modeling the majority of particle-particle, particle-wall, and particle-fluid interactions. Note that most traditional approaches require spherical particles in both the suspension and the filtration medium. We adapted the interface between the pore space and the spherical grains to be represented as a triangulated surface, which allows extensions to any imaged media. The numerical and experimental results show that the filtration coefficient of the sphere pack is a function of the flow rate and concentration of the suspension, even for constant total particle flow rate. An increase in the suspension flow rate results in a decrease in the filtration coefficient, which suggests that the hydrodynamic drag force plays the key role in hindering particle capture in random sphere packs. Further, similar simulations of suspension flow through a sandstone sample, which has a tighter pore space, show that the filtration coefficient remains almost constant at different suspension flow rates. This
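The filtration coefficient discussed in the abstract is conventionally defined through the exponential depth-filtration law C_out = C_in exp(-lambda L). A minimal sketch of estimating it from breakthrough concentrations; the numbers below are made-up illustrative values, not the study's data:

```python
import math

def filtration_coefficient(c_in, c_out, length):
    """Filtration coefficient (1/m) from inlet and outlet particle
    concentrations, assuming the classical depth-filtration law
    C_out = C_in * exp(-lambda * L)."""
    return -math.log(c_out / c_in) / length

# Hypothetical example: 40% of particles captured across a 0.3 m pack.
lam = filtration_coefficient(c_in=1.0, c_out=0.6, length=0.3)
```

Under this law a flow-rate-dependent coefficient, as reported for the sphere pack, means the exponent itself changes with hydrodynamic conditions.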
Metamodelling Approach and Software Tools for Physical Modelling and Simulation
Directory of Open Access Journals (Sweden)
Vitaliy Mezhuyev
2015-02-01
Full Text Available In computer science, the metamodelling approach is becoming increasingly popular for the development of software systems. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is carried out. This analysis reveals the invariant physical structures that we propose to use as the basic abstractions of the physical metamodel: a system of geometrical objects that makes it possible to build the spatial structure of a physical model and to set a distribution of physical properties over it. Different mathematical methods can then be applied to such a geometry of distributed physical properties. To validate the proposed metamodelling approach, we consider the developed prototypes of software tools.
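The basic abstraction the paper proposes - geometrical objects carrying distributions of physical properties - can be sketched as a tiny metamodel. All class and property names here are illustrative assumptions, not taken from the authors' tools:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

@dataclass
class GeometricObject:
    """A spatial primitive of the physical metamodel, carrying
    distributed physical properties as functions of position
    (1D extent here for brevity)."""
    name: str
    bounds: Tuple[float, float]
    properties: Dict[str, Callable[[float], float]] = field(default_factory=dict)

    def assign(self, prop, distribution):
        """Attach a distributed physical property, e.g. temperature(x)."""
        self.properties[prop] = distribution

    def sample(self, prop, x):
        lo, hi = self.bounds
        assert lo <= x <= hi, "point outside the object"
        return self.properties[prop](x)

rod = GeometricObject("rod", (0.0, 1.0))
rod.assign("temperature", lambda x: 300.0 + 100.0 * x)  # linear profile
```

Because the property is a function of position rather than a single number, different numerical methods can later discretize the same distribution.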
Social learning in Models and Cases - an Interdisciplinary Approach
Buhl, Johannes; De Cian, Enrica; Carrara, Samuel; Monetti, Silvia; Berg, Holger
2016-04-01
Our paper follows an interdisciplinary understanding of social learning. We contribute to the literature on social learning in transition research by bridging case-oriented research and modelling-oriented transition research. We start by describing selected theories of social learning in innovation, diffusion and transition research, and present theoretical understandings of social learning in techno-economic and agent-based modelling. We then elaborate on empirical research on social learning in transition case studies, identifying and synthesizing its key dimensions. In the following, we bridge between the more formal, generalising modelling approaches to social learning processes and the more descriptive, individualising case study approaches by translating the case study analysis into a visual guide to the functional forms of social learning typically identified in the cases. We then illustrate how such functional forms of social learning can be varied in integrated assessment models. We conclude by drawing the lessons learned from the interdisciplinary approach - methodologically and empirically.
Learning the Task Management Space of an Aircraft Approach Model
Krall, Joseph; Menzies, Tim; Davies, Misty
2014-01-01
Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.
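The Pareto frontiers that GALE discovers are sets of non-dominated candidates. A generic non-domination filter (minimization in every objective) gives the flavour; this is a textbook sketch, not GALE itself:

```python
def pareto_front(points):
    """Return the non-dominated points, minimizing every objective.
    q dominates p if q is no worse than p in all objectives and the
    two tuples differ somewhere."""
    front = []
    for p in points:
        dominated = any(
            q != p and all(q[i] <= p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Two-objective example: (4, 4) is dominated by (2, 2).
front = pareto_front([(1, 5), (2, 2), (5, 1), (4, 4)])
```

GALE's contribution is finding such frontiers with far fewer model evaluations than this brute-force comparison implies.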
Building enterprise reuse program--A model-based approach
Institute of Scientific and Technical Information of China (English)
梅宏; 杨芙清
2002-01-01
Reuse is viewed as a realistically effective approach to solving the software crisis. For an organization that wants to build a reuse program, technical and non-technical issues must be considered in parallel. In this paper, a model-based approach to building a systematic reuse program is presented. Component-based reuse is currently the dominant approach to software reuse, and in this approach building the right reusable component model is the first important step. In order to achieve systematic reuse, a set of component models should be built from different perspectives. Each of these models gives a specific view of the components so as to satisfy the different needs of the different people involved in the enterprise reuse program. Some component models for reuse already exist from technical perspectives, but less attention has been paid to reusable components from a non-technical view, especially from the view of process and management. In our approach, a reusable component model--the FLP model for reusable components--is introduced. This model describes components along three dimensions (Form, Level, and Presentation) and views components and their relationships from the perspective of process and management. It determines the sphere of reusable components, the points in the development process at which components are reused, and the means needed to present components in terms of abstraction level, logical granularity and presentation media. Being the basis on which management and technical decisions are made, our model is used as the kernel model to initialize and normalize a systematic enterprise reuse program.
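The three-dimensional FLP classification can be sketched as a simple data structure. The enumeration members below are illustrative placeholders, not the paper's actual taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class Form(Enum):            # what kind of artefact the component is
    DOCUMENT = 1
    DESIGN = 2
    CODE = 3

class Level(Enum):           # abstraction level at which it is reused
    REQUIREMENTS = 1
    ARCHITECTURE = 2
    IMPLEMENTATION = 3

class Presentation(Enum):    # medium through which it is delivered
    TEXT = 1
    DIAGRAM = 2
    BINARY = 3

@dataclass
class ReusableComponent:
    """One entry in a component repository, classified along the
    three FLP dimensions."""
    name: str
    form: Form
    level: Level
    presentation: Presentation

c = ReusableComponent("logging-facade", Form.CODE,
                      Level.IMPLEMENTATION, Presentation.BINARY)
```

Classifying every repository entry along the three axes is what lets process and management decisions (when to reuse, how to deliver) be made uniformly.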
Fractal modeling of natural fracture networks. Final report, June 1994--June 1995
Energy Technology Data Exchange (ETDEWEB)
Ferer, M.V.; Dean, B.H.; Mick, C.
1996-04-01
Recovery from naturally fractured, tight-gas reservoirs is controlled by the fracture network. Reliable characterization of the actual fracture network in the reservoir is severely limited. The location and orientation of fractures intersecting the borehole can be determined, but the length of these fractures cannot be unambiguously determined. Fracture networks can be determined for outcrops, but there is little reason to believe that the network in the reservoir should be identical because of the differences in stresses and history. Because of the lack of detailed information about the actual fracture network, modeling methods must represent the porosity and permeability associated with the fracture network as accurately as possible with very little a priori information. Three rather different types of approaches have been used: (1) dual porosity simulations; (2) 'stochastic' modeling of fracture networks; and (3) fractal modeling of fracture networks. Stochastic models which assume a variety of probability distributions of fracture characteristics have been used with some success in modeling fracture networks. The advantage of these stochastic models over the dual porosity simulations is that real fracture heterogeneities are included in the modeling process. In this paper the authors will (1) present a fractal analysis of the MWX site, using the box-counting procedure; (2) review evidence testing the fractal nature of fracture distributions and discuss the advantages of using their fractal analysis over a stochastic analysis; and (3) present an efficient algorithm for producing self-similar fracture networks which mimic the real MWX outcrop fracture network.
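The box-counting procedure mentioned for the MWX analysis estimates a fractal dimension by counting the boxes N(s) occupied by the pattern at decreasing box sizes s and fitting log N(s) against log s. A generic sketch (not the authors' code), sanity-checked on a straight line, whose dimension is 1:

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the box-counting dimension of a 2D point set:
    count occupied boxes N(s) for each box size s, then fit
    log N(s) ~ -D log s and return D."""
    counts = []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# A densely sampled line segment should recover D close to 1.
pts = [(t, t) for t in np.linspace(0.0, 1.0, 2000, endpoint=False)]
D = box_counting_dimension(pts, [0.5, 0.25, 0.125, 0.0625])
```

Applied to a digitized fracture-trace map, the same counting loop yields the dimension used to parameterize the self-similar network generator.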
Current approaches to model extracellular electrical neural microstimulation
Directory of Open Access Journals (Sweden)
Sébastien eJoucla
2014-02-01
Full Text Available Nowadays, high-density microelectrode arrays provide unprecedented possibilities to precisely activate spatially well-controlled central nervous system (CNS) areas. However, this requires optimizing stimulating devices, which in turn requires a good understanding of the effects of microstimulation on cells and tissues. In this context, modeling approaches provide flexible ways to predict the outcome of electrical stimulation in terms of CNS activation. In this paper, we present state-of-the-art modeling methods with sufficient detail to allow the reader to rapidly build numerical models of neuronal extracellular microstimulation. These include (1) the computation of the electrical potential field created by the stimulation in the tissue, and (2) the response of a target neuron to this field. Two main approaches are described. First, we describe the classical hybrid approach that combines finite element modeling of the potential field with the calculation of the neuron's response in a cable equation framework (compartmentalized neuron models). Then, we present a whole finite element approach that allows the simultaneous calculation of the extracellular and intracellular potentials by representing the neuronal membrane with a thin-film approximation. This approach was previously introduced in the context of neural recording, but had never been implemented to determine the effect of extracellular stimulation on the neural response at a sub-compartment level. Here, we show with an example that the latter modeling scheme can reveal important sub-compartment behavior of the neural membrane that cannot be resolved using the hybrid approach. The goal of this paper is also to describe in detail the practical implementation of these methods to allow the reader to easily build new models using standard software packages. These modeling paradigms, depending on the situation, should help build more efficient high-density neural prostheses for CNS rehabilitation.
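The hybrid approach can be illustrated with the simplest possible field model: a monopolar point source in a homogeneous isotropic medium, V = I/(4*pi*sigma*r), sampled at the compartments of a straight axon. The geometry and current below are made-up illustrative values, and the analytic point-source formula stands in for the finite element computation described in the paper:

```python
import numpy as np

def point_source_potential(current, sigma, electrode, points):
    """Extracellular potential of a monopolar current source in a
    homogeneous, isotropic medium: V = I / (4*pi*sigma*r)."""
    r = np.linalg.norm(points - electrode, axis=1)
    return current / (4.0 * np.pi * sigma * r)

# Straight axon: 11 compartments 100 um apart; a cathodic 10 uA
# electrode placed 50 um above the middle compartment.
comps = np.column_stack([np.arange(11) * 100e-6,
                         np.zeros(11),
                         np.zeros(11)])
Ve = point_source_potential(-10e-6, 0.3,
                            np.array([500e-6, 50e-6, 0.0]), comps)

# Second spatial difference of Ve along the cable (the 'activating
# function'); positive values predict depolarization in the hybrid
# cable-equation framework.
f_act = Ve[:-2] - 2.0 * Ve[1:-1] + Ve[2:]
```

In the full hybrid scheme, Ve would come from a finite element solver and drive a compartmentalized neuron model instead of this second-difference shortcut.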
Application of the Interface Approach in Quantum Ising Models
Sen, Parongama
1997-01-01
We investigate phase transitions in the Ising model and the ANNNI model in transverse field using the interface approach. The exact result of the Ising chain in a transverse field is reproduced. We find that apart from the interfacial energy, there are two other response functions which show simple scaling behaviour. For the ANNNI model in a transverse field, the phase diagram can be fully studied in the region where a ferromagnetic to paramagnetic phase transition occurs. The other region ca...
A Variable Flow Modelling Approach To Military End Strength Planning
2016-12-01
A System Dynamics (SD) model is ideal for strategic analysis as it encompasses all the behaviours of a system and how the behaviours are influenced... Wang describes Markov chain theory as a mathematical tool used to investigate dynamic behaviours of a system in discrete time... A Variable Flow Modelling Approach to Military End Strength Planning, by Benjamin K. Grossi, December 2016. Thesis Advisor: Kenneth Doerr; Second Reader: ...
New Approaches in Reusable Booster System Life Cycle Cost Modeling
2012-01-01
Lean NPD practices; lean production and operations practices; Supply Chain Operations Reference (SCOR) model best practices... New Approaches in Reusable Booster System Life Cycle Cost Modeling, Edgar Zapata, National Aeronautics and Space Administration, Kennedy Space Center... Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC
Final Report: Natural State Models of The Geysers Geothermal System, Sonoma County, California
Energy Technology Data Exchange (ETDEWEB)
T. H. Brikowski; D. L. Norton; D. D. Blackwell
2001-12-31
Final project report of the natural state modeling effort for The Geysers geothermal field, California. Initial models examined the liquid-dominated state of the system, based on geologic constraints and calibrated to match observed whole rock delta-O18 isotope alteration. These models demonstrated that the early system was of generally low permeability (around 10^-12 m^2), with good hydraulic connectivity at depth (along the intrusive contact) and an intact caprock. Later effort in the project was directed at development of a two-phase, supercritical flow simulation package (EOS1sc) to accompany the Tough2 flow simulator. Geysers models made using this package show that "simmering", or the transient migration of vapor bubbles through the hydrothermal system, is the dominant transition state as the system progresses to vapor-dominated. Such a system is highly variable in space and time, making the rock record more difficult to interpret, since pressure-temperature indicators likely reflect only local, short duration conditions.
Study of GMSB models with photon final states using the ATLAS detector
Energy Technology Data Exchange (ETDEWEB)
Terwort, Mark
2009-11-30
Models with gauge mediated supersymmetry breaking (GMSB) provide a possible mechanism to mediate supersymmetry breaking to the electroweak scale. In these models the lightest supersymmetric particle is the gravitino, while the next-to-lightest supersymmetric particle is either the lightest neutralino or a slepton. In the former case, final states with large missing transverse energy from the gravitinos, multiple jets and two hard photons are expected in pp-collisions at the LHC. Depending on the lifetime of the neutralino, the photons might not point back to the interaction vertex, which requires dedicated search strategies. Additionally, this feature can be used to measure the neutralino lifetime using either the timing information from the electromagnetic calorimeter or the reconstructed photon direction. Together with measurements of kinematic endpoints in invariant mass distributions, the lifetime can be used as input for fits of the GMSB model and for the determination of the underlying parameters. The signal selection and the discovery potential for GMSB models with photons in the final state are discussed using simulated data of the ATLAS detector. In addition, the measurement of supersymmetric particle masses and of the neutralino lifetime, as well as the results of the global GMSB fits, are presented. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Figueiredo, Marco Antonio Gaya de; Ricardo Izidoro [Universidade do Estado, Rio de Janeiro, RJ (Brazil). Inst. de Quimica. Dept. de Operacoes e Projetos Industriais]. E-mail: mgaya@uerj.br; Costa, Joao Manuel da [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas e Desenvolvimento Leopoldo Americo Miguez de Mello
2003-07-01
This paper presents the approach adopted by the Department of Industrial Operations and Projects of the Institute of Chemistry in the graduation course Design II, in which a group of chemical engineering students develops a project to complete their degree. The distinctive feature of our proposal is the integration of the previous chemical engineering courses (e.g. heat transfer, unit operations, etc.) and the exposure of the student to an activity structured similarly to those found in engineering companies, with wide application in the oil and gas industries, especially in petroleum processing and refining. (author)
Energy Technology Data Exchange (ETDEWEB)
Wessel, Silvia [Ballard Materials Products; Harvey, David [Ballard Materials Products
2013-06-28
The durability of PEM fuel cells is a primary requirement for large scale commercialization of these power systems in transportation and stationary market applications that target operational lifetimes of 5,000 hours and 40,000 hours by 2015, respectively. Key degradation modes contributing to fuel cell lifetime limitations have been largely associated with the platinum-based cathode catalyst layer. Furthermore, as fuel cells are driven to low cost materials and lower catalyst loadings in order to meet the cost targets for commercialization, the catalyst durability has become even more important. While over the past few years significant progress has been made in identifying the underlying causes of fuel cell degradation and key parameters that greatly influence the degradation rates, many gaps with respect to knowledge of the driving mechanisms still exist; in particular, the acceleration of the mechanisms due to different structural compositions and under different fuel cell conditions remains an area not well understood. The focus of this project was to address catalyst durability by using a dual path approach that coupled an extensive range of experimental analysis and testing with a multi-scale modeling approach. With this, the major technical areas/issues of catalyst and catalyst layer performance and durability that were addressed are: 1. Catalyst and catalyst layer degradation mechanisms (Pt dissolution, agglomeration, Pt loss, e.g. Pt in the membrane, carbon oxidation and/or corrosion). a. Driving force for the different degradation mechanisms. b. Relationships between MEA performance, catalyst and catalyst layer degradation and operational conditions, catalyst layer composition, and structure. 2. Materials properties a. Changes in catalyst, catalyst layer, and MEA materials properties due to degradation. 3. Catalyst performance a. Relationships between catalyst structural changes and performance. b. Stability of the three-phase boundary and its effect on
Institute of Scientific and Technical Information of China (English)
Deng Guanqian; Qiu Jing; Liu Guanjun; Lv Kehong
2013-01-01
Associating environmental stresses (ESs) with built-in test (BIT) output is an important means of diagnosing intermittent faults (IFs). To address the low association efficiency of the traditional time stress measurement device (TSMD), an association model is built, and a novel approach is given to evaluate the integrated environmental stress (IES) level. Firstly, the selection principle and approach for main environmental stresses (MESs) and key characteristic parameters (KCPs) are presented based on fault mode, mechanism, and ESs analysis (FMMEA). Secondly, reference stress events (RSEs) are constructed by dividing the IES into three stress levels according to its impact on faults; the association model between integrated environmental stress events (IESEs) and BIT output is then built. Thirdly, because the IES value is an interval number, an interval grey association approach to evaluate the IES level is proposed, from which the association output can be obtained. Finally, a case study is presented to demonstrate the proposed approach. Results show that the proposed model and approach are effective and feasible. The approach can be used to guide ES measurement, recording, and association, and it is well suited to on-line assistant diagnosis of faults, especially IFs.
A dynamic modelling approach to evaluate GHG emissions from wastewater treatment plants
DEFF Research Database (Denmark)
Flores-Alsina, Xavier; Arnell, Magnus; Amerlinck, Youri
2012-01-01
The widened scope for wastewater treatment plants (WWTPs) to consider not only water quality and cost, but also greenhouse gas (GHG) emissions and climate change, calls for new tools to evaluate operational strategies/treatment technologies. The IWA Benchmark Simulation Model no. 2 (BSM2) has been... units when evaluating the global warming potential (GWP) of a WWTP. Finally, the paper demonstrates the potential of using the proposed approach as a general model-based tool for determining the most sustainable WWTP operational strategies, which is essential in a water sector where climate change, energy and sustainability are key challenges to be tackled.
A Data-Based Approach for Modeling and Analysis of Vehicle Collision by LPV-ARMAX Models
Directory of Open Access Journals (Sweden)
Qiugang Lu
2013-01-01
Full Text Available The vehicle crash test is considered the most direct and common approach to assessing vehicle crashworthiness. However, it suffers from high experimental cost and huge time consumption, so a mathematical model of the vehicle crash that simplifies the analysis process is highly attractive. In this paper, we present the application of the LPV-ARMAX model to simulate car-to-pole collisions with different initial impact velocities. The parameters of the LPV-ARMAX are assumed to depend on the initial impact velocity. Instead of establishing a set of LTI models for vehicle crashes with various impact velocities, the LPV-ARMAX model is comparatively simple and applicable for predicting the responses of new collision situations different from the ones used for identification. Finally, the comparison between the predicted response and the real test data is conducted, which shows the high fidelity of the LPV-ARMAX model.
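The idea of scheduling model coefficients on the initial impact velocity can be sketched with a first-order LPV-ARX recursion. The coefficient maps below are hypothetical placeholders, not parameters identified from crash data:

```python
import numpy as np

def simulate_lpv_arx(u, v, a_fun, b_fun, y0=0.0):
    """Simulate the first-order LPV-ARX recursion
    y[k] = a(v) * y[k-1] + b(v) * u[k-1],
    with coefficients scheduled on the initial impact velocity v."""
    a, b = a_fun(v), b_fun(v)
    y = np.empty(len(u))
    y[0] = y0
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + b * u[k - 1]
    return y

# Hypothetical linear dependence of the coefficients on velocity:
# the pole and gain both shift as v changes.
a_fun = lambda v: 0.9 - 0.005 * v
b_fun = lambda v: 0.1 + 0.002 * v
y = simulate_lpv_arx(np.ones(50), v=30.0, a_fun=a_fun, b_fun=b_fun)
```

A single coefficient map like this is what lets one identified model interpolate to impact velocities not present in the test set, instead of maintaining one LTI model per velocity.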
Directory of Open Access Journals (Sweden)
H. C. Winsemius
2006-01-01
Full Text Available Variations of water stocks in the upper Zambezi river basin have been determined by two different hydrological modelling approaches. The purpose was to provide preliminary terrestrial storage estimates in the upper Zambezi, which will be compared with estimates derived from the Gravity Recovery And Climate Experiment (GRACE) in a future study. The first modelling approach is GIS-based, distributed and conceptual (STREAM). The second approach uses Lumped Elementary Watersheds identified and modelled conceptually (LEW). The STREAM model structure has been assessed using GLUE (Generalized Likelihood Uncertainty Estimation) a posteriori to determine parameter identifiability. The LEW approach could, in addition, be tested for model structure, because the computational efforts of LEW are low. Both models are threshold models, where the non-linear behaviour of the Zambezi river basin is explained by a combination of thresholds and linear reservoirs. The models were forced by time series of gauged and interpolated rainfall. Where available, runoff station data was used to calibrate the models. Ungauged watersheds were generally given the same parameter sets as their neighbouring calibrated watersheds. It appeared that the LEW model structure could be improved by applying GLUE iteratively. Eventually, it led to better identifiability of parameters and consequently a better model structure than the STREAM model. Hence, the final model structure obtained better represents the true hydrology. After calibration, both models show a comparable efficiency in representing discharge. However, the LEW model shows a far greater storage amplitude than the STREAM model. This emphasizes the storage uncertainty related to hydrological modelling in data-scarce environments such as the Zambezi river basin. It underlines the need and potential for independent observations of terrestrial storage to enhance our understanding and modelling capacity of the hydrological processes. GRACE
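The GLUE procedure used to assess the STREAM structure can be sketched as plain Monte Carlo: sample parameters from a prior, score each simulation with a likelihood measure (here Nash-Sutcliffe efficiency), and keep the 'behavioural' sets above a subjective threshold. The one-parameter recession model below is a toy stand-in for STREAM/LEW, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(model, obs, n_samples=5000, threshold=0.5):
    """Uniform Monte Carlo sampling of the parameter; keep the
    'behavioural' values whose simulation exceeds the NSE threshold."""
    behavioural = []
    for _ in range(n_samples):
        k = rng.uniform(0.0, 2.0)          # recession parameter prior
        if nse(obs, model(k)) > threshold:
            behavioural.append(k)
    return behavioural

# Toy linear-reservoir recession; the synthetic 'observations' are
# generated with k = 0.8 plus measurement noise.
t = np.arange(20)
model = lambda k: 10.0 * np.exp(-0.1 * k * t)
obs = model(0.8) + rng.normal(0.0, 0.05, t.size)
kept = glue(model, obs)
```

The spread of `kept` around the true value is exactly the parameter-identifiability information the paper extracts; a wide behavioural range signals a poorly identifiable parameter or an inadequate model structure.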
Hot-gas cleanup system model development. Volume I. Final report
Energy Technology Data Exchange (ETDEWEB)
Ushimaru, K.; Bennett, A.; Bekowies, P.J.
1982-11-01
This two-volume report summarizes the state of the art in performance modeling of advanced high-temperature, high-pressure (HTHP) gas cleanup devices. Volume I contains the culmination of the research effort carried out over the past 12 months and is a summary of research achievements. Volume II is the user's manual for the computer programs developed under the present research project. In this volume, Section 2 presents background information on pressurized, fluidized-bed combustion concepts, a description of the role of the advanced gas cleanup systems, and a list of advanced gas cleanup systems that are currently in development under DOE sponsorship. Section 3 describes the methodology for the software architecture that forms the basis of the well-disciplined and structured computer programs developed under the present project. Section 4 reviews the fundamental theories that are important in analyzing the cleanup performance of HTHP gas filters. Section 5 discusses the effect of alkali agents in HTHP gas cleanup. Section 6 evaluates the advanced HTHP gas cleanup models based on their mathematical integrity, availability of supporting data, and the likelihood of commercialization. As a result of the evaluation procedure detailed in Section 6, five performance models were chosen to be incorporated into the overall system simulation code, ASPEN. These five models (the electrocyclone, ceramic bag filter, moving granular bed filter, electrostatic granular bed filter, and electrostatic precipitator) are described in Section 7. The method of cost projection for these five models is discussed in Section 8. The supporting data and validation of the computer codes are presented in Section 9, and finally the conclusions and recommendations for the HTHP gas cleanup system model development are given in Section 10. 72 references, 19 figures, 25 tables.
Energy Technology Data Exchange (ETDEWEB)
NONE
2012-06-15
This report is the final report in a series of six detailing the findings from the Cowichan Valley Energy Mapping and Modelling project that was carried out from April of 2011 to March of 2012 by Ea Energy Analyses in conjunction with Geographic Resource Analysis and Science (GRAS). The driving force behind the Integrated Energy Mapping and Analysis project was the identification and analysis of a suite of pathways that the Cowichan Valley Regional District (CVRD) can utilise to increase its energy resilience, as well as reduce energy consumption and GHG emissions, with a primary focus on the residential sector. Mapping and analysis undertaken will support provincial energy and GHG reduction targets, and the suite of pathways outlined will address a CVRD internal target that calls for 75% of the region's energy within the residential sector to come from locally sourced renewables by 2050. The target has been developed as a mechanism to meet resilience and climate action targets. The maps and findings produced are to be integrated as part of a regional policy framework currently under development. This final report presents a summary of the findings of project tasks 1-5 and provides a set of recommendations to the CVRD based on the work done, with an eye towards the next steps in the CVRD's energy planning process. (LN)
THE FAIRSHARES MODEL: AN ETHICAL APPROACH TO SOCIAL ENTERPRISE DEVELOPMENT?
Ridley-Duff, R.
2015-01-01
This paper is based on the keynote address to the 14th International Association of Public and Non-Profit Marketing (IAPNM) conference. It explores the question "What impact do ethical values in the FairShares Model have on social entrepreneurial behaviour?" In the first part, three broad approaches to social enterprise are set out: co-operative and mutual enterprises (CMEs), social and responsible businesses (SRBs) and charitable trading activities (CTAs). The ethics that guide each approach ...
A Cluster-based Approach Towards Detecting and Modeling Network Dictionary Attacks
Directory of Open Access Journals (Sweden)
A. Tajari Siahmarzkooh
2016-12-01
Full Text Available In this paper, we provide an approach to detecting network dictionary attacks using a data set collected as flows, from which a clustered graph is derived. These flows provide an aggregated view of the network traffic, in which the packets exchanged in the network are considered, so that more internally connected nodes are clustered together. We show that dictionary attacks can be detected through parameters such as the number and the weight of clusters in a time series and their evolution over time. Additionally, a Markov model based on the average weight of clusters is created. Finally, by means of our suggested model, we demonstrate that artificial clusters of the flows are created for normal and malicious traffic. The results of the proposed approach on the CAIDA 2007 data set suggest a high accuracy for the model and, therefore, it provides a proper method for detecting dictionary attacks.
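The cluster-count signal the paper relies on can be sketched with a union-find component counter over flow graphs, plus a crude deviation test standing in for the Markov-model scoring. The thresholds and window construction here are illustrative assumptions, not the paper's parameters:

```python
def connected_components(edges):
    """Number of clusters in an undirected flow graph (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(n) for n in parent})

def flag_windows(windows, z=2.0):
    """Flag time windows whose cluster count deviates from the mean by
    more than z standard deviations -- a crude stand-in for scoring
    against a Markov model of normal traffic."""
    counts = [connected_components(w) for w in windows]
    mu = sum(counts) / len(counts)
    sd = (sum((c - mu) ** 2 for c in counts) / len(counts)) ** 0.5
    return [i for i, c in enumerate(counts) if sd and abs(c - mu) > z * sd]

# Five normal windows with two clusters each, then a burst of many
# small clusters such as a dictionary attack might produce.
normal = [(1, 2), (2, 3), (4, 5)]
attack = [(i, i + 100) for i in range(10)]
suspicious = flag_windows([normal] * 5 + [attack])
```

The paper's actual detector also weights clusters and models their evolution; this sketch only captures the count-anomaly part of the signal.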
Energy Technology Data Exchange (ETDEWEB)
Burke, J. F., Jr.; Love, R. J.; Macal, C. M.; Decision and Information Sciences
2004-07-01
Argonne National Laboratory (Argonne) developed the transportation system capability (TRANSCAP) model to simulate the deployment of forces from Army bases, in collaboration with and under the sponsorship of the Military Transportation Management Command Transportation Engineering Agency (MTMCTEA). TRANSCAP's design separates its pre- and post-processing modules (developed in Java) from its simulation module (developed in MODSIM III). This paper describes TRANSCAP's modelling approach, emphasizing Argonne's highly detailed, object-oriented, multilanguage software design principles. Fundamental to these design principles is TRANSCAP's implementation of an improved method for standardizing the transmission of simulated data to output analysis tools and the implementation of three Army deployment/redeployment community standards, all of which are in the final phases of community acceptance. The first is the extensive hierarchy and object representation for transport simulations (EXHORT), which is a reusable, object-oriented deployment simulation source code framework of classes. The second and third are algorithms for rail deployment operations at a military base.
Intelligent Transportation and Evacuation Planning A Modeling-Based Approach
Naser, Arab
2012-01-01
Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Evacuation planning and modeling have recently attracted increasing interest among researchers as well as government officials. This interest stems from the catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricanes Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...
A model selection approach to analysis of variance and covariance.
Alber, Susan A; Weiss, Robert E
2009-06-15
An alternative to analysis of variance is a model selection approach, where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment by covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.
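The partition-as-model idea can be illustrated with a small stdlib-only sketch: enumerate all set partitions of the treatment labels, score each partition as a model with one shared mean per cluster, and select the best-scoring one. BIC here stands in for the paper's Bayesian priors over the model space, and the Gaussian likelihood with fixed variance is an assumption made for brevity.

```python
import math

def partitions(items):
    """Yield every set partition of a list (Bell-number many)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield [[first]] + part

def bic_for_partition(data, part, sigma2=1.0):
    """Gaussian log-likelihood with one shared mean per cluster;
    BIC penalizes the number of clusters (i.e., free means)."""
    n = sum(len(ys) for ys in data.values())
    ll = 0.0
    for cluster in part:
        ys = [y for t in cluster for y in data[t]]
        mu = sum(ys) / len(ys)
        ll += sum(-0.5 * math.log(2 * math.pi * sigma2)
                  - (y - mu) ** 2 / (2 * sigma2) for y in ys)
    return -2 * ll + len(part) * math.log(n)

def best_partition(data):
    return min(partitions(list(data)), key=lambda p: bic_for_partition(data, p))
```

With two similar treatments and one outlier, the selected model merges the similar pair into one cluster, which is exactly how the partition approach replaces the all-or-nothing null hypothesis of classical ANOVA.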
A transformation approach for collaboration based requirement models
Harbouche, Ahmed; Mokhtari, Aicha
2012-01-01
Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with this transformation process and suggests an approach to derive the behavior of a given system's components, in the form of distributed Finite State Machines, from the global system requirements, expressed in an augmented UML Activity Diagram notation. The suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (the requirements Meta-Model), the definition of the target Design Meta-Model, and the definition of the rules that govern the transformation during the derivation process. The derivation process transforms the global system requirements, described as UML Activity Diagrams (extended with collaborations), into system role behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).
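The core derivation step — projecting a global interaction sequence onto per-component state machines — can be mimicked in a few lines. This is a toy stand-in for the paper's ATL rules: the interaction-tuple input format and the `!`/`?` send/receive labelling convention are assumptions.

```python
def project_roles(interactions):
    """Project a global interaction sequence onto per-role finite state
    machines. Each role gets transitions labelled !msg (send) or ?msg
    (receive); its states are consecutive integers starting at 0."""
    fsms = {}
    for sender, receiver, msg in interactions:
        for role, label in ((sender, '!' + msg), (receiver, '?' + msg)):
            fsm = fsms.setdefault(role, {'transitions': [], 'state': 0})
            s = fsm['state']
            fsm['transitions'].append((s, label, s + 1))
            fsm['state'] = s + 1
    return {r: f['transitions'] for r, f in fsms.items()}
```

For a request/reply collaboration, the client role ends up with a send-then-receive machine and the server with its mirror image, which is the shape of output the ATL transformation produces at meta-model scale.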
A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS
Directory of Open Access Journals (Sweden)
Ahmed Harbouche
2012-02-01
Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such transformation process and suggests an approach to derive the behavior of a given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML diagram activities (extended with collaborations) to system roles behaviors represented as UML finite state machines. The approach is implemented using Atlas Transformation Language (ATL).
Final Report: First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality
Energy Technology Data Exchange (ETDEWEB)
Aberg, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sadigh, Babak [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zhou, Fei [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-01-01
This final report presents work carried out on the project “First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality” at Lawrence Livermore National Laboratory during 2013-2015. The scope of the work was to further the physical understanding of the microscopic mechanisms behind scintillator nonproportionality, which effectively limits the achievable detector resolution. The project thereby provided crucial quantitative data on these processes as input to large-scale simulation codes. In particular, this project was divided into three tasks: (i) quantum mechanical rates of non-radiative quenching, (ii) the thermodynamics of point defects and dopants, and (iii) formation and migration of self-trapped polarons. The progress and results of each of these subtasks are detailed.
Modeling of integrated environmental control systems for coal-fired power plants. Final report
Energy Technology Data Exchange (ETDEWEB)
Rubin, E.S.; Salmento, J.S.; Frey, H.C.; Abu-Baker, A.; Berkenpas, M.
1991-05-01
The Integrated Environmental Control Model (IECM) was designed to permit the systematic evaluation of environmental control options for pulverized coal-fired (PC) power plants. Of special interest was the ability to compare the performance and cost of advanced pollution control systems to "conventional" technologies for the control of particulate, SO{sub 2} and NO{sub x}. Of importance also was the ability to consider pre-combustion, combustion and post-combustion control methods employed alone or in combination to meet tough air pollution emission standards. Finally, the ability to conduct probabilistic analyses is a unique capability of the IECM. Key results are characterized as distribution functions rather than as single deterministic values. (VC)
MEMFIS - Measuring, modelling and forecasting ice loads on structures - Final report
Energy Technology Data Exchange (ETDEWEB)
Dierer, S.; Cattin, R.
2010-05-15
This illustrated final report for the Swiss Federal Office of Energy (SFOE) takes a look at the icing-up of structures such as overhead power lines, wind turbines and aerial cableways in mountainous or arctic areas. The measurement of icing at three locations in Switzerland, in the high Alps, the alpine foothills and the Jura mountains using a vertical freely-rotating cylinder is described. Problems encountered during the measurement campaigns are described and discussed. The development of a simulation system in parallel to the measurement campaign is also discussed. A comparison of measured and simulated data calculated with the WRF and COSMO models was made difficult as a result of the problems encountered with the apparatus used. The basic effects causing icing-up are discussed and the measurement apparatus used is examined.
Towards the final BSA modeling for the accelerator-driven BNCT facility at INFN LNL
Energy Technology Data Exchange (ETDEWEB)
Ceballos, C. [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, 5ta y 30, Miramar, Playa, Ciudad Habana (Cuba); Esposito, J., E-mail: juan.esposito@lnl.infn.it [INFN, Laboratori Nazionali di Legnaro (LNL), via dell' Universita, 2, I-35020 Legnaro (PD) (Italy); Agosteo, S. [Politecnico di Milano, Dipartimento di Energia, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)] [INFN, Sezione di Milano, via Celoria 16, 20133 Milano (Italy); Colautti, P.; Conte, V.; Moro, D. [INFN, Laboratori Nazionali di Legnaro (LNL), via dell' Universita, 2, I-35020 Legnaro (PD) (Italy); Pola, A. [Politecnico di Milano, Dipartimento di Energia, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)] [INFN, Sezione di Milano, via Celoria 16, 20133 Milano (Italy)
2011-12-15
Some remarkable advances have been made in recent years on the SPES-BNCT project of the Istituto Nazionale di Fisica Nucleare (INFN) towards the development of the accelerator-driven thermal neutron beam facility at the Legnaro National Laboratories (LNL), aimed at the BNCT experimental treatment of extended skin melanoma. The compact neutron source will be produced via the {sup 9}Be(p,xn) reactions using the 5 MeV, 30 mA beam driven by the RFQ accelerator, whose modules' construction has recently been completed, on a thick beryllium target prototype already available. The final Beam Shaping Assembly (BSA) modeling, using both the neutron converter and the new, detailed Be(p,xn) neutron yield spectra at 5 MeV recently measured at the CN Van de Graaff accelerator at LNL, is summarized here.
Towards the final BSA modeling for the accelerator-driven BNCT facility at INFN LNL.
Ceballos, C; Esposito, J; Agosteo, S; Colautti, P; Conte, V; Moro, D; Pola, A
2011-12-01
Some remarkable advances have been made in recent years on the SPES-BNCT project of the Istituto Nazionale di Fisica Nucleare (INFN) towards the development of the accelerator-driven thermal neutron beam facility at the Legnaro National Laboratories (LNL), aimed at the BNCT experimental treatment of extended skin melanoma. The compact neutron source will be produced via the (9)Be(p,xn) reactions using the 5 MeV, 30 mA beam driven by the RFQ accelerator, whose modules' construction has recently been completed, on a thick beryllium target prototype already available. The final Beam Shaping Assembly (BSA) modeling, using both the neutron converter and the new, detailed Be(p,xn) neutron yield spectra at 5 MeV recently measured at the CN Van de Graaff accelerator at LNL, is summarized here. Copyright © 2011 Elsevier Ltd. All rights reserved.
Search for a fermiophobic and standard model Higgs boson in diphoton final states.
Abazov, V M; Abbott, B; Acharya, B S; Adams, M; Adams, T; Alexeev, G D; Alkhazov, G; Alton, A; Alverson, G; Alves, G A; Aoki, M; Arov, M; Askew, A; Åsman, B; Atramentov, O; Avila, C; BackusMayes, J; Badaud, F; Bagby, L; Baldin, B; Bandurin, D V; Banerjee, S; Barberis, E; Baringer, P; Barreto, J; Bartlett, J F; Bassler, U; Bazterra, V; Beale, S; Bean, A; Begalli, M; Begel, M; Belanger-Champagne, C; Bellantoni, L; Beri, S B; Bernardi, G; Bernhard, R; Bertram, I; Besançon, M; Beuselinck, R; Bezzubov, V A; Bhat, P C; Bhatnagar, V; Blazey, G; Blessing, S; Bloom, K; Boehnlein, A; Boline, D; Boos, E E; Borissov, G; Bose, T; Brandt, A; Brandt, O; Brock, R; Brooijmans, G; Bross, A; Brown, D; Brown, J; Bu, X B; Buehler, M; Buescher, V; Bunichev, V; Burdin, S; Burnett, T H; Buszello, C P; Calpas, B; Camacho-Pérez, E; Carrasco-Lizarraga, M A; Casey, B C K; Castilla-Valdez, H; Chakrabarti, S; Chakraborty, D; Chan, K M; Chandra, A; Chen, G; Chevalier-Théry, S; Cho, D K; Cho, S W; Choi, S; Choudhary, B; Cihangir, S; Claes, D; Clutter, J; Cooke, M; Cooper, W E; Corcoran, M; Couderc, F; Cousinou, M-C; Croc, A; Cutts, D; Das, A; Davies, G; De, K; de Jong, S J; De La Cruz-Burelo, E; Déliot, F; Demarteau, M; Demina, R; Denisov, D; Denisov, S P; Desai, S; Deterre, C; DeVaughan, K; Diehl, H T; Diesburg, M; Ding, P F; Dominguez, A; Dorland, T; Dubey, A; Dudko, L V; Duggan, D; Duperrin, A; Dutt, S; Dyshkant, A; Eads, M; Edmunds, D; Ellison, J; Elvira, V D; Enari, Y; Evans, H; Evdokimov, A; Evdokimov, V N; Facini, G; Ferbel, T; Fiedler, F; Filthaut, F; Fisher, W; Fisk, H E; Fortner, M; Fox, H; Fuess, S; Garcia-Bellido, A; Gavrilov, V; Gay, P; Geng, W; Gerbaudo, D; Gerber, C E; Gershtein, Y; Ginther, G; Golovanov, G; Goussiou, A; Grannis, P D; Greder, S; Greenlee, H; Greenwood, Z D; Gregores, E M; Grenier, G; Gris, Ph; Grivaz, J-F; Grohsjean, A; Grünendahl, S; Grünewald, M W; Guillemin, T; Guo, F; Gutierrez, G; Gutierrez, P; Haas, A; Hagopian, S; Haley, J; Han, L; Harder, K; Harel, A; 
Hauptman, J M; Hays, J; Head, T; Hebbeker, T; Hedin, D; Hegab, H; Heinson, A P; Heintz, U; Hensel, C; Heredia-De La Cruz, I; Herner, K; Hesketh, G; Hildreth, M D; Hirosky, R; Hoang, T; Hobbs, J D; Hoeneisen, B; Hohlfeld, M; Hubacek, Z; Huske, N; Hynek, V; Iashvili, I; Ilchenko, Y; Illingworth, R; Ito, A S; Jabeen, S; Jaffré, M; Jamin, D; Jayasinghe, A; Jesik, R; Johns, K; Johnson, M; Johnston, D; Jonckheere, A; Jonsson, P; Joshi, J; Jung, A W; Juste, A; Kaadze, K; Kajfasz, E; Karmanov, D; Kasper, P A; Katsanos, I; Kehoe, R; Kermiche, S; Khalatyan, N; Khanov, A; Kharchilava, A; Kharzheev, Y N; Kirby, M H; Kohli, J M; Kozelov, A V; Kraus, J; Kulikov, S; Kumar, A; Kupco, A; Kurča, T; Kuzmin, V A; Kvita, J; Lammers, S; Landsberg, G; Lebrun, P; Lee, H S; Lee, S W; Lee, W M; Lellouch, J; Li, L; Li, Q Z; Lietti, S M; Lim, J K; Lincoln, D; Linnemann, J; Lipaev, V V; Lipton, R; Liu, Y; Liu, Z; Lobodenko, A; Lokajicek, M; Lopes de Sa, R; Lubatti, H J; Luna-Garcia, R; Lyon, A L; Maciel, A K A; Mackin, D; Madar, R; Magaña-Villalba, R; Malik, S; Malyshev, V L; Maravin, Y; Martínez-Ortega, J; McCarthy, R; McGivern, C L; Meijer, M M; Melnitchouk, A; Menezes, D; Mercadante, P G; Merkin, M; Meyer, A; Meyer, J; Miconi, F; Mondal, N K; Muanza, G S; Mulhearn, M; Nagy, E; Naimuddin, M; Narain, M; Nayyar, R; Neal, H A; Negret, J P; Neustroev, P; Novaes, S F; Nunnemann, T; Obrant, G; Orduna, J; Osman, N; Osta, J; Otero y Garzón, G J; Padilla, M; Pal, A; Parashar, N; Parihar, V; Park, S K; Parsons, J; Partridge, R; Parua, N; Patwa, A; Penning, B; Perfilov, M; Peters, K; Peters, Y; Petridis, K; Petrillo, G; Pétroff, P; Piegaia, R; Pleier, M-A; Podesta-Lerma, P L M; Podstavkov, V M; Polozov, P; Popov, A V; Prewitt, M; Price, D; Prokopenko, N; Protopopescu, S; Qian, J; Quadt, A; Quinn, B; Rangel, M S; Ranjan, K; Ratoff, P N; Razumov, I; Renkel, P; Rijssenbeek, M; Ripp-Baudot, I; Rizatdinova, F; Rominsky, M; Ross, A; Royon, C; Rubinov, P; Ruchti, R; Safronov, G; Sajot, G; Salcido, P; 
Sánchez-Hernández, A; Sanders, M P; Sanghi, B; Santos, A S; Savage, G; Sawyer, L; Scanlon, T; Schamberger, R D; Scheglov, Y; Schellman, H; Schliephake, T; Schlobohm, S; Schwanenberger, C; Schwienhorst, R; Sekaric, J; Severini, H; Shabalina, E; Shary, V; Shchukin, A A; Shivpuri, R K; Simak, V; Sirotenko, V; Skubic, P; Slattery, P; Smirnov, D; Smith, K J; Snow, G R; Snow, J; Snyder, S; Söldner-Rembold, S; Sonnenschein, L; Soustruznik, K; Stark, J; Stolin, V; Stoyanova, D A; Strauss, M; Strom, D; Stutte, L; Suter, L; Svoisky, P; Takahashi, M; Tanasijczuk, A; Taylor, W; Titov, M; Tokmenin, V V; Tsai, Y-T; Tsybychev, D; Tuchming, B; Tully, C; Uvarov, L; Uvarov, S; Uzunyan, S; Van Kooten, R; van Leeuwen, W M; Varelas, N; Varnes, E W; Vasilyev, I A; Verdier, P; Vertogradov, L S; Verzocchi, M; Vesterinen, M; Vilanova, D; Vokac, P; Wahl, H D; Wang, M H L S; Warchol, J; Watts, G; Wayne, M; Weber, M; Welty-Rieger, L; White, A; Wicke, D; Williams, M R J; Wilson, G W; Wobisch, M; Wood, D R; Wyatt, T R; Xie, Y; Xu, C; Yacoob, S; Yamada, R; Yang, W-C; Yasuda, T; Yatsunenko, Y A; Ye, Z; Yin, H; Yip, K; Youn, S W; Yu, J; Zelitch, S; Zhao, T; Zhou, B; Zhu, J; Zielinski, M; Zieminska, D; Zivkovic, L
2011-10-07
We present a search for the standard model Higgs boson and a fermiophobic Higgs boson in the diphoton final states based on 8.2 fb(-1) of pp collisions at sqrt[s]=1.96 TeV collected with the D0 detector at the Fermilab Tevatron Collider. No excess of data above background predictions is observed and upper limits at the 95% C.L. on the cross section multiplied by the branching fraction are set which are the most restrictive to date. A fermiophobic Higgs boson with a mass below 112.9 GeV is excluded at the 95% C.L.
An algebraic approach to modeling in software engineering
Energy Technology Data Exchange (ETDEWEB)
Loegel, G.J. [Superconducting Super Collider Lab., Dallas, TX (United States)]|[Michigan Univ., Ann Arbor, MI (United States); Ravishankar, C.V. [Michigan Univ., Ann Arbor, MI (United States)
1993-09-01
Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.
DISTRIBUTED APPROACH to WEB PAGE CATEGORIZATION USING MAPREDUCE PROGRAMMING MODEL
Directory of Open Access Journals (Sweden)
P.Malarvizhi
2011-12-01
The web is a large repository of information, and to facilitate the search and retrieval of pages from it, categorization of web documents is essential. An effective means to handle the complexity of information retrieval from the internet is automatic classification of web pages. Although many automatic classification algorithms and systems have been presented, most of the existing approaches are computationally challenging. To overcome this challenge, we have proposed a parallel algorithm based on the MapReduce programming model to automatically categorize web pages. This approach incorporates three components: a web crawler, the MapReduce programming model, and the proposed web page categorization approach. Initially, we use the web crawler to mine the World Wide Web, and the crawled web pages are then given directly as input to the MapReduce programming model. The MapReduce programming model, adapted to our proposed web page categorization approach, finds the appropriate category of a web page according to its content. The experimental results show that our proposed parallel web page categorization approach achieves satisfactory results in finding the right category for any given web page.
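The map/reduce split the abstract relies on can be sketched in plain Python. The category names, keyword sets, and the keyword-counting classifier are all invented for illustration; the paper's actual categorization logic is not specified in the abstract.

```python
from collections import Counter

# Hypothetical keyword lexicon standing in for the paper's classifier.
CATEGORIES = {
    'sports': {'match', 'team', 'score', 'league'},
    'tech':   {'software', 'algorithm', 'data', 'network'},
}

def map_phase(page):
    """Map step: for one crawled page, emit (category, hit_count) pairs."""
    url, text = page
    words = text.lower().split()
    return [(cat, sum(w in kws for w in words))
            for cat, kws in CATEGORIES.items()]

def reduce_phase(pairs):
    """Reduce step: sum the emitted hit counts per category key."""
    totals = Counter()
    for cat, n in pairs:
        totals[cat] += n
    return totals

def categorize(page):
    totals = reduce_phase(map_phase(page))
    return totals.most_common(1)[0][0]
```

In a real MapReduce deployment the map calls run in parallel across crawled pages and the framework shuffles the `(category, count)` pairs to the reducers; the sequential composition above preserves the same dataflow.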
A new approach towards image based virtual 3D city modeling by using close range photogrammetry
Singh, S. P.; Jain, K.; Mandla, V. R.
2014-05-01
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. A literature study shows that, to date, there is no complete solution available to create a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area. Scaling and alignment of the 3D model were performed. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model was transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries
Yermolayeva, Yevdokiya; Rakison, David H
2014-01-01
Connectionist models have been applied to many phenomena in infant development including perseveration, language learning, categorization, and causal perception. In this article, we discuss the benefits of connectionist networks for the advancement of theories of early development. In particular, connectionist models contribute novel testable predictions, instantiate the theorized mechanism of change, and create a unifying framework for understanding infant learning and development. We relate these benefits to the 2 primary approaches used in connectionist models of infant development. The first approach employs changes in neural processing as the basis for developmental changes, and the second employs changes in infants' experiences. The review sheds light on the unique hurdles faced by each approach as well as the challenges and solutions related to both, particularly with respect to the identification of critical model components, parameter specification, availability of empirical data, and model comparison. Finally, we discuss the future of modeling work as it relates to the study of development. We propose that connectionist networks stand to make a powerful contribution to the generation and revision of theories of early child development. Furthermore, insights from connectionist models of early development can improve the understanding of developmental changes throughout the life span.
Urban water quality modelling: a parsimonious holistic approach for a complex real case study.
Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare
2010-01-01
In the past three decades, scientific research has focused on the preservation of water resources and, in particular, on the polluting impact of urban areas on natural water bodies. One approach to this research has involved the development of tools to describe the phenomena that take place on the urban catchment during both wet and dry periods. Research has demonstrated the importance of the integrated analysis of all the transformation phases that characterise the delivery and treatment of urban water pollutants from source to outfall. With this aim, numerous integrated urban drainage models have been developed to analyse the fate of pollution from urban catchments to the final receiving waters, simulating several physical and chemical processes. Such modelling approaches require calibration, and for this reason, researchers have tried to address two opposing needs: the need for reliable representation of complex systems, and the need to employ parsimonious approaches to cope with the usually insufficient water quality data, especially for urban sources. The present paper discusses the application of a bespoke model to a complex integrated catchment: the Nocella basin (Italy). This system is characterised by two main urban areas served by two wastewater treatment plants, and has a small river as the receiving water body. The paper describes the monitoring approach that was used for model calibration, presents some interesting considerations about the monitoring needs for integrated modelling applications, and provides initial results useful for identifying the most relevant polluting sources.
Teaching Service Modelling to a Mixed Class: An Integrated Approach
Directory of Open Access Journals (Sweden)
Jeremiah D. DENG
2015-04-01
Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching with strategies such as problem-solving, visualization, and the use of examples and simulations has been developed. Assessment of student learning outcomes indicates that the proposed course delivery approach succeeded in bringing out comparable and satisfactory performance from students of different educational backgrounds.
A Spatial Clustering Approach for Stochastic Fracture Network Modelling
Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.
2014-07-01
Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach.
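The abstract's core fitting idea — simulated annealing that perturbs a fracture (a 2D line segment) to minimize the summed distance of conditioning points from it — can be sketched for a single segment. This is a minimal illustration, not the authors' comprehensive modification scheme: the Gaussian perturbation size, cooling schedule, and single-segment restriction are all assumptions.

```python
import math
import random

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    length2 = dx * dx + dy * dy
    t = 0.0 if length2 == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / length2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def anneal_segment(points, steps=5000, seed=1):
    """Simulated annealing over the two endpoints, minimizing the summed
    point-to-segment distance (the paper's goodness-of-fit idea)."""
    rng = random.Random(seed)
    a, b = list(points[0]), list(points[-1])
    cost = lambda p1, p2: sum(point_segment_dist(p, p1, p2) for p in points)
    cur = best = cost(a, b)
    for k in range(steps):
        temp = 1.0 * (1 - k / steps) + 1e-3          # linear cooling
        na = [a[0] + rng.gauss(0, 0.1), a[1] + rng.gauss(0, 0.1)]
        nb = [b[0] + rng.gauss(0, 0.1), b[1] + rng.gauss(0, 0.1)]
        c = cost(na, nb)
        # accept improvements always, worse moves with Boltzmann probability
        if c < cur or rng.random() < math.exp((cur - c) / temp):
            a, b, cur = na, nb, c
            best = min(best, cur)
    return a, b, best
```

The full method in the paper additionally proposes, deletes, and merges many fractures and scores similarity between them; this sketch only shows the accept/reject core that drives those moves.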
Energy Technology Data Exchange (ETDEWEB)
Stasiak, R.T.; Turner, P.A.; Pendyala, R.; Polzin, S.E.
1998-08-01
This research effort was undertaken to develop a simulation capability for evaluating various paratransit service delivery characteristics and policies. The research stems from an earlier effort to identify literature addressing the theoretical maximum productivity of paratransit operations in a given demographic environment. Surprisingly, virtually no operations research or simulation work addressing this topic was found in the literature or known to the experts who were contacted. It became obvious that no systematic evaluation of paratransit productivity issues had ever been carried out using an urban simulation model or optimization approach. Thus, this research effort was undertaken to develop such a capability through a multi-step process involving an urban land use and transportation network, a paratransit trip generation model, and a service delivery model consisting of a vehicle dispatching algorithm. The overall framework was modeled after the simulation efforts common in the 1970s that used network simulation models to test various urban forms and fixed-route transit delivery scenarios to evaluate energy efficiency and productivity.
Joint Modeling of Multiple Crimes: A Bayesian Spatial Approach
Directory of Open Access Journals (Sweden)
Hongqiang Liu
2017-01-01
A multivariate Bayesian spatial modeling approach was used to jointly model the counts of two types of crime, i.e., burglary and non-motor vehicle theft, and to explore the geographic pattern of crime risks and relevant risk factors. In contrast to the univariate model, which assumes independence across outcomes, the multivariate approach takes into account potential correlations between crimes. Six independent variables are included in the model as potential risk factors. In order to fully present this method, both the multivariate model and its univariate counterpart are examined. We fitted the two models to the data and assessed them using the deviance information criterion. A comparison of the results from the two models indicates that the multivariate model was superior to the univariate model. Our results show that population density and bar density are clearly associated with both burglary and non-motor vehicle theft risks and indicate a close relationship between these two types of crime. The posterior means and 2.5th percentiles of type-specific crime risks estimated by the multivariate model were mapped to uncover the geographic patterns. The implications, limitations and future work of the study are discussed in the concluding section.
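The deviance information criterion (DIC) used above to compare the univariate and multivariate models has a simple mechanical form: the posterior mean deviance plus an effective-parameter penalty. A stdlib sketch for a Poisson count outcome follows; the Poisson likelihood and the rate-sample format are illustrative assumptions, not the paper's spatial CAR model.

```python
import math

def deviance(y, lam):
    """Poisson deviance, -2 * log-likelihood. Constants cancel in model
    comparison as long as they are included consistently."""
    return -2 * sum(yi * math.log(l) - l - math.lgamma(yi + 1)
                    for yi, l in zip(y, lam))

def dic(y, posterior_samples):
    """DIC = Dbar + pD, where Dbar is the posterior mean deviance and
    pD = Dbar - D(posterior-mean parameters) counts effective parameters."""
    m = len(posterior_samples)
    dbar = sum(deviance(y, s) for s in posterior_samples) / m
    mean_lam = [sum(s[i] for s in posterior_samples) / m
                for i in range(len(y))]
    pd = dbar - deviance(y, mean_lam)
    return dbar + pd
```

Lower DIC wins, so the paper's conclusion "the multivariate model was superior" amounts to its DIC being smaller than the univariate model's on the same crime counts.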
Reduced order models for thermal analysis : final report : LDRD Project No. 137807.
Energy Technology Data Exchange (ETDEWEB)
Hogan, Roy E., Jr.; Gartling, David K.
2010-09-01
This LDRD Senior's Council Project is focused on the development, implementation and evaluation of Reduced Order Models (ROM) for application in the thermal analysis of complex engineering problems. Two basic approaches to developing a ROM for combined thermal conduction and enclosure radiation problems are considered. As a prerequisite to a ROM, a fully coupled solution method for conduction/radiation models is required; a parallel implementation is explored for this class of problems. High-fidelity models of large, complex systems are now used routinely to verify design and performance. However, there are applications where the high-fidelity model is too large to be used repetitively in a design mode. One such application is the design of a control system that oversees the functioning of the complex, high-fidelity model. Examples include control systems for manufacturing processes such as brazing and annealing furnaces as well as control systems for the thermal management of optical systems. A reduced order model (ROM) seeks to reduce the number of degrees of freedom needed to represent the overall behavior of the large system without a significant loss in accuracy. The reduction in the number of degrees of freedom of the ROM leads to immediate increases in computational efficiency and allows many design parameters and perturbations to be quickly and effectively evaluated. Reduced order models are routinely used in solid mechanics, where techniques such as modal analysis have reached a high state of refinement. Similar techniques have recently been applied to standard thermal conduction problems, though the general use of ROM for heat transfer is not yet widespread. One major difficulty with the development of ROM for general thermal analysis is the need to include the very nonlinear effects of enclosure radiation in many applications. Many ROM methods have considered only linear or mildly nonlinear problems. In the present study a reduced order model is
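The snapshot-based reduction the report alludes to can be illustrated with a one-mode proper orthogonal decomposition (POD): extract the dominant spatial mode from temperature-field snapshots, then represent each field by a single reduced coordinate. Power iteration and the single retained mode are simplifications chosen to keep the sketch stdlib-only; they are not the report's specific method.

```python
def pod_rom(snapshots, iters=200):
    """One-mode POD: find the dominant spatial mode of a list of
    equal-length snapshot vectors via power iteration on the (implicit)
    correlation operator, and return the reduced coordinate of each
    snapshot (its projection onto that mode)."""
    n = len(snapshots[0])
    v = [1.0] * n
    for _ in range(iters):
        w = [0.0] * n
        for s in snapshots:
            coef = sum(si * vi for si, vi in zip(s, v))
            for i in range(n):
                w[i] += coef * s[i]          # accumulate (s . v) * s
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    coords = [sum(si * vi for si, vi in zip(s, v)) for s in snapshots]
    return v, coords

def reconstruct(mode, coord):
    """Lift a reduced coordinate back to the full temperature field."""
    return [coord * m for m in mode]
```

In practice one keeps several modes and projects the governing equations onto them (Galerkin projection); the nonlinear enclosure-radiation terms the report highlights are exactly what makes that projection step hard.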
Chan, Jennifer S K
2016-05-01
Dropouts are common in longitudinal studies. If the dropout probability depends on the missing observations at or after dropout, this type of dropout is called informative (or nonignorable) dropout (ID). Failure to accommodate such a dropout mechanism in the model will bias the parameter estimates. We propose a conditional autoregressive model for longitudinal binary data with an ID model such that the probabilities of positive outcomes as well as the dropout indicator on each occasion are logit linear in some covariates and outcomes. This model, adopting a marginal model for outcomes and a conditional model for dropouts, is called a selection model. To allow for heterogeneity and clustering effects, the outcome model is extended to incorporate mixture and random effects. Lastly, the model is further extended to a novel model that models the outcome and dropout jointly such that their dependency is formulated through an odds ratio function. Parameters are estimated by a Bayesian approach implemented using the user-friendly Bayesian software WinBUGS. A methadone clinic dataset is analyzed to illustrate the proposed models. Results show that the treatment time effect is still significant but weaker after allowing for an ID process in the data. Finally, the effect of dropout on parameter estimates is evaluated through simulation studies.
A Multiple Model Approach to Modeling Based on Fuzzy Support Vector Machines
Institute of Scientific and Technical Information of China (English)
冯瑞; 张艳珠; 宋春林; 邵惠鹤
2003-01-01
A new multiple-model (MM) approach was proposed to model complex industrial processes by using Fuzzy Support Vector Machines (FSVMs). By applying the proposed approach to a pH neutralization titration experiment, the FSVMs MM not only provides satisfactory approximation and generalization properties, but also achieves performance superior to the USOCPN multiple modeling method and to single modeling methods based on standard SVMs.
Software sensors based on the grey-box modelling approach
DEFF Research Database (Denmark)
Carstensen, J.; Harremoës, P.; Strube, Rune
1996-01-01
In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey-box...
Environmental Radiation Effects on Mammals A Dynamical Modeling Approach
Smirnova, Olga A
2010-01-01
This text is devoted to the theoretical studies of radiation effects on mammals. It uses the framework of developed deterministic mathematical models to investigate the effects of both acute and chronic irradiation in a wide range of doses and dose rates on vital body systems including hematopoiesis, small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of the system and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of employment of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as the properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...
The standard data model approach to patient record transfer.
Canfield, K; Silva, M; Petrucci, K
1994-01-01
This paper develops an approach to electronic data exchange of patient records from Ambulatory Encounter Systems (AESs). This approach assumes that the AES is based upon a standard data model. The data modeling standard used here is IDEF1X for Entity/Relationship (E/R) modeling. Each site that uses a relational database implementation of this standard data model (or a subset of it) can exchange very detailed patient data with other such sites using industry standard tools and without excessive programming efforts. This design is detailed below for a demonstration project between the research-oriented geriatric clinic at the Baltimore Veterans Affairs Medical Center (BVAMC) and the Laboratory for Healthcare Informatics (LHI) at the University of Maryland.
Model selection and inference a practical information-theoretic approach
Burnham, Kenneth P
1998-01-01
This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
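The AIC computation at the core of the book's approach is straightforward; a small sketch with made-up log-likelihoods and parameter counts:

```python
import math

def aic(log_likelihood, k):
    """Akaike's Information Criterion: AIC = 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

def aic_weights(aics):
    """Akaike weights: relative support for each candidate model."""
    d = [a - min(aics) for a in aics]            # differences from the best model
    rel = [math.exp(-0.5 * di) for di in d]
    s = sum(rel)
    return [r / s for r in rel]

# Hypothetical candidate models: (maximized log-likelihood, number of parameters)
models = [(-120.3, 3), (-118.9, 5), (-118.7, 8)]
aics = [aic(ll, k) for ll, k in models]
weights = aic_weights(aics)
best = min(range(len(aics)), key=aics.__getitem__)
```

Note how the 8-parameter model's slightly better fit is penalized away: the parsimony term 2k does the work AIC is known for.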
Real-space renormalization group approach to the Anderson model
Campbell, Eamonn
Many of the most interesting electronic behaviours currently being studied are associated with strong correlations. In addition, many of these materials are disordered either intrinsically or due to doping. Solving interacting systems exactly is extremely computationally expensive, and approximate techniques developed for strongly correlated systems are not easily adapted to include disorder. As a non-interacting disordered model, it makes sense to consider the Anderson model as a first step in developing an approximate method of solution to the interacting and disordered Anderson-Hubbard model. Our renormalization group (RG) approach is modeled on that proposed by Johri and Bhatt [23]. We found an error in their work which we have corrected in our procedure. After testing the execution of the RG, we benchmarked the density of states and inverse participation ratio results against exact diagonalization. Our approach is significantly faster than exact diagonalization and is most accurate in the limit of strong disorder.
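The benchmarking step mentioned above — exact diagonalization with the inverse participation ratio (IPR) as a localization measure — can be sketched for a small 1D Anderson chain (parameters illustrative; this is not the thesis's RG code):

```python
import numpy as np

# 1D Anderson model: H = sum_i eps_i |i><i| - t sum_<ij> (|i><j| + h.c.)
rng = np.random.default_rng(0)
L, t, W = 200, 1.0, 5.0                  # sites, hopping, disorder strength
eps = rng.uniform(-W / 2, W / 2, L)      # random on-site energies
H = np.diag(eps) - t * (np.eye(L, k=1) + np.eye(L, k=-1))

E, psi = np.linalg.eigh(H)               # columns of psi are normalized eigenstates
ipr = np.sum(psi**4, axis=0)             # IPR ~ 1/L for extended, O(1) for localized
mean_ipr = ipr.mean()
```

At this disorder strength (W = 5t) the states are strongly localized, so the mean IPR sits far above the extended-state value 1/L — the regime where the RG approach above is most accurate.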
Model Convolution: A Computational Approach to Digital Image Interpretation
Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.
2010-01-01
Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132
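A minimal sketch of the model-convolution idea — place ideal fluorophores, blur with a measured point-spread function, add measured background noise — might look like this (the Gaussian PSF and all parameter values are hypothetical):

```python
import numpy as np

def model_convolution(positions, shape, psf_sigma, gain, bg_mean, bg_sigma, rng):
    """Simulate imaging of point fluorophores: ideal positions, then PSF blur,
    then background noise (an illustrative sketch of the general idea)."""
    img = np.zeros(shape)
    for y, x in positions:                           # ideal fluorophore positions
        img[int(y), int(x)] += gain
    # Separable Gaussian PSF blur
    r = int(3 * psf_sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * psf_sigma ** 2))
    k /= k.sum()
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, img)
    # Add camera background noise, as measured from the experimental system
    return img + rng.normal(bg_mean, bg_sigma, shape)

rng = np.random.default_rng(1)
simulated = model_convolution([(32, 32), (32, 40)], (64, 64), 2.0, 1000.0, 100.0, 5.0, rng)
```

The simulated image can then be compared statistically against experimental frames, which is exactly the direction of inference the abstract contrasts with deconvolution.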
MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION
Directory of Open Access Journals (Sweden)
Priyanka H U
2016-09-01
Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves integrating heterogeneous clinical sources with different representations from different health-care providers, making the task increasingly complex. Such sources are typically voluminous and diverse, and change significantly over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, are needed that can synthesize the data and assist the physician in making the right clinical decisions. In this work we propose a multi-model predictive architecture, a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture is able to provide better accuracy than the best single-model approach. By modelling the error of the predictive models we are able to choose a subset of models which yields accurate results. More information was modelled into the system by multi-level mining, which has resulted in enhanced predictive accuracy.
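The combination scheme described — rank candidate models by validation error and average the predictions of the best subset — can be sketched as follows (the models here are simulated stand-ins, not the study's classifiers):

```python
import numpy as np

rng = np.random.default_rng(2)
y_val = rng.integers(0, 2, 200)                       # validation labels (toy data)

# Simulated probability outputs of four trained models with different noise levels
preds = {name: np.clip(y_val + rng.normal(0, s, 200), 0, 1)
         for name, s in [("logreg", 0.3), ("tree", 0.45), ("svm", 0.35), ("nn", 0.8)]}

def accuracy(p):
    return np.mean((p > 0.5) == y_val)

# Rank models by validation accuracy and average the best subset's probabilities
ranked = sorted(preds, key=lambda n: -accuracy(preds[n]))
subset = ranked[:3]                                   # drop the weakest model
ensemble = np.mean([preds[n] for n in subset], axis=0)
acc_ensemble = accuracy(ensemble)
acc_best = accuracy(preds[ranked[0]])
```

Modelling each model's error on held-out data is what justifies dropping members from the ensemble, mirroring the subset-selection step in the abstract.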
A new approach of high speed cutting modelling: SPH method
LIMIDO, Jérôme; Espinosa, Christine; Salaün, Michel; Lacome, Jean-Luc
2006-01-01
The purpose of this study is to introduce a new approach to high speed cutting numerical modelling. A Lagrangian Smoothed Particle Hydrodynamics (SPH) based model is developed using the LS-DYNA software. SPH is a meshless method, thus the large material distortions that occur in the cutting problem are easily managed, and SPH contact control permits a “natural” workpiece/chip separation. Estimated chip morphology and cutting forces are compared to machining dedicated code results and experimenta...
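The smoothing that underlies SPH can be illustrated with the standard 1D cubic-spline kernel and a particle density summation (an illustrative sketch only — not the authors' LS-DYNA setup):

```python
import numpy as np

def w_cubic(q, h):
    """Standard 1D cubic-spline SPH kernel, q = |x|/h, with support 2h."""
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w * (2.0 / (3.0 * h))          # 1D normalization factor

# Density summation rho_i = sum_j m_j W(|x_i - x_j|, h) over uniform particles
x = np.linspace(0.0, 1.0, 101)            # particle positions
dx = x[1] - x[0]
m = dx * 1000.0                           # particle mass for a target density of 1000
h = 2.0 * dx                              # smoothing length
rho_mid = np.sum(m * w_cubic(np.abs(0.5 - x) / h, h))
```

Because field values are carried by particles and interpolated through the kernel, no mesh exists to tangle under the large distortions of cutting — the property the abstract highlights.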
Schwinger boson approach to the fully screened Kondo model.
Rech, J; Coleman, P; Zarand, G; Parcollet, O
2006-01-13
We apply the Schwinger boson scheme to the fully screened Kondo model and generalize the method to include antiferromagnetic interactions between ions. Our approach captures the Kondo crossover from local moment behavior to a Fermi liquid with a nontrivial Wilson ratio. When applied to the two-impurity model, the mean-field theory describes the "Varma-Jones" quantum phase transition between a valence bond state and a heavy Fermi liquid.
Kallen Lehman approach to 3D Ising model
Canfora, F.
2007-03-01
A “Kallen-Lehman” approach to Ising model, inspired by quantum field theory à la Regge, is proposed. The analogy with the Kallen-Lehman representation leads to a formula for the free-energy of the 3D model with few free parameters which could be matched with the numerical data. The possible application of this scheme to the spin glass case is shortly discussed.
Modelling approaches in sedimentology: Introduction to the thematic issue
Joseph, Philippe; Teles, Vanessa; Weill, Pierre
2016-09-01
As an introduction to this thematic issue on "Modelling approaches in sedimentology", this paper gives an overview of the workshop held in Paris on 7 November 2013 during the 14th Congress of the French Association of Sedimentologists. A synthesis of the workshop in terms of concepts, spatial and temporal scales, constraining data, and scientific challenges is first presented, then a discussion on the possibility of coupling different models, the industrial needs, and the new potential domains of research is exposed.
Modeling Electronic Circular Dichroism within the Polarizable Embedding Approach
DEFF Research Database (Denmark)
Nørby, Morten S; Olsen, Jógvan Magnus Haugaard; Steinmann, Casper
2017-01-01
We present a systematic investigation of the key components needed to model single chromophore electronic circular dichroism (ECD) within the polarizable embedding (PE) approach. By relying on accurate forms of the embedding potential, where especially the inclusion of local field effects ... sampling. We show that a significant number of snapshots are needed to avoid artifacts in the calculated electronic circular dichroism parameters due to insufficient configurational sampling, thus highlighting the efficiency of the PE model.
Computational Models of Spreadsheet Development: Basis for Educational Approaches
Hodnigg, Karin; Mittermeir, Roland T
2008-01-01
Among the multiple causes of high error rates in spreadsheets, lack of proper training and of deep understanding of the computational model upon which spreadsheet computations rest might not be the least issue. The paper addresses this problem by presenting a didactical model focussing on cell interaction, thus exceeding the atomicity of cell computations. The approach is motivated by an investigation of how different spreadsheet systems handle certain computational issues implied by moving cells, copy-paste operations, or recursion.
Modeling Water Shortage Management Using an Object-Oriented Approach
Wang, J.; Senarath, S.; Brion, L.; Niedzialek, J.; Novoa, R.; Obeysekera, J.
2007-12-01
As a result of the increasing global population and the resulting urbanization, water shortage issues have received increased attention throughout the world. Water supply has not been able to keep up with increased demand for water, especially during times of drought. The use of an object-oriented (OO) approach coupled with efficient mathematical models is an effective tool in addressing discrepancies between water supply and demand. Object-oriented modeling has been proven powerful and efficient in simulating natural behavior. This research presents a way to model water shortage management using the OO approach. Three groups of conceptual components using the OO approach are designed for the management model. The first group encompasses evaluation of natural behaviors and possible related management options. This evaluation includes assessing any discrepancy that might exist between water demand and supply. The second group is for decision making, which includes the determination of water use cutback amount and duration using established criteria. The third group is for implementation of the management options, which are restrictions of water usage at a local or regional scale. The loop is closed through a feedback mechanism where continuity in the time domain is established. As in many other regions, drought management is very important in south Florida. The Regional Simulation Model (RSM) is a finite volume, fully integrated hydrologic model used by the South Florida Water Management District to evaluate regional response to various planning alternatives including drought management. A trigger module was developed for RSM that encapsulates the OO approach to water shortage management. Rigorous testing of the module was performed using historical south Florida conditions. Keywords: Object-oriented, modeling, water shortage management, trigger module, Regional Simulation Model
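The three component groups — evaluation, decision making, implementation — map naturally onto classes; a toy sketch (class names and cutback thresholds are invented, not RSM's):

```python
# Illustrative OO decomposition of the three conceptual groups described above.
class Evaluator:
    def gap(self, supply, demand):
        """Evaluation: discrepancy between water supply and demand."""
        return max(0.0, demand - supply)

class DecisionMaker:
    def cutback(self, gap, demand):
        """Decision making: cutback fraction chosen from (hypothetical) criteria."""
        if gap == 0:
            return 0.0
        severity = gap / demand
        return 0.15 if severity < 0.2 else 0.30      # phase I vs. phase II restriction

class Implementer:
    def apply(self, demand, cutback):
        """Implementation: restrict usage; feedback closes the loop next time step."""
        return demand * (1.0 - cutback)

supply, demand = 80.0, 100.0
gap = Evaluator().gap(supply, demand)
restricted = Implementer().apply(demand, DecisionMaker().cutback(gap, demand))
```

Each class can evolve independently — the encapsulation that makes a trigger module like the one described easy to embed in a larger hydrologic model.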
Urban Modelling with Typological Approach. Case Study: Merida, Yucatan, Mexico
Rodriguez, A.
2017-08-01
In three-dimensional models of urban historical reconstruction, lost contextual architecture is difficult to model because, in contrast to the most important monuments, few written references to it survive. This is the case of Merida, Yucatan, Mexico during the Colonial Era (1542-1810), which has lost much of its heritage. An alternative for offering a hypothetical view of these elements is a typological-parametric definition that allows a 3D modeling approach to the most common features of this heritage evidence.
Few-body systems in a shell-model approach
Energy Technology Data Exchange (ETDEWEB)
Toelle, Simon
2014-02-10
In this thesis, I introduce and compare an implementation of two different shell models for physical systems consisting of multiple identical bosons. In the main part, the shell model is used to study the energy spectra of bosons with contact interactions in a harmonic confinement as well as those of unconfined He clusters. The convergence of the shell-model results is investigated in detail as the size of the model space is increased. Furthermore, possible improvements such as the smearing of contact interactions or a unitary transformation of the potentials are utilised and assessed. Systems with up to twelve bosons are considered. Moreover, I test a procedure to determine scattering observables from the energy spectra of fermions in a harmonic confinement. Finally, the position and width of resonances are extracted from the dependence of the energy spectra on the oscillator length.
Empirical Analysis of Xinjiang's Bilateral Trade: Gravity Model Approach
Institute of Scientific and Technical Information of China (English)
CHEN Xuegang; YANG Zhaoping; LIU Xuling
2008-01-01
Based on the basic trade gravity model and Xinjiang's practical situation, new explanatory variables (GDP, GDPpc and SCO) are introduced to build an extended trade gravity model fitting Xinjiang's bilateral trade. From the empirical analysis of this model, it is proposed that those three variables affect Xinjiang's bilateral trade positively, whereas geographic distance is found to be a significant factor influencing Xinjiang's bilateral trade negatively. Then, by the extended trade gravity model, this article quantitatively analyzes the present trade situation between Xinjiang and its main trade partners in 2004. The results indicate that Xinjiang cooperates successfully with most of its trade partners in terms of present economic scale and development level. Xinjiang has successfully established trade partnerships with Central Asia, Central and Eastern Europe, Western Europe, East Asia and South Asia. However, the development of foreign trade with West Asia is much slower. Finally, some suggestions on developing Xinjiang's foreign trade are put forward.
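Estimating a log-linear gravity model amounts to an ordinary least-squares fit; an illustrative sketch on synthetic data (the paper's full specification also includes GDPpc and an SCO membership variable):

```python
import numpy as np

# Toy gravity regression: ln(T_ij) = b0 + b1 ln(GDP_j) + b2 ln(D_ij) + noise
rng = np.random.default_rng(3)
n = 60
ln_gdp = rng.uniform(4, 10, n)           # partner GDP (log), made-up data
ln_dist = rng.uniform(6, 9, n)           # geographic distance (log), made-up data
true_b = np.array([1.0, 0.8, -1.1])      # trade rises with GDP, falls with distance
X = np.column_stack([np.ones(n), ln_gdp, ln_dist])
ln_trade = X @ true_b + rng.normal(0, 0.2, n)

beta, *_ = np.linalg.lstsq(X, ln_trade, rcond=None)
```

The signs of the recovered coefficients reproduce the qualitative finding above: a positive GDP elasticity and a negative distance elasticity.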
Comparative flood damage model assessment: towards a European approach
Directory of Open Access Journals (Sweden)
B. Jongman
2012-12-01
There is a wide variety of flood damage models in use internationally, differing substantially in their approaches and economic estimates. Since these models are being used more and more as a basis for investment and planning decisions on an increasingly large scale, there is a need to reduce the uncertainties involved and develop a harmonised European approach, in particular with respect to the EU Flood Risks Directive. In this paper we present a qualitative and quantitative assessment of seven flood damage models, using two case studies of past flood events in Germany and the United Kingdom. The qualitative analysis shows that modelling approaches vary strongly, and that current methodologies for estimating infrastructural damage are not as well developed as methodologies for the estimation of damage to buildings. The quantitative results show that the model outcomes are very sensitive to uncertainty in both vulnerability (i.e. depth–damage functions) and exposure (i.e. asset values), whereby the former has a larger effect than the latter. We conclude that care needs to be taken when using aggregated land use data for flood risk assessment, and that it is essential to adjust asset values to the regional economic situation and property characteristics. We call for the development of a flexible but consistent European framework that applies best practice from existing models while providing room for including necessary regional adjustments.
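The vulnerability component these models share is a depth-damage function; a minimal sketch (curve values are invented, not taken from any of the seven models):

```python
import numpy as np

def building_damage(depth_m, asset_value, depth_damage):
    """Damage = asset value x damage fraction, interpolated from a
    depth-damage curve (the sensitivity driver identified above)."""
    depths, fractions = depth_damage
    return asset_value * np.interp(depth_m, depths, fractions)

# Hypothetical depth-damage curve: damage fraction vs. inundation depth (m)
curve = ([0.0, 0.5, 1.0, 2.0, 4.0], [0.0, 0.15, 0.3, 0.55, 0.85])

loss = building_damage(1.5, 200_000.0, curve)   # 1.5 m flood over a 200k asset
```

Because the estimate is the product of curve and asset value, uncertainty in either propagates multiplicatively — which is why the paper stresses regionally adjusted asset values.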
Similarity transformation approach to identifiability analysis of nonlinear compartmental models.
Vajda, S; Godfrey, K R; Rabitz, H
1989-04-01
Through use of the local state isomorphism theorem instead of the algebraic equivalence theorem of linear systems theory, the similarity transformation approach is extended to nonlinear models, resulting in finitely verifiable sufficient and necessary conditions for global and local identifiability. The approach requires testing of certain controllability and observability conditions, but in many practical examples these conditions prove very easy to verify. In principle the method also involves nonlinear state variable transformations, but in all of the examples presented in the paper the transformations turn out to be linear. The method is applied to an unidentifiable nonlinear model and a locally identifiable nonlinear model, and these are the first nonlinear models other than bilinear models where the reason for lack of global identifiability is nontrivial. The method is also applied to two models with Michaelis-Menten elimination kinetics, both of considerable importance in pharmacokinetics, and for both of which the complicated nature of the algebraic equations arising from the Taylor series approach has hitherto defeated attempts to establish identifiability results for specific input functions.
a Study of Urban Stormwater Modeling Approach in Singapore Catchment
Liew, S. C.; Liong, S. Y.; Vu, M. T.
2011-07-01
Urbanization has the direct effect of increasing the amount of surface runoff to be discharged through man-made drainage systems. Thus, Singapore's rapid urbanization has drawn great attention to flooding issues. In view of this, a proper stormwater modeling approach is necessary for the assessment, planning, design, and control of storm and combined sewerage systems. Impacts of urbanization on surface runoff and catchment flooding in Singapore are studied in this paper. In this study, the application of SOBEK-urban 1D is introduced on model catchments and a hypothetical catchment model is created for simulation purposes. The stormwater modeling approach using SOBEK-urban offers a comprehensive modeling tool for simple or extensive urban drainage systems consisting of sewers and open channels, regardless of the size and complexity of the network. The findings from the present study show that stormwater modeling is able to identify flood areas and the impact of the anticipated sea level rise on the urban drainage network. Consequently, the performance of the urban drainage system can be improved and early prevention approaches can be carried out.
The Generalised Ecosystem Modelling Approach in Radiological Assessment
Energy Technology Data Exchange (ETDEWEB)
Klos, Richard
2008-03-15
An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI, through its modelling team CLIMB, commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated, and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment.
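The compartment-and-flux structure of such a model can be illustrated with a toy two-compartment radionuclide balance (transfer rates and decay constant are invented, not GEMA parameters):

```python
import numpy as np

# Toy compartment model: activity moves between two linked surface compartments
# via first-order transfer fluxes, with radioactive decay, integrated by Euler.
k12, k21, lam = 0.05, 0.01, 0.001      # transfer rates and decay constant (1/yr)
dt, steps = 0.1, 5000                   # 500 years of simulated time
A = np.array([1.0, 0.0])               # initial inventory: all in compartment 1

for _ in range(steps):
    flux = k12 * A[0] - k21 * A[1]     # net transfer, compartment 1 -> 2
    A = A + dt * np.array([-flux - lam * A[0], flux - lam * A[1]])

total = A.sum()                         # decays as exp(-lam * t)
```

Chaining many such compartments through water and solid fluxes, and letting the rate coefficients change as the landscape evolves, is the modular pattern the abstract describes.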
Energy Technology Data Exchange (ETDEWEB)
Li, Haiyan [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Huang, Yunbao, E-mail: Huangyblhy@gmail.com [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Jiang, Shaoen, E-mail: Jiangshn@vip.sina.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Jing, Longfei, E-mail: scmyking_2008@163.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Tianxuan, Huang; Ding, Yongkun [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China)
2015-11-15
Highlights: • A unified modeling approach for physical experiment design is presented. • Any laser facility can be flexibly defined and included with two scripts. • Complex targets and laser beams can be parametrically modeled for optimization. • Automatic mapping of laser beam energy facilitates target shape optimization. - Abstract: Physical experiment design and optimization is essential for laser driven inertial confinement fusion due to the high cost of each shot. However, only limited experiments with simple structures or shapes on several laser facilities can be designed and evaluated in available codes, and targets are usually defined by programming, which makes complex-shape target design and optimization on arbitrary laser facilities difficult. A unified modeling approach for physical experiment design and optimization on any laser facility is presented in this paper. Its core ideas are: (1) any laser facility can be flexibly defined and included with two scripts; (2) complex-shape targets and laser beams can be parametrically modeled based on features; (3) an automatic scheme maps laser beam energy onto discrete mesh elements of targets, enabling targets or laser beams to be optimized without any additional interactive modeling or programming; and (4) efficient computation algorithms are additionally presented to evaluate radiation symmetry on the target. Finally, examples are demonstrated to validate the significance of such a unified modeling approach for physical experiment design and optimization in laser driven inertial confinement fusion.
A vector relational data modeling approach to Insider threat intelligence
Kelly, Ryan F.; Anderson, Thomas S.
2016-05-01
We address the problem of detecting insider threats before they can do harm. In many cases, co-workers notice indications of suspicious activity prior to insider threat attacks. A partial solution to this problem requires an understanding of how information can better traverse the communication network between human intelligence and insider threat analysts. Our approach employs modern mobile communications technology and scale free network architecture to reduce the network distance between human sensors and analysts. In order to solve this problem, we propose a Vector Relational Data Modeling approach to integrate human "sensors," geo-location, and existing visual analytics tools. This integration problem is known to be difficult due to quadratic increases in cost associated with complex integration solutions. A scale free network integration approach using vector relational data modeling is proposed as a method for reducing network distance without increasing cost.
A discrete Lagrangian based direct approach to macroscopic modelling
Sarkar, Saikat; Nowruzpour, Mohsen; Reddy, J. N.; Srinivasa, A. R.
2017-01-01
A direct discrete Lagrangian based approach, designed at a length scale of interest, to characterize the response of a body is proposed. The main idea is to understand the dynamics of a deformable body via a Lagrangian corresponding to a coupled interaction of rigid particles in the reduced dimension. We argue that the usual practice of describing the laws of a deformable body in the continuum limit is redundant, because for most practical problems analytical solutions are not available. Since the continuum limit is not taken, the framework automatically relaxes the requirement of differentiability of field variables. The discrete Lagrangian based approach is illustrated by deriving an equivalent of the Euler-Bernoulli beam model. A few test examples are solved, which demonstrate that the derived non-local model predicts lower deflections in comparison to classical Euler-Bernoulli beam solutions. We have also included crack propagation in thin structures for isotropic and anisotropic cases using the Lagrangian based approach.
Reconciliation with oneself and with others: From approach to model
Directory of Open Access Journals (Sweden)
Nikolić-Ristanović Vesna
2010-01-01
The paper presents the approach to dealing with war and its consequences that was developed within the Victimology Society of Serbia over the last five years, in the framework of the Association Joint Action for Truth and Reconciliation (ZAIP). First, a short review of the Association and of the process through which the ZAIP approach to dealing with the past was developed is presented. Then a detailed description of the approach itself, with identification of its most important specificities, follows. In the conclusion, next steps are suggested, aimed at developing a model of reconciliation that is grounded in the ZAIP approach and appropriate to the social context of Serbia and its surroundings.
EXTENDED MODEL OF COMPETITIVENESS THROUGH APPLICATION OF NEW APPROACH DIRECTIVES
Directory of Open Access Journals (Sweden)
Slavko Arsovski
2009-03-01
The basic subject of this work is a model of the impact of the New Approach directives on product quality and safety and on the competitiveness of our companies. The work rests on hypotheses grounded in experts' experience, since the infrastructure for applying the New Approach directives has not been examined until now: it is not known which products or industries of Serbia are covered by the New Approach directives and the CE mark, nor what the effects of using the CE mark are. This work should indicate existing reserves of quality and product safety, the level of possible improvement in competitiveness, and the potential for increased profit through fulfilling the requirements of the New Approach directives.
Vibro-acoustics of porous materials - waveguide modeling approach
DEFF Research Database (Denmark)
Darula, Radoslav; Sorokin, Sergey V.
2016-01-01
The porous material is considered as a compound multi-layered waveguide (i.e. a fluid layer surrounded with elastic layers) with traction-free boundary conditions. The attenuation of the vibro-acoustic waves in such a material is assessed. This approach is compared with a conventional Biot's model ... in porous materials.
A novel Monte Carlo approach to hybrid local volatility models
A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.
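A plain Euler-Maruyama Monte Carlo scheme for a pure local volatility model is a useful reference point (a simplification — the paper's hybrid stochastic local volatility machinery is not reproduced, and the volatility surface below is invented):

```python
import numpy as np

# Euler-Maruyama Monte Carlo for dS_t = r S_t dt + sigma(S_t, t) S_t dW_t
rng = np.random.default_rng(4)
S0, r, T, n_steps, n_paths = 100.0, 0.02, 1.0, 200, 50_000
dt = T / n_steps

def sigma_loc(S, t):
    """Hypothetical local volatility surface: higher vol at low spot (skew)."""
    return 0.2 * (100.0 / np.maximum(S, 1e-8)) ** 0.3

S = np.full(n_paths, S0)
for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    S = S * (1.0 + r * dt + sigma_loc(S, i * dt) * dW)
    S = np.maximum(S, 1e-8)             # crude positivity floor

# Discounted payoff of an at-the-money European call
price = np.exp(-r * T) * np.mean(np.maximum(S - 100.0, 0.0))
```

A quick sanity check on any such scheme is the martingale property: the discounted mean of the simulated spot should stay close to S0.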
Teaching Modeling with Partial Differential Equations: Several Successful Approaches
Myers, Joseph; Trubatch, David; Winkel, Brian
2008-01-01
We discuss the introduction and teaching of partial differential equations (heat and wave equations) via modeling physical phenomena, using a new approach that encompasses constructing difference equations and implementing these in a spreadsheet, numerically solving the partial differential equations using the numerical differential equation…
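The difference-equation approach described above can be sketched for the heat equation: one explicit finite-difference update, the same arithmetic a spreadsheet would carry out cell by cell (grid size and initial condition chosen only for illustration):

```python
def heat_step(u, alpha):
    """One explicit finite-difference step for u_t = u_xx with fixed endpoints.
    alpha = dt/dx^2 must be <= 0.5 for stability of the explicit scheme."""
    return [u[0]] + [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

# Rod with both ends held at 0 and a unit hot spot in the middle.
u = [0.0] * 21
u[10] = 1.0
for _ in range(200):
    u = heat_step(u, 0.25)
```

After the loop the hot spot has diffused outward: the peak has dropped below 1 while the profile stays symmetric about the center.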
A Behavioral Decision Making Modeling Approach Towards Hedging Services
Pennings, J.M.E.; Candel, M.J.J.M.; Egelkraut, T.M.
2003-01-01
This paper takes a behavioral approach toward the market for hedging services. A behavioral decision-making model is developed that provides insight into how and why owner-managers decide the way they do regarding hedging services. Insight into those choice processes reveals information needed by fi
A fuzzy approach to the Weighted Overlap Dominance model
DEFF Research Database (Denmark)
Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt
2013-01-01
Here we explore the Weighted Overlap Dominance (WOD) model from a fuzzy perspective and its outranking approach to decision support and multidimensional interval analysis, in an interactive way, where input data can take the form of uniquely-graded or interval-valued information. Firstly, imprecision measures...
Methodological Approach for Modeling of Multienzyme in-pot Processes
DEFF Research Database (Denmark)
Andrade Santacoloma, Paloma de Gracia; Roman Martinez, Alicia; Sin, Gürkan;
2011-01-01
This paper presents a methodological approach for modeling multi-enzyme in-pot processes. The methodology is exemplified stepwise through the bi-enzymatic production of N-acetyl-D-neuraminic acid (Neu5Ac) from N-acetyl-D-glucosamine (GlcNAc). In this case study, sensitivity analysis is also used...
Towards modeling future energy infrastructures - the ELECTRA system engineering approach
DEFF Research Database (Denmark)
Uslar, Mathias; Heussen, Kai
2016-01-01
Within this contribution, we provide an overview based on previous work conducted in the ELECTRA project to come up with a consistent method for modeling the ELECTRA WoC approach according to the methods established with the M/490 mandate of the European Commission. We will motivate the use of th...
Pruning Chinese trees : an experimental and modelling approach
Zeng, Bo
2002-01-01
Pruning of trees, in which some branches are removed from the lower crown of a tree, has been extensively used in China in silvicultural management for many purposes. With an experimental and modelling approach, the effects of pruning on tree growth and on the harvest of plant material were studied.
Evaluating Interventions with Multimethod Data: A Structural Equation Modeling Approach
Crayen, Claudia; Geiser, Christian; Scheithauer, Herbert; Eid, Michael
2011-01-01
In many intervention and evaluation studies, outcome variables are assessed using a multimethod approach comparing multiple groups over time. In this article, we show how evaluation data obtained from a complex multitrait-multimethod-multioccasion-multigroup design can be analyzed with structural equation models. In particular, we show how the…
A Metacognitive-Motivational Model of Surface Approach to Studying
Spada, Marcantonio M.; Moneta, Giovanni B.
2012-01-01
In this study, we put forward and tested a model of how surface approach to studying during examination preparation is influenced by the trait variables of motivation and metacognition and the state variables of avoidance coping and evaluation anxiety. A sample of 528 university students completed, one week before examinations, the following…
A New Approach for Testing the Rasch Model
Kubinger, Klaus D.; Rasch, Dieter; Yanagida, Takuya
2011-01-01
Though calibration of an achievement test within psychological and educational context is very often carried out by the Rasch model, data sampling is hardly designed according to statistical foundations. However, Kubinger, Rasch, and Yanagida (2009) recently suggested an approach for the determination of sample size according to a given Type I and…
Comparing State SAT Scores Using a Mixture Modeling Approach
Kim, YoungKoung Rachel
2009-01-01
Presented at the national conference of AERA (American Educational Research Association) in April 2009. The large variability of the SAT-taker population across states makes state-by-state comparisons of SAT scores challenging. Using a mixture modeling approach, the current study therefore presents a method of identifying subpopulations in terms…
The Bipolar Approach: A Model for Interdisciplinary Art History Courses.
Calabrese, John A.
1993-01-01
Describes a college level art history course based on the opposing concepts of Classicism and Romanticism. Contends that all creative work, such as film or architecture, can be categorized according to this bipolar model. Includes suggestions for objects to study and recommends this approach for art education at all education levels. (CFR)
Non-frontal model based approach to forensic face recognition
Dutta, Abhishek; Veldhuis, Raymond; Spreeuwers, Luuk
2012-01-01
In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance vie
Smeared crack modelling approach for corrosion-induced concrete damage
DEFF Research Database (Denmark)
Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik
2017-01-01
The modelling approach is compared to experimental data obtained by digital image correlation and published in the literature. Excellent agreement between experimentally observed and numerically predicted crack patterns at the micro and macro scales indicates the capability of the modelling approach to accurately capture corrosion...
Atomistic approach for modeling metal-semiconductor interfaces
DEFF Research Database (Denmark)
Stradi, Daniele; Martinez, Umberto; Blom, Anders
2016-01-01
realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping — and bias — modifies the Schottky barrier, and how finite size models (the slab approach) are unable to describe these interfaces...
CFD Approaches for Modelling Bubble Entrainment by an Impinging Jet
Directory of Open Access Journals (Sweden)
Martin Schmidtke
2009-01-01
Full Text Available This contribution presents different approaches for the modeling of gas entrainment under water by a plunging jet. Since the generation of bubbles happens on a scale which is smaller than the bubbles, this process cannot be resolved in meso-scale simulations, which include the full length of the jet and its environment. This is why the gas entrainment has to be modeled in meso-scale simulations. In the frame of a Euler-Euler simulation, the local morphology of the phases has to be considered in the drag model. For example, the gas is a continuous phase above the water level but bubbly below the water level. Various drag models are tested and their influence on the gas void fraction below the water level is discussed. The algebraic interface area density (AIAD model applies a drag coefficient for bubbles and a different drag coefficient for the free surface. If the AIAD model is used for the simulation of impinging jets, the gas entrainment depends on the free parameters included in this model. The calculated gas entrainment can be adapted via these parameters. Therefore, an advanced AIAD approach could be used in future for the implementation of models (e.g., correlations for the gas entrainment.
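As a rough illustration of the AIAD idea of switching the drag treatment with local morphology, one can blend a bubble drag coefficient and a free-surface drag coefficient by the local gas fraction. The coefficients, cutoff, and blending width below are invented for illustration and are not the model's actual free parameters:

```python
import math

def aiad_drag(alpha_gas, cd_bubble=0.44, cd_surface=0.01, a_cut=0.3, width=0.05):
    """Blend bubble and free-surface drag coefficients by the local gas fraction,
    in the spirit of the AIAD morphology detection (all parameters hypothetical)."""
    # Smooth switch: ~1 in bubbly flow (low gas fraction), ~0 near the free surface.
    f_bubble = 1.0 / (1.0 + math.exp((alpha_gas - a_cut) / width))
    return f_bubble * cd_bubble + (1.0 - f_bubble) * cd_surface
```

In a Euler-Euler solver a function of this shape would be evaluated per cell; tuning the cutoff and width is exactly the kind of free-parameter adjustment of entrainment the abstract describes.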
Energy Technology Data Exchange (ETDEWEB)
Wessel, Silvia [Ballard Materials Products; Harvey, David [Ballard Materials Products
2013-06-28
The durability of PEM fuel cells is a primary requirement for large scale commercialization of these power systems in transportation and stationary market applications that target operational lifetimes of 5,000 hours and 40,000 hours by 2015, respectively. Key degradation modes contributing to fuel cell lifetime limitations have been largely associated with the platinum-based cathode catalyst layer. Furthermore, as fuel cells are driven to low cost materials and lower catalyst loadings in order to meet the cost targets for commercialization, the catalyst durability has become even more important. While over the past few years significant progress has been made in identifying the underlying causes of fuel cell degradation and key parameters that greatly influence the degradation rates, many gaps with respect to knowledge of the driving mechanisms still exist; in particular, the acceleration of the mechanisms due to different structural compositions and under different fuel cell conditions remains an area not well understood. The focus of this project was to address catalyst durability by using a dual path approach that coupled an extensive range of experimental analysis and testing with a multi-scale modeling approach. With this, the major technical areas/issues of catalyst and catalyst layer performance and durability that were addressed are: 1. Catalyst and catalyst layer degradation mechanisms (Pt dissolution, agglomeration, Pt loss, e.g. Pt in the membrane, carbon oxidation and/or corrosion). a. Driving force for the different degradation mechanisms. b. Relationships between MEA performance, catalyst and catalyst layer degradation and operational conditions, catalyst layer composition, and structure. 2. Materials properties a. Changes in catalyst, catalyst layer, and MEA materials properties due to degradation. 3. Catalyst performance a. Relationships between catalyst structural changes and performance. b. Stability of the three-phase boundary and its effect on
Glass Furnace Model (GFM) development and technology transfer program final report.
Energy Technology Data Exchange (ETDEWEB)
Lottes, S. A.; Petrick, M.; Energy Systems
2007-12-04
indices into the simulation to facilitate optimization studies with regard to productivity, energy use and emissions. Midway through the Part II program, however, at the urging of the industrial consortium members, the decision was made to refocus limited resources on transfer of the existing GFM 2.0 software to the industry to speed up commercialization of the technology. This decision, in turn, necessitated a de-emphasis of the development of the planned final version of the GFM software that had full multiphase capability, GFM 3.0. As a result, version 3.0 was not completed; considerable progress, however, was made before the effort was terminated. The objectives of the Technology Transfer program were to transfer the Glass Furnace Model (GFM) to the glass industry and to promote its widespread use by providing the requisite technical support to allow effective use of the software. GFM Version 2.0 was offered at no cost on a trial, six-month basis to expedite its introduction to and use by the industry. The trial licenses were issued to generate a much more thorough user beta test of the software than the relatively small amount completed by the consortium members prior to the release of version 2.0.
Approach for workflow modeling using π-calculus
Institute of Scientific and Technical Information of China (English)
杨东; 张申生
2003-01-01
As a variant of process algebra, π-calculus can describe the interactions between evolving processes. By modeling an activity as a process interacting with other processes through ports, this paper presents a new approach: representing workflow models using π-calculus. As a result, the model can characterize the dynamic behaviors of the workflow process in terms of the LTS (Labeled Transition Semantics) semantics of π-calculus. The main advantage of the workflow model's formal semantics is that it allows for verification of the model's properties, such as deadlock-freedom and normal termination. Moreover, the equivalence of workflow models can be checked through the weak bisimulation theorem in the π-calculus, thus facilitating the optimization of business processes.
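Full weak bisimulation over π-calculus terms needs substantial machinery, but the kind of property checked on the resulting LTS, such as deadlock-freedom, can be sketched as a plain reachability test over a toy transition system (the workflow below is hypothetical):

```python
def deadlock_free(lts, start, final_states):
    """Every state reachable from `start` either is a normal final state
    or has at least one outgoing transition (no stuck non-final states)."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        moves = lts.get(s, [])
        if not moves and s not in final_states:
            return False  # reachable, stuck, and not a normal termination: deadlock
        for _action, nxt in moves:
            stack.append(nxt)
    return True

# Hypothetical two-activity workflow: receive an order, then ship, then done.
workflow = {
    "start": [("receive", "received")],
    "received": [("ship", "done")],
    "done": [],
}
```

A workflow whose only path ends in a non-final state would fail the same check, which is the LTS counterpart of the deadlock-freedom property mentioned in the abstract.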
Multiphysics modeling using COMSOL a first principles approach
Pryor, Roger W
2011-01-01
Multiphysics Modeling Using COMSOL rapidly introduces the senior level undergraduate, graduate or professional scientist or engineer to the art and science of computerized modeling for physical systems and devices. It offers a step-by-step modeling methodology through examples that are linked to the Fundamental Laws of Physics through a First Principles Analysis approach. The text explores a breadth of multiphysics models in coordinate systems that range from 1D to 3D and introduces the readers to the numerical analysis modeling techniques employed in the COMSOL Multiphysics software. After readers have built and run the examples, they will have a much firmer understanding of the concepts, skills, and benefits acquired from the use of computerized modeling techniques to solve their current technological problems and to explore new areas of application for their particular technological areas of interest.
Evaluation of Workflow Management Systems - A Meta Model Approach
Directory of Open Access Journals (Sweden)
Michael Rosemann
1998-11-01
Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction to the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users in specifying their requirements for a workflow management system.
Modeling Laser Effects on the Final Optics in Simulated IFE Environments
Energy Technology Data Exchange (ETDEWEB)
Nasr Ghoniem
2004-08-14
When laser light interacts with a material's surface, photons rapidly heat the electronic system, resulting in very fast energy transfer to the underlying atomic crystal structure. The intense rate of energy deposition in the shallow sub-surface layer creates atomic defects, which alter the optical characteristics of the surface itself. In addition, the small fraction of energy absorbed in the mirror leads to its global deformation by thermal and gravity loads (especially for large surface area mirrors). The aim of this research was to model the deformation of mirror surfaces at multiple length and time scales for applications in advanced Inertial Fusion Energy (IFE) systems. The goal is to control micro- and macro-deformations by material system and structural design. A parallel experimental program at UCSD has been set up to validate the modeling efforts. The main objective of the research program was to develop computer models and simulations for Laser-Induced Damage (LID) in reflective and transmissive final optical elements in IFE laser-based systems. A range of materials and material concepts were investigated and verified by experiments at UCSD. Four different classes of materials were considered: (1) High-reflectivity FCC metals (e.g. Cu, Au, Ag, and Al), (2) BCC metals (e.g. Mo, Ta and W), (3) Advanced material concepts (e.g. functionally graded material systems, amorphous coatings, and layered structures), and (4) Transmissive dielectrics (e.g. fused SiO2). In this report, we give a summary of the three-year project, followed by details in three areas: (1) Characterization of laser-induced damage; (2) Theory development for LIDT; and (3) Design of IFE reflective laser mirrors.
A simplified GIS approach to modeling global leaf water isoscapes.
Directory of Open Access Journals (Sweden)
Jason B West
Full Text Available The stable hydrogen (δ2H) and oxygen (δ18O) isotope ratios of organic and inorganic materials record biological and physical processes through the effects of substrate isotopic composition and fractionations that occur as reactions proceed. At large scales, these processes can exhibit spatial predictability because of the effects of coherent climatic patterns over the Earth's surface. Attempts to model spatial variation in the stable isotope ratios of water have been made for decades. Leaf water has a particular importance for some applications, including plant organic materials that record spatial and temporal climate variability and that may be a source of food for migrating animals. It is also an important source of the variability in the isotopic composition of atmospheric gases. Although efforts to model global-scale leaf water isotope ratio spatial variation have been made (especially of δ18O), significant uncertainty remains in models and their execution across spatial domains. We introduce here a Geographic Information System (GIS) approach to the generation of global, spatially-explicit isotope landscapes (= isoscapes) of "climate normal" leaf water isotope ratios. We evaluate the approach and the resulting products by comparison with simulation model outputs and point measurements, where obtainable, over the Earth's surface. The isoscapes were generated using biophysical models of isotope fractionation and spatially continuous precipitation isotope and climate layers as input model drivers. Leaf water δ18O isoscapes produced here generally agreed with latitudinal averages from GCM/biophysical model products, as well as mean values from point measurements. These results show global-scale spatial coherence in leaf water isotope ratios, similar to that observed for precipitation, and validate the GIS approach to modeling leaf water isotopes. These results demonstrate that relatively simple models of leaf water enrichment
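Biophysical fractionation models of the kind driving such isoscapes are typically of the Craig-Gordon family. A minimal linearized sketch of steady-state leaf water enrichment is shown below; all parameter values are assumed for illustration and are not taken from the paper:

```python
def leaf_water_enrichment(eps_plus, eps_k, delta_v, rh):
    """Linearized Craig-Gordon-type steady-state leaf water 18O enrichment over
    source water (per-mil notation): D_e = eps+ + eps_k + (D_v - eps_k) * ea/ei,
    with the vapor pressure ratio ea/ei approximated here by relative humidity."""
    return eps_plus + eps_k + (delta_v - eps_k) * rh

# Illustrative (assumed) values: equilibrium fractionation ~9.8 permil at 25 C,
# kinetic fractionation ~28 permil, vapor ~-10 permil vs source water, RH = 0.6.
d_e = leaf_water_enrichment(9.8, 28.0, -10.0, 0.6)
```

Evaluating this on gridded humidity, temperature, and precipitation-isotope layers is what turns the point equation into a spatially continuous isoscape; note the expected behavior that higher humidity yields lower enrichment.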
An efficient approach for shadow detection based on Gaussian mixture model
Institute of Scientific and Technical Information of China (English)
韩延祥; 张志胜; 陈芳; 陈恺
2014-01-01
An efficient approach was proposed for discriminating shadows from moving objects. In the background subtraction stage, moving objects were extracted. Then, the initial classification of moving shadow pixels and foreground object pixels was performed using color invariant features. In the shadow model learning stage, instead of a single Gaussian distribution, it was assumed that the density function computed on the values of chromaticity difference or brightness difference can be modeled as a mixture of Gaussians consisting of two density functions. Meanwhile, the Gaussian parameter estimation was performed using the EM algorithm. The estimates were used to obtain the shadow mask according to two constraints. Finally, experiments were carried out. The visual experiment results confirm the effectiveness of the proposed method. Quantitative results in terms of the shadow detection rate and the shadow discrimination rate (the maximum values are 85.79% and 97.56%, respectively) show that the proposed approach achieves a satisfying result with a post-processing step.
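The two-component Gaussian mixture fitted by EM in the shadow-learning stage can be sketched in one dimension as follows; the equal-prior simplification and the synthetic data are assumptions made to keep the example short:

```python
import math
import random

def em_two_gaussians(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (equal priors for brevity)."""
    mu1, mu2 = min(data), max(data)               # spread-out initialization
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in data:
            p1 = math.exp(-((x - mu1) ** 2) / (2 * s1 ** 2)) / s1
            p2 = math.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)) / s2
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means and standard deviations.
        w1 = sum(r)
        w2 = len(data) - w1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / w1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / w2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / w1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / w2) or 1e-6
    return mu1, mu2

# Synthetic stand-in for chromaticity differences: shadow pixels near 0.1,
# foreground object pixels near 0.6.
rng = random.Random(1)
sample = ([rng.gauss(0.1, 0.05) for _ in range(300)]
          + [rng.gauss(0.6, 0.05) for _ in range(300)])
m1, m2 = em_two_gaussians(sample)
```

The two recovered means play the role of the shadow and foreground density modes from which the shadow mask constraints would be derived.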
A dynamic object-oriented architecture approach to ecosystem modeling and simulation.
Energy Technology Data Exchange (ETDEWEB)
Dolph, J. E.; Majerus, K. A.; Sydelko, P. J.; Taxon, T. N.
1999-04-09
Modeling and simulation in support of adaptive ecosystem management can be better accomplished through a dynamic, integrated, and flexible approach that incorporates scientific and technological components into a comprehensive ecosystem-modeling framework. The Integrated Dynamic Landscape Analysis and Modeling System (IDLAMS) integrates ecological models and decision support techniques, through a geographic information system (GIS)-based framework. The Strategic Environmental Research and Development Program (SERDP) sponsored the development of IDLAMS. Initially built upon a GIS framework, IDLAMS is migrating to an object-oriented (OO) architectural framework. An object-oriented architecture is more flexible and modular. It allows disparate applications and dynamic models to be integrated in a manner that minimizes (or eliminates) the need to rework or recreate the system as new models are added to the suite. In addition, an object-oriented design makes it easier to provide run-time feedback among models, thereby making it a more dynamic tool for exploring and providing insight into the interactions among ecosystem processes. Finally, an object-oriented design encourages the reuse of existing technology because OO-IDLAMS is able to integrate disparate models, databases, or applications executed in their native languages. Reuse is also accomplished through a structured approach to building a consistent and reusable object library. This reusability can substantially reduce the time and effort needed to develop future integrated ecosystem simulations.
Polynomial Chaos Expansion Approach to Interest Rate Models
Directory of Open Access Journals (Sweden)
Luca Di Persio
2015-01-01
Full Text Available The Polynomial Chaos Expansion (PCE technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamic of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
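For the Geometric Brownian Motion case mentioned above, the PCE in probabilists' Hermite polynomials is available in closed form, since exp(a ξ) = e^{a²/2} Σ_n (aⁿ/n!) Heₙ(ξ). The sketch below (not the authors' implementation) recovers the first two moments of the terminal value from a truncated expansion:

```python
import math

def pce_lognormal_moments(s0, mu, sigma, t, order=8):
    """Truncated probabilists'-Hermite PCE of S_T = s0*exp((mu - sigma^2/2)t + a*xi),
    a = sigma*sqrt(t), xi ~ N(0,1), using exp(a*xi) = e^{a^2/2} sum a^n/n! He_n(xi).
    Mean is the 0-th coefficient; variance is sum_{n>=1} c_n^2 * n!."""
    a = sigma * math.sqrt(t)
    scale = s0 * math.exp((mu - 0.5 * sigma ** 2) * t + 0.5 * a ** 2)
    coeffs = [scale * a ** n / math.factorial(n) for n in range(order + 1)]
    mean = coeffs[0]
    var = sum(c ** 2 * math.factorial(n) for n, c in enumerate(coeffs) if n > 0)
    return mean, var

mean, var = pce_lognormal_moments(100.0, 0.05, 0.2, 1.0)
```

Because a² = σ²t is small here, the order-8 truncation already matches the exact lognormal moments to many digits, illustrating the efficiency advantage over plain Monte Carlo that the abstract reports.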
Final Report for High Latitude Climate Modeling: ARM Takes Us Beyond Case Studies
Energy Technology Data Exchange (ETDEWEB)
Russell, Lynn M [Scripps/UCSD; Lubin, Dan [Scripps/UCSD
2013-06-18
The main thrust of this project was to devise a method by which the majority of North Slope of Alaska (NSA) meteorological and radiometric data, collected on a daily basis, could be used to evaluate and improve global climate model (GCM) simulations and their parameterizations, particularly for cloud microphysics. Although the standard ARM Program sensors provide a less complete suite of instruments for cloud and aerosol studies than the instruments on an intensive field program such as the 2008 Indirect and Semi-Direct Aerosol Campaign (ISDAC), the advantage they offer lies in the long time base and large volume of data that covers a wide range of meteorological and climatological conditions. The challenge has been devising a method to interpret the NSA data in a practical way, so that a wide variety of meteorological conditions in all seasons can be examined with climate models. If successful, climate modelers would have a robust alternative to the usual “case study” approach (i.e., from intensive field programs only) for testing and evaluating their parameterizations’ performance. Understanding climate change on regional scales requires a broad scientific consideration of anthropogenic influences that goes beyond greenhouse gas emissions to also include aerosol-induced changes in cloud properties. For instance, it is now clear that on small scales, human-induced aerosol plumes can exert microclimatic radiative and hydrologic forcing that rivals that of greenhouse gas–forced warming. This project has made significant scientific progress by investigating why successive versions of climate models continue to exhibit errors in cloud amount, cloud microphysical and radiative properties, precipitation, and radiation balance, as compared with observations, particularly in Arctic regions. To find out what is going wrong, we have tested the models' cloud representation over the full range of meteorological conditions found in the Arctic using the
On a Markovian approach for modeling passive solar devices
Energy Technology Data Exchange (ETDEWEB)
Bottazzi, F.; Liebling, T.M. (Chaire de Recherche Operationelle, Ecole Polytechnique Federale de Lausanne (Switzerland)); Scartezzini, J.L.; Nygaard-Ferguson, M. (Lab. d' Energie Solaire et de Physique du Batiment, Ecole Polytechnique Federale de Lausanne (Switzerland))
1991-01-01
Stochastic models for the analysis of the energy and thermal comfort performance of passive solar devices have been increasingly studied for over a decade. A new approach to thermal building modeling, based on Markov chains, is proposed here to combine the accuracy of traditional dynamic simulation with the practical advantages of simplified methods. A main difficulty of the Markovian approach is the discretization of the system variables. Efficient procedures have been developed to carry out this discretization, and several numerical experiments have been performed to analyze the possibilities and limitations of the Markovian model. Despite its restrictive assumptions, it will be shown that accurate results are indeed obtained by this method. However, due to discretization, computer memory requirements are more than inversely proportional to accuracy. (orig.).
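The core object in such a Markovian model is a transition matrix over discretized states, from which long-run behavior follows by iteration. A minimal sketch with an invented three-state temperature discretization (the transition probabilities are hypothetical, not from the paper):

```python
def stationary(P, iters=200):
    """Power-iterate a row-stochastic transition matrix to its stationary distribution."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 3-state discretization of room temperature: cold / comfortable / hot.
P = [
    [0.6, 0.4, 0.0],
    [0.2, 0.6, 0.2],
    [0.0, 0.4, 0.6],
]
pi = stationary(P)
```

The stationary distribution gives the long-run fraction of time in each comfort band, which is the kind of thermal comfort statistic the abstract targets; refining the discretization enlarges the matrix, which is why memory grows faster than accuracy.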
Disturbed state concept as unified constitutive modeling approach
Directory of Open Access Journals (Sweden)
Chandrakant S. Desai
2016-06-01
Full Text Available A unified constitutive modeling approach is highly desirable to characterize a wide range of engineering materials subjected simultaneously to the effect of a number of factors such as elastic, plastic and creep deformations, stress path, volume change, microcracking leading to fracture, failure and softening, stiffening, and mechanical and environmental forces. Hardly any such unified models are available. The disturbed state concept (DSC) is considered to be a unified approach and is able to provide material characterization for almost all of the above factors. This paper presents a description of the DSC, and statements for the determination of parameters based on triaxial, multiaxial and interface tests. Statements of the DSC and validation at the specimen level and at the boundary value problem level are also presented. An extensive list of publications by the author and others is provided at the end. The DSC is considered to be a unique and versatile procedure for modeling the behavior of engineering materials and interfaces.
Proposal: A Hybrid Dictionary Modelling Approach for Malay Tweet Normalization
Muhamad, Nor Azlizawati Binti; Idris, Norisma; Arshi Saloot, Mohammad
2017-02-01
The Malay Twitter message presents a special deviation from the original language. Malay Tweets are widely used by Twitter users, especially in the Malay Archipelago. Thus, it is important to build a normalization system that can translate Malay Tweet language into the standard Malay language. Research in natural language processing has mainly focused on normalizing English Twitter messages, while few studies have been done to normalize Malay Tweets. This paper proposes an approach to normalize Malay Twitter messages based on hybrid dictionary modelling methods. This approach normalizes noisy Malay Twitter messages, such as colloquial language, novel words, and interjections, into standard Malay language. This research uses a language model and an N-gram model.
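A dictionary-plus-n-gram normalizer of the kind proposed can be sketched as follows: candidates come from a lexicon and a bigram score picks among them. The Malay entries and counts below are invented for illustration, not the paper's resources:

```python
# Hypothetical toy lexicon (noisy token -> candidate standard forms) and
# bigram counts; a real system would learn both from corpora.
LEXICON = {"x": ["tak", "x"], "nk": ["nak"], "sy": ["saya"]}
BIGRAMS = {("saya", "nak"): 5, ("nak", "tak"): 1, ("saya", "tak"): 0}

def normalize(tokens):
    """Greedy left-to-right normalization: choose the dictionary candidate that
    maximizes the bigram count with the previously chosen word."""
    out = []
    for tok in tokens:
        candidates = LEXICON.get(tok, [tok])  # unknown tokens pass through
        if out:
            prev = out[-1]
            tok = max(candidates, key=lambda c: BIGRAMS.get((prev, c), 0))
        else:
            tok = candidates[0]
        out.append(tok)
    return out
```

A full language-model approach would score whole sequences rather than greedy bigrams, but the division of labor, dictionary for candidates and n-gram statistics for disambiguation, is the same.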
DEFF Research Database (Denmark)
Simonsen, Kent Inge; Kristensen, Lars Michael
2013-01-01
Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model... Our approach has been developed in the context of the Coloured Petri Nets (CPNs) modelling language. We illustrate our approach by presenting a descriptive specification model of the Websocket protocol, which is currently under development by the Internet Engineering Task Force (IETF), and we show...
Analytical and Numerical Approaches to Modelling of Reinforcement Corrosion in Concrete
Directory of Open Access Journals (Sweden)
Vořechovská Dita
2014-06-01
Full Text Available Corrosion of reinforcement in concrete is one of the most influential factors causing the degradation of RC structures. This paper applies analytical and numerical approaches to the simulation of concrete cracking due to reinforcement corrosion. First, a combination, with detailed analysis, of two analytical models proposed by Liu and Weyers (1998) and Li et al. (2006) is suggested and presented. Four distinct phases of the corrosion process are identified and a detailed guide through the mathematical development is described. Next, numerical computations obtained with a nonlinear finite element code are presented. The model features the state of the art in nonlinear fracture mechanics modelling, and the heterogeneous structure of concrete is modelled via spatially varying parameters of the constitutive law. Finally, the results of the analytical studies are compared to the numerical computations, and the paper concludes with a sketch of a real-life numerical example.
ON SOME APPROACHES TO ECONOMIC-MATHEMATICAL MODELING OF SMALL BUSINESS
Directory of Open Access Journals (Sweden)
Orlov A. I.
2015-04-01
Full Text Available Small business is an important part of the modern Russian economy. We give a wide panorama, developed by us, of possible approaches to the construction of economic-mathematical models that may be useful for describing the dynamics of small businesses, as well as for their management. Since a variety of types of economic-mathematical and econometric models can be used to describe particular problems of small business, we found it useful to consider a fairly wide range of such models, which results in rather short descriptions of the specific models. Each model is described to the level at which an experienced professional in economic-mathematical modeling could, if necessary, develop it into a specific model with design formulas and numerical results. Particular attention is paid to statistical methods for non-numeric data, currently the most pressing. The problems of economic-mathematical modeling in small business marketing are considered. We have accumulated experience in applying the methodology of economic-mathematical modeling to practical problems in small business marketing, in particular in the fields of consumer and industrial goods and educational services, as well as in the analysis and modeling of inflation, taxation and others. In marketing models based on decision-making theory we apply rankings and ratings. The problem of comparing averages is considered. We present several models of the life cycle of small businesses: a project-flow model, a niche-capture model, and a niche-selection model. We conclude by discussing the development of research on economic-mathematical modeling of small businesses.
A validated approach for modeling collapse of steel structures
Saykin, Vitaliy Victorovich
A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state of the art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are
GEOSPATIAL MODELLING APPROACH FOR 3D URBAN DENSIFICATION DEVELOPMENTS
Directory of Open Access Journals (Sweden)
O. Koziatek
2016-06-01
Full Text Available With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI’s CityEngine software and the Computer Generated Architecture (CGA) language.
Geospatial Modelling Approach for 3d Urban Densification Developments
Koziatek, O.; Dragićević, S.; Li, S.
2016-06-01
With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI's CityEngine software and the Computer Generated Architecture (CGA) language.
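The fuzzy MCE step described above can be sketched in a few lines. This is a toy, not the paper's model: the criteria, membership thresholds, and weights are invented for the illustration.

```python
# Illustrative fuzzy multi-criteria evaluation (MCE) of densification
# suitability: each criterion is mapped to [0, 1] by a fuzzy membership
# function and the scores are combined by a weighted average. The criteria,
# thresholds, and weights are made up for the sketch.

def linear_membership(x, lo, hi):
    """Fuzzy membership rising linearly from 0 at lo to 1 at hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def suitability(parcel, weights):
    scores = {
        # distance to transit: nearer is better (full score under 200 m)
        "transit": 1.0 - linear_membership(parcel["transit_m"], 200, 2000),
        # land value: higher values attract mid-/high-rise development
        "value": linear_membership(parcel["value_per_m2"], 500, 5000),
    }
    return sum(weights[k] * scores[k] for k in weights)

parcel = {"transit_m": 400, "value_per_m2": 3000}
w = {"transit": 0.6, "value": 0.4}
print(round(suitability(parcel, w), 3))  # -> 0.756
```

In the paper's setting these scores would be computed per irregular 3D parcel and thresholded to decide where procedural building generation is triggered.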
Evaluating Asset Pricing Models in a Simulated Multifactor Approach
Directory of Open Access Journals (Sweden)
Wagner Piazza Gaglianone
2012-12-01
Full Text Available In this paper a methodology to compare the performance of different stochastic discount factor (SDF) models is suggested. The starting point is the estimation of several factor models in which the choice of the fundamental factors comes from different procedures. Then, a Monte Carlo simulation is designed in order to simulate a set of gross returns with the objective of mimicking the temporal dependency and the observed covariance across gross returns. Finally, the artificial returns are used to investigate the performance of the competing asset pricing models through the Hansen and Jagannathan (1997) distance and some goodness-of-fit statistics of the pricing error. An empirical application is provided for the U.S. stock market.
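The Hansen-Jagannathan (1997) distance used above has a compact form: delta = sqrt(g' G^{-1} g), where g = E[m R] - 1 is the vector of pricing errors on gross returns and G = E[R R'] is their second-moment matrix. The sketch below computes it on simulated returns with a deliberately simple constant SDF; all numbers are illustrative.

```python
import numpy as np

# Sketch of the Hansen-Jagannathan distance for a candidate SDF m_t:
# delta = sqrt(g' G^{-1} g), with g = E[m_t R_t] - 1 the pricing errors on
# gross returns R_t and G = E[R_t R_t'] their second-moment matrix.
# Returns and the (constant) SDF are simulated for illustration only.

rng = np.random.default_rng(0)
T, N = 1000, 3
R = 1.0 + 0.01 + 0.05 * rng.standard_normal((T, N))  # simulated gross returns
m = np.full(T, 1.0 / 1.01)                           # candidate constant SDF

g = (m[:, None] * R).mean(axis=0) - 1.0              # sample pricing errors
G = (R[:, :, None] * R[:, None, :]).mean(axis=0)     # sample E[R R']
delta = float(np.sqrt(g @ np.linalg.solve(G, g)))
print(round(delta, 4))
```

In the paper's methodology this statistic is computed on the Monte Carlo returns for each competing factor model, and smaller delta indicates smaller pricing errors.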
Mass Transport Modelling in low permeability Fractured Rock: Eulerian versus Lagrangian approaches.
Capilla, J. E.; Rodrigo, J.; Llopis, C.; Grisales, C.; Gomez-Hernandez, J. J.
2003-04-01
Modeling flow and mass transport in fractured rocks cannot always be successfully addressed by means of discrete fracture models, which can fail due to the difficulty of calibrating them to experimental measurements. This stems from the need for accurate knowledge of the fracture geometry and of the two-dimensional distribution of hydrodynamic parameters on the fractures. Besides, these models tend to be too rigid, in the sense of not being able to re-adapt themselves to correct deficiencies or errors in the fracture definition. An alternative approach is to assume a pseudo-continuum medium in which fractures are represented by the introduction of discretization blocks of very high hydraulic conductivity (K). This kind of model has been successfully tested in some real cases where the stochastic inversion of the flow equation has been performed to obtain equally likely K fields. However, in this framework, Eulerian mass transport modelling yields numerical dispersion and oscillations that make the analysis of tracer tests and the inversion of concentration data to identify K fields very difficult. In this contribution we present flow and mass transport modelling results in a fractured medium approached by a pseudo-continuum. The case study considered is based on data from a low-permeability formation, and both Eulerian and Lagrangian approaches have been applied. K fields in fractures are modeled as realizations of a stochastic process conditional to piezometric head data. Both a multiGaussian and a non-multiGaussian approach are evaluated. The final goal of this research is obtaining K fields able to reproduce field tracer tests. Results show the serious numerical problems found when applying an Eulerian approach and the possibility of avoiding them with a 3D implementation of the Lagrangian random walk method. Besides, we show how different mass transport predictions can be when Gaussian and non-Gaussian models are assumed for K fields in fractures.
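The Lagrangian random walk method mentioned above avoids grid-based numerical dispersion by moving particles directly: each particle is advected by the velocity field and given a Gaussian jump with variance 2*D*dt. A minimal 1-D sketch (parameters are illustrative, not the paper's field case):

```python
import numpy as np

# Minimal 1-D random-walk particle-tracking sketch of Lagrangian transport:
# each particle is advected by velocity v and perturbed by a Gaussian jump
# of variance 2*D*dt, so the particle cloud reproduces advection-dispersion
# without the numerical dispersion of an Eulerian grid. Parameters are
# illustrative.

rng = np.random.default_rng(1)
n, v, D, dt, steps = 20000, 1.0, 0.01, 0.1, 100

x = np.zeros(n)                       # all particles injected at x = 0
for _ in range(steps):
    x += v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n)

t = steps * dt
# theory: mean ~ v*t = 10.0, variance ~ 2*D*t = 0.2
print(round(x.mean(), 2), round(x.var(), 2))
```

In 3D the jump simply becomes a vector draw, and spatially variable v and D are interpolated from the pseudo-continuum K field at each particle position.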
Analytical approach to the dynamics of facilitated spin models on random networks
Fennell, Peter G.; Gleeson, James P.; Cellai, Davide
2014-09-01
Facilitated spin models were introduced some decades ago to mimic systems characterized by a glass transition. Recent developments have shown that a class of facilitated spin models is also able to reproduce characteristic signatures of the structural relaxation properties of glass-forming liquids. While the equilibrium phase diagram of these models can be calculated analytically, the dynamics are usually investigated numerically. Here we propose a network-based approach, called approximate master equation (AME), to the dynamics of the Fredrickson-Andersen model. The approach correctly predicts the critical temperature at which the glass transition occurs. We also find excellent agreement between the theory and the numerical simulations for the transient regime, except in close proximity of the liquid-glass transition. Finally, we analytically characterize the critical clusters of the model and show that the departures between our AME approach and the Monte Carlo can be related to the large interface between blocked and unblocked spins at temperatures close to the glass transition.
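The numerical side that the AME is compared against can be sketched with a toy Monte Carlo of the one-spin-facilitated Fredrickson-Andersen model. Sizes, temperature, and the sweep count below are illustrative, and this is a plain simulation, not the paper's AME machinery.

```python
import math
import random

# Toy Monte Carlo of the one-spin-facilitated Fredrickson-Andersen model on
# a ring: a site may flip only if at least one neighbour is excited (1);
# facilitated sites excite with probability c and relax with probability
# 1 - c, where c = 1/(1 + exp(1/T)) is the equilibrium concentration of
# excited sites. Parameters are illustrative.

def fa_sweep(spins, c, rng):
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        if spins[i - 1] or spins[(i + 1) % n]:   # facilitation constraint
            spins[i] = 1 if rng.random() < c else 0

T = 1.0
c = 1.0 / (1.0 + math.exp(1.0 / T))
rng = random.Random(42)
spins = [1] * 200                                # start fully excited
for _ in range(500):
    fa_sweep(spins, c, rng)
density = sum(spins) / len(spins)
print(round(density, 2))  # relaxes toward c ~ 0.27 at this temperature
```

At low temperature c shrinks, facilitated sites become rare, and relaxation slows dramatically; this is the glassy regime where the paper reports the AME's predictions departing from Monte Carlo.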
Model-Driven Approach for Body Area Network Application Development.
Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata
2016-05-12
This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables obtaining an adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.
Model-Driven Approach for Body Area Network Application Development
Directory of Open Access Journals (Sweden)
Algimantas Venčkauskas
2016-05-01
Full Text Available This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables obtaining an adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.
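The feature-model idea behind the PD variability model can be sketched as a validity check over alternative groups plus a non-functional (QoS) constraint. The feature names, groups, energy costs, and budget below are invented for the illustration, not taken from the paper.

```python
# Minimal sketch of a feature-model check in the spirit of a problem-domain
# variability model: a BAN application configuration is valid if it picks
# exactly one alternative from each group and satisfies an energy budget
# standing in for the QoS trade-off. All names and numbers are invented.

GROUPS = {
    "security": {"none", "aes128", "aes256"},   # alternative (xor) group
    "sampling": {"low", "high"},                # alternative (xor) group
}
ENERGY = {"none": 0, "aes128": 2, "aes256": 4, "low": 1, "high": 3}

def valid(config, budget):
    # exactly one feature selected per alternative group
    for options in GROUPS.values():
        if len(options & config) != 1:
            return False
    # non-functional constraint: stay within the energy budget
    return sum(ENERGY[f] for f in config) <= budget

print(valid({"aes128", "high"}, budget=5))  # True: 2 + 3 <= 5
print(valid({"aes256", "high"}, budget=5))  # False: 4 + 3 > 5
```

In the paper's pipeline, each valid configuration is then mapped onto the solution-domain feature model and fed to the meta-programming generator to produce a concrete controller program.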
Final report of MoReMO 2011-2012. Modelling resilience for maintenance and outage
Energy Technology Data Exchange (ETDEWEB)
Gotcheva, N.; Macchi, L.; Oedewald, P. [Technical Research Centre of Finland (VTT), Espoo (Finland); Eitrheim, M.H.R. [Institute for Energy Technology (IFE) (Norway); Axelsson, C.; Reiman, T.; Pietikaeinen, E. [Ringhals AB (NPP), Vattenfall AB (Sweden)
2013-04-15
The project Modelling Resilience for Maintenance and Outage (MoReMO) represents a two-year joint effort by VTT Technical Research Centre of Finland, Institute for Energy Technology (IFE, Norway) and Vattenfall (Sweden) to develop and test new approaches for safety management. The overall goal of the project was to present concepts on how resilience can be operationalized and built in a safety critical and socio-technical context. Furthermore, the project also aimed at providing guidance for other organizations that strive to develop and improve their safety performance in a business driven industry. We have applied four approaches in different case studies: Organisational Core Task modelling (OCT), Functional Resonance Analysis Method (FRAM), Efficiency Thoroughness Trade-Off (ETTO) analysis, and Work Practice and Culture Characterisation. During 2011 and 2012 the MoReMO project team has collected data through field observations, interviews, workshops, and document analysis on the work practices and adjustments in maintenance and outage in Nordic NPPs. The project consisted of two sub-studies, one focused on identifying and assessing adjustments and supporting resilient work practices in maintenance activities, while the other focused on handling performance trade-offs in maintenance and outage, as follows: A. Adjustments in maintenance work in Nordic nuclear power plants (VTT and Vattenfall). B. Handling performance trade-offs - the support of adaptive capacities (IFE and Vattenfall). The historical perspective of maintenance and outage management (Chapter 1.1) was provided by Vattenfall. Together, the two sub-studies have provided valuable insights for understanding the rationale behind work practices and adjustments, their effects on resilience, promoting flexibility and balancing between flexibility and reliability. (Author)
A systemic approach for modeling biological evolution using Parallel DEVS.
Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo
2015-08-01
A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach. The model is focused on considering several controversies and open discussions about modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Kinetic equations modelling wealth redistribution: a comparison of approaches.
Düring, Bertram; Matthes, Daniel; Toscani, Giuseppe
2008-11-01
Kinetic equations modelling the redistribution of wealth in simple market economies is one of the major topics in the field of econophysics. We present a unifying approach to the qualitative study for a large variety of such models, which is based on a moment analysis in the related homogeneous Boltzmann equation, and on the use of suitable metrics for probability measures. In consequence, we are able to classify the most important feature of the steady wealth distribution, namely the fatness of the Pareto tail, and the dynamical stability of the latter in terms of the model parameters. Our results apply, e.g., to the market model with risky investments [S. Cordier, L. Pareschi, and G. Toscani, J. Stat. Phys. 120, 253 (2005)], and to the model with quenched saving propensities [A. Chatterjee, B. K. Chakrabarti, and S. S. Manna, Physica A 335, 155 (2004)]. Also, we present results from numerical experiments that confirm the theoretical predictions.
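The quenched-saving-propensity model cited above (Chatterjee, Chakrabarti, and Manna) has a simple Monte Carlo counterpart: in each binary trade, agents keep a fixed personal fraction of their wealth and randomly split the pooled remainder. A sketch with illustrative sizes:

```python
import random

# Monte Carlo sketch of the binary wealth-exchange model with quenched
# saving propensities (Chatterjee-Chakrabarti-Manna type): in each trade,
# agent i keeps a fraction lam[i] of their wealth and the pooled remainder
# is split at random. Population size and trade count are illustrative.

def simulate(n=500, trades=100000, seed=3):
    rng = random.Random(seed)
    w = [1.0] * n
    lam = [rng.random() for _ in range(n)]   # quenched saving propensities
    for _ in range(trades):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        eps = rng.random()
        pool = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]
        w[i] = lam[i] * w[i] + eps * pool
        w[j] = lam[j] * w[j] + (1 - eps) * pool
    return w

w = simulate()
print(round(sum(w), 1))  # each trade conserves wealth, so total stays 500.0
```

With the saving propensities drawn uniformly, the stationary distribution of this model develops the fat Pareto tail whose stability the paper characterizes analytically.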
Gu, Fei; Preacher, Kristopher J; Wu, Wei; Yung, Yiu-Fai
2014-01-01
Although the state space approach for estimating multilevel regression models has been well established for decades in the time series literature, it does not receive much attention from educational and psychological researchers. In this article, we (a) introduce the state space approach for estimating multilevel regression models and (b) extend the state space approach for estimating multilevel factor models. A brief outline of the state space formulation is provided and then state space forms for univariate and multivariate multilevel regression models, and a multilevel confirmatory factor model, are illustrated. The utility of the state space approach is demonstrated with either a simulated or real example for each multilevel model. It is concluded that the results from the state space approach are essentially identical to those from specialized multilevel regression modeling and structural equation modeling software. More importantly, the state space approach offers researchers a computationally more efficient alternative to fit multilevel regression models with a large number of Level 1 units within each Level 2 unit or a large number of observations on each subject in a longitudinal study.
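The core of the state space approach can be shown on the simplest case: treat regression coefficients as a static state vector and run a Kalman filter over the observations. This is a toy single-level sketch (simulated data, assumed-known noise variance), not the article's multilevel formulation.

```python
import numpy as np

# Sketch of the state space idea for regression: the coefficients are a
# static state vector, each observation is a measurement y_t = H_t x + e_t,
# and a Kalman filter with a diffuse prior recovers (essentially) the
# least-squares estimates. Data are simulated for the illustration.

rng = np.random.default_rng(7)
n = 400
x = rng.standard_normal(n)
y = 2.0 + 3.0 * x + 0.5 * rng.standard_normal(n)   # true coefficients (2, 3)

state = np.zeros(2)          # state = [intercept, slope]
P = np.eye(2) * 1e6          # diffuse prior covariance on the coefficients
R = 0.25                     # measurement noise variance (assumed known)
for xt, yt in zip(x, y):
    H = np.array([1.0, xt])
    S = H @ P @ H + R                 # innovation variance
    K = P @ H / S                     # Kalman gain
    state = state + K * (yt - H @ state)
    P = P - np.outer(K, H @ P)        # covariance update (I - K H) P

print(np.round(state, 1))  # close to the true coefficients [2, 3]
```

The multilevel extension in the article stacks Level 1 units into the measurement equation and puts the Level 2 structure into the state transition, but the filter recursion is the same.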
Site-conditions map for Portugal based on VS measurements: methodology and final model
Vilanova, Susana; Narciso, João; Carvalho, João; Lopes, Isabel; Quinta Ferreira, Mario; Moura, Rui; Borges, José; Nemser, Eliza; Pinto, carlos
2017-04-01
In this paper we present a statistically significant site-condition model for Portugal based on shear-wave velocity (VS) data and surface geology. We also evaluate the performance of commonly used Vs30 proxies based on exogenous data and analyze the implications of using those proxies for calculating site amplification in seismic hazard assessment. The dataset contains 161 VS profiles acquired in Portugal in the context of research projects, technical reports, academic theses and academic papers. The methodologies involved in characterizing the VS structure at the sites in the database include seismic refraction, multichannel analysis of surface waves and refraction microtremor. Invasive measurements were performed in selected locations in order to compare the VS profiles obtained from both invasive and non-invasive techniques. In general there was good agreement in the subsurface structure of Vs30 obtained from the different methodologies. The database flat-file includes information on Vs30, surface geology at 1:50.000 and 1:500.000 scales, and elevation and topographic slope based on the SRTM30 topographic dataset. The procedure used to develop the site-conditions map is based on a three-step process that includes defining a preliminary set of geological units based on the literature, performing statistical tests to assess whether or not the differences in the distributions of Vs30 are statistically significant, and merging the geological units accordingly. The dataset was, to some extent, affected by clustering and/or preferential sampling and therefore a declustering algorithm was applied. The final model includes three geological units: 1) Igneous, metamorphic and old (Paleogene and Mesozoic) sedimentary rocks; 2) Neogene and Pleistocene formations, and 3) Holocene formations. The evaluation of proxies indicates that although geological analogues and topographic slope are in general unbiased, the latter shows significant bias for particular geological units and
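The unit-merging step above (test whether two preliminary units have distinguishable Vs30 distributions, merge if not) can be sketched with a simple two-sample test. Here a Welch t-statistic on log(Vs30) stands in for the paper's statistical tests, and the samples are simulated, not the Portuguese database.

```python
import math
import random

# Sketch of the unit-merging step: two preliminary geological units are
# merged when a two-sample test cannot distinguish their Vs30 distributions.
# A Welch t-statistic on log(Vs30) stands in for the paper's tests; the
# samples below are simulated for illustration.

def welch_t(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def merge_decision(a, b, crit=2.0):
    # |t| below the critical value: no significant difference -> merge
    return "merge" if abs(welch_t(a, b)) < crit else "keep separate"

rng = random.Random(5)
rock = [math.log(rng.gauss(800, 150)) for _ in range(60)]  # hard rock sites
soft = [math.log(rng.gauss(200, 40)) for _ in range(60)]   # Holocene sites

print(merge_decision(rock, soft))  # clearly distinct Vs30 populations
```

Repeating this pairwise over the preliminary units and collapsing indistinguishable pairs is what reduces the literature-based classification to the final three-unit model.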
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
Directory of Open Access Journals (Sweden)
W. Bastiaan Kleijn
2005-06-01
Full Text Available Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
A modal approach to modeling spatially distributed vibration energy dissipation.
Energy Technology Data Exchange (ETDEWEB)
Segalman, Daniel Joseph
2010-08-01
The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.
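The strategy above, keep the linear mode shapes and put all the joint nonlinearity into each modal coordinate's evolution, can be sketched on a single mode. The power-law dissipation term and every parameter below are illustrative stand-ins, not the constitutive form calibrated in the report.

```python
import math

# Sketch of the modal strategy: the linear mode is preserved and the joint
# nonlinearity appears only as an amplitude-dependent dissipation term in
# the modal coordinate's equation of motion. Here a friction-like power-law
# force replaces linear damping; all parameters are illustrative.

def simulate(omega=10.0, mu=0.05, q0=1.0, dt=1e-4, t_end=5.0):
    q, v = q0, 0.0
    for _ in range(int(t_end / dt)):
        # friction-like modal force: magnitude ~ |v|^0.5, opposing motion
        f_nl = -mu * math.copysign(abs(v) ** 0.5, v)
        a = -omega ** 2 * q + f_nl
        v += a * dt          # semi-implicit Euler step
        q += v * dt
    return 0.5 * v ** 2 + 0.5 * omega ** 2 * q ** 2   # modal energy

e0 = 0.5 * 10.0 ** 2 * 1.0 ** 2   # initial modal energy = 50
print(simulate() < e0)             # True: the nonlinear term dissipates energy
```

With one such equation per retained mode and no coupling terms, the full-structure model reproduces exactly the three observations listed in the abstract: amplitude-dependent damping, unchanged mode shapes, and no modal coupling.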
Validation of models with constant bias: an applied approach
Directory of Open Access Journals (Sweden)
Salvador Medina-Peralta
2014-06-01
Full Text Available Objective. This paper presents extensions to the statistical validation method based on the procedure of Freese for the case in which a model shows constant bias (CB) in its predictions, and illustrates the method with data from a new mechanistic model that predicts weight gain in cattle. Materials and methods. The extensions were the hypothesis tests and maximum anticipated error for the alternative approach, and the confidence interval for a quantile of the distribution of errors. Results. The model evaluated showed CB; once the CB is removed, and with a confidence level of 95%, the magnitude of the error does not exceed 0.575 kg. Therefore, the validated model can be used to predict the daily weight gain of cattle, although it will require an adjustment in its structure, given the presence of CB, to increase the accuracy of its forecasts. Conclusions. The confidence interval for the 1-α quantile of the distribution of errors after correcting the constant bias allows determining the upper limit for the magnitude of the prediction error and using it to evaluate the evolution of the model in forecasting the system. The confidence interval approach to validating a model is more informative than the hypothesis tests for the same purpose.
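The bias-correction step can be sketched directly: estimate the constant bias as the mean prediction error, remove it, and take an upper empirical quantile of the absolute corrected errors as the maximum anticipated error. The observed and predicted values below are invented, not the cattle data, and the empirical quantile stands in for the paper's confidence-interval construction.

```python
import math

# Sketch of bias-corrected validation: estimate the constant bias as the
# mean error, remove it, and report an upper quantile of |corrected error|
# as the maximum anticipated error. Data are illustrative.

observed = [1.10, 0.95, 1.30, 1.05, 1.20, 0.90, 1.15, 1.00]
predicted = [1.40, 1.20, 1.55, 1.35, 1.50, 1.15, 1.45, 1.30]

errors = [p - o for p, o in zip(predicted, observed)]
bias = sum(errors) / len(errors)                  # constant bias estimate
corrected = sorted(abs(e - bias) for e in errors)

# empirical 95% quantile of |error| after removing the constant bias
k = math.ceil(0.95 * len(corrected)) - 1
print(round(bias, 3), round(corrected[k], 3))     # 0.281 0.031
```

The paper goes one step further and puts a confidence interval around that quantile, which is what yields the 0.575 kg bound quoted in the results.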
Energy Technology Data Exchange (ETDEWEB)
Huang, Hsin-Yuan; Hall, Alex
2013-07-24
the mostly dry mountain-breeze circulations force an additional component that results in semi-diurnal variations near the coast. A series of numerical tests, however, reveal sensitivity of the simulations to the choice of vertical grid, limiting the possibility of solid quantitative statements on the amplitudes and phases of the diurnal and semidiurnal components across the domain. According to our experiments, the combination of the Mellor-Yamada-Nakanishi-Niino (MYNN) boundary layer scheme and the WSM6 microphysics scheme performs best. For that combination, mean cloud cover, liquid water path, and cloud depth are fairly well simulated, while mean cloud top height remains too low in comparison to observations. Both microphysics and boundary layer schemes contribute to the spread in liquid water path and cloud depth, although the microphysics contribution is slightly more prominent. Boundary layer schemes are the primary contributors to cloud top height, degree of adiabaticity, and cloud cover. Cloud top height is closely related to surface fluxes and boundary layer structure. Thus, our study infers that an appropriate tuning of cloud top height would likely improve the low-cloud representation in the model. Finally, we show that entrainment governs the degree of adiabaticity, while boundary layer decoupling is a control on cloud cover. In the intercomparison study using WRF single-column model experiments, most parameterizations show poor agreement in the vertical boundary layer structure when compared with large-eddy simulation models. We also implement a new Total-Energy/Mass-Flux boundary layer scheme into the WRF model and evaluate its ability to simulate both stratocumulus and shallow cumulus clouds. Result comparisons against large-eddy simulation show that this advanced parameterization based on the new Eddy-Diffusivity/Mass-Flux approach provides a better performance than other boundary layer parameterizations.
Hiemstra, Djoerd
2010-01-01
In this report, we unify two quite distinct approaches to information retrieval: region models and language models. Region models were developed for structured document retrieval. They provide a well-defined behaviour as well as a simple query language that allows application developers to rapidly develop applications. Language models are particularly useful to reason about the ranking of search results, and for developing new ranking approaches. The unified model allows application developers to define complex language modeling approaches as logical queries on a textual database. We show a remarkable one-to-one relationship between region queries and the language models they represent for a wide variety of applications: simple ad-hoc search, cross-language retrieval, video retrieval, and web search.
A new thermal comfort approach comparing adaptive and PMV models
Energy Technology Data Exchange (ETDEWEB)
Orosa, Jose A. [Universidade da Coruna, Departamento de Energia y P. M. Paseo de Ronda, n :51, 15011. A Coruna (Spain); Oliveira, Armando C. [Universidade do Porto, Faculdade de Engenharia, New Energy Tec. Unit. Rua Dr Roberto Frias, 4200-465 Porto (Portugal)
2011-03-15
In buildings with heating, ventilation, and air-conditioning (HVAC), the Predicted Mean Vote index (PMV) was successful at predicting comfort conditions, whereas in naturally ventilated buildings, only adaptive models provide accurate predictions. On the other hand, permeable coverings can be considered as a passive control method of indoor conditions and, consequently, have implications in the perception of indoor air quality, local thermal comfort, and energy savings. These energy savings were measured in terms of the set point temperature established in accordance with adaptive methods. Problems appear when the adaptive model suggests the same neutral temperature for ambiences with the same indoor temperature but different relative humidities. In this paper, a new design of the PMV model is described to compare the neutral temperature to real indoor conditions. Results showed that this new PMV model tends to overestimate thermal neutralities but with a lower value than Fanger's PMV index. On the other hand, this new PMV model considers indoor relative humidity, showing a clear differentiation of indoor ambiences in terms of it, unlike adaptive models. Finally, spaces with permeable coverings present indoor conditions closer to thermal neutrality, with corresponding energy savings. (author)
Dynamic Metabolic Modeling of Denitrifying Bacterial Growth: The Cybernetic Approach
Energy Technology Data Exchange (ETDEWEB)
Song, Hyun-Seob; Liu, Chongxuan
2015-06-29
Denitrification is a multistage reduction process converting nitrate ultimately to nitrogen gas, carried out mostly by facultative bacteria. Modeling of the denitrification process is challenging due to the complex metabolic regulation that modulates sequential formation and consumption of a series of nitrogen oxide intermediates, which serve as the final electron acceptors for denitrifying bacteria. In this work, we examined the effectiveness and accuracy of the cybernetic modeling framework in simulating the growth dynamics of denitrifying bacteria, in comparison with kinetic models. In four different case studies using literature data, we successfully simulated diauxic and triauxic growth patterns observed in anoxic and aerobic conditions by tuning only two or three parameters. In order to understand the regulatory structure of the cybernetic model, we systematically analyzed the effect of cybernetic control variables on simulation accuracy. The results showed that the consideration of both enzyme synthesis and activity control through u- and v-variables is necessary and relevant, and that u-variables are of greater importance than v-variables. In contrast, simple kinetic models were unable to accurately capture dynamic metabolic shifts across alternative electron acceptors unless an inhibition term was additionally incorporated. Therefore, the denitrification process represents a reasonable example highlighting the criticality of considering dynamic regulation for successful metabolic modeling.
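In the cybernetic framework, the u- and v-variables mentioned above are commonly computed from the competing reaction rates via the matching laws (u for enzyme synthesis, v for enzyme activity). The sketch below shows that standard formulation; the rate values are invented for illustration and do not come from the paper's case studies.

```python
def cybernetic_controls(rates):
    """Cybernetic control variables via the matching laws:
    u_i = r_i / sum_j r_j  regulates enzyme synthesis,
    v_i = r_i / max_j r_j  regulates enzyme activity."""
    total, peak = sum(rates), max(rates)
    u = [r / total for r in rates]
    v = [r / peak for r in rates]
    return u, v

# Hypothetical instantaneous rates for three electron acceptors
rates = [0.6, 0.3, 0.1]
u, v = cybernetic_controls(rates)
```

The fastest pathway receives both the largest synthesis allocation (u) and full activity (v = 1), which is what produces the sequential, diauxic-style switching between electron acceptors.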
Reducing outpatient waiting time: a simulation modeling approach.
Aeenparast, Afsoon; Tabibi, Seyed Jamaleddin; Shahanaghi, Kamran; Aryanejhad, Mir Bahador
2013-09-01
The objective of this study was to provide a model for reducing outpatient waiting time by using simulation. A simulation model was constructed using data on arrival times, service times, and flow of 357 patients referred to the orthopedic clinic of a general teaching hospital in Tehran. The simulation model was validated before constructing different scenarios. In this study, 10 scenarios were presented for reducing outpatient waiting time. Patients' waiting times were divided into three levels according to their physicians. These waiting times were computed for all scenarios with the simulation model. According to the final scores, the 9th scenario was selected as the best way to reduce outpatients' waiting time. Using simulation as a decision-making tool helps us decide how to reduce outpatients' waiting time. Comparison of the outputs of this scenario and the base-case scenario in the simulation model shows that combining changes to physicians' work times with changes to patients' admission times (scenario 9) would reduce patient waiting time by about 73.09%. Due to the dynamic and complex nature of healthcare systems, the application of simulation to the planning, modeling, and analysis of these systems has lagged behind traditional manufacturing practices. Rapid growth in health care system expenditures, technology, and competition has increased the complexity of health care systems. Simulation is a useful tool for decision making in complex and stochastic systems.
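The core mechanism of such a clinic simulation can be sketched as a minimal single-physician FIFO queue. This is not the authors' validated 357-patient model; the exponential arrival and service assumptions and all parameter values are illustrative stand-ins.

```python
import random

def simulate_clinic(n_patients, mean_interarrival, mean_service, seed=1):
    """Minimal single-server FIFO clinic queue: returns the mean waiting
    time. Exponential interarrival and service times are assumed."""
    rng = random.Random(seed)
    t_arrive, server_free, waits = 0.0, 0.0, []
    for _ in range(n_patients):
        t_arrive += rng.expovariate(1.0 / mean_interarrival)
        start = max(t_arrive, server_free)       # wait if physician busy
        waits.append(start - t_arrive)
        server_free = start + rng.expovariate(1.0 / mean_service)
    return sum(waits) / len(waits)

# Scenario comparison in miniature: shorter service times cut waiting
base = simulate_clinic(357, mean_interarrival=5.0, mean_service=4.0)
faster = simulate_clinic(357, mean_interarrival=5.0, mean_service=3.0)
```

Comparing scenarios against a base case, as done in the study, amounts to rerunning such a model with changed schedules and reporting the difference in waiting times.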
Approach to Organizational Structure Modelling in Construction Companies
Directory of Open Access Journals (Sweden)
Ilin Igor V.
2016-01-01
Full Text Available An effective management system is one of the key factors of business success nowadays. Construction companies usually have a portfolio of independent projects running at the same time. It is therefore reasonable to take the project orientation of this kind of business into account when designing a construction company's management system, whose main components are the business process system and the organizational structure. The paper describes a management structure design approach based on the project-oriented nature of construction projects, and proposes a model of the organizational structure for a construction company. Applying the proposed approach will make it possible to assign responsibilities within the organizational structure of construction projects effectively, and thus to shorten the time needed for project allocation and to ensure smoother project execution. A practical case of using the approach is also provided in the paper.
Hartman, William T.
The research described in this report attempts to estimate the costs of providing an appropriate education to all school-aged handicapped children by 1980-81. The study begins by addressing the aspects of special education that will help to predict future costs--patterns of growth to the present, legal and political mandates, the nature of various…
A self-consistent first-principle based approach to model carrier mobility in organic materials
Energy Technology Data Exchange (ETDEWEB)
Meded, Velimir; Friederich, Pascal; Symalla, Franz; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)
2015-12-31
Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory, varying over several orders of magnitude in the mobility, without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow and explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.
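Hopping rates in multi-scale organic transport workflows of this kind are frequently evaluated with Marcus theory from the electronic coupling, reorganization energy, and site-energy difference. The abstract does not name the rate expression it uses, so the Marcus form below is an assumption for illustration, with invented parameter values.

```python
import math

KB = 8.617333e-5     # Boltzmann constant, eV/K
HBAR = 6.582120e-16  # reduced Planck constant, eV*s

def marcus_rate(coupling, lam, dG, T=300.0):
    """High-temperature Marcus hopping rate (all energies in eV):
    k = (2*pi/hbar) * J^2 / sqrt(4*pi*lam*kT) * exp(-(dG+lam)^2/(4*lam*kT))."""
    kt = KB * T
    pre = (2 * math.pi / HBAR) * coupling**2 / math.sqrt(4 * math.pi * lam * kt)
    return pre * math.exp(-(dG + lam) ** 2 / (4 * lam * kt))

# Illustrative parameters: 10 meV coupling, 0.2 eV reorganization energy
k_resonant = marcus_rate(0.010, 0.20, 0.00)   # equal site energies
k_uphill = marcus_rate(0.010, 0.20, 0.10)     # 0.1 eV uphill hop
```

The strong suppression of uphill hops is what makes mobility so sensitive to the site-energy disorder extracted from the embedding calculation.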
An integrated modelling approach to estimate urban traffic emissions
Misra, Aarshabh; Roorda, Matthew J.; MacLean, Heather L.
2013-07-01
An integrated modelling approach is adopted to estimate microscale urban traffic emissions. The modelling framework consists of a traffic microsimulation model developed in PARAMICS, a microscopic emissions model (the Comprehensive Modal Emissions Model), and two dispersion models, AERMOD and the Quick Urban and Industrial Complex (QUIC). This framework is applied to a traffic network in downtown Toronto, Canada to evaluate summertime morning-peak traffic emissions of carbon monoxide (CO) and nitrogen oxides (NOx) during five weekdays at a traffic intersection. The model-predicted results are validated against sensor observations, with 100% of the AERMOD-modelled CO concentrations and 97.5% of the QUIC-modelled NOx concentrations within a factor of two of the corresponding observed concentrations. Availability of local estimates of ambient concentration is useful for accurate comparisons of predicted concentrations with observed concentrations. Predicted and sensor-measured concentrations are significantly lower than the hourly threshold Maximum Acceptable Levels for CO (31 ppm, ~90 times lower) and NO2 (0.4 mg/m3, ~12 times lower) within the National Ambient Air Quality Objectives established by Environment Canada.
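The "within a factor of two" validation criterion used above is the standard FAC2 metric in dispersion-model evaluation. A minimal sketch of that check follows; the sample data are invented, not the study's measurements.

```python
def fac2(predicted, observed):
    """Fraction of predictions within a factor of two of observations
    (FAC2), a common acceptance metric for dispersion models. Pairs with
    non-positive observations are excluded."""
    pairs = [(p, o) for p, o in zip(predicted, observed) if o > 0]
    hits = sum(1 for p, o in pairs if 0.5 <= p / o <= 2.0)
    return hits / len(pairs)

# Hypothetical hourly concentrations (same units for both series)
pred = [1.0, 2.5, 0.4, 3.0]
obs = [1.2, 1.4, 1.0, 2.8]
score = fac2(pred, obs)
```

A FAC2 of 1.0, as reported for the AERMOD CO results, means every predicted value fell inside that factor-of-two band.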
A Novel Approach to Implement Takagi-Sugeno Fuzzy Models.
Chang, Chia-Wen; Tao, Chin-Wang
2017-09-01
This paper proposes new algorithms based on the fuzzy c-regression model algorithm for Takagi-Sugeno (T-S) fuzzy modeling of complex nonlinear systems. The fuzzy c-regression state model (FCRSM) algorithm produces a T-S fuzzy model in which a functional antecedent and a state-space-model-type consequent are considered with the available input-output data. The antecedent and consequent forms of the proposed FCRSM offer two main advantages: one is that the FCRSM has a low computational load because only one input variable is considered in the antecedent part; the other is that the unknown system can be modeled not only in polynomial form but also in state-space form. Moreover, the FCRSM can be extended to the FCRSM-ND and FCRSM-Free algorithms. The FCRSM-ND algorithm is presented to find the T-S fuzzy state-space model of a nonlinear system when the input-output data cannot be precollected and an assumed effective controller is available. In practical applications, the mathematical model of the controller may be hard to obtain. In this case, an online tuning algorithm, FCRSM-Free, is designed such that the parameters of a T-S fuzzy controller and the T-S fuzzy state model of an unknown system can be tuned online simultaneously. Four numerical simulations are given to demonstrate the effectiveness of the proposed approach.
Diagnosing Hybrid Systems: a Bayesian Model Selection Approach
McIlraith, Sheila A.
2005-01-01
In this paper we examine the problem of monitoring and diagnosing noisy complex dynamical systems that are modeled as hybrid systems: models of continuous behavior, interleaved by discrete transitions. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial, or full failure of component devices. Building on our previous work in this area (MBCG99; MBCG00), our specific focus in this paper is on the mathematical formulation of the hybrid monitoring and diagnosis task as a Bayesian model tracking algorithm. The nonlinear dynamics of many hybrid systems present challenges to probabilistic tracking. Further, probabilistic tracking of a system for the purposes of diagnosis is problematic because the models of the system corresponding to failure modes are numerous and generally very unlikely. To focus tracking on these unlikely models and to reduce the number of potential models under consideration, we exploit logic-based techniques for qualitative model-based diagnosis to conjecture a limited initial set of consistent candidate models. In this paper we discuss alternative tracking techniques that are relevant to different classes of hybrid systems, focusing specifically on a method for tracking multiple models of nonlinear behavior simultaneously using factored sampling and conditional density propagation. To illustrate and motivate the approach described in this paper we examine the problem of monitoring and diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
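Factored sampling with conditional density propagation is the particle-filter (condensation) family of algorithms. The generic predict-reweight-resample step can be sketched as follows; the dynamics, measurement model, and noise levels here are invented toy values, not the AERCam models.

```python
import math
import random

def particle_filter_step(particles, obs, f, h, q_std, r_std, rng):
    """One condensation step: propagate particles through dynamics f with
    process noise, weight by the Gaussian likelihood of obs under
    measurement model h, then resample proportionally to weight."""
    particles = [f(x) + rng.gauss(0, q_std) for x in particles]
    weights = [math.exp(-0.5 * ((obs - h(x)) / r_std) ** 2) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(0)
f = lambda x: 0.9 * x + 1.0   # toy dynamics with fixed point at x = 10
h = lambda x: x               # direct, noisy observation of the state
particles = [rng.gauss(0, 5) for _ in range(500)]
for _ in range(20):
    obs = 10.0 + rng.gauss(0, 0.5)
    particles = particle_filter_step(particles, obs, f, h, 0.2, 0.5, rng)
estimate = sum(particles) / len(particles)
```

Tracking multiple candidate failure models, as in the paper, amounts to running such filters over each conjectured model and comparing their evidence.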
A Bayesian Approach for Structural Learning with Hidden Markov Models
Directory of Open Access Journals (Sweden)
Cen Li
2002-01-01
Full Text Available Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, when one considers other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism be available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
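The K-means initialization idea can be sketched without any HMM library: cluster the observations, then use the cluster means to seed the Gaussian emission parameters before EM training. The sketch below is a plain 1-D K-means on artificial two-state data; it shows only the initialization step, not the paper's full structure-learning procedure.

```python
import random

def kmeans(data, k, iters=50, seed=0):
    """Plain Lloyd's K-means on 1-D data. The resulting cluster means can
    seed the Gaussian emission means of a k-state HMM before EM."""
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            clusters[min(range(k), key=lambda i: (x - centers[i]) ** 2)].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Artificial data from two well-separated hidden "states"
rng = random.Random(1)
data = ([rng.gauss(0.0, 0.5) for _ in range(100)]
        + [rng.gauss(5.0, 0.5) for _ in range(100)])
means = kmeans(data, 2)
```

Starting EM from data-driven means like these typically avoids the poor local optima that random initialization can produce.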
Systematic approach to verification and validation: High explosive burn models
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory
2012-04-16
Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their models in their own hydro codes and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form as needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code.
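The header-driven automation described above can be sketched as a small script that parses key experimental parameters from a commented header and emits an input-deck fragment. The header format, metadata keys, and deck syntax below are hypothetical illustrations, not the actual HED format.

```python
def parse_header(lines):
    """Collect 'key = value' metadata from leading '#' comment lines,
    stopping at the first data line (format is a hypothetical example)."""
    meta = {}
    for line in lines:
        if not line.startswith("#"):
            break
        if "=" in line:
            key, _, val = line[1:].partition("=")
            meta[key.strip()] = val.strip()
    return meta

def make_input_deck(meta):
    """Emit a minimal hydro-code input fragment from the metadata
    (keywords and layout are invented for illustration)."""
    return "\n".join([
        f"explosive  {meta['explosive']}",
        f"density    {meta['density_g_cc']}",
        f"pressure   {meta['shock_pressure_GPa']}",
    ])

header = [
    "# explosive = PBX-9502",
    "# density_g_cc = 1.890",
    "# shock_pressure_GPa = 12.4",
    "time  position",
]
deck = make_input_deck(parse_header(header))
```

With experiment parameters carried as machine-readable metadata, the same script can regenerate simulation inputs for every experiment in the database.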
Energy Technology Data Exchange (ETDEWEB)
Walton, W.C.; Voorhees, M.L.; Prickett, T.A.
1980-05-23
This technical memorandum was prepared to: (1) describe a typical basalt radionuclide repository site, (2) describe geologic and hydrologic processes associated with regional radionuclide transport in basalts, (3) define the parameters required to model regional radionuclide transport from a basalt repository site, and (4) develop a ''conceptual model'' of radionuclide transport from a basalt repository site. In a general hydrological sense, basalts may be described as layered sequences of aquifers and aquitards. The Columbia River Basalt, centered near the semi-arid Pasco Basin, is considered by many to be typical basalt repository host rock. Detailed description of the flow system including flow velocities with high-low hydraulic conductivity sequences are not possible with existing data. However, according to theory, waste-transport routes are ultimately towards the Columbia River and the lengths of flow paths from the repository to the biosphere may be relatively short. There are many physical, chemical, thermal, and nuclear processes with associated parameters that together determine the possible pattern of radionuclide migration in basalts and surrounding formations. Brief process descriptions and associated parameter lists are provided. Emphasis has been placed on the use of the distribution coefficient in simulating ion exchange. The use of the distribution coefficient approach is limited because it takes into account only relatively fast mass transfer processes. In general, knowledge of hydrogeochemical processes is primitive.
A Complex Systems Model Approach to Quantified Mineral Resource Appraisal
Gettings, M.E.; Bultman, M.W.; Fisher, F.S.
2004-01-01
For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
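The cognitive-map stage described above can be sketched directly: concepts hold activation values, links carry (+1, 0, -1) weights, and the map is iterated under a scenario until it settles. The three concepts and link signs below are hypothetical, chosen only to illustrate the mechanics.

```python
def step_fcm(state, links, squash=lambda x: max(-1.0, min(1.0, x))):
    """One update of a simple cognitive map: each concept's next value is
    the squashed weighted sum of its inputs, with (+1, 0, -1) link weights."""
    n = len(state)
    return [squash(sum(links[j][i] * state[j] for j in range(n)))
            for i in range(n)]

# Hypothetical 3-concept map: source -> alteration -> geochemical anomaly
links = [
    [1, 1, 0],   # source persists (self-loop) and promotes alteration
    [0, 0, 1],   # alteration promotes a geochemical anomaly
    [0, 0, 0],   # the anomaly drives nothing further
]
state = [1.0, 0.0, 0.0]          # scenario: mineralizing source present
for _ in range(3):
    state = step_fcm(state, links)
```

Ranking link importance, as proposed in the abstract, amounts to rerunning such scenarios with individual links zeroed out and comparing the settled states; membership functions and process equations then refine each surviving link.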
A Nonhydrostatic Model Based On A New Approach
Janjic, Z. I.
Considerable experience with nonhydrostatic models has been accumulated on the scales of convective clouds and storms. However, numerical weather prediction (NWP) deals with motions on a much wider range of temporal and spatial scales. Thus, difficulties that may not be significant on the small scales may become important in NWP applications. With these considerations in mind, a new approach has been proposed and applied in developing nonhydrostatic models intended for NWP applications. Namely, instead of extending the cloud models to synoptic scales, the hydrostatic approximation is relaxed in a hydrostatic NWP model. In this way the model validity is extended to nonhydrostatic motions, and at the same time favorable features of the hydrostatic formulation are preserved. In order to apply this approach, the system of nonhydrostatic equations is split into two parts: (a) the part that corresponds to the hydrostatic system, except for corrections due to vertical acceleration, and (b) the system of equations that allows computation of the corrections appearing in the first system. This procedure does not require any additional approximation. In the model, "isotropic" horizontal finite differencing is employed that conserves a number of basic and derived dynamical and quadratic quantities. The hybrid pressure-sigma vertical coordinate has been chosen as the primary option. The forward-backward scheme is used for horizontally propagating fast waves, and an implicit scheme is used for vertically propagating sound waves. The Adams-Bashforth scheme is applied for the advection of the basic dynamical variables and for the Coriolis terms. In real data runs, the nonhydrostatic dynamics does not require extra computational boundary conditions at the top. The philosophy of the physical package and possible future developments of physical parameterizations are also reviewed. A two-dimensional model based on the described approach successfully reproduced classical
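The forward-backward scheme mentioned for fast waves can be illustrated on the 1-D linearized shallow-water equations: the velocity is stepped forward from the current height field, and the height is then stepped from the just-updated velocity. This toy periodic-domain sketch stands in for the scheme only; it is not the NWP model's discretization, and all parameters are illustrative.

```python
import math

def forward_backward(h, u, g, H, dt, dx, steps):
    """Forward-backward stepping for 1-D linearized gravity waves on a
    periodic staggered grid: update u from current h, then h from new u.
    This ordering is what keeps fast waves stable at larger time steps."""
    n = len(h)
    for _ in range(steps):
        u = [u[i] - g * dt * (h[(i + 1) % n] - h[i]) / dx for i in range(n)]
        h = [h[i] - H * dt * (u[i] - u[i - 1]) / dx for i in range(n)]
    return h, u

n, dx = 64, 1.0
h = [0.01 * math.sin(2 * math.pi * i / n) for i in range(n)]  # small wave
u = [0.0] * n
h, u = forward_backward(h, u, g=9.81, H=1.0, dt=0.2, dx=dx, steps=200)
peak = max(abs(x) for x in h)
```

For these parameters the gravity-wave Courant number is about 0.63, inside the scheme's stability range, so the wave amplitude stays bounded rather than growing.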
Infiltration under snow cover: Modeling approaches and predictive uncertainty
Meeks, Jessica; Moeck, Christian; Brunner, Philip; Hunkeler, Daniel
2017-03-01
Groundwater recharge from snowmelt represents a temporal redistribution of precipitation. This is extremely important because the rate and timing of snowpack drainage have substantial consequences for aquifer recharge patterns, which in turn affect groundwater availability throughout the rest of the year. The modeling methods developed to estimate drainage from a snowpack, which typically rely on temporally dense point measurements or temporally limited, spatially dispersed calibration data, range in complexity from the simple degree-day method to more complex and physically based energy balance approaches. While the gamut of snowmelt models is routinely used to aid in water resource management, a comparison of snowmelt models' predictive uncertainties had previously not been done. Therefore, we established a snowmelt model calibration dataset that is both temporally dense and represents the integrated snowmelt infiltration signal for the Vers Chez le Brandt research catchment, which functions as a rather unique natural lysimeter. We then evaluated the uncertainty associated with the degree-day, a modified degree-day, and energy balance snowmelt model predictions using the null-space Monte Carlo approach. All three melt models underestimate total snowpack drainage, underestimate the rate of early and midwinter drainage, and overestimate spring snowmelt rates. The actual rate of snowpack water loss is more constant over the course of the entire winter season than the snowmelt models would imply, indicating that mid-winter melt can contribute as significantly as springtime snowmelt to groundwater recharge in low alpine settings. Further, actual groundwater recharge could be between 2 and 31% greater than snowmelt models suggest, over the total winter season. This study shows that snowmelt model predictions can have considerable uncertainty, which may be reduced by the inclusion of more data that allows for the use of more complex approaches such as the energy balance
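The simplest member of the model family compared above, the degree-day method, fits in a few lines: daily melt is a degree-day factor times the positive excess of air temperature over a base temperature. The factor and temperature series below are illustrative values, not calibrated parameters from the study.

```python
def degree_day_melt(temps, ddf=3.0, t_base=0.0):
    """Classic degree-day snowmelt: daily melt (mm water equivalent) is a
    degree-day factor (mm/degC/day, illustrative value) times the positive
    excess of mean daily temperature over a base temperature."""
    return [ddf * max(t - t_base, 0.0) for t in temps]

daily_temp = [-4.0, -1.0, 0.5, 2.0, 5.0]   # illustrative mean daily degC
melt = degree_day_melt(daily_temp)
total = sum(melt)
```

Because melt is zero on any sub-freezing day, a pure degree-day model cannot produce the midwinter drainage the lysimeter data revealed, which is one reason the study found it underestimates early-season recharge.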
Social model: a new approach of the disability theme.
Bampi, Luciana Neves da Silva; Guilhem, Dirce; Alves, Elioenai Dornelles
2010-01-01
The experience of disability is part of the daily lives of people who have a disease, lesion, or corporal limitation. Disability is still understood as personal bad luck; moreover, from the social and political points of view, the disabled are seen as a minority. The aim of this study is to contribute to knowledge about the experience of disability. The research presents a new approach to the theme: the social model. This approach appeared as an alternative to the medical model of disability, which sees the lesion as the primary cause of the social inequality and disadvantages experienced by the disabled, ignoring the role of social structures in their oppression and marginalization. The study makes it possible to reflect on how the difficulties and barriers that society imposes on people considered different make disability a reality, and portrays the social injustice and situation of vulnerability experienced by excluded groups.
Lattice percolation approach to 3D modeling of tissue aging
Gorshkov, Vyacheslav; Privman, Vladimir; Libert, Sergiy
2016-11-01
We describe a 3D percolation-type approach to modeling of the processes of aging and certain other properties of tissues analyzed as systems consisting of interacting cells. Lattice sites are designated as regular (healthy) cells, senescent cells, or vacancies left by dead (apoptotic) cells. The system is then studied dynamically with the ongoing processes including regular cell dividing to fill vacant sites, healthy cells becoming senescent or dying, and senescent cells dying. Statistical-mechanics description can provide patterns of time dependence and snapshots of morphological system properties. The developed theoretical modeling approach is found not only to corroborate recent experimental findings that inhibition of senescence can lead to extended lifespan, but also to confirm that, unlike 2D, in 3D senescent cells can contribute to tissue's connectivity/mechanical stability. The latter effect occurs by senescent cells forming the second infinite cluster in the regime when the regular (healthy) cell's infinite cluster still exists.
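The lattice dynamics described above can be sketched as a simple Monte Carlo sweep over a periodic cubic lattice with three site states. The update rules paraphrase the abstract; the transition probabilities and lattice size are invented for illustration and are not the paper's calibrated rates.

```python
import random

HEALTHY, SENESCENT, VACANT = 0, 1, 2

def sweep(lattice, n, p_sen, p_die_h, p_die_s, rng):
    """One Monte Carlo sweep: a healthy cell may die or turn senescent,
    otherwise it may divide into a vacant neighbor site; a senescent cell
    may die. The lattice is periodic in all three directions."""
    def nbrs(i, j, k):
        for d in (-1, 1):
            yield ((i + d) % n, j, k)
            yield (i, (j + d) % n, k)
            yield (i, j, (k + d) % n)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c = lattice[i][j][k]
                if c == HEALTHY:
                    r = rng.random()
                    if r < p_die_h:
                        lattice[i][j][k] = VACANT
                    elif r < p_die_h + p_sen:
                        lattice[i][j][k] = SENESCENT
                    else:
                        vac = [p for p in nbrs(i, j, k)
                               if lattice[p[0]][p[1]][p[2]] == VACANT]
                        if vac:
                            x, y, z = rng.choice(vac)
                            lattice[x][y][z] = HEALTHY
                elif c == SENESCENT and rng.random() < p_die_s:
                    lattice[i][j][k] = VACANT

rng = random.Random(0)
n = 10
lattice = [[[HEALTHY] * n for _ in range(n)] for _ in range(n)]
for _ in range(20):
    sweep(lattice, n, p_sen=0.02, p_die_h=0.01, p_die_s=0.05, rng=rng)
counts = [sum(lattice[i][j][k] == s
              for i in range(n) for j in range(n) for k in range(n))
          for s in (HEALTHY, SENESCENT, VACANT)]
```

Snapshots of such a lattice over time give the morphological patterns the abstract mentions; checking whether the senescent sites form a second spanning cluster is then a standard percolation analysis on the final configuration.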