A study of deterministic models for quantum mechanics
International Nuclear Information System (INIS)
Sutherland, R.
1980-01-01
A theoretical investigation is made into the difficulties encountered in constructing a deterministic model for quantum mechanics and into the restrictions that can be placed on the form of such a model. The various implications of the known impossibility proofs are examined. A possible explanation for the non-locality required by Bell's proof is suggested in terms of backward-in-time causality. The efficacy of the Kochen and Specker proof is brought into doubt by showing that there is a possible way of avoiding its implications in the only known physically realizable situation to which it applies. A new thought experiment is put forward to show that a particle's predetermined momentum and energy values cannot satisfy the laws of momentum and energy conservation without conflicting with the predictions of quantum mechanics. Attention is paid to a class of deterministic models for which the individual outcomes of measurements are not dependent on hidden variables associated with the measuring apparatus and for which the hidden variables of a particle do not need to be randomized after each measurement.
Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?
Choustova, Olga
2007-02-01
We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of shares. The main distinguishing feature of the financial Bohmian model is the possibility of taking market psychology into account by describing the expectations of traders by the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model: in particular, the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of modern market ideology. Another objection is of a purely mathematical nature: it is related to the quadratic variation of price trajectories. One possibility for replying to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one. We do this in the present note.
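The quadratic-variation objection can be illustrated with a toy simulation (a sketch under assumed parameters, not the author's actual model): a purely deterministic drift path has vanishing quadratic variation, unlike real price series, while adding the Wiener term of the Bohm-Vigier variant restores a nonzero quadratic variation of roughly sigma^2 * T. All names and values here are illustrative assumptions.

```python
import random

def simulate(T=1.0, n=10_000, drift=0.05, sigma=0.2, stochastic=True, seed=1):
    """Toy price path: deterministic 'pilot-wave' drift, optionally plus a
    Wiener term as in the Bohm-Vigier variant (assumed, simplified form)."""
    rng = random.Random(seed)
    dt = T / n
    q, path = 0.0, [0.0]
    for _ in range(n):
        dq = drift * dt
        if stochastic:
            dq += sigma * rng.gauss(0.0, dt ** 0.5)
        q += dq
        path.append(q)
    return path

def quadratic_variation(path):
    # sum of squared increments along the trajectory
    return sum((b - a) ** 2 for a, b in zip(path, path[1:]))

qv_det = quadratic_variation(simulate(stochastic=False))
qv_sto = quadratic_variation(simulate(stochastic=True))
```

The deterministic path's quadratic variation shrinks to zero as the step size decreases, while the stochastic path's stays near sigma^2 * T = 0.04, which is the mathematical point of the objection.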
Deterministic behavioural models for concurrency
DEFF Research Database (Denmark)
Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn
1993-01-01
This paper offers three candidates for a deterministic, noninterleaving behavioural model that generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled event structures, generalized trace languages in which the independence relation is context-dependent, and deterministic languages of pomsets.
Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.
2017-09-01
Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis, by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically arranged and hierarchically organized wires. The analytical model is validated by comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses, incorporating tension/torsion coupling, are naturally found and that, once one or more elements break, the competition between the geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no-longer-homogeneous stress redistribution among the surviving wires, whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
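The ELS hypothesis described above admits a compact numerical illustration. The following Monte Carlo sketch (a minimal fibre bundle, not the authors' hierarchical model) assumes unit elastic modulus and Weibull-distributed failure strains, and checks the simulated bundle stress against the classical ELS mean-field curve sigma(eps) = eps * exp(-(eps/lambda)^k); all parameter values are illustrative assumptions.

```python
import bisect
import math
import random

def els_bundle_stress(strains, n_fibers=20_000, shape=2.0, scale=1.0, seed=0):
    """Equal-Load-Sharing fibre bundle: each fibre is linearly elastic until
    its Weibull-distributed failure strain; broken fibres carry no load and
    their share is redistributed uniformly over the survivors (ELS).
    Returns the mean fibre stress at each imposed strain (modulus = 1)."""
    rng = random.Random(seed)
    thresholds = sorted(rng.weibullvariate(scale, shape) for _ in range(n_fibers))
    stress = []
    for eps in strains:
        # count fibres whose failure strain exceeds the imposed strain
        surviving = n_fibers - bisect.bisect_left(thresholds, eps)
        stress.append(eps * surviving / n_fibers)
    return stress

strains = [i * 0.05 for i in range(41)]
sigma_mc = els_bundle_stress(strains)
# analytic ELS expectation for Weibull(shape=2, scale=1) fibres
sigma_th = [e * math.exp(-(e / 1.0) ** 2.0) for e in strains]
```

The simulated curve reproduces the nonlinear peak and post-elastic softening that the probabilistic strategy captures for straight bundles; the paper's point is that helical arrangement requires correcting this picture.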
A mathematical theory for deterministic quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
't Hooft, Gerard [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)]
2007-05-15
Classical, i.e. deterministic, theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, each of them must be restricted to positive-energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.
Equivalence relations between deterministic and quantum mechanical systems
International Nuclear Information System (INIS)
Hooft, G.
1988-01-01
Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that the apparently indeterministic behavior typical of a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian so that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet no mass term. These observations are suggested to be useful for building theories at the Planck scale.
Rossel, F.; Gironas, J. A.
2012-12-01
The link between stream network structure and hydrologic response for natural basins has been extensively studied. It is well known that stream network organization and flow dynamics in the reaches combine to shape the hydrologic response of natural basins. Geomorphologic dispersion and hydrodynamic dispersion, along with hillslope processes, control to a large extent the overall variance of the hydrograph, particularly under the assumption of constant celerity throughout the basin. In addition, a third mechanism, referred to as kinematic dispersion, becomes relevant when considering spatial variations of celerity. By contrast, the link between the drainage network structure and overall urban terrain, and the hydrologic response in urban catchments, has been much less studied. In particular, the characterization of the different dispersion mechanisms within urban areas remains to be better understood. In such areas artificial elements are expected to contribute to the total dispersion due to the variety of geometries and the spatial distribution of imperviousness. This work quantifies the different dispersion mechanisms in an urban catchment, focusing on their relevance and the spatial scales involved. For this purpose we use the Urban Morpho-climatic Instantaneous Unit Hydrograph model, a deterministic spatially distributed direct hydrograph travel time model, which computes travel times in hillslope, pipe, street and channel cells using formulations derived from kinematic wave theory. The model was applied to the Aubeniere catchment, located in Nantes, France. Unlike stochastic models, this deterministic model allows the quantification of dispersion mechanisms at the local scale (i.e. the grid-cell). We found that kinematic dispersion is more relevant for small storm events, whereas geomorphologic dispersion becomes more significant for larger storms, as the mean celerity within the catchment increases. In addition, the total dispersion relates to the drainage area in ...
A deterministic width function model
Directory of Open Access Journals (Sweden)
C. E. Puente
2003-01-01
Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
Mechanics from Newton's laws to deterministic chaos
Scheck, Florian
2018-01-01
This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. Compared to the previous editions, the present 6th edition is updated and revised with more explanations, additional examples and problems with solutions, together with new sections on applications in science. Symmetries and invariance principles, the basic geometric aspects of mechanics, as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 150 problems.
Introducing Synchronisation in Deterministic Network Models
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.
2006-01-01
The paper addresses performance analysis for distributed real-time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models.
Learning to Act: Qualitative Learning of Deterministic Action Models
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2017-01-01
In this article we study learnability of fully observable, universally applicable action models of dynamic epistemic logic. We introduce a framework for actions seen as sets of transitions between propositional states and we relate them to their dynamic epistemic logic representations as action models. We consider two learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, i.e. learning via update, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and unconditional deterministic action models. We propose update learning mechanisms for the aforementioned classes of actions and analyse their properties.
Dynamic optimization deterministic and stochastic models
Hinderer, Karl; Stieglitz, Michael
2016-01-01
This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
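The workhorse of discrete-time deterministic dynamic optimization described above is backward induction, which can be sketched on a toy finite-horizon inventory problem. All costs, demands and state bounds below are illustrative assumptions, not taken from the book; the sketch also recovers the optimal ordering policy.

```python
def dp_min_cost(s0, demands, max_stock=4, orders=(0, 1, 2),
                c_order=2.0, c_hold=1.0, c_short=5.0):
    """Finite-horizon deterministic dynamic program solved by backward
    induction: state = stock level, action = order quantity, known demands."""
    T = len(demands)
    V = {s: 0.0 for s in range(max_stock + 1)}  # terminal values are zero
    policy = []
    for t in reversed(range(T)):
        d = demands[t]
        newV, act = {}, {}
        for s in range(max_stock + 1):
            best = None
            for a in orders:
                short = max(d - s - a, 0)                    # unmet demand
                s2 = min(max(s + a - d, 0), max_stock)       # next stock level
                cost = c_order * a + c_short * short + c_hold * s2 + V[s2]
                if best is None or cost < best[0]:
                    best = (cost, a)
            newV[s], act[s] = best
        V = newV
        policy.insert(0, act)
    return V[s0], policy

value, policy = dp_min_cost(0, demands=[1, 2, 1])
```

With these numbers the cheapest plan is to order exactly each period's demand (no holding, no shortage), for a total cost of 8; the same machinery extends to the stochastic case by replacing the transition with an expectation.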
Deterministic geologic processes and stochastic modeling
International Nuclear Information System (INIS)
Rautman, C.A.; Flint, A.L.
1992-01-01
This paper reports that recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground-water flow and radionuclide transport. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling.
Piecewise deterministic processes in biological models
Rudnicki, Ryszard
2017-01-01
This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous-time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory.
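A minimal PDMP of the kind described, deterministic flow punctuated by random jumps, can be simulated exactly. The sketch below is a two-state gene-activity switch: between exponentially timed promoter switches, the protein level follows dx/dt = b*state - g*x, which has a closed-form solution. All rates are illustrative assumptions, not from the book.

```python
import math
import random

def simulate_pdmp(T=2000.0, k_on=1.0, k_off=1.0, b=2.0, g=1.0, seed=7):
    """PDMP sketch: protein level x(t) obeys dx/dt = b*state - g*x between
    exponentially distributed promoter switches (state in {0, 1}).
    Returns the long-run time average of x."""
    rng = random.Random(seed)
    t, x, state = 0.0, 0.0, 0
    area = 0.0  # time integral of x(t)
    while t < T:
        rate = k_off if state else k_on
        tau = min(rng.expovariate(rate), T - t)
        # exact flow over [t, t+tau]: x -> x* + (x - x*) e^{-g tau}
        x_star = b * state / g
        # closed-form integral of x over the interval
        area += x_star * tau + (x - x_star) * (1 - math.exp(-g * tau)) / g
        x = x_star + (x - x_star) * math.exp(-g * tau)
        t += tau
        state = 1 - state  # the jump part of the process
    return area / T

mean_x = simulate_pdmp()
```

Because the flow is integrated exactly between jumps, no ODE solver is needed; the long-run mean should approach (b/g) * k_on/(k_on + k_off) = 1.0 for these rates, which is the kind of long-time behaviour the book's semigroup results address.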
About the Possibility of Creation of a Deterministic Unified Mechanics
International Nuclear Information System (INIS)
Khomyakov, G.K.
2005-01-01
The possibility of creating a unified deterministic scheme of classical and quantum mechanics, allowing their achievements to be preserved, is discussed. It is shown that the canonical system of ordinary differential equations of Hamiltonian classical mechanics can be augmented with a vector system of ordinary differential equations for the variables of those equations. The interpretational problems of quantum mechanics are considered.
Deterministic SLIR model for tuberculosis disease mapping
Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat
2017-11-01
Tuberculosis (TB) occurs worldwide. It can be transmitted directly through the air when persons with active TB sneeze, cough or spit. In Malaysia, it was reported that TB has been recognized as one of the most infectious diseases leading to death. Disease mapping is one of the methods that can be used in prevention strategies, since it displays a clear picture of the high- and low-risk areas. An important consideration when studying disease occurrence is relative risk estimation. The transmission of TB disease is studied through a mathematical model. Therefore, in this study, deterministic SLIR models are used to estimate the relative risk for TB disease transmission.
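A generic SLIR (Susceptible-Latent-Infectious-Recovered) system of the kind referred to above can be sketched with forward-Euler integration. The equations and parameter values below are a standard textbook form assumed for illustration, not necessarily those of the paper.

```python
def slir_step(S, L, I, R, beta, delta, gamma, N, dt):
    """One forward-Euler step of an assumed standard SLIR system:
    dS/dt = -beta*S*I/N,  dL/dt = beta*S*I/N - delta*L,
    dI/dt = delta*L - gamma*I,  dR/dt = gamma*I."""
    inf = beta * S * I / N  # new infections per unit time
    return (S - inf * dt,
            L + (inf - delta * L) * dt,
            I + (delta * L - gamma * I) * dt,
            R + gamma * I * dt)

def run(days=400, dt=0.1, beta=0.3, delta=0.2, gamma=0.1, N=10_000, I0=10):
    S, L, I, R = N - I0, 0.0, float(I0), 0.0
    for _ in range(int(days / dt)):
        S, L, I, R = slir_step(S, L, I, R, beta, delta, gamma, N, dt)
    return S, L, I, R

S, L, I, R = run()
```

With R0 = beta/gamma = 3, the epidemic runs its course and most of the population ends up recovered; the total population is conserved because the four right-hand sides sum to zero.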
Deterministic models for energy-loss straggling
International Nuclear Information System (INIS)
Prinja, A.K.; Gleicher, F.; Dunham, G.; Morel, J.E.
1999-01-01
Inelastic ion interactions with target electrons are dominated by extremely small energy transfers that are difficult to resolve numerically. The continuous-slowing-down (CSD) approximation is then commonly employed, which, however, only preserves the mean energy loss per collision through the stopping power, S(E) = ∫₀^∞ dE′ (E − E′) σ_s(E → E′). To accommodate energy-loss straggling, a Gaussian distribution with the correct mean-squared energy loss (akin to a Fokker-Planck approximation in energy) is commonly used in continuous-energy Monte Carlo codes. Although this model has the unphysical feature that ions can be upscattered, it nevertheless yields accurate results. A multigroup model for energy-loss straggling was recently presented for use in multigroup Monte Carlo codes or in deterministic codes that use multigroup data. The method has the advantage that the mean and mean-squared energy loss are preserved without unphysical upscatter and hence is computationally efficient. Results for energy spectra compared extremely well with Gaussian distributions under the idealized conditions for which the Gaussian may be considered to be exact. Here, the authors present more consistent comparisons by extending the method to accommodate upscatter and, further, compare both methods with exact solutions obtained from an analog Monte Carlo simulation, for a straight-ahead transport problem.
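The Gaussian straggling model discussed above, including its unphysical upscatter, is easy to demonstrate by direct sampling; the mean and variance used below are arbitrary illustrative values, not data from the paper. The fraction of negative energy-loss draws (upscatter) equals the Gaussian tail probability Phi(-mean/std).

```python
import math
import random

def sample_energy_losses(n=200_000, mean_loss=1.0, straggling_var=0.25, seed=3):
    """Gaussian straggling sketch (assumed parameters): the energy loss over a
    path step is drawn from N(mean_loss, straggling_var). Negative draws are
    the unphysical 'upscatter' events noted in the text."""
    rng = random.Random(seed)
    std = math.sqrt(straggling_var)
    losses = [rng.gauss(mean_loss, std) for _ in range(n)]
    upscatter_frac = sum(l < 0 for l in losses) / n
    return losses, upscatter_frac

losses, frac = sample_energy_losses()
```

For mean 1.0 and standard deviation 0.5, roughly Phi(-2), about 2.3 percent, of the sampled "losses" are negative, which is exactly the artifact the multigroup model avoids while still preserving the first two moments.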
CSL model checking of deterministic and stochastic Petri nets
Martinez Verdugo, J.M.; Haverkort, Boudewijn R.H.M.; German, R.; Heindl, A.
2006-01-01
Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under ...
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
Three levels of modeling, microscopic, mesoscopic and macroscopic, are discussed for CO oxidation on low-index platinum single-crystal surfaces. The models introduced on the microscopic and mesoscopic levels are stochastic, while the model on the macroscopic level is deterministic; the mesoscopic description is coarse-grained, such that, in contrast to the microscopic model, the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations, while for intermediate pressures phenomena ...
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
2017-01-01
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed, basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ²-distributed.
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research.
Deterministic and stochastic CTMC models from Zika disease transmission
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes, including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain (CTMC) stochastic model. The basic reproduction ratio is constructed from the deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for both the deterministic and stochastic models.
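The probability of extinction mentioned above can be estimated by Gillespie-style simulation of a simplified CTMC (a plain SIR chain rather than the paper's host-vector Zika model; all parameters are illustrative assumptions). Since extinction depends only on the order of events, not their timing, the sketch simulates the embedded jump chain.

```python
import random

def outbreak_dies_out(beta=0.4, gamma=0.2, N=1000, I0=2, cutoff=50, rng=None):
    """One run of the embedded jump chain of a CTMC SIR model: returns True
    if the infection dies out before `cutoff` cumulative infections occur
    (a proxy for extinction). Holding times are irrelevant for this event."""
    rng = rng or random.Random()
    S, I, cum = N - I0, I0, I0
    while I > 0 and cum < cutoff:
        rate_inf = beta * S * I / N   # infection event rate
        rate_rec = gamma * I          # recovery event rate
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            S, I, cum = S - 1, I + 1, cum + 1
        else:
            I -= 1
    return I == 0

rng = random.Random(11)
runs = 2000
p_ext = sum(outbreak_dies_out(rng=rng) for _ in range(runs)) / runs
```

The estimate should sit near the branching-process value (gamma/beta)**I0 = 0.25 for these rates, illustrating how the CTMC approach yields an extinction probability that the deterministic model cannot provide.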
Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão
2015-03-17
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independently of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages (which provide a larger spatiotemporal scale relative to within-stage analyses) revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended, and experimentally testable, conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understanding assembly and succession in microbial communities across ecosystems.
Deterministic Properties of Serially Connected Distributed Lag Models
Directory of Open Access Journals (Sweden)
Piotr Nowak
2013-01-01
Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the independent variable is also the dependent variable in the preceding model). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory, such as probability distributions and the central limit theorem. (original abstract)
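Connecting distributed lag models in series corresponds to convolving their (normalised) lag-weight sequences, which is why probability-style tools such as the central limit theorem enter the asymptotic analysis: repeated convolution preserves total weight, adds mean lags, and drives the composite shape toward a Gaussian. A minimal sketch with illustrative weights:

```python
def convolve(a, b):
    """Discrete convolution: the lag distribution of two models in series."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def mean_lag(w):
    """Mean lag of a weight sequence, treated like a discrete distribution."""
    return sum(i * x for i, x in enumerate(w)) / sum(w)

lag = [0.5, 0.3, 0.2]   # one component model's normalised lag weights
series = [1.0]
for _ in range(6):       # six identical component models connected in series
    series = convolve(series, lag)
```

After six convolutions the composite weights still sum to one and the mean lag is exactly six times the component's mean lag (0.7 each, 4.2 in total), mirroring the additivity that underpins the asymptotic results.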
Line and lattice networks under deterministic interference models
Goseling, Jasper; Gastpar, Michael; Weber, Jos H.
Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of ...
A deterministic-probabilistic model for contaminant transport. User manual
Energy Technology Data Exchange (ETDEWEB)
Schwartz, F W; Crowe, A
1980-08-01
This manual describes a deterministic-probabilistic contaminant transport (DPCT) computer model designed to simulate mass transfer by ground-water movement in a vertical section of the earth's crust. The model can account for convection, dispersion, radioactive decay, and cation exchange for a single component. A velocity is calculated from the convective transport of the ground water for each reference particle in the modeled region; dispersion is accounted for in the particle motion by adding a random component to the deterministic motion. The model is sufficiently general to enable the user to specify virtually any type of water table or geologic configuration, and a variety of boundary conditions. A major emphasis in the model development has been placed on making the model simple to use, and information provided in the User Manual will permit changes to the computer code to be made relatively easily for those that might be required for specific applications. (author)
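The particle-tracking scheme described, deterministic convective motion plus a random dispersive component, can be sketched in one dimension. With velocity v and dispersion coefficient D (illustrative values, not from the manual), the ensemble mean and variance of particle positions should grow as v*t and 2*D*t respectively.

```python
import math
import random

def track_particles(n=10_000, steps=100, dt=1.0, v=1.0, D=0.1, seed=5):
    """Random-walk particle tracking: each step adds a deterministic
    advective displacement v*dt and a random dispersive displacement
    drawn from N(0, 2*D*dt). 1-D sketch of the DPCT idea."""
    rng = random.Random(seed)
    step_std = math.sqrt(2 * D * dt)
    positions = []
    for _ in range(n):
        x = 0.0
        for _ in range(steps):
            x += v * dt + rng.gauss(0.0, step_std)
        positions.append(x)
    return positions

xs = track_particles()
t = 100 * 1.0                                  # total travel time
mean = sum(xs) / len(xs)                       # expected ~ v*t = 100
var = sum((x - mean) ** 2 for x in xs) / len(xs)  # expected ~ 2*D*t = 20
```

The empirical plume centre and spread match the advection-dispersion solution, which is why adding a random component to the deterministic motion reproduces dispersion without solving the transport PDE directly.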
A new deterministic model of strange stars
Energy Technology Data Exchange (ETDEWEB)
Rahaman, Farook; Shit, G.C. [Jadavpur University, Department of Mathematics, Kolkata, West Bengal (India); Chakraborty, Koushik [Government Training College, Department of Physics, Hooghly, West Bengal (India); Kuhfittig, P.K.F. [Milwaukee School of Engineering, Department of Mathematics, Milwaukee, WI (United States); Rahman, Mosiur [Meghnad Saha Institute of Technology, Department of Mathematics, Kolkata (India)
2014-10-15
The observed evidence for the existence of strange stars and the concomitant observed masses and radii are used to derive an interpolation formula for the mass as a function of the radial coordinate. The resulting general mass function becomes an effective model for a strange star. The analysis is based on the MIT bag model and yields the energy density, as well as the radial and transverse pressures. Using the interpolation function for the mass, it is shown that a mass-radius relation due to Buchdahl is satisfied in our model. We find the surface redshift (Z) corresponding to the compactness of the stars. Finally, from our results, we predict some characteristics of a strange star of radius 9.9 km. (orig.)
The deterministic computational modelling of radioactivity
International Nuclear Information System (INIS)
Damasceno, Ralf M.; Barros, Ricardo C.
2009-01-01
This paper describes a computational application (software) that models simple radioactive decay, decay to stable nuclei, and directly coupled decay chains of up to thirteen radioactive decays. It includes an internal data bank with the decay constants of the various existing decays, which considerably facilitates use of the program by people who are not connected to the nuclear area. The paper presents numerical results for typical model problems.
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest MERS outbreak outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between an infected and a non-infected individual, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of the free virus in the environment. Mathematical modeling is used to describe the transmission of MERS disease using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state conditions. The stochastic model, using a Continuous-Time Markov Chain (CTMC) approach, is used to predict future states by means of random variables. From the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations for both models using several different parameters are shown, and the probability of disease extinction is compared for several initial conditions.
Applicability of deterministic methods in seismic site effects modeling
International Nuclear Information System (INIS)
Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.
2005-01-01
The up-to-date information on the local geological structure in the Bucharest urban area has been integrated into complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations over the whole city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong-motion records in the frequency range (0.05-1) Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique, applied to model seismic wave propagation between the seismic source and the studied sites, with the mode-coupling approach, used to model seismic wave propagation through the local sedimentary structure of the target site, allow the modelling to be extended to higher frequencies of earthquake-engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last three Vrancea strong events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong-motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)
Implemented state automorphisms within the logico-algebraic approach to deterministic mechanics
Energy Technology Data Exchange (ETDEWEB)
Barone, F [Naples Univ. (Italy). Ist. di Matematica della Facolta di Scienze
1981-01-31
The new notion of an S₁-implemented state automorphism is introduced and characterized in quantum logic. Implemented pure state automorphisms are then characterized in deterministic mechanics as automorphisms of the Borel structure on the phase space.
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
Classification and unification of the microscopic deterministic traffic models.
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
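The optimal velocity (OV) family mentioned above is simple enough to sketch directly: each car relaxes, with sensitivity a, toward a preferred velocity that depends on its headway. The following ring-road simulation uses a Bando-type OV function; the functional form and all parameter values are illustrative, not the generalized model of the paper.

```python
import math

# Optimal-velocity (OV) car-following sketch on a ring road of length L
# with N identical drivers: dv_i/dt = a * (V(h_i) - v_i), where h_i is
# the headway to the car ahead. Bando-type V and all parameters are
# illustrative.
def V(h, v0=2.0, d=2.0):
    """Preferred velocity as a function of headway h."""
    return v0 * (math.tanh(h - d) + math.tanh(d)) / (1.0 + math.tanh(d))

def simulate(N=20, L=100.0, a=2.0, T=200.0, dt=0.01):
    x = [i * L / N for i in range(N)]
    x[0] += 0.01                      # small perturbation of car 0
    v = [V(L / N)] * N
    for _ in range(int(T / dt)):
        h = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
        dv = [a * (V(h[i]) - v[i]) for i in range(N)]
        x = [(x[i] + v[i] * dt) % L for i in range(N)]
        v = [v[i] + dv[i] * dt for i in range(N)]
    return v

v_final = simulate()
```

With a large sensitivity a the uniform flow is linearly stable, so the perturbation decays and all velocities relax toward V(L/N); lowering a below the stability threshold would instead produce the stop-and-go waves studied in the traffic literature.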
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
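The paper's central point, that deterministic use of a calibrated model understates observed variability until the residuals are reintroduced, can be illustrated with a toy example. The linear rainfall-runoff relation and all numbers below are synthetic, not the models of the paper.

```python
import random
import statistics

random.seed(1)

# Toy illustration: a calibrated deterministic model (here an exact
# linear rainfall-runoff relation) reproduces the mean behaviour but
# not the variance of the observations; bootstrap-resampling the
# calibration residuals back into the output restores it.
precip = [random.uniform(0.0, 20.0) for _ in range(2000)]
obs = [0.6 * p + random.gauss(0.0, 1.0) for p in precip]  # "observations"
sim = [0.6 * p for p in precip]                           # deterministic run
residuals = [o - s for o, s in zip(obs, sim)]

# Stochastic use: add a resampled residual to each deterministic output
stoch = [s + random.choice(residuals) for s in sim]

var_obs, var_sim = statistics.variance(obs), statistics.variance(sim)
var_stoch = statistics.variance(stoch)
```

The deterministic variance falls short of the observed variance by the residual variance, while the stochastic ensemble closes most of that gap, which is the distributional correction argued for in the abstract.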
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko
2011-03-17
We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted within two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules of the chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although numerous tools are available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourages experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
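The two frameworks contrasted in this abstract can be sketched for a single species with zeroth-order production (rate k) and first-order degradation (rate g·x): a reaction-rate ODE solved by forward Euler, and Gillespie's direct method as one realisation of the CME. This is an illustrative Python sketch, not the paper's MATLAB code, and the rate constants are made up.

```python
import random

random.seed(0)

# Production-degradation system: 0 -> X at rate k, X -> 0 at rate g*x.
# Deterministic steady state is k/g molecules; the CME's stationary
# distribution is Poisson with the same mean.
k, g = 10.0, 0.1   # steady state k/g = 100 molecules

def ode(T=200.0, dt=0.01):
    x = 0.0
    for _ in range(int(T / dt)):
        x += (k - g * x) * dt      # reaction rate equation, Euler step
    return x

def gillespie_mean(T=2000.0, burn_in=200.0):
    t, x, samples = 0.0, 0, []
    while t < T:
        a_prod, a_deg = k, g * x            # reaction propensities
        t += random.expovariate(a_prod + a_deg)
        x += 1 if random.random() < a_prod / (a_prod + a_deg) else -1
        if t > burn_in:
            samples.append(x)
    return sum(samples) / len(samples)

x_det = ode()             # deterministic steady state, close to 100
x_avg = gillespie_mean()  # stochastic long-run average, close to 100
```

For large molecule counts the two answers agree in the mean, which is why the deterministic ODE treatment is adequate there; at low copy numbers only the stochastic realisations capture the fluctuations.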
MIMO capacity for deterministic channel models: sublinear growth
DEFF Research Database (Denmark)
Bentosela, Francois; Cornean, Horia; Marchetti, Nicola
2013-01-01
In the current paper, we apply those results in order to study the (Shannon-Foschini) capacity behavior of a MIMO system as a function of the deterministic spread function of the environment and the number of transmitting and receiving antennas. The antennas are assumed to fill in a given fixed volume. Under some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict a sublinear behavior.
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. The distribution of translocation times of a given monomer is therefore controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation; we refer to this concept as ``fingerprinting''. The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt ~ m^1.5, is stronger than the thermal broadening, δt ~ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations
Analysis of deterministic cyclic gene regulatory network models with delays
Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian
2015-01-01
This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays, and a special case of a homogeneous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.
A deterministic model of nettle caterpillar life cycle
Syukriyah, Y.; Nuraini, N.; Handayani, D.
2018-03-01
Palm oil is an excellent product of the plantation sector in Indonesia, and palm oil productivity has considerable potential to increase every year. However, actual productivity is lower than this potential. Pests and diseases are the main factors that can reduce production levels by up to 40%. The presence of pests on plants can be caused by various factors, so anticipatory measures for controlling pest attacks should be prepared as early as possible. Caterpillars are the main pests of oil palm; nettle caterpillars are leaf eaters that can significantly decrease palm productivity. We construct a deterministic model that describes the life cycle of the caterpillar and its mitigation by a caterpillar predator. The equilibrium points of the model are analyzed. Numerical simulations are constructed to illustrate how the predator, as a natural enemy, affects the nettle caterpillar life cycle.
Comparative analysis of deterministic and probabilistic fracture mechanical assessment tools
Energy Technology Data Exchange (ETDEWEB)
Heckmann, Klaus [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Koeln (Germany); Saifi, Qais [VTT Technical Research Centre of Finland, Espoo (Finland)
2016-11-15
Uncertainties in material properties, manufacturing processes, loading conditions and damage mechanisms complicate the quantification of structural reliability. Probabilistic structural-mechanics computing codes serve as tools for assessing leak and break probabilities of nuclear piping components. Probabilistic fracture mechanics tools have been compared in different benchmark activities, usually revealing minor but systematic discrepancies between the results of different codes. In this joint paper, probabilistic fracture mechanics codes are compared. Crack initiation, crack growth and the influence of in-service inspections are analyzed. Example cases for stress corrosion cracking and fatigue under LWR conditions are analyzed. The evolution of annual failure probabilities during simulated operation time is investigated in order to identify the reasons for differences in the results of different codes. The comparison of the tools is used for further improvement of the codes applied by the partners.
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, in which it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, while the stochastic features of complex networks are captured by randomly initializing the ripple-spreading-related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading-related parameters to precisely describe a network topology, which is more memory-efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
Absorbing phase transitions in deterministic fixed-energy sandpile models
Park, Su-Chan
2018-03-01
We investigate the origin of the difference, which was noticed by Fey et al. [Phys. Rev. Lett. 104, 145703 (2010), 10.1103/PhysRevLett.104.145703], between the steady state density of an Abelian sandpile model (ASM) and the transition point of its corresponding deterministic fixed-energy sandpile model (DFES). Being deterministic, the configuration space of a DFES can be divided into two disjoint classes such that every configuration in one class should evolve into one of absorbing states, whereas no configurations in the other class can reach an absorbing state. Since the two classes are separated in terms of toppling dynamics, the system can be made to exhibit an absorbing phase transition (APT) at various points that depend on the initial probability distribution of the configurations. Furthermore, we show that in general the transition point also depends on whether an infinite-size limit is taken before or after the infinite-time limit. To demonstrate, we numerically study the two-dimensional DFES with Bak-Tang-Wiesenfeld toppling rule (BTW-FES). We confirm that there are indeed many thresholds. Nonetheless, the critical phenomena at various transition points are found to be universal. We furthermore discuss a microscopic absorbing phase transition, or a so-called spreading dynamics, of the BTW-FES, to find that the phase transition in this setting is related to the dynamical isotropic percolation process rather than self-organized criticality. In particular, we argue that choosing recurrent configurations of the corresponding ASM as an initial configuration does not allow for a nontrivial APT in the DFES.
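The two configuration classes described above, those that relax to an absorbing state and those that stay active forever, are easy to exhibit in a minimal fixed-energy BTW sandpile. The lattice size and initial conditions below are illustrative, not the paper's setup.

```python
# Minimal BTW fixed-energy sandpile (DFES) on an L x L torus: grains
# are conserved (no boundary dissipation), and every site holding at
# least 4 grains topples, sending one grain to each neighbour. The
# deterministic dynamics either reaches an absorbing configuration
# (no site with >= 4 grains) or stays active forever.
def relax(z, L, max_sweeps=50):
    """Sweep over active sites; return the number of sweeps needed to
    reach an absorbing state, or None if still active."""
    for sweep in range(max_sweeps):
        active = [(i, j) for i in range(L) for j in range(L) if z[i][j] >= 4]
        if not active:
            return sweep
        for i, j in active:
            n = z[i][j] // 4
            z[i][j] -= 4 * n
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                z[(i + di) % L][(j + dj) % L] += n
    return None

L = 16
low = [[2] * L for _ in range(L)]   # subcritical background density
low[0][0] += 4                      # local perturbation
sweeps_low = relax(low, L)          # activity dies out quickly

high = [[4] * L for _ in range(L)]  # every site active
sweeps_high = relax(high, L)        # stays active: returns None
```

On the uniform height-4 lattice every toppling is exactly compensated by the four incoming grains, so the configuration reproduces itself forever, a simple member of the non-absorbing class; which class a random initial condition falls into is precisely what sets the transition points discussed in the abstract.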
Fisher-Wright model with deterministic seed bank and selection.
Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel
2017-04-01
Seed banks are common characteristics of many plant species, allowing the storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection, coupled with a deterministic seed bank, assuming that the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path for approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection on the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached. Copyright © 2016 Elsevier Inc. All rights reserved.
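The above-ground dynamics can be sketched as a haploid Fisher-Wright model with selection (the seed bank is omitted here for brevity): allele A has relative fitness 1 + s, and each generation the allele count is binomially resampled with a selection-weighted frequency. Population size, selection coefficient and starting frequency below are illustrative.

```python
import random

random.seed(42)

# Haploid Fisher-Wright sketch with selection and no seed bank.
# Allele A (count i out of N) has relative fitness 1 + s.
def run(N=100, s=0.1, x0=0.2):
    """Evolve until fixation or loss of A; return (fixed, generations)."""
    i, t = int(x0 * N), 0
    while 0 < i < N:
        p = i * (1 + s) / (i * (1 + s) + (N - i))     # weighted frequency
        i = sum(1 for _ in range(N) if random.random() < p)  # Binomial(N, p)
        t += 1
    return i == N, t

runs = [run() for _ in range(200)]
fix_prob = sum(fixed for fixed, _ in runs) / len(runs)
```

With 2·N·s·x0 = 4, diffusion theory puts the fixation probability near 1 − e⁻⁴ ≈ 0.98, which the simulated fraction reproduces; the paper's seed bank would enter by delaying which past generation each sampled parent comes from, stretching the fixation times computed here.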
Deterministic and heuristic models of forecasting spare parts demand
Directory of Open Access Journals (Sweden)
Ivan S. Milojević
2012-04-01
Knowing the demand for spare parts is the basis for successful spare parts inventory management. Inventory management has two aspects. The first is operational management: acting according to certain models and making decisions in specific situations which could not have been foreseen or have not been encompassed by the models. The second aspect is optimization of the model parameters by means of inventory management. Supply item demand (asset demand) is the expression of customers' needs, in units, in the desired time, and it is one of the most important parameters in inventory management. The basic task of the supply system is demand fulfillment. In practice, demand is expressed through requisitions or requests. Given the conditions in which inventory management is considered, demand can be deterministic or stochastic, stationary or nonstationary, continuous or discrete, satisfied or unsatisfied. The applicable maintenance concept is determined by the technological level of development of the assets being maintained. For example, it is hard to imagine that the concept of self-maintenance can be applied to assets developed and put into use 50 or 60 years ago. Even less complex concepts cannot be applied to those vehicles that only have indicators of engine temperature, i.e. those that react only when the engine is overheated. This means that the maintenance concepts that can be applied are traditional preventive maintenance and corrective maintenance. In order to be applied to a real system, modeling and simulation methods require a completely regulated system, which is not the case with this spare parts supply system. Therefore, this method, which also enables model development, cannot be applied. Deterministic models of forecasting are almost exclusively related to the concept of preventive maintenance. Maintenance procedures are planned in advance, in accordance with exploitation and time resources. Since the timing
Cao, Pengxing; Tan, Xiahui; Donovan, Graham; Sanderson, Michael J; Sneyd, James
2014-08-01
The inositol trisphosphate receptor (IP3R) is one of the most important cellular components responsible for oscillations in the cytoplasmic calcium concentration. Over the past decade, two major questions about the IP3R have arisen. Firstly, how best should the IP3R be modeled? In other words, what fundamental properties of the IP3R allow it to perform its function, and what are their quantitative properties? Secondly, although calcium oscillations are caused by the stochastic opening and closing of small numbers of IP3Rs, is it possible for a deterministic model to be a reliable predictor of calcium behavior? Here, we answer these two questions, using airway smooth muscle cells (ASMC) as a specific example. Firstly, we show that periodic calcium waves in ASMC, as well as the statistics of calcium puffs in other cell types, can be quantitatively reproduced by a two-state model of the IP3R, and thus the behavior of the IP3R is essentially determined by its modal structure. The structure within each mode is irrelevant for function. Secondly, we show that, although calcium waves in ASMC are generated by a stochastic mechanism, IP3R stochasticity is not essential for a qualitative prediction of how oscillation frequency depends on model parameters, and thus deterministic IP3R models demonstrate the same level of predictive capability as do stochastic models. We conclude that, firstly, calcium dynamics can be accurately modeled using simplified IP3R models, and, secondly, to obtain qualitative predictions of how oscillation frequency depends on parameters it is sufficient to use a deterministic model.
Handbook of EOQ inventory problems stochastic and deterministic models and applications
Choi, Tsan-Ming
2013-01-01
This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
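The single-echelon building block of the EOQ-based problems the book covers is the classic trade-off between fixed ordering cost and holding cost, minimised at Q* = sqrt(2DK/h). The numbers below are illustrative.

```python
import math

# Classic economic order quantity (EOQ): demand rate D (units/period),
# fixed ordering cost K, and unit holding cost h (per unit per period)
# give the cost-minimising lot size Q* = sqrt(2*D*K/h).
def eoq(D, K, h):
    return math.sqrt(2.0 * D * K / h)

def annual_cost(Q, D, K, h):
    return D / Q * K + h * Q / 2.0   # ordering cost + holding cost

D, K, h = 1200.0, 50.0, 2.0          # illustrative values
Q_star = eoq(D, K, h)                # sqrt(60000), about 245 units
```

Because the total cost is convex in Q, ordering either 10% more or 10% less than Q* strictly raises the annual cost, which is the property the stochastic and multi-echelon extensions in the book build upon.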
OCA-P, a deterministic and probabilistic fracture-mechanics code for application to pressure vessels
International Nuclear Information System (INIS)
Cheverton, R.D.; Ball, D.G.
1984-05-01
The OCA-P code is a probabilistic fracture-mechanics code that was prepared specifically for evaluating the integrity of pressurized-water reactor vessels subjected to overcooling-accident loading conditions. The code has two-dimensional and some three-dimensional flaw capability; it is based on linear-elastic fracture mechanics; and it can treat cladding as a discrete region. Both deterministic and probabilistic analyses can be performed. For the former, it is possible to conduct a search for critical values of the fluence and the nil-ductility reference temperature corresponding to incipient initiation of the initial flaw. The probabilistic portion of OCA-P is based on Monte Carlo techniques, and the simulated parameters include fluence, flaw depth, fracture toughness, nil-ductility reference temperature, and the concentrations of copper, nickel, and phosphorus. Plotting capabilities include the construction of critical-crack-depth diagrams (deterministic analysis) and various histograms (probabilistic analysis)
Deterministic Modeling of the High Temperature Test Reactor
International Nuclear Information System (INIS)
Ortensi, J.; Cogliati, J.J.; Pope, M.A.; Ferrer, R.M.; Ougouag, A.M.
2010-01-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Power (NGNP) project. In order to examine INL's current prismatic-reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE, the hexagonal-z full-core solver used in this study, is based on the Green's function solution of the transverse-integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results of this study show a consistent bias of 2-3% in the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with those of other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control
Progress in nuclear well logging modeling using deterministic transport codes
International Nuclear Information System (INIS)
Kodeli, I.; Aldama, D.L.; Maucec, M.; Trkov, A.
2002-01-01
Further studies, in continuation of the work presented in 2001 in Portoroz, were performed in order to study and improve the performance, precision and domain of application of deterministic transport codes with respect to oil well logging analysis. These codes are in particular expected to complement the Monte Carlo solutions, since they can provide a detailed particle flux distribution over the whole geometry in a very reasonable CPU time; real-time calculation can be envisaged. The performance of deterministic transport methods was compared to that of the Monte Carlo method. The IRTMBA generic benchmark was analysed using the codes MCNP-4C and DORT/TORT. Concentric as well as eccentric casings were considered, using a 14 MeV point neutron source and NaI scintillation detectors. Neutron and gamma spectra were compared at two detector positions. (author)
Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.
Gomez, Christophe; Hartung, Niklas
2018-01-01
Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
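Framework (1) above, in its simplest homogeneous form, treats metastatic emission as a Poisson process: inter-emission times are exponential with rate lam, so the number of emissions on [0, T] is Poisson(lam·T). The rate below is illustrative, and secondary emission is omitted from this sketch.

```python
import random

random.seed(3)

# Homogeneous Poisson sketch of metastatic emission: the primary
# tumour emits at constant rate lam, so inter-emission times are
# Exp(lam) and the count on [0, T] is Poisson(lam * T).
def emission_times(lam, T):
    t, times = 0.0, []
    while True:
        t += random.expovariate(lam)
        if t > T:
            return times
        times.append(t)

lam, T = 0.5, 10.0
counts = [len(emission_times(lam, T)) for _ in range(2000)]
mean_count = sum(counts) / len(counts)   # close to lam * T = 5
```

The chapter's secondary-emission extension would let each emitted metastasis spawn its own Poisson stream, turning this into a branching process, and the deterministic size-structured PDE of framework (2) then arises as the expected-value description of the same mechanism.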
Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.
Kang, Yun; Lanchier, Nicolas
2011-06-01
We investigate the impact of Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the
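The weak- versus strong-dispersal regimes described above can be reproduced with a minimal symmetric two-patch sketch: local growth with a strong Allee effect (threshold A, carrying capacity K) plus linear dispersal D between patches. The growth function and all parameter values are illustrative, not the paper's model.

```python
# Two-patch sketch: dN_i/dt = f(N_i) + D * (N_j - N_i), where
# f(N) = r * N * (N/A - 1) * (1 - N/K) is bistable (strong Allee
# effect with threshold A and carrying capacity K). Forward Euler.
def simulate(D, N1=1.0, N2=0.0, r=1.0, A=0.3, K=1.0, T=100.0, dt=0.01):
    f = lambda N: r * N * (N / A - 1.0) * (1.0 - N / K)
    for _ in range(int(T / dt)):
        d1 = f(N1) + D * (N2 - N1)
        d2 = f(N2) + D * (N1 - N2)
        N1, N2 = N1 + d1 * dt, N2 + d2 * dt
    return N1, N2

N1w, N2w = simulate(D=0.01)  # weak dispersal: high/low coexistence
N1s, N2s = simulate(D=1.0)   # strong dispersal: patches synchronize
```

With weak dispersal the trickle of migrants into the empty patch stays below the Allee threshold, giving the high-density/low-density equilibrium; with strong dispersal the patches synchronize and the combined population, starting above threshold on average, expands globally.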
Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.
Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R
2018-05-01
A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
International Nuclear Information System (INIS)
Wang Zhi-Gang; Gao Rui-Mei; Fan Xiao-Ming; Han Qi-Xing
2014-01-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ₀, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ₀ is greater than 1 and the deterministic model obeys some conditions, then the disease prevails: the infective class persists and the endemic state is asymptotically stable in a feasible region. If ℛ₀ is less than or equal to 1, then the infective class disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. In addition, regarding the value of ℛ₀, when the stochastic system obeys some conditions and ℛ₀ is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations. (general)
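The threshold behavior in ℛ₀ can be sketched with a minimal one-group SIR model in which a fraction p of newborns is vaccinated; this is a simplification of the paper's multi-group MSIR setting, with illustrative rates.

```python
def simulate_sir(p, beta=0.5, gamma=0.1, mu=0.02, dt=0.1, steps=20000):
    """Euler integration of an SIR model with births, deaths, and a
    fraction p of newborns vaccinated (moved straight to the removed class).
    Returns the infective fraction at the end of the run."""
    s, i = (1.0 - p) * 0.99, 0.01
    for _ in range(steps):
        ds = mu * (1.0 - p) - beta * s * i - mu * s
        di = beta * s * i - (gamma + mu) * i
        s, i = s + dt * ds, i + dt * di
    return i

r0 = 0.5 / (0.1 + 0.02)                 # basic reproduction number, ~4.2
i_endemic = simulate_sir(p=0.0)         # (1 - p) * r0 > 1: disease persists
i_eradicated = simulate_sir(p=0.9)      # (1 - p) * r0 < 1: disease dies out
```

Vaccination rescales the effective reproduction number to (1 - p)ℛ₀, so the same threshold dichotomy appears with respect to the vaccination coverage.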
DEFF Research Database (Denmark)
Hansen, Lisbet Sneftrup; Borup, Morten; Moller, Arne
2014-01-01
drainage models and reduce a number of unavoidable discrepancies between the model and reality. The latter can be achieved partly by inserting measured water levels from the sewer system into the model. This article describes how deterministic updating of model states in this manner affects a simulation...
Szymanowski, Mariusz; Kryza, Maciej
2017-02-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for spatialization of air temperature, and their results in many studies have been shown to be better than those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better was the quality of spatial interpolation. The main goal of the paper was to examine both above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
Design of a deterministic link initialization mechanism for serial LVDS interconnects
International Nuclear Information System (INIS)
Schatral, S; Lemke, F; Bruening, U
2014-01-01
The Compressed Baryonic Matter experiment at FAIR in Darmstadt has special requirements for the Data Acquisition Network. One of them is deterministic latency for all the links from the back-end to the front-end, which enables synchronization in the whole read-out tree. Since the front-end electronics (FEE) contain mixed-signal circuits for processing the raw detector data, special ASICs were developed. DDR LVDS links are used to interconnect the FEEs and readout controllers. An adapted link initialization mechanism ensures determinism for them by balancing cable lengths, adjusting for phase differences, and handling environmental behavior. After re-initialization, timing must be accurate to the bit-clock level
On the effect of deterministic terms on the bias in stable AR models
van Giersbergen, N.P.A.
2004-01-01
This paper compares the first-order bias approximation for the autoregressive (AR) coefficients in stable AR models in the presence of deterministic terms. It is shown that the bias due to inclusion of an intercept and trend is twice as large as the bias due to an intercept. For the AR(1) model, the
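The doubling of the deterministic-term bias can be checked by simulation. The sketch below Monte-Carlo-estimates the small-sample bias of the OLS AR(1) coefficient when the regression includes an intercept, and when it also includes a linear trend; Kendall-type first-order approximations suggest roughly -(1 + 3ρ)/T and -(2 + 4ρ)/T respectively, so the deterministic-term contribution doubles. Parameter values are illustrative.

```python
import numpy as np

def ar1_bias(add_trend, rho=0.5, T=50, reps=4000, seed=0):
    """Monte Carlo bias of the OLS AR(1) coefficient when the regression
    includes an intercept (and optionally a linear time trend)."""
    rng = np.random.default_rng(seed)
    est = np.empty(reps)
    burn = 50                                  # discard start-up transient
    for rep in range(reps):
        e = rng.standard_normal(burn + T)
        y = np.empty(burn + T)
        y[0] = e[0]
        for t in range(1, burn + T):
            y[t] = rho * y[t - 1] + e[t]
        y = y[burn:]
        cols = [np.ones(T - 1)]
        if add_trend:
            cols.append(np.arange(1, T, dtype=float))
        cols.append(y[:-1])                    # lagged dependent variable
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
        est[rep] = beta[-1]
    return est.mean() - rho

bias_intercept = ar1_bias(add_trend=False)     # roughly -(1 + 3*rho)/T
bias_trend = ar1_bias(add_trend=True)          # roughly -(2 + 4*rho)/T
```

Both biases are negative, and adding the trend roughly doubles the deterministic-term part of the downward bias.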
Deterministic Model for Rubber-Metal Contact Including the Interaction Between Asperities
Deladi, E.L.; de Rooij, M.B.; Schipper, D.J.
2005-01-01
Rubber-metal contact involves relatively large deformations and large real contact areas compared to metal-metal contact. Here, a deterministic model is proposed for the contact between rubber and metal surfaces, which takes into account the interaction between neighboring asperities. In this model,
A deterministic mathematical model for bidirectional excluded flow with Langmuir kinetics.
Zarai, Yoram; Margaliot, Michael; Tuller, Tamir
2017-01-01
In many important cellular processes, including mRNA translation, gene transcription, phosphotransfer, and intracellular transport, biological "particles" move along some kind of "track". The motion of these particles can be modeled as a one-dimensional movement along an ordered sequence of sites. The biological particles (e.g., ribosomes or RNAPs) have volume and cannot surpass one another. In some cases, there is a preferred direction of movement along the track, but in general the movement may be bidirectional, and furthermore the particles may attach to or detach from various regions along the track. We derive a new deterministic mathematical model for such transport phenomena that may be interpreted as a dynamic mean-field approximation of an important model from statistical mechanics called the asymmetric simple exclusion process (ASEP) with Langmuir kinetics. Using tools from the theory of monotone dynamical systems and contraction theory, we show that the model admits a unique steady state, and that every solution converges to this steady state. Furthermore, we show that the model entrains (or phase locks) to periodic excitations in any of its forward, backward, attachment, or detachment rates. We demonstrate an application of this phenomenological transport model for analyzing ribosome drop off in mRNA translation.
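A dynamic mean-field approximation of this kind can be written directly as ODEs for the site occupancies ρᵢ. The sketch below (with simplified open boundaries and illustrative rates: p, q for forward/backward hopping, α, β for entry/exit, ka, kd for Langmuir attachment/detachment) relaxes the ODEs from two extreme initial profiles and checks that they reach the same steady state, in line with the global convergence result described in the abstract.

```python
def relax_asep_lk(rho, p=1.0, q=0.2, alpha=0.3, beta=0.3, ka=0.05, kd=0.05,
                  dt=0.01, steps=30000):
    """Euler relaxation of mean-field ODEs for a bidirectional exclusion
    process with Langmuir kinetics on an open n-site lattice."""
    n = len(rho)
    for _ in range(steps):
        new = rho[:]
        for i in range(n):
            # net current through the left bond (entry at the first site)
            left = alpha * (1.0 - rho[i]) if i == 0 else (
                p * rho[i - 1] * (1.0 - rho[i]) - q * rho[i] * (1.0 - rho[i - 1]))
            # net current through the right bond (exit at the last site)
            right = -beta * rho[i] if i == n - 1 else (
                q * rho[i + 1] * (1.0 - rho[i]) - p * rho[i] * (1.0 - rho[i + 1]))
            lk = ka * (1.0 - rho[i]) - kd * rho[i]   # attachment/detachment
            new[i] = rho[i] + dt * (left + right + lk)
        rho = new
    return rho

ss_from_empty = relax_asep_lk([0.0] * 20)
ss_from_full = relax_asep_lk([1.0] * 20)
gap = max(abs(a - b) for a, b in zip(ss_from_empty, ss_from_full))
```

Starting from an empty and a completely full lattice (the two extremes of the order used in monotone-systems arguments), both trajectories relax to the same occupancy profile.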
Energy Technology Data Exchange (ETDEWEB)
Kim, Jong Min; Lee, Bong Sang [KAERI, Daejeon (Korea, Republic of)
2016-05-15
In this study, a deterministic/probabilistic fracture mechanics analysis program for reactor pressure vessels, PROFAS-RV, is developed. This program can evaluate the failure probability of an RPV using the recent radiation embrittlement model of 10 CFR 50.61a and the stress intensity factor calculation method of the RCC-MRx code, in addition to the required basic functions of a PFM program. Applications of new radiation embrittlement models, a material database, calculation methods for stress intensity factors, and other features that can improve the fracture mechanics assessment of an RPV are introduced. The purpose of this study is to develop a probabilistic fracture mechanics (PFM) analysis program for RPVs incorporating the above modifications and applying newly developed models and calculation methods. This paper deals with the development progress of the PFM analysis program for RPVs, PROFAS-RV. PROFAS-RV is being tested against other codes and is expected to be revised and upgraded continuously to reflect the latest models and calculation methods. These efforts can minimize the uncertainty of the integrity evaluation of the reactor pressure vessel.
International Nuclear Information System (INIS)
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-01-01
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and the memory requirements of their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
International Nuclear Information System (INIS)
Brunner, W.; Focht, D.D.
1984-01-01
The kinetics of mineralization of carbonaceous substrates has been explained by a deterministic model which is applicable to either growth or nongrowth conditions in soils. The mixed-order nature of the model does not require a priori decisions about reaction order, discontinuity period of lag or stationary phase, or correction for endogenous mineralization rates. The integrated equation is simpler than the integrated form of the Monod equation because of the following: (i) only two, rather than four, interdependent constants have to be determined by nonlinear regression analysis, (ii) substrate or product formation can be expressed explicitly as a function of time, (iii) biomass concentration does not have to be known, and (iv) the required initial estimate for the nonlinear regression analysis can be easily obtained from a linearized form rather than from an interval estimate of a differential equation. ¹⁴CO₂ evolution data from soil have been fitted to the model equation. All data except those from irradiated soil gave better fits, as measured by residual sum of squares (RSS), when growth in soil was assumed to be linear (RSS = 0.71) rather than exponential (RSS = 2.87). The underlying reasons for growth (exponential versus linear), no growth, and relative degradation rates of substrates are consistent with the basic mechanisms from which the model is derived. 21 references
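A two-constant fit of this kind can be sketched as follows. The functional form P(t) = S0(1 - exp(-k1 t - k2 t²/2)) is an assumption paraphrased from mixed-order ("three-half-order") mineralization kinetics for the linear-growth case, and the coarse grid search stands in for a proper nonlinear regression; all numbers are synthetic and illustrative.

```python
import math

def mineralized(t, s0, k1, k2):
    # Assumed mixed-order mineralization curve (linear-growth case):
    # P(t) = S0 * (1 - exp(-k1*t - k2*t^2/2))
    return s0 * (1.0 - math.exp(-k1 * t - 0.5 * k2 * t * t))

s0, k1_true, k2_true = 100.0, 0.02, 0.004        # illustrative values
times = [float(t) for t in range(21)]             # e.g. days 0..20
data = [mineralized(t, s0, k1_true, k2_true) for t in times]

# Only the two rate constants k1 and k2 are estimated, as in the abstract.
best = None
for i in range(1, 41):
    for j in range(1, 41):
        k1, k2 = i * 0.001, j * 0.0002
        rss = sum((mineralized(t, s0, k1, k2) - d) ** 2
                  for t, d in zip(times, data))
        if best is None or rss < best[0]:
            best = (rss, k1, k2)
rss, k1_hat, k2_hat = best
```

On noise-free synthetic data the grid search recovers the generating constants; on real ¹⁴CO₂ data one would compare RSS across the linear- and exponential-growth variants, as the authors do.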
Degli Esposti, M.; Giardinà, C.; Graffi, S.; Isola, S.
2001-01-01
We consider the zero-temperature dynamics for the infinite-range, non-translation-invariant one-dimensional spin model introduced by Marinari, Parisi and Ritort to generate glassy behaviour out of a deterministic interaction. It is argued that there can be a large number of metastable (i.e.,
On competition in a Stackelberg location-design model with deterministic supplier choice
Hendrix, E.M.T.
2016-01-01
We study a market situation where two firms maximize market capture by deciding on the location in the plane and investing in a competing quality against investment cost. Clients choose one of the suppliers; i.e. deterministic supplier choice. To study this situation, a game theoretic model is
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
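The idea behind DUA (propagating parameter uncertainty through derivatives instead of brute-force sampling) can be sketched on a standard borehole-flow test function, a widely used benchmark of this kind; the exact model and input distributions used in the paper may differ, and the 1% standard deviations below are illustrative assumptions.

```python
import math, random

def borehole(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    # Classic borehole water-flow test function.
    lnr = math.log(r / rw)
    return (2.0 * math.pi * Tu * (Hu - Hl) /
            (lnr * (1.0 + 2.0 * L * Tu / (lnr * rw ** 2 * Kw) + Tu / Tl)))

nominal = dict(rw=0.10, r=500.0, Tu=89335.0, Hu=1050.0,
               Tl=89.55, Hl=760.0, L=1400.0, Kw=10950.0)
sigmas = {k: 0.01 * v for k, v in nominal.items()}   # 1% std devs (assumed)

# Deterministic propagation: first-order variance from central differences.
y0 = borehole(**nominal)
var_y = 0.0
for k, s in sigmas.items():
    hi = dict(nominal); hi[k] += s
    lo = dict(nominal); lo[k] -= s
    dy_times_sigma = (borehole(**hi) - borehole(**lo)) / 2.0   # (df/dx) * sigma
    var_y += dy_times_sigma ** 2
sd_deterministic = math.sqrt(var_y)

# Brute-force Monte Carlo check of the derivative-based estimate.
rng = random.Random(0)
samples = [borehole(**{k: rng.gauss(v, sigmas[k]) for k, v in nominal.items()})
           for _ in range(20000)]
mean = sum(samples) / len(samples)
sd_mc = math.sqrt(sum((x - mean) ** 2 for x in samples) / (len(samples) - 1))
```

The derivative-based standard deviation needs only two model runs per parameter, versus thousands of runs for the sampling estimate, which is the trade-off the paper quantifies.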
From Ordinary Differential Equations to Structural Causal Models: the deterministic case
Mooij, J.M.; Janzing, D.; Schölkopf, B.; Nicholson, A.; Smyth, P.
2013-01-01
We show how, and under which conditions, the equilibrium states of a first-order Ordinary Differential Equation (ODE) system can be described with a deterministic Structural Causal Model (SCM). Our exposition sheds more light on the concept of causality as expressed within the framework of
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
of stochastic origin can be observed in experiments. The models include a new approach to the platinum phase transition, which allows for a unification of existing models for Pt(100) and Pt(110). The rich nonlinear dynamical behavior of the macroscopic reaction kinetics is investigated and shows good agreement...
Deterministic and stochastic trends in the Lee-Carter mortality model
DEFF Research Database (Denmark)
Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene
2015-01-01
The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model with age...... mortality data. We find empirical evidence that this feature of the Lee–Carter model overly restricts the system dynamics and we suggest to separate the deterministic and stochastic time series components at the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee......–Carter model will otherwise overestimate the reduction of mortality for the younger age groups and will underestimate the reduction of mortality for the older age groups. In practice, our recommendation means that the Lee–Carter model instead of a one-factor model should be formulated as a two- (or several...
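The classical one-factor Lee-Carter structure log m(x,t) = a(x) + b(x)k(t) that the authors generalize can be fitted by a rank-1 SVD of the centered log-mortality surface. The sketch below recovers known parameters from a synthetic surface; all numbers are illustrative.

```python
import numpy as np

def lee_carter_fit(log_m):
    """Fit log m[x, t] ~ a[x] + b[x] * k[t] by a rank-1 SVD,
    with the common normalization sum(b) = 1."""
    a = log_m.mean(axis=1)                       # age-specific level
    U, S, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b, k = U[:, 0] * S[0], Vt[0]
    scale = b.sum()                              # sign/scale normalization
    return a, b / scale, k * scale

# Synthetic mortality surface from known parameters (illustrative numbers).
ages, years = 10, 40
a_true = np.linspace(-8.0, -2.0, ages)
b_true = np.linspace(0.15, 0.05, ages)
b_true /= b_true.sum()
k_true = np.linspace(20.0, -20.0, years)         # declining mortality index
log_m = a_true[:, None] + b_true[:, None] * k_true[None, :]

a, b, k = lee_carter_fit(log_m)
recon = a[:, None] + b[:, None] * k[None, :]
err = float(np.abs(recon - log_m).max())
```

Because the synthetic surface is exactly rank-1 after centering, the SVD recovers a, b and k up to the usual normalization; the paper's argument is that forcing one factor like this is too restrictive for real data.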
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
Mitry, Mina
Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
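A minimal linear ROSM of the kind described (principal component reduction of the output snapshots, followed by radial basis function interpolation of the reduced coordinates) can be sketched as follows; the toy "simulation", sampling plan, kernel width, and mode count are all illustrative assumptions.

```python
import numpy as np

def rbf_fit(X, F, eps=20.0):
    # Gaussian RBF interpolation of (possibly vector-valued) data F at sites X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.linalg.solve(np.exp(-eps * d2), F)

def rbf_eval(X, W, x, eps=20.0):
    d2 = ((X - x) ** 2).sum(-1)
    return np.exp(-eps * d2) @ W

# Stand-in for an expensive simulation: a 200-dimensional output field
# depending on two design parameters.
grid = np.linspace(0.0, 1.0, 200)
def simulate(p):
    return p[1] * np.sin(2.0 * np.pi * (grid - p[0])) * np.exp(-0.5 * grid)

rng = np.random.default_rng(1)
X_train = rng.uniform(0.2, 0.8, size=(50, 2))
Y = np.array([simulate(p) for p in X_train])       # 50 snapshots x 200 dims

# Linear reduction: PCA (via SVD) of the centered snapshot matrix.
mean = Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Y - mean, full_matrices=False)
modes = Vt[:5]                                     # keep 5 principal modes
coeffs = (Y - mean) @ modes.T                      # reduced coordinates

W = rbf_fit(X_train, coeffs)                       # surrogate in reduced space

x_new = np.array([0.5, 0.5])
y_surrogate = mean + rbf_eval(X_train, W, x_new) @ modes
err = float(np.abs(y_surrogate - simulate(x_new)).max())
```

The surrogate interpolates a handful of reduced coordinates instead of the full 200-dimensional field, which is the computational saving ROSM targets; the kernel variant of the thesis replaces the linear PCA step with kernel PCA.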
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. This paper is applicable to low-level radioactive waste disposal system performance assessment
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko; Steyn, Douw G.
2011-01-01
formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution-the Lower Fraser valley of British Columbia, Canada. As a predictive tool, we demonstrate
Sensitivity analysis technique for application to deterministic models
International Nuclear Information System (INIS)
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-01-01
The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method
Reasoning with probabilistic and deterministic graphical models exact algorithms
Dechter, Rina
2013-01-01
Graphical models (e.g., Bayesian and constraint networks, influence diagrams, and Markov decision processes) have become a central paradigm for knowledge representation and reasoning in both artificial intelligence and computer science in general. These models are used to perform many reasoning tasks, such as scheduling, planning and learning, diagnosis and prediction, design, hardware and software verification, and bioinformatics. These problems can be stated as the formal tasks of constraint satisfaction and satisfiability, combinatorial optimization, and probabilistic inference. It is well
Liu, Xiangdong; Li, Qingze; Pan, Jianxin
2018-06-01
Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease conditions for months to years, which means the population of tumor cells remains nearly unchanged for quite a long time after fighting against the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed to characterize the dynamical change of tumor cells and immune cells in this paper. The basic dynamical properties, such as boundedness, existence and stability of equilibrium points, are investigated in the deterministic model. Extended stochastic models include a stochastic differential equation (SDE) model and a continuous-time Markov chain (CTMC) model, which account for the variability in cellular reproduction, growth and death, interspecific competition, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, which confirm the obtained theoretical results.
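A one-dimensional caricature of the stochastic extension (not the authors' full tumor-immune system) is logistic tumor growth with a constant chemotherapy kill rate and multiplicative noise, integrated by Euler-Maruyama; all parameters are illustrative.

```python
import math, random

def simulate(sigma, seed=0, r=0.5, K=1.0, delta=0.2, dt=0.01, steps=10000):
    """Euler-Maruyama for dT = [r T (1 - T/K) - delta T] dt + sigma T dW,
    where delta is a constant chemotherapy-induced death rate."""
    rng = random.Random(seed)
    T = 0.1
    for _ in range(steps):
        drift = r * T * (1.0 - T / K) - delta * T
        T += drift * dt + sigma * T * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        T = max(T, 0.0)                  # population cannot go negative
    return T

T_det = simulate(sigma=0.0)              # deterministic limit: -> K(1 - delta/r)
T_paths = [simulate(sigma=0.1, seed=s) for s in range(100)]
mean_T = sum(T_paths) / len(T_paths)
```

With sigma = 0 the path settles at the deterministic equilibrium K(1 - delta/r) = 0.6; with noise, paths fluctuate around it, and estimating the probability that the population actually hits zero is what the CTMC formulation of the paper is for (the SDE with multiplicative noise never reaches zero exactly).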
A deterministic aggregate production planning model considering quality of products
International Nuclear Information System (INIS)
Madadi, Najmeh; Wong, Kuan Yew
2013-01-01
Aggregate Production Planning (APP) is medium-term planning concerned with the lowest-cost method of production planning to meet customers' requirements and to satisfy fluctuating demand over a planning time horizon. The APP problem has been studied widely since it was introduced and formulated in the 1950s. However, in most of the studies conducted in the APP area, researchers have concentrated on common objectives such as minimization of cost, of fluctuation in the number of workers, and of inventory level. Specifically, maintaining quality at a desirable level as an objective while minimizing cost has not been considered in previous studies. In this study, an attempt has been made to develop a multi-objective mixed integer linear programming model that serves those companies aiming to incur the minimum level of operational cost while maintaining quality at an acceptable level. In order to obtain the solution to the multi-objective model, the Fuzzy Goal Programming approach and the max-min operator of Bellman-Zadeh were applied to the model. At the final step, IBM ILOG CPLEX Optimization Studio software was used to obtain the experimental results based on the data collected from an automotive parts manufacturing company. The results show that incorporating quality in the model imposes some costs; however, a trade-off should be made between the cost resulting from producing products with higher quality and the cost that the firm may incur due to customer dissatisfaction and sales losses.
Deterministic integer multiple firing depending on initial state in Wang model
Energy Technology Data Exchange (ETDEWEB)
Xie Yong [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China)]. E-mail: yxie@mail.xjtu.edu.cn; Xu Jianxue [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China); Jiang Jun [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China)
2006-12-15
We investigate numerically the dynamical behaviour of the Wang model, which describes the rhythmic activities of thalamic relay neurons. The model neuron exhibits Type I excitability from a global view, but Type II excitability from a local view. There exists a narrow range of bistability, in which a subthreshold oscillation and a suprathreshold firing behaviour coexist. A special firing pattern, integer multiple firing, can be found in a certain part of the bistable range. The characteristic feature of such a firing pattern is that the histogram of interspike intervals has a multipeaked structure, with the peaks located at about integer multiples of a basic interspike interval. Since the Wang model is noise-free, the integer multiple firing is a deterministic firing pattern. The existence of bistability leads to the deterministic integer multiple firing depending on the initial state of the model neuron, i.e., the initial values of the state variables.
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
International Nuclear Information System (INIS)
Gilbert, R.O.; Bittner, E.A.; Essington, E.H.
1995-01-01
This paper illustrates the use of Monte Carlo parameter uncertainty and sensitivity analyses to test hypotheses regarding predictions of deterministic models of environmental transport, dose, risk and other phenomena. The methodology is illustrated by testing whether ²³⁸Pu is transferred more readily than ²³⁹⁺²⁴⁰Pu from the gastrointestinal (GI) tract of cattle to their tissues (muscle, liver and blood). This illustration is based on a study wherein beef-cattle grazed for up to 1064 days on a fenced plutonium (Pu)-contaminated arid site in Area 13 near the Nevada Test Site in the United States. Periodically, cattle were sacrificed and their tissues analyzed for Pu and other radionuclides. Conditional sensitivity analyses of the model predictions were also conducted. These analyses indicated that Pu cattle tissue concentrations had the largest impact of any model parameter on the pdf of predicted Pu fractional transfers. Issues that arise in conducting uncertainty and sensitivity analyses of deterministic models are discussed. (author)
Randi, Joseph A., III
2005-12-01
This thesis makes use of microindentation, nanoindentation and nanoscratching methods to better understand the mechanical properties of single crystalline silicon, calcium fluoride, and magnesium fluoride. These properties are measured and are used to predict the material's response to material removal, specifically by grinding and polishing, which is a combination of elastic, plastic and fracture processes. The hardness anisotropy during Knoop microindentation, hardness from nanoindentation, and scratch morphology from nanoscratching are reported. This information is related to the surface microroughness from grinding. We show that mechanical property relationships that predict the surface roughness from lapping and deterministic microgrinding of optical glasses are applicable to single crystals. We show the range of hardness from some of the more common crystallographic faces. Magnesium fluoride, having a tetragonal structure, has 2-fold hardness anisotropy. Nanoindentation, as expected, provides higher hardness than microindentation, but anisotropy is not observed. Nanoscratching provides the scratch profile during loading, after the load has been removed, and the coefficient of friction during the loading. Ductile and brittle mode scratching is present, with brittle mode cracking being orientation specific. Subsurface damage (SSD) measurements are made using a novel process known as the MRF technique. Magnetorheological finishing is used to polish spots into the ground surface where SSD can be viewed. SSD is measured using an optical microscope and knowledge of the spot profile. This technique is calibrated with a previous technique and implemented to accurately measure SSD in single crystals. The data collected are compared to the surface microroughness of the ground surface, resulting in an upper bound relationship. The results indicate that SSD is always less than 1.4 times the peak-to-valley surface microroughness for single crystals regardless of the
Deterministic Method for Obtaining Nominal and Uncertainty Models of CD Drives
DEFF Research Database (Denmark)
Vidal, Enrique Sanchez; Stoustrup, Jakob; Andersen, Palle
2002-01-01
In this paper a deterministic method for obtaining the nominal and uncertainty models of the focus loop in a CD-player is presented based on parameter identification and measurements in the focus loop of 12 actual CD drives that differ by having worst-case behaviors with respect to various...... properties. The method provides a systematic way to derive a nominal average model as well as a structured multiplicative input uncertainty model, and it is demonstrated how to apply mu-theory to design a controller based on the models obtained that meets certain robust performance criteria....
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcao
2015-01-01
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with
Energy Technology Data Exchange (ETDEWEB)
Charbonnier, D.
2004-12-15
The physical phenomena observed in turbomachines are generally three-dimensional and unsteady. A recent study revealed that a three-dimensional steady simulation can reproduce the time-averaged unsteady phenomena, since the steady flow field equations integrate deterministic stresses. The objective of this work is thus to develop an unsteady deterministic stresses model. The analogy with turbulence makes it possible to write transport equations for these stresses. The equations are implemented in a steady flow solver, and a model for the energy deterministic fluxes is also developed and implemented. Finally, this work shows that a three-dimensional steady simulation, by taking into account unsteady effects with transport equations of deterministic stresses, increases the computing time by only approximately 30 %, which remains very interesting compared to an unsteady simulation. (author)
Hilbig, Benjamin E; Moshagen, Morten
2014-12-01
Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.
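A toy illustration of the fit-versus-complexity trade-off behind such model comparisons. Note this uses a BIC-style penalty as a crude stand-in for the full minimum-description-length criterion of the paper, and all counts and parameter structures are hypothetical:

```python
import math

def binom_ll(hits, n, p):
    # log-likelihood of `hits` out of `n` choices under probability p
    return hits * math.log(p) + (n - hits) * math.log(1 - p)

def description_length(log_lik, k, n):
    # BIC-style description length: -log L + (k/2) ln n
    return -log_lik + 0.5 * k * math.log(n)

# Hypothetical choices on two item types: (choices matching strategy A, trials)
data = [(52, 60), (50, 60)]
tot_hits = sum(h for h, _ in data)
tot_n = sum(n for _, n in data)

# Deterministic strategy A with one constant application-error rate:
e = 1 - tot_hits / tot_n
ll_det = sum(binom_ll(h, n, 1 - e) for h, n in data)
dl_det = description_length(ll_det, k=1, n=tot_n)

# Probabilistic model with a free choice probability per item type:
ll_prob = sum(binom_ll(h, n, h / n) for h, n in data)
dl_prob = description_length(ll_prob, k=2, n=tot_n)

# With nearly identical hit rates the extra parameter does not pay for
# its code length, so the simpler deterministic account wins:
best = "deterministic" if dl_det < dl_prob else "probabilistic"
```

The probabilistic model always fits at least as well (its per-type probabilities are maximum-likelihood), but the description length charges it for the extra parameter, which is the essence of the comparison method.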
Bottom-up learning of hierarchical models in a class of deterministic POMDP environments
Directory of Open Access Journals (Sweden)
Itoh Hideaki
2015-09-01
Full Text Available The theory of partially observable Markov decision processes (POMDPs) is a useful tool for developing various intelligent agents, and learning hierarchical POMDP models is one of the key approaches for building such agents when the environments of the agents are unknown and large. To learn hierarchical models, bottom-up learning methods in which learning takes place in a layer-by-layer manner from the lowest to the highest layer are already extensively used in some research fields such as hidden Markov models and neural networks. However, little attention has been paid to bottom-up approaches for learning POMDP models. In this paper, we present a novel bottom-up learning algorithm for hierarchical POMDP models and prove that, by using this algorithm, a perfect model (i.e., a model that can perfectly predict future observations can be learned at least in a class of deterministic POMDP environments.
Kouteva, M; Paskaleva, I; Romanelli, F
2003-01-01
An analytical deterministic technique, based on the detailed knowledge of the seismic source process and of the propagation of seismic waves, has been applied to generate synthetic seismic signals at Russe, NE Bulgaria, associated to the strongest intermediate-depth Vrancea earthquakes, which occurred during the last century (1940, 1977, 1986 and 1990). The obtained results show that all ground motion components contribute significantly to the seismic loading and that the seismic source parameters influence the shape and the amplitude of the seismic signal. The approach we used proves that realistic seismic input (also at remote distances) can be constructed via waveform modelling, considering all the possible factors influencing the ground motion.
International Nuclear Information System (INIS)
Kouteva, M.; Paskaleva, I.; Panza, G.F.; Romanelli, F.
2003-06-01
An analytical deterministic technique, based on the detailed knowledge of the seismic source process and of the propagation of seismic waves, has been applied to generate synthetic seismic signals at Russe, NE Bulgaria, associated to the strongest intermediate-depth Vrancea earthquakes, which occurred during the last century (1940, 1977, 1986 and 1990). The obtained results show that all ground motion components contribute significantly to the seismic loading and that the seismic source parameters influence the shape and the amplitude of the seismic signal. The approach we used proves that realistic seismic input (also at remote distances) can be constructed via waveform modelling, considering all the possible factors influencing the ground motion. (author)
A deterministic combination of numerical and physical models for coastal waves
DEFF Research Database (Denmark)
Zhang, Haiwen
2006-01-01
Numerical and physical modelling are the two main tools available for predicting the influence of water waves on coastlines and structures placed in the near-shore environment. Numerical models can cover large areas at the correct scale, but are limited in their ability to capture strong nonlinearities, wave breaking, splash, mixing, and other such complicated physics. Physical models naturally include the real physics (at the model scale), but are limited by the physical size of the facility and must contend with the fact that different physical effects scale differently. An integrated use of numerical and physical modelling hence provides an attractive alternative to the use of either tool on its own. The goal of this project has been to develop a deterministically combined numerical/physical model where the physical wave tank is enclosed in a much larger computational domain, and the two......
FTR power-to-melt study. Phase I. Evaluation of deterministic models
International Nuclear Information System (INIS)
1978-09-01
SIEX is an HEDL fuel thermal performance code. It is designed to be a fast running code for steady state thermal performance analysis of coolant, cladding and fuel temperature throughout the history of a fuel element. The need to have a good predictive model coupled with short running time in the probabilistic analysis has made SIEX one of the potential deterministic models to be adopted. The probabilistic code to be developed will be a general thermal performance code acceptable on a national basis. It is, therefore, necessary to ensure that the physical model meets the requirements of an analytical tool. Since SIEX incorporates some physically-based correlated models, its general validity and limitations should be evaluated before being adopted
Directory of Open Access Journals (Sweden)
A. Campanile
2018-01-01
Full Text Available The incidence of collision damage models on oil tanker and bulk carrier reliability is investigated considering the IACS deterministic model against GOALDS/IMO database statistics for collision events, substantiating the probabilistic model. Statistical properties of hull girder residual strength are determined by Monte Carlo simulation, based on random generation of damage dimensions and a modified form of incremental-iterative method, to account for neutral axis rotation and equilibrium of horizontal bending moment, due to cross-section asymmetry after collision events. Reliability analysis is performed, to investigate the incidence of collision penetration depth and height statistical properties on hull girder sagging/hogging failure probabilities. Besides, the incidence of corrosion on hull girder residual strength and reliability is also discussed, focussing on gross, hull girder net and local net scantlings, respectively. The ISSC double hull oil tanker and single side bulk carrier, assumed as test cases in the ISSC 2012 report, are taken as reference ships.
International Nuclear Information System (INIS)
Maerker, R.E.; Worley, B.A.
1989-01-01
Interest in research into the field of uncertainty analysis has recently been stimulated as a result of a need in high-level waste repository design assessment for uncertainty information in the form of response complementary cumulative distribution functions (CCDFs) to show compliance with regulatory requirements. The solution to this problem must obviously rely on the analysis of computer code models, which, however, employ parameters that can have large uncertainties. The motivation for the research presented in this paper is a search for a method involving a deterministic uncertainty analysis approach that could serve as an improvement over those methods that make exclusive use of statistical techniques. A deterministic uncertainty analysis (DUA) approach based on the use of first derivative information is the method studied in the present work. The present method has been applied to a high-level nuclear waste repository problem involving use of the codes ORIGEN2, SAS, and BRINETEMP in series, and the resulting CCDF of a BRINETEMP result of interest is compared with that obtained through a completely statistical analysis
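The first-derivative idea behind such a deterministic uncertainty analysis can be sketched in a few lines. The response function and parameter values below are purely illustrative stand-ins, not the actual ORIGEN2/SAS/BRINETEMP chain:

```python
def dua_variance(model, params, sigmas, h=1e-6):
    """First-order uncertainty propagation:
    Var[R] ~= sum_i (dR/dp_i)^2 * sigma_i^2,
    with the derivatives obtained by forward finite differences."""
    base = model(params)
    var = 0.0
    for i, (p, s) in enumerate(zip(params, sigmas)):
        bumped = list(params)
        bumped[i] = p + h
        dRdp = (model(bumped) - base) / h   # sensitivity dR/dp_i
        var += (dRdp * s) ** 2
    return var

# Hypothetical response standing in for a repository temperature model:
def response(p):
    q, k = p           # heat load and conductivity, illustrative only
    return q / k       # simple steady-state temperature rise

var = dua_variance(response, [100.0, 2.0], [5.0, 0.1])
# analytic first-order result: (1/2 * 5)^2 + (100/4 * 0.1)^2 = 12.5
```

A full CCDF would then follow by sampling or by assuming a distributional form around this first-order variance, which is where the deterministic approach trades cost against the purely statistical one.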
Pest persistence and eradication conditions in a deterministic model for sterile insect release.
Gordillo, Luis F
2015-01-01
The release of sterile insects is an environment-friendly pest control method used in integrated pest management programmes. Difference or differential equations based on Knipling's model often provide satisfactory qualitative descriptions of pest populations subject to sterile release at relatively high densities with large mating encounter rates, but fail otherwise. In this paper, I derive and explore numerically deterministic population models that include sterile release together with scarce mating encounters in the particular case of species with long lifespan and multiple matings. The differential equations account separately for the effects of mating failure due to sterile male release and the frequency of mating encounters. When the insects' spatial spread is incorporated through diffusion terms, computations reveal the possibility of steady pest persistence in finite-size patches. In the presence of density-dependent regulation, it is observed that sterile release might help induce sudden suppression of the pest population.
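A minimal Knipling-type sketch of the fertile-mating-fraction mechanism can be integrated numerically. The rate structure and parameter values here are assumptions for illustration, not the paper's equations:

```python
def simulate(N0, S, r=0.6, mu=0.3, dt=0.01, T=100.0):
    """Forward-Euler run of a minimal Knipling-type model:
        dN/dt = r * N * N/(N + S) - mu * N
    where N/(N+S) is the chance that a female mates a fertile male
    when S sterile males are maintained in the population."""
    N = N0
    for _ in range(int(T / dt)):
        dN = r * N * (N / (N + S)) - mu * N
        N += dt * dN
    return N

# For these parameters the unstable threshold is N* = S*mu/(r-mu) = S:
# below it the wild population collapses, above it release fails.
low_release = simulate(N0=100.0, S=10.0)     # N0 above threshold: pest grows
high_release = simulate(N0=100.0, S=500.0)   # N0 below threshold: extinction
```

This reproduces the qualitative bistability that makes release size critical; the paper's contribution is precisely to refine such models when mating encounters are scarce.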
Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine
2017-06-01
One of the most hazardous physical polluting agents, considering its effects on human health, is acoustical noise. Airports are a strong source of acoustical noise, due to the airplanes' turbines, to the aerodynamic noise of transits, to the acceleration or braking during the take-off and landing phases of aircraft, to the road traffic around the airport, etc. The monitoring and prediction of the acoustical level emitted by airports can be very useful to assess the impact on human health and activities. In the airport noise scenario, thanks to flight scheduling, the predominant sources may have a periodic behaviour. Thus, a Time Series Analysis approach can be adopted, considering that a general trend and a seasonal behaviour can be highlighted and used to build a predictive model. In this paper, two different approaches are adopted, and two predictive models are constructed and tested. The first model is based on deterministic decomposition and is built by composing the trend, that is, the long-term behaviour; the seasonality, that is, the periodic component; and the random variations. The second model is based on a seasonal autoregressive moving average, and it belongs to the stochastic class of models. The two models are fitted on an acoustical level dataset collected close to the Nice (France) international airport. Results are encouraging and show good prediction performance for both adopted strategies. A residual analysis is performed in order to quantify the forecasting error features.
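The deterministic-decomposition approach (trend plus seasonal component plus residual) can be sketched as follows. The data below are synthetic; the actual airport dataset and the fitted model details are not reproduced here:

```python
def decompose(series, period):
    """Additive deterministic decomposition (sketch):
    series[t] ~ trend[t] + seasonal[t % period] + residual[t],
    with a least-squares linear trend and seasonal phase averages."""
    n = len(series)
    tbar = (n - 1) / 2
    ybar = sum(series) / n
    num = sum((t - tbar) * (y - ybar) for t, y in enumerate(series))
    den = sum((t - tbar) ** 2 for t in range(n))
    slope = num / den
    intercept = ybar - slope * tbar
    detrended = [y - (intercept + slope * t) for t, y in enumerate(series)]
    seasonal = [sum(detrended[s::period]) / len(detrended[s::period])
                for s in range(period)]
    return slope, intercept, seasonal

def forecast(t, slope, intercept, seasonal):
    return intercept + slope * t + seasonal[t % len(seasonal)]

# Synthetic "noise level" data: slow upward trend plus a weekend bump
data = [60 + 0.1 * t + (5 if t % 7 in (5, 6) else 0) for t in range(28)]
slope, intercept, seasonal = decompose(data, period=7)
pred = forecast(30, slope, intercept, seasonal)   # out-of-sample prediction
```

The stochastic alternative in the paper (seasonal ARMA) models the residual correlation structure instead of treating trend and seasonality as fixed deterministic functions.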
International Nuclear Information System (INIS)
Hwangbo, Soonho; Lee, In-Beum; Han, Jeehoon
2014-01-01
Many networks are constructed in a large-scale industrial complex. Each network meets its demands through production or transportation of the materials needed by the companies in the network. A network either produces materials directly to satisfy demands or purchases them from outside, depending on demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks in particular are typical major networks in a large-scale industrial complex. Many studies have focused mainly on minimizing the total cost or optimizing the network structure, but little research has tried to build an integrated network model connecting the utility network and the hydrogen network. In this study, a deterministic mixed-integer linear programming model is developed for integrating the utility network and the hydrogen network. A steam methane reforming process is necessary for combining the two networks. Hydrogen produced by the steam methane reforming process, whose raw material is steam vented from the utility network, enters the hydrogen network and fulfills its needs. The proposed model can suggest an optimized configuration for the integrated network, an optimized blueprint, and the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which hosts one of the biggest petrochemical complexes and whose data underlie various previous papers. In a case study, the integrated network model yields conclusions closer to the optimum than previous results obtained by studying the utility network and the hydrogen network individually.
Modelling the protocol stack in NCS with deterministic and stochastic petri net
Hui, Chen; Chunjie, Zhou; Weifeng, Zhu
2011-06-01
The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and improved system performance. Field testing is unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework for the protocol stack precludes global optimisation for protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time-constraint, task-interrelation, processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design can help overcome the inadequacy of global optimisation through information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.
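The token-game core of a Petri-net model can be sketched in a few lines. This is untimed and deterministic; the DSPN timing and stochastic firing semantics, and the real CAN scheduling details, are omitted, and the place/transition names are illustrative only:

```python
class PetriNet:
    """Minimal Petri-net token game (places hold tokens; a transition
    fires when every input place holds enough tokens)."""
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), name
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# CAN-like sketch: a queued message seizes the bus, transmits, releases it
net = PetriNet({"queued": 3, "bus_free": 1, "sent": 0})
net.add_transition("start_tx", {"queued": 1, "bus_free": 1}, {"txing": 1})
net.add_transition("end_tx", {"txing": 1}, {"sent": 1, "bus_free": 1})
while net.enabled("start_tx"):
    net.fire("start_tx")
    net.fire("end_tx")
```

A DSPN tool would attach deterministic or exponentially distributed firing delays to such transitions and analyse the resulting marking process, rather than just playing the token game.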
Modeling a TRIGA Mark II reactor using the Attila three-dimensional deterministic transport code
International Nuclear Information System (INIS)
Keller, S.T.; Palmer, T.S.; Wareing, T.A.
2005-01-01
A benchmark model of a TRIGA reactor constructed using materials and dimensions similar to existing TRIGA reactors was analyzed using MCNP and the recently developed deterministic transport code Attila™. The benchmark reactor requires no MCNP modeling approximations, yet is sufficiently complex to validate the new modeling techniques. Geometric properties of the benchmark reactor are specified for use by Attila™ with CAD software. Materials are treated individually in MCNP. Materials used in Attila™ that are clad are homogenized. Attila™ uses multigroup energy discretization. Two cross section libraries were constructed for comparison. A 16 group library collapsed from the SCALE 4.4.a 238 group library provided better results than a seven group library calculated with WIMS-ANL. Values of the k-effective eigenvalue and scalar flux as a function of location and energy were calculated by the two codes. The calculated values for k-effective and spatially averaged neutron flux were found to be in good agreement. Flux distribution by space and energy also agreed well. Attila™ results could be improved with increased spatial and angular resolution and revised energy group structure. (authors)
International Nuclear Information System (INIS)
Boyer, D; Miramontes, O; Larralde, H
2009-01-01
Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ∼ k^(−β), in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
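The greedy rule can be sketched directly. The cost function (distance divided by weight) and all parameter values below are illustrative choices, not necessarily the exact function used by the authors:

```python
import math, random

def powerlaw_weight(beta, rng, kmin=1.0):
    # inverse-transform sample from p(k) ~ k^(-beta) for k >= kmin
    return kmin * (1.0 - rng.random()) ** (-1.0 / (beta - 1.0))

def deterministic_walk(n_sites=500, beta=3.0, n_steps=50, seed=1):
    """Walker on the unit square: at each step it moves to the unvisited
    site minimising (distance to site) / (site weight)."""
    rng = random.Random(seed)
    sites = [(rng.random(), rng.random(), powerlaw_weight(beta, rng))
             for _ in range(n_sites)]
    visited = set()
    x, y = 0.5, 0.5
    lengths = []
    for _ in range(n_steps):
        best, best_cost = None, float("inf")
        for i, (sx, sy, k) in enumerate(sites):
            if i in visited:
                continue
            cost = math.hypot(sx - x, sy - y) / k   # distance vs. gain
            if cost < best_cost:
                best, best_cost = i, cost
        sx, sy, _ = sites[best]
        lengths.append(math.hypot(sx - x, sy - y))
        visited.add(best)
        x, y = sx, sy
    return lengths

lengths = deterministic_walk()
```

Although every move is deterministic given the site configuration, the heavy-tailed weights make occasional distant high-weight sites worth visiting, producing the broad step-length distributions the paper compares to Lévy flights.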
Scribner, Richard; Ackleh, Azmy S; Fitzpatrick, Ben G; Jacquez, Geoffrey; Thibodeaux, Jeremy J; Rommel, Robert; Simonsen, Neal
2009-09-01
The misuse and abuse of alcohol among college students remain persistent problems. Using a systems approach to understand the dynamics of student drinking behavior and thus forecasting the impact of campus policy to address the problem represents a novel approach. Toward this end, the successful development of a predictive mathematical model of college drinking would represent a significant advance for prevention efforts. A deterministic, compartmental model of college drinking was developed, incorporating three processes: (1) individual factors, (2) social interactions, and (3) social norms. The model quantifies these processes in terms of the movement of students between drinking compartments characterized by five styles of college drinking: abstainers, light drinkers, moderate drinkers, problem drinkers, and heavy episodic drinkers. Predictions from the model were first compared with actual campus-level data and then used to predict the effects of several simulated interventions to address heavy episodic drinking. First, the model provides a reasonable fit of actual drinking styles of students attending Social Norms Marketing Research Project campuses varying by "wetness" and by drinking styles of matriculating students. Second, the model predicts that a combination of simulated interventions targeting heavy episodic drinkers at a moderately "dry" campus would extinguish heavy episodic drinkers, replacing them with light and moderate drinkers. Instituting the same combination of simulated interventions at a moderately "wet" campus would result in only a moderate reduction in heavy episodic drinkers (i.e., 50% to 35%). A simple, five-state compartmental model adequately predicted the actual drinking patterns of students from a variety of campuses surveyed in the Social Norms Marketing Research Project study. The model predicted the impact on drinking patterns of several simulated interventions to address heavy episodic drinking on various types of campuses.
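A minimal compartmental sketch of the kind of dynamics described (movement between drinking styles driven by social interaction and relaxation) can be integrated with forward Euler steps. The transition-rate structure and parameter values here are assumptions for illustration, not the fitted model from the study:

```python
def step(state, beta, gamma, dt):
    """One Euler step of a five-compartment drinking model sketch.
    Students move up one drinking level via social interaction with
    the fraction of heavier drinkers (rate beta * x_i * heavier) and
    relax down one level at a constant rate gamma."""
    n = len(state)
    total = sum(state)
    new = list(state)
    for i in range(n - 1):
        heavier = sum(state[i + 1:]) / total
        up = beta * state[i] * heavier
        new[i] -= dt * up
        new[i + 1] += dt * up
    for i in range(1, n):
        down = gamma * state[i]
        new[i] -= dt * down
        new[i - 1] += dt * down
    return new

# abstainers, light, moderate, problem, heavy episodic (fractions)
state = [0.30, 0.30, 0.20, 0.10, 0.10]
for _ in range(5000):                 # 50 time units at dt = 0.01
    state = step(state, beta=0.4, gamma=0.1, dt=0.01)
```

An intervention can then be simulated by changing a rate (e.g. lowering beta for the heaviest transition) and comparing the resulting equilibrium distribution, which mirrors the paper's simulated-policy comparisons.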
Scribner, Richard; Ackleh, Azmy S.; Fitzpatrick, Ben G.; Jacquez, Geoffrey; Thibodeaux, Jeremy J.; Rommel, Robert; Simonsen, Neal
2009-01-01
Objective: The misuse and abuse of alcohol among college students remain persistent problems. Using a systems approach to understand the dynamics of student drinking behavior and thus forecasting the impact of campus policy to address the problem represents a novel approach. Toward this end, the successful development of a predictive mathematical model of college drinking would represent a significant advance for prevention efforts. Method: A deterministic, compartmental model of college drinking was developed, incorporating three processes: (1) individual factors, (2) social interactions, and (3) social norms. The model quantifies these processes in terms of the movement of students between drinking compartments characterized by five styles of college drinking: abstainers, light drinkers, moderate drinkers, problem drinkers, and heavy episodic drinkers. Predictions from the model were first compared with actual campus-level data and then used to predict the effects of several simulated interventions to address heavy episodic drinking. Results: First, the model provides a reasonable fit of actual drinking styles of students attending Social Norms Marketing Research Project campuses varying by “wetness” and by drinking styles of matriculating students. Second, the model predicts that a combination of simulated interventions targeting heavy episodic drinkers at a moderately “dry” campus would extinguish heavy episodic drinkers, replacing them with light and moderate drinkers. Instituting the same combination of simulated interventions at a moderately “wet” campus would result in only a moderate reduction in heavy episodic drinkers (i.e., 50% to 35%). Conclusions: A simple, five-state compartmental model adequately predicted the actual drinking patterns of students from a variety of campuses surveyed in the Social Norms Marketing Research Project study. The model predicted the impact on drinking patterns of several simulated interventions to address heavy
Identification of LHC beam loss mechanism: a deterministic treatment of loss patterns
International Nuclear Information System (INIS)
Marsili, A.
2012-01-01
The goal of this work was to identify patterns in the beam loss profiles, both in their spatial distribution and time evolution. CERN's Large Hadron Collider (LHC) is the largest device ever built, with a total circumference of 26.7 km, and it is the most powerful accelerator ever, both in beam energy and beam intensity. The main magnets are superconducting, and confine the particles in two counter-circulating beams which collide in four interaction points. CERN and the LHC will be described in chapter 1. The superconducting magnets of the LHC have to be protected against particle losses. Depending on the number of lost particles, the coils of the magnets could become normal conducting and/or will be damaged. To avoid these events a beam loss monitoring (BLM) system was installed to measure the particle loss rates. If the predefined safe thresholds of loss rates are exceeded, the beams are directed out of the accelerator ring towards the beam dump. The detectors of the BLM system are mainly ionization chambers located outside of the cryostats. In total, about 3600 ionisation chambers are installed. Further challenges include the high dynamic range of losses (chamber currents ranging between 2 pA and 1 mA). The BLM system will be further described in chapter 2. The subject of this thesis is to study the loss patterns and find the origin of the losses in a deterministic way, by comparing measured losses to well understood loss scenarios. This is done through a case study: different techniques were used on a restrained set of loss scenarios, as a proof of concept for extracting information from a loss profile. Finding the origin of the losses should allow acting in response. A justification of the doctoral work will be given at the end of chapter 2. This thesis will then focus on the theoretical understanding and the implementation of the decomposition of a measured loss profile as a linear combination of the reference scenarios; and the evaluation of
Diffraction and interference of single de Broglie-wavelets. Deterministic wave mechanics
International Nuclear Information System (INIS)
Barut, A.O.
1993-05-01
Wavelets are localized nonspreading solutions of massless wave equations which move like massive quantum particles. They form a bridge between classical mechanics of point particles and wave functions of probabilistic quantum mechanics, both of which can be obtained by limiting processes. Here we develop a theory of the propagation of wavelets in the presence of boundaries and derive interference phenomena of quantum theory from the behavior of single events with ''hidden parameters''. (author). 8 refs, 1 fig
A deterministic model for the growth of non-conducting electrical tree structures
International Nuclear Information System (INIS)
Dodd, S J
2003-01-01
Electrical treeing is of interest to the electrical generation, transmission and distribution industries as it is one of the causes of insulation failure in electrical machines, switchgear and transformer bushings. In this paper a deterministic electrical tree growth model is described. The model is based on electrostatics and local electron avalanches to model partial discharge activity within the growing tree structure. Damage to the resin surrounding the tree structure is dependent on the local electrostatic energy dissipation by partial discharges within the tree structure and weighted by the magnitudes of the local electric fields in the resin surrounding the tree structure. The model is successful in simulating the formation of branched structures without the need of a random variable, a requirement of previous stochastic models. Instability in the spatial development of partial discharges within the tree structure takes the role of the stochastic element as used in previous models to produce branched tree structures. The simulated electrical trees conform to the experimentally observed behaviour; tree length versus time and electrical tree growth rate as a function of applied voltage for non-conducting electrical trees. The phase synchronous partial discharge activity and the spatial distribution of emitted light from the tree structure are also in agreement with experimental data for non-conducting trees as grown in a flexible epoxy resin and in polyethylene. The fact that similar tree growth behaviour is found using pure amorphous (epoxy resin) and semicrystalline (polyethylene) materials demonstrates that neither annealed nor quenched noise, representing material inhomogeneity, is required for the formation of irregular branched structures (electrical trees). Instead, as shown in this paper, branched growth can occur due to the instability of individual discharges within the tree structure
Directory of Open Access Journals (Sweden)
Yuichi Yamashita
2011-04-01
Full Text Available How the brain learns and generates temporal sequences is a fundamental issue in neuroscience. The production of birdsongs, a process which involves complex learned sequences, provides researchers with an excellent biological model for this topic. The Bengalese finch in particular learns a highly complex song with syntactical structure. The nucleus HVC (HVC), a premotor nucleus within the avian song system, plays a key role in generating the temporal structures of their songs. From lesion studies, the nucleus interfacialis (NIf) projecting to the HVC is considered one of the essential regions that contribute to the complexity of their songs. However, the types of interaction between the HVC and the NIf that can produce complex syntactical songs remain unclear. In order to investigate the function of interactions between the HVC and NIf, we have proposed a neural network model based on previous biological evidence. The HVC is modeled by a recurrent neural network (RNN) that learns to generate temporal patterns of songs. The NIf is modeled as a mechanism that provides auditory feedback to the HVC and generates random noise that feeds into the HVC. The model showed that complex syntactical songs can be replicated by simple interactions between deterministic dynamics of the RNN and random noise. In the current study, the plausibility of the model is tested by the comparison between the changes in the songs of actual birds induced by pharmacological inhibition of the NIf and the changes in the songs produced by the model resulting from modification of parameters representing NIf functions. The efficacy of the model demonstrates that the changes of songs induced by pharmacological inhibition of the NIf can be interpreted as a trade-off between the effects of noise and the effects of feedback on the dynamics of the RNN of the HVC. These facts suggest that the current model provides a convincing hypothesis for the functional role of NIf-HVC interaction.
A deterministic computer simulation model of life-cycle lamb and wool production.
Wang, C T; Dickerson, G E
1991-11-01
A deterministic mathematical computer model was developed to simulate effects on life-cycle efficiency of lamb and wool production from genetic improvement of performance traits under alternative management systems. Genetic input parameters can be varied for age at puberty, length of anestrus, fertility, precocity of fertility, number born, milk yield, mortality, growth rate, body fat, and wool growth. Management options include mating systems, lambing intervals, feeding levels, creep feeding, weaning age, marketing age or weight, and culling policy. Simulated growth of animals is linear from birth to inflection point, then slows asymptotically to specified mature empty BW and fat content when nutrition is not limiting. The ME intake requirement to maintain normal condition is calculated daily or weekly for maintenance, protein and fat deposition, wool growth, gestation, and lactation. Simulated feed intake is the minimum of availability, DM physical limit, or ME physiological limit. Tissue catabolism occurs when intake is below the requirement for essential functions. Mortality increases when BW is depressed. Equations developed for calculations of biological functions were validated with published and unpublished experimental data. Lifetime totals are accumulated for TDN, DM, and protein intake and for market lamb equivalent output values of empty body or carcass lean and wool from both lambs and ewes. These measures of efficiency for combinations of genetic, management, and marketing variables can provide the relative economic weighting of traits needed to derive optimal criteria for genetic selection among and within breeds under defined industry production systems.
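The growth rule described (linear from birth to an inflection point, then asymptotic approach to mature empty body weight) can be sketched as a piecewise function. The parameter values are illustrative, not the model's calibrated inputs:

```python
import math

def body_weight(t, w_infl=25.0, t_infl=100.0, w_mat=70.0):
    """Piecewise growth sketch: linear up to the inflection point
    (t_infl days, w_infl kg), then asymptotic approach to the mature
    weight w_mat, with the exponential rate chosen so that the growth
    rate is continuous at the inflection."""
    slope = w_infl / t_infl
    if t <= t_infl:
        return slope * t
    k = slope / (w_mat - w_infl)   # match the slope at the inflection
    return w_mat - (w_mat - w_infl) * math.exp(-k * (t - t_infl))
```

In the full model this trajectory would be modulated daily or weekly by the ME intake balance (maintenance, gestation, lactation, wool growth), with catabolism pulling weight down when intake falls short.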
A deterministic partial differential equation model for dose calculation in electron radiotherapy.
Duclous, R; Dubroca, B; Frank, M
2010-07-07
High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung
Petersen, Øyvind Wiig
2014-01-01
Force identification in structural dynamics is an inverse problem concerned with finding loads from measured structural response. The main objective of this thesis is to perform and study state (displacement and velocity) and force estimation by Kalman filtering. Theory on optimal control and state-space models is presented, adapted to linear structural dynamics. Accommodation for measurement noise and model inaccuracies is attained by stochastic-deterministic coupling. Explicit requirem...
International Nuclear Information System (INIS)
Scott, B.R.
1995-01-01
Individuals who work at nuclear reactor facilities can be at risk for deterministic effects in the skin from exposure to discrete β- and γ-emitting (βγE) sources (e.g., βγE hot particles) on the skin or clothing. Deterministic effects are non-cancer effects that have a threshold and increase in severity as dose increases (e.g., ulceration of the skin). Hot βγE particles are ⁶⁰Co- or nuclear-fuel-derived particles with diameters > 10 μm and < 3 mm that contain at least 3.7 kBq (0.1 μCi) of radioactivity. For such βγE sources on the skin, it is the beta component of the dose that is most important. To develop exposure limitation systems that adequately control exposure of workers to discrete βγE sources, models are needed for evaluating the risk of deterministic effects of localized β irradiation of the skin. The purpose of this study was to develop dose-rate- and irradiated-area-dependent response-surface models for evaluating risks of significant deterministic effects of localized irradiation of the skin by discrete βγE sources and to use the modeling results to recommend approaches to limiting occupational exposure to such sources. The significance of the research results is as follows: (1) response-surface models are now available for evaluating the risk of specific deterministic effects of localized irradiation of the skin; (2) modeling results have been used to recommend approaches to limiting occupational exposure of workers to β radiation from βγE sources on the skin or on clothing; and (3) the generic irradiated-volume, weighting-factor approach to limiting exposure can be applied to other organs, including the eye, the ear, and organs of the respiratory or gastrointestinal tract, and can be used for both deterministic and stochastic effects
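The threshold behaviour of deterministic effects can be illustrated with a simple sigmoid dose-response of the kind often used in risk modeling. This is a generic sketch: the functional form, parameter names, and values are our assumptions, not Scott's fitted response-surface models:

```python
import math

def deterministic_effect_risk(dose, d50, shape, threshold):
    """Illustrative threshold-sigmoid risk model for a deterministic effect:
    zero risk below the threshold, then a Weibull-type rise with dose.
    d50 is the dose giving 50% risk; 'shape' controls steepness.
    (Parameter names are ours, not taken from the study.)"""
    if dose <= threshold:
        return 0.0
    hazard = math.log(2.0) * (dose / d50) ** shape
    return 1.0 - math.exp(-hazard)

# Risk is zero below the threshold, 50% at d50, and rises steeply beyond it.
low = deterministic_effect_risk(5.0, d50=20.0, shape=5.0, threshold=10.0)
mid = deterministic_effect_risk(20.0, d50=20.0, shape=5.0, threshold=10.0)
```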
International Nuclear Information System (INIS)
Calloo, A.A.
2012-01-01
In reactor physics, calculation schemes with deterministic codes are validated with respect to a reference Monte Carlo code. The remaining biases are attributed to the approximations and models induced by multigroup theory (self-shielding models and expansion of the scattering law using Legendre polynomials) to represent physical phenomena (resonant absorption and scattering anisotropy, respectively). This work focuses on the relevance of a polynomial expansion to model the scattering law. Since the outset of reactor physics, the latter has been expanded on a truncated Legendre polynomial basis. However, the transfer cross sections are highly anisotropic, with non-zero values for a very small range of the cosine of the scattering angle. Besides, the finer the energy mesh and the lighter the scattering nucleus, the more exacerbated is the peaked shape of this cross section. As such, the Legendre expansion is less suited to represent the scattering law. Furthermore, this model induces negative values, which are non-physical. In this work, various scattering laws are briefly described and the limitations of the existing model are pointed out. Hence, piecewise-constant functions have been used to represent the multigroup scattering cross section. This representation requires a different model for the diffusion source. The discrete ordinates method, which is widely employed to solve the transport equation, has been adapted. Thus, the finite volume method for angular discretization has been developed and implemented in the Paris environment, which hosts the Sn solver, Snatch. The angular finite volume method has been compared to the collocation method with Legendre moments to ensure its proper performance. Moreover, unlike the latter, this method is adapted for both the Legendre-moment and the piecewise-constant-function representations of the scattering cross section. This hybrid-source method has been validated for different cases: fuel cell in infinite lattice
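The negativity problem of the truncated Legendre expansion is easy to reproduce numerically. The sketch below projects a sharply forward-peaked transfer cross section onto Legendre polynomials and shows the truncated series dipping below zero, while a piecewise-constant (histogram) representation is nonnegative by construction; the peak width and expansion order are arbitrary choices for illustration:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def legendre_coeffs(f, order):
    """Moments c_l = (2l+1)/2 * integral_{-1}^{1} f(mu) P_l(mu) dmu,
    computed with Gauss-Legendre quadrature."""
    nodes, weights = leggauss(64)
    return np.array([(2 * l + 1) / 2.0
                     * np.sum(weights * f(nodes) * legval(nodes, [0] * l + [1]))
                     for l in range(order + 1)])

# A sharply forward-peaked scattering law: nonzero only near mu = 1.
peaked = lambda mu: np.where(mu > 0.95, 1.0, 0.0)
mu = np.linspace(-1.0, 1.0, 401)
approx = legval(mu, legendre_coeffs(peaked, order=7))  # truncated Legendre series
hist = np.where(mu > 0.95, 1.0, 0.0)                   # piecewise-constant representation

# The truncated expansion rings below zero (non-physical negative values);
# the histogram representation cannot go negative.
```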
Aperiodic dynamics in a deterministic adaptive network model of attitude formation in social groups
Ward, Jonathan A.; Grindrod, Peter
2014-07-01
Adaptive network models, in which node states and network topology coevolve, arise naturally in models of social dynamics that incorporate homophily and social influence. Homophily relates the similarity between pairs of nodes' states to their network coupling strength, whilst social influence causes coupled nodes' states to converge. In this paper we propose a deterministic adaptive network model of attitude formation in social groups that includes these effects, and in which the attitudinal dynamics are represented by an activator-inhibitor process. We illustrate that consensus, corresponding to all nodes adopting the same attitudinal state and being fully connected, may destabilise via Turing instability, giving rise to aperiodic dynamics with sensitive dependence on initial conditions. These aperiodic dynamics correspond to the formation and dissolution of sub-groups that adopt contrasting attitudes. We discuss our findings in the context of cultural polarisation phenomena. Social influence. This reflects the fact that people tend to modify their behaviour and attitudes in response to the opinions of others [22-26]. We model social influence via diffusion: agents adjust their state according to a weighted sum (dictated by the evolving network) of the differences between their state and the states of their neighbours. Homophily. This relates the similarity of individuals' states to their frequency and strength of interaction [27]. Thus in our model, homophily drives the evolution of the weighted 'social' network. A precise formulation of our model is given in Section 2. Social influence and homophily underpin models of social dynamics [21], which cover a wide range of sociological phenomena, including the diffusion of innovations [28-32], complex contagions [33-36], collective action [37-39], opinion dynamics [19,20,40,10,11,13,15,41,16], the emergence of social norms [42-44], group stability [45], social differentiation [46] and, of particular relevance
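A minimal coevolution of states and weights can be sketched as follows. This is our own toy discretisation of the two ingredients — diffusive social influence and a homophilic weight update — not the activator-inhibitor system of the paper; all parameter values are illustrative:

```python
import numpy as np

def coevolve(x, A, eps=0.1, steps=200, dt=0.05):
    """Sketch of state/topology coevolution: states diffuse toward coupled
    neighbours (social influence) while edge weights strengthen between
    similar nodes (homophily). Illustrative only."""
    for _ in range(steps):
        diff = x[None, :] - x[:, None]           # pairwise state differences
        x = x + dt * (A * diff).sum(axis=1)      # influence: weighted averaging
        A = A + dt * eps * (1.0 - np.abs(diff)) * A * (1.0 - A)  # homophily: logistic update
        np.fill_diagonal(A, 0.0)                 # no self-coupling
    return x, A

x0 = np.array([0.0, 0.1, 0.9, 1.0])              # two loose attitude camps
A0 = np.full((4, 4), 0.5)
np.fill_diagonal(A0, 0.0)
x, A = coevolve(x0.copy(), A0.copy())            # states contract toward consensus
```

With uniform initial coupling the states contract toward consensus; heterogeneous or weakened coupling is what lets contrasting sub-groups survive in richer models like the paper's.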
An approach to model reactor core nodalization for deterministic safety analysis
Salim, Mohd Faiz; Samsudin, Mohd Rafie; Mamat Ibrahim, Mohd Rizal; Roslan, Ridha; Sadri, Abd Aziz; Farid, Mohd Fairus Abd
2016-01-01
Adopting a good nodalization strategy is essential to produce an accurate, high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as it was originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies for small research reactors (e.g. 250 kW) up to larger research reactors (e.g. 30 MW). As such, this paper aims to discuss the state-of-the-art thermal-hydraulics channel to be employed in the nodalization of the reactor core of the RTP-TRIGA Research Reactor. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6 fuel, stainless steel clad, and graphite reflector), have been collected, analyzed and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the available information in the database, the assumptions made in the nodalization approach and the calculations performed will be discussed and presented. The development and identification of the thermal-hydraulics channel for the reactor core will be implemented during the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted as EARTH-M.
Seismic zonation of Bucharest by using a deterministic approach of numerical modeling
International Nuclear Information System (INIS)
Moldoveanu, C.L.; Panza, G.F.; Cioflan, C.; Radulian, M.; Marmureanu, Gh.
2002-01-01
Bucharest represents the largest European center (about 2 million inhabitants and 230 km² of constructed area) periodically subjected to the strong intermediate-depth earthquakes originating in the Vrancea region. The statistics indicate a recurrence interval of 25 years for Mw ≥ 7.0 Vrancea events and a significant earthquake hazard for the city location, with a 50% chance of an event of Mw > 7.6 every 50 years. The strongest Vrancea events of the last century occurred in 1908 (Mw = 7.1), 1940 (Mw = 7.7), 1977 (Mw = 7.4) and 1986 (Mw = 7.1) and inflicted heavy damage and casualties in Bucharest. Under these circumstances, ground motion evaluation for the city area represents an essential step toward the mitigation of the local seismic risk. This paper presents the new insights coming from direct instrumental observation and interpretation of the local effects, as well as from realistic numerical modeling, that update and improve the input data necessary for a detailed microzoning map of the Romanian capital. Our results show that the synthetic local hazard distribution we obtain with the deterministic approach supplies a realistic estimation of the seismic input, highly sensitive not only to the local conditions but also to the source and path structure parameters. The complex hybrid method we use offers the chance to merge the different accumulated information into reasonably well-constrained scenarios for a level C realistic microzonation of the Bucharest area, to be used to mitigate the effects of future strong events originating in the Vrancea region. (authors)
Hahl, Sayuri K; Kremling, Andreas
2016-01-01
In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
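The correspondence (and its limits) can be probed with the simplest possible scheme — linear birth-death gene expression — where the deterministic fixed point and the stochastic mode coincide. The sketch below is a plain Gillespie simulation of our own, not the autoregulatory scheme analysed in the paper:

```python
import random

def gillespie_birth_death(k=20.0, gamma=1.0, t_end=200.0, seed=42):
    """SSA for production/degradation: X -> X+1 at rate k, X -> X-1 at rate
    gamma*X. The ODE dx/dt = k - gamma*x has fixed point x* = k/gamma; for
    this linear scheme the stationary CME is Poisson(k/gamma), so the
    deterministic fixed point, stochastic mean, and mode all agree."""
    random.seed(seed)
    t, x, samples = 0.0, 0, []
    while t < t_end:
        a_birth, a_death = k, gamma * x
        total = a_birth + a_death
        t += random.expovariate(total)            # time to next reaction
        if random.random() < a_birth / total:     # choose which reaction fires
            x += 1
        else:
            x -= 1
        samples.append(x)
    return samples

samples = gillespie_birth_death()
mean = sum(samples) / len(samples)
# The stochastic mean hovers near the deterministic fixed point k/gamma = 20;
# the nonlinear, low-copy-number cases discussed above break this agreement.
```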
International Nuclear Information System (INIS)
Passon, Oliver
2010-01-01
Bohmian mechanics is one of the alternative formulations of quantum mechanics, but its epistemological implications deviate radically from the usual Copenhagen interpretation. Its importance therefore lies above all in the area of fundamental questions and the interpretation of quantum mechanics, since it permits a solution of the measurement problem that has been discussed controversially for decades, while reproducing all predictions of standard quantum mechanics. Nevertheless, an elementary introduction to this topic has hitherto been lacking on the German-language textbook market. New in the second edition is a short outline of the relativistic and quantum-field-theoretical generalizations.
Large-scale modeling of rain fields from a rain cell deterministic model
Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia
2006-04-01
A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
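The large-scale step — thresholding a correlated Gaussian field into a binary rain/no-rain mask with a prescribed occupation rate — can be sketched as follows. Grid size, correlation length, and occupancy are illustrative values of our own, and the isotropic spectral smoothing stands in for the paper's anisotropic covariance model:

```python
import numpy as np

def rain_occupancy_mask(shape=(64, 64), occupancy=0.3, corr_len=8, seed=0):
    """Generate a spatially correlated Gaussian field (white noise smoothed
    with a Gaussian kernel in Fourier space), then threshold it so that the
    requested fraction of the domain is 'raining'."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=shape)
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    kernel = np.exp(-2.0 * (np.pi * corr_len) ** 2 * (kx ** 2 + ky ** 2))
    field = np.fft.ifft2(np.fft.fft2(noise) * kernel).real
    threshold = np.quantile(field, 1.0 - occupancy)   # enforce the occupation rate
    return field > threshold

mask = rain_occupancy_mask()
# By construction, roughly 30% of the pixels are raining, in correlated patches.
```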
Height-Deterministic Pushdown Automata
DEFF Research Database (Denmark)
Nowotka, Dirk; Srba, Jiri
2007-01-01
We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of regular languages and still closed under boolean language operations, are considered. Several such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata.
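The defining property — that the stack-height profile is a function of the input alone — can be stated in a few lines. The sketch below assumes the special case in which each input symbol has a fixed height effect (as in visibly pushdown alphabets); this is our illustration, not the paper's general definition:

```python
def height_profile(word, delta):
    """For a height-deterministic PDA, the stack height after reading each
    prefix depends only on the input, not on nondeterministic choices.
    'delta' maps each input symbol to its fixed stack-height change."""
    h, profile = 0, [0]
    for symbol in word:
        h += delta[symbol]
        profile.append(h)
    return profile

# A visibly-pushdown-style alphabet: '(' pushes, ')' pops, '.' is neutral.
profile = height_profile("(.)", {"(": 1, ")": -1, ".": 0})
```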
CSIR Research Space (South Africa)
Salmon, BP
2017-09-01
…be derived for Equation (8) as shown in [27]. The Poincaré-Bendixson theorem states that a differential equation with a three-dimensional phase plane can be chaotic [28]. Hence Equation (8) is a nonlinear deterministic system that can exert... model parameters. Lemma 1. The characteristics of a differential equation can be investigated with the aid of a phase plane plot, which illustrates the limit cycles of the solutions. A three-dimensional phase plane representation that is autonomous can...
The integrated model for solving the single-period deterministic inventory routing problem
Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik
2016-08-01
This paper discusses the problem of efficiently managing inventory and routing in a two-level supply chain system. Vendor Managed Inventory (VMI) is a policy that integrates decisions between a supplier and his customers. We assume that the demand at each customer is stationary and that the warehouse implements VMI. The objective of this paper is to minimize the inventory and transportation costs of the customers in a two-level supply chain. The problem is to determine the delivery quantities, delivery times and routes to the customers for the single-period deterministic inventory routing problem (SP-DIRP). As a result, a linear mixed-integer program is developed for the solution of the SP-DIRP problem.
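For a fixed single-period plan, the objective evaluated by such a model is just transport cost plus holding cost. The sketch below computes it for a toy two-customer instance; the cost structure and the numbers are illustrative assumptions, not the paper's mixed-integer formulation:

```python
def route_length(route, dist):
    """Total distance of a depot(0) -> customers -> depot tour."""
    stops = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

def plan_cost(route, quantities, holding_rate, dist, per_km=2.0):
    """Objective of a single-period plan: transport cost of the tour plus
    linear holding cost on delivered quantities (illustrative stand-in for
    the mixed-integer objective)."""
    transport = per_km * route_length(route, dist)
    holding = sum(holding_rate[c] * q for c, q in quantities.items())
    return transport + holding

# Toy instance: depot 0 and customers 1, 2.
dist = {0: {1: 10, 2: 12}, 1: {0: 10, 2: 5}, 2: {0: 12, 1: 5}}
cost = plan_cost([1, 2], {1: 40.0, 2: 30.0}, {1: 0.1, 2: 0.2}, dist)
# Tour 0-1-2-0 is 27 km -> 54.0 transport; holding 4.0 + 6.0 = 10.0.
```

A solver would search over routes and quantities to minimize this objective subject to demand and capacity constraints; here we only evaluate one candidate plan.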
DEFF Research Database (Denmark)
Ghoreishi, Maryam
2018-01-01
Many models within the field of optimal dynamic pricing and lot-sizing for deteriorating items assume everything is deterministic and develop a differential equation as the core of the analysis. Two prominent examples are the papers by Rajan et al. (Manag Sci 38:240–262, 1992) and Abad (1996). In this paper, we expose the models by Abad (1996) and Rajan et al. (1992) to stochastic inputs, designing these inputs so that they align as closely as possible with the assumptions of those papers. We carry out our investigation through a numerical study in which we test the robustness of the numerical results reported in Rajan et al. (1992) and Abad (1996) in a simulation model. Our numerical results seem to confirm that the results stated in these papers are indeed robust when exposed to stochastic inputs.
Deterministic chaos in the pitting phenomena of passivable alloys
International Nuclear Information System (INIS)
Hoerle, Stephane
1998-01-01
It was shown that electrochemical noise recorded in stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the material/solution severity. Thus, electrolyte composition ([Cl⁻]/[NO₃⁻] ratio, pH), passive film thickness or alloy composition can change the deterministic features. Only one pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which hints at a change of the pit behavior with time (propagation speed or mean). Modifications of the electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors of different features (bifurcations) are also evidenced. Such bifurcations highlight various routes to chaos. The routes to chaos and the features of the chaotic signals suggest continuous and discontinuous models of the electrochemical mechanisms inside a pit that describe quite well the experimental behaviors and the effect of the various parameters. The analysis of the chaotic behaviors of a pit leads to a better understanding of propagation mechanisms and gives tools for pit monitoring. (author)
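The "routes to chaos" mentioned above — bifurcations between behaviours of different character — are conveniently illustrated by the logistic map, the textbook example of a period-doubling route, rather than by the pit models themselves:

```python
def logistic_orbit(r, x0=0.2, burn=500, keep=64):
    """Sample the attractor of the logistic map x -> r*x*(1-x) after a
    transient, rounding so that converged cycles collapse to few values."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

# r = 2.5: stable fixed point; r = 3.2: period-2 cycle (one period
# doubling); r = 3.9: aperiodic, chaotic regime.
fixed, cycle, chaos = logistic_orbit(2.5), logistic_orbit(3.2), logistic_orbit(3.9)
```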
Energy Technology Data Exchange (ETDEWEB)
Goreac, Dan, E-mail: Dan.Goreac@u-pem.fr; Kobylanski, Magdalena, E-mail: Magdalena.Kobylanski@u-pem.fr; Martinez, Miguel, E-mail: Miguel.Martinez@u-pem.fr [Université Paris-Est, LAMA (UMR 8050), UPEMLV, UPEC, CNRS (France)
2016-10-15
We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending the networks and Krylov's "shaking the coefficients" method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron's preconization for the (unique) candidate to viscosity solution.
International Nuclear Information System (INIS)
Goreac, Dan; Kobylanski, Magdalena; Martinez, Miguel
2016-01-01
We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending the networks and Krylov’s “shaking the coefficients” method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron’s preconization for the (unique) candidate viscosity solution.
Nonlinear Markov processes: Deterministic case
International Nuclear Information System (INIS)
Frank, T.D.
2008-01-01
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: the Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basins of attraction of stationary distributions.
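The multistability described above can be sketched with a minimal two-state example in which the transition rates depend on the current density itself (the defining nonlinearity of a nonlinear Markov process). The rate functions below are chosen purely for illustration:

```python
import numpy as np

def step(p, eps=0.1):
    # Nonlinear "transition" for a two-state probability density: the
    # rates depend on the distribution itself (mean-field feedback),
    # the hallmark of nonlinear Markov processes. Rates are hypothetical.
    w01 = eps + p[1] ** 2      # rate 0 -> 1 grows with occupancy of state 1
    w10 = eps + p[0] ** 2      # rate 1 -> 0 grows with occupancy of state 0
    q0 = p[0] * (1 - w01) + p[1] * w10
    return np.array([q0, 1 - q0])

def stationary(p0, n=2000):
    # Propagate the density until it settles on a stationary distribution.
    p = np.array(p0, dtype=float)
    for _ in range(n):
        p = step(p)
    return p
```

Two different initial densities converge to two different stationary distributions, i.e. multiple stationary distributions with distinct basins of attraction, which a linear Markov chain cannot exhibit.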
A strongly nonlinear reaction-diffusion model for a deterministic diffusive epidemic
International Nuclear Information System (INIS)
Kirane, M.; Kouachi, S.
1992-10-01
In the present paper the mathematical validity of a model on the spread of an infectious disease is proved. This model was proposed by Bailey. The mathematical validity is proved by means of a positivity, uniqueness and existence theorem. In spite of the apparent simplicity of the problem, the solution requires a delicate set of techniques. It seems very difficult to extend these techniques to a model in more than one dimension without imposing conditions on the diffusivities. (author). 7 refs
Wood, Stephen A; Armitage, James M; Binnington, Matthew J; Wania, Frank
2016-09-14
A population's exposure to persistent organic pollutants, e.g., polychlorinated biphenyls (PCBs), is typically assessed through national biomonitoring programs, such as the United States National Health and Nutrition Examination Survey (NHANES). To complement statistical methods, we use a deterministic modeling approach to establish mechanistic links between human contaminant concentrations and factors (e.g., age, diet, lipid mass) deemed responsible for the often considerable variability in these concentrations. Lifetime exposures to four PCB congeners in 6128 participants from NHANES 1999-2004 are simulated using the ACC-Human model supplied with individualized input parameters obtained from NHANES questionnaires (e.g., birth year, sex, body mass index, dietary composition, reproductive behavior). Modeled and measured geometric mean PCB-153 concentrations in NHANES participants of 13.3 and 22.0 ng g⁻¹ lipid, respectively, agree remarkably well, although lower model-measurement agreement for air, water, and food suggests that this is partially due to fortuitous error cancellation. The model also reproduces trends in the measured data with key factors such as age, parity and sex. On an individual level, 62% of all modeled concentrations are within a factor of three of their corresponding measured values (Spearman r_s = 0.44). However, the model attributes more of the inter-individual variability to differences in dietary lipid intake than is indicated by the measured data. While the model succeeds in predicting levels and trends on the population level, the accuracy of individual-specific predictions would need to be improved for refined exposure characterization in epidemiological studies.
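The individual-level agreement metric quoted above, the share of model predictions falling within a factor of three of the corresponding measurement, can be sketched as a simple ratio test. The numbers below are hypothetical, not NHANES data:

```python
def fraction_within_factor(modeled, measured, factor=3.0):
    # Share of model-measurement pairs whose ratio lies within the
    # given multiplicative factor (the "within a factor of three"
    # agreement statistic reported in the abstract).
    pairs = list(zip(modeled, measured))
    ok = sum(1 for m, o in pairs if o / factor <= m <= o * factor)
    return ok / len(pairs)

# Hypothetical lipid-normalized concentrations (ng/g lipid):
modeled = [10.0, 3.0, 50.0, 8.0]
measured = [13.3, 22.0, 20.0, 9.0]
fraction_within_factor(modeled, measured)  # 0.75
```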
An Innovative Real-time Environment for Unified Deterministic and Stochastic Groundwater Modeling
Li, S.; Liu, Q.
2003-12-01
Despite an exponential growth of computational capability over the last two decades, one that has allowed computational science and engineering to become a unique, powerful tool for scientific discovery, the extreme cost of groundwater modeling continues to limit its use. This occurs primarily because the modeling paradigm that has been employed for decades limits our ability to take full advantage of recent developments in computer, communication, graphic, and visualization technologies. In this presentation we introduce an innovative and sophisticated computational environment for groundwater modeling that promises to eliminate the current bottleneck and greatly expand the utility of computational tools for scientific discovery related to groundwater. Based on a set of efficient and robust computational algorithms, the new software system, called Interactive Groundwater (IGW), allows simulating complex flow and transport in aquifers subject to both systematic and "randomly" varying stresses and geological and chemical heterogeneity. Adopting a new paradigm, IGW eliminates a major bottleneck inherent in the traditional fragmented modeling technologies and enables real-time modeling, real-time visualization, real-time analysis, and real-time presentation. IGW functions as a "numerical laboratory" in which a researcher can freely explore in real-time: creating visually an aquifer of desired configurations, interactively imposing desired stresses, and then immediately investigating and visualizing the geology and the processes of flow and contaminant transport and transformation. A modeler can pause to edit at any time and interact on-line with any aspects (e.g., conceptual and numerical representation, boundary conditions, model solvers, and ways of visualization and analysis) of the integrated modeling process; he/she can initiate or stop, whenever needed, particle tracking, plume modeling, subscale modeling, cross-sectional modeling, stochastic modeling, monitoring
Creating a stage-based deterministic PVA model - the western prairie fringed orchid [Exercise 12
Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke
2003-01-01
Contemporary efforts to conserve populations and species often employ population viability analysis (PVA), a specific application of population modeling that estimates the effects of environmental and demographic processes on population growth rates. These models can also be used to estimate probabilities that a population will fall below a certain level. This...
A strongly nonlinear reaction-diffusion model for a deterministic diffusive epidemic
International Nuclear Information System (INIS)
Kirane, M.; Kouachi, S.
1993-04-01
In the present paper the mathematical validity of a model on the spread of an infectious disease is proved. This model was proposed by Bailey. The mathematical validity is proved by means of a positivity, uniqueness and existence theorem. Moreover the large time behaviour of the global solutions is analyzed. In spite of the apparent simplicity of the problem, the solution requires a delicate set of techniques. It seems very difficult to extend these techniques to a model in more than one dimension without imposing conditions on the data. (author). 9 refs
Deterministic Compilation of Temporal Safety Properties in Explicit State Model Checking
National Aeronautics and Space Administration — The translation of temporal logic specifications constitutes an essen- tial step in model checking and a major influence on the efficiency of formal verification via...
Directory of Open Access Journals (Sweden)
Anand Joshi
2013-01-01
This paper presents the use of a semi-empirical method for seismic hazard zonation. The seismotectonically important region of the Uttarakhand Himalaya has been considered in this work. Ruptures along the lineaments in the area identified from the tectonic map are modeled deterministically using the semi-empirical approach of Midorikawa (1993). This approach makes use of an attenuation relation of peak ground acceleration for simulating strong ground motion at any site. Strong motion data collected over a span of three years in this region have been used to develop an attenuation relation of peak ground acceleration of limited magnitude and distance applicability. The developed attenuation relation is used in the semi-empirical method to predict peak ground acceleration from the modeled rupture planes in the area. A set of values of peak ground acceleration from possible ruptures in the area at the point of investigation is further used to compute the probability of exceedance of peak ground acceleration values of 100 and 200 gal. The prepared map shows that regions like Tehri, Chamoli, Almora, Srinagar, Devprayag, Bageshwar, and Pauri fall in a zone of 10% probability of exceedance of a peak ground acceleration of 200 gal.
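The final step above, turning the set of peak ground accelerations simulated from individual rupture planes into a probability of exceedance at a site, can be sketched as a simple empirical fraction. The PGA values below are hypothetical illustrations, not the study's simulated values:

```python
def exceedance_probability(pga_values, threshold):
    # Fraction of modeled rupture scenarios whose simulated peak
    # ground acceleration (in gal) exceeds the given threshold.
    pga_values = list(pga_values)
    return sum(v > threshold for v in pga_values) / len(pga_values)

# Hypothetical PGA values (gal) from ten modeled rupture planes
# at one point of investigation:
pga = [45, 80, 120, 150, 60, 210, 95, 130, 240, 70]
p100 = exceedance_probability(pga, 100)  # 0.5
p200 = exceedance_probability(pga, 200)  # 0.2
```

Repeating this at every grid point for the 100 and 200 gal thresholds yields the zonation map described in the abstract.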
International Nuclear Information System (INIS)
Bremen, Lueder von
2007-01-01
Large-scale wind farms will play an important role in the future worldwide energy supply. However, with increasing wind power penetration, all stakeholders on the electricity market will ask for more skilful wind power predictions to ensure safe grid integration and to increase the economic value of wind power. A Neural Network is used to calculate Model Output Statistics (MOS) for each individual forecast model (ECMWF and HIRLAM) and to model the aggregated power curve of the Middelgrunden offshore wind farm. We showed that the combination of two NWP models clearly outperforms the better single model. The normalized day-ahead RMSE forecast error for Middelgrunden can be reduced by 1% compared to single ECMWF, a relative improvement of 6%. For lead times >24 h it is worthwhile to use a more sophisticated model combination approach than simple linear weighting. The investigated principal component regression is able to extract the uncorrelated information from two NWP forecasts. The spread of Ensemble Predictions is related to the skill of wind power forecasts. Simple contingency diagrams show that low spread is more often related to low forecast errors and high spread to large forecast errors.
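The core claim, that a combination of two NWP models beats the better single model, can be sketched with an ordinary least-squares blend standing in for the Neural Network MOS. All data below are synthetic, and the forecast error levels are illustrative assumptions:

```python
import numpy as np

def combine(f1, f2, obs):
    # Least-squares linear blend (weights + bias) of two forecasts;
    # a simple stand-in for the MOS combination in the abstract.
    X = np.column_stack([f1, f2, np.ones_like(f1)])
    w, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return X @ w

rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 1.0, 500)         # normalized wind power
f1 = truth + rng.normal(0.0, 0.10, 500)    # "ECMWF-like" forecast
f2 = truth + rng.normal(0.0, 0.12, 500)    # "HIRLAM-like" forecast

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
blended = combine(f1, f2, truth)
# In-sample the blend cannot do worse than either single forecast,
# and with independent forecast errors it does strictly better.
```

The improvement comes from the partly uncorrelated errors of the two models, the same effect the principal component regression in the abstract exploits.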
Energy Technology Data Exchange (ETDEWEB)
Stephens, Michael B. (Geological Survey of Sweden, Uppsala (Sweden)); Simeonov, Assen (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Isaksson, Hans (GeoVista AB, Luleaa (Sweden))
2008-12-15
The Swedish Nuclear Fuel and Waste Management Company is in the process of completing site descriptive modelling at two locations in Sweden, with the objective to site a deep geological repository for spent nuclear fuel. At Forsmark, the results of the stage 2.2 geological modelling formed the input for downstream users. Since complementary ground and borehole geological and geophysical data, acquired after model stage 2.2, were not planned to be included in the deterministic rock domain, fracture domain and deformation zone models supplied to the users, it was deemed necessary to evaluate the implications of these stage 2.3 data for the stage 2.2 deterministic geological models and, if possible, to make use of these data to verify the models. This report presents the results of the analysis of the complementary stage 2.3 geological and geophysical data. Model verification from borehole data has been implemented in the form of a prediction-outcome test. The stage 2.3 geological and geophysical data at Forsmark mostly provide information on the bedrock outside the target volume. Additional high-resolution ground magnetic data and the data from the boreholes KFM02B, KFM11A, KFM12A and HFM33 to HFM37 can be included in this category. Other data complement older information of identical character, both inside and outside this volume. These include the character and kinematics of deformation zones and fracture mineralogy. In general terms, it can be stated that all these new data either confirm the geological modelling work completed during stage 2.2 or are in good agreement with the data that were used in this work. In particular, although the new high-resolution ground magnetic data modify slightly the position and trace length of some stage 2.2 deformation zones at the ground surface, no new or modified deformation zones with a trace length longer than 3,000 m at the ground surface have emerged. It is also apparent that the revision of fracture orientation data
International Nuclear Information System (INIS)
Stephens, Michael B.; Simeonov, Assen; Isaksson, Hans
2008-12-01
The Swedish Nuclear Fuel and Waste Management Company is in the process of completing site descriptive modelling at two locations in Sweden, with the objective to site a deep geological repository for spent nuclear fuel. At Forsmark, the results of the stage 2.2 geological modelling formed the input for downstream users. Since complementary ground and borehole geological and geophysical data, acquired after model stage 2.2, were not planned to be included in the deterministic rock domain, fracture domain and deformation zone models supplied to the users, it was deemed necessary to evaluate the implications of these stage 2.3 data for the stage 2.2 deterministic geological models and, if possible, to make use of these data to verify the models. This report presents the results of the analysis of the complementary stage 2.3 geological and geophysical data. Model verification from borehole data has been implemented in the form of a prediction-outcome test. The stage 2.3 geological and geophysical data at Forsmark mostly provide information on the bedrock outside the target volume. Additional high-resolution ground magnetic data and the data from the boreholes KFM02B, KFM11A, KFM12A and HFM33 to HFM37 can be included in this category. Other data complement older information of identical character, both inside and outside this volume. These include the character and kinematics of deformation zones and fracture mineralogy. In general terms, it can be stated that all these new data either confirm the geological modelling work completed during stage 2.2 or are in good agreement with the data that were used in this work. In particular, although the new high-resolution ground magnetic data modify slightly the position and trace length of some stage 2.2 deformation zones at the ground surface, no new or modified deformation zones with a trace length longer than 3,000 m at the ground surface have emerged. It is also apparent that the revision of fracture orientation data
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.
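For the special case of an acyclic game graph, solving such a game reduces to backward induction: max-player vertices pick the best successor value, min-player vertices the worst, and sinks carry real payoffs. A sketch under that assumption (the paper's almost-linear algorithm also handles cyclic graphs, which this simple recursion does not):

```python
from functools import lru_cache

def solve_dag_game(succ, payoff, player):
    # Backward induction on an acyclic deterministic graphical game:
    # no moves of chance, arbitrary real payoffs at the sinks.
    @lru_cache(maxsize=None)
    def value(v):
        if v in payoff:                     # sink: payoff vertex
            return payoff[v]
        vals = [value(w) for w in succ[v]]
        return max(vals) if player[v] == "max" else min(vals)
    return value

# Hypothetical toy game: "s" belongs to the max player,
# "a" and "b" to the min player, t1..t3 are payoff sinks.
succ = {"s": ["a", "b"], "a": ["t1", "t2"], "b": ["t3"]}
payoff = {"t1": 1.0, "t2": -2.0, "t3": 0.5}
player = {"s": "max", "a": "min", "b": "min"}
value = solve_dag_game(succ, payoff, player)
value("s")  # max(min(1.0, -2.0), 0.5) = 0.5
```

The `max`/`min` comparisons are the only operations on payoffs, which is what "comparison-based" means in the abstract.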
Verification of Overall Safety Factors In Deterministic Design Of Model Tested Breakwaters
DEFF Research Database (Denmark)
Burcharth, H. F.
2001-01-01
The paper deals with concepts of safety implementation in design. An overall safety factor concept is evaluated on the basis of a reliability analysis of a model tested rubble mound breakwater with monolithic super structure. Also discussed are design load identification and failure mode limit...
A deterministic model for the planning of microcellular mobile radio communication systems
Klaassen, M.G.J.J.; Mawira, A.
1994-01-01
A ray model for field strength prediction for the planning of microcellular mobile radio communication systems is presented. The software developed at Eindhoven University of Technology for LMSS has been adapted for application in microcellular mobile radio communication systems. The adaptation
Deterministically patterned biomimetic human iPSC-derived hepatic model via rapid 3D bioprinting.
Ma, Xuanyi; Qu, Xin; Zhu, Wei; Li, Yi-Shuan; Yuan, Suli; Zhang, Hong; Liu, Justin; Wang, Pengrui; Lai, Cheuk Sun Edwin; Zanella, Fabian; Feng, Gen-Sheng; Sheikh, Farah; Chien, Shu; Chen, Shaochen
2016-02-23
The functional maturation and preservation of hepatic cells derived from human induced pluripotent stem cells (hiPSCs) are essential to personalized in vitro drug screening and disease study. Major liver functions are tightly linked to the 3D assembly of hepatocytes, with the supporting cell types from both endodermal and mesodermal origins in a hexagonal lobule unit. Although there are many reports on functional 2D cell differentiation, few studies have demonstrated the in vitro maturation of hiPSC-derived hepatic progenitor cells (hiPSC-HPCs) in a 3D environment that depicts the physiologically relevant cell combination and microarchitecture. The application of rapid, digital 3D bioprinting to tissue engineering has allowed 3D patterning of multiple cell types in a predefined biomimetic manner. Here we present a 3D hydrogel-based triculture model that embeds hiPSC-HPCs with human umbilical vein endothelial cells and adipose-derived stem cells in a microscale hexagonal architecture. In comparison with 2D monolayer culture and a 3D HPC-only model, our 3D triculture model shows both phenotypic and functional enhancements in the hiPSC-HPCs over weeks of in vitro culture. Specifically, we find improved morphological organization, higher liver-specific gene expression levels, increased metabolic product secretion, and enhanced cytochrome P450 induction. The application of bioprinting technology in tissue engineering enables the development of a 3D biomimetic liver model that recapitulates the native liver module architecture and could be used for various applications such as early drug screening and disease modeling.
Zieher, T.; Rutzinger, M.; Bremer, M.; Meissl, G.; Geitner, C.
2014-12-01
The potentially stabilizing effects of forest cover with respect to slope stability have been the subject of many studies in the recent past. Hence, the effects of trees are also considered in many deterministic landslide susceptibility models. TRIGRS 2.0 (Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability; USGS) is a dynamic, physically-based model designed to estimate shallow landslide susceptibility in space and time. In the original version the effects of forest cover are not considered. As TRIGRS 2.0 is intended to be applied in further studies in Vorarlberg (Austria) in selected catchments that are densely forested, the effects of trees on slope stability were implemented in the model. Besides hydrological impacts such as interception or transpiration by tree canopies and stems, root cohesion directly influences the stability of slopes, especially in the case of shallow landslides, while the additional weight superimposed by trees is of minor relevance. Detailed data on tree positions and further attributes such as tree height and diameter at breast height were derived throughout the study area (52 km²) from high-resolution airborne laser scanning data. Different scenarios were computed for spruce (Picea abies) in the study area. Root cohesion was estimated area-wide based on published correlations between root reinforcement and distance to tree stems depending on the stem diameter at breast height. In order to account for decreasing root cohesion with depth, an exponential distribution was assumed and implemented in the model. Preliminary modelling results show that forest cover can have positive effects on slope stability, yet these depend strongly on tree age and stand structure. This work has been conducted within C3S-ISLS, which is funded by the Austrian Climate and Energy Fund, 5th ACRP Program.
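The exponential depth decay of root cohesion described above can be sketched by adding it to the soil cohesion in the standard infinite-slope stability formula that models like TRIGRS are built on. All parameter values below are illustrative assumptions, not the study's calibrated inputs:

```python
import math

def root_cohesion(depth_m, c0=5.0, z0=0.5):
    # Root cohesion (kPa) decaying exponentially with depth below the
    # surface; c0 and z0 are hypothetical illustrative values.
    return c0 * math.exp(-depth_m / z0)

def factor_of_safety(z, beta_deg, c_soil=2.0, phi_deg=30.0,
                     gamma=18.0, u=0.0, c0=5.0, z0=0.5):
    # Infinite-slope factor of safety with depth-dependent root
    # cohesion added to the soil cohesion. Units: kPa, kN/m^3, m,
    # degrees; pore pressure u set to zero for simplicity.
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    c = c_soil + root_cohesion(z, c0, z0)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving
```

With these illustrative numbers a shallow failure surface (0.5 m) on a 35 degree slope is stabilized by the roots (FS > 1) while a deeper one (2 m) is not, which mirrors why root reinforcement matters most for shallow landslides.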
Simple deterministic model of the hydraulic buffer effect in septic tanks
Forquet, N.; Dufresne, M.
2015-01-01
Septic tanks are widely used in on-site wastewater treatment systems. In addition to anaerobic pre-treatment, hydraulic buffering is one of the roles attributed to septic tanks. However, there is still no tool for assessing it, especially in dynamic conditions. For gravity-fed systems, it could help both researchers and system designers. This technical note reports a simple mechanistic model based on the assumption of flow transition between the septic tank and the outflow pipe. The only parame...
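A minimal version of such a mechanistic buffer model is a tank water balance drained through an orifice-type outlet, stepped with explicit Euler. The geometry and coefficients below are illustrative assumptions, not the note's actual parameterization:

```python
import math

def simulate(q_in, area=2.0, a_pipe=0.005, cd=0.6, g=9.81, dt=1.0):
    # Tank water level h (m) rises with inflow q_in (m^3/s) and drains
    # through an outlet modeled with an orifice law; one q_out value
    # is recorded per time step of length dt (s).
    h, out = 0.05, []
    for q in q_in:
        q_out = cd * a_pipe * math.sqrt(2 * g * h) if h > 0 else 0.0
        h = max(h + dt * (q - q_out) / area, 0.0)
        out.append(q_out)
    return out

# A short inflow pulse: the outflow peak is both lower and later,
# which is the hydraulic buffer effect.
pulse = [0.0] * 10 + [0.02] * 30 + [0.0] * 200
outflow = simulate(pulse)
```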
A Reference-Dependent Regret Model for Deterministic Trade-off Studies
Energy Technology Data Exchange (ETDEWEB)
Kujawski, Edouard
2005-02-25
Today's typical multi-criteria decision analysis is based on classical expected utility theory that assumes a mythical "Rational Individual" immune to psychological influences such as anticipated regret. It is therefore in conflict with rational individuals who trade off some benefits and forgo the alternative with the highest total classical utility for a more balanced alternative in order to reduce their level of anticipated regret. This paper focuses on decision making under certainty. It presents a reference-dependent regret model (RDRM) in which the level of regret that an individual experiences depends on the absolute values rather than the differences of the utilities of the chosen and forgone alternatives. The RDRM best choice may differ from the conventional linear additive utility model, the analytic hierarchy process, and the regret theories of Bell and of Loomes and Sugden. Examples are presented that indicate that RDRM is the better predictive descriptor for decision making under certainty. RDRM satisfies transitivity of the alternatives under pairwise comparisons and models rank reversal consistent with observed reasonable choices under dynamic or distinct situations. Like regret theory, the RDRM utilities of all the alternatives under consideration are interrelated. For complex trade-off studies, regret is incorporated as an element of a cost-utility-regret analysis that characterizes each alternative in terms of its monetary cost, an aggregate performance utility, and a regret value. This provides decision makers adequate information to compare the alternatives, and depending on their values they may trade off some performance and/or cost to avoid high levels of regret. The result is a well-balanced alternative often preferred by reasonable decision makers to the optimal choice of classical multi-attribute utility analysis. The model can readily be extended to incorporate rejoicing to suit decision makers who seek it. The
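A toy sketch of the central idea, that regret computed from the absolute utility levels of the chosen and forgone alternatives (rather than from utility differences) can favor a balanced alternative over the classical additive-utility optimum. The functional form `r` and weight `k` below are illustrative assumptions, not the paper's actual RDRM specification:

```python
import math

def regret_adjusted_scores(alts, k=0.5, r=math.sqrt):
    # Per-criterion regret is the gap between a nonlinear function r
    # of the best forgone utility level and of the alternative's own
    # level; the score trades total utility against summed regret.
    names = list(alts)
    n_crit = len(next(iter(alts.values())))
    best = [max(alts[a][c] for a in names) for c in range(n_crit)]
    scores = {}
    for a in names:
        utility = sum(alts[a])
        regret = sum(r(best[c]) - r(alts[a][c]) for c in range(n_crit))
        scores[a] = utility - k * regret
    return scores

# Hypothetical two-criterion trade-off study:
alts = {"A": (1.0, 0.0),    # extreme: wins criterion 1 outright
        "B": (0.55, 0.4)}   # balanced: close to best on both
scores = regret_adjusted_scores(alts)
# Classical additive utility prefers A (1.0 > 0.95), but the
# regret-adjusted score prefers the balanced alternative B.
```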
TOWARD A DETERMINISTIC MODEL OF PLANETARY FORMATION. VII. ECCENTRICITY DISTRIBUTION OF GAS GIANTS
International Nuclear Information System (INIS)
Ida, S.; Lin, D. N. C.; Nagasawa, M.
2013-01-01
The ubiquity of planets and diversity of planetary systems reveal that planet formation encompasses many complex and competing processes. In this series of papers, we develop and upgrade a population synthesis model as a tool to identify the dominant physical effects and to calibrate the range of physical conditions. Recent planet searches have led to the discovery of many multiple-planet systems. Any theoretical models of their origins must take into account dynamical interactions between emerging protoplanets. Here, we introduce a prescription to approximate the close encounters between multiple planets. We apply this method to simulate the growth, migration, and dynamical interaction of planetary systems. Our models show that in relatively massive disks, several gas giants and rocky/icy planets emerge, migrate, and undergo dynamical instability. Secular perturbation between planets leads to orbital crossings, eccentricity excitation, and planetary ejection. In disks with modest masses, two or fewer gas giants form with multiple super-Earths. Orbital stability in these systems is generally maintained and they retain the kinematic structure after gas in their natal disks is depleted. These results reproduce the observed planetary mass-eccentricity and semimajor axis-eccentricity correlations. They also suggest that emerging gas giants can scatter residual cores to the outer disk regions. Subsequent in situ gas accretion onto these cores can lead to the formation of distant (≳ 30 AU) gas giants with nearly circular orbits
Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan
2016-12-01
The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
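The core of the sampling problem, drawing the next jump time of a process whose intensity depends on the deterministically evolving voltage, can be sketched by integrating the cumulative hazard H(t) alongside the voltage ODE until H crosses an exponential random threshold. Simple Euler stepping stands in for the paper's piecewise-linear log-survival fit, and the rate and voltage dynamics below are toy functions, not Hodgkin-Huxley kinetics:

```python
import math
import random

def sample_dwell_time(v0, rate, dvdt, dt=1e-3, t_max=50.0, rng=random):
    # PDMP inter-jump sampling sketch: integrate the deterministic
    # voltage ODE together with the cumulative hazard
    # H(t) = integral of rate(V(s)) ds, and report the first time H
    # crosses the threshold -log(U), U uniform on (0, 1).
    threshold = -math.log(rng.random())
    v, h, t = v0, 0.0, 0.0
    while h < threshold and t < t_max:
        h += rate(v) * dt
        v += dvdt(v) * dt
        t += dt
    return t, v

rate = lambda v: 0.5 + 0.1 * v * v   # toy voltage-dependent intensity
dvdt = lambda v: -0.2 * v            # toy relaxation dynamics
t, v = sample_dwell_time(2.0, rate, dvdt, rng=random.Random(1))
```

Between channel transitions the voltage evolves deterministically; at the sampled time one Markov jump is executed, the rates are re-evaluated, and the procedure repeats.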
The effects of a driving mechanism on Oslo models
International Nuclear Information System (INIS)
Pan, Gui-Jun; Pan, Yong-Cai
2010-01-01
Isotropic and anisotropic Oslo models (IOM and AOM) under a bulk driving mechanism have been investigated. We apply the moment analysis to evaluate critical exponents and the finite size scaling method to consistently test the obtained results. We find that the two types of Oslo model exhibit different critical behaviour. However, the critical exponents are the same for deterministic and random IOMs, and are independent of the strength of the anisotropy for the AOM. In contrast to a boundary driving mechanism, we find that the critical exponents depend crucially on the driving mechanism for the IOM, and are independent of the driving mechanism for the AOM.
International Nuclear Information System (INIS)
Farmer, J.C.; McCright, R.D.
1997-01-01
A key component of the Engineered Barrier System (EBS) being designed for containment of spent-fuel and high-level waste at the proposed geological repository at Yucca Mountain, Nevada is a two-layer canister. In this particular design, the inner barrier is made of a corrosion resistant material (CRM) such as Alloy 625 or C-22, while the outer barrier is made of a corrosion-allowance material (CAM) such as carbon steel or Monel 400. An integrated predictive model is being developed to account for the effects of localized environmental conditions in the CRM-CAM crevice on the initiation and propagation of pits through the CRM
International Nuclear Information System (INIS)
Farmer, J.C.; McCright, R.D.
1997-01-01
A key component of the Engineered Barrier System (EBS) being designed for containment of spent-fuel and high-level waste at the proposed geological repository at Yucca Mountain, Nevada is a two-layer canister. In this particular design, the inner barrier is made of a corrosion resistant material (CRM) such as Alloy 625 or C-22, while the outer barrier is made of a corrosion-allowance material (CAM) such as carbon steel or Monel 400. An integrated predictive model is being developed to account for the effects of localized environmental conditions in the CRM-CAM crevice on the initiation and propagation of pits through the CRM
Deterministic Modeling of the High Temperature Test Reactor with DRAGON-HEXPEDITE
International Nuclear Information System (INIS)
Ortensi, J.; Pope, M.A.; Ferrer, R.M.; Cogliati, J.J.; Bess, J.D.; Ougouag, A.M.
2010-01-01
The Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Power (NGNP) project. In order to examine the INL's current prismatic reactor analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a thin annular core with 19 fuel columns, and the fully loaded core critical condition with 30 fuel columns. Special emphasis is devoted to physical phenomena and artifacts in HTTR that are similar to phenomena and artifacts in the NGNP base design. The DRAGON code is used in this study since it offers significant ease and versatility in modeling prismatic designs. DRAGON can generate transport solutions via Collision Probability (CP), Method of Characteristics (MOC) and Discrete Ordinates (Sn). A fine group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. The results from this study show reasonable agreement in the calculation of the core multiplication factor with Monte Carlo methods, but a consistent bias of 2-3% with the experimental values is obtained. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U-235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement partially stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral
Energy Technology Data Exchange (ETDEWEB)
Grace, Matthew; Lowry, Thomas Stephen; Arnold, Bill Walter; James, Scott Carlton; Gray, Genetha Anne; Ahlmann, Michael
2008-08-01
Uncertainty in site characterization arises from a lack of data and knowledge about a site and includes uncertainty in the boundary conditions, uncertainty in the characteristics, location, and behavior of major features within an investigation area (e.g., major faults as barriers or conduits), uncertainty in the geologic structure, as well as differences in numerical implementation (e.g., 2-D versus 3-D, finite difference versus finite element, grid resolution, deterministic versus stochastic, etc.). Since the true condition at a site can never be known, selection of the best conceptual model is very difficult. In addition, limiting the understanding to a single conceptualization too early in the process, or before data can support that conceptualization, may lead to confidence in a characterization that is unwarranted as well as to data collection efforts and field investigations that are misdirected and/or redundant. Using a series of numerical modeling experiments, this project examined the application and use of information criteria within the site characterization process. The numerical experiments are based on models of varying complexity that were developed to represent one of two synthetically developed groundwater sites; (1) a fully hypothetical site that represented a complex, multi-layer, multi-faulted site, and (2) a site that was based on the Horonobe site in northern Japan. Each of the synthetic sites were modeled in detail to provide increasingly informative 'field' data over successive iterations to the representing numerical models. The representing numerical models were calibrated to the synthetic site data and then ranked and compared using several different information criteria approaches. Results show that, for the early phases of site characterization, low-parameterized models ranked highest while more complex models generally ranked lowest. In addition, predictive capabilities were also better with the low-parameterized models. For
Deterministic analysis of extrinsic and intrinsic noise in an epidemiological model.
Bayati, Basil S
2016-05-01
We couple a stochastic collocation method with an analytical expansion of the canonical epidemiological master equation to analyze the effects of both extrinsic and intrinsic noise. It is shown that depending on the distribution of the extrinsic noise, the master equation yields quantitatively different results compared to using the expectation of the distribution for the stochastic parameter. This difference arises from the nonlinear terms in the master equation, and we show that the deviation away from the expectation of the extrinsic noise scales nonlinearly with the variance of the distribution. The method presented here converges linearly with respect to the number of particles in the system and exponentially with respect to the order of the polynomials used in the stochastic collocation calculation. This makes the method presented here more accurate than standard Monte Carlo methods, which suffer from slow, nonmonotonic convergence. In epidemiological terms, the results show that extrinsic fluctuations should be taken into account since they affect the speed of disease outbreaks, and that the gamma distribution should be used to model the basic reproductive number.
Mechanical Systems, Classical Models
Teodorescu, Petre P
2009-01-01
This third volume completes the work Mechanical Systems, Classical Models. The first two volumes dealt with particle dynamics and with discrete and continuous mechanical systems. The present volume studies analytical mechanics. Topics like Lagrangian and Hamiltonian mechanics, the Hamilton-Jacobi method, and a study of systems with separable variables are thoroughly discussed. Also included are variational principles and canonical transformations, integral invariants and exterior differential calculus, and particular attention is given to non-holonomic mechanical systems. The author explains in detail all important aspects of the science of mechanics, regarded as a natural science, and shows how they are useful in understanding important natural phenomena and solving problems of interest in applied and engineering sciences. Professor Teodorescu has spent more than fifty years as a Professor of Mechanics at the University of Bucharest and this book relies on the extensive literature on the subject as well as th...
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-12-01
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well-known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
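The core idea, a derivative-based response surface replacing many sampling runs, can be sketched as follows. The flow model here is a deliberately simplified stand-in (Q = k·Δp/μ), not the paper's borehole model, and the input distributions are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

def flow(k_perm, dp, mu):
    # Simplified stand-in for the borehole flow model: Q = k * dp / mu
    return k_perm * dp / mu

# Nominal parameter values and (assumed) normal input uncertainties
x0 = np.array([2.0, 5.0, 1.0])        # permeability k, pressure drop dp, viscosity mu
sd = np.array([0.2, 0.5, 0.05])

# Derivatives at the nominal point (analytic here; in DUA these would
# come from direct or adjoint sensitivity analysis of a single run)
g = np.array([x0[1] / x0[2], x0[0] / x0[2], -x0[0] * x0[1] / x0[2] ** 2])

# First-order response surface: output approximately normal
mean_dua = flow(*x0)
sd_dua = np.sqrt(np.sum((g * sd) ** 2))

# Reference answer: brute-force Monte Carlo with many model executions
samples = flow(rng.normal(x0[0], sd[0], 100_000),
               rng.normal(x0[1], sd[1], 100_000),
               rng.normal(x0[2], sd[2], 100_000))
print(mean_dua, sd_dua, samples.mean(), samples.std())
```

One model run plus derivative information reproduces the mean and spread that Monte Carlo needs thousands of runs to estimate, which is the economy the abstract describes.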
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
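The defining property, randomness used internally while the output is unique across runs with high probability, can be illustrated with a toy search problem. The predicate and canonicalization scheme below are illustrative, not a construction from the talk:

```python
import random

def pseudo_det_witness(pred, n, trials=200):
    """Return a witness i < n with pred(i) True, unique across runs w.h.p.

    Randomness is used only to find *some* witness quickly; the answer
    is then canonicalized to the smallest witness at or below the hit,
    so every successful run outputs the same index regardless of coins.
    """
    for _ in range(trials):
        i = random.randrange(n)
        if pred(i):
            # Canonicalize: first witness in [0, i] (deterministic scan).
            return next(j for j in range(i + 1) if pred(j))
    return None  # fails only with probability (1 - witness density)^trials

# Witnesses are the indices congruent to 3 mod 7; the smallest is 3,
# and every successful run returns it.
pred = lambda i: i % 7 == 3
print(pseudo_det_witness(pred, 10_000))
```

An observer who reruns this algorithm sees the same answer every time, so it is indistinguishable (by outputs) from a deterministic procedure, which is exactly the pseudo-deterministic property.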
Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente
2009-12-20
This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole effluent toxicity testing and water flows along a specific river basin. It is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from 5 waste water treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements and precise distance measurements using Geographical Information Systems (GIS). The model was implemented in a spreadsheet and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, run for each month of one calendar year, showed a significant seasonal variation of risk while revealing that September represents the worst-case scenario, with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach using Monte Carlo analysis was performed on 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points, with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approaches reveal the presence of certain substances, which might be causing sublethal effects in the aquatic species present in the Henares River.
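The deterministic-versus-probabilistic contrast above can be sketched with a simple dilution calculation. The risk formula, toxicity value, and flow distribution below are illustrative assumptions, not the RABETOX formulation:

```python
import numpy as np

rng = np.random.default_rng(7)

def risk_units(effluent_tox, effluent_flow, river_flow):
    # Toy dilution model: toxic units scaled by the effluent share of
    # the total downstream flow (a stand-in for the RABETOX model).
    return effluent_tox * effluent_flow / (effluent_flow + river_flow)

# Deterministic estimate using a single (hypothetical) September mean flow
det = risk_units(effluent_tox=500.0, effluent_flow=0.8, river_flow=2.0)

# Probabilistic estimate: low-flow variability modeled as lognormal,
# sampled Monte Carlo style as in the 7-point forecast analysis
flows = rng.lognormal(mean=np.log(2.0), sigma=0.6, size=50_000)
mc = risk_units(500.0, 0.8, flows)

print(det, np.mean(mc > det))  # chance of exceeding the deterministic value
```

The Monte Carlo output is a full distribution of risk units rather than a single number, which is what allows statements like "more than 50% probability of potential risk" in the abstract.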
Energy Technology Data Exchange (ETDEWEB)
Alves, A.S.M., E-mail: asergi@eletronuclear.gov.br [Eletrobrás Termonuclear – Eletronuclear S.A. , Rua da Candelária 65, 7° andar, GSN.T, 20091-906 Rio de Janeiro, RJ (Brazil); Melo, P.F. Frutuoso e, E-mail: frutuoso@nuclear.ufrj.br [Graduate Program of Nuclear Engineering, COPPE, Federal University of Rio de Janeiro, Av. Horácio Macedo 2030, Bloco G, sala 206, 21941-914 Rio de Janeiro, RJ (Brazil); Passos, E.M., E-mail: epassos@eletronuclear.gov.br [Eletrobrás Termonuclear – Eletronuclear S.A. , Rua da Candelária 65, 7° andar, GSN.T, 20091-906 Rio de Janeiro, RJ (Brazil); Fontes, G.S., E-mail: gsfontes@hotmail.com [Instituto Militar de Engenharia – IME, Praça General Tibúrcio 80, 22290-270 Rio de Janeiro, RJ (Brazil)
2015-06-15
Highlights: • The water infiltration scenario is evaluated for a near surface repository. • The main objective is the determination of the critical distance of the repository. • The column liquid height in the repository is governed by an Ito stochastic equation. • Practical results are obtained for the Abadia de Goiás repository in Brazil. - Abstract: The aim of this paper is to present the stochastic and deterministic models developed for the evaluation of the critical distance of a near surface repository for the disposal of intermediate (ILW) and low level (LLW) radioactive wastes. The critical distance of a repository is defined as the distance between the repository and a well in which the water activity concentration is able to cause a radiological dose to a member of the public equal to the dose limit set by the regulatory body. The mathematical models are developed based on the Richards equation for the liquid flow in the porous media and on the solute transport equation in this medium. The release of radioactive material from the repository to the environment is considered through its base and its flow is determined by Darcy's Law. The deterministic model is obtained from the stochastic approach by neglecting the influence of the Gaussian white noise on the rainfall and the equations are solved analytically with the help of conventional calculus (non-stochastic calculus). The equations of the stochastic model are solved analytically based on the Ito stochastic calculus and numerically by using the Euler–Maruyama method. The impact on the value of the critical distance of the Abadia de Goiás repository is analyzed, taken as a study case, when the deterministic methodology is replaced by the stochastic one, considered more appropriate for modeling rainfall as a stochastic process.
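The stochastic model's numerical solution via the Euler–Maruyama method can be sketched on a toy Ito equation for the liquid column height. The linear drift, noise intensity, and parameter values below are illustrative assumptions, not the paper's Richards-equation-based model:

```python
import numpy as np

rng = np.random.default_rng(0)

# dh = (q_in - k*h) dt + sigma dW : toy Ito equation for the liquid
# column height h in the repository (q_in: net infiltration rate,
# k: leakage coefficient through the base, sigma: rainfall noise).
q_in, k, sigma = 0.4, 0.1, 0.05
dt, n_steps, n_paths = 0.02, 5_000, 2_000

h = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
    h += (q_in - k * h) * dt + sigma * dW        # Euler-Maruyama step
    np.clip(h, 0.0, None, out=h)                 # height cannot go negative

# The deterministic model (sigma = 0) settles at q_in / k = 4.0;
# the stochastic paths fluctuate around that level.
print(h.mean(), h.std())
```

Setting sigma to zero recovers the deterministic model solved with conventional calculus, so the same code exhibits the replacement of the deterministic methodology by the stochastic one that the paper analyzes.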
Energy Technology Data Exchange (ETDEWEB)
Passon, O.
2004-07-01
The following topics are dealt with: Quantum mechanics, measurement and observables in Bohmian mechanics, locality, reality, causality, applications to the harmonic oscillator, the hydrogen atom, the double-slit experiment, the tunnel effect, Schroedinger's cat, many-body systems, the wave-particle dualism of the light, critique on Bohm's mechanics. (HSI)
Directory of Open Access Journals (Sweden)
Guangjian Ni
2014-01-01
Full Text Available The cochlea plays a crucial role in mammal hearing. The basic function of the cochlea is to map sounds of different frequencies onto corresponding characteristic positions on the basilar membrane (BM. Sounds enter the fluid-filled cochlea and cause deflection of the BM due to pressure differences between the cochlear fluid chambers. These deflections travel along the cochlea, increasing in amplitude, until a frequency-dependent characteristic position and then decay away rapidly. The hair cells can detect these deflections and encode them as neural signals. Modelling the mechanics of the cochlea is of help in interpreting experimental observations and also can provide predictions of the results of experiments that cannot currently be performed due to technical limitations. This paper focuses on reviewing the numerical modelling of the mechanical and electrical processes in the cochlea, which include fluid coupling, micromechanics, the cochlear amplifier, nonlinearity, and electrical coupling.
Deterministic computation of functional integrals
International Nuclear Information System (INIS)
Lobanov, Yu.Yu.
1995-09-01
A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, and no simplifying assumptions such as semi-classical or mean field approximations, collective excitations, or the introduction of ''short-time'' propagators are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by computation of an ''ordinary'' (Riemannian) integral of low dimension, thus allowing the use of the more preferable deterministic algorithms (normally Gaussian quadratures) in computations rather than the traditional stochastic (Monte Carlo) methods which are commonly used for solution of the problem under consideration. The results of application of the method to computation of the Green function of the Schroedinger equation in imaginary time as well as the study of some models of Euclidean quantum mechanics are presented. The comparison with results of other authors shows that our method gives significant (by an order of magnitude) economy of computer time and memory versus other known methods while providing the results with the same or better accuracy. The functional measure of the Gaussian type is considered and some of its particular cases, namely conditional Wiener measure in quantum statistical mechanics and functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the
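The deterministic-versus-Monte-Carlo advantage has a simple low-dimensional analogue: computing an expectation against a Gaussian measure by Gaussian quadrature versus random sampling. The integrand below is an illustrative choice with a known exact answer:

```python
import numpy as np

# E[cos(x)] for x ~ N(0,1) equals exp(-1/2) exactly.
f = lambda x: np.cos(x)

# Deterministic route: Gauss-Hermite quadrature (weight e^{-x^2/2}),
# exact for polynomials up to degree 2n-1.
t, w = np.polynomial.hermite_e.hermegauss(20)
quad = np.sum(w * f(t)) / np.sqrt(2 * np.pi)

# Stochastic route: plain Monte Carlo sampling.
rng = np.random.default_rng(0)
mc = f(rng.normal(size=10_000)).mean()

exact = np.exp(-0.5)
print(quad - exact, mc - exact)  # quadrature error is many orders smaller
```

Twenty deterministic nodes beat ten thousand random samples by many orders of magnitude here, which is the economy of computer time the abstract reports for the functional-integral setting.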
Baum, Rex L.; Godt, Jonathan W.; De Vita, P.; Napolitano, E.
2012-01-01
Rainfall-induced debris flows involving ash-fall pyroclastic deposits that cover steep mountain slopes surrounding the Somma-Vesuvius volcano are natural events and a source of risk for urban settlements located at footslopes in the area. This paper describes experimental methods and modelling results of shallow landslides that occurred on 5–6 May 1998 in selected areas of the Sarno Mountain Range. Stratigraphical surveys carried out in initiation areas show that ash-fall pyroclastic deposits are discontinuously distributed along slopes, with total thicknesses that vary from a maximum value on slopes inclined less than 30° to near zero thickness on slopes inclined greater than 50°. This distribution of cover thickness influences the stratigraphical setting and leads to downward thinning and the pinching out of pyroclastic horizons. Three engineering geological settings were identified in which most of the initial landslides that triggered the May 1998 debris flows occurred; these can be classified as (1) knickpoints, characterised by a downward progressive thinning of the pyroclastic mantle; (2) rocky scarps that abruptly interrupt the pyroclastic mantle; and (3) road cuts in the pyroclastic mantle that occur in a critical range of slope angle. Detailed topographic and stratigraphical surveys coupled with field and laboratory tests were conducted to define geometric, hydraulic and mechanical features of pyroclastic soil horizons in the source areas and to carry out hydrological numerical modelling of hillslopes under different rainfall conditions. The slope stability for three representative cases was calculated considering the real sliding surface of the initial landslides and the pore pressures during the infiltration process. The hydrological modelling of hillslopes demonstrated localised increase of pore pressure, up to saturation, where pyroclastic horizons with higher hydraulic conductivity pinch out and the thickness of pyroclastic mantle reduces or is
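The stability calculation described above is commonly expressed through the classic infinite-slope factor of safety with pore pressure; the formula is standard limit-equilibrium theory, but the soil parameters below are hypothetical, not the paper's measured values:

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, psi):
    """Infinite-slope factor of safety with pore pressure.

    c: effective cohesion [kPa], phi_deg: friction angle [deg],
    gamma: unit weight [kN/m^3], z: depth of sliding surface [m],
    beta_deg: slope angle [deg], psi: pore water pressure [kPa].
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * z * math.cos(beta) ** 2 - psi   # effective normal stress
    shear = gamma * z * math.sin(beta) * math.cos(beta)
    return (c + normal * math.tan(phi)) / shear

# Hypothetical pyroclastic cover on a 40 deg slope: rising pore pressure
# during infiltration drives the factor of safety below 1 (failure).
for psi in (0.0, 5.0, 10.0):
    fs = factor_of_safety(c=5.0, phi_deg=35.0, gamma=13.0,
                          z=1.5, beta_deg=40.0, psi=psi)
    print(psi, round(fs, 2))
```

The run shows the mechanism the paper models: the slope is stable when dry, and the localized pore-pressure rise during infiltration is enough to trigger the initial landslide.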
Fracture Mechanical Markov Chain Crack Growth Model
DEFF Research Database (Denmark)
Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard
1991-01-01
propagation process can be described by a discrete space Markov theory. The model is applicable to deterministic as well as to random loading. Once the model parameters for a given material have been determined, the results can be used for any structure as soon as the geometrical function is known....
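A discrete-space Markov model of crack growth can be sketched as follows; the state discretization and transition probabilities below are illustrative only (in the paper they would be fitted to material data):

```python
import numpy as np

# Discrete-state Markov chain for crack growth: state i is a crack-length
# interval; at each load cycle the crack stays put or advances one state.
# The last state (failure) is absorbing.
n = 6
p = np.array([0.02, 0.03, 0.05, 0.08, 0.13])  # P(advance) from state i

P = np.eye(n)
for i, pi in enumerate(p):
    P[i, i] = 1.0 - pi
    P[i, i + 1] = pi

# Expected cycles to failure from state 0 via the fundamental matrix
# N = (I - Q)^{-1} of the transient part Q.
Q = P[:-1, :-1]
N = np.linalg.inv(np.eye(n - 1) - Q)
print(N[0].sum())  # equals sum of 1/p[i] for this birth-type chain
```

Because the chain only moves forward, the expected absorption time reduces to the sum of the geometric waiting times 1/p[i], a useful sanity check on the matrix computation.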
Deterministic Compressed Sensing
2011-11-01
[Extraction fragments from the report's table of contents and body; recoverable topics: digital communications, group testing, deterministic design matrices ("all bounds ignore the O() constants"), and the Iterative Hard Thresholding algorithm. A surviving sentence states that compressed sensing is information-theoretically possible using any (2k, )-RIP sensing matrix, citing the celebrated results of Candès, Romberg and Tao.]
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
International Nuclear Information System (INIS)
1990-01-01
In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether for specific tissues the present dose limits or annual limits of intake based on Q values, are adequate to prevent deterministic effects. (author)
Directory of Open Access Journals (Sweden)
YouHua Chen
2014-06-01
Full Text Available In the present report, the coexistence of Prisoners' Dilemma game players (cooperators and defectors) was explored in an individual-based framework considering the impacts of deterministic and stochastic waiting time (WT) for triggering mortality and/or colonization events. For deterministic waiting time, the time step for triggering a mortality and/or colonization event is fixed. For stochastic waiting time, whether a mortality and/or colonization event is triggered at each time step of a simulation is randomly determined by a given acceptance probability (the event takes place when a variate drawn from a uniform distribution [0,1] is smaller than the acceptance probability). The two strategies of modeling waiting time are considered simultaneously and applied to both quantities (mortality: WTm; colonization: WTc). As such, when WT (WTm and/or WTc) is an integer >= 1, it indicates a deterministic triggering strategy. In contrast, when 1 > WT > 0, it indicates a stochastic triggering strategy and the WT value itself is used as the acceptance probability. The parameter space between the waiting time for mortality (WTm, [0.1, 40]) and colonization (WTc, [0.1, 40]) was traversed to explore the coexistence and non-coexistence regions. The role of the defense award was evaluated. My results showed that one non-coexistence region is identified consistently, located in the area where 1 >= WTm >= 0.3 and 40 >= WTc >= 0.1. As a consequence, it was found that the coexistence of cooperators and defectors in the community is largely dependent on the waiting time of mortality events, regardless of the defense or cooperation rewards. When the mortality events happen in terms of stochastic waiting time (1 >= WTm >= 0.3), extinction of either cooperators or defectors or both could be very likely, leading to the emergence of non-coexistence scenarios. However, when the mortality events occur in forms of relatively long deterministic
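The two triggering strategies described above (fixed period for WT >= 1, acceptance probability for 0 < WT < 1) can be sketched directly; the step counts are illustrative:

```python
import random

def should_trigger(step, wt, rng=random):
    """Decide whether a mortality/colonization event fires at this step.

    wt >= 1   : deterministic waiting time -- fire every round(wt) steps.
    0 < wt < 1: stochastic waiting time -- fire with acceptance probability
                wt (uniform [0,1] variate smaller than wt).
    """
    if wt >= 1:
        return step % round(wt) == 0
    return rng.random() < wt

# Deterministic WT = 5: fires at exactly every 5th step.
fires = [s for s in range(1, 101) if should_trigger(s, 5)]

# Stochastic WT = 0.2: fires on average every 5th step, randomly placed.
random.seed(1)
n_stochastic = sum(should_trigger(s, 0.2) for s in range(1, 101))
print(len(fires), n_stochastic)
```

Both settings produce the same mean event rate, so differences in coexistence outcomes come from the timing variability alone, which is the comparison the simulation framework is built to make.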
Deterministic chaos in entangled eigenstates
Schlegel, K. G.; Förster, S.
2008-05-01
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show for a two-particle system in a harmonic oscillator potential that, in a case of entanglement and three energy eigenvalues, the maximum Lyapunov parameters of a representative ensemble of trajectories develop for large times into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.
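The Lyapunov estimation underlying such results can be sketched with the standard two-trajectory (Benettin-style) method. Integrating Bohmian guidance equations is beyond a snippet, so the chaotic logistic map stands in as the dynamical system here; for r = 4 its exponent is known to be ln 2:

```python
import math

def max_lyapunov(step, x0, n_steps=100_000, d0=1e-9):
    """Benettin-style estimate of the maximal Lyapunov exponent:
    evolve two nearby trajectories, accumulate the log of the separation
    growth each step, and renormalize the separation back to d0."""
    x, y = x0, x0 + d0
    total = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = abs(y - x)
        if d == 0:
            d = d0            # guard against exact coincidence
        total += math.log(d / d0)
        y = x + d0            # renormalize the perturbed trajectory
    return total / n_steps

# Chaotic logistic map at r = 4; for the Bohmian ensembles above, `step`
# would instead integrate the guidance equation for one trajectory pair.
lyap = max_lyapunov(lambda x: 4 * x * (1 - x), 0.2)
print(lyap)  # close to ln 2
```

A positive value signals exponential divergence of nearby trajectories; applied over an ensemble of initial conditions, the distribution of such estimates is the quantity the abstract reports as narrow and positive.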
Deterministic chaos in entangled eigenstates
Energy Technology Data Exchange (ETDEWEB)
Schlegel, K.G. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)], E-mail: guenter.schlegel@arcor.de; Foerster, S. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)
2008-05-12
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show for a two-particle system in a harmonic oscillator potential that, in a case of entanglement and three energy eigenvalues, the maximum Lyapunov parameters of a representative ensemble of trajectories develop for large times into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.
Deterministic chaos in entangled eigenstates
International Nuclear Information System (INIS)
Schlegel, K.G.; Foerster, S.
2008-01-01
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show for a two-particle system in a harmonic oscillator potential that, in a case of entanglement and three energy eigenvalues, the maximum Lyapunov parameters of a representative ensemble of trajectories develop for large times into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.
Li, Zhixi; Peck, Kyung K; Brennan, Nicole P; Jenabi, Mehrnaz; Hsu, Meier; Zhang, Zhigang; Holodny, Andrei I; Young, Robert J
2013-02-01
The purpose of this study was to compare the deterministic and probabilistic tracking methods of diffusion tensor white matter fiber tractography in patients with brain tumors. We identified 29 patients with left brain tumors; the probabilistic method was based on an extended Monte Carlo Random Walk algorithm. Tracking was controlled using two ROIs corresponding to Broca's and Wernicke's areas. Tracts in tumor-affected hemispheres were examined for extension between Broca's and Wernicke's areas, anterior-posterior length and volume, and compared with the normal contralateral tracts. Probabilistic tracts displayed more complete anterior extension to Broca's area than did FACT tracts on the tumor-affected and normal sides (p probabilistic tracts than FACT tracts (p probabilistic tracts than FACT tracts (p = 0.01). Probabilistic tractography reconstructs the arcuate fasciculus more completely and performs better through areas of tumor and/or edema. The FACT algorithm tends to underestimate the anterior-most fibers of the arcuate fasciculus, which are crossed by primary motor fibers.
David, Hamilton P; Carey, Cayelan C.; Arvola, Lauri; Arzberger, Peter; Brewer, Carol A.; Cole, Jon J; Gaiser, Evelyn; Hanson, Paul C.; Ibelings, Bas W; Jennings, Eleanor; Kratz, Tim K; Lin, Fang-Pang; McBride, Christopher G.; de Motta Marques, David; Muraoka, Kohji; Nishri, Ami; Qin, Boqiang; Read, Jordan S.; Rose, Kevin C.; Ryder, Elizabeth; Weathers, Kathleen C.; Zhu, Guangwei; Trolle, Dennis; Brookes, Justin D
2014-01-01
A Global Lake Ecological Observatory Network (GLEON; www.gleon.org) has formed to provide a coordinated response to the need for scientific understanding of lake processes, utilising technological advances available from autonomous sensors. The organisation embraces a grassroots approach to engage researchers from varying disciplines, sites spanning geographic and ecological gradients, and novel sensor and cyberinfrastructure to synthesise high-frequency lake data at scales ranging from local to global. The high-frequency data provide a platform to rigorously validate process-based ecological models because model simulation time steps are better aligned with sensor measurements than with lower-frequency, manual samples. Two case studies from Trout Bog, Wisconsin, USA, and Lake Rotoehu, North Island, New Zealand, are presented to demonstrate that in the past, ecological model outputs (e.g., temperature, chlorophyll) have been relatively poorly validated based on a limited number of directly comparable measurements, both in time and space. The case studies demonstrate some of the difficulties of mapping sensor measurements directly to model state variable outputs as well as the opportunities to use deviations between sensor measurements and model simulations to better inform process understanding. Well-validated ecological models provide a mechanism to extrapolate high-frequency sensor data in space and time, thereby potentially creating a fully 3-dimensional simulation of key variables of interest.
Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.
2016-12-01
Applications of the Self-potential Method in the fields of Hydrogeology and Environmental Sciences have had significant developments during the last two decades, with strong use in groundwater flow identification. Although only few authors deal with the forward problem's solution (especially in the geophysics literature), different inversion procedures are currently being developed, but in most cases they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem based on the finite element method, using St. Venant's Principle to transform a point dipole, which is the field generated by a single vector, into a distribution of electrical monopoles. Then, two simple aquifer models were generated with specific boundary conditions, and head potentials, velocity fields and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing a Tikhonov regularization with a stabilized operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov Random Fields (MRF) was constructed. For all implemented methods, the result between the direct and inverse models was contrasted in two ways: 1) shape and distribution of the vector field, and 2) magnitude's histogram. Finally, it was concluded that inversion procedures are improved when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than confined ones. McMC has restricted applications and requires a lot of information (particularly in potential fields), while MRF has a remarkable response especially when dealing with confined aquifers.
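The deterministic branch, Tikhonov regularization of an ill-posed linear inverse problem, can be sketched on a toy one-dimensional analogue. The smoothing-kernel forward operator, noise level, and regularization weight below are illustrative assumptions, not the finite-element operator of the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ill-posed linear inverse problem d = G m + noise, solved by Tikhonov
# regularization: minimize ||G m - d||^2 + lam * ||L m||^2.
n = 50
x = np.linspace(0, 1, n)
G = np.exp(-5.0 * np.abs(x[:, None] - x[None, :])) / n  # smoothing kernel
m_true = np.sin(2 * np.pi * x)                          # source to recover
d = G @ m_true + rng.normal(0, 1e-4, n)                 # noisy surface data

L = np.eye(n)            # zeroth-order (identity) stabilizing operator
lam = 1e-5               # regularization weight (would be tuned in practice)
m_est = np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)

rel_err = np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)
print(rel_err)
```

Without the lam term the normal equations amplify the data noise into a useless solution; the stabilizer trades a small bias for that noise suppression, which is the role of the mesh-adapted operator in the deterministic approach above.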
Energy Technology Data Exchange (ETDEWEB)
Graham, Emily B. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Crump, Alex R. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Resch, Charles T. [Geochemistry Department, Pacific Northwest National Laboratory, Richland WA USA; Fansler, Sarah [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Environmental Compliance and Emergency Preparation, Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Fredrickson, Jim K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA
2017-03-28
Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.
DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS
Directory of Open Access Journals (Sweden)
MICULEAC Melania Elena
2014-06-01
Full Text Available The deterministic methods are quantitative methods whose goal is to quantify numerically the mechanisms by which factorial and causal relations of influence create and propagate their effects, in cases where the phenomenon can be expressed through a direct functional cause-effect relation. Functional, deterministic relations are causal relations in which a well-defined value of the resulting phenomenon corresponds to each value of the characteristic. They can directly express the correlation between the phenomenon and its influence factors, in the form of a function-type mathematical formula.
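One widely taught deterministic method of this kind is the chain substitution method, which decomposes the change in a multiplicative indicator into per-factor influences; the abstract does not name specific methods, so this choice and the example figures are assumptions:

```python
from math import prod

def chain_substitution(base, actual):
    """Chain substitution for a multiplicative model R = f1 * f2 * ... * fn.

    Replace base-period factor values with actual-period values one at a
    time, in order; each replacement's effect on R is that factor's
    influence, and the influences sum exactly to the total change in R.
    """
    current = list(base)
    influences = []
    for i, a in enumerate(actual):
        before = prod(current)
        current[i] = a
        influences.append(prod(current) - before)
    return influences

# Hypothetical example: turnover = employees * productivity
base = [100, 20.0]      # base period
actual = [110, 23.0]    # current period
inf = chain_substitution(base, actual)
print(inf, sum(inf))    # per-factor influences and the total change
```

Note that the result depends on the substitution order, a known property of the method; the convention is to substitute quantitative factors before qualitative ones.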
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...
Proving Non-Deterministic Computations in Agda
Directory of Open Access Journals (Sweden)
Sergio Antoy
2017-01-01
Full Text Available We investigate proving properties of Curry programs using Agda. First, we address the functional correctness of Curry functions that, apart from some syntactic and semantic differences, are in the intersection of the two languages. Second, we use Agda to model non-deterministic functions with two distinct and competitive approaches incorporating the non-determinism. The first approach eliminates non-determinism by considering the set of all non-deterministic values produced by an application. The second approach encodes every non-deterministic choice that the application could perform. We consider our initial experiment a success. Although proving properties of programs is a notoriously difficult task, the functional logic paradigm does not seem to add any significant layer of difficulty or complexity to the task.
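The first approach, representing a non-deterministic value by the set of all values it can produce, can be modeled outside Agda as well; the sketch below uses Python rather than Agda or Curry, and the `choice`/`apply_nd` names are this sketch's own:

```python
from itertools import product

def choice(*alternatives):
    # Model of Curry's non-deterministic choice: the set of all values
    # the expression can evaluate to.
    return set(alternatives)

def apply_nd(f, *nd_args):
    # Lift a deterministic function to sets of possible argument values:
    # the result set collects f applied to every combination.
    return {f(*args) for args in product(*nd_args)}

coin = choice(0, 1)
double = apply_nd(lambda x: x + x, coin)        # one choice, shared: {0, 2}
add = apply_nd(choice(0, 1).__class__.__or__ and (lambda x, y: x + y),
               coin, coin)                       # two independent choices
print(double, add)
```

Applying `double` chooses the argument once and shares it, giving {0, 2}, while adding two independent `coin`s gives {0, 1, 2}; distinctions of this kind are exactly what the two Agda encodings in the paper have to capture faithfully.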
Directory of Open Access Journals (Sweden)
A.K. Bhunia
2013-04-01
Full Text Available This paper deals with a deterministic inventory model developed for deteriorating items having two separate storage facilities (owned and rented warehouses) due to the limited capacity of the existing storage (owned warehouse), with linear time-dependent demand (increasing) over a fixed finite time horizon. The model is formulated with infinite replenishment, and the successive replenishment cycle lengths are in arithmetic progression. Partially backlogged shortages are allowed. The stocks of the rented warehouse (RW) are transported to the owned warehouse (OW) in a continuous release pattern. For this purpose, the model is formulated as a constrained non-linear mixed integer programming problem. For solving the problem, an advanced genetic algorithm (GA) has been developed. This advanced GA is based on ranking selection, elitism, whole arithmetic crossover and non-uniform mutation dependent on the age of the population. Our objective is to determine the optimal replenishment number and the lot-sizes of the two warehouses (OW and RW) by maximizing the profit function. The model is illustrated with four numerical examples, and sensitivity analyses of the optimal solution are performed with respect to different parameters.
DEFF Research Database (Denmark)
Davidsen, Steffen; Löwe, Roland; Thrysøe, Cecilie
2017-01-01
Evaluation of pluvial flood risk is often based on computations using 1D/2D urban flood models. However, guidelines on choice of model complexity are missing, especially for one-dimensional (1D) network models. This study presents a new automatic approach for simplification of 1D hydraulic networ...
So, Rita; Teakles, Andrew; Baik, Jonathan; Vingarzan, Roxanne; Jones, Keith
2018-05-01
Visibility degradation, one of the most noticeable indicators of poor air quality, can occur despite relatively low levels of particulate matter when the risk to human health is low. The availability of timely and reliable visibility forecasts can provide a more comprehensive understanding of the anticipated air quality conditions to better inform local jurisdictions and the public. This paper describes the development of a visibility forecasting modeling framework, which leverages the existing air quality and meteorological forecasts from Canada's operational Regional Air Quality Deterministic Prediction System (RAQDPS) for the Lower Fraser Valley of British Columbia. A baseline model (GM-IMPROVE) was constructed using the revised IMPROVE algorithm based on unprocessed forecasts from the RAQDPS. Three additional prototypes (UMOS-HYB, GM-MLR, GM-RF) were also developed and assessed for forecast performance of up to 48 hr lead time during various air quality and meteorological conditions. Forecast performance was assessed by examining their ability to provide both numerical and categorical forecasts in the form of 1-hr total extinction and Visual Air Quality Ratings (VAQR), respectively. While GM-IMPROVE generally overestimated extinction more than twofold, it had skill in forecasting the relative species contribution to visibility impairment, including ammonium sulfate and ammonium nitrate. Both statistical prototypes, GM-MLR and GM-RF, performed well in forecasting 1-hr extinction during daylight hours, with correlation coefficients (R) ranging from 0.59 to 0.77. UMOS-HYB, a prototype based on postprocessed air quality forecasts without additional statistical modeling, provided reasonable forecasts during most daylight hours. In terms of categorical forecasts, the best prototype was approximately 75 to 87% correct, when forecasting for a condensed three-category VAQR. A case study, focusing on a poor visual air quality yet low Air Quality Health Index episode
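For orientation, an extinction estimate of this family combines species concentrations with dry extinction efficiencies and a hygroscopic growth factor f(RH). The sketch below follows the simpler original IMPROVE form with its commonly cited coefficients; the paper's GM-IMPROVE baseline uses the revised algorithm, which splits species into small and large size modes:

```python
# Hedged sketch of an IMPROVE-style extinction estimate. These are the
# commonly cited coefficients of the *original* algorithm; the paper's
# GM-IMPROVE baseline uses the revised form with split size modes.
# Concentrations in ug/m3, extinction in inverse megameters (Mm^-1).

def extinction(amm_sulfate, amm_nitrate, organics, ec, soil, coarse, f_rh):
    return (3.0 * f_rh * amm_sulfate +   # hygroscopic: grows with humidity
            3.0 * f_rh * amm_nitrate +
            4.0 * organics +
            10.0 * ec +                  # elemental carbon absorbs strongly
            1.0 * soil +
            0.6 * coarse +
            10.0)                        # Rayleigh scattering by clean air

print(extinction(2.0, 1.0, 3.0, 0.5, 1.0, 5.0, f_rh=2.5))  # 53.5 Mm^-1
```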
A Numerical Simulation for a Deterministic Compartmental ...
African Journals Online (AJOL)
In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions obtained using Eulers numerical method. Using hypothetical values for the parameters, a program was written in VISUAL BASIC programming language to generate series for the system of difference equations from the ...
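Euler's method for a system of difference equations, as used above, can be sketched generically; the compartments and rates below are hypothetical stand-ins, since the abstract does not give the model equations:

```python
# Generic forward-Euler integrator for a compartmental model. The
# two-compartment system and its rates are hypothetical stand-ins; the
# abstract does not reproduce the HIV/AIDS model equations.

def euler(f, y0, t0, t1, n):
    """Advance dy/dt = f(t, y) with n Euler steps from t0 to t1."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        dy = f(t, y)
        y = [yi + h * di for yi, di in zip(y, dy)]
        t += h
    return y

beta, mu = 0.3, 0.1              # hypothetical infection / removal rates
def model(t, y):
    s, i = y                     # susceptible and infected fractions
    return [-beta * s * i, beta * s * i - mu * i]

print(euler(model, [0.99, 0.01], 0.0, 50.0, 5000))
```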
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code redaction, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main strength of this approach stems from the fact that it is presently the standard for the majority of countries, but it has weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches in selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions: the looser the soil and the higher the liquefaction potential, the more suitable the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
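The Poissonian occurrence assumption underlying PSHA reduces, for a site with annual exceedance rate lam, to P = 1 - exp(-lam*t); for example, the common "10% in 50 years" design level corresponds to a 475-year return period:

```python
import math

# The Poisson occurrence model behind PSHA: probability of at least one
# exceedance of a given ground-motion level in t years, given an annual
# exceedance rate lam (the inverse of the return period).

def p_exceed(lam, t):
    return 1.0 - math.exp(-lam * t)

print(p_exceed(1 / 475, 50))  # ~0.10: the "10% in 50 years" design level
```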
Taheri, Shahyar
2015-01-01
Accurate and efficient tire models for deformable terrain operations are essential for performing vehicle simulations. Assessment of the forces and moments that occur at the tire-terrain interface, and the effect of the tire motion on properties of the terrain are crucial in understanding the performance of a vehicle. In order to model the dynamic behavior of the tire on different terrains, a lumped mass discretized tire model using Kelvin-Voigt elements is developed. To optimize the computat...
DEFF Research Database (Denmark)
Löwe, Roland; Davidsen, Steffen; Thrysøe, Cecilie
We present an algorithm for automated simplification of 1D pipe network models. The impact of the simplifications on the flooding simulated by coupled 1D-2D models is evaluated in an Australian case study. Significant reductions of the simulation time of the coupled model are achieved by reducing...... the 1D network model. The simplifications lead to an underestimation of flooded area because interaction points between network and surface are removed and because water is transported downstream faster. These effects can be mitigated by maintaining nodes in flood-prone areas in the simplification...... and by adjusting pipe roughness to increase transport times....
Mathematical modelling in solid mechanics
Sofonea, Mircea; Steigmann, David
2017-01-01
This book presents new research results in multidisciplinary fields of mathematical and numerical modelling in mechanics. The chapters treat the following topics: mathematical modelling in solid, fluid and contact mechanics; nonconvex variational analysis with emphasis on nonlinear solid and structural mechanics; numerical modelling of problems with non-smooth constitutive laws; approximation of variational and hemivariational inequalities; numerical analysis of discrete schemes; numerical methods and the corresponding algorithms; applications to mechanical engineering; numerical aspects of non-smooth mechanics, with emphasis on developing accurate and reliable computational tools; mechanics of fibre-reinforced materials; behaviour of elasto-plastic materials accounting for microstructural defects; definition of structural defects based on differential geometry concepts or on an atomistic basis; interaction between phase transformation and dislocations at the nano-scale; energetic arguments; bifurcation and post-buckling a...
Stochastic modeling of friction force and vibration analysis of a mechanical system using the model
International Nuclear Information System (INIS)
Kang, Won Seok; Choi, Chan Kyu; Yoo, Hong Hee
2015-01-01
The squeal noise generated by a disk brake or the chatter occurring in a machine tool primarily results from friction-induced vibration. Since friction-induced vibration is usually accompanied by abrasion and lifespan reduction of mechanical parts, it is necessary to develop a reliable analysis model by which friction-induced vibration phenomena can be accurately analyzed. The original Coulomb friction model, or the modified Coulomb friction model employed in most commercial programs, uses deterministic friction coefficients. However, observing friction phenomena between two contact surfaces, one finds that friction coefficients keep changing due to the unevenness of the contact surfaces, temperature, lubrication and humidity. Therefore, in this study, friction coefficients are modeled as random parameters that keep changing during the motion of a mechanical system undergoing friction force. The integrity of the proposed stochastic friction model was validated by comparing the analysis results obtained by the proposed model with experimental results.
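A minimal illustration of the idea (not the paper's model): a single-degree-of-freedom oscillator on a moving belt, with the kinetic friction coefficient redrawn from a distribution at each step instead of being held at a deterministic Coulomb value:

```python
import math
import random

# Illustrative sketch (not the paper's model): a 1-DOF mass-spring-damper
# on a moving belt. The kinetic friction coefficient is a random
# parameter, redrawn each step, instead of a deterministic Coulomb value.
# All parameter values below are hypothetical.

def simulate(mu_mean=0.3, mu_std=0.05, steps=20000, dt=1e-4,
             m=1.0, k=100.0, c=0.5, v_belt=0.5, N=9.81, seed=0):
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    xs = []
    for _ in range(steps):
        mu = max(0.0, rng.gauss(mu_mean, mu_std))    # random friction coeff.
        v_rel = v - v_belt
        if v_rel != 0.0:
            friction = -math.copysign(mu * N, v_rel)  # opposes relative slip
        else:
            friction = 0.0
        a = (-k * x - c * v + friction) / m
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

xs = simulate()
```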
International Nuclear Information System (INIS)
Lin, C.-H.; Ferng, Y.-M.; Pei, B.-S.
2009-01-01
Additional fire barriers for electrical cables are required for the nuclear power plants (NPPs) in Taiwan due to the separation requirements of Appendix R to 10 CFR Part 50. Risk-informed fire analysis (RIFA) may provide a viable method to resolve these fire barrier issues. However, it is necessary to perform fire scenario analyses so that RIFA can quantitatively determine the risk related to the fire barrier wrap. CFD fire models are therefore proposed in this paper to help RIFA in resolving these issues. Three typical fire scenarios are selected to assess the present CFD models. Compared with the experimental data and other models' simulations, the present calculated results show reasonable agreement, indicating that the present CFD fire models can provide the quantitative information for RIFA analyses to relax the cable wrap requirements for NPPs
Barrera, A.; Altava-Ortiz, V.; Llasat, M. C.; Barnolas, M.
2007-09-01
Between 11 and 13 October 2005 several flash floods were produced along the coast of Catalonia (NE Spain) due to a significant heavy rainfall event. Maximum rainfall reached values of up to 250 mm in 24 h, and the total amount recorded during the event in some places was close to 350 mm. Barcelona city was also in the affected area, where high rainfall intensities were registered, but just a few small floods occurred, thanks to the efficient urban drainage system of the city. Two forecasting methods have been applied in order to evaluate their predictive capability regarding extreme events: the deterministic MM5 model and a probabilistic model based on the analogous method. The MM5 simulation allows an accurate analysis of the main meteorological features at high spatial resolution (2 km), like the formation of some convergence lines over the region that partially explain the location of the maximum precipitation during the event. On the other hand, the analogous technique shows good agreement between the highest probability values and the actually affected areas, although a larger rainfall database would be needed to improve the results. The comparison between the observed precipitation and both QPF (quantitative precipitation forecast) methods shows that the analogous technique tends to underestimate the rainfall values and the MM5 simulation tends to overestimate them.
Modelling the fragmentation mechanisms
International Nuclear Information System (INIS)
Bougault, R.; Durand, D.; Gulminelli, F.
1998-01-01
We have investigated the role of high-amplitude collective motion in nuclear fragmentation by using semi-classical macroscopic as well as microscopic (BUU) simulations. These studies are motivated by the search for the instabilities responsible for nuclear fragmentation. Two cases were examined: the bubble formation following the collective expansion of the compressed nucleus in very central reactions and, in semi-central collisions, the fast fission of the two partners issued from a binary reaction in their mutual Coulomb field. In both cases the fragmentation channel is dominated by the interplay between the Coulomb and nuclear fields, and it is possible to obtain semi-quantitative predictions as functions of the interaction parameters. Transport equations of the BUU type predict, for central reactions, the formation of a high-density transient state. Of much interest is the subsequent de-excitation mechanism. It seems reasonable to conceive that the pressure stored in the compressional mode manifests itself as a collective expansion of the system. As the pressure is an increasing function of the available energy, one can conceive a variety of energy-dependent exit channels, from fragmentation due to the amplification of fluctuations inside the spinodal zone up to the complete vaporization of the highly excited system. If the pressure reached is sufficiently high, the reaction final state may preserve the memory of the entrance channel as a collective radial energy superimposed on the thermal disordered motion. Distributions of particles in configuration space for both central and semi-central reactions for the Pb+Au system are presented. The rupture time is estimated to be of the order of 300 fm/c and is strongly dependent on the initial temperature. The study of the dependence of the rupture time on the interaction parameters is under way
Directory of Open Access Journals (Sweden)
Asoke Kumar Bhunia
2014-06-01
Full Text Available In this paper, an attempt is made to develop two inventory models for deteriorating items with variable demand dependent on the selling price and frequency of advertisement of items. In the first model, shortages are not allowed, whereas in the second they are allowed and partially backlogged with a variable rate dependent on the duration of waiting time up to the arrival of the next lot. In both models, the deterioration rate follows a three-parameter Weibull distribution and the transportation cost is considered explicitly for replenishing the order quantity. This cost depends on the lot-size as well as on the distance from the source to the destination. The corresponding models have been formulated and solved. Two numerical examples are considered to illustrate the results, and their significant features are discussed. Finally, based on these examples, the effects of different parameters on the initial stock level, the shortage level (in the case of the second model only) and the cycle length, along with the optimal profit, have been studied by sensitivity analyses, taking one parameter at a time and keeping the other parameters the same.
Mechanics Model of Plug Welding
Zuo, Q. K.; Nunes, A. C., Jr.
2015-01-01
An analytical model has been developed for the mechanics of friction plug welding. The model accounts for coupling of plastic deformation (material flow) and thermal response (plastic heating). The model predictions of the torque, energy, and pull force on the plug were compared to the data of a recent experiment, and the agreements between predictions and data are encouraging.
Directory of Open Access Journals (Sweden)
S. Mariani
2005-01-01
Full Text Available Within the scope of the European project Hydroptimet (INTERREG IIIB-MEDOCC programme), a limited-area model (LAM) intercomparison is performed for intense events that produced much damage to people and territory. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors needed to give a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event also known as the 'Montserrat-2000' event. The study is performed using forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For the statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using contingency table elements. Moreover, the standard 'eyeball' analysis of forecast and observed precipitation fields has been supported by the use of a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method allows one to quantify the spatial shift of the forecast error and to identify the error sources that affected each model's forecasts. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work, including verification using a wider observational data set, is needed to support this statement.
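The non-parametric skill scores mentioned are computed from a 2x2 contingency table of hits (a), false alarms (b), misses (c) and correct negatives (d); a sketch of four common ones:

```python
# Common non-parametric skill scores from a 2x2 contingency table:
# a = hits, b = false alarms, c = misses, d = correct negatives.
# (d enters more elaborate scores such as the Heidke skill score,
# omitted here for brevity.)

def skill_scores(a, b, c, d):
    pod = a / (a + c)          # probability of detection
    far = b / (a + b)          # false alarm ratio
    csi = a / (a + b + c)      # critical success index (threat score)
    bias = (a + b) / (a + c)   # frequency bias
    return {"POD": pod, "FAR": far, "CSI": csi, "BIAS": bias}

# Hypothetical counts for one rainfall threshold:
print(skill_scores(a=42, b=14, c=8, d=136))
```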
Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.
Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker
2009-02-21
Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They have been found to regulate the viscoelastic properties and motility of cancer cells. Due to the unique biochemical properties of keratin polymers, knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information, since it can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. The instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters governs the frequency with which the different mechanisms of network formation are triggered. The locations of the network formation events are assigned depending on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism, a specific feature of intermediate filament networks, appears to be a major and fast regulator of cell mechanics.
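The structure of such a piecewise-deterministic Markov process can be sketched in toy form (not the paper's keratin model): deterministic evolution of the precursor pool between events, exponentially distributed event times, and a probability distribution selecting which formation mechanism fires:

```python
import math
import random

# Toy piecewise-deterministic Markov process (not the paper's keratin
# model): a soluble precursor pool evolves deterministically between
# events; events occur at exponentially distributed times and trigger
# one of two "network formation" mechanisms chosen by probability.
# All rates are hypothetical.

def simulate_pdmp(t_end=10.0, rate=2.0, p_elong=0.7, seed=1):
    rng = random.Random(seed)
    t, pool = 0.0, 1.0
    filaments, branches = 0, 0
    while True:
        tau = rng.expovariate(rate)        # waiting time to next event
        if t + tau > t_end:
            break
        pool *= math.exp(0.3 * tau)        # deterministic drift: dP/dt = 0.3 P
        t += tau
        if pool >= 0.1:                    # an event needs enough precursors
            pool -= 0.1
            if rng.random() < p_elong:     # mechanism selection
                filaments += 1             # filament elongation
            else:
                branches += 1              # branching
    return filaments, branches

print(simulate_pdmp())
```

In the paper's model the deterministic part is a spatial diffusion equation and events also carry locations; the skeleton of drift, jump times, and mechanism selection is the same.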
International Nuclear Information System (INIS)
Cioflan, C.O.; Apostol, B.F.; Moldoveanu, C.L.; Marmureanu, G.; Panza, G.F.
2002-03-01
The mapping of the seismic ground motion in Bucharest due to the strong Vrancea earthquakes is carried out using a complex hybrid waveform modeling method which combines the modal summation technique, valid for laterally homogeneous anelastic media, with the finite-differences technique, and optimizes the advantages of both methods. For recent earthquakes, it is possible to validate the modeling by comparing the synthetic seismograms with the records. As controlling records we consider the accelerograms of the Magurele station, low-pass filtered with a cut-off frequency of 1.0 Hz, for the last 3 major strong (Mw > 6) Vrancea earthquakes. Using the hybrid method with a double-couple seismic source approximation, scaled for the source dimensions, and relatively simple regional (bedrock) and local structure models, we succeeded in reproducing the recorded ground motion in Bucharest at a level satisfactory for seismic engineering. Extending the modeling to the whole territory of the Bucharest area, we construct a new seismic microzonation map in which five different zones are identified by their characteristic response spectra. (author)
Ghil, M.; Pierini, S.; Chekroun, M.
2017-12-01
A low-order quasigeostrophic model [1] captures several key features of intrinsic low-frequency variability of the oceans' wind-driven circulation. This double-gyre model is used here as a prototype of an unstable and nonlinear dynamical system with time-dependent forcing to explore basic features of climate change in the presence of natural variability. The studies rely on the theoretical framework of nonautonomous dynamical systems and of their pullback attractors (PBAs), namely the time-dependent invariant sets that attract all trajectories initialized in the remote past [2,3]. Ensemble simulations help us explore these PBAs. The chaotic PBAs of the periodically forced model [4] are found to be cyclo-stationary and cyclo-ergodic. Two parameters are then introduced to analyze the topological structure of the PBAs as a function of the forcing period; their joint use allows one to identify four distinct forms of sensitivity to initial state that correspond to distinct types of system behavior. The model's response to periodic forcing turns out to be, in most cases, very sensitive to the initial state. The system is then forced by a synthetic aperiodic forcing [5]. The existence of a global PBA is rigorously demonstrated. We then assess the convergence of trajectories to this PBA by computing the probability density function (PDF) of trajectory localization in the model's phase space. A sensitivity analysis with respect to forcing amplitude shows that the global PBA experiences large modifications if the underlying autonomous system is dominated by small-amplitude limit cycles, while the changes are less dramatic in a regime characterized by large-amplitude relaxation oscillations. The dependence of the attracting sets on the choice of the ensemble of initial states is analyzed in detail. The extension to random dynamical systems is described and connected to the model's autonomous and periodically forced behavior. [1] Pierini, S., 2011. J. Phys. Oceanogr., 41, 1585
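The pullback-attractor idea can be illustrated on a scalar toy system (not the quasigeostrophic model): for dx/dt = -x + sin t, trajectories initialized ever further in the remote past converge, at any fixed time, to the single attracting solution a(t) = (sin t - cos t)/2:

```python
import math

# Toy illustration of a pullback attractor (not the paper's
# quasigeostrophic model): for dx/dt = -x + sin(t), all trajectories
# started in the remote past converge, at a fixed time, to the
# attracting solution a(t) = (sin t - cos t)/2.

def integrate(x0, t0, t1, dt=1e-3):
    """Forward-Euler integration of dx/dt = -x + sin(t)."""
    x, t = x0, t0
    while t < t1:
        x += dt * (-x + math.sin(t))
        t += dt
    return x

# Pullback limit: push the initialization time further into the past
# while evaluating at the same fixed time t = 0.
vals = [integrate(5.0, t0, 0.0) for t0 in (-5.0, -10.0, -20.0)]
print(vals)  # converging toward a(0) = -0.5
```

For the forced double-gyre model the PBA is an invariant set rather than a single trajectory, but the same "initialize in the remote past, evaluate now" construction applies.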
Directory of Open Access Journals (Sweden)
A. K. Bhunia
2011-01-01
Full Text Available This paper deals with an inventory model which considers the impact of marketing strategies, such as pricing and advertising, as well as the displayed inventory level, on the demand rate of the system. In addition, the demand rate during the stock-out period differs from that during the stock-in period by a function of the waiting time up to the beginning of the next cycle. Shortages are allowed and partially backlogged. Here, the deterioration rate is assumed to follow the Weibull distribution. Considering all these factors with others, different scenarios of the system are investigated. To obtain the solutions of these cases and to illustrate the model, an example is considered. Finally, to study the effects of changes in the different parameters of the system, sensitivity analyses have been carried out with respect to these parameters.
DEFF Research Database (Denmark)
Nielsen, Steen
2000-01-01
This paper expands the traditional product costing technique by including a stochastic form in a complex production process for product costing. The stochastic phenomenon in flexible manufacturing technologies is seen as an important phenomenon that companies try to decrease or eliminate. DFM has...... been used for evaluating the appropriateness of the firm's production capability. In this paper a simulation model is developed to analyze the relevant cost behaviour with respect to DFM and to develop a more streamlined process in the layout of the manufacturing process....
Directory of Open Access Journals (Sweden)
Youness El Ansari
2017-01-01
Full Text Available We investigate the various conditions that control the extinction and stability of a nonlinear mathematical spread model with stochastic perturbations. This model describes the spread of viruses in an infected computer network powered by a system of antivirus software. The system is analyzed using the stability theory of stochastic differential equations and computer simulations. First, we study the global stability of the virus-free equilibrium state and the virus-epidemic equilibrium state. Furthermore, we use the Itô formula and other theorems on stochastic differential equations to discuss the extinction and the stationary distribution of our system. The analysis gives a sufficient condition for the infection to become extinct (i.e., for the number of viruses to tend exponentially to zero). The ergodicity of the solution and the stationary distribution can be obtained if the basic reproduction number Rp is bigger than 1 and the intensities of the stochastic fluctuations are small enough. Numerical simulations are carried out to illustrate the theoretical results.
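A stochastic model of this type is typically simulated with the Euler-Maruyama scheme; the SIS-type drift and noise below are illustrative stand-ins, not the paper's system:

```python
import math
import random

# Euler-Maruyama sketch for an SIS-type stochastic virus-spread model
# with multiplicative noise (illustrative rates, not the paper's system):
#   dI = (beta*(N - I)*I/N - gamma*I) dt + sigma*I dW

def euler_maruyama(beta=0.5, gamma=0.2, sigma=0.1, N=1000,
                   I0=10.0, t_end=50.0, steps=50000, seed=3):
    rng = random.Random(seed)
    dt = t_end / steps
    I = float(I0)
    for _ in range(steps):
        drift = beta * (N - I) * I / N - gamma * I
        dW = rng.gauss(0.0, math.sqrt(dt))        # Wiener increment
        I = max(I + drift * dt + sigma * I * dW, 0.0)
    return I

print(euler_maruyama())  # fluctuates around N*(1 - gamma/beta) = 600
```

With beta/gamma > 1 and small sigma the infection persists, consistent with the abstract's ergodicity condition; large sigma pushes the process toward extinction.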
Directory of Open Access Journals (Sweden)
Júlio César Bastos de Figueiredo
2013-06-01
Full Text Available The purpose of this study was to present a simplified supply chain model in which the relations between the number of competitors, the delays in production adjustments, and the intensity of each company's response lead, intrinsically, to the emergence of chaotic oscillations in supply and demand. In the proposed model, developed using the System Dynamics methodology, the irregular fluctuations in demand and prices are closely related to the structure of the chain itself, that is, to its rules, policies and productive capacities. Considerations on the importance of the study of chaos applied to economics are made, and techniques for the characterization of chaotic behavior in economic time series are discussed.
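One standard technique for characterizing chaotic behavior in a time series of this kind is estimating the largest Lyapunov exponent; the sketch below applies it to the logistic map rather than the article's supply-chain model:

```python
import math

# Estimating the largest Lyapunov exponent of a 1-D map, a standard way
# to characterize chaotic behavior; illustrated on the logistic map
# x -> r*x*(1-x), not the article's supply-chain model.

def lyapunov_logistic(r, n=100_000, x0=0.4):
    x, total = x0, 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
    return total / n

print(lyapunov_logistic(4.0))  # positive (~ln 2): chaotic regime
```

A positive exponent signals exponential divergence of nearby trajectories, i.e. chaos; a negative value indicates a stable fixed point or cycle.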
A statistical mechanical model of economics
Lubbers, Nicholas Edward Williams
Statistical mechanics pursues low-dimensional descriptions of systems with a very large number of degrees of freedom. I explore this theme in two contexts. The main body of this dissertation explores and extends the Yard Sale Model (YSM) of economic transactions using a combination of simulations and theory. The YSM is a simple interacting model for wealth distributions which has the potential to explain the empirical observation of Pareto distributions of wealth. I develop the link between wealth condensation and the breakdown of ergodicity due to nonlinear diffusion effects which are analogous to the geometric random walk. Using this, I develop a deterministic effective theory of wealth transfer in the YSM that is useful for explaining many quantitative results. I introduce various forms of growth to the model, paying attention to the effect of growth on wealth condensation, inequality, and ergodicity. Arithmetic growth is found to partially break condensation, and geometric growth is found to completely break condensation. Further generalizations of geometric growth with growth inequality show that the system is divided into two phases by a tipping point in the inequality parameter. The tipping point marks the line between systems which are ergodic and systems which exhibit wealth condensation. I explore generalizations of the YSM transaction scheme to arbitrary betting functions to develop notions of universality in YSM-like models. I find that wealth condensation is universal to a large class of models which can be divided into two phases. The first exhibits slow, power-law condensation dynamics, and the second exhibits fast, finite-time condensation dynamics. I find that the YSM, which exhibits exponential dynamics, is the critical, self-similar model which marks the dividing line between the two phases. The final chapter develops a low-dimensional approach to materials microstructure quantification. Modern materials design harnesses complex
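The basic YSM transaction scheme can be sketched as follows (a generic version, with the wager set to a fixed fraction of the poorer agent's wealth): total wealth is conserved exactly, while repeated fair bets drive the condensation the dissertation analyzes:

```python
import random

# Minimal Yard Sale Model sketch: in each transaction a random pair of
# agents wagers a fixed fraction of the poorer agent's wealth on a fair
# coin flip. Each transaction conserves total wealth, yet inequality
# grows over time (wealth condensation).

def yard_sale(n=200, steps=100_000, frac=0.1, seed=7):
    rng = random.Random(seed)
    wealth = [1.0] * n
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        stake = frac * min(wealth[i], wealth[j])   # poorer agent sets the wager
        if rng.random() < 0.5:
            wealth[i] += stake
            wealth[j] -= stake
        else:
            wealth[i] -= stake
            wealth[j] += stake
    return wealth

w = yard_sale()
print(sum(w), max(w), min(w))
```

The dissertation's generalizations replace the fixed fraction by arbitrary betting functions and add growth terms; this fair-coin, fixed-fraction version is the baseline case.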
Mechanical model for ductility loss
International Nuclear Information System (INIS)
Hu, W.L.
1980-01-01
A mechanical model was constructed to probe the mechanism of ductility loss. A fracture criterion based on critical localized deformation was adopted. Two microstructure variables were considered in the model: namely, the strength ratio of the grain-boundary-affected area to the matrix, Ω, and the linear fraction, x, of the grain-boundary-affected area. A parametric study was carried out. The study shows that the ductility is very sensitive to these microstructure parameters. The functional dependence of ductility on temperature as well as strain rate, suggested by the model, is demonstrated to be consistent with observation
Deterministic methods in radiation transport
International Nuclear Information System (INIS)
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made, and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community
Larkin, Steven P.; Levander, Alan; Okaya, David; Goff, John A.
1996-12-01
As a high resolution addition to the 1992 Pacific to Arizona Crustal Experiment (PACE), a 45-km-long deep crustal seismic reflection profile was acquired across the Chocolate Mountains in southeastern California to illuminate crustal structure in the transition between the Salton Trough and the Basin and Range province. The complex seismic data are analyzed for both large-scale (deterministic) and fine-scale (stochastic) crustal features. A low-fold near-offset common-midpoint (CMP) stacked section shows the northeastward lateral extent of a high-velocity lower crustal body which is centered beneath the Salton Trough. Off-end shots record a high-amplitude diffraction from the point where the high velocity lower crust pinches out at the Moho. Above the high-velocity lower crust, moderate-amplitude reflections occur at midcrustal levels. These reflections display the coherency and frequency characteristics of reflections backscattered from a heterogeneous velocity field, which we model as horizontal intrusions with a von Kármán (fractal) distribution. The effects of upper crustal scattering are included by combining the mapped surface geology and laboratory measurements of exposed rocks within the Chocolate Mountains to reproduce the upper crustal velocity heterogeneity in our crustal velocity model. Viscoelastic finite difference simulations indicate that the volume of mafic material within the reflective zone necessary to produce the observed backscatter is about 5%. The presence of wavelength-scale heterogeneity within the near-surface, upper, and middle crust also produces a 0.5-s-thick zone of discontinuous reflections from a crust-mantle interface which is actually a first-order discontinuity.
Deterministic dynamics of plasma focus discharges
International Nuclear Information System (INIS)
Gratton, J.; Alabraba, M.A.; Warmate, A.G.; Giudice, G.
1992-04-01
The performance (neutron yield, X-ray production, etc.) of plasma focus discharges fluctuates strongly in series performed under fixed experimental conditions. Previous work suggests that these fluctuations are due to a deterministic "internal" dynamics involving degrees of freedom not controlled by the operator, possibly related to the adsorption and desorption of impurities from the electrodes. According to these dynamics, the yield of a discharge depends on the outcome of the previous ones. We study 8 series of discharges in three different facilities, with various electrode materials and operating conditions. More evidence of a deterministic internal dynamics is found. The fluctuation pattern depends on the electrode materials and other characteristics of the experiment. A heuristic mathematical model that describes the adsorption and desorption of impurities from the electrodes and their consequences on the yield is presented. The model predicts steady yield or periodic and chaotic fluctuations, depending on parameters related to the experimental conditions. (author). 27 refs, 7 figs, 4 tabs
Local deterministic theory surviving the violation of Bell's inequalities
International Nuclear Information System (INIS)
Cormier-Delanoue, C.
1984-01-01
Bell's theorem, which asserts that no deterministic theory with hidden variables can give the same predictions as quantum theory, is questioned. Such a deterministic theory is presented and carefully applied to real experiments performed on pairs of correlated photons, derived from the EPR thought experiment. The ensuing predictions violate Bell's inequalities just as quantum mechanics does, and it is further shown that this discrepancy originates in the very nature of radiation. Complete locality is therefore restored, while separability remains more limited [fr]
Computational modelling in fluid mechanics
International Nuclear Information System (INIS)
Hauguel, A.
1985-01-01
Modelling most environmental or industrial flow problems leads to very similar types of equations. The considerable increase in computing capacity over the last ten years has consequently allowed numerical models of growing complexity to be processed. The varied group of computer codes presented now complements experimental facilities in fluid mechanics studies. Several codes applied in the nuclear field (reactors, cooling towers, exchangers, plumes...) are presented among others [fr]
Aerodynamic and Mechanical System Modelling
DEFF Research Database (Denmark)
Jørgensen, Martin Felix
This thesis deals with mechanical multibody-systems applied to the drivetrain of a 500 kW wind turbine. Particular focus has been on gearbox modelling of wind turbines. The main part of the present project involved programming multibody systems to investigate the connection between forces, moments...
Fracture mechanics model of fragmentation
International Nuclear Information System (INIS)
Glenn, L.A.; Gommerstadt, B.Y.; Chudnovsky, A.
1986-01-01
A model of the fragmentation process is developed, based on the theory of linear elastic fracture mechanics, which predicts the average fragment size as a function of strain rate and material properties. This approach permits a unification of previous results, yielding Griffith's solution in the low-strain-rate limit and Grady's solution at high strain rates
Directory of Open Access Journals (Sweden)
Sarah Hamylton
2014-01-01
A geomorphic assessment of reef system calcification is conducted for past (3200 Ka to present), present and future (2010-2100) time periods. Reef platform sediment production is estimated at 569 m3 yr-1 using rate laws that express gross community carbonate production as a function of seawater aragonite saturation, community composition and rugosity and incorporating estimates of carbonate removal from the reef system. Key carbonate producers including hard coral, crustose coralline algae and Halimeda are mapped accurately (mean R2 = 0.81). Community net production estimates correspond closely to independent census-based estimates made in-situ (R2 = 0.86). Reef-scale outputs are compared with historic rates of production generated from (i) radiocarbon evidence of island deposition initiation around 3200 years ago, and (ii) island volume calculated from a high resolution island digital elevation model. Contemporary carbonate production rates appear to be remarkably similar to historical values of 573 m3 yr-1. Anticipated future seawater chemistry parameters associated with an RCP8.5 emissions scenario are employed to model rates of net community calcification for the period 2000-2100 on the basis of an inorganic aragonite precipitation law, under the assumption of constant benthic community character. Simulations indicate that carbonate production will decrease linearly to a level of 118 m3 yr-1 by 2100 and that by 2150 aragonite saturation levels may no longer support the positive budgetary status necessary to sustain island accretion. Novel aspects of this assessment include the development of rate law parameters to realistically represent the variable composition of coral reef benthic carbonate producers, incorporation of three dimensional rugosity of the entire reef platform and the coupling of model outputs with both historical radiocarbon dating evidence and forward hydrochemical projections to conduct an assessment of island evolution through time
Advances in stochastic and deterministic global optimization
Zhigljavsky, Anatoly; Žilinskas, Julius
2016-01-01
Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...
Understanding deterministic diffusion by correlated random walks
International Nuclear Information System (INIS)
Klages, R.; Korabel, N.
2002-01-01
Low-dimensional periodic arrays of scatterers with a moving point particle are ideal models for studying deterministic diffusion. For such systems the diffusion coefficient is typically an irregular function under variation of a control parameter. Here we propose a systematic scheme of how to approximate deterministic diffusion coefficients of this kind in terms of correlated random walks. We apply this approach to two simple examples which are a one-dimensional map on the line and the periodic Lorentz gas. Starting from suitable Green-Kubo formulae we evaluate hierarchies of approximations for their parameter-dependent diffusion coefficients. These approximations converge exactly yielding a straightforward interpretation of the structure of these irregular diffusion coefficients in terms of dynamical correlations. (author)
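A minimal instance of deterministic diffusion of this kind is a piecewise-linear map of slope a on each unit cell, lifted periodically to the real line. The ensemble estimate below plays the role of the exact, parameter-dependent diffusion coefficient that the correlated-random-walk hierarchy approximates; the slope and ensemble sizes are illustrative choices, not the paper's.

```python
import math
import random

def lifted_map(x, a=3.0):
    """Piecewise-linear map of slope a on each unit cell, lifted to the real
    line by M(x + 1) = M(x) + 1; for a > 2 particles can jump between cells."""
    cell = math.floor(x)
    y = x - cell
    fy = a * y if y < 0.5 else a * y + 1.0 - a
    return cell + fy

def estimate_diffusion_coefficient(a=3.0, n_particles=2000, n_steps=300, seed=7):
    """Ensemble estimate D ~ <(x_n - x_0)^2> / (2 n)."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_particles):
        x0 = rng.random()
        x = x0
        for _ in range(n_steps):
            x = lifted_map(x, a)
        msd += (x - x0) ** 2
    return msd / n_particles / (2.0 * n_steps)
```

The lowest-order correlated-random-walk approximation replaces this simulation by the one-step jump probabilities; the dynamical correlations the paper organizes via Green-Kubo formulae supply the corrections.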
ZERODUR: deterministic approach for strength design
Hartmann, Peter
2012-12-01
There is increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, i.e. a minimum breakage stress, which makes it possible to remove the statistical uncertainty by introducing a deterministic method for calculating design strength. Considerations from fracture mechanics theory, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, allow fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress, or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull approach.
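The deterministic element of this approach is the threshold stress of the three-parameter Weibull distribution: below it, the failure probability vanishes. A minimal sketch (the parameter values are placeholders, not measured ZERODUR data):

```python
import math

def failure_probability(stress, s_th, s0, m):
    """Three-parameter Weibull: P_f = 1 - exp(-((s - s_th)/s0)^m) for s > s_th,
    and exactly zero at or below the threshold stress s_th."""
    if stress <= s_th:
        return 0.0
    return 1.0 - math.exp(-((stress - s_th) / s0) ** m)

def design_stress(p_f, s_th, s0, m):
    """Invert the CDF: the stress allowable at a target failure probability."""
    return s_th + s0 * (-math.log(1.0 - p_f)) ** (1.0 / m)
```

As p_f → 0 the design stress approaches the threshold s_th rather than zero, which is the basis of the deterministic design method described above; a two-parameter fit (s_th = 0) would instead drive the allowable stress toward zero.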
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
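For orientation, here is the analysis step of the standard stochastic (perturbed-observation) EnKF for a scalar state, the object whose mean-field limit the DMFEnKF approximates with a PDE solver and quadrature instead of samples. The observation operator and noise level are illustrative.

```python
import random

def enkf_analysis(ensemble, y_obs, obs_std, h=lambda x: x, seed=0):
    """One perturbed-observation EnKF analysis step for a scalar state:
    each member is nudged by the sample Kalman gain toward its own
    perturbed copy of the observation."""
    rng = random.Random(seed)
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    x_bar = sum(ensemble) / n
    h_bar = sum(hx) / n
    c_xy = sum((x - x_bar) * (v - h_bar) for x, v in zip(ensemble, hx)) / (n - 1)
    c_yy = sum((v - h_bar) ** 2 for v in hx) / (n - 1)
    gain = c_xy / (c_yy + obs_std ** 2)
    return [x + gain * (y_obs + rng.gauss(0.0, obs_std) - v)
            for x, v in zip(ensemble, hx)]
```

The sampling noise of this update is exactly what the deterministic density-based approximation removes, at the price of solving a PDE for the filtering density.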
Deterministic indexing for packed strings
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye
2017-01-01
Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ...
Deterministic Echo State Networks Based Stock Price Forecasting
Directory of Open Access Journals (Sweden)
Jingpei Dan
2014-01-01
Echo state networks (ESNs), as efficient and powerful computational models for approximating nonlinear dynamical systems, have been successfully applied in financial time series forecasting. Reservoir construction in standard ESNs relies on trial and error in real applications, owing to a series of randomized model-building stages. A novel form of ESN with a deterministically constructed reservoir is competitive with the standard ESN, offering minimal complexity and the possibility of optimizing ESN specifications. In this paper, the forecasting performance of deterministic ESNs is investigated in stock price prediction applications. Experimental results on two benchmark datasets (Shanghai Composite Index and S&P 500) demonstrate that deterministic ESNs outperform the standard ESN in both accuracy and efficiency, indicating the promise of deterministic ESNs for financial prediction.
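One well-known deterministic reservoir construction from the ESN literature (often attributed to Rodan and Tino, and not necessarily the one used in this paper) is the "simple cycle" reservoir: the recurrent weights form a single ring of equal weights, and the input weights have fixed magnitude with a deterministic sign pattern. The sketch below uses an alternating sign pattern for illustration and omits readout training.

```python
import math

def run_cycle_reservoir(inputs, n=50, r=0.5, v=0.1):
    """Drive a deterministically constructed 'simple cycle' reservoir.

    Recurrent weights: a single ring of equal weights r.
    Input weights: magnitude v with a fixed (here alternating) sign pattern.
    No randomness anywhere, so the state sequence is exactly reproducible."""
    state = [0.0] * n
    history = []
    for u in inputs:
        state = [math.tanh(r * state[(i - 1) % n]
                           + v * (1.0 if i % 2 == 0 else -1.0) * u)
                 for i in range(n)]
        history.append(state)
    return history
```

A linear readout (e.g. ridge regression) trained on `history` against the next price would complete the forecaster; the point of the deterministic construction is that no trial-and-error over random reservoirs is needed.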
Deterministic quantitative risk assessment development
Energy Technology Data Exchange (ETDEWEB)
Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)
2009-07-01
Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high-risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point-value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)
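The calibration idea, mapping the ordinal likelihood scores of a semi-quantitative program onto quantitative failure frequencies anchored to an observed peer-system failure rate, can be sketched generically. This is an illustration, not PII's actual mapping; the log-linear scale base is an assumption.

```python
def calibrate_failure_rates(scores, peer_system_rate, base=10.0):
    """Assign each pipeline segment a failure frequency (failures/yr) so that
    relative likelihoods follow a log-linear scale in the ordinal score and
    the total matches the observed peer-system failure rate."""
    weights = [base ** s for s in scores]
    total = sum(weights)
    return [peer_system_rate * w / total for w in weights]
```

Point-value probabilities of this kind can then be combined with quantitative or category-based consequence estimates to produce the deterministic QRA described above.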
Shane, D D; Larson, R L; Sanderson, M W; Miesner, M; White, B J
2017-10-01
Some cattle production experts believe that cow-calf producers should breed replacement heifers (nulliparous cows) before cows (primiparous and multiparous cows), sometimes referred to as providing a heifer lead time (tHL). Our objective was to model the effects different durations of tHL may have on measures of herd productivity, including the percent of the herd cycling before the end of the first 21 d of the breeding season (%C21), the percent of the herd pregnant at pregnancy diagnosis (%PPD), the distribution of pregnancy by 21-d breeding intervals, the kilograms of calf weaned per cow exposed (KPC), and the replacement percentage (%RH), using a deterministic, dynamic systems model of cow-calf production over a 10-yr horizon. We also wished to examine differences in the effect of tHL related to the primiparous duration of postpartum anestrus (dPPA). The study model examined 6 different dPPA for primiparous cows (60, 70, 80, 90, 100, or 110 d). The multiparous cow duration of postpartum anestrus was set to 60 d. The breeding season length for nulliparous cows was 63 d, as was the breeding season length for primiparous and multiparous cows. Nulliparous cows were modeled with a tHL of 0, 7, 14, 21, 28, 35, or 42 d. Results are reported for the final breeding season of the 10-yr horizon. Increasing tHL resulted in a greater %C21 for the herd and for primiparous cows. Length of tHL had minimal impact on the %PPD unless the dPPA was 80 d or greater. For a dPPA of 110 d, a 0 d tHL resulted in the herd having 88.1 %PPD. When tHL was 21 d, the %PPD increased to 93.0%. The KPC was 161.2 kg when the dPPA was 110 d and tHL was 0 d and improved to 183.2 kg when tHL was increased to 42 d. The %RH did not vary much unless the dPPA was 90 d or greater, but increasing tHL resulted in decreased %RH. Based on the model results, increasing tHL improves the production outcomes included in the analysis, but herds with dPPA of 90 d or greater had the greatest degree of improvement
Uncertainty propagation through dynamic models of assemblies of mechanical structures
International Nuclear Information System (INIS)
Daouk, Sami
2016-01-01
When studying the behaviour of mechanical systems, mathematical models and structural parameters are usually considered deterministic. Experience shows, however, that these elements are uncertain in most cases, due to natural variability or lack of knowledge. Therefore, quantifying the quality and reliability of the numerical model of an industrial assembly remains a major question in low-frequency dynamics. The purpose of this thesis is to improve the vibratory design of bolted assemblies by setting up a dynamic connector model that takes account of different types and sources of uncertainty in stiffness parameters, in a way that is simple, efficient and exploitable in an industrial context. This work has been carried out in the framework of the SICODYN project, led by EDF R and D, which aims to characterise and quantify, numerically and experimentally, the uncertainties in the dynamic behaviour of bolted industrial assemblies. Comparative studies of several numerical methods of uncertainty propagation demonstrate the advantage of using the Lack-Of-Knowledge theory. An experimental characterisation of uncertainties in bolted structures is performed on a dynamic test rig and on an industrial assembly. The propagation of many small and large uncertainties through different dynamic models of mechanical assemblies leads to an assessment of the efficiency of the Lack-Of-Knowledge theory and its applicability in an industrial environment. (author)
Multiscale modelling of DNA mechanics
International Nuclear Information System (INIS)
Dršata, Tomáš; Lankaš, Filip
2015-01-01
Mechanical properties of DNA are important not only in a wide range of biological processes but also in the emerging field of DNA nanotechnology. We review some of the recent developments in modeling these properties, emphasizing the multiscale nature of the problem. Modern atomic resolution, explicit solvent molecular dynamics simulations have contributed to our understanding of DNA fine structure and conformational polymorphism. These simulations may serve as data sources to parameterize rigid base models which themselves have undergone major development. A consistent buildup of larger entities involving multiple rigid bases enables us to describe DNA at more global scales. Free energy methods to impose large strains on DNA, as well as bead models and other approaches, are also briefly discussed. (topical review)
Sorption mechanisms and sorption models
International Nuclear Information System (INIS)
Fedoroff, M.; Lefevre, G.; Duc, M.; Neskovic, C.; Milonjic, S.
2004-01-01
Sorption at solid-liquid interfaces plays a major role in many phenomena and technologies: chemical separations, catalysis, biological processes, and the transport of toxic and radioactive species in surface and underground waters. The long-term safety of radioactive waste repositories is based on artificial and natural barriers, intended to sorb radionuclides after the moment when the storage matrices and containers have corroded. Predictions of the efficiency of sorption for more than 10⁶ years have to be made in order to demonstrate the safety of such repositories, a goal never before encountered in the history of science and technology. For all these purposes, and especially for long-term prediction, the acquisition of sorption data constitutes only a first step of the studies. Modeling based on a very good knowledge of sorption mechanisms is needed. In this review, we examine the main approaches and models used to quantify sorption processes, including results taken from the literature and from our own studies. We compare sorption models and examine their adequacy with respect to sorption mechanisms. The cited references are only a few examples of the numerous articles published in this field. (orig.)
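Among the sorption models compared in work of this kind, the simplest are empirical isotherms. A minimal sketch of the Langmuir and Freundlich forms (parameter values in the tests are placeholders):

```python
def langmuir(c, q_max, k):
    """Langmuir isotherm: monolayer sorption on identical sites.
    q = q_max * k * c / (1 + k * c); saturates at q_max for large c."""
    return q_max * k * c / (1.0 + k * c)

def freundlich(c, k_f, n):
    """Freundlich isotherm: empirical power law, q = k_f * c^(1/n),
    often used for heterogeneous surfaces."""
    return k_f * c ** (1.0 / n)
```

Mechanistic surface-complexation models go beyond such isotherms by representing the sorption reactions and surface speciation explicitly, which is the mechanism-based modelling the review argues is needed for long-term prediction.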
Deterministic extraction from weak random sources
Gabizon, Ariel
2011-01-01
In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.
Deterministic hydrodynamics: Taking blood apart
Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.
2006-10-01
We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. We use the deterministic array technology we developed, which separates white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. cells | plasma | separation | microfabrication
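The separation principle is characterized by a critical hydrodynamic diameter: particles larger than it are displaced along the array axis, smaller ones follow the flow. A frequently quoted empirical fit from the later deterministic-lateral-displacement literature (attributed to Davis; treat the constants as approximate, not values from this paper) is D_c ≈ 1.4 g ε^0.48, with g the post gap and ε the row-shift fraction:

```python
def critical_diameter(gap, row_shift_fraction):
    """Empirical critical diameter for a deterministic lateral displacement
    array: D_c ~ 1.4 * g * eps**0.48 (D_c in the same units as the gap g)."""
    return 1.4 * gap * row_shift_fraction ** 0.48
```

Choosing gap and shift so that D_c falls between the plasma proteins and the smallest cells is what allows undiluted plasma to be skimmed off.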
ICRP (1991) and deterministic effects
International Nuclear Information System (INIS)
Mole, R.H.
1992-01-01
A critical review of ICRP Publication 60 (1991) shows that considerable revisions are needed in both language and thinking about deterministic effects (DE). ICRP (1991) makes a welcome and clear distinction between change, caused by irradiation; damage, some degree of deleterious change, for example to cells, but not necessarily deleterious to the exposed individual; harm, clinically observable deleterious effects expressed in individuals or their descendants; and detriment, a complex concept combining the probability, severity and time of expression of harm (para 42). (All added emphases come from the author.) Unfortunately these distinctions are not carried through into the discussion of deterministic effects (DE), and two important terms are left undefined. Presumably effect may refer to change, damage, harm or detriment, according to context. Clinically observable is also undefined, although its meaning is crucial to any consideration of DE since DE are defined as causing observable harm (para 20). (Author)
Deterministic prediction of surface wind speed variations
Directory of Open Access Journals (Sweden)
G. V. Drisya
2014-11-01
Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management, such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or on the probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
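Deterministic forecasting of this kind typically means delay-embedding the series and predicting from the evolution of the nearest past analogue in the reconstructed phase space. A minimal nearest-neighbour version (embedding dimension and delay are illustrative; the paper's method may differ in detail):

```python
import math

def nn_predict(series, dim=3, tau=1):
    """Predict the next value of a time series from the successor of the
    nearest past neighbour of the current state in delay-embedding space."""
    n = len(series)
    # current state: (x[t], x[t-tau], ..., x[t-(dim-1)*tau]) at t = n-1
    current = [series[n - 1 - k * tau] for k in range(dim)]
    best_t, best_d = None, float("inf")
    for t in range((dim - 1) * tau, n - 1):   # t = n-1 excluded: need a successor
        d = sum((series[t - k * tau] - c) ** 2
                for k, c in enumerate(current))
        if d < best_d:
            best_d, best_t = d, t
    return series[best_t + 1]
```

Iterating the predictor on its own output extends the horizon, with errors growing at a rate set by the largest Lyapunov exponent of the underlying dynamics.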
Deterministic Diffusion in Delayed Coupled Maps
International Nuclear Information System (INIS)
Sozanski, M.
2005-01-01
Coupled Map Lattices (CML) are discrete time and discrete space dynamical systems used for modeling phenomena arising in nonlinear systems with many degrees of freedom. In this work, the dynamical and statistical properties of a modified version of the CML with global coupling are considered. The main modification of the model is the extension of the coupling over a set of local map states corresponding to different time iterations. The model with both stochastic and chaotic one-dimensional local maps is studied. Deterministic diffusion in the CML under variation of a control parameter is analyzed for unimodal maps. As a main result, simple relations between statistical and dynamical measures are found for the model and the cases where substituting nonlinear lattices with simpler processes is possible are presented. (author)
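The unmodified reference model here is the standard globally (mean-field) coupled map lattice; the modification studied in the paper extends the coupling over map states from several past iterations. A sketch of the standard one-step version with logistic local maps (parameters illustrative):

```python
def cml_step(xs, eps=0.3, a=4.0):
    """One update of a globally coupled map lattice:
    x_i <- (1 - eps) * f(x_i) + eps * mean_j f(x_j),
    with the logistic local map f(x) = a * x * (1 - x)."""
    fx = [a * x * (1.0 - x) for x in xs]
    mean_f = sum(fx) / len(fx)
    return [(1.0 - eps) * f + eps * mean_f for f in fx]
```

The delayed variant would replace `mean_f` by an average that also includes earlier iterates of the local maps, which is the extension whose diffusion properties the paper analyzes.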
Quantum Mechanics/Molecular Mechanics Modeling of Drug Metabolism
DEFF Research Database (Denmark)
Lonsdale, Richard; Fort, Rachel M; Rydberg, Patrik
2016-01-01
The mechanism of cytochrome P450 (CYP)-catalyzed hydroxylation of primary amines is currently unclear and is relevant to drug metabolism; previous small-model calculations have suggested two possible mechanisms: direct N-oxidation and H-abstraction/rebound. We have modeled the N-hydroxylation of (R)-mexiletine in CYP1A2 with hybrid quantum mechanics/molecular mechanics (QM/MM) methods, providing a more detailed and realistic model. Multiple reaction barriers have been calculated at the QM(B3LYP-D)/MM(CHARMM27) level for the direct N-oxidation and H-abstraction/rebound mechanisms. Our calculated barriers indicate that the direct N-oxidation mechanism is preferred and proceeds via the doublet spin state of Compound I. Molecular dynamics simulations indicate that the presence of an ordered water molecule in the active site assists in the binding of mexiletine in the active site...
Yang, Hyun Mo
2015-12-01
Currently, discrete modelling is widely accepted, owing both to access to computers with huge storage capacity and high-performance processors and to the easy implementation of algorithms, which allows increasingly sophisticated models to be developed and simulated. Wang et al. [7] present a review of dynamics in complex networks, focusing on the interaction between disease dynamics and human behavioral and social dynamics. After an extensive review of human behavior in response to disease dynamics, the authors briefly describe the complex dynamics found in the literature: well-mixed population networks, where spatial structure can be neglected, and other networks that account for heterogeneity in spatially distributed populations. As controlling mechanisms are implemented, such as social distancing due to 'social contagion', quarantine, non-pharmaceutical interventions and vaccination, adaptive behavior can occur in the human population, which can easily be taken into account in dynamics formulated on networked populations.
Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Niels Jacob
1994-01-01
An analysis has been made of the uncertainty in the input parameters of deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be reduced but not eliminated, and which has to be considered for engineering applications. Stochastic models have a potential for ...
Experimental verification of the energetic model of the dry mechanical reclamation process
Directory of Open Access Journals (Sweden)
R. Dańko
2008-04-01
The experimental results of the dry mechanical reclamation process, which constituted the basis for the verification of the energetic model of this process, developed by the author on the basis of Rittinger's deterministic hypothesis of the crushing process, are presented in the paper. Used foundry sands with bentonite, with water glass from the floster technology, and used sands with furan FL 105 resin were used in the reclamation tests. In the mechanical and mechanical-cryogenic reclamation, a wide range of treatment times and reclamation conditions influencing the intensity of the reclamation process - covering all parameters used in industrial devices - was applied. The developed theoretical model constitutes a new tool for selecting optimal treatment times for a given spent foundry sand at the assumed process intensity realized in rotor reclaimers - with leaves or rods as grinding elements mounted horizontally on the rotor axis.
Energy Technology Data Exchange (ETDEWEB)
Passon, Oliver
2010-07-01
Bohmian mechanics is one of the alternative formulations of quantum mechanics, but its epistemological implications deviate radically from the usual Copenhagen interpretation. Its importance therefore lies above all in the area of foundational questions and the interpretation of quantum mechanics, since it permits a solution of the measurement problem that has been debated controversially for decades. At the same time, all predictions of standard quantum mechanics are reproduced. Nevertheless, an elementary introduction to this topic has hitherto been lacking on the German-language textbook market. New in the second edition is a short outline of the relativistic and quantum-field-theoretical generalizations.
Integrated Deterministic-Probabilistic Safety Assessment Methodologies
Energy Technology Data Exchange (ETDEWEB)
Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.
2014-02-01
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address their respective sources of uncertainty, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequence) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of equipment, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)
Langevin equation with the deterministic algebraically correlated noise
International Nuclear Information System (INIS)
Ploszajczak, M.; Srokowski, T.
1995-01-01
Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author)
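The setup described above — a Langevin equation driven by a deterministic chaotic force rather than random noise — can be illustrated with a minimal sketch. The fully chaotic logistic map is used here as a simple stand-in for the extended Sinai billiard, and all names and parameter values are illustrative assumptions:

```python
def chaotic_force(x, scale=1.0):
    """Deterministic 'noise' from the fully chaotic logistic map, a
    stand-in for the extended Sinai billiard force of the paper.
    Returns the next map state and a zero-mean force value."""
    x_next = 4.0 * x * (1.0 - x)
    return x_next, scale * (x_next - 0.5)

def integrate_langevin(v0=0.0, friction=0.5, dt=0.01, steps=10000, x0=0.3):
    """Euler scheme for the Langevin equation dv/dt = -friction*v + F(t),
    where F(t) is the deterministic chaotic force above."""
    v, x = v0, x0
    trajectory = []
    for _ in range(steps):
        x, force = chaotic_force(x)
        v += (-friction * v + force) * dt
        trajectory.append(v)
    return trajectory

velocities = integrate_langevin()
```

The paper's point is that the temporal correlations of the chaotic force (exponential vs. algebraic) shape the resulting statistics; the logistic map here only illustrates the integration scheme, not those correlation properties.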
International Nuclear Information System (INIS)
Bruckner-Foit, A.; Munz, D.
1989-10-01
A deterministic and a probabilistic crack growth analysis are presented for the major defects found in the welds during ultrasonic pre-service inspection. The deterministic analysis includes first a determination of the number of load cycles until crack initiation, then a cycle-by-cycle calculation of the growth of the embedded elliptical cracks, followed by an evaluation of the growth of the semi-elliptical surface crack formed after the crack considered has broken through the wall and, finally, a determination of the critical crack size and shape. In the probabilistic analysis, a Monte-Carlo simulation is performed with a sample of cracks where the statistical distributions of the crack dimensions describe the uncertainty in sizing of the ultrasonic inspection. The distributions of crack depth, crack length and location are evaluated as a function of the number of load cycles. In the simulation, the fracture mechanics model of the deterministic analysis is employed for each random crack. The results of the deterministic and probabilistic crack growth analysis are compared with the results of the second in-service inspection, where stable extension of some of the cracks had been observed. It is found that the prediction and the experiment agree only with a probability of the order of 5% or less.
Rock mechanics models evaluation report
International Nuclear Information System (INIS)
1987-08-01
This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The primary recommendations of the analysis are that the DOT code be used for two-dimensional thermal analysis and that the STEALTH and HEATING 5/6 codes be used for three-dimensional and complicated two-dimensional thermal analysis. STEALTH and SPECTROM 32 are recommended for thermomechanical analyses. The other evaluated codes should be considered for use in certain applications. A separate review of salt creep models indicates that the commonly used exponential time law model is appropriate for use in repository design studies. 38 refs., 1 fig., 7 tabs.
S. Boldyreva; S. Fehr (Serge); A. O'Neill; D. Wagner
2008-01-01
The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO '07), who provided the "strongest possible" notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic ...
Tøndel, Kristin; Niederer, Steven A; Land, Sander; Smith, Nicolas P
2014-05-20
Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of ...
Skin mechanical properties and modeling: A review.
Joodaki, Hamed; Panzer, Matthew B
2018-04-01
The mechanical properties of the skin are important for various applications. Numerous tests have been conducted to characterize the mechanical behavior of this tissue, and this article presents a review on different experimental methods used. A discussion on the general mechanical behavior of the skin, including nonlinearity, viscoelasticity, anisotropy, loading history dependency, failure properties, and aging effects, is presented. Finally, commonly used constitutive models for simulating the mechanical response of skin are discussed in the context of representing the empirically observed behavior.
Multiscale modelling of DNA mechanics
Czech Academy of Sciences Publication Activity Database
Dršata, Tomáš; Lankaš, Filip
2015-01-01
Vol. 27, No. 32 (2015), 323102/1-323102/12 ISSN 0953-8984 R&D Projects: GA ČR(CZ) GA14-21893S Institutional support: RVO:61388963 Keywords: DNA elasticity * DNA coarse-grained models * molecular dynamics simulations Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 2.209, year: 2015
Beeping a Deterministic Time-Optimal Leader Election
Dufoulon , Fabien; Burman , Janna; Beauquier , Joffroy
2018-01-01
The beeping model is an extremely restrictive broadcast communication model that relies only on carrier sensing. In this model, we solve the leader election problem with an asymptotically optimal round complexity of O(D + log n), for a network of unknown size n and unknown diameter D (but with unique identifiers). Contrary to the best previously known algorithms in the same setting, the proposed one is deterministic. The techniques we introduce give a new insight as to how local constraints o...
Non Kolmogorov Probability Models Outside Quantum Mechanics
Accardi, Luigi
2009-03-01
This paper is devoted to the analysis of the main conceptual problems in the interpretation of QM: reality, locality, determinism, physical state, the Heisenberg principle, "deterministic" and "exact" theories, laws of chance, the notion of event, statistical invariants, adaptive realism, EPR correlations and, finally, the EPR-chameleon experiment.
Infinite Random Graphs as Statistical Mechanical Models
DEFF Research Database (Denmark)
Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria
2011-01-01
We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a ...
Deterministic and unambiguous dense coding
International Nuclear Information System (INIS)
Wu Shengjun; Cohen, Scott M.; Sun Yuqing; Griffiths, Robert B.
2006-01-01
Optimal dense coding using a partially-entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension d is studied both in the deterministic case, where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case, where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1-τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For d ≤ D̄ a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. For d > D̄ it is shown that L_d is strictly less than d² unless d is an integer multiple of D̄, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for d ≤ D̄, assuming τ_x > 0 for a set of D̄d messages, and a bound is obtained for the average. A bound on the average requires an additional assumption of encoding by isometries (unitaries when d = D̄) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For d > D̄ it is shown that (at least) d² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially-entangled states, including noisy (mixed) states
Continuum mechanics the birthplace of mathematical models
Allen, Myron B
2015-01-01
Continuum mechanics is a standard course in many graduate programs in engineering and applied mathematics, as it provides the foundations for the various differential equations and mathematical models that are encountered in fluid mechanics, solid mechanics, and heat transfer. This book successfully makes the topic more accessible to advanced undergraduate mathematics majors by aligning the mathematical notation and language with related courses in multivariable calculus, linear algebra, and differential equations; making connections with other areas of applied mathematics where partial differential ...
Mechanism study of pulsus paradoxus using mechanical models.
Directory of Open Access Journals (Sweden)
Chang-yang Xing
Full Text Available Pulsus paradoxus is an exaggeration of the normal inspiratory decrease in systolic blood pressure. Despite a century of attempts to explain this sign, consensus is still lacking. To solve the controversy and reveal the exact mechanism, we reexamined the characteristic anatomic arrangement of the circulation system in the chest and designed these mechanical models based on related hydromechanical principles. Model 1 was designed to observe the primary influence of respiratory intrathoracic pressure change (RIPC) on the systemic and pulmonary venous return systems (SVR and PVR, respectively). Model 2, as an equivalent mechanical model of septal swing, was designed to study the secondary influence of RIPC on the motion of the interventricular septum (IVS), which might be the direct cause of pulsus paradoxus. Model 1 demonstrated that the simulated RIPC had different influences on the simulated SVR and PVR. It increased the volume of the simulated right ventricle (SRV) when the internal pressure was kept constant (8.16 cmH2O), while it had the opposite effect on PVR. Model 2 revealed the three major factors determining the respiratory displacement of the IVS in normal and different pathophysiological conditions: the magnitude of RIPC, the pressure difference between the two ventricles, and the intrapericardial pressure. Our models demonstrate that the different anatomical arrangement of the two venous return systems leads to a different effect of RIPC on the right and left ventricles, and thus a pressure gradient across the IVS that tends to shift the IVS left- and rightwards. When the leftward displacement of the IVS reaches a considerable amplitude in some pathologic condition such as cardiac tamponade, pulsus paradoxus occurs.
Toy Models of a Nonassociative Quantum Mechanics
International Nuclear Information System (INIS)
Dzhunushaliev, V.
2007-01-01
Toy models of a nonassociative quantum mechanics are presented. The Heisenberg equation of motion is modified using a nonassociative commutator. Possible physical applications of a nonassociative quantum mechanics are considered. The idea is discussed that a nonassociative algebra could be the operator language for the nonperturbative quantum theory. In such an approach the nonperturbative quantum theory has observable and unobservable quantities.
A quantum mechanical model of "dark matter"
Belokurov, V. V.; Shavgulidze, E. T.
2014-01-01
The role of singular solutions in some simple quantum mechanical models is studied. The space of the states of two-dimensional quantum harmonic oscillator is shown to be separated into sets of states with different properties.
Solid mechanics theory, modeling, and problems
Bertram, Albrecht
2015-01-01
This textbook offers an introduction to modeling the mechanical behavior of solids within continuum mechanics and thermodynamics. To illustrate the fundamental principles, the book starts with an overview of the most important models in one dimension. Tensor calculus, which is called for in three-dimensional modeling, is concisely presented in the second part of the book. Once the reader is equipped with these essential mathematical tools, the third part of the book develops the foundations of continuum mechanics right from the beginning. Lastly, the book’s fourth part focuses on modeling the mechanics of materials and in particular elasticity, viscoelasticity and plasticity. Intended as an introductory textbook for students and for professionals interested in self-study, it also features numerous worked-out examples to aid in understanding.
Genetic pathways to Neurodegeneration Models and mechanisms ...
Indian Academy of Sciences (India)
Paige Rudich
Models and mechanisms of repeat expansion disorders: a worm's eye view ... retardation 1 gene FMR1 gives rise to a spectrum of neurological disorders (Saul and Tarleton ... autism. Shorter repeat expansion lengths from 55-200 cause the ...
Evaluation of Deterministic and Stochastic Components of Traffic Counts
Directory of Open Access Journals (Sweden)
Ivan Bošnjak
2012-10-01
Full Text Available Traffic counts or statistical evidence of the traffic process are often a characteristic of time-series data. In this paper the fundamental problem of estimating deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
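The trend and seasonal elimination mentioned above can be sketched with plain differencing, the classical preprocessing step before fitting an ARIMA model. The synthetic counts and the period are illustrative assumptions:

```python
def seasonal_difference(series, period):
    """Eliminate a seasonal component by differencing at the seasonal
    lag: y'_t = y_t - y_{t-period}."""
    return [series[t] - series[t - period] for t in range(period, len(series))]

def first_difference(series):
    """Eliminate a (locally) linear trend: y'_t = y_t - y_{t-1}."""
    return [series[t] - series[t - 1] for t in range(1, len(series))]

# Synthetic traffic counts: a linear trend plus a repeating period-4 pattern.
counts = [2 * t + [5, 9, 14, 9][t % 4] for t in range(48)]
residual = first_difference(seasonal_difference(counts, period=4))
```

For a purely deterministic trend-plus-seasonal series like this one, the residual vanishes identically; for real counts, what remains after differencing is the stochastic component to be modelled (e.g. by ARIMA).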
Deterministic secure communication protocol without using entanglement
Cai, Qing-yu
2003-01-01
We show a deterministic secure direct communication protocol using a single qubit in a mixed state. The security of this protocol is based on the security proof of the BB84 protocol. It can be realized with current technologies.
Deterministic chaos in the processor load
International Nuclear Information System (INIS)
Halbiniak, Zbigniew; Jozwiak, Ireneusz J.
2007-01-01
In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working under the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case.
Deterministic Approach to Detect Heart Sound Irregularities
Directory of Open Access Journals (Sweden)
Richard Mengko
2017-07-01
Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From the analysis of the heart sound S-transform and the understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled. This greatly helps the physician to diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately, and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
Risk-based and deterministic regulation
International Nuclear Information System (INIS)
Fischer, L.E.; Brown, N.W.
1995-07-01
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect the public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.
A deterministic seismic hazard map of India and adjacent areas
International Nuclear Information System (INIS)
Parvez, Imtiyaz A.; Vaccari, Franco; Panza, Giuliano
2001-09-01
A seismic hazard map of the territory of India and adjacent areas has been prepared using a deterministic approach based on the computation of synthetic seismograms complete with all main phases. The input data set consists of structural models, seismogenic zones, focal mechanisms and an earthquake catalogue. The synthetic seismograms have been generated by the modal summation technique. The seismic hazard, expressed in terms of maximum displacement (DMAX), maximum velocity (VMAX), and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid of 0.2 deg. x 0.2 deg. over the studied territory. The estimated values of the peak ground acceleration are compared with the observed data available for the Himalayan region and found to be in good agreement. Many parts of the Himalayan region have DGA values exceeding 0.6 g. The epicentral areas of the great Assam earthquakes of 1897 and 1950 represent the maximum hazard, with DGA values reaching 1.2-1.3 g. (author)
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as the season of the year, the day of the week or the time of day. For deterministic radial distribution load flow studies the load is taken as constant. But load varies continually with a high degree of uncertainty, so there is a need to model probable realistic load. Monte-Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
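The sample-then-solve loop described above can be sketched in a few lines. The two-bus feeder, the approximate voltage-drop formula, and all load statistics are illustrative assumptions standing in for a full radial load flow solver:

```python
import random
import statistics

def radial_voltage(p_load, q_load, v_source=1.0, r=0.05, x=0.03):
    """Receiving-end voltage (p.u.) of a toy two-bus radial feeder via
    the approximate voltage-drop formula dV = (R*P + X*Q) / V."""
    return v_source - (r * p_load + x * q_load) / v_source

def monte_carlo_load_flow(p_mean, p_std, q_mean, q_std,
                          n_trials=2000, seed=42):
    """Sample active/reactive loads from their mean and standard
    deviation, run the deterministic flow for each sample, and
    reconstruct the probabilistic voltage statistics."""
    rng = random.Random(seed)
    voltages = []
    for _ in range(n_trials):
        p = rng.gauss(p_mean, p_std)   # probable realistic active load
        q = rng.gauss(q_mean, q_std)   # probable realistic reactive load
        voltages.append(radial_voltage(p, q))
    return statistics.mean(voltages), statistics.stdev(voltages)

mean_v, std_v = monte_carlo_load_flow(p_mean=0.8, p_std=0.1,
                                      q_mean=0.4, q_std=0.05)
```

In the paper's setting, `radial_voltage` would be replaced by an actual backward/forward-sweep radial load flow with ZIP load models; the Monte-Carlo wrapper is unchanged.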
Open Business Models: New Compensation Mechanisms for ...
International Development Research Centre (IDRC) Digital Library (Canada)
Open Business Models: New Compensation Mechanisms for Creativity and Inclusion ... This research aims to explore important new business models in the networked society ...
Quantum mechanical models for the Fermi shuttle
Sternberg, James; Ovchinnikov, S. Yu.; Macek, J. H.
2009-05-01
Although the Fermi shuttle was originally proposed as an explanation for highly energetic cosmic rays, it is also a mechanism for the production of high-energy electrons in atomic collisions [1]. The Fermi shuttle is usually thought of as a classical effect, and most models of this process rely on classical or semi-classical approximations. In this work we explore several quantum mechanical models for ion-atom collisions and examine the evidence for the Fermi shuttle in these models. [1] B. Sulik, Cs. Koncz, K. Tőkési, A. Orbán, and D. Berényi, Phys. Rev. Lett. 88, 073201 (2002)
Modeling the mechanical response of PBX 9501
Energy Technology Data Exchange (ETDEWEB)
Ragaswamy, Partha [Los Alamos National Laboratory; Lewis, Matthew W [Los Alamos National Laboratory; Liu, Cheng [Los Alamos National Laboratory; Thompson, Darla G [Los Alamos National Laboratory
2010-01-01
An engineering overview of the mechanical response of Plastic-Bonded eXplosives (PBXs), specifically PBX 9501, will be provided with emphasis on observed mechanisms associated with different types of mechanical testing. Mechanical tests in the form of uniaxial tension, compression, cyclic loading, creep (compression and tension), and Hopkinson bar show strain rate and temperature dependence. A range of mechanical behavior is observed, which includes small-strain recoverable response in the form of viscoelasticity; change in stiffness and softening beyond peak strength due to damage in the form of microcracks, debonding, void formation and the growth of existing voids; inelastic response in the form of irrecoverable strain as shown in cyclic tests; and viscoelastic creep combined with plastic response as demonstrated in creep and recovery tests. The main focus of this paper is to elucidate the challenges and issues involved in modeling the mechanical behavior of PBXs for simulating thermo-mechanical responses in engineering components. Examples of validation of a constitutive material model based on a few of the observed mechanisms will be demonstrated against three-point bending, split Hopkinson pressure bar and Brazilian disk geometry.
Chemo-mechanical modeling of tumor growth in elastic epithelial tissue
Energy Technology Data Exchange (ETDEWEB)
Bratsun, Dmitry A., E-mail: bratsun@pspu.ru [Department of Applied Physics, Perm National Research Polytechnical University, Perm, 614990 (Russian Federation); Zakharov, Andrey P. [Department of Chemical Engineering, Technion-Israel Institute of Technology, Haifa, 32000 Israel (Israel); Theoretical Physics Department, Perm State Humanitarian Pedagogical University, Perm, 614990 (Russian Federation); Pismen, Len [Department of Chemical Engineering, Technion-Israel Institute of Technology, Haifa, 32000 Israel (Israel)
2016-08-02
We propose a multiscale chemo-mechanical model of the cancer tumor development in the epithelial tissue. The epithelium is represented by an elastic 2D array of polygonal cells with its own gene regulation dynamics. The model allows the simulation of the evolution of multiple cells interacting via the chemical signaling or mechanically induced strain. The algorithm includes the division and intercalation of cells as well as the transformation of normal cells into a cancerous state triggered by a local failure of the spatial synchronization of the cellular rhythms driven by transcription/translation processes. Both deterministic and stochastic descriptions of the system are given for chemical signaling. The transformation of cells means the modification of their respective parameters responsible for chemo-mechanical interactions. The simulations reproduce a distinct behavior of invasive and localized carcinoma. Generally, the model is designed in such a way that it can be readily modified to take account of any newly understood gene regulation processes and feedback mechanisms affecting chemo-mechanical properties of cells.
Chemo-mechanical modeling of tumor growth in elastic epithelial tissue
Bratsun, Dmitry A.; Zakharov, Andrey P.; Pismen, Len
2016-08-01
We propose a multiscale chemo-mechanical model of the cancer tumor development in the epithelial tissue. The epithelium is represented by an elastic 2D array of polygonal cells with its own gene regulation dynamics. The model allows the simulation of the evolution of multiple cells interacting via the chemical signaling or mechanically induced strain. The algorithm includes the division and intercalation of cells as well as the transformation of normal cells into a cancerous state triggered by a local failure of the spatial synchronization of the cellular rhythms driven by transcription/translation processes. Both deterministic and stochastic descriptions of the system are given for chemical signaling. The transformation of cells means the modification of their respective parameters responsible for chemo-mechanical interactions. The simulations reproduce a distinct behavior of invasive and localized carcinoma. Generally, the model is designed in such a way that it can be readily modified to take account of any newly understood gene regulation processes and feedback mechanisms affecting chemo-mechanical properties of cells.
International Nuclear Information System (INIS)
Chen, Chang-Kuo; Hou, Yi-You; Luo, Cheng-Long
2012-01-01
Highlights: ► An efficient design procedure for deterministic response-time design of nuclear I and C systems. ► We model the concurrent operations based on sequence diagrams and Petri nets. ► The model can achieve deterministic behavior by using symbolic time representation. ► An illustrative example of the bistable processor logic is given. - Abstract: This study is concerned with a deterministic response-time design for computer-based systems in the nuclear industry. In the current approach, Petri nets are used to model the requirements of a system specified with sequence diagrams. Also, linear logic is proposed to characterize state changes in the Petri net model accurately by using symbolic time representation for the purpose of acquiring deterministic behavior. An illustrative example of the bistable processor logic is provided to demonstrate the practicability of the proposed approach.
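The token-firing semantics underlying such a Petri net model can be sketched in a few lines. The places, the single transition, and the bistable-processor reading are illustrative assumptions, not the paper's actual net:

```python
def can_fire(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Consume input tokens and produce output tokens; returns a new marking."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Toy fragment: an incoming signal is processed by an idle unit into a trip decision.
t_process = {"in": {"signal": 1, "idle": 1}, "out": {"trip": 1}}
m0 = {"signal": 1, "idle": 1, "trip": 0}
m1 = fire(m0, t_process) if can_fire(m0, t_process) else m0
```

The paper's symbolic time representation would additionally attach timing constraints to such transitions so the reachable markings, and hence the response time, become deterministic.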
International Nuclear Information System (INIS)
Azadeh, A.; Ghaderi, S.F.; Omrani, H.
2009-01-01
This paper presents a deterministic approach for performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units have been considered for the purpose of this study. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach, since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. Moreover, the DEA models are verified and validated by PCA, Spearman and Kendall's Tau correlation techniques, whereas previous studies lack these verification and validation features. Also, both input- and output-oriented DEA models are used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for assessment and optimization of power distribution units in Iran
Study on modeling of operator's learning mechanism
International Nuclear Information System (INIS)
Yoshimura, Seichi; Hasegawa, Naoko
1998-01-01
One effective method to analyze the causes of human errors is to model the behavior of humans and to simulate it. The Central Research Institute of Electric Power Industry (CRIEPI) has developed an operator team behavior simulation system called SYBORG (Simulation System for the Behavior of an Operating Group) to analyze human errors and to establish countermeasures for them. As the operator behavior model that composes SYBORG has no learning mechanism and its knowledge of the plant is fixed, it cannot take suitable actions when unknown situations occur, nor learn anything from experience. However, for actual operators, learning is an essential human factor in enhancing their ability to diagnose plant anomalies. In this paper, Q-learning with 1/f fluctuation was proposed as a learning mechanism for an operator, and a simulation using the mechanism was conducted. The results showed the effectiveness of the learning mechanism. (author)
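A tabular Q-learning update of the kind proposed above can be sketched as follows. The toy diagnosis states, actions, rewards, and the uniform perturbation standing in for the 1/f fluctuation are all illustrative assumptions (a true 1/f noise generator is beyond this sketch):

```python
import random

def q_update(Q, state, action, reward, next_state,
             alpha=0.1, gamma=0.9, noise=0.0):
    """One tabular Q-learning update. `noise` is an additive perturbation
    standing in for the paper's 1/f fluctuation (a placeholder, not the
    actual 1/f generator)."""
    best_next = max(Q[next_state].values())
    target = reward + gamma * best_next + noise
    Q[state][action] += alpha * (target - Q[state][action])

# Toy diagnosis task: in state "anomaly", the action "inspect" is rewarded.
Q = {s: {a: 0.0 for a in ("inspect", "adjust")}
     for s in ("normal", "anomaly")}
rng = random.Random(0)
for _ in range(200):
    q_update(Q, "anomaly", "inspect", reward=1.0,
             next_state="normal", noise=rng.uniform(-0.01, 0.01))
```

Over repeated experiences the value of the rewarded diagnosis action grows toward the discounted return, which is how such a learning mechanism lets a simulated operator improve its responses to plant anomalies.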
Langevin equation with the deterministic algebraically correlated noise
Energy Technology Data Exchange (ETDEWEB)
Ploszajczak, M. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Srokowski, T. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France)]|[Institute of Nuclear Physics, Cracow (Poland)
1995-12-31
Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author). 58 refs.
A Deterministic Approach to Earthquake Prediction
Directory of Open Access Journals (Sweden)
Vittorio Sgrigna
2012-01-01
Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at viewing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, what is lacking up to now is the demonstration of a causal relationship (with explained physical processes) between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a strong new theoretical scientific effort is necessary to try to understand the physics of the earthquake.
Massuel, S.; George, B. A.; Venot, J.-P.; Bharati, L.; Acharya, S.
2013-11-01
Since the 1990s, Indian farmers, supported by the government, have partially shifted from surface-water to groundwater irrigation in response to the uncertainty in surface-water availability. Water-management authorities only slowly began to consider sustainable use of groundwater resources as a prime concern. Now, a reliable integration of groundwater resources for water-allocation planning is needed to prevent aquifer overexploitation. Within the 11,000-km2 Musi River sub-basin (South India), human interventions have dramatically impacted the hard-rock aquifers, with a water-table drop of 0.18 m/a over the period 1989-2004. A fully distributed numerical groundwater model was successfully implemented at catchment scale. The model allowed two distinct conceptualizations of groundwater availability to be quantified: one that was linked to easily quantified fluxes, and one that was more expressive of long-term sustainability by taking account of all sources and sinks. Simulations showed that the latter implied 13 % less available groundwater for exploitation than did the former. In turn, this has major implications for the existing water-allocation modelling framework used to guide decision makers and water-resources managers worldwide.
Modeling and Generating Strategy Games Mechanics
DEFF Research Database (Denmark)
Mahlmann, Tobias
of the game is, how players may manipulate the game world, etc. We present the Strategy Games Description Language (SGDL), a tree-based approach to model the game mechanics of strategy games. SGDL allows game designers to rapidly prototype their game ideas with the help of our customisable game engine. We...... their games to individual players’ preferences by creating game content adaptively to how the player plays (and likes) a game. We extend the notion of “procedural game content generation” by “game mechanics”. Game mechanics herein refer to the way that objects in a game may interact, what the goal...... present several example games to demonstrate the capabilities of the language and how to model common strategy game elements. Furthermore, we present methods to procedurally generate and evaluate game mechanics modelled in SGDL in terms of enjoyability. We argue that an evolutionary process can be used
Convergence studies of deterministic methods for LWR explicit reflector methodology
International Nuclear Information System (INIS)
Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.
2013-01-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use Albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
Selroos, J. O.; Appleyard, P.; Bym, T.; Follin, S.; Hartley, L.; Joyce, S.; Munier, R.
2015-12-01
In 2011 the Swedish Nuclear Fuel and Waste Management Company (SKB) applied for a license to start construction of a final repository for spent nuclear fuel at Forsmark in Northern Uppland, Sweden. The repository is to be built at approximately 500 m depth in crystalline rock. A stochastic, discrete fracture network (DFN) concept was chosen for interpreting the surface-based (incl. borehole) data, and for assessing the safety of the repository in terms of groundwater flow and flow pathways to and from the repository. Once repository construction starts, underground data such as tunnel pilot borehole and tunnel trace data will also become available. It is deemed crucial that DFN models developed at this stage honor the mapped structures both in terms of location and geometry, and in terms of flow characteristics. The originally fully stochastic models will thus become increasingly deterministic towards the repository. Applying the adopted probabilistic framework, predictive modeling to support acceptance criteria for layout and disposal can be performed with the goal of minimizing risks associated with the repository. This presentation describes and illustrates various methodologies that have been developed to condition stochastic realizations of fracture networks around underground openings using borehole and tunnel trace data, as well as hydraulic measurements of inflows or hydraulic interference tests. The methodologies, implemented in the numerical simulators ConnectFlow and FracMan/MAFIC, are described in some detail, and verification tests and realistic example cases are shown. Specifically, geometric and hydraulic data are obtained from numerical synthetic realities approximating Forsmark conditions, and are used to test the constraining power of the developed methodologies by conditioning unconditional DFN simulations following the same underlying fracture network statistics. Various metrics are developed to assess how well the conditional simulations compare to
Ordinal optimization and its application to complex deterministic problems
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
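The ordinal-optimization idea above, ranking designs with a cheap noisy surrogate and softening the selection goal from "find the best" to "capture some of the top-g", can be sketched in a few lines. The quadratic "true" cost and the noise level are invented for illustration:

```python
import random

# Ordinal Optimization sketch: rank many designs with a crude, noisy model
# (simple deterministic component + white-noise-like term), keep a softened
# top-s selection, and count how many of the true top-g designs it captures.
random.seed(1)

thetas = [i / 999 for i in range(1000)]           # candidate designs
true_cost = {t: (t - 0.3) ** 2 for t in thetas}   # "expensive" model (pretend)
crude = {t: true_cost[t] + random.gauss(0, 0.05)  # crude model = truth + noise
         for t in thetas}

g, s = 10, 50                                     # true good set, selected set
top_true = set(sorted(thetas, key=true_cost.get)[:g])
selected = set(sorted(thetas, key=crude.get)[:s])  # ordinal comparison

overlap = len(top_true & selected)
print(f"{overlap}/{g} of the true top designs captured by ordinal selection")
```

The point of goal softening is visible here: matching the single best design under this much noise is hopeless, but a modest selected set reliably intersects the true good set, and only those s designs need the expensive evaluation.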
Design of deterministic interleaver for turbo codes
International Nuclear Information System (INIS)
Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.
2008-01-01
The choice of suitable interleaver for turbo codes can improve the performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The performance and design of a deterministic interleaver for short frame turbo codes is considered in this paper. The main characteristic of this class of deterministic interleaver is that their algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that the interleaver designed in this manner improves the minimum distance or reduces the multiplicity of first few spectral lines of minimum distance spectrum. Finally we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames to improve the decoding capability of MAP (Maximum A Posteriori) probability decoder. Our solution to design a deterministic interleaver outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
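One standard way to realize an algebraically designed deterministic interleaver is a quadratic permutation polynomial (QPP), pi(i) = (f1*i + f2*i^2) mod N, and the abstract's circular shift is a final rotation of the output indices. The sketch below uses a valid QPP coefficient pair for N = 40; these coefficients and the spread measure are illustrative, not necessarily the authors' design:

```python
# Deterministic interleaver sketch: quadratic permutation polynomial (QPP)
# followed by a circular shift of the interleaved indices.

def qpp_interleaver(n, f1, f2, shift=0):
    """pi(i) = (f1*i + f2*i^2 + shift) mod n; asserts it is a permutation."""
    perm = [((f1 * i + f2 * i * i) % n + shift) % n for i in range(n)]
    assert sorted(perm) == list(range(n)), "coefficients do not permute"
    return perm

def spread(perm):
    """Minimum combined input/output separation over all index pairs,
    a simple measure of how uniformly subsets are dispersed."""
    n = len(perm)
    best = 2 * n
    for i in range(n):
        for j in range(i + 1, n):
            d = min(abs(i - j), n - abs(i - j)) + \
                min(abs(perm[i] - perm[j]), n - abs(perm[i] - perm[j]))
            best = min(best, d)
    return best

pi = qpp_interleaver(40, 3, 10, shift=7)   # f1=3, f2=10 is valid for N=40
print(pi[:8], "spread:", spread(pi))
```

A larger spread pushes the points of small interleaved subsets apart over the whole frame, which is exactly the uniform-dispersion property the abstract credits for improving the minimum-distance spectrum.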
Prediction of Sliding Friction Coefficient Based on a Novel Hybrid Molecular-Mechanical Model.
Zhang, Xiaogang; Zhang, Yali; Wang, Jianmei; Sheng, Chenxing; Li, Zhixiong
2018-08-01
Sliding friction is a complex phenomenon which arises from the mechanical and molecular interactions of asperities when examined at the microscale. To reveal and further understand the effects of the microscale mechanical and molecular components of the friction coefficient on overall frictional behavior, a hybrid molecular-mechanical model is developed to investigate the effects of the main factors, including different loads and surface roughness values, on the sliding friction coefficient under boundary lubrication. Numerical modelling was conducted using a deterministic contact model based on the molecular-mechanical theory of friction. In the contact model, with given external loads and surface topographies, the pressure distribution, real contact area, and elastic/plastic deformation of each single asperity contact were calculated. The asperity friction coefficient was then predicted as the sum of the mechanical and molecular components of the friction coefficient. The mechanical component was mainly determined by the contact width and elastic/plastic deformation, and the molecular component was estimated as a function of the contact area and interfacial shear stress. Numerical results were compared with experimental results and good agreement was obtained. The model was then used to predict friction coefficients under different operating and surface conditions. Numerical results explain why the applied load has a minimal effect on the friction coefficients. They also provide insight into the effect of surface roughness on the mechanical and molecular components of the friction coefficient. It is revealed that the mechanical component dominates the friction coefficient when the surface roughness is large (Rq > 0.2 μm), while the friction coefficient is mainly determined by the molecular component when the surface is relatively smooth (Rq < 0.2 μm). Furthermore, optimal roughness values for minimizing the friction coefficient are recommended.
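The qualitative structure of this two-component friction law, a molecular term that grows with real contact area (large on smooth surfaces) plus a mechanical deformation term that grows with roughness, can be illustrated with made-up constants. Every number below is an assumption for illustration, not a value from the paper:

```python
# Toy two-term friction law: mu = mu_mechanical + mu_molecular.
# Opposite roughness trends of the two terms produce an optimal roughness.

def components(rq_um):
    """Return (mechanical, molecular) parts; constants are invented."""
    mu_mech = 0.25 * rq_um                 # deformation/ploughing term
    mu_mol = 0.08 / (1.0 + 5.0 * rq_um)    # interfacial shear term, scales
    return mu_mech, mu_mol                 # with real contact area

def mu(rq_um):
    return sum(components(rq_um))

for rq in (0.05, 0.2, 0.5):
    mech, mol = components(rq)
    print(f"Rq={rq} um: mechanical={mech:.3f} molecular={mol:.3f}")

# Grid search for the roughness minimizing total friction.
best = min((i / 1000 for i in range(1, 500)), key=mu)
print("roughness minimizing mu (um):", best)
```

With these toy constants the molecular term dominates on smooth surfaces and the mechanical term on rough ones, and the sum has an interior minimum, the same crossover and "optimal roughness" behavior the abstract reports.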
Mechanics model for actin-based motility.
Lin, Yuan
2009-02-01
We present here a mechanics model for force generation by actin polymerization. The possible adhesions between the actin filaments and the load surface, as well as the nucleation and capping of filament tips, are included in this model on top of the well-known elastic Brownian ratchet formulation. A closed-form solution is provided, from which the force-velocity relationship, summarizing the mechanics of polymerization, can be drawn. Model predictions of the velocity of moving beads driven by actin polymerization are consistent with experimental observations. This model also seems capable of explaining the enhanced actin-based motility of Listeria monocytogenes and beads in the presence of Vasodilator-stimulated phosphoprotein, as observed in recent experiments.
Quantum Mechanical Modeling of Ballistic MOSFETs
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan (Technical Monitor)
2001-01-01
The objective of this project was to develop theory, approximations, and computer code to model quasi 1D structures such as nanotubes, DNA, and MOSFETs: (1) Nanotubes: Influence of defects on ballistic transport, electro-mechanical properties, and metal-nanotube coupling; (2) DNA: Model electron transfer (biochemistry) and transport experiments, and sequence dependence of conductance; and (3) MOSFETs: 2D doping profiles, polysilicon depletion, source to drain and gate tunneling, understand ballistic limit.
Modeling the mechanical properties of DNA nanostructures.
Arbona, Jean Michel; Aimé, Jean-Pierre; Elezgaray, Juan
2012-11-01
We discuss generalizations of a previously published coarse-grained description [Mergell et al., Phys. Rev. E 68, 021911 (2003)] of double stranded DNA (dsDNA). The model is defined at the base-pair level and includes the electrostatic repulsion between neighbor helices. We show that the model reproduces mechanical and elastic properties of several DNA nanostructures (DNA origamis). We also show that electrostatic interactions are necessary to reproduce atomic force microscopy measurements on planar DNA origamis.
Distinguishing deterministic and noise components in ELM time series
International Nuclear Information System (INIS)
Zvejnieks, G.; Kuzovkov, V.N.
2004-01-01
Full text: One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components in experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial engineering principles which allows us to distinguish deterministic and noise components. We extended the linear autoregression method (AR) by including non-linearity (the NAR method). As a starting point we have chosen non-linearity in polynomial form; however, the NAR method can be extended to any other type of non-linear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we have analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The obtained results indicate that a linear AR model can describe the ELM behavior. In turn, this means that type I ELM behavior is of a relaxation or random type
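The AR-versus-NAR selection described above can be sketched as a least-squares fit of y_t = f(y_{t-1}) for polynomial f of increasing degree, keeping the degree with the lowest BIC. The synthetic series below is linear-AR plus noise, so the criterion should favor low degree; the coefficients and noise level are illustrative assumptions:

```python
import math
import random

# Fit polynomial NAR models y_t = sum_d c_d * y_{t-1}^d by least squares
# and select the degree with the lowest Bayesian Information Criterion:
# BIC = n*log(RSS/n) + k*log(n), with k fitted parameters.
random.seed(2)

y = [0.0]
for _ in range(400):                      # synthetic linear-AR(1) series
    y.append(0.7 * y[-1] + random.gauss(0, 0.1))

def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * xc for x, xc in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_bic(series, degree):
    xs, ys = series[:-1], series[1:]
    n, k = len(ys), degree + 1
    feats = [[x ** d for d in range(k)] for x in xs]
    ata = [[sum(f[i] * f[j] for f in feats) for j in range(k)] for i in range(k)]
    atb = [sum(f[i] * t for f, t in zip(feats, ys)) for i in range(k)]
    coef = solve(ata, atb)                # normal equations
    rss = sum((t - sum(c * f[i] for i, c in enumerate(coef))) ** 2
              for f, t in zip(feats, ys))
    return n * math.log(rss / n) + k * math.log(n)

bics = {d: fit_bic(y, d) for d in (1, 2, 3)}
best = min(bics, key=bics.get)
print("BIC-selected polynomial degree:", best)
```

Because the data are genuinely linear, the extra polynomial terms buy almost no residual reduction, and the log(n)-per-parameter penalty makes BIC prefer the linear model, mirroring the paper's conclusion that a linear AR model suffices for type I ELMs.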
Deterministic dense coding with partially entangled states
Mozes, Shay; Oppenheim, Jonathan; Reznik, Benni
2005-01-01
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d > 2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d² − 1. We also find that states with less entanglement can have a greater deterministic communication capacity than other, more entangled states.
Nonsmooth mechanics models, dynamics and control
Brogliato, Bernard
2016-01-01
Now in its third edition, this standard reference is a comprehensive treatment of nonsmooth mechanical systems refocused to give more prominence to control and modelling. It covers Lagrangian and Newton–Euler systems, detailing mathematical tools such as convex analysis and complementarity theory. The ways in which nonsmooth mechanics influence and are influenced by well-posedness analysis, numerical analysis and simulation, modelling and control are explained. Contact/impact laws, stability theory and trajectory-tracking control are given in-depth exposition connected by a framework formed from complementarity systems and measure-differential inclusions. Links are established with electrical circuits with set-valued nonsmooth elements and with other nonsmooth dynamical systems like impulsive and piecewise linear systems. Nonsmooth Mechanics (third edition) has been substantially rewritten, edited and updated to account for the significant body of results that have emerged in the twenty-first century—incl...
ANALYSIS AND MODELING OF GENEVA MECHANISM
Directory of Open Access Journals (Sweden)
HARAGA Georgeta
2015-06-01
Full Text Available The paper presents some theoretical and practical aspects of the finite element analysis and modelling of a Geneva mechanism with four slots, using the CATIA graphic program. This type of mechanism is an example of intermittent gearing that translates a continuous rotation into an intermittent rotary motion. It consists of alternate periods of motion and rest without reversing direction. In this paper, the design parameters that specify a Geneva mechanism are defined precisely, such as the number of driving cranks, number of slots, wheel diameter, pin diameter, etc. Finite element analysis (FEA) can be used for creating a finite element model (preprocessing) and visualizing the analysis results (postprocessing), and other solvers can be used for processing.
Modeling of SCC initiation and propagation mechanisms in BWR environments
Energy Technology Data Exchange (ETDEWEB)
Hoffmeister, Hans, E-mail: Hans.Hoffmeister@hsu-hh.de [Institute for Failure Analysis and Failure Prevention ISSV e.V., c/o Helmut Schmidt University of the Federal Armed Forces, D-22039 Hamburg (Germany); Klein, Oliver [Institute for Failure Analysis and Failure Prevention ISSV e.V., c/o Helmut Schmidt University of the Federal Armed Forces, D-22039 Hamburg (Germany)
2011-12-15
Highlights: • We show that SCC in BWR environments includes anodic crack propagation and hydrogen assisted cracking. • Hydrogen cracking is triggered by crack tip acidification following local impurity accumulations and subsequent phase precipitations. • We calculate effects of pH, chlorides, potentials and stress on SCC crack growth rates at 288 °C. - Abstract: During operation of mainly BWRs (Boiling Water Reactors), excursions from recommended water chemistries may provide favorable conditions for stress corrosion cracking (SCC). Maximum levels of chloride and sulfate ion contents for avoiding local corrosion are therefore given in the respective water specifications. In a previously published deterministic 288 °C corrosion model for nickel as a main alloying element of BWR components, it was demonstrated that, as a theoretical worst case, bulk water chloride levels as low as 30 ppb provide local chloride ion accumulation, dissolution of passivating nickel oxide and precipitation of nickel chlorides, followed by subsequent local acidification. In an extension of the above model to SCC, the following work shows that, in a first step, local anodic path corrosion with subsequent oxide breakdown, chloride salt formation and acidification at 288 °C would establish local cathodic reduction of accumulated hydrogen ions inside the crack tip fluid. In a second step, local hydrogen reduction charges and increasing local crack tip strains from increasing crack lengths at given global stresses are calculated time-stepwise and related to experimentally determined crack-critical cathodic hydrogen charges and fracture strains taken from small-scale SSRT tensile test pieces. As a result, at local hydrogen equilibrium potentials higher than those of nickel in the crack tip solution, hydrogen ion reduction initiates hydrogen crack propagation that is enhanced with
(AJST) Statistical mechanics model for orientational motion of two-dimensional rigid rotator
African Journals Online (AJOL)
Science and Engineering Series Vol. 6, No. 2, pp. 94-101. Statistical mechanics model for orientational motion of two-dimensional rigid rotator. Malo, J.O. ... there is no translational motion and that they are well separated so ... constant and I is the moment of inertia of a linear rotator. Thus, the ...
Modeling mechanical interactions between cancerous mammary acini
Wang, Jeffrey; Liphardt, Jan; Rycroft, Chris
2015-03-01
The rules and mechanical forces governing cell motility and interactions with the extracellular matrix of a tissue are often critical for understanding the mechanisms by which breast cancer is able to spread through the breast tissue and eventually metastasize. Ex vivo experimentation has demonstrated the formation of long collagen fibers through collagen gels between the cancerous mammary acini responsible for milk production, providing a fiber scaffolding along which cancer cells can disorganize. We present a minimal mechanical model that serves as a potential explanation for the formation of these collagen fibers and the resultant motion. Our working hypothesis is that cancerous cells induce this fiber formation by pulling on the gel and taking advantage of the specific mechanical properties of collagen. To model this system, we employ a new Eulerian, fixed-grid simulation method to model the collagen as a nonlinear viscoelastic material subject to various forces, coupled with a multi-agent model to describe individual cancer cells. We find that these phenomena can be explained by two simple ideas: cells pull collagen radially inwards and move towards the tension gradient of the collagen gel, while being exposed to standard adhesive and collision forces.
Optimal Deterministic Investment Strategies for Insurers
Directory of Open Access Journals (Sweden)
Ulrich Rieder
2013-11-01
Full Text Available We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.
Modelling of volatility in monetary transmission mechanism
Energy Technology Data Exchange (ETDEWEB)
Dobešová, Anna; Klepáč, Václav; Kolman, Pavel [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, 61300, Brno (Czech Republic); Bednářová, Petra [Institute of Technology and Business, Okružní 517/10, 370 01, České Budějovice (Czech Republic)
2015-03-10
The aim of this paper is to compare different approaches to modeling of volatility in monetary transmission mechanism. For this purpose we built time-varying parameter VAR (TVP-VAR) model with stochastic volatility and VAR-DCC-GARCH model with conditional variance. The data from three European countries are included in the analysis: the Czech Republic, Germany and Slovakia. Results show that VAR-DCC-GARCH system captures higher volatility of observed variables but main trends and detected breaks are generally identical in both approaches.
Modelling of volatility in monetary transmission mechanism
International Nuclear Information System (INIS)
Dobešová, Anna; Klepáč, Václav; Kolman, Pavel; Bednářová, Petra
2015-01-01
The aim of this paper is to compare different approaches to modeling of volatility in monetary transmission mechanism. For this purpose we built time-varying parameter VAR (TVP-VAR) model with stochastic volatility and VAR-DCC-GARCH model with conditional variance. The data from three European countries are included in the analysis: the Czech Republic, Germany and Slovakia. Results show that VAR-DCC-GARCH system captures higher volatility of observed variables but main trends and detected breaks are generally identical in both approaches
Equilibrium statistical mechanics of lattice models
Lavis, David A
2015-01-01
Most interesting and difficult problems in equilibrium statistical mechanics concern models which exhibit phase transitions. For graduate students and more experienced researchers this book provides an invaluable reference source of approximate and exact solutions for a comprehensive range of such models. Part I contains background material on classical thermodynamics and statistical mechanics, together with a classification and survey of lattice models. The geometry of phase transitions is described and scaling theory is used to introduce critical exponents and scaling laws. An introduction is given to finite-size scaling, conformal invariance and Schramm—Loewner evolution. Part II contains accounts of classical mean-field methods. The parallels between Landau expansions and catastrophe theory are discussed and Ginzburg—Landau theory is introduced. The extension of mean-field theory to higher-orders is explored using the Kikuchi—Hijmans—De Boer hierarchy of approximations. In Part III the use of alge...
Systematic comparison of model polymer nanocomposite mechanics.
Xiao, Senbo; Peter, Christine; Kremer, Kurt
2016-09-13
Polymer nanocomposites render a range of outstanding materials, from natural products such as silk, sea shells and bones to synthesized nanoclay- or carbon-nanotube-reinforced polymer systems. In contrast to the fast-expanding interest in this type of material, the fundamental mechanisms of their mixing, phase behavior and reinforcement, especially for the higher nanoparticle content relevant for bio-inorganic composites, are still not fully understood. Although polymer nanocomposites exhibit diverse morphologies, qualitatively their mechanical properties are believed to be governed by a few parameters, namely their internal polymer network topology, nanoparticle volume fraction, particle surface properties and so on. Relating material mechanics to such elementary parameters is the purpose of this work. By taking a coarse-grained molecular modeling approach, we study a range of different polymer nanocomposites. We vary polymer-nanoparticle connectivity, surface geometry and volume fraction to systematically study rheological/mechanical properties. Our models cover different materials and reproduce key characteristics of real nanocomposites, such as phase separation and mechanical reinforcement. The results shed light on establishing elementary structure-property-function relationships of polymer nanocomposites.
Mechanics, Models and Methods in Civil Engineering
Maceri, Franco
2012-01-01
"Mechanics, Models and Methods in Civil Engineering" collects leading papers dealing with actual Civil Engineering problems. The approach is in the line of the Italian-French school and therefore deeply couples mechanics and mathematics, creating new predictive theories, enhancing clarity in understanding, and improving effectiveness in applications. The authors of the contributions collected here belong to the Lagrange Laboratory, a European Research Network that has been active for many years. This book will be of major interest to readers aware of modern Civil Engineering.
Are deterministic methods suitable for short term reserve planning?
International Nuclear Information System (INIS)
Voorspools, Kris R.; D'haeseleer, William D.
2005-01-01
Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks whether the N-1 reserve is a logical fixed value for minutes reserve. The second test procedure investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as minutes reserve. The expected jump in reliability, resulting in low reliability for reserve margins lower than the largest unit and high reliability above, is not observed. The second test shows that neither the N-1 reserve nor the percentage reserve method provides a stable reliability level that is independent of power demand. For the N-1 reserve, the reliability increases with decreasing maximum demand. For the percentage reserve, the reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that probability-based methods are to be preferred over deterministic methods
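The stochastic criterion used above can be illustrated by computing the loss-of-load probability for a tiny generating fleet by full state enumeration, alongside the deterministic N-1 rule. Units, forced-outage rates and demand levels are invented for illustration; real systems have far more units and use expectation over a load duration curve:

```python
from itertools import product

# Loss-of-load probability by enumerating unit up/down states.
# Each unit is (capacity in MW, forced outage rate).
units = [(400, 0.05), (300, 0.04), (300, 0.04), (200, 0.02)]

def loss_of_load_probability(demand):
    """Probability that available capacity falls below demand."""
    lolp = 0.0
    for states in product((0, 1), repeat=len(units)):  # 1 = unit available
        p, cap = 1.0, 0.0
        for (mw, q), up in zip(units, states):
            p *= (1 - q) if up else q
            cap += mw * up
        if cap < demand:
            lolp += p
    return lolp

total = sum(mw for mw, _ in units)          # 1200 MW installed
n1_reserve = max(mw for mw, _ in units)     # N-1 rule: largest unit, 400 MW

for demand in (600, 700, 800):
    print(f"demand {demand} MW: margin {total - demand} MW, "
          f"LOLP {loss_of_load_probability(demand):.4f}")
```

Running this shows the paper's point in miniature: the reliability delivered by a fixed deterministic margin (here 400 MW under the N-1 rule) is not constant but shifts with the demand level.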
Deterministic hazard quotients (HQs): Heading down the wrong road
International Nuclear Information System (INIS)
Wilde, L.; Hunter, C.; Simpson, J.
1995-01-01
The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in the remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates that further risk evaluation is needed, while an HQ < 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify the uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measurements of associated uncertainty. Sensitivity analyses were used to identify the most important factors significantly influencing risk determination. Understanding the uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches
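The probabilistic alternative described above can be sketched directly: instead of forming HQ = EPC / EBV from single conservative point values, sample both quantities from distributions and report the probability that the quotient exceeds 1. The lognormal parameters below are illustrative assumptions, not values from the paper:

```python
import random

# Monte Carlo hazard quotient: HQ = EPC / EBV with both inputs sampled
# from (assumed) lognormal distributions, yielding P(HQ >= 1) and a sense
# of the uncertainty that a single deterministic HQ hides.
random.seed(3)

def prob_hq_exceeds_one(n=100_000):
    count = 0
    for _ in range(n):
        epc = random.lognormvariate(mu=0.0, sigma=0.5)  # exposure conc.
        ebv = random.lognormvariate(mu=0.4, sigma=0.3)  # benchmark value
        if epc / ebv >= 1.0:
            count += 1
    return count / n

p = prob_hq_exceeds_one()
print(f"P(HQ >= 1) = {p:.3f}")
```

A risk manager then screens on a probability (say, exclude a site only if P(HQ ≥ 1) is below some tolerance) rather than on whether a single biased quotient crosses 1.0 by 0.1.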
Skewed factor models using selection mechanisms
Kim, Hyoung-Moon; Maadooliat, Mehdi; Arellano-Valle, Reinaldo B.; Genton, Marc G.
2015-01-01
Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.
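The selection mechanism in the skew-normal case can be illustrated with its standard stochastic representation: with independent standard normals Z0, Z1, the variate X = δ|Z0| + sqrt(1 - δ²)Z1 is skew-normal with shape α = δ/sqrt(1 - δ²), equivalent to selecting Z1 conditional on a correlated latent variable being positive. A sketch (not the authors' ECME machinery; the shape value and sample size are arbitrary):

```python
import math
import random

def skew_normal_sample(alpha, n=50_000, seed=7):
    """Draw skew-normal variates via the selection representation
    X = delta*|Z0| + sqrt(1 - delta^2)*Z1, delta = alpha/sqrt(1+alpha^2)."""
    delta = alpha / math.sqrt(1 + alpha**2)
    rng = random.Random(seed)
    return [delta * abs(rng.gauss(0, 1))
            + math.sqrt(1 - delta**2) * rng.gauss(0, 1)
            for _ in range(n)]

xs = skew_normal_sample(alpha=4.0)
mean = sum(xs) / len(xs)
# theoretical mean is delta * sqrt(2/pi); the first two moments alone
# cannot capture the asymmetry such factors exhibit
print(round(mean, 2))
```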
Quantum mechanical Hamiltonian models of discrete processes
International Nuclear Information System (INIS)
Benioff, P.
1981-01-01
Here the results of other work on quantum mechanical Hamiltonian models of Turing machines are extended to include any discrete process T on a countably infinite set A. The models are constructed here by use of scattering phase shifts from successive scatterers to turn on successive step interactions. A locality requirement is also imposed. The construction is done by first associating with each process T a model quantum system M with associated Hilbert space H_M and step operator U_T. Since U_T is not unitary in general, M, H_M, and U_T are extended into a (continuous-time) Hamiltonian model on a larger space which satisfies the locality requirement. The construction is compared with the minimal unitary dilation of U_T. It is seen that the model constructed here is larger than the minimal one; however, the minimal one does not satisfy the locality requirement.
Atomic routing in a deterministic queuing model
Directory of Open Access Journals (Sweden)
T.L. Werth
2014-03-01
We also consider the makespan objective (arrival time of the last user) and show that optimal solutions and Nash equilibria in these games, where every user selfishly tries to minimize her travel time, can be found efficiently.
International Nuclear Information System (INIS)
Buonomano, V.; Engel, A.
1974-10-01
Some speculations on a causal model that seems to provide a common conceptual foundation for Relativity, Gravitation and Quantum Mechanics are presented. The present approach unifies three theories. The first is the repulsive theory of gravitational forces first proposed by Lesage in the eighteenth century. The second is the Brownian Motion Theory of Quantum Mechanics, or Stochastic Mechanics, which treats the non-deterministic nature of Quantum Mechanics as being due to a Brownian motion of all objects, this Brownian motion being caused by the statistical variation in the graviton flux. These two theories are unified with the Causal Theory of Special Relativity. Within the present context, the time dilations (and other effects) of Relativity are explained by assuming that the rate of a clock is a function of the total number or intensity of gravitons and of the average frequency or energy of the gravitons that the clock receives. The Special Theory would then be the special case of the General Theory where the intensity is constant but the average frequency varies. In all of the above it is necessary to assume a particular model of the creation of the universe, namely the Big Bang Theory. This assumption gives us the existence of a preferred reference frame, the frame in which the Big Bang explosion was at rest. The concepts of graviton distribution and real time dilations become meaningful by assuming the Big Bang Theory along with this preferred frame. An experimental test is proposed
Mathematical modeling in mechanics of heterogeneous media
International Nuclear Information System (INIS)
Fedorov, A.V.; Fomin, V.M.
1991-01-01
The paper reviews the work carried out at the Department of Multi-Phase Media Mechanics of the Institute of Theoretical and Applied Mechanics of the Siberian Division of the USSR Academy of Sciences. It deals with mathematical models for the flow of gas mixtures and solid particles that account for phase transitions and chemical reactions. This work is concerned with the problems of construction of laws of conservation, determination of the type of equations of heterogeneous media mechanics, structure of shock waves, and combined discontinuities in mixtures. The theory of ideal and nonideal detonation in suspension of matter in gases is discussed. Self-similar flows of gas mixtures and responding particles, as well as the problem of breakup of discontinuity for suspension of matter in gases, is studied. 42 refs
A Theory of Deterministic Event Structures
Lee, I.; Rensink, Arend; Smolka, S.A.
1995-01-01
We present an ω-complete algebra of a class of deterministic event structures, which are labelled prime event structures where the labelling function satisfies a certain distinctness condition. The operators of the algebra are summation, sequential composition and join. Each of these gives rise to a
The deterministic optical alignment of the HERMES spectrograph
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four-channel, VPH-grating spectrograph fed by two 400-fiber slit assemblies whose construction and commissioning have now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles by which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
A mechanical model of the smartphone's accelerometer
Gallitto, Aurelio Agliolo; Lupo, Lucia
2015-01-01
To increase the attention of students, several physics experiments can be performed at school, as well as at home, by using the smartphone as a laboratory tool. In the paper we describe a mechanical model of the smartphone's accelerometer, which can be used in the classroom to help students, even those at the beginning of their physics studies, better understand the principle of the accelerometer.
Interfacial Fluid Mechanics A Mathematical Modeling Approach
Ajaev, Vladimir S
2012-01-01
Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations, and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: Discusses mathematical models in the context of actual applications such as electrowetting; Includes unique material on fluid flow near structured surfaces and phase change phenomena; Shows readers how to solve modeling problems related to microscale multiphase flows.
Numerical modelling in non linear fracture mechanics
Directory of Open Access Journals (Sweden)
Viggo Tvergaard
2007-07-01
Some numerical studies of crack propagation are based on using constitutive models that account for damage evolution in the material. When a critical damage value has been reached in a material point, it is natural to assume that this point has no more carrying capacity, as is done numerically in the element-vanish technique. In the present review this procedure is illustrated for micromechanically based material models, such as a ductile failure model that accounts for the nucleation and growth of voids to coalescence, and a model for intergranular creep failure with diffusive growth of grain-boundary cavities leading to micro-crack formation. The procedure is also illustrated for low-cycle fatigue, based on continuum damage mechanics. In addition, the possibility of crack growth predictions for elastic-plastic solids using cohesive zone models to represent the fracture process is discussed.
Material modeling of biofilm mechanical properties.
Laspidou, C S; Spyrou, L A; Aravas, N; Rittmann, B E
2014-05-01
A biofilm material model and a procedure for numerical integration are developed in this article. They enable calculation of a composite Young's modulus that varies in the biofilm and evolves with deformation. The biofilm-material model makes it possible to introduce a modeling example, produced by the Unified Multi-Component Cellular Automaton model, into the general-purpose finite-element code ABAQUS. Compressive, tensile, and shear loads are imposed, and the way the biofilm mechanical properties evolve is assessed. Results show that the local values of Young's modulus increase under compressive loading, since compression results in the voids "closing," thus making the material stiffer. For the opposite reason, biofilm stiffness decreases when tensile loads are imposed. Furthermore, the biofilm is more compliant in shear than in compression or tension due to how the elastic shear modulus relates to Young's modulus. Copyright © 2014 Elsevier Inc. All rights reserved.
Recent achievements of the neo-deterministic seismic hazard assessment in the CEI region
International Nuclear Information System (INIS)
Panza, G.F.; Vaccari, F.; Kouteva, M.
2008-03-01
A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales: regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contributions of the earthquake source and of the seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on the CEI region, concerning both regional seismic hazard assessment and seismic microzonation of the selected metropolitan areas, are shown. (author)
Amezcua, Javier
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy (EnKBF) filter is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. Therefore, we present a suitable integration scheme that handles the stiffening of the differential equations involved and does not incur further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble detaches into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reverted by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble for different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely popular Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion
Directory of Open Access Journals (Sweden)
Shunsuke Nansai
2015-01-01
The Theo Jansen mechanism is gaining widespread popularity in the legged robotics community due to its scalable design, energy efficiency, low payload-to-machine-load ratio, bioinspired locomotion, and deterministic foot trajectory. In this paper, we perform for the first time the dynamic modeling and analysis of a four-legged robot driven by a single actuator and composed of Theo Jansen mechanisms. The projection method is applied to derive the equations of motion of this complex mechanical system, and a position control strategy based on energy is proposed. Numerical simulations validate the efficacy of the designed controller, thus setting a theoretical basis for further investigations on Theo Jansen based quadruped robots.
Mechanical Model Development for Composite Structural Supercapacitors
Ricks, Trenton M.; Lacy, Thomas E., Jr.; Santiago, Diana; Bednarcyk, Brett A.
2016-01-01
Novel composite structural supercapacitor concepts have recently been developed as a means both to store electrical charge and to provide modest mechanical load carrying capability. Double-layer composite supercapacitors are often fabricated by impregnating a woven carbon fiber fabric, which serves as the electrodes, with a structural polymer electrolyte. Polypropylene or a glass fabric is often used as the separator material. Recent research has been primarily limited to evaluating these composites experimentally. In this study, mechanical models based on the Multiscale Generalized Method of Cells (MSGMC) were developed and used to calculate the shear and tensile properties and response of two composite structural supercapacitors from the literature. The modeling approach was first validated against traditional composite laminate data. MSGMC models for composite supercapacitors were developed, and accurate elastic shear/tensile properties were obtained. It is envisioned that further development of the models presented in this work will facilitate the design of composite components for aerospace and automotive applications and can be used to screen candidate constituent materials for inclusion in future composite structural supercapacitor concepts.
A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT
International Nuclear Information System (INIS)
Goluoglu, S.; Bentley, C.; DeMeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H.L.
1998-01-01
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position-, energy-, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step-followed-by-step, and step-followed-by-ramp perturbations. Columnwise rod movement can also be modeled. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems
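In an IQS-style factorization the flux is split into a rapidly varying amplitude and a slowly varying shape; the amplitude obeys the point-kinetics equations with delayed neutrons. A one-delayed-group sketch of that amplitude problem (parameters are illustrative and unrelated to TDTORT's benchmarks):

```python
def point_kinetics(rho, beta=0.0065, Lambda=1e-4, lam=0.08,
                   dt=1e-4, t_end=1.0):
    """One-delayed-group point kinetics, integrated by explicit Euler:
        dn/dt = ((rho - beta)/Lambda) * n + lam * C
        dC/dt = (beta/Lambda) * n - lam * C
    n: neutron amplitude, C: delayed-neutron precursor concentration.
    Starts from the critical equilibrium n = 1, C = beta/(Lambda*lam).
    """
    n, C = 1.0, beta / (Lambda * lam)
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dt * dn
        C += dt * dC
        t += dt
    return n

# amplitude after 1 s following a +0.1% reactivity step:
# a prompt jump to about beta/(beta - rho), then a slow exponential rise
print(point_kinetics(rho=0.001))
```

The full IQS method re-solves the multidimensional shape equation (here, with TORT) only on a coarse time grid, while equations like the above are advanced on the fine grid.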
Kinetic mechanism for modeling of electrochemical reactions.
Cervenka, Petr; Hrdlička, Jiří; Přibyl, Michal; Snita, Dalimil
2012-04-01
We propose a kinetic mechanism of electrochemical interactions. We assume fast formation and recombination of electron donors D- and acceptors A+ on electrode surfaces. These mediators are continuously formed in the electrode matter by thermal fluctuations. The mediators D- and A+, chemically equivalent to the electrode metal, enter electrochemical interactions on the electrode surfaces. Electrochemical dynamics and current-voltage characteristics of a selected electrochemical system are studied. Our results are in good qualitative agreement with those given by the classical Butler-Volmer kinetics. The proposed model can be used to study fast electrochemical processes in microsystems and nanosystems that are often out of thermal equilibrium. Moreover, the kinetic mechanism operates only with the surface concentrations of chemical reactants and local electric potentials, which facilitates the study of electrochemical systems with indefinable bulk.
Directory of Open Access Journals (Sweden)
Christoph Hartmann
2015-12-01
Even in the absence of sensory stimulation the brain is spontaneously active. This background "noise" seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex, and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural
Mechanical Models of Fault-Related Folding
Energy Technology Data Exchange (ETDEWEB)
Johnson, A. M.
2003-01-09
The subject of the proposed research is fault-related folding and ground deformation. The results are relevant to oil-producing structures throughout the world, to understanding of damage that has been observed along and near earthquake ruptures, and to earthquake-producing structures in California and other tectonically-active areas. The objectives of the proposed research were to provide both a unified, mechanical infrastructure for studies of fault-related foldings and to present the results in computer programs that have graphical users interfaces (GUIs) so that structural geologists and geophysicists can model a wide variety of fault-related folds (FaRFs).
On nonlocal modeling in continuum mechanics
Directory of Open Access Journals (Sweden)
Adam Martowicz
2018-01-01
The objective of the paper is to provide an overview of nonlocal formulations for models of elastic solids. The author presents the physical foundations for nonlocal theories of continuum mechanics, followed by various analytical and numerical techniques. The characteristics and range of practical applications for the presented approaches are discussed. The results of numerical simulations for the selected case studies are provided to demonstrate the properties of the described methods. The paper is illustrated with outcomes from peridynamic analyses. Fatigue and axial stretching were simulated to show the capabilities of the developed numerical tools.
Deterministic nonlinear systems a short course
Anishchenko, Vadim S; Strelkova, Galina I
2014-01-01
This text is a short yet complete course on nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures, it reflects the many years of teaching experience by the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincaré recurrences. Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.
Deterministic nanoparticle assemblies: from substrate to solution
International Nuclear Information System (INIS)
Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J
2014-01-01
The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles, which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top-down and bottom-up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)
Deterministic analyses of severe accident issues
International Nuclear Information System (INIS)
Dua, S.S.; Moody, F.J.; Muralidharan, R.; Claassen, L.B.
2004-01-01
Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena alongside of probability methods to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications, and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined with subset construction rules) is used. Past work described only the algorithm for the AND-operator (intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
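The AND-operator case corresponds to the classical product construction on DFAs. A minimal sketch (the state encoding and the two example automata are illustrative, not the paper's overriding method on the source NFA):

```python
from collections import deque

def dfa_intersection(d1, d2):
    """Product construction: a DFA accepting L(d1) ∩ L(d2).
    Each DFA is (start, accepting_set, delta) with delta[(state, sym)] -> state.
    Only product states reachable from the joint start are built."""
    s1, acc1, t1 = d1
    s2, acc2, t2 = d2
    start = (s1, s2)
    syms = {sym for (_, sym) in t1} & {sym for (_, sym) in t2}
    delta, accepting = {}, set()
    seen, queue = {start}, deque([start])
    while queue:
        p, q = queue.popleft()
        if p in acc1 and q in acc2:
            accepting.add((p, q))
        for sym in syms:
            nxt = (t1[(p, sym)], t2[(q, sym)])
            delta[((p, q), sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return start, accepting, delta

def accepts(dfa, word):
    state, acc, delta = dfa
    for sym in word:
        state = delta[(state, sym)]
    return state in acc

# DFA 1: even number of 'a'; DFA 2: word ends with 'b'
even_a = (0, {0}, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1})
ends_b = (0, {1}, {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1})
both = dfa_intersection(even_a, ends_b)
print(accepts(both, "aab"))  # True: two a's and ends with b
print(accepts(both, "ab"))   # False: odd number of a's
```

Subtraction and complement follow the same pattern with the accepting condition changed (`p in acc1 and q not in acc2` for MINUS; complemented accepting set on a completed DFA for NOT).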
An examination of adaptive cellular protective mechanisms using a multi-stage carcinogenesis model
International Nuclear Information System (INIS)
Schollnberger, H.; Stewart, R. D.; Mitchel, R. E. J.; Hofmann, W.
2004-01-01
A multi-stage cancer model that describes the putative rate-limiting steps in carcinogenesis was developed and used to investigate the potential impact on lung cancer incidence of the hormesis mechanisms suggested by Feinendegen and Pollycove. In this deterministic cancer model, radiation and endogenous processes damage the DNA of target cells in the lung. Some fraction of the misrepaired or unrepaired DNA damage induces genomic instability and, ultimately, leads to the accumulation of malignant cells. The model accounts for cell birth and death processes. It also includes a rate of malignant transformation and a lag period for tumour formation. Cellular defence mechanisms are incorporated into the model by postulating dose and dose rate dependent radical scavenging. The accuracy of DNA damage repair also depends on dose and dose rate. Sensitivity studies were conducted to identify critical model inputs and to help define the shapes of the cumulative lung cancer incidence curves that may arise when dose and dose rate dependent cellular defence mechanisms are incorporated into a multi-stage cancer model. For lung cancer, both linear no-threshold (LNT) and non-LNT shaped responses can be obtained. The reported studies clearly show that it is critical to know whether or not, and to what extent, multiply damaged DNA sites are formed by endogenous processes. Model inputs that give rise to U-shaped responses are consistent with an effective cumulative lung cancer incidence threshold that may be as high as 300 mGy (4 mGy per year for 75 years). (Author) 11 refs
Recent progress in sorption mechanisms and models
International Nuclear Information System (INIS)
Fedoroff, M.; Lefevre, G.
2005-01-01
reactivity of the different faces. Finally, we know more about the sorption processes and are able to model them with a better agreement with the real sorption mechanisms. However, this progress concerns a few simple systems and a further task will be the application of this knowledge to more complex systems. (authors)
Incorporating damage mechanics into explosion simulation models
International Nuclear Information System (INIS)
Sammis, C.G.
1993-01-01
The source region of an underground explosion is commonly modeled as a nested series of shells. In the innermost "hydrodynamic regime", pressures and temperatures are sufficiently high that the rock deforms as a fluid and may be described using a PVT equation of state. Just beyond the hydrodynamic regime is the "non-linear regime", in which the rock has shear strength but the deformation is nonlinear. This regime extends out to the "elastic radius", beyond which the deformation is linear. In this paper, we develop a model for the non-linear regime in crystalline source rock where the nonlinearity is mostly due to fractures. We divide the non-linear regime into a "damage regime", in which the stresses are sufficiently high to nucleate new fractures from preexisting ones, and a "crack-sliding regime", where motion on preexisting cracks produces amplitude-dependent attenuation and other non-linear effects but no new cracks are nucleated. The boundary between these two regimes is called the "damage radius". The micromechanical damage mechanics recently developed by Ashby and Sammis (1990) is used to write an analytic expression for the damage radius in terms of the initial fracture spectrum of the source rock, and to develop an algorithm which may be used to incorporate damage mechanics into computer source models for the damage regime. Effects of water saturation and loading rate are also discussed
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut results to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the mountain river in southern Poland, the Raba River.
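A standard quantitative diagnostic behind conclusions of this kind is a positive largest Lyapunov exponent. As an illustration on a known deterministic-chaotic system (the logistic map, a stand-in here for the nonlinear time-series methods applied to the daily discharge data):

```python
import math

def lyapunov_logistic(r, n=100_000, x0=0.4):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    averaged as (1/n) * sum of log|f'(x_k)| along the orbit.
    A positive exponent indicates deterministic chaos."""
    x, acc = x0, 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        acc += math.log(abs(r * (1 - 2 * x)))
    return acc / n

print(lyapunov_logistic(4.0))  # positive (theoretically ln 2): chaotic
print(lyapunov_logistic(2.5))  # negative: stable fixed point, not chaotic
```

For measured series such as river discharge, the exponent is instead estimated from a delay-embedded reconstruction of the attractor, which is one of the independent techniques the study applies.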
Deterministic sensitivity analysis for the numerical simulation of contaminants transport
International Nuclear Information System (INIS)
Marchand, E.
2007-12-01
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
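The SVD-of-the-derivative idea can be sketched with a finite-difference Jacobian of a toy model; the model, its parameters, and the interpretation of inputs as subsoil properties are hypothetical stand-ins, not the ANDRA flow/transport codes:

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Forward finite-difference Jacobian of f: R^n -> R^m at x."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

# Toy stand-in for the model: outputs (e.g. contaminant fluxes through
# outlet channels) as a nonlinear function of input parameters.
def model(p):
    return np.array([p[0] * p[1],
                     p[0] + np.sin(p[2]),
                     p[1] * np.exp(0.1 * p[2])])

x = np.array([1.0, 2.0, 0.5])
J = jacobian_fd(model, x)
U, s, Vt = np.linalg.svd(J)
# Singular values rank the locally most influential input directions;
# the rows of Vt are those directions in parameter space.
print(s)
print(Vt[0])  # direction of greatest local sensitivity
```

This is the local, cheap counterpart to the Monte Carlo approach: one Jacobian (here by finite differences, in the thesis by automatic differentiation in direct and adjoint modes) replaces many full model runs, at the price of validity only near the nominal parameters.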
Soft error mechanisms, modeling and mitigation
Sayil, Selahattin
2016-01-01
This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation-induced clock jitter and pulses, and single event (SE) coupling-induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on a transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low-power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption…
Current algebra, statistical mechanics and quantum models
Vilela Mendes, R.
2017-11-01
Results obtained in the past for free boson systems at zero and nonzero temperatures are revisited to clarify the physical meaning of current algebra reducible functionals, which are associated with systems with density fluctuations, leading to observable effects on phase transitions. Using current algebra as a tool for the formulation of quantum statistical mechanics amounts to constructing unitary representations of diffeomorphism groups. Two mathematically equivalent procedures exist for this purpose: one searches for quasi-invariant measures on configuration spaces, the other for a cyclic vector in Hilbert space. Here it is argued that the second approach is closer to physical intuition when modelling complex systems. An example application of the current algebra methodology to the pairing phenomenon in two-dimensional fermion systems is discussed.
Cellular automata and statistical mechanical models
International Nuclear Information System (INIS)
Rujan, P.
1987-01-01
The authors elaborate on the analogy between the transfer matrix of usual lattice models and the master equation describing the time development of cellular automata. Transient and stationary properties of probabilistic automata are linked to surface and bulk properties, respectively, of restricted statistical mechanical systems. It is demonstrated that methods of statistical physics can be used successfully to describe both the dynamic and the stationary behavior of such automata. Some exact results are derived, including duality transformations, exact mappings, and disorder and linear solutions. Many examples are worked out in detail to demonstrate how to use statistical physics to construct cellular automata with desired properties. This approach is considered a first step toward the design of fully parallel, probabilistic systems whose computational abilities rely on the cooperative behavior of their components.
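The link between probabilistic cellular automata and statistical mechanics can be glimpsed in a few lines: a simple one-dimensional stochastic rule has an absorbing "all dead" state, and its stationary bulk density behaves like an order parameter. The rule below is a generic Domany-Kinzel-like toy invented for illustration, not one of the automata analyzed in the paper.

```python
import random

def step(cells, p, rng):
    """One synchronous update on a ring: a cell becomes 1 with
    probability p whenever at least one neighbour is 1, else 0."""
    n = len(cells)
    return [1 if (cells[i - 1] or cells[(i + 1) % n]) and rng.random() < p else 0
            for i in range(n)]

def stationary_density(p, n=200, steps=500, seed=1):
    """Bulk density after many updates, starting from all-active cells."""
    rng = random.Random(seed)
    cells = [1] * n
    for _ in range(steps):
        cells = step(cells, p, rng)
    return sum(cells) / n

# Below a critical p the absorbing state wins (density 0); above it a
# finite stationary density survives -- a nonequilibrium phase
# transition analogous to those of equilibrium statistical mechanics.
```

The transient toward the stationary density plays the role of a surface property, while the long-time density is the bulk property, mirroring the transfer-matrix analogy in the abstract.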
A solution to the biodiversity paradox by logical deterministic cellular automata.
Kalmykov, Lev V; Kalmykov, Vyacheslav L
2015-06-01
The paradox of biological diversity is a key problem of theoretical ecology. The paradox consists in the contradiction between the competitive exclusion principle and the observed biodiversity. The principle is important as a basis for ecological theory. On a relatively simple model we show a mechanism of indefinite coexistence of complete competitors that violates the known formulations of the competitive exclusion principle. This mechanism is based on timely recovery of limiting resources and their spatio-temporal allocation between competitors. Limitations of black-box modeling have made it difficult to formulate the exclusion principle correctly. Our white-box multiscale model of two-species competition is based on logical deterministic individual-based cellular automata. This approach provides automatic deductive inference on the basis of a system of axioms and gives direct insight into the mechanisms of the studied system; it is one of the most promising methods of artificial intelligence. We reformulate and generalize the competitive exclusion principle and explain why this formulation provides a solution to the biodiversity paradox. In addition, we propose a principle of competitive coexistence.
Deterministic effects of the ionizing radiation
International Nuclear Information System (INIS)
Raslawski, Elsa C.
2001-01-01
Full text: A deterministic effect is somatic damage that appears when the radiation dose exceeds a minimum value, the 'threshold dose'. Above this threshold dose, the frequency and severity of the damage increase with the dose given. Sixteen percent of patients younger than 15 years of age with a diagnosis of cancer have the possibility of a cure. The consequences of cancer treatment in children are very serious, as they are still developing physically and emotionally. The severity of the delayed effects of radiation therapy depends on three factors: a) the treatment (dose of radiation, schedule of treatment, time of treatment, beam energy, treatment volume, distribution of the dose, simultaneous chemotherapy, etc.); b) the patient (state of development, patient predisposition, inherent sensitivity of the tissue, the presence of other alterations, etc.); c) the tumor (degree of extension or infiltration, mechanical effects, etc.). The effect of radiation on normal tissue is related to cellular activity and the maturity of the tissue irradiated. Children have a mosaic of tissues in different stages of maturity at different moments in time. On the other hand, each tissue has a different pattern of development, so that sequelae differ between irradiated tissues of the same patient. We should keep in mind that all tissues are affected to some degree. Bone tissue shows damage through growth delay and altered calcification: damage is small at 10 Gy; between 10 and 20 Gy growth arrest is partial, whereas at doses larger than 20 Gy growth arrest is complete. The central nervous system is the most affected, because radiation injuries produce demyelination, with or without focal or diffuse areas of necrosis in the white matter, causing character alterations, lower IQ and functional level, neurocognitive impairment, etc. The skin is also affected, showing different degrees of erythema as well as ulceration and necrosis, different degrees of…
Quantum Gravity as a Dissipative Deterministic System
Hooft, G. 't
1999-01-01
It is argued that the so-called holographic principle will obstruct attempts to produce physically realistic models for the unification of general relativity with quantum mechanics, unless determinism in the latter is restored. The notion of time in GR is so different from the usual one in…
Stochastic and deterministic causes of streamer branching in liquid dielectrics
International Nuclear Information System (INIS)
Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl
2013-01-01
Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching, such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations, is inevitable in any dielectric. The fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make branching inevitable, depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is sufficiently thin and slow. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the propagating streamer head, even in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the resulting branches agree qualitatively with experimental images of streamer branching.
Deterministic and probabilistic approach to safety analysis
International Nuclear Information System (INIS)
Heuser, F.W.
1980-01-01
The examples discussed in this paper show that reliability analysis methods can be applied fairly well to interpret deterministic safety criteria in quantitative terms. For a further improved extension of applied reliability analysis, it has turned out that the influence of operational and control systems and of component protection devices should be considered in detail with the aid of reliability analysis methods. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)
Diffusion in Deterministic Interacting Lattice Systems
Medenjak, Marko; Klobas, Katja; Prosen, Tomaž
2017-09-01
We study reversible deterministic dynamics of classical charged particles on a lattice with hard-core interaction. It is rigorously shown that the system exhibits three types of transport phenomena, ranging from ballistic, through diffusive, to insulating. By obtaining exact expressions for the current time-autocorrelation function we are able to calculate the linear response transport coefficients, such as the diffusion constant and the Drude weight. Additionally, we calculate the long-time charge profile after an inhomogeneous quench and obtain a diffusive profile with the Green-Kubo diffusion constant. Exact analytical results are corroborated by Monte Carlo simulations.
Monitoring and modeling the aging mechanisms
International Nuclear Information System (INIS)
Le Pape, Yann; Courtois, Alexis; Ghavamian, Charles
2006-09-01
… The origin of these cracks may be differential drying shrinkage for the cracks near the raft, and deviated post-tensioning for those near the material hatch. The bi-directional prestress limits crack opening when the containment is over-pressurized. However, under sustained loads, concrete creep leads to a loss of prestress due to the deformation compatibility between the grouted tendons and the concrete mass. The safety margin may thus be affected, since crack opening may become larger with time. In order to optimize the extent of repairs, it is therefore essential to improve the prediction of the long-term mechanical behavior of the containment. This task requires: - improving the understanding of the delayed behavior, the so-called aging mechanisms, and developing realistic, i.e. less conservative, models specifically designed for the very particular loading conditions of the inner containment; - integrating monitored data into the numerical or analytical simulation; - evaluating the impact of concrete damage and loss of prestress on the hydraulic behavior. The paper addresses the following items: inner containment description and in-situ monitoring; concrete shrinkage and creep modeling; laboratory testing; numerical computation and comparison with monitored data; introducing monitored data into the computation; impact of damage on the leak tightness of the containment wall. The communication illustrates the general strategy adopted by EDF to assess the long-term integrity of NPP inner concrete containment vessels. All analyses and computations are performed on the standard zone of the concrete vessel. On-going research and development programs are focused on refining the methodology and applying it to a more realistic numerical model of the large-scale structure.
HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks
Directory of Open Access Journals (Sweden)
Luca Marchetti
2017-01-01
Full Text Available HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for efficient simulation of the models while ensuring exact simulation of the subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other simulators considered. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
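The exact stochastic kernel underlying simulators of this kind is Gillespie's Stochastic Simulation Algorithm. A minimal sketch for a single reaction A -> B follows; HRSSA's hybrid partitioning of the network into deterministically integrated fast reactions and exactly simulated slow reactions is only noted in the comments, not implemented.

```python
import random

def ssa(x_a, k, t_end, seed=0):
    """Gillespie's exact SSA for the single reaction A -> B with
    mass-action propensity a = k * x_a.  (A hybrid scheme such as HRSSA
    would additionally integrate a fast-reaction subset with an ODE
    solver while simulating slow reactions exactly like this.)"""
    rng = random.Random(seed)
    t, x_b = 0.0, 0
    while x_a > 0:
        a = k * x_a                     # total propensity
        t += rng.expovariate(a)         # exponential waiting time
        if t > t_end:
            break
        x_a, x_b = x_a - 1, x_b + 1     # fire A -> B
    return x_a, x_b

x_a_final, x_b_final = ssa(100, 1.0, 1e9)   # run to completion
```

Each iteration draws the exponentially distributed time to the next reaction from the current total propensity, which is what makes the trajectory statistically exact for the chemical master equation.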
International Nuclear Information System (INIS)
Barthelet, B.; Ardillon, E.
1997-01-01
The flaw acceptance rules for nuclear components rely on deterministic criteria supposed to ensure the safe operation of plants. The interest of having a reliable method of evaluating the safety margins and the integrity of components led Electricite de France to launch a study linking safety factors with target reliability. A simplified analytical probabilistic approach is developed to analyse the failure risk in fracture mechanics. Assuming lognormal distributions of the main random variables, it is possible, using a simple linear elastic fracture mechanics model, to determine the failure probability as a function of mean values and logarithmic standard deviations. The 'design' failure point can be calculated analytically. Partial safety factors on the main variables (stress, crack size, material toughness) are obtained in relation to target reliability values. The approach is generalized to elastic-plastic fracture mechanics (piping) by fitting J as a power-law function of stress, crack size and yield strength. The simplified approach is validated by detailed probabilistic computations with the PROBAN computer program. Assuming reasonable coefficients of variation (logarithmic standard deviations), the method helps to calibrate safety factors for different components, taking into account target reliability values in normal, emergency and faulted conditions. Statistical data on the mechanical properties of the main basic materials complement the study. The work involves laboratory results and manufacturing data. The results of this study are discussed within a working group of the French in-service inspection code RSE-M. (authors)
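The kind of closed-form result such a simplified lognormal approach exploits can be sketched directly: if resistance R and load S are both lognormal, ln R - ln S is normal, so the failure probability P(R < S) is a normal CDF of a reliability index beta. The sketch below assumes medians and logarithmic standard deviations as inputs; the numerical values are illustrative, not the study's data.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def failure_probability(m_r, s_r, m_s, s_s):
    """Pf = P(R < S) for lognormal resistance R (median m_r, log-std s_r)
    and lognormal load S (median m_s, log-std s_s): ln R - ln S is
    normal, so Pf = Phi(-beta) with
    beta = ln(m_r / m_s) / sqrt(s_r**2 + s_s**2)."""
    beta = math.log(m_r / m_s) / math.sqrt(s_r ** 2 + s_s ** 2)
    return phi(-beta), beta

pf, beta = failure_probability(2.0, 0.1, 1.0, 0.1)   # illustrative numbers
```

Partial safety factors then follow by splitting the margin m_r / m_s between the variables at the design point, which is the calibration step the abstract describes.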
Design and Analysis of a Low Latency Deterministic Network MAC for Wireless Sensor Networks.
Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin
2017-09-22
The IEEE 802.15.4e standard has four different superframe structures for different applications. Use of a low latency deterministic network (LLDN) superframe for wireless sensor networks is one of them, which can operate in a star topology. In this paper, a new channel access mechanism for IEEE 802.15.4e-based LLDN shared slots is proposed, and analytical models are designed based on this channel access mechanism. A prediction model is designed to estimate the possible number of retransmission slots based on the number of failed transmissions. Performance analyses in terms of data transmission reliability, delay, throughput and energy consumption are provided based on our proposed designs. Our designs are validated against simulation and analytical results, and it is observed that the simulation results match the analytical ones well. In addition, our designs are compared with the IEEE 802.15.4 MAC mechanism, and it is shown that ours outperforms it in terms of throughput, energy consumption, delay and reliability.
Use of deterministic methods in survey calculations for criticality problems
International Nuclear Information System (INIS)
Hutton, J.L.; Phenix, J.; Course, A.F.
1991-01-01
A code package using deterministic methods for solving the Boltzmann transport equation is the WIMS suite. This has been very successful in a range of situations; in particular, it has been used with great success to analyse trends in reactivity with a range of changes in state. The WIMS suite of codes offers a range of methods and is very flexible in the way they can be combined. A wide variety of situations can be modelled, ranging through all the current thermal reactor variants to storage systems and items of chemical plant. These methods have recently been enhanced by the introduction of the CACTUS method. This is based on a characteristics technique for solving the transport equation and has the advantage that complex geometrical situations can be treated. In this paper the basis of the method is outlined and examples of its use are illustrated. In parallel with these developments, the validation for out-of-pile situations has been extended to include experiments of relevance to criticality situations. The paper summarises this evidence and shows how these results point to a partial re-adoption of deterministic methods for some areas of criticality. The paper also presents results illustrating the use of WIMS in criticality situations, and in particular shows how it can complement codes such as MONK when used for surveying the reactivity effect due to changes in geometry or materials. (Author)
Strongly Deterministic Population Dynamics in Closed Microbial Communities
Directory of Open Access Journals (Sweden)
Zak Frentz
2015-10-01
Full Text Available Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES) as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring the abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of the three species. The observation of strongly deterministic dynamics, together with a stable structure of correlations in response to external perturbations, points towards the possibility of simple macroscopic laws governing microbial systems despite the numerous stochastic events present at the microscopic level.
Shock-induced explosive chemistry in a deterministic sample configuration.
Energy Technology Data Exchange (ETDEWEB)
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Safety margins in deterministic safety analysis
International Nuclear Information System (INIS)
Viktorov, A.
2011-01-01
The concept of safety margins has acquired certain prominence in the attempts to demonstrate quantitatively the level of the nuclear power plant safety by means of deterministic analysis, especially when considering impacts from plant ageing and discovery issues. A number of international or industry publications exist that discuss various applications and interpretations of safety margins. The objective of this presentation is to bring together and examine in some detail, from the regulatory point of view, the safety margins that relate to deterministic safety analysis. In this paper, definitions of various safety margins are presented and discussed along with the regulatory expectations for them. Interrelationships of analysis input and output parameters with corresponding limits are explored. It is shown that the overall safety margin is composed of several components each having different origins and potential uses; in particular, margins associated with analysis output parameters are contrasted with margins linked to the analysis input. While these are separate, it is possible to influence output margins through the analysis input, and analysis method. Preserving safety margins is tantamount to maintaining safety. At the same time, efficiency of operation requires optimization of safety margins taking into account various technical and regulatory considerations. For this, basic definitions and rules for safety margins must be first established. (author)
Streamflow disaggregation: a nonlinear deterministic approach
Directory of Open Access Journals (Sweden)
B. Sivakumar
2004-01-01
Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space to represent the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. Further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
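The two steps of the approach, phase-space reconstruction and local (nearest-neighbor) approximation, can be sketched for the simpler forecasting case. The series here is a synthetic sine wave standing in for streamflow, and the unit delay, embedding dimension, and neighbor count are illustrative choices, not the study's calibrated values.

```python
import math

def embed(series, m):
    """Unit-delay reconstruction of m-dimensional phase-space states."""
    return [tuple(series[i:i + m]) for i in range(len(series) - m + 1)]

def nn_predict(series, m=2, k=3):
    """Predict the next value of a scalar series by averaging the
    successors of the k nearest neighbours of the current state
    (the local approximation step)."""
    states = embed(series, m)
    target = states[-1]                           # current state
    candidates = []
    for i, s in enumerate(states[:-1]):
        dist = sum((a - b) ** 2 for a, b in zip(s, target))
        candidates.append((dist, series[i + m]))  # value that followed s
    candidates.sort(key=lambda c: c[0])
    return sum(v for _, v in candidates[:k]) / k

series = [math.sin(0.3 * i) for i in range(200)]  # synthetic deterministic series
pred = nn_predict(series)                         # forecast of sin(0.3 * 200)
```

For disaggregation, the same neighbor search would be applied to coarse-scale states, with the historical fine-scale splits of the neighbors averaged to distribute the coarse value.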
Design of deterministic OS for SPLC
International Nuclear Information System (INIS)
Son, Choul Woong; Kim, Dong Hoon; Son, Gwang Seop
2012-01-01
Existing safety PLCs used in nuclear power plants operate on priority-based scheduling, in which the highest-priority task runs first. This scheduling scheme determines processing order when there are multiple requests for processing or a lack of available resources, guaranteeing execution of higher-priority tasks. It is prone to resource exhaustion and continuous preemption by high-priority devices, and therefore introduces uncertainty, in every period, about the smooth running of the overall system. Hence, it is difficult to apply this scheme where deterministic operation is required, such as in a nuclear power plant. Also, existing PLCs either have no output logic for redundant device selection or have it set in a fixed way; as a result, they are extremely inefficient for redundant systems such as those of a nuclear power plant, and their use is limited. Therefore, functional modules that can manage and control all devices need to be developed, making the way priorities are assigned among devices more flexible. A management module should be able to schedule all devices of the system, manage resources, analyze device states, give warnings in abnormal situations such as device failure or resource scarcity, and decide how to handle them. The management module should also have output logic for device redundancy, as well as deterministic processing capabilities such as handling device interrupt events.
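The deterministic alternative to priority-based scheduling is, in its simplest form, a static cyclic executive: every task owns a fixed time slot in a repeating major cycle, so start and end times are known in advance and no burst of high-priority work can starve the rest. The task names and budgets below are hypothetical placeholders, not the SPLC design described in the paper.

```python
# Static schedule: each (task, budget) pair owns a fixed slot in every
# major cycle -- unlike priority-preemptive scheduling, where a burst of
# high-priority work can delay everything else unpredictably.
SCHEDULE = [("read_inputs", 2), ("control_logic", 5),
            ("write_outputs", 2), ("diagnostics", 1)]

def run_major_cycle(schedule, start=0):
    """Return the deterministic (task, start, end) window of every slot
    plus the total major-cycle length."""
    timeline, t = [], start
    for task, budget in schedule:
        timeline.append((task, t, t + budget))
        t += budget
    return timeline, t

timeline, cycle_len = run_major_cycle(SCHEDULE)
```

Because the timetable is fixed offline, worst-case timing analysis reduces to checking that each task fits its budget, which is the determinism property the abstract asks for.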
ONKALO rock mechanics model (RMM). Version 2.3
Energy Technology Data Exchange (ETDEWEB)
Haekkinen, T.; Merjama, S.; Moenkkoenen, H. [WSP Finland, Helsinki (Finland)
2014-07-15
The Rock Mechanics Model of the ONKALO rock volume includes the most important rock mechanics features and parameters at the Olkiluoto site. The main objective of the model is to be a tool to predict rock properties, rock quality and hence provide an estimate for the rock stability of the potential repository at Olkiluoto. The model includes a database of rock mechanics raw data and a block model in which the rock mechanics parameters are estimated through block volumes based on spatial rock mechanics raw data. In this version 2.3, special emphasis was placed on refining the estimation of the block model. The model was divided into rock mechanics domains which were used as constraints during the block model estimation. During the modelling process, a display profile and toolbar were developed for the GEOVIA Surpac software to improve visualisation and access to the rock mechanics data for the Olkiluoto area. (orig.)
Development of 3D Oxide Fuel Mechanics Models
Energy Technology Data Exchange (ETDEWEB)
Spencer, B. W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Casagranda, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pitts, S. A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, W. [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2017-07-27
This report documents recent work to improve the accuracy and robustness of the mechanical constitutive models used in the BISON fuel performance code. These developments include migration of the fuel mechanics models to be based on the MOOSE Tensor Mechanics module, improving the robustness of the smeared cracking model, implementing a capability to limit the time step size based on material model response, and improving the robustness of the return mapping iterations used in creep and plasticity models.
Molecular dynamics with deterministic and stochastic numerical methods
Leimkuhler, Ben
2015-01-01
This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method…
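The prototypical deterministic method in this setting is the velocity Verlet integrator, whose symplecticity keeps the energy error bounded over long trajectories instead of drifting. A minimal sketch on a harmonic oscillator follows; the force law, step size, and run length are illustrative.

```python
def velocity_verlet(force, x, v, dt, steps, m=1.0):
    """Symplectic velocity-Verlet integration of one degree of freedom:
    the workhorse deterministic scheme of molecular dynamics."""
    a = force(x) / m
    traj = [(x, v)]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt       # position update
        a_new = force(x) / m
        v = v + 0.5 * (a + a_new) * dt           # velocity update
        a = a_new
        traj.append((x, v))
    return traj

k = 1.0                                          # harmonic force F = -k*x
traj = velocity_verlet(lambda x: -k * x, 1.0, 0.0, 0.05, 2000)
energies = [0.5 * v * v + 0.5 * k * x * x for x, v in traj]
```

A Langevin thermostat, by contrast, would add friction and random kicks to the velocity update, which is the stochastic side of the book's title.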
Location deterministic biosensing from quantum-dot-nanowire assemblies
International Nuclear Information System (INIS)
Liu, Chao; Kim, Kwanoh; Fan, D. L.
2014-01-01
Semiconductor quantum dots (QDs), with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location-deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes onto QDs at the tips of nanowires before detection, offering much enhanced efficiency and sensitivity in addition to predictable positioning. This research could lead to advances in QD-based biomedical detection and suggests an innovative approach for fabricating various QD-based nanodevices.
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine reserve adequate to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. The modeling of operating reserves in existing deterministic reserve requirements acquires the operating reserves on a zonal basis and does not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modeling uncertainties, scalability and pricing issues remain. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch…
Mechanical-mathematical modeling for landslide process
Svalova, V.
2009-04-01
500 m and displacement of the landslide in plan of over 1 m. The last serious activation of the landslide took place in 2002, with a movement of 53 cm. Catastrophic activation of the deep block-glide landslide in the Khoroshevo area of Moscow took place in 2006-2007. A 330 m long crack appeared in the old sliding circus, along which a new 220 m long creeping block separated from the plateau and began sinking, with the displaced surface of the plateau reaching 12 m. Such activation of the landslide process had not been observed in Moscow since the mid-19th century. The Khoroshevo sliding area had been stable for a long time without manifestations of activity. Revealing the causes of the deformation and developing means of protection against deep landslide motions is an extremely relevant and difficult problem whose solution is necessary for the preservation of valuable historical monuments and modern city constructions. The reasons for the activation and protective measures are discussed. The structure of a monitoring system for urban territories is elaborated. A mechanical-mathematical model of a highly viscous fluid was used for modelling the behaviour of matter on landslide slopes. The continuity equation and an approximated Navier-Stokes equation for slow motion in a thin layer were used. The modelling results make it possible to determine the location of highest velocity on the landslide surface, which could be the best place for a monitoring post. The model can be used for calibration of monitoring equipment and makes it possible to investigate some fundamental aspects of matter movement on landslide slopes.
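The thin-layer, slow-motion approximation mentioned in the abstract (continuity plus a simplified Navier-Stokes equation) leads to the standard lubrication-theory velocity profile for a viscous layer on a slope. The sketch below illustrates that idea; all material parameters (density, viscosity, slope angle, layer thickness) are illustrative placeholders, not values from the paper.

```python
import math

def thin_layer_velocity(z, h, rho=2000.0, mu=1e8, g=9.81, slope_deg=10.0):
    """Downslope velocity u(z) of a slow, thin layer of high-viscosity fluid
    on an inclined plane (lubrication approximation of the Navier-Stokes
    equations with no slip at the base and a stress-free surface):
        u(z) = (rho * g * sin(theta) / mu) * (h*z - z**2 / 2)
    z is height above the sliding base, h the layer thickness. Parameter
    values are assumptions for illustration only."""
    theta = math.radians(slope_deg)
    return rho * g * math.sin(theta) / mu * (h * z - z * z / 2.0)

# Velocity vanishes at the base (no slip) and peaks at the free surface,
# which is why the surface is the natural location for a monitoring post.
h = 5.0  # assumed layer thickness in metres
profile = [thin_layer_velocity(z, h) for z in (0.0, h / 2, h)]
```

The monotone increase of the profile toward the free surface is the model's rationale for placing monitoring equipment at the surface point of highest velocity.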
Optimization of structures subjected to dynamic load: deterministic and probabilistic methods
Directory of Open Access Journals (Sweden)
Élcio Cassimiro Alves
Full Text Available Abstract This paper deals with the deterministic and probabilistic optimization of structures against bending when subjected to dynamic loads. The deterministic optimization problem considers the plate subjected to a time-varying load, while the probabilistic one takes into account a random loading defined by a power spectral density function. The correlation between the two problems is made by a Fourier transform. The finite element method is used to model the structures. The sensitivity analysis is performed through the analytical method, and the optimization problem is handled by an interior point method. A comparison between the deterministic optimization and the probabilistic one, with a power spectral density function compatible with the time-varying load, shows very good results.
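The Fourier-transform link between the deterministic (time-varying load) and probabilistic (power spectral density) descriptions can be sketched with a simple periodogram estimate. This is a generic illustration of the PSD concept, not the paper's actual loading; the 5 Hz harmonic and the sampling rate are assumptions.

```python
import numpy as np

def one_sided_psd(x, dt):
    """Periodogram estimate of the one-sided power spectral density of a
    sampled load history x(t). Scaled so that integrating the PSD over
    frequency recovers the mean-square value of the signal (Parseval)."""
    n = len(x)
    X = np.fft.rfft(x)
    psd = 2.0 * dt / n * np.abs(X) ** 2
    psd[0] /= 2.0           # DC bin has no conjugate partner
    if n % 2 == 0:
        psd[-1] /= 2.0      # neither does the Nyquist bin for even n
    freqs = np.fft.rfftfreq(n, dt)
    return freqs, psd

# Illustrative time-varying load: a 5 Hz harmonic sampled at 100 Hz.
dt = 0.01
t = np.arange(0, 10, dt)
load = np.sin(2 * np.pi * 5 * t)
freqs, psd = one_sided_psd(load, dt)
peak_freq = freqs[np.argmax(psd)]  # power concentrates at the driving frequency
```

A PSD "compatible with the time-varying load", in the sense of the abstract, is one whose frequency content matches that of the deterministic load history in exactly this way.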
Deterministic and stochastic evolution equations for fully dispersive and weakly nonlinear waves
DEFF Research Database (Denmark)
Eldeberky, Y.; Madsen, Per A.
1999-01-01
This paper presents a new and more accurate set of deterministic evolution equations for the propagation of fully dispersive, weakly nonlinear, irregular, multidirectional waves. The equations are derived directly from the Laplace equation with leading order nonlinearity in the surface boundary ... is significantly underestimated for larger wave numbers. In the present work we correct this inconsistency. In addition to the improved deterministic formulation, we present improved stochastic evolution equations in terms of the energy spectrum and the bispectrum for multidirectional waves. The deterministic and stochastic formulations are solved numerically for the case of cross-shore motion of unidirectional waves and the results are verified against laboratory data for wave propagation over submerged bars and over a plane slope. Outside the surf zone the two model predictions are generally in good agreement ...
Surface effects in solid mechanics models, simulations and applications
Altenbach, Holm
2013-01-01
This book reviews current understanding, and future trends, of surface effects in solid mechanics. Covers elasticity, plasticity and viscoelasticity, modeling based on continuum theories and molecular modeling and applications of different modeling approaches.
Directory of Open Access Journals (Sweden)
Tim ePalmer
2015-10-01
Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
International Nuclear Information System (INIS)
Jepps, Owen G; Rondoni, Lamberto
2010-01-01
Deterministic 'thermostats' are mathematical tools used to model nonequilibrium steady states of fluids. The resulting dynamical systems correctly represent the transport properties of these fluids and are easily simulated on modern computers. More recently, the connection between such thermostats and entropy production has been exploited in the development of nonequilibrium fluid theories. The purpose and limitations of deterministic thermostats are discussed in the context of irreversible thermodynamics and the development of theories of nonequilibrium phenomena. We draw parallels between the development of such nonequilibrium theories and the development of notions of ergodicity in equilibrium theories. (topical review)
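A minimal example of the deterministic thermostats discussed in the review is the Nosé-Hoover scheme applied to a single harmonic oscillator. The sketch below is an illustration under assumed parameters, not code from the review; note that this particular system is also the textbook case where a thermostat can fail to be ergodic, which connects to the review's parallel with equilibrium ergodicity.

```python
import math

def nose_hoover_oscillator(steps=100000, dt=0.001, T=1.0, Q=1.0):
    """Integrate a harmonic oscillator coupled to a Nose-Hoover thermostat:
        dq/dt = p,  dp/dt = -q - xi*p,  dxi/dt = (p**2 - T)/Q
    The feedback variable xi pumps energy in or out of the system so that
    the long-time average of p**2 is driven toward the target temperature T.
    Returns the time average of p**2 along the trajectory."""
    q, p, xi = 0.0, 1.5, 0.0
    avg_p2 = 0.0
    for _ in range(steps):
        # Simple explicit Euler step; adequate for illustration only.
        q += dt * p
        p += dt * (-q - xi * p)
        xi += dt * (p * p - T) / Q
        avg_p2 += p * p
    return avg_p2 / steps

avg = nose_hoover_oscillator()
```

Because the thermostatted oscillator largely moves on invariant tori, a single trajectory's average of p**2 need not equal T exactly; this non-ergodicity is precisely the kind of limitation the review examines.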
Palmer, Tim N; O'Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
Inferring hierarchical clustering structures by deterministic annealing
International Nuclear Information System (INIS)
Hofmann, T.; Buhmann, J.M.
1996-01-01
The unsupervised detection of hierarchical structures is a major topic in unsupervised learning and one of the key questions in data analysis and representation. We propose a novel algorithm for the problem of learning decision trees for data clustering and related problems. In contrast to many other methods based on successive tree growing and pruning, we propose an objective function for tree evaluation and we derive a non-greedy technique for tree growing. Applying the principles of maximum entropy and minimum cross entropy, a deterministic annealing algorithm is derived in a mean-field approximation. This technique allows us to canonically superimpose tree structures and to fit parameters to averaged or 'fuzzified' trees.
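The maximum-entropy annealing idea described above can be sketched for plain (non-hierarchical) clustering: soft assignments follow a Gibbs distribution at temperature T, and T is lowered so the 'fuzzified' solution hardens. This is a generic two-cluster, one-dimensional illustration, not the authors' tree-learning algorithm; the schedule and data are assumptions.

```python
import math

def deterministic_annealing(points, t_start=5.0, t_end=0.01, cooling=0.9):
    """Two-center deterministic-annealing clustering of 1-D points.
    At each temperature T, soft assignments p(c|x) ~ exp(-(x-c)^2 / T)
    (a Gibbs distribution) and centers are refitted to the weighted means;
    T is then lowered so fuzzy clusters gradually harden."""
    centers = [min(points), max(points)]  # deterministic, well-separated start
    T = t_start
    while T > t_end:
        for _ in range(20):  # alternate soft-assignment and refit at fixed T
            sums = [[0.0, 0.0], [0.0, 0.0]]  # per center: [weighted sum, weight]
            for x in points:
                w = [math.exp(-((x - c) ** 2) / T) for c in centers]
                z = sum(w)
                for j in range(2):
                    sums[j][0] += w[j] / z * x
                    sums[j][1] += w[j] / z
            centers = [s / max(n, 1e-12) for s, n in sums]
        T *= cooling  # anneal: lower the temperature
    return sorted(centers)

centers = deterministic_annealing([0.0, 0.1, 0.2, 4.9, 5.0, 5.1])
```

The annealing schedule plays the role of the maximum-entropy principle: at high T every point belongs a little to every cluster, and distinct clusters emerge only as T falls.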
Deterministic effects of interventional radiology procedures
International Nuclear Information System (INIS)
Shope, Thomas B.
1997-01-01
The purpose of this paper is to describe deterministic radiation injuries reported to the Food and Drug Administration (FDA) that resulted from therapeutic, interventional procedures performed under fluoroscopic guidance, and to investigate the procedure- or equipment-related factors that may have contributed to the injury. Reports submitted to the FDA under both mandatory and voluntary reporting requirements which described radiation-induced skin injuries from fluoroscopy were investigated. Serious skin injuries, including moist desquamation and tissue necrosis, have occurred since 1992. These injuries have resulted from a variety of interventional procedures which have required extended periods of fluoroscopy compared to typical diagnostic procedures. Facilities conducting therapeutic interventional procedures need to be aware of the potential for patient radiation injury and take appropriate steps to limit the potential for injury. (author)
Deterministic Chaos in Radon Time Variation
International Nuclear Information System (INIS)
Planinic, J.; Vukovic, B.; Radolic, V.; Faj, Z.; Stanic, D.
2003-01-01
Radon concentrations were continuously measured outdoors, in a living room and in a basement at 10-minute intervals for a month. The radon time series were analyzed by comparing algorithms to extract phase-space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, but the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos appearing in radon time variations. The calculated fractal dimensions of attractors indicated that more (meteorological) parameters influence radon in the atmosphere. (author)
Radon time variations and deterministic chaos
Energy Technology Data Exchange (ETDEWEB)
Planinic, J. E-mail: planinic@pedos.hr; Vukovic, B.; Radolic, V
2004-07-01
Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10 min intervals for a month. Radon time series were analyzed by comparing algorithms to extract phase space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from the rescaled range analysis, Lyapunov exponent (λ) and attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series.
Radon time variations and deterministic chaos
International Nuclear Information System (INIS)
Planinic, J.; Vukovic, B.; Radolic, V.
2004-01-01
Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10 min intervals for a month. Radon time series were analyzed by comparing algorithms to extract phase space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from the rescaled range analysis, Lyapunov exponent (λ) and attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, but the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos that appeared in the radon time variations. The calculated fractal dimensions of attractors indicated that more (meteorological) parameters influence radon in the atmosphere
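The rescaled-range (R/S) analysis used in these radon studies can be sketched as follows: for windows of increasing size n, compute the range R of the mean-adjusted cumulative sum divided by the standard deviation S, and fit the slope of log(R/S) against log(n) to estimate H. The synthetic test series are assumptions for illustration, not the radon data.

```python
import math
import random

def hurst_rescaled_range(series, min_chunk=8):
    """Estimate the Hurst exponent H by rescaled-range (R/S) analysis.
    H ~ 0.5 for uncorrelated noise, H < 0.5 for anti-persistent series,
    H near 1 for strongly persistent (trend-like) series."""
    n_total = len(series)
    log_sizes, log_rs = [], []
    size = min_chunk
    while size <= n_total // 2:
        rs_list = []
        for start in range(0, n_total - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            dev, cum, lo, hi = 0.0, 0.0, 0.0, 0.0
            for x in chunk:
                cum += x - mean              # mean-adjusted cumulative sum
                lo, hi = min(lo, cum), max(hi, cum)
                dev += (x - mean) ** 2
            s = math.sqrt(dev / size)        # standard deviation S
            if s > 0:
                rs_list.append((hi - lo) / s)  # rescaled range R/S
        if rs_list:
            log_sizes.append(math.log(size))
            log_rs.append(math.log(sum(rs_list) / len(rs_list)))
        size *= 2
    # Least-squares slope of log(R/S) against log(n) gives H.
    m = len(log_sizes)
    sx, sy = sum(log_sizes) / m, sum(log_rs) / m
    num = sum((x - sx) * (y - sy) for x, y in zip(log_sizes, log_rs))
    den = sum((x - sx) ** 2 for x in log_sizes)
    return num / den

random.seed(1)
h_noise = hurst_rescaled_range([random.gauss(0, 1) for _ in range(4096)])
h_trend = hurst_rescaled_range([0.01 * i for i in range(4096)])
```

White noise lands near H = 0.5 (with the well-known upward small-sample bias of R/S), while a trend-dominated series approaches H = 1; the radon measurements falling below 0.5 is what the authors read as anti-persistence.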
Primality deterministic and primality probabilistic tests
Directory of Open Access Journals (Sweden)
Alfredo Rizzi
2007-10-01
Full Text Available In this paper the author comments on the importance of prime numbers in mathematics and in cryptography. He recalls the important research of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that give prime numbers; among them, Mersenne primes have interesting properties. There are also many conjectures that still have to be proved or rejected. Deterministic primality tests are algorithms that establish whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computing time would be too long. Probabilistic primality tests allow one to test the null hypothesis that a number is prime. The paper comments on the most important statistical tests.
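The contrast drawn above can be made concrete with the Miller-Rabin test, the standard probabilistic primality test used in public-key cryptography. Each round either proves the number composite or leaves the hypothesis "n is prime" standing with error probability at most 1/4; the paper discusses tests of this kind, though this particular sketch is a generic illustration.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test. Returning False is a
    proof of compositeness; returning True means 'prime' with error
    probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # quick trial division by small primes
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)             # modular exponentiation a**d mod n
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a is a witness: n is definitely composite
    return True
```

Mersenne numbers illustrate both outcomes: 2**31 - 1 is a Mersenne prime and passes, while 2**11 - 1 = 2047 = 23 x 89 is composite and is almost surely caught.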
Mesh generation and energy group condensation studies for the jaguar deterministic transport code
International Nuclear Information System (INIS)
Kennedy, R. A.; Watson, A. M.; Iwueke, C. I.; Edwards, E. J.
2012-01-01
The deterministic transport code Jaguar is introduced, and the modeling process for Jaguar is demonstrated using a two-dimensional assembly model of the Hoogenboom-Martin Performance Benchmark Problem. This single assembly model is being used to test and analyze optimal modeling methodologies and techniques for Jaguar. This paper focuses on spatial mesh generation and energy condensation techniques. In this summary, the models and processes are defined as well as thermal flux solution comparisons with the Monte Carlo code MC21. (authors)
Mesh generation and energy group condensation studies for the jaguar deterministic transport code
Energy Technology Data Exchange (ETDEWEB)
Kennedy, R. A.; Watson, A. M.; Iwueke, C. I.; Edwards, E. J. [Knolls Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P.O. Box 1072, Schenectady, NY 12301-1072 (United States)
2012-07-01
The deterministic transport code Jaguar is introduced, and the modeling process for Jaguar is demonstrated using a two-dimensional assembly model of the Hoogenboom-Martin Performance Benchmark Problem. This single assembly model is being used to test and analyze optimal modeling methodologies and techniques for Jaguar. This paper focuses on spatial mesh generation and energy condensation techniques. In this summary, the models and processes are defined as well as thermal flux solution comparisons with the Monte Carlo code MC21. (authors)
Numerical modeling of polar mesocyclones generation mechanisms
Sergeev, Dennis; Stepanenko, Victor
2013-04-01
Polar mesocyclones, commonly referred to as polar lows, remain of great interest due to their complicated dynamics. These mesoscale vortices are small, short-lived objects that form over the observation-sparse high-latitude oceans, and therefore their evolution can hardly be observed or predicted numerically. The origin of polar mesoscale cyclones is still a matter of uncertainty, though recent numerical investigations [3] have exposed a strong dependence of polar mesocyclone development on the magnitude of baroclinicity. Nevertheless, most previous studies focused on an individual polar low (so-called case studies), with too many factors affecting it simultaneously. None of the earlier studies suggested a clear picture of polar mesocyclone generation within an idealized experiment, where it is possible to look deeper into each single physical process. The present paper concentrates on the initial triggering mechanism of the polar mesocyclone. As reported by many researchers, some mesocyclones are formed by surface forcing, namely an uneven distribution of heat fluxes. That feature is common at ice boundaries [2], where an intense air stream flows from the cold ice surface to the warm sea surface. Hence, the resulting conditions are shallow baroclinicity and strong surface heat fluxes, which provide an arising polar mesocyclone with a source of potential energy that is converted into the kinetic energy of the vortex. It is shown in this paper that different surface characteristics, including thermal parameters and, for example, the shape of the ice edge, determine the initial phase of a polar low's life cycle. Moreover, it is shown which initial atmospheric state is most favorable for the formation of a new polar mesocyclone or for maintaining and reinforcing an existing one. The study is based on idealized high-resolution (~2 km) numerical experiments in which baroclinicity, stratification, initial wind profile and disturbance, surface
The cognitive life of mechanical molecular models.
Charbonneau, Mathieu
2013-12-01
The use of physical models of molecular structures as research tools has been central to the development of biochemistry and molecular biology. Intriguingly, it has received little attention from scholars of science. In this paper, I argue that these physical models are not mere three-dimensional representations but that they are in fact very special research tools: they are cognitive augmentations. Despite the fact that they are external props, these models serve as cognitive tools that augment and extend the modeler's cognitive capacities and performance in molecular modeling tasks. This cognitive enhancement is obtained because of the way the modeler interacts with these models, the models' materiality contributing to the solving of the molecule's structure. Furthermore, I argue that these material models and their component parts were designed, built and used specifically to serve as cognitive facilitators and cognitive augmentations. Copyright © 2013 Elsevier Ltd. All rights reserved.
Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data
Directory of Open Access Journals (Sweden)
I. Kioutsioukis
2016-12-01
Full Text Available Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as to those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in the time or frequency domain. The two different datasets were created for the first and second phase of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground-level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate O3 better than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each station's best deterministic model at no more than 60 % of the sites, indicating a combination of members with unbalanced skill difference and error dependence for the rest. The promotion of the right amount of accuracy and diversity within the ensemble results in an average additional skill of up to 31 % compared to using the full ensemble in an unconditional way. The skill improvements were higher for O3 and lower for PM10, associated with the extent of potential changes in the joint
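One simple way to "add optimum weights to members", in the spirit of the weighted methods mentioned above, is a least-squares fit of member weights against observations with a sum-to-one constraint. This is a generic sketch, not the AQMEII authors' method; the two synthetic "models" and their error levels are assumptions.

```python
import numpy as np

def optimum_ensemble_weights(members, obs):
    """Least-squares weights for a multi-model ensemble: choose w to
    minimise ||members.T @ w - obs||^2 subject to sum(w) = 1, solved via
    Lagrange multipliers. members has shape (n_models, n_times)."""
    n_models = members.shape[0]
    # Stationarity: 2*M@M.T@w + lam*1 = 2*M@obs, together with 1.T@w = 1.
    A = np.zeros((n_models + 1, n_models + 1))
    A[:n_models, :n_models] = 2.0 * members @ members.T
    A[:n_models, n_models] = 1.0
    A[n_models, :n_models] = 1.0
    b = np.concatenate([2.0 * members @ obs, [1.0]])
    return np.linalg.solve(A, b)[:n_models]

# Two synthetic 'models': one accurate, one biased and noisy.
rng = np.random.default_rng(0)
obs = rng.normal(size=200)
members = np.vstack([obs + 0.1 * rng.normal(size=200),        # good model
                     obs + 1.0 + 0.5 * rng.normal(size=200)])  # biased model
w = optimum_ensemble_weights(members, obs)
```

The weights concentrate on the accurate member, so the weighted combination beats the unconditional ensemble mean; the abstract's point is that such gains materialise only when member skill and error dependence are balanced appropriately.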
Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.
Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.
1996-01-01
Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate
2D deterministic radiation transport with the discontinuous finite element method
International Nuclear Information System (INIS)
Kershaw, D.; Harte, J.
1993-01-01
This report provides a complete description of the analytic and discretized equations for 2D deterministic radiation transport. This computational model has been checked against a wide variety of analytic test problems and found to give excellent results. We make extensive use of the discontinuous finite element method
Realization of deterministic quantum teleportation with solid state qubits
International Nuclear Information System (INIS)
Andreas Wallfraff
2014-01-01
Using modern micro- and nano-fabrication techniques combined with superconducting materials we realize electronic circuits the dynamics of which are governed by the laws of quantum mechanics. Making use of the strong interaction of photons with superconducting quantum two-level systems realized in these circuits we investigate both fundamental quantum effects of light and applications in quantum information processing. In this talk I will discuss the deterministic teleportation of a quantum state in a macroscopic quantum system. Teleportation may be used for distributing entanglement between distant qubits in a quantum network and for realizing universal and fault-tolerant quantum computation. Previously, we have demonstrated the implementation of a teleportation protocol, up to the single-shot measurement step, with three superconducting qubits coupled to a single microwave resonator. Using full quantum state tomography and calculating the projection of the measured density matrix onto the basis of two qubits has allowed us to reconstruct the teleported state with an average output state fidelity of 86%. Now we have realized a new device in which four qubits are coupled pair-wise to three resonators. Making use of parametric amplifiers coupled to the output of two of the resonators we are able to perform high-fidelity single-shot read-out. This has allowed us to demonstrate teleportation by individually post-selecting on any Bell state and by deterministically distinguishing between all four Bell states measured by the sender. In addition, we have recently implemented fast feed-forward to complete the teleportation process. In all instances, we demonstrate that the fidelity of the teleported states is above the threshold imposed by classical physics. The presented experiments are expected to contribute towards realizing quantum communication with microwave photons in the foreseeable future. (author)
International Nuclear Information System (INIS)
Zio, Enrico
2014-01-01
Highlights: • IDPSA contributes to robust risk-informed decision making in nuclear safety. • IDPSA considers time-dependent interactions among component failures and system process. • Also, IDPSA considers time-dependent interactions among control and operator actions. • Computational efficiency by advanced Monte Carlo and meta-modelling simulations. • Efficient post-processing of IDPSA output by clustering and data mining. - Abstract: Integrated deterministic and probabilistic safety assessment (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, like nuclear, aerospace and process ones, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these and discuss the related implications in terms of research perspectives
Energy Technology Data Exchange (ETDEWEB)
Zio, Enrico, E-mail: enrico.zio@ecp.fr [Ecole Centrale Paris and Supelec, Chair on System Science and the Energetic Challenge, European Foundation for New Energy – Electricite de France (EDF), Grande Voie des Vignes, 92295 Chatenay-Malabry Cedex (France); Dipartimento di Energia, Politecnico di Milano, Via Ponzio 34/3, 20133 Milano (Italy)
2014-12-15
Highlights: • IDPSA contributes to robust risk-informed decision making in nuclear safety. • IDPSA considers time-dependent interactions among component failures and system process. • Also, IDPSA considers time-dependent interactions among control and operator actions. • Computational efficiency by advanced Monte Carlo and meta-modelling simulations. • Efficient post-processing of IDPSA output by clustering and data mining. - Abstract: Integrated deterministic and probabilistic safety assessment (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, like nuclear, aerospace and process ones, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these and discuss the related implications in terms of research perspectives.
Integrable models in classical and quantum mechanics
International Nuclear Information System (INIS)
Jurco, B.
1991-01-01
Integrable systems are investigated, especially the rational and trigonometric Gaudin models. The Gaudin models are diagonalized for the case of classical Lie algebras. Their relation to the other integrable models and to the quantum inverse scattering method is investigated. Applications in quantum optics and plasma physics are discussed. (author). 94 refs
Technical report on micro-mechanical versus conventional modelling in non-linear fracture mechanics
International Nuclear Information System (INIS)
2001-07-01
While conventional fracture mechanics is capable of predicting crack growth behaviour if sufficient experimental observations are available, micro-mechanical modelling can both increase the accuracy of these predictions and model phenomena that are inaccessible to the conventional theory, such as the ductile-cleavage temperature transition. A common argument against micro-mechanical modelling is that it is too complicated for use in routine engineering applications. This is both a computational and an educational problem. That micro-mechanical modelling is unnecessarily complicated is certainly true in many situations. The ongoing development of micro-mechanical models, computational algorithms and computer speed will however most probably diminish the computational problem rather rapidly; compare, for instance, the rate of development of computational methods for structural analysis. Meanwhile, micro-mechanical modelling may serve as a tool by which more simplified engineering methods can be validated. The process of gaining wide acceptance of the new methods is probably much slower. This involves many steps. First the research community must be in reasonable agreement on the methods and their use. Then the methods have to be implemented into computer software and into code procedures. The development and acceptance of conventional fracture mechanics may serve as a historical example of the time required before a new methodology gains wide usage. The CSNI Working Group on Integrity and Ageing (IAGE) decided to carry out a report on micro-mechanical modelling to promote this promising and valuable technique. The report presents a comparison with non-linear fracture mechanics and highlights key aspects that could lead to better knowledge and accurate predictions. Content: - 1. Introduction; - 2. Concepts of non-linear fracture mechanics with point crack tip modelling; - 3. Micro-mechanical models for cleavage fracture; - 4. Micro-mechanical modelling of
Rock mechanics models evaluation report: Draft report
International Nuclear Information System (INIS)
1985-10-01
This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The end result of the KT analysis is a balanced, documented recommendation of the codes and models which are best suited to conceptual subsurface design for the salt repository. The various laws for modeling the creep of rock salt are also reviewed in this report. 37 refs., 1 fig., 7 tabs
Recognition of deterministic ETOL languages in logarithmic space
DEFF Research Database (Denmark)
Jones, Neil D.; Skyum, Sven
1977-01-01
It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian...
An introduction to mathematical modeling a course in mechanics
Oden, Tinsley J
2011-01-01
A modern approach to mathematical modeling, featuring unique applications from the field of mechanics An Introduction to Mathematical Modeling: A Course in Mechanics is designed to survey the mathematical models that form the foundations of modern science and incorporates examples that illustrate how the most successful models arise from basic principles in modern and classical mathematical physics. Written by a world authority on mathematical theory and computational mechanics, the book presents an account of continuum mechanics, electromagnetic field theory, quantum mechanics, and statistical mechanics for readers with varied backgrounds in engineering, computer science, mathematics, and physics. The author streamlines a comprehensive understanding of the topic in three clearly organized sections: Nonlinear Continuum Mechanics introduces kinematics as well as force and stress in deformable bodies; mass and momentum; balance of linear and angular momentum; conservation of energy; and constitutive equation...
Simiu, Emil
2002-01-01
The classical Melnikov method provides information on the behavior of deterministic planar systems that may exhibit transitions, i.e. escapes from and captures into preferred regions of phase space. This book develops a unified treatment of deterministic and stochastic systems that extends the applicability of the Melnikov method to physically realizable stochastic planar systems with additive, state-dependent, white, colored, or dichotomous noise. The extended Melnikov method yields the novel result that motions with transitions are chaotic regardless of whether the excitation is deterministic or stochastic. It explains the role in the occurrence of transitions of the characteristics of the system and its deterministic or stochastic excitation, and is a powerful modeling and identification tool. The book is designed primarily for readers interested in applications. The level of preparation required corresponds to the equivalent of a first-year graduate course in applied mathematics. No previous exposure to d...
International Nuclear Information System (INIS)
Guillermier, Pierre; Daniel, Lucile; Gauthier, Laurent
2009-01-01
To support AREVA NP in its design of the HTR reactor and its HTR fuel R and D program, the Commissariat a l'Energie Atomique developed the ATLAS code (Advanced Thermal mechanicaL Analysis Software) with the objectives: - to quantify, with a statistical approach, the failed particle fraction and fission product release of an HTR fuel core under normal and accidental conditions (compact or pebble design); - to simulate irradiation tests or benchmarks in order to compare measurements or other codes' results with the ATLAS evaluation. These two objectives aim at qualifying the code in order to predict fuel behaviour and to design fuel according to core performance and safety requirements. A statistical calculation uses numerous deterministic calculations. The finite element method is used for these deterministic calculations, in order to be able to choose among three types of meshes, depending on what must be simulated: - One-dimensional calculation of one single particle, for intact particles or particles with fully debonded layers. - Two-dimensional calculations of one single particle, in the case of particles which are cracked, partially debonded or shaped in various ways. - Three-dimensional calculations of a whole compact slice, in order to simulate the interactions between the particles, the thermal gradient and the transport of fission products up to the coolant. - Some calculations of a whole pebble, using homogenization methods, are being studied. The temperatures, displacements, stresses, strains and fission product concentrations are calculated on each mesh of the model. Statistical calculations are done using these results, taking into account the ceramic failure mode, but also fabrication tolerances and material property uncertainties, variations of the loads (fluence, temperature, burn-up) and core data parameters. The statistical method used in ATLAS is importance sampling. The model of migration of long-lived fission products in the coated particle and more
Mechanical modeling of skeletal muscle functioning
van der Linden, B.J.J.J.
1998-01-01
Movement of the body or body segments requires the combined effort of the central nervous system and the muscular-skeletal system. This thesis deals with the mechanical functioning of skeletal muscle. That muscles come in a large variety of geometries suggests the existence of a relation between muscle
Numerical modelling of self healing mechanisms
Remmers, J.J.C.; Borst, de R.; Zwaag, van der S.
2007-01-01
A number of self healing mechanisms for composite materials have been presented in the previous chapters of this book. These methods vary from the classical concept of micro-encapsulating of healing agents in polymer systems to the autonomous healing of concrete. The key feature of these self
Experimental aspects of deterministic secure quantum key distribution
Energy Technology Data Exchange (ETDEWEB)
Walenta, Nino; Korn, Dietmar; Puhlmann, Dirk; Felbinger, Timo; Hoffmann, Holger; Ostermeyer, Martin [Universitaet Potsdam (Germany). Institut fuer Physik; Bostroem, Kim [Universitaet Muenster (Germany)
2008-07-01
Most common protocols for quantum key distribution (QKD) use non-deterministic algorithms to establish a shared key. But deterministic implementations can allow for higher net key transfer rates and eavesdropping detection rates. The Ping-Pong coding scheme by Bostroem and Felbinger [1] employs deterministic information encoding in entangled states, with its characteristic quantum channel from Bob to Alice and back to Bob. Based on a table-top implementation of this protocol with polarization-entangled photons, fundamental advantages as well as practical issues like transmission losses, photon storage and requirements for progress towards longer transmission distances are discussed and compared to non-deterministic protocols. Modifications of common protocols towards deterministic quantum key distribution are addressed.
Energy Technology Data Exchange (ETDEWEB)
Natsuki, Toshiaki [Shinshu University, Faculty of Textile Science and Technology, Ueda (Japan); Shinshu University, Institute of Carbon Science and Technology, Nagano (Japan); Natsuki, Jun [Shinshu University, Institute of Carbon Science and Technology, Nagano (Japan)
2017-04-15
Mechanical behaviors of nanomaterials are not easy to evaluate in the laboratory because of their extremely small size and the difficulty of controlling them. Thus, a suitable model for estimating the mechanical properties of nanomaterials becomes very important. In this study, the elastic properties of boron nitride (BN) nanosheets, including the elastic modulus, the shear modulus, and the Poisson's ratio, are predicted using a molecular mechanics model. The molecular mechanics force field is established by directly incorporating the Morse potential function into the constitutive model of the nanostructures. Based on the molecular mechanics model, the chirality effect of hexagonal BN nanosheets on the elastic modulus is investigated through a closed-form solution. The simulated results show that BN nanosheets exhibit an isotropic elastic property. The present analysis yields a set of very simple formulas and can serve as a good approximation of the mechanical properties of BN nanosheets. (orig.)
International Nuclear Information System (INIS)
Natsuki, Toshiaki; Natsuki, Jun
2017-01-01
Mechanical behaviors of nanomaterials are not easy to evaluate in the laboratory because of their extremely small size and the difficulty of controlling them. Thus, a suitable model for estimating the mechanical properties of nanomaterials becomes very important. In this study, the elastic properties of boron nitride (BN) nanosheets, including the elastic modulus, the shear modulus, and the Poisson's ratio, are predicted using a molecular mechanics model. The molecular mechanics force field is established by directly incorporating the Morse potential function into the constitutive model of the nanostructures. Based on the molecular mechanics model, the chirality effect of hexagonal BN nanosheets on the elastic modulus is investigated through a closed-form solution. The simulated results show that BN nanosheets exhibit an isotropic elastic property. The present analysis yields a set of very simple formulas and can serve as a good approximation of the mechanical properties of BN nanosheets. (orig.)
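The Morse potential that such molecular mechanics force fields build on can be evaluated directly. The sketch below uses illustrative parameter values (De, a, r0), not fitted B-N constants from the paper; it checks the standard property that the bond stiffness at equilibrium equals 2·De·a².

```python
import math

# Hypothetical Morse parameters for a pair bond -- illustrative values only.
De, a, r0 = 4.0, 2.0, 1.45  # well depth (eV), width (1/Angstrom), equilibrium length (Angstrom)

def morse(r):
    """Morse pair potential U(r) = De * (1 - exp(-a (r - r0)))**2."""
    return De * (1.0 - math.exp(-a * (r - r0))) ** 2

# Bond stiffness = curvature at equilibrium, d2U/dr2 at r0 = 2 * De * a**2;
# this is the quantity a molecular-mechanics force field maps onto an
# elastic constant of the sheet.
h = 1e-5
k_numeric = (morse(r0 + h) - 2 * morse(r0) + morse(r0 - h)) / h**2
k_exact = 2 * De * a**2

assert morse(r0) == 0.0
assert abs(k_numeric - k_exact) < 1e-3
```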
Czech Academy of Sciences Publication Activity Database
Růžička, V.; Malíková, Lucie; Seitl, Stanislav
2017-01-01
Roč. 11, č. 42 (2017), s. 128-135 ISSN 1971-8993 R&D Projects: GA ČR GA17-01589S Institutional support: RVO:68081723 Keywords : Over-deterministic * Fracture mechanics * Rounding numbers * Stress field * Williams’ expansion Subject RIV: JL - Materials Fatigue, Friction Mechanics OBOR OECD: Audio engineering, reliability analysis
International Nuclear Information System (INIS)
Wanne, Toivo; Johansson, Erik; Potyondy, David
2004-02-01
SKB is planning to perform a large-scale pillar stability experiment called APSE (Aespoe Pillar Stability Experiment) at the Aespoe HRL. The study is focused on understanding and controlling progressive rock failure in hard crystalline rock and damage caused by high stresses. The elastic thermo-mechanical modeling was carried out in three dimensions, because of the complex test geometry and in-situ stress tensor, using the finite-difference modeling software FLAC3D. Cracking and damage formation were modeled in the area of interest (the pillar between two large-scale holes) in two dimensions using the Particle Flow Code (PFC), which is based on particle mechanics. FLAC and PFC were coupled to minimize the computer resources and the computing time. According to the modeling, the initial temperature rises from 15 deg C to about 65 deg C in the pillar area during the heating period of 120 days. The rising temperature induces stresses in the pillar area through thermal expansion; after 120 days of heating, the stresses have increased by about 33%, from the excavation-induced maximum stress of 150 MPa to 200 MPa at the end of the heating period. The FLAC3D model identified the regions where the crack initiation stress was exceeded; they extended to about two meters down the hole wall. These can be considered the areas where damage may occur during the in-situ test. When the other hole is pressurized with a 0.8 MPa confining pressure, about 5 MPa more stress is needed to damage the rock than without confining pressure, which makes the damaged area somewhat smaller. High compressive stresses, in addition to some tensile stresses, might induce acoustic emission (AE) activity in the upper part of the hole from the very beginning of the test; these are thus potential areas where AE activity may be detected. Acoustic emissions will be monitored during the test execution. The 2D coupled PFC-FLAC modeling indicated that
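The reported heating-induced stress rise can be sanity-checked with a back-of-the-envelope constrained-thermal-expansion estimate. The property values below are typical for hard crystalline rock and are illustrative assumptions, not the inputs of the FLAC3D model.

```python
# For laterally constrained elastic rock, the thermally induced stress
# increase is approximately Delta_sigma = E * alpha * Delta_T / (1 - nu).
E = 76e9        # Young's modulus, Pa (assumed, typical crystalline rock)
alpha = 7e-6    # linear thermal expansion coefficient, 1/K (assumed)
nu = 0.25       # Poisson's ratio (assumed)
dT = 50.0       # heating from 15 to 65 deg C

d_sigma = E * alpha * dT / (1 - nu)   # Pa

# Same order of magnitude as the ~50 MPa rise (150 -> 200 MPa) in the abstract.
assert 20e6 < d_sigma < 80e6
```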
Energy Technology Data Exchange (ETDEWEB)
Wanne, Toivo; Johansson, Erik; Potyondy, David [Saanio and Riekkola Oy, Helsinki (Finland)
2004-02-01
SKB is planning to perform a large-scale pillar stability experiment called APSE (Aespoe Pillar Stability Experiment) at the Aespoe HRL. The study is focused on understanding and controlling progressive rock failure in hard crystalline rock and damage caused by high stresses. The elastic thermo-mechanical modeling was carried out in three dimensions, because of the complex test geometry and in-situ stress tensor, using the finite-difference modeling software FLAC3D. Cracking and damage formation were modeled in the area of interest (the pillar between two large-scale holes) in two dimensions using the Particle Flow Code (PFC), which is based on particle mechanics. FLAC and PFC were coupled to minimize the computer resources and the computing time. According to the modeling, the initial temperature rises from 15 deg C to about 65 deg C in the pillar area during the heating period of 120 days. The rising temperature induces stresses in the pillar area through thermal expansion; after 120 days of heating, the stresses have increased by about 33%, from the excavation-induced maximum stress of 150 MPa to 200 MPa at the end of the heating period. The FLAC3D model identified the regions where the crack initiation stress was exceeded; they extended to about two meters down the hole wall. These can be considered the areas where damage may occur during the in-situ test. When the other hole is pressurized with a 0.8 MPa confining pressure, about 5 MPa more stress is needed to damage the rock than without confining pressure, which makes the damaged area somewhat smaller. High compressive stresses, in addition to some tensile stresses, might induce acoustic emission (AE) activity in the upper part of the hole from the very beginning of the test; these are thus potential areas where AE activity may be detected. Acoustic emissions will be monitored during the test execution. The 2D coupled PFC-FLAC modeling indicated that
Modeling the mechanical behavior of tantalum
International Nuclear Information System (INIS)
Lee, B.J.; Ahzi, S.
1997-01-01
A crystal plasticity model is proposed to simulate the large plastic deformation and texture evolution in tantalum over a wide range of strain rates. In the model, a modification of the viscoplastic power law for slip and a Taylor interaction law for polycrystals are employed, which account for the effects of strain hardening, strain-rate hardening, and thermal softening. A series of uniaxial compression tests in tantalum at strain rates ranging from 10⁻³ to 10⁴ s⁻¹ were conducted and used to verify the model's simulated stress-strain response. Initial and evolved deformation textures were also measured for comparison with predicted textures from the model. Applications of this crystal plasticity model are made to examine the effect of different initial crystallographic textures in tantalum subjected to uniaxial compression deformation or biaxial tensile deformation
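The viscoplastic power law for slip that such models start from can be sketched in a few lines. This is the textbook form of the law, not the paper's modified version, and the parameters are illustrative assumptions rather than tantalum constants.

```python
# Standard viscoplastic power law for slip on a slip system:
# gamma_dot = gamma_dot_0 * (tau / g)**n, where tau is the resolved shear
# stress, g the slip resistance, and n = 1/m the rate-sensitivity exponent.
gamma_dot_0 = 1e-3   # reference shear rate, 1/s (assumed)
n = 20               # rate-sensitivity exponent (assumed)

def slip_rate(tau, g):
    """Shear rate from resolved shear stress tau and slip resistance g."""
    return gamma_dot_0 * (tau / g) ** n

# Strong rate sensitivity: doubling the resolved stress multiplies the
# slip rate by 2**n.
assert abs(slip_rate(100.0, 200.0) * 2 ** n - slip_rate(200.0, 200.0)) < 1e-12
```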
Animal behavior models of the mechanisms underlying antipsychotic atypicality.
Geyer, M.A.; Ellenbroek, B.A.
2003-01-01
This review describes the animal behavior models that provide insight into the mechanisms underlying the critical differences between the actions of typical vs. atypical antipsychotic drugs. Although many of these models are capable of differentiating between antipsychotic and other psychotropic
Workshop on multifactor aging mechanisms and models
Agarwal, V. K.
1992-10-01
There have been considerable efforts to understand the aging and failure mechanisms of insulation in electrical systems. However, progress has been slow because of the complex nature of the subject, particularly when dealing with multiple stresses, e.g. electrical, thermal, mechanical, radiation, humidity and other environmental factors. When an insulating material is exposed to just one stress factor, e.g. an electric field, one must devise tests which are not only economically efficient and practical but which take into account the nature of the electric field (ac, dc and pulsed), its duration and level or field strength, and the field configurations. Any additional stress factors make the matrix of measurements, and the understanding of the resulting degradation processes, more complex, time consuming and expensive.
Deterministic flows of order-parameters in stochastic processes of quantum Monte Carlo method
International Nuclear Information System (INIS)
Inoue, Jun-ichi
2010-01-01
In terms of the stochastic process of the quantum-mechanical version of the Markov chain Monte Carlo method (MCMC), we analytically derive macroscopically deterministic flow equations of order parameters, such as the spontaneous magnetization, in infinite-range (d = ∞ dimensional) quantum spin systems. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding (d + 1)-dimensional classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. In the steady state, we show that the equations are identical to the saddle-point equations for the equilibrium state of the same system. The equation for the dynamical Ising model is recovered in the classical limit. We also check the validity of the static approximation by means of computer simulations for finite-size systems, and discuss several possible extensions of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we use our procedure to evaluate the decoding process of Bayesian image restoration. With the assistance of the concept of dynamical replica theory (DRT), we derive the zero-temperature flow equation of the image restoration measure, which shows some 'non-monotonic' behaviour in its time evolution.
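The classical limit mentioned above, the dynamical mean-field Ising model, is easy to integrate directly. The sketch below is that classical-limit flow only (not the quantum Trotter construction): dm/dt = -m + tanh(βJm), whose fixed point is exactly the equilibrium saddle-point equation m = tanh(βJm); β and J are illustrative values.

```python
import math

# Mean-field Glauber flow for an infinite-range ferromagnet.
beta, J = 2.0, 1.0  # beta*J > 1, i.e. below T_c, so m flows to a nonzero value

m, dt = 0.5, 0.01
for _ in range(5000):              # forward-Euler integration of dm/dt
    m += dt * (-m + math.tanh(beta * J * m))

# The steady state satisfies the self-consistency (saddle-point) equation.
assert abs(m - math.tanh(beta * J * m)) < 1e-6
assert m > 0.9  # for beta*J = 2 the stable fixed point is m ~ 0.957
```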
Deterministic dense coding and entanglement entropy
International Nuclear Information System (INIS)
Bourdon, P. S.; Gerjuoy, E.; McDonald, J. P.; Williams, H. T.
2008-01-01
We present an analytical study of the standard two-party deterministic dense-coding protocol, under which communication of perfectly distinguishable messages takes place via a qudit from a pair of nonmaximally entangled qudits in a pure state |ψ>. Our results include the following: (i) We prove that it is possible for a state |ψ> with lower entanglement entropy to support the sending of a greater number of perfectly distinguishable messages than one with higher entanglement entropy, confirming a result suggested via numerical analysis in Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. (ii) By explicit construction of families of local unitary operators, we verify, for dimensions d=3 and d=4, a conjecture of Mozes et al. about the minimum entanglement entropy that supports the sending of d+j messages, 2 ≤ j ≤ d-1; moreover, we show that the j=2 and j=d-1 cases of the conjecture are valid in all dimensions. (iii) Given that |ψ> allows the sending of K messages and has √λ₀ as its largest Schmidt coefficient, we show that the inequality λ₀ ≤ d/K, established by Wu et al. [Phys. Rev. A 73, 042311 (2006)], must actually take the form λ₀ < d/K if K=d+1, while our constructions of local unitaries show that equality can be realized if K=d+2 or K=2d-1
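The bound in result (iii) can be illustrated numerically. The Schmidt spectrum below is a hypothetical example, not a state from the paper; the code only evaluates the inequality λ₀ ≤ d/K for it.

```python
# lambda_0 = largest squared Schmidt coefficient, K = number of perfectly
# distinguishable messages, d = qudit dimension. Illustrative numbers only.
d = 4
schmidt_sq = [0.55, 0.25, 0.15, 0.05]  # hypothetical spectrum, sums to 1
assert abs(sum(schmidt_sq) - 1.0) < 1e-12

lam0 = max(schmidt_sq)
# Largest K that the Wu et al. bound lambda_0 <= d/K permits for this state:
k_max = int(d / lam0)
assert k_max == 7  # 4 / 0.55 = 7.27..., so at most 7 messages

# The paper sharpens the bound to a strict inequality when K = d + 1;
# this particular state satisfies it comfortably:
K = d + 1
assert lam0 < d / K  # 0.55 < 0.8
```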
Analysis of pinching in deterministic particle separation
Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German
2011-11-01
We investigate the problem of spherical particles settling vertically under gravity (parallel to the Y-axis) through a pinching gap created by an obstacle (spherical or cylindrical, centered at the origin) and a wall (normal to the X-axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation: (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately; (2) computationally, using the lattice Boltzmann method for particulate systems; and (3) experimentally, by conducting macroscopic experiments. Both theory and simulations show that, for a given initial separation between the particle centre and the Y-axis, the presence of a wall pushes the particles closer to the obstacle than its absence. Experimentally, this is expected to result in an early onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET-0731032, CMMI-0748094, and CBET-0954840.
Basic mechanisms of MCD in animal models.
Battaglia, Giorgio; Becker, Albert J; LoTurco, Joseph; Represa, Alfonso; Baraban, Scott C; Roper, Steven N; Vezzani, Annamaria
2009-09-01
Epilepsy-associated glioneuronal malformations (malformations of cortical development [MCD]) include focal cortical dysplasias (FCD) and highly differentiated glioneuronal tumors, most frequently gangliogliomas. The neuropathological findings are variable but suggest aberrant proliferation, migration, and differentiation of neural precursor cells as essential pathogenetic elements. Recent advances in animal models for MCDs allow new insights in the molecular pathogenesis of these epilepsy-associated lesions. Novel approaches, presented here, comprise RNA interference strategies to generate and study experimental models of subcortical band heterotopia and study functional aspects of aberrantly shaped and positioned neurons. Exciting analyses address impaired NMDA receptor expression in FCD animal models compared to human FCDs and excitatory imbalances in MCD animal models such as lissencephaly gene ablated mice as well as in utero irradiated rats. An improved understanding of relevant pathomechanisms will advance the development of targeted treatment strategies for epilepsy-associated malformations.
Multi-layer composite mechanical modeling for the inhomogeneous biofilm mechanical behavior.
Wang, Xiaoling; Han, Jingshi; Li, Kui; Wang, Guoqing; Hao, Mudong
2016-08-01
Experiments have shown that bacterial biofilms are heterogeneous: for example, the density, the diffusion coefficient, and the mechanical properties of the biofilm differ along the biofilm thickness. In this paper, we establish a multi-layer composite model to describe the mechanical inhomogeneity of the biofilm, based on the unified multiple-component cellular automaton (UMCCA) model. Using our model, we develop a finite element simulation procedure for the biofilm tension experiment. The failure limit and biofilm extension displacement obtained from our model agree well with experimental measurements. This method provides an alternative theory for studying mechanical inhomogeneity in biological materials.
Between Laws and Models: Some Philosophical Morals of Lagrangian Mechanics
Butterfield, Jeremy
2004-01-01
I extract some philosophical morals from some aspects of Lagrangian mechanics. (A companion paper will present similar morals from Hamiltonian mechanics and Hamilton-Jacobi theory.) One main moral concerns methodology: Lagrangian mechanics provides a level of description of phenomena which has been largely ignored by philosophers, since it falls between their accustomed levels--``laws of nature'' and ``models''. Another main moral concerns ontology: the ontology of Lagrangian mechanics is bot...
Modeling the mechanisms of biological GTP hydrolysis
DEFF Research Database (Denmark)
Carvalho, Alexandra T.P.; Szeler, Klaudia; Vavitsas, Konstantinos
2015-01-01
Enzymes that hydrolyze GTP are currently in the spotlight, due to their molecular switch mechanism that controls many cellular processes. One of the best-known classes of these enzymes are small GTPases such as members of the Ras superfamily, which catalyze the hydrolysis of the γ-phosphate bond...... in GTP. In addition, the availability of an increasing number of crystal structures of translational GTPases such as EF-Tu and EF-G have made it possible to probe the molecular details of GTP hydrolysis on the ribosome. However, despite a wealth of biochemical, structural and computational data, the way...
Mechanical Impedance Modeling of Human Arm: A survey
Puzi, A. Ahmad; Sidek, S. N.; Sado, F.
2017-03-01
Human arm mechanical impedance plays a vital role in describing the motion ability of the upper limb. One of the impedance parameters is stiffness, which is defined as the ratio of an applied force to the measured deformation of the muscle. Arm mechanical impedance modeling is useful for developing better controllers for systems that interact with humans, such as automated robot-assisted platforms for rehabilitation training. The aim of this survey is to summarize the existing mechanical impedance models of the human upper limb, so as to justify the need for an improved version of the arm model that would facilitate the development of better controllers for such systems of ever-increasing complexity. In particular, the paper addresses the following issues: human motor control and motor learning, constant and variable impedance models, methods for measuring mechanical impedance, and mechanical impedance modeling techniques.
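The standard second-order impedance model underlying such surveys is F = M·ẍ + B·ẋ + K·x. The sketch below simulates it with illustrative values of M, B, K (not measured human-arm parameters) and checks the defining stiffness relation: under a constant force the arm settles at x = F/K.

```python
# Mass-damper-spring arm impedance model, integrated with semi-implicit Euler.
M, B, K = 1.5, 8.0, 100.0   # kg, N*s/m, N/m -- assumed illustrative values
F = 5.0                     # N, constant applied force

x, v, dt = 0.0, 0.0, 1e-4
for _ in range(200_000):    # 20 s of simulated time, enough to settle
    a = (F - B * v - K * x) / M
    v += a * dt
    x += v * dt

# Steady state recovers the stiffness definition: deformation = force / stiffness.
assert abs(x - F / K) < 1e-4
```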
Neuroevolution Mechanism for Hidden Markov Model
Directory of Open Access Journals (Sweden)
Nabil M. Hewahi
2011-12-01
Hidden Markov Model (HMM) is a statistical model based on probabilities. HMM is becoming one of the major models involved in many applications such as natural language processing, handwritten recognition, image processing, prediction systems and many more. In this research we are concerned with finding the best HMM for a certain application domain. We propose a neuroevolution process that is based first on converting the HMM to a neural network, then generating many neural networks at random, where each represents an HMM. We proceed by applying genetic operators to obtain a new set of neural networks, each representing an HMM, and updating the population. Finally, we select the best neural network based on a fitness function.
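The evolutionary loop described above can be sketched in miniature. The paper encodes HMMs as neural networks; the toy below keeps the mutate-and-select genetic loop but skips the network representation, evolving a 2-state binary-emission HMM (all parameters and the training sequence are invented for illustration) with sequence likelihood as the fitness function.

```python
import random

random.seed(1)

obs = [0, 0, 1, 1, 1, 0, 1, 1]  # hypothetical training observation sequence

def likelihood(params):
    """Normalised forward-pass likelihood of obs under a 2-state HMM."""
    a01, a10, e0, e1 = params          # transition probs and P(obs=1 | state)
    A = [[1 - a01, a01], [a10, 1 - a10]]
    alpha = [0.5, 0.5]                 # uniform initial state distribution
    p = 1.0
    for o in obs:
        alpha = [sum(alpha[i] * A[i][j] for i in range(2)) for j in range(2)]
        emit = [e0 if o else 1 - e0, e1 if o else 1 - e1]
        alpha = [alpha[j] * emit[j] for j in range(2)]
        s = sum(alpha)
        p *= s                         # accumulate the per-step likelihood
        alpha = [x / s for x in alpha]
    return p

def mutate(params):
    """Gaussian mutation, clipped to keep probabilities in (0, 1)."""
    return [min(0.99, max(0.01, x + random.gauss(0, 0.1))) for x in params]

pop = [[random.random() for _ in range(4)] for _ in range(20)]
for _ in range(50):                    # evolve: mutate, then keep the fittest
    pop += [mutate(p) for p in pop]
    pop = sorted(pop, key=likelihood, reverse=True)[:20]

best = pop[0]
# The evolved HMM fits the sequence at least as well as a maximally
# uninformative HMM (all parameters 0.5, likelihood 0.5**8).
assert likelihood(best) >= likelihood([0.5, 0.5, 0.5, 0.5])
```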
Quantum mechanics model on a Kaehler conifold
International Nuclear Information System (INIS)
Bellucci, Stefano; Nersessian, Armen; Yeranyan, Armen
2004-01-01
We propose an exactly solvable model of the quantum oscillator on the class of Kaehler spaces (with conic singularities), connected with two-dimensional complex projective spaces. Its energy spectrum is nondegenerate in the orbital quantum number, when the space has nonconstant curvature. We reduce the model to a three-dimensional system interacting with the Dirac monopole. Owing to noncommutativity of the reduction and quantization procedures, the Hamiltonian of the reduced system gets nontrivial quantum corrections. We transform the reduced system into a MIC-Kepler-like one and find that quantum corrections arise only in its energy and coupling constant. We present the exact spectrum of the generalized MIC-Kepler system. The one-(complex) dimensional analog of the suggested model is formulated on the Riemann surface over the complex projective plane and could be interpreted as a system with fractional spin
A Plastic Damage Mechanics Model for Engineered Cementitious Composites
DEFF Research Database (Denmark)
Dick-Nielsen, Lars; Stang, Henrik; Poulsen, Peter Noe
2007-01-01
This paper discusses the establishment of a plasticity-based damage mechanics model for Engineered Cementitious Composites (ECC). The present model differs from existing models by combining a matrix and fiber description in order to describe the behavior of the ECC material. The model provides...
Model Order Reduction for Non Linear Mechanics
Pinillo, Rubén
2017-01-01
Context: the automotive industry is moving towards a new generation of cars. Main idea: cars are furnished with radars, cameras, sensors, etc., providing useful information about the environment surrounding the car. Goals: provide an efficient model for the radar input/output, and reduce computational costs by means of big data techniques.
Quark Model in the Quantum Mechanics Curriculum.
Hussar, P. E.; And Others
1980-01-01
This article discusses in detail the totally symmetric three-quark karyonic wave functions. The two-body mesonic states are also discussed. A brief review of the experimental efforts to identify the quark model multiplets is given. (Author/SK)
Putting mechanisms into crop production models.
Boote, Kenneth J; Jones, James W; White, Jeffrey W; Asseng, Senthold; Lizaso, Jon I
2013-09-01
Crop growth models dynamically simulate processes of C, N and water balance on daily or hourly time-steps to predict crop growth and development and, at season-end, final yield. Their ability to integrate the effects of genetics, environment and crop management has led to applications ranging from understanding gene function to predicting potential impacts of climate change. The history of crop models is reviewed briefly, and their level of mechanistic detail for assimilation and respiration, ranging from hourly leaf-to-canopy assimilation to daily radiation-use efficiency, is discussed. Crop models have improved steadily over the past 30-40 years, but much work remains. Improvements are needed in the prediction of the transpiration response to elevated CO₂, high-temperature effects on phenology and reproductive fertility, and the simulation of root growth and nutrient uptake under stressful edaphic conditions. Mechanistic improvements are needed to better connect crop growth to genetics and to soil fertility, soil waterlogging and pest damage. Because crop models integrate multiple processes and consider the impacts of environment and management, they have excellent potential for linking research from genomics and allied disciplines to crop responses at the field scale, thus providing a valuable tool for deciphering genotype by environment by management effects. © 2013 John Wiley & Sons Ltd.
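The daily radiation-use-efficiency (RUE) approach that the review contrasts with hourly leaf-to-canopy assimilation can be sketched in a few lines. All coefficients below (RUE, extinction coefficient, daily PAR, leaf-area feedback) are illustrative assumptions, not calibrated crop parameters.

```python
import math

RUE = 2.5        # g biomass per MJ intercepted PAR (assumed)
k = 0.6          # canopy light-extinction coefficient, Beer's law (assumed)

def daily_growth(lai, par_mj):
    """Daily biomass increment (g/m2) from intercepted PAR."""
    f_intercepted = 1.0 - math.exp(-k * lai)   # Beer's-law light interception
    return RUE * f_intercepted * par_mj

biomass, lai = 1.0, 0.1
for day in range(120):                          # one hypothetical season
    growth = daily_growth(lai, par_mj=8.0)      # 8 MJ PAR per day (assumed)
    biomass += growth
    lai = min(5.0, lai + 0.002 * growth)        # crude leaf-area feedback

assert biomass > 1.0                            # the crop accumulated biomass
# Closed canopy (LAI 5) intercepts 1 - exp(-3) ~ 95% of incoming PAR.
assert abs(daily_growth(5.0, 8.0) - RUE * (1 - math.exp(-3.0)) * 8.0) < 1e-9
```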
Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Juan-Mariano de Goyeneche
2009-05-01
Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios.
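The combination of the two ideas can be sketched as trust-weighted random next-hop selection: routes are non-deterministic, yet low-reputation nodes are rarely chosen. The node names and trust scores below are hypothetical, not from the paper's simulations.

```python
import random

random.seed(7)

# Hypothetical neighbour trust scores; node D has received bad reputation feedback.
trust = {"B": 0.9, "C": 0.8, "D": 0.1}

def pick_next_hop(trust):
    """Draw a next hop with probability proportional to its trust score."""
    total = sum(trust.values())
    r = random.uniform(0, total)
    acc = 0.0
    for node, score in trust.items():
        acc += score
        if r <= acc:
            return node
    return node  # numeric edge case: return the last node

picks = [pick_next_hop(trust) for _ in range(10_000)]
share_d = picks.count("D") / len(picks)

# D is expected to carry ~ 0.1/1.8 = 5.6% of routes -- used, but rarely.
assert share_d < 0.1
assert picks.count("B") > picks.count("D")
```

Because the choice is randomised rather than always-best-node, an attacker cannot predict the route, while the trust weighting still confines traffic away from misbehaving nodes.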
Operational State Complexity of Deterministic Unranked Tree Automata
Directory of Open Access Journals (Sweden)
Xiaoxue Piao
2010-08-01
We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n - 2^(n-1) - 1) vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.
International Nuclear Information System (INIS)
Onimus, F.
2003-12-01
Zirconium alloy cladding tubes containing the nuclear fuel of pressurized water reactors constitute the first safety barrier against the dissemination of radioactive elements. Thus, it is essential to predict the mechanical behavior of the material under in-reactor conditions. This study aims, on the one hand, to identify and characterize the mechanisms of plastic deformation of irradiated zirconium alloys and, on the other hand, to propose a micro-mechanical model based on these mechanisms. The experimental analysis shows that, for the irradiated material, plastic deformation occurs by dislocation channeling. For transverse tensile tests and internal pressure tests this channeling occurs in the basal planes. However, for axial tensile tests, the study revealed that plastic deformation also occurs by channeling, but in the prismatic and pyramidal planes. In addition, the study of the macroscopic mechanical behavior, compared to the deformation mechanisms observed by TEM, suggested that the internal stress is higher in the irradiated material than in the non-irradiated material, because of the very heterogeneous character of the plastic deformation. This analysis led to a coherent interpretation of the mechanical behavior of irradiated materials in terms of deformation mechanisms. The mechanical behavior of irradiated materials was finally modeled by applying homogenization methods for heterogeneous materials. This model reproduces the mechanical behavior of the irradiated material adequately, in agreement with the TEM observations. (author)
International Nuclear Information System (INIS)
Liang, Thomas K.S.; Chou, Ling-Yao; Zhang, Zhongwei; Hsueh, Hsiang-Yu; Lee, Min
2011-01-01
Highlights: → A new LOCA licensing methodology (DRHM, deterministic-realistic hybrid methodology) was developed. → DRHM involves conservative Appendix K physical models and statistical treatment of plant-status uncertainties. → DRHM can generate 50-100 K of PCT margin as compared to a traditional Appendix K methodology. - Abstract: It is well recognized that a realistic LOCA analysis with uncertainty quantification can generate a greater safety margin than classical conservative LOCA analysis using Appendix K evaluation models. The associated margin can be more than 200 K. To quantify uncertainty in BELOCA analysis, two kinds of uncertainties generally need to be identified and quantified: model uncertainties and plant-status uncertainties. In particular, it takes a huge effort to systematically quantify the individual model uncertainties of a best-estimate LOCA code such as RELAP5 or TRAC. Instead of applying a full-range BELOCA methodology to cover both model and plant-status uncertainties, a deterministic-realistic hybrid methodology (DRHM) was developed to support LOCA licensing analysis. In the DRHM, Appendix K deterministic evaluation models are adopted to ensure model conservatism, while the CSAU methodology is applied to quantify the effect of plant-status uncertainty on the PCT calculation. Generally, the DRHM can generate about 80-100 K of margin on PCT as compared to an Appendix K bounding-state LOCA analysis.
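On the statistical side, sampled plant-status uncertainty analyses of this kind commonly size their run set with Wilks' nonparametric formula for a 95%/95% tolerance bound. The sketch below computes that standard first-order sample size; the abstract does not state that this particular choice is used in the DRHM, so it is shown only as the common convention.

```python
# Wilks' first-order formula: the smallest number of random code runs n
# such that the maximum observed PCT bounds the 95th percentile with 95%
# confidence, i.e. 1 - coverage**n >= confidence.
def wilks_first_order(coverage=0.95, confidence=0.95):
    n = 1
    while 1.0 - coverage**n < confidence:
        n += 1
    return n

assert wilks_first_order() == 59  # the well-known 59-run criterion
```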
Mechanisms and models for bentonite erosion
Energy Technology Data Exchange (ETDEWEB)
Neretnieks, Ivars; Longcheng Liu; Moreno, Luis (Dept. of Chemical Engineering and Technology, School of Chemical Science and Engineering, Royal Inst. of Technology, KTH, Stockholm (Sweden))
2009-12-15
There are concerns that the bentonite buffer surrounding the canisters with spent nuclear fuel may erode when non-saline groundwaters seep past the buffer. This is known to happen if the ion content of the water is below the critical coagulation concentration, CCC. Above the CCC the smectite forms a coherent gel, which does not release particles. One main effort in this study has been directed at assessing under which conditions the pore water composition of the gel at the gel/water interface could be lower than the CCC. Another main effort has been directed at understanding the behaviour of the expansive gel when the pore water is below the CCC. We have developed a Dynamic model for sodium gel expansion in fractures, where the gel soaks up non-saline water as it expands. The model is based on a force balance on the smectite particles, which move in the water. The Dynamic model of gel expansion, which describes the evolution of a gel in time and space, was successfully tested against expansion experiments in test tubes. The expansion was measured with high resolution and in great detail over many months by Magnetic Resonance Imaging. The model also predicted the gel expansion through filters with very narrow pores well. A gel viscosity model for dilute gels was derived, which accounts for the influence of ion concentration as well as the volume fraction of smectite in the gel. The model accounts for the presence of the DDL, which seemingly makes the particles larger so that they interact at lower particle densities. Simulations were performed for a case where the gel expands outward into the fracture that intersects the deposition hole. Fresh groundwater approaches and passes the gel/water interface. Smectite colloids move out into the water due to the repulsive forces between the particles and by Brownian motion (an effect included in the Dynamic model). The dilute gel/sol is mobilised and flows downstream in a thin region where the viscosity is low enough to permit flow. Sodium diffuses
Simulation and Modeling Application in Agricultural Mechanization
Directory of Open Access Journals (Sweden)
R. M. Hudzari
2012-01-01
Full Text Available This experiment was conducted to determine equations relating the Hue digital values of the oil palm fruit surface to the maturity stage of the fruit in the plantation. The FFB images were zoomed and captured using a Nikon digital camera, and the Hue was calculated from the highest-frequency values of the R, G, and B colour components obtained with histogram analysis software. A new procedure for monitoring the image pixel value of the oil palm fruit surface colour during real-time growth to maturity was developed. The harvesting day was predicted from the developed model relating Hue values to mesocarp oil content. The regressed simulation model predicts the day of harvesting, i.e., the number of days before harvest of the FFB. The results of the mesocarp oil content experiment can be used for real-time oil content determination with the MPOB colour meter. A graph for determining the day of harvesting the FFB is presented. Oil was found to start developing in the mesocarp 65 days before the ripe maturity stage of 75% oil to dry mesocarp.
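The histogram-mode procedure in this abstract (take the most frequent R, G and B intensities, then convert them to a Hue value) can be sketched as below. The sample channel data and the use of the standard RGB-to-HSV hue formula are illustrative assumptions; the authors' exact software pipeline is not specified in the abstract.

```python
# Minimal sketch of a histogram-mode hue estimate for a fruit-surface patch.
# Assumes the standard RGB -> HSV hue formula; toy data, not the study's.
from collections import Counter

def channel_mode(values):
    """Most frequent intensity (0-255) in one colour channel."""
    return Counter(values).most_common(1)[0][0]

def hue_from_rgb(r, g, b):
    """Hue in degrees from one (r, g, b) triple, as in the usual HSV conversion."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return 0.0
    d = mx - mn
    if mx == r:
        h = (g - b) / d % 6
    elif mx == g:
        h = (b - r) / d + 2
    else:
        h = (r - g) / d + 4
    return 60.0 * h

# Toy per-channel samples standing in for the image histograms
reds, greens, blues = [200, 200, 180], [80, 80, 90], [40, 40, 40]
h = hue_from_rgb(channel_mode(reds), channel_mode(greens), channel_mode(blues))
```

Tracking `h` against days to harvest is what the regressed maturity model in the abstract would then operate on.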
RASopathies: unraveling mechanisms with animal models
Directory of Open Access Journals (Sweden)
Granton A. Jindal
2015-08-01
Full Text Available RASopathies are developmental disorders caused by germline mutations in the Ras-MAPK pathway, and are characterized by a broad spectrum of functional and morphological abnormalities. The high incidence of these disorders (∼1/1000 births motivates the development of systematic approaches for their efficient diagnosis and potential treatment. Recent advances in genome sequencing have greatly facilitated the genotyping and discovery of mutations in affected individuals, but establishing the causal relationships between molecules and disease phenotypes is non-trivial and presents both technical and conceptual challenges. Here, we discuss how these challenges could be addressed using genetically modified model organisms that have been instrumental in delineating the Ras-MAPK pathway and its roles during development. Focusing on studies in mice, zebrafish and Drosophila, we provide an up-to-date review of animal models of RASopathies at the molecular and functional level. We also discuss how increasingly sophisticated techniques of genetic engineering can be used to rigorously connect changes in specific components of the Ras-MAPK pathway with observed functional and morphological phenotypes. Establishing these connections is essential for advancing our understanding of RASopathies and for devising rational strategies for their management and treatment.
Mechanisms and models for bentonite erosion
International Nuclear Information System (INIS)
Neretnieks, Ivars; Longcheng Liu; Moreno, Luis
2009-12-01
There are concerns that the bentonite buffer surrounding the canisters with spent nuclear fuel may erode when non-saline groundwaters seep past the buffer. This is known to happen if the ion content of the water is below the critical coagulation concentration CCC. Above the CCC the smectite forms a coherent gel, which does not release particles. One main effort in this study has been directed to assessing under which conditions the pore water composition of the gel at the gel/water interface could be lower than the CCC. Another main effort has been directed to understanding the behaviour of the expansive gel when the pore water is below the CCC. We have developed a Dynamic model for sodium gel expansion in fractures where the gel soaks up non-saline water as it expands. The model is based on a balance of the forces between and on the smectite particles, which move in the water. The Dynamic model of gel expansion, showing the evolution in time and space of a gel, was successfully tested against expansion experiments in test tubes. The expansion was measured with high resolution and in great detail over many months by Magnetic Resonance Imaging. The model also predicted well the gel expansion through filters with very narrow pores. A gel viscosity model for dilute gels was derived, which accounts for the influence of ion concentration as well as the volume fraction of smectite in the gel. The model accounts for the presence of the DDL, which seemingly makes the particles larger so that they interact at lower particle densities. Simulations were performed for a case where the gel expands outward into the fracture that intersects the deposition hole. Fresh groundwater approaches and passes the gel/water interface. Smectite colloids move out into the water due to the repulsive forces between the particles and by Brownian motion (an effect included in the Dynamic model). The dilute gel/sol is mobilised and flows downstream in a thin region where the viscosity is low enough to permit flow. Sodium diffuses
Applications of the 3-D Deterministic Transport Attila® for Core Safety Analysis
International Nuclear Information System (INIS)
Lucas, D.S.; Gougar, D.; Roth, P.A.; Wareing, T.; Failla, G.; McGhee, J.; Barnett, A.
2004-01-01
An LDRD (Laboratory Directed Research and Development) project is ongoing at the Idaho National Engineering and Environmental Laboratory (INEEL) for applying the three-dimensional multi-group deterministic neutron transport code Attila® to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the model development, capabilities of Attila, generation of the cross-section libraries, and comparisons to an ATR MCNP model and future
Statistical mechanics of the cluster Ising model
International Nuclear Information System (INIS)
Smacchia, Pietro; Amico, Luigi; Facchi, Paolo; Fazio, Rosario; Florio, Giuseppe; Pascazio, Saverio; Vedral, Vlatko
2011-01-01
We study a Hamiltonian system describing a three-spin-1/2 clusterlike interaction competing with an Ising-like antiferromagnetic interaction. We compute free energy, spin-correlation functions, and entanglement both in the ground and in thermal states. The model undergoes a quantum phase transition between an Ising phase with a nonvanishing magnetization and a cluster phase characterized by a string order. Any two-spin entanglement is found to vanish in both quantum phases because of a nontrivial correlation pattern. Nevertheless, the residual multipartite entanglement is maximal in the cluster phase and dependent on the magnetization in the Ising phase. We study the block entropy at the critical point and calculate the central charge of the system, showing that the criticality of the system is beyond the Ising universality class.
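For reference, the cluster-Ising Hamiltonian studied in this line of work is usually written in a form like the following; sign and coupling conventions vary between papers, so this is indicative rather than the authors' exact expression:

```latex
H(\lambda) \;=\; -\sum_{j} \sigma^{x}_{j-1}\,\sigma^{z}_{j}\,\sigma^{x}_{j+1}
\;+\; \lambda \sum_{j} \sigma^{y}_{j}\,\sigma^{y}_{j+1}
```

The three-spin term drives the string-ordered cluster phase, while the two-spin antiferromagnetic term drives the magnetized Ising phase; tuning \(\lambda\) takes the chain through the quantum phase transition described in the abstract.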
Statistical mechanics of the cluster Ising model
Energy Technology Data Exchange (ETDEWEB)
Smacchia, Pietro [SISSA - via Bonomea 265, I-34136, Trieste (Italy); Amico, Luigi [CNR-MATIS-IMM and Dipartimento di Fisica e Astronomia Universita di Catania, C/O ed. 10, viale Andrea Doria 6, I-95125 Catania (Italy); Facchi, Paolo [Dipartimento di Matematica and MECENAS, Universita di Bari, I-70125 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); Fazio, Rosario [NEST, Scuola Normale Superiore and Istituto Nanoscienze - CNR, 56126 Pisa (Italy); Center for Quantum Technology, National University of Singapore, 117542 Singapore (Singapore); Florio, Giuseppe; Pascazio, Saverio [Dipartimento di Fisica and MECENAS, Universita di Bari, I-70126 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); Vedral, Vlatko [Center for Quantum Technology, National University of Singapore, 117542 Singapore (Singapore); Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Department of Physics, University of Oxford, Clarendon Laboratory, Oxford, OX1 3PU (United Kingdom)
2011-08-15
We study a Hamiltonian system describing a three-spin-1/2 clusterlike interaction competing with an Ising-like antiferromagnetic interaction. We compute free energy, spin-correlation functions, and entanglement both in the ground and in thermal states. The model undergoes a quantum phase transition between an Ising phase with a nonvanishing magnetization and a cluster phase characterized by a string order. Any two-spin entanglement is found to vanish in both quantum phases because of a nontrivial correlation pattern. Nevertheless, the residual multipartite entanglement is maximal in the cluster phase and dependent on the magnetization in the Ising phase. We study the block entropy at the critical point and calculate the central charge of the system, showing that the criticality of the system is beyond the Ising universality class.
Numerical Modeling and Mechanical Analysis of Flexible Risers
Li, J. Y.; Qiu, Z. X.; Ju, J. S.
2015-01-01
ABAQUS is used to create a detailed finite element model for a 10-layer unbonded flexible riser to simulate the riser’s mechanical behavior under three load conditions: tension force and internal and external pressure. It presents a technique to create a detailed finite element model and to analyze flexible risers. In the FEM model, all layers are modeled separately with contact interfaces; interaction between steel strips in certain layers has been considered as well. FEM model considering contact ...
Method to deterministically study photonic nanostructures in different experimental instruments
Husken, B.H.; Woldering, L.A.; Blum, Christian; Tjerkstra, R.W.; Vos, Willem L.
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. Therefore, a detailed map of the spatial surroundings of the
Pseudo-random number generator based on asymptotic deterministic randomness
Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming
2008-06-01
A novel approach to generate the pseudorandom-bit sequence from the asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistic characteristics of the asymptotic deterministic randomness are investigated numerically, such as stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
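As a rough illustration of the chaos-based PRBG idea discussed above (not the Letter's actual construction, which uses asymptotic deterministic randomness built from a noninvertible nonlinearity transform), one can iterate a piecewise linear map and threshold each iterate into a bit; the breakpoint value below is an arbitrary choice:

```python
# Minimal chaos-based pseudorandom-bit sketch: iterate a piecewise linear
# (skew-tent) map on [0, 1] and emit one bit per iterate by thresholding.
# Illustrative only; parameters and seed are assumptions, not the paper's.

def skew_tent(x, p=0.499):
    """Piecewise linear map on [0, 1] with breakpoint p."""
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def prbg_bits(seed, n, p=0.499):
    """Generate n bits by thresholding successive iterates at 0.5."""
    x, bits = seed, []
    for _ in range(n):
        x = skew_tent(x, p)
        bits.append(1 if x >= 0.5 else 0)
    return bits

bits = prbg_bits(0.123456789, 64)
```

A real implementation would, as the Letter stresses, work in a finite digitized state space and would need to pass statistical and symbolic-dynamics analyses before any cryptographic use.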
Pseudo-random number generator based on asymptotic deterministic randomness
International Nuclear Information System (INIS)
Wang Kai; Pei Wenjiang; Xia Haishan; Cheung Yiuming
2008-01-01
A novel approach to generate the pseudorandom-bit sequence from the asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistic characteristics of the asymptotic deterministic randomness are investigated numerically, such as stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks
Non deterministic finite automata for power systems fault diagnostics
Directory of Open Access Journals (Sweden)
LINDEN, R.
2009-06-01
Full Text Available This paper introduces an application based on finite non-deterministic automata for power systems diagnosis. Automata for the simpler faults are presented and the proposed system is compared with an established expert system.
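A non-deterministic finite automaton of the kind described can be executed by tracking the set of reachable states (subset simulation). The sketch below uses invented relay/breaker event names as a stand-in; the paper's actual diagnosis automata for power-system faults are different.

```python
# Toy NFA run via subset simulation.  States, events and the transition
# relation are hypothetical illustrations, not the paper's automata.
def nfa_accepts(transitions, start, accepting, events):
    """Return True if any nondeterministic run over `events` ends in an accepting state."""
    current = {start}
    for ev in events:
        current = {s2 for s in current for s2 in transitions.get((s, ev), ())}
    return bool(current & accepting)

# Hypothetical diagnosis automaton: a relay trip followed (possibly after
# unrelated events) by a breaker opening indicates a cleared line fault.
T = {
    ("idle", "relay_trip"): {"tripped"},
    ("tripped", "noise"): {"tripped"},
    ("tripped", "breaker_open"): {"fault_cleared"},
}
ok = nfa_accepts(T, "idle", {"fault_cleared"}, ["relay_trip", "noise", "breaker_open"])
```

Nondeterminism lets one event pattern map to several candidate fault hypotheses at once, which is the property the paper exploits for diagnosis.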
Transmission power control in WSNs : from deterministic to cognitive methods
Chincoli, M.; Liotta, A.; Gravina, R.; Palau, C.E.; Manso, M.; Liotta, A.; Fortino, G.
2018-01-01
Communications in Wireless Sensor Networks (WSNs) are affected by dynamic environments, variable signal fluctuations and interference. Thus, prompt actions are necessary to achieve dependable communications and meet Quality of Service (QoS) requirements. To this end, the deterministic algorithms
The probabilistic approach and the deterministic licensing procedure
International Nuclear Information System (INIS)
Fabian, H.; Feigel, A.; Gremm, O.
1984-01-01
If safety goals are given, the creativity of the engineers is necessary to transform the goals into actual safety measures. That is, safety goals are not sufficient for the derivation of a safety concept; the licensing process asks ''What does a safe plant look like?'' The answer cannot be given by a probabilistic procedure, but needs definite deterministic statements; the conclusion is that the licensing process needs a deterministic approach. The probabilistic approach should be used in a complementary role in cases where deterministic criteria are not complete, not detailed enough or not consistent, and additional arguments for decision making in connection with the adequacy of a specific measure are necessary. But also in these cases the probabilistic answer has to be transformed into a clear deterministic statement. (orig.)
Modeling The Effect Of Extruder Screw Speed On The Mechanical ...
African Journals Online (AJOL)
Modeling The Effect Of Extruder Screw Speed On The Mechanical Properties Of High Density Polyethylene Blown Film. ... Journal of Modeling, Design and Management of Engineering Systems ... Two sets of multiple linear regression models were developed to predict impact failure weight and tenacity respectively.
Links between fluid mechanics and quantum mechanics: a model for information in economics?
Haven, Emmanuel
2016-05-28
This paper tallies the links between fluid mechanics and quantum mechanics, and attempts to show whether those links can aid in beginning to build a formal template which is usable in economics models where time is (a)symmetric and memory is absent or present. An objective of this paper is to contemplate whether those formalisms can allow us to model information in economics in a novel way. © 2016 The Author(s).
A model for chemically-induced mechanical loading on MEMS
DEFF Research Database (Denmark)
Amiot, Fabien
2007-01-01
The development of full displacement field measurements as an alternative to the optical lever technique to measure the mechanical response of microelectro-mechanical systems components in their environment calls for a modeling of chemically-induced mechanical fields (stress, strain, and displacement) ... of the system free energy and its dependence on the surface amount. It is solved in the cantilever case thanks to an asymptotic analysis, and an approached closed-form solution is obtained for the interfacial stress field. Finally, some conclusions regarding the transducer efficiency of cantilevers are drawn...
Mirror neurons: functions, mechanisms and models.
Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael A
2013-04-12
Mirror neurons for manipulation fire both when the animal manipulates an object in a specific way and when it sees another animal (or the experimenter) perform an action that is more or less similar. Such neurons were originally found in macaque monkeys, in the ventral premotor cortex, area F5, and later also in the inferior parietal lobule. Recent neuroimaging data indicate that the adult human brain is endowed with a "mirror neuron system," putatively containing mirror neurons and other neurons, for matching the observation and execution of actions. Mirror neurons may serve action recognition in monkeys as well as humans, whereas their putative role in imitation and language may be realized in humans but not in monkeys. This article shows the important role of computational models in providing sufficient and causal explanations for the observed phenomena involving mirror systems and the learning processes which form them, and underlines the need for additional circuitry to lift the monkey mirror neuron circuit up to sustain the cognitive functions attributed to the human mirror neuron system. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Foundation plate on the elastic half-space, deterministic and probabilistic approach
Directory of Open Access Journals (Sweden)
Tvrdá Katarína
2017-01-01
Full Text Available Interaction between the foundation plate and the subgrade can be described by different mathematical-physical models. An elastic foundation can be modelled by different types of models, e.g. a one-parameter model, a two-parameter model or a comprehensive model; here the Boussinesq model (elastic half-space) has been used. The article deals with deterministic and probabilistic analysis of the deflection of a foundation plate on the elastic half-space. Contact between the foundation plate and the subsoil was modelled using node-to-node contact elements. At the end the obtained results are presented.
Fluid mechanics and heat transfer advances in nonlinear dynamics modeling
Asli, Kaveh Hariri
2015-01-01
This valuable new book focuses on new methods and techniques in fluid mechanics and heat transfer in mechanical engineering. The book includes the research of the authors on the development of optimal mathematical models and also uses modern computer technology and mathematical methods for the analysis of nonlinear dynamic processes. It covers technologies applicable to both fluid mechanics and heat transfer problems, which include a combination of physical, mechanical, and thermal techniques. The authors develop a new method for the calculation of mathematical models by computer technology, using parametric modeling techniques and multiple analyses for mechanical system. The information in this book is intended to help reduce the risk of system damage or failure. Included are sidebar discussions, which contain information and facts about each subject area that help to emphasize important points to remember.
6th Conference on Design and Modeling of Mechanical Systems
Fakhfakh, Tahar; Daly, Hachmi; Aifaoui, Nizar; Chaari, Fakher
2015-01-01
This book offers a collection of original peer-reviewed contributions presented at the 6th International Congress on Design and Modeling of Mechanical Systems (CMSM’2015), held in Hammamet, Tunisia, from the 23rd to the 25th of March 2015. It reports on both recent research findings and innovative industrial applications in the fields of mechatronics and robotics, dynamics of mechanical systems, fluid structure interaction and vibroacoustics, modeling and analysis of materials and structures, and design and manufacturing of mechanical systems. Since its first edition in 2005, the CMSM Congress has been held every two years with the aim of bringing together specialists from universities and industry to present the state-of-the-art in research and applications, discuss the most recent findings and exchange and develop expertise in the field of design and modeling of mechanical systems. The CMSM Congress is jointly organized by three Tunisian research laboratories: the Mechanical Engineering Laboratory of th...
Mechanical performance of injection molded polypropylene : characterization and modeling
Erp, van T.B.; Govaert, L.E.; Peters, G.W.M.
2013-01-01
It is shown that predictions of local mechanical properties in a product can be made from the orientation only using an anisotropic viscoplastic model. Due to processing-induced crystalline orientations, the mechanical properties of injection-molded polymer products are anisotropic and exhibit
Generic skills requirements (KSA model) towards future mechanical ...
African Journals Online (AJOL)
... Statistics and Discriminant Analysis (DA) as required to achieve the objective of the study. This study will guide all future engineers, especially in the field of Mechanical Engineering in Malaysia to penetrate the job market according to the current market needs. Keywords: generic skills; KSA model; mechanical engineers; ...
From fracture mechanics to damage mechanics: how to model structural deterioration
International Nuclear Information System (INIS)
Nicolet, S.; Lorentz, E.; Barbier, G.
1998-01-01
Modelling of structural deteriorations of thermo-mechanical origin is greatly enhanced by using damage mechanics. Indeed, the latter offers both a fine description of the material behaviour and an ability to deal with any loading conditions, moving beyond the current limits of fracture mechanics. But new difficulties can arise, depending on the examined problem: while forecasts of crack initiation are well mastered, the study of crack propagation remains more complex and needs sophisticated models, which are nevertheless on the point of being well understood too. (authors)
A Simplified Quantum Mechanical Model of Diatomic Molecules
Nielsen, Lars Drud
1978-01-01
Introduces a simple one-dimensional model of a diatomic molecule that can explain all the essential features of a real two particle quantum mechanical system and gives quantitative results in fair agreement with those of a hydrogen molecule. (GA)
Entrepreneurs, Chance, and the Deterministic Concentration of Wealth
Fargione, Joseph E.; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs–individuals with ownership in for-profit enterprises–comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
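The chance-plus-compounding mechanism described in this abstract can be sketched with a deliberately simplified individual-based simulation: identical entrepreneurs draw independent random returns each period and wealth compounds. All parameter values and the Gaussian return assumption are illustrative, not the paper's.

```python
# Minimal sketch: wealth concentration from chance plus compounding returns.
# n entrepreneurs, equal starting wealth, i.i.d. Gaussian per-period returns.
import random

def simulate(n=1000, periods=200, mu=0.05, sigma=0.3, seed=1):
    """Return the share of total wealth held by the richest 1% at the end."""
    rng = random.Random(seed)
    wealth = [1.0] * n
    for _ in range(periods):
        # Multiplicative update; wealth cannot go below zero
        wealth = [w * max(0.0, 1.0 + rng.gauss(mu, sigma)) for w in wealth]
    wealth.sort(reverse=True)
    return sum(wealth[: n // 100]) / sum(wealth)

share = simulate()
```

Even with identical skill, the multiplicative dynamics make the top percentile's share of total wealth grow far beyond its population share, which is the paper's core point.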
Mechanisms of cold fusion: comprehensive explanations by the Nattoh model
International Nuclear Information System (INIS)
Matsumoto, Takaaki
1995-01-01
The phenomena of cold fusion seem to be very complicated; inconsistent data between the production rates of heat, neutrons, tritiums and heliums. Our thoughts need to drastically change in order to appropriately understand the mechanisms of cold fusion. Here, a review is described for the Nattoh model, that has been developed extensively to provide comprehensive explanations for the mechanisms of cold fusion. Important experimental findings that prove the model are described. Furthermore several subjects including impacts on other fields are also discussed. (author)
Seismic hazard in Romania associated to Vrancea subcrustal source Deterministic evaluation
Radulian, M; Moldoveanu, C L; Panza, G F; Vaccari, F
2002-01-01
Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes to show how efficient the numerical synthesis is in predicting realistic ground motion, and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful to compute seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the World because of its striking peculiarities: the extreme concentration of seismicity with a remarkable invariance of the foci distribution, the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume, the predominance of a reverse faulting mechanism with the T-axis almost vertical and the P-axis almost horizontal and the mo...
Theoretical methods and models for mechanical properties of soft biomaterials
Directory of Open Access Journals (Sweden)
Zhonggang Feng
2017-06-01
Full Text Available We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.
Continuum methods of physical modeling continuum mechanics, dimensional analysis, turbulence
Hutter, Kolumban
2004-01-01
The book unifies classical continuum mechanics and turbulence modeling, i.e. the same fundamental concepts are used to derive model equations for material behaviour and turbulence closure and complements these with methods of dimensional analysis. The intention is to equip the reader with the ability to understand the complex nonlinear modeling in material behaviour and turbulence closure as well as to derive or invent his own models. Examples are mostly taken from environmental physics and geophysics.
Mechanisms of Probe Tack Adhesion of Model Acrylic Elastomers
Lakrout, Hamed; Creton, Costantino; Ahn, Dongchan; Shull, Kenneth R.
1997-03-01
The adhesion mechanisms of model acrylate homopolymers and copolymers are studied with an instrumented probe tack test. A video camera positioned under the transparent glass substrate records the bonding and debonding process while the force-displacement curve is acquired. This setup allows the observation of the cavitation and fibrillation mechanisms, occurring during the debonding of the film from the stainless steel probe, to be coupled with the mechanical measurement of stress and strain. The transitions between different debonding mechanisms are critically discussed in terms of the bulk and surface properties of the adhesive and its molecular structure.
From cells to tissue: A continuum model of epithelial mechanics
Ishihara, Shuji; Marcq, Philippe; Sugimura, Kaoru
2017-08-01
A two-dimensional continuum model of epithelial tissue mechanics was formulated using cellular-level mechanical ingredients and cell morphogenetic processes, including cellular shape changes and cellular rearrangements. This model incorporates stress and deformation tensors, which can be compared with experimental data. Focusing on the interplay between cell shape changes and cell rearrangements, we elucidated dynamical behavior underlying passive relaxation, active contraction-elongation, and tissue shear flow, including a mechanism for contraction-elongation, whereby tissue flows perpendicularly to the axis of cell elongation. This study provides an integrated scheme for the understanding of the orchestration of morphogenetic processes in individual cells to achieve epithelial tissue morphogenesis.
Mechanics and model-based control of advanced engineering systems
Irschik, Hans; Krommer, Michael
2014-01-01
Mechanics and Model-Based Control of Advanced Engineering Systems collects 32 contributions presented at the International Workshop on Advanced Dynamics and Model Based Control of Structures and Machines, which took place in St. Petersburg, Russia in July 2012. The workshop continued a series of international workshops, which started with a Japan-Austria Joint Workshop on Mechanics and Model Based Control of Smart Materials and Structures and a Russia-Austria Joint Workshop on Advanced Dynamics and Model Based Control of Structures and Machines. In the present volume, 10 full-length papers based on presentations from Russia, 9 from Austria, 8 from Japan, 3 from Italy, one from Germany and one from Taiwan are included, which represent the state of the art in the field of mechanics and model based control, with particular emphasis on the application of advanced structures and machines.
Directory of Open Access Journals (Sweden)
Edmundo Wallace Monteiro Lucas
2009-09-01
Full Text Available Hydrologic modeling is an important tool for the planning and management of water resources use in river basins. In this work, a two-parameter monthly deterministic hydrologic model and the stochastic model ARIMA were applied to simulate the monthly runoff of the sub-basins of the Xingu river basin in the State of Pará. The main objective was to simulate the monthly runoff using the two models and to compare their results. The applied deterministic hydrological model has a simple structure and presented good results, but proved very sensitive to extreme precipitation events. The stochastic model ARIMA was able to capture the dynamics of the time series, presenting very satisfactory results for the simulation of the monthly runoff at the basin stations. Both models should be applied with caution during the rainy season, when extreme precipitation events and consequently peak runoffs occur.
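For concreteness, one common two-parameter monthly water-balance formulation, in the spirit of the model described here though the authors' exact equations are not given in the abstract, computes evapotranspiration and runoff from precipitation and potential evapotranspiration with parameters c and SC:

```python
# Hedged sketch of a two-parameter monthly water-balance model.
# c and sc are the two calibration parameters; values are illustrative.
import math

def two_param_runoff(precip, pet, c=0.6, sc=800.0, s0=100.0):
    """Monthly runoff (same units as precip, e.g. mm) from P and PET series.
    c  -- evapotranspiration efficiency parameter
    sc -- catchment water-storage capacity
    s0 -- initial soil-moisture storage
    """
    s, runoff = s0, []
    for p, pe in zip(precip, pet):
        et = c * pe * math.tanh(p / pe) if pe > 0 else 0.0
        s = max(0.0, s + p - et)          # update storage with the water balance
        q = s * math.tanh(s / sc)         # runoff released from storage
        s -= q
        runoff.append(q)
    return runoff

# Three illustrative months: two wet, one dry
q = two_param_runoff([120, 90, 10], [80, 85, 95])
```

Calibrating c and sc against observed station runoff, and comparing the residuals with an ARIMA fit of the same series, mirrors the model comparison performed in the paper.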
A model of mechanical interactions between heart and lungs.
Fontecave Jallon, Julie; Abdulhay, Enas; Calabrese, Pascale; Baconnier, Pierre; Gumery, Pierre-Yves
2009-12-13
To study the mechanical interactions between heart, lungs and thorax, we propose a mathematical model combining a ventilatory neuromuscular model and a model of the cardiovascular system, as described by Smith et al. (Smith, Chase, Nokes, Shaw & Wake 2004 Med. Eng. Phys.26, 131-139. (doi:10.1016/j.medengphy.2003.10.001)). The respiratory model has been adapted from Thibault et al. (Thibault, Heyer, Benchetrit & Baconnier 2002 Acta Biotheor. 50, 269-279. (doi:10.1023/A:1022616701863)); using a Liénard oscillator, it allows the activity of the respiratory centres, the respiratory muscles and rib cage internal mechanics to be simulated. The minimal haemodynamic system model of Smith includes the heart, as well as the pulmonary and systemic circulation systems. These two modules interact mechanically by means of the pleural pressure, calculated in the mechanical respiratory system, and the intrathoracic blood volume, calculated in the cardiovascular model. The simulation by the proposed model provides results, first, close to experimental data, second, in agreement with the literature results and, finally, highlighting the presence of mechanical cardiorespiratory interactions.
Nonlinear structural mechanics theory, dynamical phenomena and modeling
Lacarbonara, Walter
2013-01-01
Nonlinear Structural Mechanics: Theory, Dynamical Phenomena and Modeling offers a concise, coherent presentation of the theoretical framework of nonlinear structural mechanics, computational methods, applications, parametric investigations of nonlinear phenomena and their mechanical interpretation towards design. The theoretical and computational tools that enable the formulation, solution, and interpretation of nonlinear structures are presented in a systematic fashion so as to gradually attain an increasing level of complexity of structural behaviors, under the prevailing assumptions on the geometry of deformation, the constitutive aspects and the loading scenarios. Readers will find a treatment of the foundations of nonlinear structural mechanics towards advanced reduced models, unified with modern computational tools in the framework of the prominent nonlinear structural dynamic phenomena while tackling both the mathematical and applied sciences.
Stochastic modelling in design of mechanical properties of nanometals
International Nuclear Information System (INIS)
Tengen, T.B.; Wejrzanowski, T.; Iwankiewicz, R.; Kurzydlowski, K.J.
2010-01-01
Polycrystalline nanometals are fabricated through different processing routes and conditions. The consequence is that nanometals having the same mean grain size may have different grain size dispersions and, hence, different material properties. This has often led to conflicting reports, from both theoretical and experimental findings, about the evolution of the mechanical properties of nanomaterials. The present paper employs a stochastic model to study the impact of microstructure evolution during grain growth on the mechanical properties of polycrystalline nanometals. A stochastic model for grain growth and a stochastic model for changes in the mechanical properties of nanomaterials are proposed. The model for the mechanical properties is tested on aluminium samples. Many salient features of the mechanical properties of the aluminium samples are revealed. The results show that the different mechanisms of grain growth impart responses of a different nature to the material's mechanical properties. The conventional, homologous and anomalous temperature dependences of the yield stress are also shown to be due to the different nature of the interactions of the microstructures during evolution.
Directory of Open Access Journals (Sweden)
MANFREDI, P.
2014-11-01
Full Text Available This paper extends recent literature results concerning the statistical simulation of circuits affected by random electrical parameters by means of the polynomial chaos framework. With respect to previous implementations, based on the generation and simulation of augmented and deterministic circuit equivalents, the modeling is extended to generic "black-box" multi-terminal nonlinear subcircuits describing complex devices, like those found in integrated circuits. Moreover, based on recently-published works in this field, a more effective approach to generate the deterministic circuit equivalents is implemented, thus yielding more compact and efficient models for nonlinear components. The approach is fully compatible with commercial (e.g., SPICE-type) circuit simulators and is thoroughly validated through the statistical analysis of a realistic interconnect structure with a 16-bit memory chip. The accuracy and the comparison against previous approaches are also carefully established.
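The polynomial chaos framework invoked above can be illustrated in its simplest scalar form: a Hermite expansion of a nonlinear response of one Gaussian parameter, with the mean and variance read directly from the coefficients. This is a generic sketch of the idea, not the paper's augmented-circuit machinery:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coeffs(f, degree, quad_order=40):
    """Project f(x), x ~ N(0,1), onto probabilists' Hermite polynomials He_k:
    c_k = E[f(X) He_k(X)] / k!, since the He_k have squared norm k!."""
    x, w = He.hermegauss(quad_order)
    w = w / np.sqrt(2.0 * np.pi)   # Gauss-HermiteE weights, normalized to sum to 1
    fx = f(x)
    c = []
    for k in range(degree + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0                 # coefficient vector selecting He_k
        c.append(np.sum(w * fx * He.hermeval(x, ek)) / math.factorial(k))
    return np.array(c)

def pce_stats(c):
    """Mean and variance implied by the chaos coefficients (orthogonality)."""
    var = sum(math.factorial(k) * c[k] ** 2 for k in range(1, len(c)))
    return c[0], var
```

For a polynomial response the truncated expansion is exact, which is what makes the deterministic "equivalent" so compact compared with brute-force Monte Carlo over the random parameter.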
Exponential power spectra, deterministic chaos and Lorentzian pulses in plasma edge dynamics
International Nuclear Information System (INIS)
Maggs, J E; Morales, G J
2012-01-01
Exponential spectra have been observed in the edges of tokamaks, stellarators, helical devices and linear machines. The observation of exponential power spectra is significant because such a spectral character has been closely associated with the phenomenon of deterministic chaos by the nonlinear dynamics community. The proximate cause of exponential power spectra in both magnetized plasma edges and nonlinear dynamics models is the occurrence of Lorentzian pulses in the time signals of fluctuations. Lorentzian pulses are produced by chaotic behavior in the separatrix regions of plasma E × B flow fields or the limit cycle regions of nonlinear models. Chaotic advection, driven by the potential fields of drift waves in plasmas, results in transport. The observation of exponential power spectra and Lorentzian pulses suggests that fluctuations and transport at the edge of magnetized plasmas arise from deterministic, rather than stochastic, dynamics. (paper)
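The stated link between Lorentzian pulses and exponential power spectra can be checked numerically: the Fourier magnitude of a Lorentzian pulse 1/(1 + (t/τ)²) is πτ·exp(−2πτ|f|), so its log-amplitude spectrum is linear in frequency. A short self-contained check (unit-width pulse; parameters are illustrative):

```python
import numpy as np

tau = 1.0
dt = 0.01
t = np.arange(-50.0, 50.0, dt)
pulse = 1.0 / (1.0 + (t / tau) ** 2)   # Lorentzian pulse centered at t = 0

spec = np.abs(np.fft.rfft(pulse)) * dt  # approximate continuous-time FT magnitude
f = np.fft.rfftfreq(t.size, d=dt)

band = (f > 0.2) & (f < 1.5)            # fit away from DC and the numerical floor
slope = np.polyfit(f[band], np.log(spec[band]), 1)[0]
print(slope)  # close to -2*pi*tau, i.e. about -6.28 for tau = 1
```

The exponential decay rate of the spectrum is set by the pulse width τ, which is why the pulse shape, rather than any stochastic cascade, fixes the spectral character.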
Offshore platforms and deterministic ice actions: Kashagan phase 2 development: North Caspian Sea.
Energy Technology Data Exchange (ETDEWEB)
Croasdale, Ken [KRCA, Calgary (Canada); Jordaan, Ian [Ian Jordaan and Associates, St John's (Canada); Verlaan, Paul [Shell Development Kashagan, London (United Kingdom)
2011-07-01
The Kashagan development has to face the difficult conditions of the northern Caspian Sea. This paper investigated the ice interaction scenarios and deterministic methods used in platform designs for the Kashagan development. The study first presents a review of the types of platforms in use and being designed for the Kashagan development. The various ice load scenarios and the structures used in each case are discussed. Vertical-faced barriers, mobile drilling barges and sheet-pile islands were considered for ice loads on vertical structures; sloping-faced barriers and rock islands were considered for ice loads on sloping structures. Deterministic models, such as the model in ISO 19906, were used to calculate the loads occurring with or without ice rubble in front of the structure. The results showed the importance of rubble build-up in front of wide structures in shallow water. Recommendations were provided for building efficient vertical- and sloping-faced barriers.
Mechanical characterization of bioprinted in vitro soft tissue models
International Nuclear Information System (INIS)
Zhang, Ting; Ouyang, Liliang; Sun, Wei; Yan, Karen Chang
2013-01-01
Recent development in bioprinting technology enables the fabrication of complex, precisely controlled cell-encapsulated tissue constructs. Bioprinted tissue constructs have potential in both therapeutic applications and nontherapeutic applications such as drug discovery and screening, disease modelling and basic biological studies such as in vitro tissue modelling. The mechanical properties of bioprinted in vitro tissue models play an important role in mimicking in vivo the mechanochemical microenvironment. In this study, we have constructed three-dimensional in vitro soft tissue models with varying structure and porosity based on the 3D cell-assembly technique. Gelatin/alginate hybrid materials were used as the matrix material and cells were embedded. The mechanical properties of these models were assessed via compression tests at various culture times, and applicability of three material constitutive models was examined for fitting the experimental data. An assessment of cell bioactivity in these models was also carried out. The results show that the mechanical properties can be improved through structure design, and the compression modulus and strength decrease with respect to time during the first week of culture. In addition, the experimental data fit well with the Ogden model and experiential function. These results provide a foundation to further study the mechanical properties, structural and combined effects in the design and the fabrication of in vitro soft tissue models. (paper)
SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations
International Nuclear Information System (INIS)
Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir
2014-01-01
The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep-penetration Monte Carlo (MC) shielding problem: a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (a biased source distribution and an importance map) in an automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim of this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to a focused adjoint source definition, which gave improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and over manual (i.e. analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology represents the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role in computer memory management. We investigated the possibility of using the (memory-intense) deterministic module with the broad-group library v7-27n19g, as opposed to the fine-group library v7-200n47g used with the MC module, to fully account for low-energy particle transport and secondary gamma emission.
Theoretical Modeling of Rock Breakage by Hydraulic and Mechanical Tool
Directory of Open Access Journals (Sweden)
Hongxiang Jiang
2014-01-01
Full Text Available Rock breakage by coupled mechanical and hydraulic action has been developed over the past several decades, but theoretical study of rock fragmentation by a mechanical tool with water pressure assistance was still lacking. A theoretical model of rock breakage by a mechanical tool was developed based on rock fracture mechanics and the solution of Boussinesq's problem; it can explain the process of rock fragmentation as well as predict the peak reacting force. A theoretical model of rock breakage by coupled mechanical and hydraulic action was then developed according to the superposition principle of stress intensity factors at the crack tip; the reacting force of the mechanical tool assisted by hydraulic action can be reduced appreciably if a crack with a critical length is produced by mechanical or hydraulic impact. The experimental results indicated that the peak reacting force could be reduced by about 15% with the assistance of medium water pressure, and the quick reduction of the reacting force after the peak value decreased the specific energy consumption of rock fragmentation by the mechanical tool. Crack formation by mechanical or hydraulic impact was the prerequisite for improving the effectiveness of combined breakage.
Testing and Modeling of Mechanical Characteristics of Resistance Welding Machines
DEFF Research Database (Denmark)
Wu, Pei; Zhang, Wenqi; Bay, Niels
2003-01-01
The dynamic mechanical response of a resistance welding machine is very important to weld quality in resistance welding, especially in projection welding when collapse or deformation of the work piece occurs. It is mainly governed by the mechanical parameters of the machine. In this paper, a mathematical model for characterizing the dynamic mechanical responses of the machine and a special test set-up, called a breaking test set-up, are developed. Based on the model and the test results, the mechanical parameters of the machine are determined, including the equivalent mass, damping coefficient and stiffness for both upper and lower electrode systems. This has laid a foundation for modeling the welding process and selecting the welding parameters considering the machine factors. The method is straightforward and easy to apply in industry since the whole procedure is based on tests with no requirements...
Mechanical behavior of the ATLAS B0 model coil
Foussat, A; Acerbi, E; Alessandria, F; Berthier, R; Broggi, F; Daël, A; Dudarev, A; Mayri, C; Miele, P; Reytier, M; Rossi, L; Sorbi, M; Sun, Z; ten Kate, H H J; Vanenkov, I; Volpini, G
2002-01-01
The ATLAS B0 model coil has been developed and constructed to verify the design parameters and the manufacture techniques of the Barrel Toroid coils (BT) that are under construction for the ATLAS Detector. Essential for successful operation is the mechanical behavior of the superconducting coil and its support structure. In the ATLAS magnet test facility, a magnetic mirror is used to reproduce in the model coil the electromagnetic forces of the BT coils when assembled in the final Barrel Toroid magnet system. The model coil is extensively equipped with mechanical instrumentation to monitor stresses and force levels as well as contraction during a cooling down and excitation up to nominal current. The installed set up of strain gauges, position sensors and capacitive force transducers is presented. Moreover the first mechanical results in terms of expected main stress, strain and deformation values are presented based on detailed mechanical analysis of the design. (7 refs).
Mechanical Slosh Models for Rocket-Propelled Spacecraft
Jang, Jiann-Woei; Alaniz, Abram; Yang, Lee; Powers, Joseph; Hall, Charles
2013-01-01
Several analytical mechanical slosh models for a cylindrical tank with a flat bottom are reviewed. Even though spacecraft use cylinder-shaped tanks, most of those tanks usually have elliptical domes. To extend the application of the analytical models to a cylindrical tank with elliptical domes, modified slosh parameter models are proposed in this report by mapping an elliptical-dome cylindrical tank to a flat-top/bottom cylindrical tank while maintaining the equivalent liquid volume. For the low Bond number case, low-g slosh models were also studied; those models can be used for Bond numbers > 10. The current low-g slosh models were also modified to extend their application to the case where the liquid height is smaller than the tank radius. All modified slosh models are implemented in MATLAB m-functions and are collected in the developed Mechanical Slosh Toolbox (MST).
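The pendulum-analog slosh models reviewed above rest on a classical linear result for the flat-bottom upright cylinder: the first lateral mode frequency is ω₁² = (g·ξ₁/R)·tanh(ξ₁·h/R), with ξ₁ ≈ 1.8412 the first zero of J₁′. A sketch of that textbook relation (not code from the MST toolbox):

```python
import math

XI1 = 1.8412  # first zero of J1', governs the fundamental lateral slosh mode

def slosh_frequency(g, radius, fill_height):
    """Natural frequency (rad/s) of the first lateral slosh mode in an
    upright flat-bottom cylindrical tank (classical linear theory)."""
    return math.sqrt((g * XI1 / radius) * math.tanh(XI1 * fill_height / radius))

def pendulum_length(g, radius, fill_height):
    """Length of the equivalent mechanical pendulum with the same frequency."""
    return g / slosh_frequency(g, radius, fill_height) ** 2
```

For deep fills the tanh factor saturates and the pendulum length tends to R/ξ₁, which is why the shallow-fill case (liquid height below the tank radius) needs the separate treatment the abstract mentions.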
A constitutive mechanical model for gas hydrate bearing sediments incorporating inelastic mechanisms
Sánchez, Marcelo
2016-11-30
Gas hydrate bearing sediments (HBS) are natural soils formed in permafrost and sub-marine settings where the temperature and pressure conditions are such that gas hydrates are stable. If these conditions shift out of the hydrate stability zone, hydrates dissociate and move from the solid to the gas phase. Hydrate dissociation is accompanied by significant changes in sediment structure and strongly affects its mechanical behavior (e.g., sediment stiffness, strength and dilatancy). The mechanical behavior of HBS is very complex and its modeling poses great challenges. This paper presents a new geomechanical model for hydrate bearing sediments. The model incorporates the concept of partition stress, plus a number of inelastic mechanisms proposed to capture the complex behavior of this type of soil. This constitutive model is especially well suited to simulating the behavior of HBS upon dissociation. The model was applied and validated against experimental data from triaxial and oedometric tests conducted on manufactured and natural specimens involving different hydrate saturations, hydrate morphologies, and confinement conditions. Particular attention was paid to modeling the HBS behavior during hydrate dissociation under loading. The model performance was highly satisfactory in all the cases studied. It managed to capture the main features of HBS mechanical behavior and it also helped to interpret the behavior of this type of sediment under different loading and hydrate conditions.
Frequency domain fatigue damage estimation methods suitable for deterministic load spectra
Energy Technology Data Exchange (ETDEWEB)
Henderson, A.R.; Patel, M.H. [University Coll., Dept. of Mechanical Engineering, London (United Kingdom)
2000-07-01
The evaluation of fatigue damage due to load spectra directly in the frequency domain is a complex problem, but one that offers significant savings in computation time. Various formulae have been suggested, but each usually relates to a specific application only. The Dirlik method is the exception and is applicable to general cases of continuous stochastic spectra. This paper describes three approaches for evaluating discrete deterministic load spectra generated by the floating wind turbine model developed in the UCL/RAL research project. (Author)
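The Dirlik method referred to above has a standard closed form: an empirical probability density of rainflow stress ranges built from the spectral moments m0, m1, m2, m4 of the stress PSD. A sketch with an arbitrary flat PSD (illustrative only, not the paper's turbine load spectra):

```python
import numpy as np

def dirlik_pdf(S, m0, m1, m2, m4):
    """Dirlik's empirical probability density of rainflow ranges S,
    from the spectral moments m0, m1, m2, m4 of a one-sided stress PSD."""
    gam = m2 / np.sqrt(m0 * m4)                 # irregularity factor
    Xm = (m1 / m0) * np.sqrt(m2 / m4)
    D1 = 2.0 * (Xm - gam ** 2) / (1.0 + gam ** 2)
    R = (gam - Xm - D1 ** 2) / (1.0 - gam - D1 + D1 ** 2)
    D2 = (1.0 - gam - D1 + D1 ** 2) / (1.0 - R)
    D3 = 1.0 - D1 - D2
    Q = 1.25 * (gam - D3 - D2 * R) / D1
    Z = S / (2.0 * np.sqrt(m0))                 # normalized range
    pZ = (D1 / Q * np.exp(-Z / Q)
          + D2 * Z / R ** 2 * np.exp(-Z ** 2 / (2.0 * R ** 2))
          + D3 * Z * np.exp(-Z ** 2 / 2.0))
    return pZ / (2.0 * np.sqrt(m0))

# Example: flat one-sided PSD of 1 (stress unit)^2/Hz between 10 and 20 Hz.
f = np.linspace(10.0, 20.0, 2001)
G = np.ones_like(f)
m0, m1, m2, m4 = (np.trapz(f ** n * G, f) for n in (0, 1, 2, 4))
```

Damage then follows by weighting this density with the S-N curve exponent and the peak rate sqrt(m4/m2); the density itself integrates to one by construction (D1 + D2 + D3 = 1).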
Deterministic and stochastic control of chimera states in delayed feedback oscillator
Energy Technology Data Exchange (ETDEWEB)
Semenov, V. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Zakharova, A.; Schöll, E. [Institut für Theoretische Physik, TU Berlin, Hardenbergstraße 36, 10623 Berlin (Germany); Maistrenko, Y. [Institute of Mathematics and Center for Medical and Biotechnical Research, NAS of Ukraine, Tereschenkivska Str. 3, 01601 Kyiv (Ukraine)
2016-06-08
Chimera states, characterized by the coexistence of regular and chaotic dynamics, are found in a nonlinear oscillator model with negative time-delayed feedback. The control of these chimera states by external periodic forcing is demonstrated by numerical simulations. Both deterministic and stochastic external periodic forcing are considered. It is shown that multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. The constructive role of noise in the formation of chimera states is shown.
Micromechanical modelling of mechanical behaviour and strength of wood
DEFF Research Database (Denmark)
Mishnaevsky, Leon; Qing, Hai
2008-01-01
An overview of the micromechanical theoretical and numerical models of wood is presented. Different methods of analysis of the effects of wood microstructures at different scale levels on the mechanical behaviour, deformation and strength of wood are discussed and compared. Micromechanical models...
A simplified quantum mechanical model of diatomic molecules
DEFF Research Database (Denmark)
Nielsen, Lars Drud
1978-01-01
A one-dimensional molecule model with Coulomb potentials replaced by delta functions is introduced. The mathematical simplicity of the model facilitates the quantum mechanical treatment and offers a straightforward demonstration of the essentials of two-particle problems. In spite of the crudeness...
Thermal and mechanical modelling of convergent plate margins
van den Beukel, P.J.
1990-01-01
In this thesis, the thermal and mechanical structure of convergent plate margins will be investigated by means of numerical modelling. In addition, we will discuss the implications of the modelling results for geological processes such as metamorphism or the break-up of a plate at a convergent plate margin.
A simple mechanical model for the isotropic harmonic oscillator
International Nuclear Information System (INIS)
Nita, Gelu M
2010-01-01
A constrained elastic pendulum is proposed as a simple mechanical model for the isotropic harmonic oscillator. The conceptual and mathematical simplicity of this model recommends it as an effective pedagogical tool in teaching basic physics concepts at advanced high school and introductory undergraduate course levels.
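A numerical companion to the pedagogical model above: in the small-oscillation limit, the constrained elastic pendulum reduces to the isotropic harmonic oscillator a = −ω²r, whose orbits are closed ellipses with period 2π/ω. A minimal symplectic integration sketch (unit mass; parameters are illustrative):

```python
import numpy as np

def integrate_oscillator(omega, r0, v0, dt, n_steps):
    """Semi-implicit (symplectic) Euler for the isotropic oscillator
    a = -omega^2 * r with unit mass; returns the position trajectory."""
    r, v = np.array(r0, float), np.array(v0, float)
    traj = [r.copy()]
    for _ in range(n_steps):
        v = v - omega ** 2 * r * dt   # kick: update velocity from current position
        r = r + v * dt                # drift: update position with new velocity
        traj.append(r.copy())
    return np.array(traj)
```

Because the restoring force is central and linear, the x and y motions decouple into two equal-frequency oscillations, so the trajectory closes on itself after exactly one period regardless of the initial conditions.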
Impedance model for quantum-mechanical barrier problems
International Nuclear Information System (INIS)
Nelin, Evgenii A
2007-01-01
Application of the impedance model to typical quantum-mechanical barrier problems, including those for structures with resonant electron tunneling, is discussed. The efficiency of the approach is illustrated. The physical transparency and compactness of the model and its potential as a teaching and learning tool are discussed. (methodological notes)
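The kind of barrier problem this note targets can be sketched with the closely related transfer-matrix method (in units ħ = m = 1). The sketch below is a generic rectangular-barrier calculation for cross-checking, not the paper's impedance formalism itself:

```python
import cmath
import math

def region_matrix(E, V, L):
    """Transfer matrix mapping (psi, psi') across a region of constant
    potential V and width L, at energy E (hbar = m = 1)."""
    k = cmath.sqrt(2.0 * (E - V))   # purely imaginary inside a barrier (E < V)
    if abs(k) < 1e-12:
        return [[1.0, L], [0.0, 1.0]]
    return [[cmath.cos(k * L), cmath.sin(k * L) / k],
            [-k * cmath.sin(k * L), cmath.cos(k * L)]]

def transmission(E, V0, a):
    """Transmission probability through a rectangular barrier (height V0, width a)."""
    (A, B), (C, D) = region_matrix(E, V0, a)
    k1 = cmath.sqrt(2.0 * E)        # wavenumber in the free regions on both sides
    P = 1j * k1 * A - C
    Qq = 1j * k1 * D + k1 ** 2 * B
    r = (Qq - P) / (Qq + P)         # reflection amplitude from matching at x=0, x=a
    t = (A * (1 + r) + 1j * k1 * B * (1 - r)) * cmath.exp(-1j * k1 * a)
    return abs(t) ** 2

def transmission_analytic(E, V0, a):
    """Closed-form tunnelling result for E < V0, used as a cross-check."""
    kappa = math.sqrt(2.0 * (V0 - E))
    return 1.0 / (1.0 + V0 ** 2 * math.sinh(kappa * a) ** 2 / (4.0 * E * (V0 - E)))
```

The compactness the note advertises comes from exactly this structure: each uniform region contributes one 2x2 matrix, and resonant-tunnelling stacks are handled by multiplying the matrices region by region.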
Deterministic and fuzzy-based methods to evaluate community resilience
Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo
2018-04-01
Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from literature and they are linked to a measure allowing the analytical computation of the indicator's performance. The first method proposed in this paper requires data on previous disasters as an input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits a knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data including the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided in the paper.
Deterministic network interdiction optimization via an evolutionary approach
International Nuclear Information System (INIS)
Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel
2009-01-01
This paper introduces an evolutionary optimization approach that can be readily applied to solve deterministic network interdiction problems. The network interdiction problem solved considers the minimization of the maximum flow that can be transmitted between a source node and a sink node for a fixed network design, when there is a limited amount of resources available to interdict network links. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with its interdiction can change from link to link. For this problem, the solution approach developed is based on three steps that use: (1) Monte Carlo simulation, to generate potential network interdiction strategies; (2) the Ford-Fulkerson algorithm for maximum s-t flow, to analyze each strategy's maximum source-sink flow; and (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different network sizes and behaviors are used throughout the paper to illustrate the approach. In terms of computational effort, the results illustrate that solutions are obtained from a significantly restricted solution search space. Finally, the authors discuss the need for a reliability perspective on network interdiction, so that the solutions developed address more realistic scenarios of such problems.
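The inner evaluation step of the approach above is a max-flow computation. A minimal sketch of that building block, with a brute-force budget-of-one interdiction instead of the paper's Monte Carlo/evolutionary search (the network is a made-up toy example):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp (BFS Ford-Fulkerson) maximum s-t flow; cap is {u: {v: capacity}}."""
    res = {u: dict(vs) for u, vs in cap.items()}      # residual capacities
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)    # ensure reverse arcs exist
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:              # BFS: shortest augmenting path
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                                # walk back from sink to source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:                              # push flow, update residuals
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

def best_single_interdiction(cap, s, t):
    """Enumerate removal of one link (budget = 1) and return the choice
    that minimizes the resulting maximum s-t flow."""
    best = (None, max_flow(cap, s, t))
    for u, vs in cap.items():
        for v in vs:
            reduced = {x: {y: c for y, c in ys.items() if (x, y) != (u, v)}
                       for x, ys in cap.items()}
            val = max_flow(reduced, s, t)
            if val < best[1]:
                best = ((u, v), val)
    return best
```

Enumerating single-link removals is exponential in the interdiction budget, which is exactly why the paper resorts to an evolutionary search over strategies for larger instances.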
Rapid detection of small oscillation faults via deterministic learning.
Wang, Cong; Chen, Tianrui
2011-08-01
Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally and accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the test phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the trained normal and fault modes. By comparing this set of estimators with the monitored test system, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the trained normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest-residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in the fact that the modeling uncertainty and nonlinear fault functions are accurately approximated, and this knowledge is then utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach.
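The core mechanism above, storing a mode's dynamics in constant RBF weights and comparing residuals at test time, can be sketched in a few lines. This toy version uses a static least-squares fit of scalar dynamics rather than the paper's online DL training law, and the dynamics are invented for illustration:

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF features for scalar inputs x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def learn_dynamics(x, dx, centers, width):
    """Least-squares RBF approximation of x -> dx: the 'knowledge' of one
    mode's dynamics, stored as a constant weight vector."""
    Phi = rbf_features(x, centers, width)
    w, *_ = np.linalg.lstsq(Phi, dx, rcond=None)
    return w

def residual(x, dx, centers, width, w):
    """Average L1 residual between observed dynamics and a stored estimator."""
    return np.mean(np.abs(rbf_features(x, centers, width) @ w - dx))

# Train an estimator on the 'normal' mode, whose dynamics we take as dx = sin(x).
centers = np.linspace(-3.0, 3.0, 25)
x = np.linspace(-3.0, 3.0, 400)
w_normal = learn_dynamics(x, np.sin(x), centers, width=0.4)
```

A test signal whose dynamics carry even a small extra term then produces a residual well above the normal-mode level, which is the smallest-residual detection principle in miniature.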
Rock Mechanics Forsmark. Site descriptive modelling Forsmark stage 2.2
Energy Technology Data Exchange (ETDEWEB)
Glamheden, Rune; Fredriksson, Anders (Golder Associates AB (SE)); Roeshoff, Kennert; Karlsson, Johan (Berg Bygg Konsult AB (SE)); Hakami, Hossein (Itasca Geomekanik AB (SE)); Christiansson, Rolf (Swedish Nuclear Fuel and Waste Management Co., Stockholm (SE))
2007-12-15
volume show that all rock domains covered by the empirical analysis have rock of good, competent quality. The evaluated mechanical properties of the deterministic deformation zones are on the whole relatively close to the properties evaluated for the fracture domains. The in situ state of stress at the Forsmark site has been estimated from direct measurements, including overcoring and hydraulic fracturing, as well as from indirect observations, including borehole breakout, core disking and micro-crack porosity. The results were used as input data to a numerical model for evaluation of the stress variability caused by deformation zones as well as by the discrete fracture network. Both direct and indirect measurements of the in situ stresses yield a stable and constant orientation of the major horizontal stress in the NW-SE direction. The magnitude of the major principal stress is constrained by indirect observations of core and borehole damage along with stress measurements. Fracture domains FFM01 and FFM06 are presumed to have the same stress gradient. The adopted model results in a mean magnitude of the major horizontal stress of around 41 MPa, and of the minor horizontal stress of around 23 MPa, at 500 m depth in FFM01. Compared to the previous model version, the present estimate corresponds to a slight reduction in the in situ stress magnitudes. Results from numerical modelling with respect to deformation zones show that the stress field in the target volume is relatively homogeneous, and that it is mainly the gently dipping deformation zone ZFMA2 that is important for the stress field in the target volume. For the rock mass conditions at Forsmark, numerical modelling of the local spatial stress variability due to discrete fractures indicates that the major principal stress could be expected to vary spatially by ±5 MPa in magnitude and ±9 degrees in orientation
Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay
2017-11-01
Data-driven fault detection plays an important role in industrial systems due to its applicability in the case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time-learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
A Specific N=2 Supersymmetric Quantum Mechanical Model: Supervariable Approach
Directory of Open Access Journals (Sweden)
Aradhya Shukla
2017-01-01
Full Text Available By exploiting the supersymmetric invariant restrictions on the chiral and antichiral supervariables, we derive the off-shell nilpotent symmetry transformations for a specific (0+1)-dimensional N=2 supersymmetric quantum mechanical model, which is considered on a (1, 2)-dimensional supermanifold parametrized by a bosonic variable t and a pair of Grassmannian variables (θ, θ̄). We also provide the geometrical meaning of the symmetry transformations. Finally, we show that this specific N=2 SUSY quantum mechanical model is a model for Hodge theory.
ONKALO rock mechanics model (RMM) - Version 2.0
International Nuclear Information System (INIS)
Moenkkoenen, H.; Hakala, M.; Paananen, M.; Laine, E.
2012-02-01
The Rock Mechanics Model of the ONKALO rock volume is a description of the significant features and parameters related to rock mechanics. The main objective is to develop a tool to predict the rock properties, quality and hence the potential for stress failure which can then be used for continuing design of the ONKALO and the repository. This is the second implementation of the Rock Mechanics Model and it includes sub-models of the intact rock strength, in situ stress, thermal properties, rock mass quality and properties of the brittle deformation zones. Because of the varying quantities of available data for the different parameters, the types of presentations also vary: some data sets can be presented in the style of a 3D block model but, in other cases, a single distribution represents the whole rock volume hosting the ONKALO. (orig.)
Katira, Parag; Bonnecaze, Roger T; Zaman, Muhammad H
2013-01-01
Malignant transformation, though primarily driven by genetic mutations in cells, is also accompanied by specific changes in cellular and extra-cellular mechanical properties such as stiffness and adhesivity. As the transformed cells grow into tumors, they interact with their surroundings via physical contacts and the application of forces. These forces can lead to changes in the mechanical regulation of cell fate based on the mechanical properties of the cells and their surrounding environment. A comprehensive understanding of cancer progression requires the study of how specific changes in mechanical properties influences collective cell behavior during tumor growth and metastasis. Here we review some key results from computational models describing the effect of changes in cellular and extra-cellular mechanical properties and identify mechanistic pathways for cancer progression that can be targeted for the prediction, treatment, and prevention of cancer.
An adaptation model for trabecular bone at different mechanical levels
Directory of Open Access Journals (Sweden)
Lv Linwei
2010-07-01
Full Text Available Abstract Background Bone has the ability to adapt to mechanical usage or other biophysical stimuli in terms of its mass and architecture, indicating that a certain mechanism exists for monitoring mechanical usage and controlling the bone's adaptation behaviors. There are four zones describing different bone adaptation behaviors: the disuse, adaptation, overload, and pathologic overload zones. In the different zones, the change of bone mass, calculated as the difference between the amount of bone formed and the amount resorbed, should be different. Methods An adaptation model for trabecular bone at different mechanical levels was presented in this study based on a number of experimental observations and numerical algorithms in the literature. In the proposed model, the amount of bone formation and the probability of bone remodeling activation were defined in accordance with the mechanical levels. Seven numerical simulation cases under different mechanical conditions were analyzed as examples by incorporating the adaptation model presented in this paper with the finite element method. Results The proposed bone adaptation model describes the well-known bone adaptation behaviors in the different zones. The bone mass and architecture of the bone tissue within the adaptation zone remained almost unchanged. Although the probability of osteoclastic activation is enhanced in the overload zone, the potential of osteoblasts to form bone compensates for the osteoclastic resorption, eventually strengthening the bone. In the disuse zone, disuse-mode remodeling removes bone tissue. Conclusions The study seeks to provide a better understanding of the relationships between bone morphology and the mechanical as well as biological environments. Furthermore, this paper provides a computational model and methodology for the numerical simulation of changes of bone structural morphology caused by changes of the mechanical and biological environments.
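The four-zone behavior described above is often captured with a "lazy zone" update rule: density changes only when the stimulus leaves a dead band around a reference level. The following is a minimal sketch in that spirit; the rule and all constants are illustrative, not the paper's FE-coupled algorithm:

```python
def remodel(density, stimulus, s_ref, lazy=0.2, rate=0.1, rho_min=0.05, rho_max=2.0):
    """One time step of a lazy-zone remodeling rule: density changes only
    when the stimulus leaves the adaptation (lazy) zone around s_ref."""
    low, high = s_ref * (1.0 - lazy), s_ref * (1.0 + lazy)
    if stimulus < low:              # disuse zone: net resorption
        density += rate * (stimulus - low)
    elif stimulus > high:           # overload zone: net formation
        density += rate * (stimulus - high)
    # inside [low, high]: adaptation zone, no net change
    return min(max(density, rho_min), rho_max)

def simulate(stimulus, steps=100, rho0=1.0, s_ref=1.0):
    """Iterate the rule under a constant stimulus level."""
    rho = rho0
    for _ in range(steps):
        rho = remodel(rho, stimulus, s_ref)
    return rho
```

A fuller model would couple the stimulus back to the evolving density (via strain), which is what the paper's finite element simulations do; the constant-stimulus version only illustrates the zone-dependent direction of change.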
Pre-quantum mechanics. Introduction to models with hidden variables
International Nuclear Information System (INIS)
Grea, J.
1976-01-01
Within the context of hidden-variable formalisms, the author considers the models used to describe mechanical systems before the introduction of the quantum model. An account is given of the characteristics of the theoretical models and of their relationships with experimental methodology. The models of analytical, pre-ergodic, stochastic and thermodynamic mechanics are studied in succession. At each stage the physical hypothesis is enunciated by a postulate corresponding to the model's type of description of reality. Starting from this postulate, the physical propositions which are meaningful for the model under consideration are defined and their logical structure is indicated. It is then found that, on passing from one level of description to another, one can obtain successively Boolean lattices embedded in lattices of continuous geometric type, which are themselves embedded in Boolean lattices. It is therefore possible to envisage a more detailed description than that given by the quantum lattice and to construct it by analogy. (Auth.)
The dialectical thinking about deterministic and probabilistic safety analysis
International Nuclear Information System (INIS)
Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong
2005-01-01
There are two methods for designing and analysing the safety performance of a nuclear power plant: the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plants has been based on the deterministic method, and practice has proved it effective for current plants. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the plant's systems and structures. PSA can thus be seen, in principle, to provide a broader and more realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trends of the two methods are briefly reviewed and summarized. Based on the discussion of two application cases - one concerning changes to specific design provisions of the general design criteria (GDC), the other the risk-informed categorization of structures, systems and components - it is concluded that the deterministic and probabilistic methods are dialectical and unified, that they are gradually merging into each other, and that they are being used in coordination. (authors)
Constitutive Modeling of the Mechanical Properties of Optical Fibers
Moeti, L.; Moghazy, S.; Veazie, D.; Cuddihy, E.
1998-01-01
Micromechanical modeling of the composite mechanical properties of optical fibers was conducted. Good agreement was obtained between the values of Young's modulus obtained by micromechanics modeling and those determined experimentally for a single-mode optical fiber in which the waveguide and the jacket are physically coupled. The modeling was also attempted on a polarization-maintaining optical fiber (PANDA), in which the waveguide and the jacket are physically decoupled, and was found not to be applicable, since the modeling requires perfect bonding at the interface. The modeling utilized constituent physical properties such as the Young's modulus, Poisson's ratio, and shear modulus to establish bounds on the macroscopic behavior of the fiber.
Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?
Kubota, Noriaki
2018-03-01
The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely; these two quantities are thus observed as stochastic. For large samples (say 1000 mL or more), in contrast, the induction time and MSZW are observed as deterministic quantities. The reason for this experimental difference is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and the supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively; the number of crystals at the moment of t or ΔT is unity. A first crystal emerges at random owing to the intrinsic nature of nucleation, so t and ΔT become stochastic. For large samples, t and ΔT are defined as the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det, for isothermal and polythermal conditions respectively; these are points at which a large number of crystals have already accumulated, so t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should therefore not be attributed to a change in nucleation mechanism at the molecular level. It may simply be a consequence of the differing experimental definitions of t and ΔT.
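The small-sample/large-sample contrast described in this abstract can be reproduced with a minimal Monte Carlo sketch. Nucleation is modeled as a Poisson process with rate J·V; the rate J, volume values, and detector sensitivity below are illustrative assumptions, not the paper's parameters.

```python
import random
import statistics

def induction_time(J, V, n_detect, rng):
    """Time until n_detect crystals have nucleated in a sample of volume V (mL)
    at nucleation rate J (crystals per mL per s).  Nucleation is a Poisson
    process with rate J*V, so the induction time is a sum of n_detect
    exponential interarrival times."""
    return sum(rng.expovariate(J * V) for _ in range(n_detect))

rng = random.Random(1)
J = 1.0        # illustrative nucleation rate, crystals/(mL s)
sens = 10.0    # illustrative detector sensitivity (N/V)det, crystals/mL

# Small sample (0.1 mL): detection means the very first crystal -> stochastic.
small = [induction_time(J, 0.1, 1, rng) for _ in range(500)]
# Large sample (1000 mL): detection requires N/V = sens, i.e. 10000 crystals
# -> deterministic by the law of large numbers.
large = [induction_time(J, 1000.0, int(sens * 1000.0), rng) for _ in range(20)]

cv = lambda xs: statistics.stdev(xs) / statistics.mean(xs)  # coefficient of variation
print(f"small-sample CV ~ {cv(small):.2f}")   # order 1: wide scatter
print(f"large-sample CV ~ {cv(large):.4f}")   # ~1/sqrt(10000): nearly deterministic
```

The coefficient of variation of the small-sample induction time is of order one (exponential waiting time for the first crystal), while for the large sample it shrinks toward 1/√n, illustrating the abstract's point without any change in the nucleation mechanism.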
Probabilistic approach in treatment of deterministic analyses results of severe accidents
International Nuclear Information System (INIS)
Krajnc, B.; Mavko, B.
1996-01-01
Severe accident sequences resulting in loss of the core's geometric integrity have been found to have a small probability of occurrence. Because of their potential consequences for public health and safety, an evaluation of the core degradation progression and the resulting effects on the containment is necessary to determine the probability of a significant release of radioactive materials. This requires assessment of many interrelated phenomena, including: steel and zircaloy oxidation, steam spikes, in-vessel debris cooling, potential vessel failure mechanisms, release of core material to the containment, containment pressurization from steam generation or from generation of non-condensable gases or hydrogen burn, and ultimately coolability of degraded core material. To assess, for the containment event trees, whether a certain phenomenological event would happen or not, plant-specific deterministic analyses should be performed. Because there is large uncertainty in the prediction of severe accident phenomena in Level 2 analyses (containment event trees), a combination of the probabilistic and deterministic approaches should be used. In effect, the results of the deterministic analyses of severe accidents are treated in a probabilistic manner owing to the large uncertainty of the results, a consequence of the lack of detailed knowledge. This paper discusses the approach used in many IPEs, which ensures that the probability assigned to a certain question in the event tree represents the probability that the event will or will not happen, and that this probability also includes its uncertainty, which is mainly the result of lack of knowledge. (author)
MATHEMATICAL MODEL FOR ESTIMATION OF MECHANICAL SYSTEM CONDITION IN DYNAMICS
Directory of Open Access Journals (Sweden)
D. N. Mironov
2011-01-01
The paper considers the estimation of the condition of a complicated mechanical system in dynamics, with due account of material degradation and the accumulation of micro-damage. An element of a continuous medium has been simulated and described with the help of a discrete element. The paper describes a model for determining mechanical system longevity as a function of the number of cycles and the operational period.
How the growth rate of host cells affects cancer risk in a deterministic way
Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe
2017-09-01
It is well known that cancers are encountered significantly more often in some tissues than in others. In this paper, by using a deterministic model describing the interactions between host, effector immune and tumor cells at the tissue level, we show that this can be explained by the dependency of tumor growth on parameter values characterizing the type as well as the state of the tissue considered, due to the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without invoking random mutations arising during DNA replication in nonmalignant cells or "bad luck". Strategies to limit cancer-induced mortality could therefore reasonably be based on improving the way of life, that is, on better preserving the tissue where mutant cells randomly arise.
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis (FIA) responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied for the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
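The random-walk (stochastic) model named in this abstract can be sketched as follows: tracers advect with the local parabolic Poiseuille velocity and take Gaussian radial diffusion steps. All parameter values are illustrative; the paper's actual FIA geometry, injection profile, and fitting procedure are not reproduced here.

```python
import math
import random
import statistics

def random_walk_fia(n_particles, D, R, U, L, dt, rng):
    """Random-walk dispersion model: tracers in a tube of radius R and length L
    advect with the local Poiseuille velocity u(r) = 2U(1 - (r/R)^2) and take
    Gaussian radial diffusion steps of variance 2*D*dt, reflecting at the axis
    and at the wall.  Returns arrival times at the detector (x = L)."""
    arrivals = []
    for _ in range(n_particles):
        x, t = 0.0, 0.0
        r = R * math.sqrt(rng.random())   # uniform over the cross-section
        while x < L:
            x += 2.0 * U * (1.0 - (r / R) ** 2) * dt   # advection step
            r += rng.gauss(0.0, math.sqrt(2.0 * D * dt))
            r = abs(r)                    # reflect at the axis
            if r > R:
                r = 2.0 * R - r           # reflect at the wall
            t += dt
        arrivals.append(t)
    return arrivals

rng = random.Random(0)
arrivals = random_walk_fia(200, D=1e-3, R=0.05, U=1.0, L=2.0, dt=1e-3, rng=rng)
spread = statistics.stdev(arrivals)       # width of the simulated FIA response
```

A histogram of `arrivals` is the simulated detector response; fitting its shape against a measured response (here via a genetic algorithm, per the abstract) is what yields the diffusion coefficient D.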
Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei
2013-04-01
Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexities of its geological nature create numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change has brought significant landslides. The causes of landslides in these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration changes the suction and moisture of the soil, raises the unit weight of the soil, and reduces the shear strength of the soil in the colluvium of a landslide. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslides, effective modeling of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of a rainfall-induced landslide, defined as the accumulated rainfall at which the safety factor of the slope equals 1.0. First, the deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated against long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling practice. Finally
A mechanical model for FRP-strengthened beams in bending
Directory of Open Access Journals (Sweden)
P. S. Valvo
2012-10-01
We analyse the problem of a simply supported beam, strengthened with a fibre-reinforced polymer (FRP) strip bonded to its intrados and subjected to bending couples applied to its end sections. A mechanical model is proposed whereby the beam and FRP strip are modelled according to classical beam theory, while the adhesive and its neighbouring layers are modelled as an interface having a piecewise linear constitutive law defined over three intervals (elastic response, softening response, debonding). The model is described by a set of differential equations with appropriate boundary conditions. An analytical solution to the problem is determined, including explicit expressions for the internal forces, displacements and interfacial stresses. The model predicts an overall non-linear mechanical response for the strengthened beam, ranging over several stages: from linearly elastic behaviour to damage, up to the complete detachment of the FRP reinforcement.
Mechanical Model of Traditional Thai Massage for Integrated Healthcare
Directory of Open Access Journals (Sweden)
Salinee Rattanaphan
2015-01-01
In this study, a mechanical model was developed with the aim of providing standardized and programmable traditional Thai massage (TTM) therapy to patients. The TTM was modeled and integrated into a mechanical hand (MH) system, and a prototype massage chair was built and tested for user satisfaction. Three fundamental principles of Thai massage were integrated: pull, press, and pin. Based on these principles, the mechanics of Thai massage was studied and a mathematical model was developed to describe the dynamics and conditions for the design and prototyping of an MH. On average, users were satisfied with the treatment and felt that it was similar to massage performed by human hands. According to the interview results, users indicated that they would be likely to use the MH as an alternative to traditional massage. Integrated TTM with an MH may therefore help healthcare providers deliver standardized, programmable massage therapy to patients, as opposed to variable, inconsistent human massage.
Directory of Open Access Journals (Sweden)
Cristoforo Demartino
2018-01-01
This paper presents a numerical study on the deterministic and probabilistic serviceability assessment of footbridge vibrations due to a single walker crossing. The dynamic response of the footbridge is analyzed by means of modal analysis, considering only the first lateral and vertical modes. Single-span footbridges with uniform mass distribution are considered, with different values of span length, natural frequency, mass, and structural damping, and with different support conditions. The load induced by a single walker crossing the footbridge is modeled as a moving sinusoidal force acting either in the lateral or in the vertical direction. The variability of the characteristics of the load induced by walkers is modeled using probability distributions taken from the literature, defining a Standard Population of walkers. Deterministic and probabilistic approaches are adopted to assess the peak response. Based on the results of the simulations, deterministic and probabilistic vibration serviceability assessment methods are proposed that do not require numerical analyses. Finally, an example of the application of the proposed method to a steel truss footbridge is presented. The results highlight the advantages of the probabilistic procedure in terms of reliability quantification.
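The single-mode, moving-sinusoidal-force setup described in this abstract can be sketched as a one-degree-of-freedom modal integration. This is a generic textbook formulation under assumed parameter values (span, mass, damping, walker force), not the paper's calibrated model or its serviceability criteria.

```python
import math

def footbridge_peak_accel(L, m, f_n, zeta, F0, f_p, v, dt=1e-3):
    """Peak midspan acceleration of a simply supported footbridge (first mode
    only, shape sin(pi*x/L), modal mass m*L/2 for uniform mass m per meter)
    under a walker modeled as a moving sinusoidal force of amplitude F0,
    pacing frequency f_p, and speed v; semi-implicit Euler over one crossing."""
    w = 2.0 * math.pi * f_n            # natural circular frequency
    m_modal = m * L / 2.0              # modal mass of a uniform beam
    q = qd = 0.0                       # modal displacement and velocity
    peak, t, T = 0.0, 0.0, L / v       # T = crossing time
    while t < T:
        x = v * t                      # walker position on the span
        F = F0 * math.sin(2.0 * math.pi * f_p * t) * math.sin(math.pi * x / L)
        qdd = F / m_modal - 2.0 * zeta * w * qd - w * w * q
        qd += qdd * dt
        q += qd * dt
        peak = max(peak, abs(qdd))     # midspan accel = qdd * phi(L/2) = qdd
        t += dt
    return peak

# Resonant pacing (f_p = f_n) versus off-resonant pacing, illustrative values:
peak_res = footbridge_peak_accel(L=30.0, m=500.0, f_n=2.0, zeta=0.005,
                                 F0=280.0, f_p=2.0, v=1.5)
peak_off = footbridge_peak_accel(L=30.0, m=500.0, f_n=2.0, zeta=0.005,
                                 F0=280.0, f_p=1.4, v=1.5)
```

Sampling F0, f_p, and v from population distributions and repeating this computation is the essence of the probabilistic assessment the abstract describes; the deterministic assessment uses one characteristic walker.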
Introduction to stochastic models in biology
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Samson, Adeline
2013-01-01
This chapter is concerned with continuous time processes, which are often modeled as a system of ordinary differential equations (ODEs). These models assume that the observed dynamics are driven exclusively by internal, deterministic mechanisms. However, real biological systems will always be exp...
Ganzert, Steven; Guttmann, Josef; Steinmann, Daniel; Kramer, Stefan
Lung-protective ventilation strategies reduce the risk of ventilator-associated lung injury. To develop such strategies, knowledge about the mechanical properties of the mechanically ventilated human lung is essential. This study was designed to develop an equation discovery system to identify mathematical models of the respiratory system in time-series data obtained from mechanically ventilated patients. Two techniques were combined: (i) the use of declarative bias to reduce search-space complexity and inherently provide for the processing of background knowledge, and (ii) a newly developed heuristic for traversing the hypothesis space with a greedy, randomized strategy analogous to the GSAT algorithm. In 96.8% of all runs, the equation discovery system was capable of detecting the well-established equation-of-motion model of the respiratory system in the provided data. We see the potential of this semi-automatic approach to detect more complex mathematical descriptions of the respiratory system from respiratory data.
Numerical Modeling and Mechanical Analysis of Flexible Risers
Directory of Open Access Journals (Sweden)
J. Y. Li
2015-01-01
ABAQUS is used to create a detailed finite element model of a 10-layer unbonded flexible riser, simulating the riser's mechanical behavior under three load conditions: tension force and internal and external pressure. A technique for creating a detailed finite element model and analyzing flexible risers is presented. In the FEM model, all layers are modeled separately with contact interfaces, and the interaction between steel strips in certain layers is considered as well. The FEM model, accounting for contact interaction, geometric nonlinearity, and friction, has been employed to accurately simulate the structural behavior of the riser. The model includes the main features of the riser geometry with very few simplifying assumptions. The model was solved using a fully explicit time-integration scheme implemented in a parallel environment on an eight-processor cluster with 24 GB of memory. Very good agreement between the numerical and analytical results validates the numerical model. The results of the numerical simulation show that the model captures various details of the riser, and it has been shown that the detailed finite element model can be used to predict the riser's mechanical behavior under various load cases and boundary conditions.
Resolving Lexical Ambiguity in a Deterministic Parser
Milne, Robert W.
1983-01-01
This work is an investigation into part of the human sentence parsing mechanism (HSPM), where parsing implies syntactic and non-syntactic analysis. It is hypothesised that the HSPM consists of at least two processors. We will call the first the syntactic processor, and the second the non-syntactic processor. During normal sentence processing, the two processors are controlled by a 'normal component', whilst when an error occurs, they are controlled by a...
A three-dimensional computational model of collagen network mechanics.
Directory of Open Access Journals (Sweden)
Byoungkoo Lee
Extracellular matrix (ECM) strongly influences cellular behaviors, including cell proliferation, adhesion, and particularly migration. In cancer, the rigidity of the stromal collagen environment is thought to control tumor aggressiveness, and collagen alignment has been linked to tumor cell invasion. While the mechanical properties of collagen at both the single-fiber scale and the bulk gel scale are quite well studied, how the fiber network responds to local stress or deformation, both structurally and mechanically, is poorly understood. This intermediate-scale knowledge is important to understanding cell-ECM interactions and is the focus of this study. We have developed a three-dimensional elastic collagen fiber network model (a bead-and-spring model) and studied fiber network behaviors for various biophysical conditions: collagen density, crosslinker strength, crosslinker density, and fiber orientation (random vs. prealigned). We found the best-fit crosslinker parameter values using shear simulation tests in a small-strain region. Using this calibrated collagen model, we simulated both shear and tensile tests in a large linear-strain region for different network geometry conditions. The results suggest that network geometry is a key determinant of the mechanical properties of the fiber network. We further demonstrated how the fiber network structure and mechanics evolve with a local deformation, mimicking the effect of pulling by a pseudopod during cell migration. Our computational fiber network model is a step toward a full biomechanical model of cellular behaviors in various ECM conditions.
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
Towards deterministic optical quantum computation with coherently driven atomic ensembles
International Nuclear Information System (INIS)
Petrosyan, David
2005-01-01
Scalable and efficient quantum computation with photonic qubits requires (i) deterministic sources of single photons, (ii) giant nonlinearities capable of entangling pairs of photons, and (iii) reliable single-photon detectors. In addition, an optical quantum computer would need a robust reversible photon storage device. Here we discuss several related techniques, based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency, that are capable of implementing all of the above prerequisites for deterministic optical quantum computation with single photons
Deterministic and efficient quantum cryptography based on Bell's theorem
International Nuclear Information System (INIS)
Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg
2006-01-01
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology
The hydro-mechanical modeling of the fractured media
International Nuclear Information System (INIS)
Kadiri, I.
2002-10-01
Hydro-mechanical modeling of fractured media is quite complex. Simplifications are necessary for the modeling of such media but are not always justified: often only the permeable fractures are considered, and the rest of the network is approximated by an equivalent continuous medium. Even if we suppose that this approach is valid, the hydraulic and mechanical properties of the fractures and of the continuous medium are seldom known, and calibrations are necessary to determine these properties. Until now, little has been known about the nature of the measurements that must be carried out to support modeling in a discontinuous medium, or about sufficiently robust validation elements for this kind of modeling. For a better understanding of the hydro-mechanical phenomena in fractured media, two different sites were selected for this work. The first is the Grimsel site in Switzerland, where an underground laboratory is located at approximately 400 m depth. The FEBEX experiment aims at the in-situ study of the phenomena that follow the installation of a heat source, representative of radioactive waste, in the last 17 meters of the FEBEX tunnel in the Grimsel laboratory. Here, only the hydro-mechanical effects of the excavation were modeled. The FEBEX modeling enabled us to establish a methodology for calibrating hydraulic properties in discontinuous media. However, this kind of study on such complex sites cannot answer all the questions that arise about the hydro-mechanical behavior of fractured media. We therefore carried out modeling on another site, smaller and more accessible than the first. The experimental site of Coaraze, in the Maritime Alps, consists mainly of limestone and fractures. The variation of water pressure along the fractures is governed by the opening/closure sequence of a water gate. The normal displacement as well as the pore pressure along these fractures are recorded, and then
Mechanical model for filament buckling and growth by phase ordering.
Rey, Alejandro D; Abukhdeir, Nasser M
2008-02-05
A mechanical model of open filament shape and growth driven by phase ordering is formulated. For a given phase-ordering driving force, the model output is the filament shape evolution and the filament end-point kinematics. The linearized model for the slope of the filament is the Cahn-Hilliard model of spinodal decomposition, where the buckling corresponds to concentration fluctuations. Two modes are predicted: (i) sequential growth and buckling and (ii) simultaneous buckling and growth. The relation among the maximum buckling rate, filament tension, and matrix viscosity is given. These results contribute to ongoing work in smectic A filament buckling.
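The correspondence stated in this abstract, the linearized filament-slope equation being of Cahn-Hilliard type, can be sketched in generic Cahn-Hilliard notation. The symbols below (mobility M, gradient-energy coefficient κ, free energy f) are the standard spinodal-decomposition ones, not necessarily the paper's; this is a schematic reminder, not the paper's derivation.

```latex
% Linearized Cahn-Hilliard dynamics for a perturbation u(x,t) about a uniform state u_0:
\partial_t u = M\,\partial_x^2\!\left[f''(u_0)\,u - \kappa\,\partial_x^2 u\right].
% A Fourier mode u \propto e^{\sigma t + i k x} grows at rate
\sigma(k) = -M\left[f''(u_0)\,k^2 + \kappa\,k^4\right],
% which is unstable for f''(u_0) < 0, with fastest-growing wavenumber
k_{\max} = \sqrt{-f''(u_0)/(2\kappa)}.
```

In the filament analogy of the abstract, the slope plays the role of the concentration u, and the buckling wavelength selection corresponds to the fastest-growing mode k_max of the spinodal instability.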
A unified mobility model for quantum mechanical simulation of MOSFETs
International Nuclear Information System (INIS)
Park, Ji Sun; Lee, Ji Young; Lee, Sang Kyung; Shin, Hyung Soon; Jin, Seong Hoon; Park, Young June; Min, Hong Shik
2004-01-01
A unified electron and hole mobility model for inversion and accumulation layers with quantum effect is presented for the first time. By accounting for the screened Coulomb scattering based on the well-known bulk mobility model and allowing the surface roughness scattering term to be a function of net charge, the new model is applicable to the bulk, inversion, and accumulation layers with only one set of fitting parameters. The new model is implemented in the 2-D quantum mechanical device simulator and gives excellent agreement with the experimentally measured effective mobility data over a wide range of effective transverse field, substrate doping, substrate bias, and temperature.
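The combination of scattering mechanisms described in this abstract is commonly expressed through Matthiessen's rule. The sketch below is an illustrative assumption of that pattern: the function name, every coefficient, and the particular form of the net-charge dependence of the surface-roughness term are invented for illustration and are not the paper's unified model or its fitted parameters.

```python
def effective_mobility(mu_bulk, e_eff, n_net, c_sr=2.0e3, gamma=2.0, beta=1.0e-13):
    """Hedged sketch: combine a bulk mobility (phonons + screened Coulomb,
    assumed already computed from a bulk model) with a surface-roughness-
    limited term via Matthiessen's rule.  Making mu_sr depend on the net
    charge n_net mirrors the abstract's idea; every coefficient here is
    illustrative, not calibrated.

    mu_bulk [cm^2/Vs], e_eff [MV/cm], n_net [cm^-2]."""
    mu_sr = c_sr / (e_eff ** gamma) * (1.0 + beta * n_net)  # roughness-limited
    return 1.0 / (1.0 / mu_bulk + 1.0 / mu_sr)              # Matthiessen's rule

low = effective_mobility(400.0, 0.5, 1.0e12)    # low effective field
high = effective_mobility(400.0, 1.5, 1.0e12)   # high field: roughness dominates
```

The qualitative behavior matches the abstract's setting: mobility falls below the bulk value as the effective transverse field grows, and the roughness term is modulated by the net charge.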
Mechanisms for integration of information models across related domains
Atkinson, Rob
2010-05-01
It is well recognised that there are opportunities and challenges in cross-disciplinary data integration. A significant barrier, however, is creating a conceptual model of the combined domains and the area of integration. For example, a groundwater domain application may require information from several related domains: geology, hydrology, water policy, etc. Each domain may have its own data holdings and conceptual models, but these will share various common concepts (e.g., the concept of an aquifer). These areas of semantic overlap present significant challenges: firstly to choose a single representation (model) of a concept that appears in multiple disparate models, and then to harmonise those other models with the single representation. In addition, models may exist at different levels of abstraction depending on how closely aligned they are with a particular implementation. This makes it hard for modellers in one domain to introduce elements from another domain without either introducing a specific style of implementation or, conversely, dealing with a set of abstract patterns that are hard to integrate with existing implementations. Models are easier to integrate if they are broken down into small units, with common concepts implemented using common models from well-known and predictably managed shared libraries. This vision, however, requires development of a set of mechanisms (tools and procedures) for implementing and exploiting libraries of model components. These mechanisms need to handle publication, discovery, subscription, versioning and implementation of models in different forms. In this presentation a coherent suite of such mechanisms is proposed, using a scenario based on the re-use of geoscience models. This approach forms the basis of a comprehensive strategy to empower domain modellers to create more interoperable systems. The strategy addresses a range of concerns and practices, and includes methodologies, an accessible toolkit, improvements to available
International Nuclear Information System (INIS)
Mathieu, J.Ph.
2006-10-01
The reactor pressure vessel is the second containment barrier between nuclear fuel and the environment. Electricite de France's reactors are made of French 16MND5 low-alloyed steel (equivalent to ASTM A508 Cl.3). Various experimental techniques (scanning electron microscopy, X-ray diffraction...) are set up in order to characterize mechanical heterogeneities inside the material microstructure during tensile testing at different low temperatures [-150 C; -60 C]. Heterogeneities can be seen as the effect of both 'polycrystalline' and 'composite' microstructural features. Interphase stress variations (up to 150 MPa on average between the ferritic and bainitic macroscopic stress states) and intraphase variations (up to 100 MPa on average between ferritic orientations) are highlighted. Modelling involves a micro-mechanical description of plastic glide, mean-field models and realistic three-dimensional aggregates, all combined within a multi-scale approach. Calibration is done on macroscopic stress-strain curves at different low temperatures, and the modelling reproduces the experimental stress heterogeneities. This modelling allows a local micro-mechanical fracture criterion for crystallographic cleavage to be applied. Deterministic computations of the time to fracture for different random selections of carbides provide a way to express the probability of fracture for the elementary volume. The results are in good agreement with the hypotheses made by the local approach to fracture. The main difference is that no dependence on loading or microstructural features is assumed for the probability of fracture on the representative volume: this dependence is introduced naturally by the modelling. (author)
Mechanical test of the model coil wound with large conductor
International Nuclear Information System (INIS)
Hiue, Hisaaki; Sugimoto, Makoto; Nakajima, Hideo; Yasukawa, Yukio; Yoshida, Kiyoshi; Hasegawa, Mitsuru; Ito, Ikuo; Konno, Masayuki.
1992-09-01
High rigidity and strength of the winding pack are required to realize large superconducting magnets for fusion reactors. This paper describes mechanical tests concerning the rigidity of the winding pack. Samples were prepared to evaluate the adhesive strength between conductors and insulators. Epoxy and bismaleimide-triazine (BT) resin were used as the conductor insulation. Stainless steel (SS) 304 bars, whose surfaces were treated mechanically and chemically, served as the model conductors. The model coil was wound with the model conductors, covered with the insulator and with ground insulation. A winding model combining 3 x 3 conductors was produced for measuring shearing rigidity; this sample was loaded with pure shearing force at LN2 temperature. The bending rigidity of a bar winding sample of 8 x 6 conductors was measured; these three-point bending tests were carried out at room temperature. The pancake winding sample was loaded with compressive forces to measure the compressive rigidity of the winding. (author)
Mechanics, Mechanobiology, and Modeling of Human Abdominal Aorta and Aneurysms
Humphrey, J.D.; Holzapfel, G.A.
2011-01-01
Biomechanical factors play fundamental roles in the natural history of abdominal aortic aneurysms (AAAs) and their responses to treatment. Advances during the past two decades have increased our understanding of the mechanics and biology of the human abdominal aorta and AAAs, yet there remains a pressing need for considerable new data and resulting patient-specific computational models that can better describe the current status of a lesion and better predict the evolution of lesion geometry, composition, and material properties and thereby improve interventional planning. In this paper, we briefly review data on the structure and function of the human abdominal aorta and aneurysmal wall, past models of the mechanics, and recent growth and remodeling models. We conclude by identifying open problems that we hope will motivate studies to improve our computational modeling and thus general understanding of AAAs. PMID:22189249
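A first-order example of the "past models of the mechanics" mentioned above is the Laplace-law estimate of circumferential wall stress in an idealized thin-walled cylindrical vessel. This sketch is a standard textbook approximation, not one of the patient-specific computational models the review discusses, and the dimensions used are illustrative.

```python
def laplace_wall_stress(pressure_pa: float, radius_m: float, thickness_m: float) -> float:
    """Mean circumferential (hoop) stress in a thin-walled cylinder,
    sigma = P * r / h (Laplace's law). Neglects wall anisotropy, residual
    stress, and the non-cylindrical geometry of a real aneurysm."""
    return pressure_pa * radius_m / thickness_m

# Illustrative: ~100 mmHg (13.3 kPa) pressure, 15 mm radius, 1.5 mm wall.
sigma = laplace_wall_stress(13_300.0, 0.015, 0.0015)  # -> 133,000 Pa
```

The limitations noted in the docstring are precisely why the review argues for growth-and-remodeling models over such closed-form estimates.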
Three mechanical models for blebbing and multi-blebbing
Woolley, T. E.
2014-06-17
Membrane protrusions known as blebs play important roles in many cellular phenomena. Here we present three mathematical models of bleb formation, which use biological insights to produce phenotypically accurate pressure-driven expansions. First, we introduce a recently suggested solid mechanics framework that is able to create blebs through stretching the membrane. This framework is then extended to include reference state reconfigurations, which model membrane growth. Finally, the stretching and reconfiguring mechanical models are compared with a much simpler geometrically constrained solution. This allows us to demonstrate that simpler systems are able to capture much of the biological complexity despite more restrictive assumptions. Moreover, the simplicity of the spherical model allows us to consider multiple blebs in a tractable framework. © The authors 2014. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
Deterministic analysis of mid scale outdoor fire
International Nuclear Information System (INIS)
Vidmar, P.; Petelin, S.
2003-01-01
The article addresses how to define fire behaviour. The work is based on an analytical study of the origin of a fire, its development and its spread. The mathematical fire model FDS (Fire Dynamics Simulator) is used in the presented work: a CFD (Computational Fluid Dynamics) model using LES (Large Eddy Simulation) to calculate fire development and the spread of combustion products in the environment. The fire source is located in the vicinity of a hazardous plant (power, chemical, etc.). The article presents a brief background of the FDS computer program and the initial and boundary conditions used in the mathematical model. The output data are discussed and the validity of the results is checked. The work also presents some corrections to the physical model used, which influence the quality of the results. The obtained results were discussed and compared with the Fire Safety Analysis report included in the Probabilistic Safety Assessment of the Krsko nuclear power plant. (author)
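As an illustration of the kind of initial and boundary conditions an FDS model specifies, a minimal input file might look like the following Fortran-namelist sketch. All geometry, fuel, and heat-release values here are invented for demonstration and are unrelated to the scenario analysed in the article.

```
&HEAD CHID='outdoor_fire', TITLE='Minimal illustrative FDS case' /
&MESH IJK=40,40,20, XB=0.0,10.0,0.0,10.0,0.0,5.0 /    computational grid and domain (m)
&TIME T_END=120.0 /                                    simulated duration (s)
&REAC FUEL='PROPANE' /                                 combustion chemistry
&SURF ID='BURNER', HRRPUA=500.0 /                      heat release rate per unit area (kW/m2)
&VENT XB=4.0,6.0,4.0,6.0,0.0,0.0, SURF_ID='BURNER' /   fire source patch on the ground
&TAIL /
```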
A model for the origin and mechanisms of CP violation
International Nuclear Information System (INIS)
Wu, Y.
1995-01-01
In this talk I show that the two-Higgs-doublet model with vacuum CP violation and approximate global U(1) family symmetries may provide one of the simplest and most attractive models for understanding the origin and mechanisms of CP violation. It is shown that the mechanism of spontaneous symmetry breaking provides not only a mechanism for generating the masses of the bosons and fermions, but also a mechanism for creating CP phases of the bosons and fermions, so that CP violation occurs, after spontaneous symmetry breaking, in all possible ways from a single CP phase of the vacuum and is generally classified into four types of CP-violating mechanism. A new type of CP-violating mechanism in the charged-Higgs-boson interactions of the fermions is emphasized and can provide a consistent description of both established and reported CP-, P-, and T-violating phenomena. Of particular importance is the new source of CP violation in charged-Higgs-boson interactions, which leads to a value of ε'/ε as large as 10^-3 independent of the CKM phase. copyright 1995 American Institute of Physics