Theory and application of deterministic multidimensional pointwise energy lattice physics methods
International Nuclear Information System (INIS)
Zerkle, M.L.
1999-01-01
The theory and application of deterministic, multidimensional, pointwise energy lattice physics methods are discussed. These methods may be used to solve the neutron transport equation in multidimensional geometries using near-continuous energy detail to calculate equivalent few-group diffusion theory constants that rigorously account for spatial and spectral self-shielding effects. A dual energy resolution slowing down algorithm is described which reduces the computer memory and disk storage requirements for the slowing down calculation. Results are presented for a 2D BWR pin cell depletion benchmark problem
International Nuclear Information System (INIS)
Hoisie, A.; Lubeck, O.; Wasserman, H.
1998-01-01
The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, the authors analyze two problem sizes. Their model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck. Single-node efficiency is the dominant factor
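The combined computation/communication cost per wavefront step can be sketched with a toy LogGP-style estimate (the parameter names L, o, G follow LogGP, but this cost structure is an assumed simplification for illustration, not the authors' validated model):

```python
# Hedged sketch of a LogGP-style wavefront timing model (illustrative
# assumption, not the paper's exact formulation).
def wavefront_time(px, py, n_steps, t_cell, cells_per_step, L, o, G, msg_bytes):
    """Estimate sweep time on a px-by-py processor grid.

    Each wavefront step costs one block of computation plus one
    boundary-exchange message; the 2-D pipeline needs (px-1)+(py-1)
    steps to fill before every processor is busy.
    """
    t_comp = cells_per_step * t_cell        # computation per wavefront step
    t_comm = 2 * o + L + G * msg_bytes      # LogGP cost of one message
    pipeline_fill = (px - 1) + (py - 1)     # steps before full concurrency
    return (pipeline_fill + n_steps) * (t_comp + t_comm)
```

Such a model makes the paper's observation quantitative: for large per-step work (`cells_per_step * t_cell` dominating `t_comm`), single-node compute time, not communication, dominates the total.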
Energy-pointwise discrete ordinates transport methods
International Nuclear Information System (INIS)
Williams, M.L.; Asgari, M.; Tashakorri, R.
1997-01-01
A very brief description is given of a one-dimensional code, CENTRM, which computes a detailed, space-dependent flux spectrum in a pointwise-energy representation within the resolved resonance range. The code will become a component in the SCALE system to improve computation of self-shielded cross sections, thereby enhancing the accuracy of codes such as KENO. CENTRM uses discrete-ordinates transport theory with an arbitrary angular quadrature order and a Legendre expansion of scattering anisotropy for moderator materials and heavy nuclides. The CENTRM program provides capability to deterministically compute full energy range, space-dependent angular flux spectra, rigorously accounting for resonance fine-structure and scattering anisotropy effects
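A generic discrete-ordinates ingredient of the kind such codes build on can be sketched for the simplest case, a single direction in a purely absorbing slab (a minimal sketch under stated assumptions; the function name and parameters are illustrative, not CENTRM's actual implementation):

```python
# Toy single-angle transport sweep in a pure absorber using the diamond
# difference scheme: mu*(psi_out - psi_in)/dx + sigma_t*(psi_in + psi_out)/2 = 0
# per cell. Illustrative only, not the CENTRM algorithm.
def sweep_pure_absorber(psi_in, sigma_t, width, ncells, mu):
    """March the angular flux for one direction mu > 0 across a slab."""
    dx = width / ncells
    psi = psi_in
    for _ in range(ncells):
        # closed-form diamond-difference update for the outgoing edge flux
        psi *= (mu / dx - sigma_t / 2.0) / (mu / dx + sigma_t / 2.0)
    return psi
```

On a fine mesh this reproduces the analytic attenuation exp(-sigma_t * width / mu) to second order in the cell size.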
Pointwise convergence of Fourier series
Arias de Reyna, Juan
2002-01-01
This book contains a detailed exposition of the Carleson-Hunt theorem following the proof of Carleson (to this day the only one giving the better bounds) and points out the motivation of every step in the proof. Thus the Carleson-Hunt theorem becomes accessible to any analyst. The book also contains the first detailed exposition of the fine results of Hunt, Sjölin, Soria, etc. on the convergence of Fourier series. Its final chapters present original material. With both Fefferman's proof and the recent one of Lacey and Thiele in print, it becomes more important than ever to understand and compare these two related proofs with that of Carleson and Hunt. These alternative proofs do not yield all the results of the Carleson-Hunt proof. The intention of this monograph is to make Carleson's proof accessible to a wider audience, and to explain its consequences for the pointwise convergence of Fourier series for functions in spaces near $\mathcal{L}^1$, filling a well-known gap in the literature.
Pointwise probability reinforcements for robust statistical inference.
Frénay, Benoît; Verleysen, Michel
2014-02-01
Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR, and a regularisation term controls the amount of reinforcement that compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method that can be formulated as a likelihood maximisation. Experiments show that PPRs can easily be used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually, since an abnormality degree is obtained for each observation.
Rapid pointwise stabilization of vibrating strings and beams
Directory of Open Access Journals (Sweden)
Alia BARHOUMI
2009-11-01
Applying a general construction and using earlier results on observability, we prove, under rather general assumptions, the rapid pointwise stabilization of vibrating strings and beams.
The undefined function differs from the pointwise undefined function
Dosch, Walter (Prof.)
1993-01-01
The undefined function differs from the pointwise undefined function. - In: Joint Conference on Declarative Programming : Proceedings / Maria I. Sessa ... (eds.). - Salerno : Univ. degli Studi, 1995. - S. 257-268
Morales, Esteban; de Leon, John Mark S; Abdollahi, Niloufar; Yu, Fei; Nouri-Mahdavi, Kouros; Caprioli, Joseph
2016-03-01
The study was conducted to evaluate threshold smoothing algorithms to enhance prediction of the rates of visual field (VF) worsening in glaucoma. We studied 798 patients with primary open-angle glaucoma and 6 or more years of follow-up who underwent 8 or more VF examinations. Thresholds at each VF location for the first 4 years or first half of the follow-up time (whichever was greater) were smoothed with clusters defined by the nearest neighbor (NN), Garway-Heath, Glaucoma Hemifield Test (GHT), and weighting by the correlation of rates at all other VF locations. Thresholds were regressed with a pointwise exponential regression (PER) model and a pointwise linear regression (PLR) model. Smaller root mean square error (RMSE) values of the differences between the observed and the predicted thresholds at the last two follow-ups indicated better model predictions. The mean (SD) follow-up times for the smoothing and prediction phases were 5.3 (1.5) and 10.5 (3.9) years. The mean RMSE values for the PER and PLR models were: unsmoothed data, 6.09 and 6.55; NN, 3.40 and 3.42; Garway-Heath, 3.47 and 3.48; GHT, 3.57 and 3.74; and correlation of rates, 3.59 and 3.64. Smoothed VF data predicted better than unsmoothed data. Nearest neighbor provided the best predictions; PER also predicted consistently more accurately than PLR. Smoothing algorithms should be used when forecasting VF results with PER or PLR. The application of smoothing algorithms to VF data can improve forecasting at VF points to assist in treatment decisions.
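The two per-location regression models compared in the study can be sketched as follows (a minimal stdlib sketch; the 1e-6 floor on thresholds before taking logs is an assumption for illustration, not the study's preprocessing):

```python
import math

# Pointwise linear regression (PLR): threshold = a + b*t, ordinary least squares.
def plr_fit(ts, ys):
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    b = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
         / sum((t - tbar) ** 2 for t in ts))
    return ybar - b * tbar, b          # intercept a, slope b

# Pointwise exponential regression (PER): threshold = A*exp(B*t),
# fitted as a line through the log-thresholds.
def per_fit(ts, ys):
    a, b = plr_fit(ts, [math.log(max(y, 1e-6)) for y in ys])
    return math.exp(a), b

# RMSE between predicted and observed thresholds at held-out follow-ups.
def rmse(predicted, observed):
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed))
            / len(observed)) ** 0.5
```

Fitting both models on the smoothed early thresholds and scoring RMSE on the final visits mirrors the study's comparison scheme.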
A computer program for the pointwise functions generation
International Nuclear Information System (INIS)
Caldeira, Alexandre D.
1995-01-01
A computer program that was developed with the objective of generating pointwise functions, by a combination of tabulated values and/or mathematical expressions, to be used as weighting functions for nuclear data is presented. This simple program can be an important tool for researchers involved in group constants generation. (author). 5 refs, 2 figs
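The combination of tabulated values and mathematical expressions into one pointwise function might be sketched like this (the names and the 1/E example spectrum are assumptions for illustration, not the program's actual interface):

```python
# Build a pointwise function from a table, then combine it with an
# analytic expression to form a weighting function.
def tabulated(table):
    """Piecewise-linear interpolant with constant extrapolation at the ends."""
    xs = sorted(table)
    def f(x):
        if x <= xs[0]:
            return table[xs[0]]
        if x >= xs[-1]:
            return table[xs[-1]]
        for lo, hi in zip(xs, xs[1:]):
            if lo <= x <= hi:
                w = (x - lo) / (hi - lo)
                return (1.0 - w) * table[lo] + w * table[hi]
    return f

def combine(f, g, op=lambda a, b: a * b):
    """Pointwise combination (product by default) of two functions."""
    return lambda x: op(f(x), g(x))

# e.g. a tabulated correction times an assumed 1/E slowing-down spectrum:
weight = combine(tabulated({1.0: 2.0, 10.0: 4.0}), lambda E: 1.0 / E)
```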
Learning With Mixed Hard/Soft Pointwise Constraints.
Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello
2015-09-01
A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function), play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the door to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
A Point-Wise Quantification of Asymmetry Using Deformation Fields
DEFF Research Database (Denmark)
Ólafsdóttir, Hildur; Lanche, Stephanie; Darvann, Tron Andre
2007-01-01
of the resulting displacement vectors on the left and right side of the symmetry plane, gives a point-wise measure of asymmetry. The asymmetry measure was applied to the study of Crouzon syndrome using Micro CT scans of genetically modified mice. Crouzon syndrome is characterised by the premature fusion of cranial...
Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
Finn, Conor; Lizier, Joseph
2018-04-01
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
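The basic two-variable split can be sketched for a single realisation (this shows only the specificity/ambiguity decomposition of pointwise mutual information, not the full lattice machinery of the paper):

```python
import math

# For one realisation (s, t), pointwise mutual information splits into two
# unsigned entropic components: i(s;t) = h(s) - h(s|t), where h(s) is the
# "specificity" and h(s|t) the "ambiguity".
def pointwise_split(p_joint, s, t):
    """p_joint: dict mapping (s, t) pairs to probabilities."""
    p_s = sum(p for (si, _), p in p_joint.items() if si == s)
    p_t = sum(p for (_, ti), p in p_joint.items() if ti == t)
    p_st = p_joint[(s, t)]
    specificity = -math.log2(p_s)           # h(s), always nonnegative
    ambiguity = -math.log2(p_st / p_t)      # h(s|t), always nonnegative
    return specificity, ambiguity, specificity - ambiguity  # last = i(s;t)
```

For the copy example, every realisation has specificity 1 bit and ambiguity 0 bits, so the (possibly signed) pointwise mutual information is recovered as their difference.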
Zanni, Martin Thomas; Damrauer, Niels H.
2010-07-20
A multidimensional spectrometer for the infrared, visible, and ultraviolet regions of the electromagnetic spectrum, and a method for making multidimensional spectroscopic measurements in the infrared, visible, and ultraviolet regions of the electromagnetic spectrum. The multidimensional spectrometer facilitates measurements of inter- and intra-molecular interactions.
Directory of Open Access Journals (Sweden)
W. Łenski
2015-01-01
Results generalizing some theorems on (N, p_n)(E, γ) summability are shown. The same degrees of pointwise approximation as in earlier papers are obtained under weaker assumptions on the functions considered and the summability methods examined. From the pointwise results presented, an estimate on the norm approximation is derived. Some special cases are also formulated as corollaries.
Pointwise convergence and Ascoli theorems for nearness spaces
Directory of Open Access Journals (Sweden)
Zhanbo Yang
2009-04-01
We first study subspaces and product spaces in the context of nearness spaces and prove that U-N spaces, C-N spaces, PN spaces and totally bounded nearness spaces are nearness-hereditary, and that T-N spaces and compact nearness spaces are N-closed hereditary. We prove that N2 plus compactness implies N-closed subsets, and that total boundedness, compactness and N2 are productive. We generalize the concept of neighborhood systems to nearness spaces and prove that nearness neighborhood systems are consistent with the existing concepts of neighborhood systems in topological spaces, uniform spaces and proximity spaces, respectively, when considered in the respective subcategories. We prove that a net of functions converges under the pointwise-convergence nearness structure if and only if its cross-section at each point converges. We also prove two Ascoli-Arzelà-type theorems.
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.
A Pointwise Dimension Analysis of the Las Campanas Redshift Survey
Best, J. S.
1999-12-01
The modern motivation for fractal geometry may best be summed up by this quote of Benoit Mandelbrot: ``Mountains are not cones, clouds are not spheres, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.'' Fractals are, in simplest terms, ``objects which are (approximately) self-similar on all scales.'' The renewed modern interest in fractals has found as one of its applications the study of large-scale structure, giving a quantitative descriptive scheme to ideas that had been expressed qualitatively as early as the 1920s. This paper presents the preliminary results of an analysis of the structure of the Las Campanas Redshift Survey, or LCRS. The LCRS is a survey of approximately 26,000 galaxies (observed as six declination slices) that has been studied extensively over the past few years, with an eye towards understanding large-scale structure. For this analysis, I have used the pointwise dimension, an easy-to-apply fractal statistic which has previously been used to study cluster interiors, galactic distributions, and cluster distributions. The present analysis has been performed to serve as a guide for the study of future large redshift surveys. This research has been funded by National Science Foundation grant AST-9808608.
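The pointwise dimension statistic can be sketched with a simple two-radius estimate (an illustrative sketch; a real analysis fits the full scaling of log N(x, r) against log r rather than using just two radii):

```python
import math
import random

# If the number of neighbours scales as N(x, r) ~ r^D, then
# D ~ log(N2/N1) / log(r2/r1) estimates the pointwise dimension at x.
def pointwise_dimension(points, x, r1, r2):
    def count(r):
        return sum(1 for p in points if math.dist(p, x) <= r)
    n1, n2 = count(r1), count(r2)
    return math.log(n2 / n1) / math.log(r2 / r1)

# Points filling a plane should give D ~ 2; points on a line, D ~ 1.
random.seed(0)
square = [(random.random(), random.random()) for _ in range(20000)]
d2 = pointwise_dimension(square, (0.5, 0.5), 0.05, 0.2)
```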
Czech Academy of Sciences Publication Activity Database
Světlák, M.; Bob, P.; Roman, R.; Ježek, S.; Damborská, A.; Chládek, Jan; Shaw, D. J.; Kukleta, M.
2013-01-01
Roč. 62, č. 6 (2013), s. 711-719 ISSN 0862-8408 Institutional support: RVO:68081731 Keywords : electrodermal activity * pointwise trasinformation * autonomic nervous system * asymmetry * stress Subject RIV: CE - Biochemistry Impact factor: 1.487, year: 2013
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black-box access to the algorithm. We show a necessary an...
Advances in stochastic and deterministic global optimization
Zhigljavsky, Anatoly; Žilinskas, Julius
2016-01-01
Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...
ALTERNATE PURSUIT WITH THREE PARTICIPANTS (THE CASE OF POINTWISE MEETING)
Directory of Open Access Journals (Sweden)
Viktor Shiryayev
2016-03-01
The problems connected with the alternate pursuit of a group of escapees have been considered in a number of papers. In papers [1-3] the solution of the problem was found under the assumption that the next meeting is selected at the initial time (by a program choice) and the players move along straight lines. In paper [4] a solution using the approach of R. Isaacs is given. In paper [5] the possibilities for choosing the next meeting (both program and positional) are considered. This article deals with a simple differential game in the plane between a pursuer P and a coalition of two escapees E = {E1, E2}. The motion of all players is assumed to be inertialess. The speed of the pursuer P exceeds the speed of each escapee. The goals, physical capabilities and exact locations of the players at every moment of the game are known to all of them. The price of the game for the coalition (respectively, for the pursuer P) is the total time (respectively, minus the total time) spent by the pursuer P on the pointwise meetings with E1 and E2, where a meeting means the coincidence of the pursuer's and an escapee's locations. The order of the meetings is assumed to be chosen at the initial time of the pursuit (a program choice of the next meeting). The boundary of the security zone of the second escapee is found. A geometric approach is used in solving the problem, and the resulting system of equations is solved numerically by means of computer algebra, in particular Wolfram Mathematica. After the boundary of the second escapee's security zone is determined, one can study the game between the pursuer P and three escapees acting in concert (the first escapee having been eliminated from the game).
Deterministic Compressed Sensing
2011-11-01
[Front-matter fragments: table-of-contents entries (Digital Communications; Group Testing; deterministic design matrices, with a note that all bounds ignore the O() constants; an Iterative Hard Thresholding Algorithm listing) and a truncated passage stating that compressed sensing is information-theoretically possible using any (2k, )-RIP sensing matrix, citing celebrated results of Candès, Romberg and Tao.]
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
International Nuclear Information System (INIS)
1990-01-01
In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether for specific tissues the present dose limits or annual limits of intake based on Q values, are adequate to prevent deterministic effects. (author)
Deterministic behavioural models for concurrency
DEFF Research Database (Denmark)
Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn
1993-01-01
This paper offers three candidates for a deterministic, noninterleaving behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled event structures, generalized trace languages in which the independence relation is context-dependent, and deterministic languages of pomsets.
Pointwise Multipliers on Spaces of Homogeneous Type in the Sense of Coifman and Weiss
Directory of Open Access Journals (Sweden)
Yanchang Han
2014-01-01
homogeneous type in the sense of Coifman and Weiss, pointwise multipliers of inhomogeneous Besov and Triebel-Lizorkin spaces are obtained. We make no additional assumptions on the quasi-metric or the doubling measure. Hence, the results of this paper extend earlier related results to a more general setting.
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo's classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn's deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm.
Directory of Open Access Journals (Sweden)
Alexis Cedeño Trujillo
2006-04-01
Data warehousing is a technology for storing large volumes of data over a broad time horizon to support decision making. Because of its analytical orientation, it imposes a kind of processing different from that of operational systems and requires a database design closer to the end users' view, making information retrieval and navigation easier. This database design is known as the multidimensional model, and this article covers its main characteristics.
Streamflow disaggregation: a nonlinear deterministic approach
Directory of Open Access Journals (Sweden)
B. Sivakumar
2004-01-01
This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space to represent the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further look into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
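The two steps, delay embedding plus local approximation, can be sketched as follows (the embedding dimension m and neighbor count k are illustrative choices, not the paper's calibrated values):

```python
# Minimal nearest-neighbour prediction in an m-dimensional delay embedding:
# find past phase-space states closest to the current one and average
# what followed them (a local zeroth-order approximation).
def predict_next(series, m=3, k=5):
    target = tuple(series[-m:])
    candidates = []
    for i in range(len(series) - m):
        vec = series[i:i + m]
        dist = sum((a - b) ** 2 for a, b in zip(vec, target)) ** 0.5
        candidates.append((dist, series[i + m]))
    candidates.sort()                      # closest phase-space states first
    nearest = candidates[:k]
    return sum(v for _, v in nearest) / len(nearest)
```

On a deterministic series, past states close to the current one in phase space are followed by nearly the same value, which is the property the disaggregation approach exploits.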
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-12-01
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well-known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions, compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
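The derivative-based ingredient of such an approach can be sketched with first-order variance propagation (this illustrates the use of sensitivity data only; it is not the full DUA response-surface method, and the function names are assumptions):

```python
# Finite-difference sensitivities feeding first-order ("sandwich")
# uncertainty propagation for independent inputs:
#   Var[y] ~ sum_i (dy/dx_i)^2 * Var[x_i]
def gradient_fd(f, x, h=1e-6):
    """One-sided finite-difference gradient of f at the point x (a list)."""
    base = f(x)
    return [(f(x[:i] + [xi + h] + x[i + 1:]) - base) / h
            for i, xi in enumerate(x)]

def first_order_variance(gradient, variances):
    return sum(g * g * v for g, v in zip(gradient, variances))
```

A single reference run plus one perturbed run per input suffices to form this estimate, which is why derivative-based methods need so few model executions.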
Height-Deterministic Pushdown Automata
DEFF Research Database (Denmark)
Nowotka, Dirk; Srba, Jiri
2007-01-01
We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of regular languages and still closed under boolean language operations, are considered. Several such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata.
Balsara, Dinshaw S.; Nkonga, Boniface
2017-10-01
Just as the quality of a one-dimensional approximate Riemann solver is improved by the inclusion of internal sub-structure, the quality of a multidimensional Riemann solver is also similarly improved. Such multidimensional Riemann problems arise when multiple states come together at the vertex of a mesh. The interaction of the resulting one-dimensional Riemann problems gives rise to a strongly-interacting state. We wish to endow this strongly-interacting state with physically-motivated sub-structure. The fastest way of endowing such sub-structure consists of making a multidimensional extension of the HLLI Riemann solver for hyperbolic conservation laws. Presenting such a multidimensional analogue of the HLLI Riemann solver with linear sub-structure for use on structured meshes is the goal of this work. The multidimensional MuSIC Riemann solver documented here is universal in the sense that it can be applied to any hyperbolic conservation law. The multidimensional Riemann solver is made to be consistent with constraints that emerge naturally from the Galerkin projection of the self-similar states within the wave model. When the full eigenstructure in both directions is used in the present Riemann solver, it becomes a complete Riemann solver in a multidimensional sense. I.e., all the intermediate waves are represented in the multidimensional wave model. The work also presents, for the very first time, an important analysis of the dissipation characteristics of multidimensional Riemann solvers. The present Riemann solver results in the most efficient implementation of a multidimensional Riemann solver with sub-structure. Because it preserves stationary linearly degenerate waves, it might also help with well-balancing. Implementation-related details are presented in pointwise fashion for the one-dimensional HLLI Riemann solver as well as the multidimensional MuSIC Riemann solver.
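For context, the plain two-wave HLL flux that solvers such as HLLI enrich with sub-structure looks like this for a scalar conservation law u_t + f(u)_x = 0 (a textbook building block shown for orientation, not the multidimensional MuSIC solver itself):

```python
# Classical two-wave HLL flux: a single constant intermediate state between
# wave-speed bounds sL < sR. Sub-structured solvers (HLLI, MuSIC) replace
# this constant state with a linear profile so intermediate waves survive.
def hll_flux(uL, uR, f, sL, sR):
    """Interface flux from left/right states and wave-speed bounds."""
    if sL >= 0.0:
        return f(uL)                # all waves move right
    if sR <= 0.0:
        return f(uR)                # all waves move left
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)
```

The flux is consistent (it reduces to f(u) when uL = uR), which is the property any added sub-structure must preserve.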
Deterministic methods in radiation transport
International Nuclear Information System (INIS)
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community
Pointwise Stabilization of a Hybrid System and Optimal Location of Actuator
International Nuclear Information System (INIS)
Ammari, Kais; Saidi, Abdelkader
2007-01-01
We consider a pointwise stabilization problem for a model arising in the control of noise. We prove that exponential stability holds for the low frequencies but not for the high frequencies. We then give an explicit polynomial decay estimate at high frequencies, valid for regular initial data, and clarify how the constant appearing in this estimate behaves as a function of the cut-off frequency. We propose a numerical approximation of the model and numerically study the best location of the actuator at low frequencies.
Multidimensional Heat Conduction
DEFF Research Database (Denmark)
Rode, Carsten
1998-01-01
Analytical theory of multidimensional heat conduction. General heat conduction equation in three dimensions. Steady state, analytical solutions. The Laplace equation. Method of separation of variables. Principle of superposition. Shape factors. Transient, multidimensional heat conduction.
Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation
Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien
2018-04-01
We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.
Common Fixed Points for Asymptotic Pointwise Nonexpansive Mappings in Metric and Banach Spaces
Directory of Open Access Journals (Sweden)
P. Pasom
2012-01-01
Let C be a nonempty bounded closed convex subset of a complete CAT(0) space X. We prove that the common fixed point set of any commuting family of asymptotic pointwise nonexpansive mappings on C is nonempty, closed, and convex. We also show that, under some suitable conditions, the sequence {x_k}_{k=1}^∞ defined by x_{k+1} = (1−t_{mk})x_k ⊕ t_{mk}T_m^{n_k}y_{(m−1)k}, y_{(m−1)k} = (1−t_{(m−1)k})x_k ⊕ t_{(m−1)k}T_{m−1}^{n_k}y_{(m−2)k}, y_{(m−2)k} = (1−t_{(m−2)k})x_k ⊕ t_{(m−2)k}T_{m−2}^{n_k}y_{(m−3)k}, …, y_{2k} = (1−t_{2k})x_k ⊕ t_{2k}T_2^{n_k}y_{1k}, y_{1k} = (1−t_{1k})x_k ⊕ t_{1k}T_1^{n_k}y_{0k}, y_{0k} = x_k, k ∈ ℕ, converges to a common fixed point of T_1, T_2, …, T_m, where T_1, …, T_m are asymptotic pointwise nonexpansive mappings on C, {t_{ik}}_{k=1}^∞ are sequences in [0,1] for all i = 1, 2, …, m, and {n_k} is an increasing sequence of natural numbers. The related results for uniformly convex Banach spaces are also included.
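A hedged one-dimensional illustration of the iteration scheme in the abstract above, specialized to m = 2: on the real line the CAT(0) combination a ⊕ b reduces to an ordinary convex combination, and T_1, T_2 below are made-up nonexpansive maps sharing the fixed point 0 (the step sizes t_{ik} are also arbitrary choices).

```python
import math

# Illustrative sketch, not the paper's setting: two nonexpansive maps on R
# with common fixed point 0, iterated with the scheme from the abstract.
def T1(x): return x / 2.0
def T2(x): return math.sin(x)

def iterate_map(T, n, x):
    """n-th iterate T^n(x)."""
    for _ in range(n):
        x = T(x)
    return x

x = 1.0            # x_1
t1, t2 = 0.5, 0.5  # t_{1k}, t_{2k} held constant for the sketch
for k in range(1, 40):
    y0 = x                                           # y_{0k} = x_k
    y1 = (1 - t1) * x + t1 * iterate_map(T1, k, y0)  # y_{1k}
    x = (1 - t2) * x + t2 * iterate_map(T2, k, y1)   # x_{k+1}
print(x)  # approaches the common fixed point 0
```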
Converting point-wise nuclear cross sections to pole representation using regularized vector fitting
Peng, Xingjie; Ducru, Pablo; Liu, Shichang; Forget, Benoit; Liang, Jingang; Smith, Kord
2018-03-01
Direct Doppler broadening of nuclear cross sections in Monte Carlo codes has been widely sought for coupled reactor simulations. One recent approach proposed analytical broadening using a pole representation of the commonly used resonance models and the introduction of a local windowing scheme to improve performance (Hwang, 1987; Forget et al., 2014; Josey et al., 2015, 2016). This pole representation has been achieved in the past by converting resonance parameters in the evaluation nuclear data library into poles and residues. However, cross sections of some isotopes are only provided as point-wise data in ENDF/B-VII.1 library. To convert these isotopes to pole representation, a recent approach has been proposed using the relaxed vector fitting (RVF) algorithm (Gustavsen and Semlyen, 1999; Gustavsen, 2006; Liu et al., 2018). This approach however needs to specify ahead of time the number of poles. This article addresses this issue by adding a poles and residues filtering step to the RVF procedure. This regularized VF (ReV-Fit) algorithm is shown to efficiently converge the poles close to the physical ones, eliminating most of the superfluous poles, and thus enabling the conversion of point-wise nuclear cross sections.
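As a hedged illustration of the pole-residue form that vector-fitting algorithms produce (this is not the ReV-Fit code, and the poles and residues below are made-up numbers): a response f(s) = Σ_j r_j / (s − p_j), with conjugate pole pairs carrying conjugate residues so the response is real for real s. For a cross section, s would be a function of energy, e.g. sqrt(E).

```python
import numpy as np

# Hypothetical complex-conjugate pole pairs p_j and residues r_j.
poles = np.array([-0.05 + 1.0j, -0.05 - 1.0j,
                  -0.10 + 3.0j, -0.10 - 3.0j])
residues = np.array([0.5 + 0.1j, 0.5 - 0.1j,
                     1.0 + 0.2j, 1.0 - 0.2j])

def f(s):
    """Evaluate the pole-residue expansion f(s) = sum_j r_j / (s - p_j)."""
    s = np.atleast_1d(s).astype(complex)
    return np.sum(residues / (s[:, None] - poles), axis=1)

# sweep along the imaginary axis; sharp features appear near the poles
s = 1j * np.linspace(0.1, 5.0, 200)
response = f(s)
```

A fitting step (vector fitting proper) would adjust the p_j and r_j to match tabulated point-wise data; the regularization discussed in the abstract then prunes superfluous poles from that fit.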
Criticality benchmarks for COG: A new point-wise Monte Carlo code
International Nuclear Information System (INIS)
Alesso, H.P.; Pearson, J.; Choi, J.S.
1989-01-01
COG is a new point-wise Monte Carlo code being developed and tested at LLNL for the Cray computer. It solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) charged particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems. However, its point-wise cross-sections also make it effective for a wide variety of criticality problems. COG has some similarities to a number of other computer codes used in the shielding and criticality community. These include the Lawrence Livermore National Laboratory (LLNL) codes TART and ALICE, the Los Alamos National Laboratory code MCNP, the Oak Ridge National Laboratory codes 05R, 06R, KENO, and MORSE, the SACLAY code TRIPOLI, and the MAGI code SAM. Each code differs somewhat in its geometry input and its random-walk modification options. Validating COG consists in part of running benchmark calculations against critical experiments as well as against other codes. The objective of this paper is to present calculational results for a variety of critical benchmark experiments using COG, and to present the resulting code bias. Numerous benchmark calculations have been completed for a wide variety of critical experiments involving both simple and complex physical problems. The COG results reported in this paper are excellent.
Deterministic indexing for packed strings
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye
2017-01-01
Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time...... or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ...
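The "packed" idea can be sketched concretely: store several characters in one machine word, so a single word comparison inspects many characters at once. The sketch below is illustrative Python, not the index of Bille et al.

```python
# Pack 8 bytes per integer word and compare strings word-by-word.
def pack(s: bytes, w: int = 8) -> list[int]:
    """Pack w bytes per integer word (little-endian within the word)."""
    words = []
    for i in range(0, len(s), w):
        chunk = s[i:i + w].ljust(w, b'\0')  # pad the last word with zeros
        words.append(int.from_bytes(chunk, 'little'))
    return words

def packed_equal(a: bytes, b: bytes) -> bool:
    """One integer comparison covers up to 8 characters at a time."""
    if len(a) != len(b):
        return False
    return pack(a) == pack(b)

print(packed_equal(b"deterministic", b"deterministic"))  # True
```

A real packed index exploits the same word-level parallelism inside its query algorithm, not just for equality tests.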
Nonlinear Markov processes: Deterministic case
International Nuclear Information System (INIS)
Frank, T.D.
2008-01-01
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution
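A hedged toy example of the kind of process the abstract describes (this construction is mine, not from the paper): a two-state chain whose transition matrix K depends on the current probability density p, so the density evolves as p ← p K(p). Because K moves with p, more than one stationary distribution can coexist (multistability).

```python
import numpy as np

def K(p):
    # hypothetical nonlinearity: state 0 becomes fully absorbing
    # once it holds enough probability mass
    a = min(1.0, 0.2 + p[0])          # P(stay) for both states
    return np.array([[a, 1.0 - a],
                     [1.0 - a, a]])

def evolve(p, steps=200):
    # propagate the density under the density-dependent kernel
    for _ in range(steps):
        p = p @ K(p)
    return p

high = evolve(np.array([0.9, 0.1]))  # a = 1 there, so (0.9, 0.1) is stationary
low = evolve(np.array([0.1, 0.9]))   # relaxes to the symmetric density (0.5, 0.5)
```

Starting densities in different basins of attraction settle on different stationary distributions, illustrating the multistability and basin-of-attraction questions raised in the abstract.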
A fast pointwise strategy for anisotropic wave-mode separation in TI media
Liu, Qiancheng; Peter, Daniel; Lu, Yongming
2017-08-17
The multi-component wavefield contains both compressional and shear waves. Separating wave-modes has many applications in seismic workflows. Conventionally, anisotropic wave-mode separation is implemented by either directly filtering in the wavenumber domain or nonstationary filtering in the space domain, which are computationally expensive. These methods could be categorized into the pseudo-derivative family and only work well within Finite Difference (FD) methods. In this paper, we establish a relationship between group-velocity direction and polarity direction and propose a method, which could go beyond modeling by FD. In particular, we are interested in performing wave-mode separation in a Spectral Element Method (SEM), which is widely used for seismic wave propagation on various scales. The separation is implemented pointwise, independent of its neighbor points, suitable for running in parallel. Moreover, no correction for amplitude and phase changes caused by the derivative operator is required. We have verified our scheme using numerical examples.
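The pointwise character of the separation can be sketched as a projection: at each grid point, the vector wavefield is split along a local polarization direction with no reference to neighbouring points. The paper's contribution is relating the group-velocity direction to the polarization direction in TI media; the sketch below uses the isotropic simplification where the two coincide, and is not the SEM implementation.

```python
import numpy as np

def separate(u, d):
    """Pointwise P/S split of wavefield vector u along polarization d."""
    d = d / np.linalg.norm(d)
    p = np.dot(u, d) * d      # compressional (P) component, parallel to d
    s = u - p                 # shear (S) component, orthogonal to d
    return p, s

u = np.array([3.0, 1.0])      # wavefield sample at one point (assumed values)
d = np.array([1.0, 0.0])      # assumed local polarization direction
p, s = separate(u, d)
# p + s reconstructs u exactly; each point is processed independently,
# which is why the scheme parallelizes trivially
```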
Chernyshov, A. D.; Goryainov, V. V.; Danshin, A. A.
2018-03-01
The stress problem for an elastic wedge-shaped cutter of finite dimensions with mixed boundary conditions is considered. The differential problem is reduced to a system of linear algebraic equations by applying the fast expansions twice, with respect to the angular and the radial coordinate. In order to determine the unknown coefficients of the fast expansions, the pointwise method is utilized. The derived solution has an explicit analytical form and is valid over the entire domain, including its boundary. The computed profiles of the displacements and stresses in a cross-section of the cutter are provided. The stress field is investigated for various values of the opening angle and the cusp radius.
Frankowska, Hélène; Hoehener, Daniel
2017-06-01
This paper is devoted to pointwise second-order necessary optimality conditions for the Mayer problem arising in optimal control theory. We first show that with every optimal trajectory it is possible to associate a solution p(·) of the adjoint system (as in the Pontryagin maximum principle) and a matrix solution W(·) of an adjoint matrix differential equation that satisfy a second-order transversality condition and a second-order maximality condition. These conditions seem to be a natural second-order extension of the maximum principle. We then prove a Jacobson-like necessary optimality condition for general control systems and measurable optimal controls that may be only "partially singular" and may take values on the boundary of control constraints. Finally we investigate the second-order sensitivity relations along optimal trajectories involving both p(·) and W(·).
Directory of Open Access Journals (Sweden)
Irwin Yousept
2010-07-01
An optimal control problem arising in the context of 3D electromagnetic induction heating is investigated. The state equation is given by a quasilinear stationary heat equation coupled with a semilinear time-harmonic eddy current equation. The temperature-dependent electrical conductivity and the presence of pointwise inequality state-constraints represent the main challenge of the paper. In the first part of the paper, the existence and regularity of the state are addressed. The second part of the paper deals with the analysis of the corresponding linearized equation. Some sufficient conditions are presented which guarantee the solvability of the linearized system. The final part of the paper is concerned with the optimal control. The aim of the optimization is to find the optimal voltage such that a desired temperature can be achieved optimally. The corresponding first-order necessary optimality condition is presented.
Summary - COG: A new point-wise Monte Carlo code for burnup credit analysis
International Nuclear Information System (INIS)
Alesso, H.P.
1989-01-01
COG, a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL) for the Cray-1, solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) other particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems and a wide variety of criticality problems. COG is similar to a number of other computer codes used in the shielding community. Each code differs somewhat in its geometry input and its random-walk modification options. COG is a Monte Carlo code specifically designed for the CRAY (in 1986) to be as precise as the current state of physics knowledge allows. It has been extensively benchmarked and used as a shielding code at LLNL since 1986, and has recently been extended to accomplish criticality calculations. It will make an excellent tool for future shipping cask studies.
Measuring global oil trade dependencies: An application of the point-wise mutual information method
International Nuclear Information System (INIS)
Kharrazi, Ali; Fath, Brian D.
2016-01-01
Oil trade is one of the most vital networks in the global economy. In this paper, we analyze the 1998–2012 oil trade networks using the point-wise mutual information (PMI) method and determine the pairwise trade preferences and dependencies. Using examples of the USA's trade partners, this research demonstrates the usefulness of the PMI method as an additional methodological tool to evaluate the outcomes of countries' decisions to engage with preferred trading partners. A positive PMI value indicates trade preference, where trade is larger than would be expected. For example, in 2012 the USA imported 2,548.7 kbpd of oil from Canada against an expected 358.5 kbpd. Conversely, a negative PMI value indicates trade dis-preference, where the amount of trade is smaller than would be expected. For example, the 15-year average of annual PMI between Saudi Arabia and the USA is −0.130, and between Russia and the USA −1.596. We suggest that discrepancies between actual trade and the neutral model arise from three primary factors: position, price, and politics. The PMI can quantify the political success or failure of trade preferences and can more accurately account for temporal variation of interdependencies. - Highlights: • We analyzed global oil trade networks using the point-wise mutual information method. • We identified position, price, & politics as drivers of oil trade preference. • The PMI method is useful in research on complex trade networks and dependency theory. • A time-series analysis of PMI can track dependencies & evaluate policy decisions.
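The PMI computation described above can be sketched from a trade-flow matrix: PMI(i, j) = log(p(i, j) / (p_exp(i) · p_imp(j))), where the denominator is the "neutral model" expectation from the marginals. The flow numbers below are made up for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical oil flows (kbpd), exporters (rows) x importers (columns).
T = np.array([[0.0, 30.0, 10.0],
              [20.0, 0.0, 40.0],
              [5.0, 15.0, 0.0]])

p_joint = T / T.sum()                        # joint flow distribution
p_exp = p_joint.sum(axis=1, keepdims=True)   # exporter marginal
p_imp = p_joint.sum(axis=0, keepdims=True)   # importer marginal

with np.errstate(divide='ignore'):           # zero flows give PMI = -inf
    pmi = np.log(p_joint / (p_exp * p_imp))

# pmi[i, j] > 0: i exports to j more than the neutral model expects
# pmi[i, j] < 0: trade is smaller than expected (dis-preference)
```

Repeating this per year yields the time series of pairwise dependencies that the highlights refer to.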
Deterministic extraction from weak random sources
Gabizon, Ariel
2011-01-01
In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.
Deterministic hydrodynamics: Taking blood apart
Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.
2006-10-01
We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. We use the technology we developed of deterministic arrays which separate white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. cells | plasma | separation | microfabrication
ICRP (1991) and deterministic effects
International Nuclear Information System (INIS)
Mole, R.H.
1992-01-01
A critical review of ICRP Publication 60 (1991) shows that considerable revisions are needed in both language and thinking about deterministic effects (DE). ICRP (1991) makes a welcome and clear distinction between change, caused by irradiation; damage, some degree of deleterious change, for example to cells, but not necessarily deleterious to the exposed individual; harm, clinically observable deleterious effects expressed in individuals or their descendants; and detriment, a complex concept combining the probability, severity and time of expression of harm (para 42). (All added emphases come from the author.) Unfortunately these distinctions are not carried through into the discussion of deterministic effects (DE), and two important terms are left undefined. Presumably effect may refer to change, damage, harm or detriment, according to context. Clinically observable is also undefined, although its meaning is crucial to any consideration of DE since DE are defined as causing observable harm (para 20). (Author)
Directory of Open Access Journals (Sweden)
Mihaela MUNTEAN
2006-01-01
Using SQL you can manipulate multidimensional data and extract that data into a relational table. There are many PL/SQL packages that you can use directly in SQL*Plus or indirectly in Analytic Workspace Manager and OLAP Worksheet. In this article I discuss some methods that you can use for manipulating and extracting multidimensional data.
Multidimensional high harmonic spectroscopy
International Nuclear Information System (INIS)
Bruner, Barry D; Soifer, Hadas; Shafir, Dror; Dudovich, Nirit; Serbinenko, Valeria; Smirnova, Olga
2015-01-01
High harmonic generation (HHG) has opened up a new frontier in ultrafast science where attosecond time resolution and Angstrom spatial resolution are accessible in a single measurement. However, reconstructing the dynamics under study is limited by the multiple degrees of freedom involved in strong field interactions. In this paper we describe a new class of measurement schemes for resolving attosecond dynamics, integrating perturbative nonlinear optics with strong-field physics. These approaches serve as a basis for multidimensional high harmonic spectroscopy. Specifically, we show that multidimensional high harmonic spectroscopy can measure tunnel ionization dynamics with high precision, and resolves the interference between multiple ionization channels. In addition, we show how multidimensional HHG can function as a type of lock-in amplifier measurement. Similar to multi-dimensional approaches in nonlinear optical spectroscopy that have resolved correlated femtosecond dynamics, multi-dimensional high harmonic spectroscopy reveals the underlying complex dynamics behind attosecond scale phenomena. (paper)
Deterministic chaos in entangled eigenstates
Schlegel, K. G.; Förster, S.
2008-05-01
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show for a two particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigen-values the maximum Lyapunov-parameters of a representative ensemble of trajectories for large times develops to a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also present in short results from two time-dependent systems, the anisotropic and the Rabi oscillator.
A deterministic width function model
Directory of Open Access Journals (Sweden)
C. E. Puente
2003-01-01
Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features, such as their overall shape and texture and the observed power-law scaling of their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
Světlák, M; Bob, P; Roman, R; Ježek, S; Damborská, A; Chládek, J; Shaw, D J; Kukleta, M
2013-01-01
In this study, we tested the hypothesis that experimental stress induces a specific change of left-right electrodermal activity (EDA) coupling pattern, as indexed by pointwise transinformation (PTI). Further, we hypothesized that this change is associated with scores on psychometric measures of the chronic stress-related psychopathology. Ninety-nine university students underwent bilateral measurement of EDA during rest and stress-inducing Stroop test and completed a battery of self-report measures of chronic stress-related psychopathology. A significant decrease in the mean PTI value was the prevalent response to the stress conditions. No association between chronic stress and PTI was found. Raw scores of psychometric measures of stress-related psychopathology had no effect on either the resting levels of PTI or the amount of stress-induced PTI change. In summary, acute stress alters the level of coupling pattern of cortico-autonomic influences on the left and right sympathetic pathways to the palmar sweat glands. Different results obtained using the PTI, EDA laterality coefficient, and skin conductance level also show that the PTI algorithm represents a new analytical approach to EDA asymmetry description.
International Nuclear Information System (INIS)
Buchhardt, F.; Brandl, P.
1981-01-01
In the application of reinforced or prestressed concrete reactor containments, the safety enclosure will be obtained through a steel liner membrane, which is attached pointwise to the interior concrete surface. The objective of this study is to analyse the overall structural behaviour of the bonded system consisting of concrete containment, studs, and steel liner, especially under extreme load and deformation conditions. The parametric analysis is carried out on the basis of the geometric length/depth ratio l/t = 12 of a single liner field. In order to reduce the considerable computational effort to a minimum, it is necessary to decouple the overall system into its structural components, i.e., at first an imperfect predeflected 'buckling' field and the residual 'plane' liner field are considered separately. A further reduction enables the use of stud anchor characteristics which are based on experiments. Three-dimensional analyses are performed for the single 'buckling' field to obtain specific load-displacement functions; the residual plane system is considered with two- as well as one-dimensional models. For the comprehensive parametric evaluation of the overall system behaviour, a linear model is assumed to which these load-displacement functions are applied. Constraint temperatures are introduced as a unit scale - up to failure of the overall system; hereby partial structural failure might lead to temporary relief. (orig.)
Deterministic global optimization an introduction to the diagonal approach
Sergeyev, Yaroslav D
2017-01-01
This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...
Directory of Open Access Journals (Sweden)
Pavlo O. Kasyanov
2012-01-01
We consider autonomous evolution inclusions and hemivariational inequalities with nonsmooth dependence between determinative parameters of a problem. The dynamics of all weak solutions defined on the positive semiaxis of time is studied. We prove the existence of trajectory and global attractors and investigate their structure. New properties of complete trajectories are justified. We study classes of mathematical models for geophysical processes and fields containing the multidimensional "reaction-displacement" law as one possible application. The pointwise behavior of such problem solutions on the attractor is described.
Integrated Deterministic-Probabilistic Safety Assessment Methodologies
Energy Technology Data Exchange (ETDEWEB)
Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.
2014-02-01
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address respective sources of uncertainties, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)
A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT
International Nuclear Information System (INIS)
Goluoglu, S.; Bentley, C.; DeMeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H.L.
1998-01-01
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position-, energy-, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. Columnwise rod movement can also be modeled. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems.
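The improved quasi-static method factorizes the flux into a slowly varying shape function and a rapidly varying amplitude. As a hedged illustration of the amplitude part only (this is a sketch, not TDTORT): the amplitude obeys point-kinetics-like equations, shown here with one delayed-neutron group, made-up kinetics parameters, and a simple implicit Euler step.

```python
import numpy as np

# Hypothetical kinetics parameters: delayed fraction beta, precursor
# decay constant lam (1/s), neutron generation time L (s).
beta, lam, L = 0.0065, 0.08, 1e-5

def step(n, c, rho, dt):
    # implicit Euler for  dn/dt = ((rho - beta)/L) n + lam c
    #                     dc/dt = (beta/L) n - lam c
    A = np.array([[(rho - beta) / L, lam],
                  [beta / L, -lam]])
    return np.linalg.solve(np.eye(2) - dt * A, np.array([n, c]))

n, c = 1.0, beta / (lam * L)      # critical steady state (rho = 0)
for _ in range(1000):
    n, c = step(n, c, rho=0.0, dt=1e-3)
# at rho = 0 the steady state is preserved by the implicit scheme
```

In a full IQS code this amplitude solve is interleaved with infrequent multidimensional transport solves that update the shape function.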
Energy Technology Data Exchange (ETDEWEB)
Goreac, Dan, E-mail: Dan.Goreac@u-pem.fr; Kobylanski, Magdalena, E-mail: Magdalena.Kobylanski@u-pem.fr; Martinez, Miguel, E-mail: Miguel.Martinez@u-pem.fr [Université Paris-Est, LAMA (UMR 8050), UPEMLV, UPEC, CNRS (France)
2016-10-15
We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending the networks and Krylov's "shaking the coefficients" method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron's preconization for the (unique) candidate to viscosity solution.
Directory of Open Access Journals (Sweden)
Daniel M Spagnolo
2016-01-01
Background: Measures of spatial intratumor heterogeneity are potentially important diagnostic biomarkers for cancer progression, proliferation, and response to therapy. Spatial relationships among cells including cancer and stromal cells in the tumor microenvironment (TME) are key contributors to heterogeneity. Methods: We demonstrate how to quantify spatial heterogeneity from immunofluorescence pathology samples, using a set of 3 basic breast cancer biomarkers as a test case. We learn a set of dominant biomarker intensity patterns and map the spatial distribution of the biomarker patterns with a network. We then describe the pairwise association statistics for each pattern within the network using pointwise mutual information (PMI) and visually represent heterogeneity with a two-dimensional map. Results: We found a salient set of 8 biomarker patterns to describe cellular phenotypes from a tissue microarray cohort containing 4 different breast cancer subtypes. After computing PMI for each pair of biomarker patterns in each patient and tumor replicate, we visualize the interactions that contribute to the resulting association statistics. Then, we demonstrate the potential for using PMI as a diagnostic biomarker, by comparing PMI maps and heterogeneity scores from patients across the 4 different cancer subtypes. Estrogen receptor positive invasive lobular carcinoma patient, AL13-6, exhibited the highest heterogeneity score among those tested, while estrogen receptor negative invasive ductal carcinoma patient, AL13-14, exhibited the lowest heterogeneity score. Conclusions: This paper presents an approach for describing intratumor heterogeneity in a quantitative fashion (via PMI), which departs from the purely qualitative approaches currently used in the clinic. PMI is generalizable to highly multiplexed/hyperplexed immunofluorescence images, as well as spatial data from complementary in situ methods including FISSEQ and CyTOF, sampling many different
Transport in stochastic multi-dimensional media
International Nuclear Information System (INIS)
Haran, O.; Shvarts, D.
1996-01-01
Many physical phenomena evolve according to known deterministic rules, but in stochastic media whose composition changes in space and time. Examples of such phenomena are heat transfer in a turbulent atmosphere with non-uniform diffraction coefficients, neutron transfer in the boiling coolant of a nuclear reactor, and radiation transfer through concrete shields. The results of measurements conducted on such media are stochastic by nature and depend on the specific realization of the media. In the last decade there has been a considerable effort to describe linear particle transport in one-dimensional stochastic media composed of several immiscible materials. However, transport in two- or three-dimensional stochastic media has rarely been addressed. The important effect in multi-dimensional transport that does not appear in one dimension is the ability to bypass obstacles. The current work is an attempt to quantify this effect. (authors)
Transport in stochastic multi-dimensional media
Energy Technology Data Exchange (ETDEWEB)
Haran, O; Shvarts, D [Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev; Thiberger, R [Ben-Gurion Univ. of the Negev, Beersheba (Israel)
1996-12-01
Many physical phenomena evolve according to known deterministic rules, but in stochastic media whose composition changes in space and time. Examples of such phenomena are heat transfer in a turbulent atmosphere with non-uniform diffraction coefficients, neutron transfer in the boiling coolant of a nuclear reactor, and radiation transfer through concrete shields. The results of measurements conducted on such media are stochastic by nature and depend on the specific realization of the media. In the last decade there has been a considerable effort to describe linear particle transport in one-dimensional stochastic media composed of several immiscible materials. However, transport in two- or three-dimensional stochastic media has rarely been addressed. The important effect in multi-dimensional transport that does not appear in one dimension is the ability to bypass obstacles. The current work is an attempt to quantify this effect. (authors).
Applied multidimensional systems theory
Bose, Nirmal K
2017-01-01
Revised and updated, this concise new edition of the pioneering book on multidimensional signal processing is ideal for a new generation of students. Multidimensional systems or m-D systems are the necessary mathematical background for modern digital image processing with applications in biomedicine, X-ray technology and satellite communications. Serving as a firm basis for graduate engineering students and researchers seeking applications in mathematical theories, this edition eschews detailed mathematical theory not useful to students. Presentation of the theory has been revised to make it more readable for students and to introduce some new topics that are emerging as multidimensional DSP topics in the interdisciplinary field of image processing. New topics include Groebner bases, wavelets, and filter banks.
Multi-dimensional Imaging
Javidi, Bahram; Andres, Pedro
2014-01-01
Provides a broad overview of advanced multidimensional imaging systems with contributions from leading researchers in the field. Multi-dimensional Imaging takes the reader from the introductory concepts through to the latest applications of these techniques. Split into 3 parts covering 3D image capture, processing, visualization and display, using 1) a Multi-View Approach and 2) a Holographic Approach, followed by a 3rd part addressing other 3D systems approaches, applications and signal processing for advanced 3D imaging. This book describes recent developments, as well as the prospects and
Brovelli, M. A.; Oxoli, D.; Zurbarán, M. A.
2016-06-01
During the past years, Web 2.0 technologies have driven the emergence of platforms where users can share data related to their activities, which in some cases are then publicly released with open licenses. Popular categories include community platforms where users upload GPS tracks collected during slow travel activities (e.g. hiking, biking and horse riding) and platforms where users share their geolocated photos. However, due to the high heterogeneity of the information available on the Web, the sole use of this user-generated content makes it an ambitious challenge to understand slow mobility flows and to detect the most visited locations in a region. Exploiting the data available on community sharing websites makes it possible to collect near real-time open data streams and enables rigorous spatio-temporal analysis. This work presents an approach for collecting, unifying and analysing pointwise geolocated open data available from different sources, with the aim of identifying the main locations and destinations of slow mobility activities. For this purpose, we collected pointwise open data from the Wikiloc platform, Twitter, Flickr and Foursquare. The analysis was confined to the data uploaded in the Lombardy Region (Northern Italy), corresponding to millions of pointwise data. The collected data were processed with Free and Open Source Software (FOSS) and organized into a suitable database. This allowed statistical analyses to be run on the data distribution in both time and space, enabling the detection of users' slow mobility preferences as well as places of interest at a regional scale.
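The "most visited locations" step described above reduces, at its simplest, to counting points per grid cell. A minimal sketch follows; the coordinates and cell size are invented for illustration, not taken from the Lombardy dataset:

```python
import math
from collections import Counter

# Hypothetical geolocated points (lon, lat) merged from several sharing platforms.
points = [(9.19, 45.46), (9.23, 45.47), (9.16, 45.44),
          (10.22, 45.54), (9.95, 45.75)]

cell = 0.1  # grid resolution in degrees (~11 km in latitude)

def grid_id(lon, lat):
    """Index of the grid cell containing the point."""
    return (math.floor(lon / cell), math.floor(lat / cell))

visits = Counter(grid_id(lon, lat) for lon, lat in points)
# the most visited cell is a candidate "place of interest"
top_cell, count = visits.most_common(1)[0]
```

With these sample points, the cell around (9.1-9.2, 45.4-45.5) collects two visits and ranks first; a real pipeline would add a time dimension and per-platform weighting.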
Directory of Open Access Journals (Sweden)
M. A. Brovelli
2016-06-01
Full Text Available During the past years, Web 2.0 technologies have driven the emergence of platforms where users can share data related to their activities, which in some cases are then publicly released with open licenses. Popular categories include community platforms where users upload GPS tracks collected during slow travel activities (e.g. hiking, biking and horse riding) and platforms where users share their geolocated photos. However, due to the high heterogeneity of the information available on the Web, the sole use of this user-generated content makes it an ambitious challenge to understand slow mobility flows and to detect the most visited locations in a region. Exploiting the data available on community sharing websites makes it possible to collect near real-time open data streams and enables rigorous spatio-temporal analysis. This work presents an approach for collecting, unifying and analysing pointwise geolocated open data available from different sources, with the aim of identifying the main locations and destinations of slow mobility activities. For this purpose, we collected pointwise open data from the Wikiloc platform, Twitter, Flickr and Foursquare. The analysis was confined to the data uploaded in the Lombardy Region (Northern Italy), corresponding to millions of pointwise data. The collected data were processed with Free and Open Source Software (FOSS) and organized into a suitable database. This allowed statistical analyses to be run on the data distribution in both time and space, enabling the detection of users' slow mobility preferences as well as places of interest at a regional scale.
Symbolic Multidimensional Scaling
P.J.F. Groenen (Patrick); Y. Terada
2015-01-01
Multidimensional scaling (MDS) is a technique that visualizes dissimilarities between pairs of objects as distances between points in a low-dimensional space. In symbolic MDS, a dissimilarity is not just a value but can represent an interval or even a histogram. Here,
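For orientation, classical (non-symbolic) MDS, which symbolic MDS generalizes, can be computed directly from a dissimilarity matrix by double-centring and an eigendecomposition. The 3-object example below is a minimal sketch, not the symbolic method of the abstract:

```python
import numpy as np

# Classical (Torgerson) MDS: embed objects in 2-D from a dissimilarity matrix.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])  # three points on a line, unit spacing

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
B = -0.5 * J @ (D ** 2) @ J                # double-centred squared dissimilarities
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]             # largest eigenvalues first
vals, vecs = vals[order], vecs[:, order]
X = vecs[:, :2] * np.sqrt(np.maximum(vals[:2], 0.0))  # 2-D coordinates

# for Euclidean input, pairwise distances of the embedding reproduce D
recon = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

Because the toy dissimilarities are exactly Euclidean, `recon` matches `D` to machine precision; symbolic MDS replaces each scalar entry of `D` with an interval or histogram.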
Deterministic and unambiguous dense coding
International Nuclear Information System (INIS)
Wu Shengjun; Cohen, Scott M.; Sun Yuqing; Griffiths, Robert B.
2006-01-01
Optimal dense coding using a partially entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case, where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case, where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1−τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D≤D̄ a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. For D>D̄ it is shown that L_d is strictly less than D² unless D is an integer multiple of D̄, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D≤D̄, assuming τ_x>0 for a set of DD̄ messages, and a bound is obtained for the average ⟨τ⟩. A bound on the average ⟨τ⟩ requires an additional assumption of encoding by isometries (unitaries when D=D̄) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D>D̄ it is shown that (at least) D² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially entangled states, including noisy (mixed) states
Deterministic quantitative risk assessment development
Energy Technology Data Exchange (ETDEWEB)
Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)
2009-07-01
Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model based methodology. This approach has been found to be very flexible and provide useful results for identifying high risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)
Deterministic computation of functional integrals
International Nuclear Information System (INIS)
Lobanov, Yu.Yu.
1995-09-01
A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, nor are simplifying assumptions such as semi-classical or mean field approximations, collective excitations, or the introduction of ''short-time'' propagators necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by computation of the ''ordinary'' (Riemannian) integral of a low dimension, thus allowing one to use the more preferable deterministic algorithms (normally Gaussian quadratures) in computations rather than the traditional stochastic (Monte Carlo) methods which are commonly used for the solution of the problem under consideration. The results of application of the method to computation of the Green function of the Schroedinger equation in imaginary time, as well as the study of some models of Euclidean quantum mechanics, are presented. The comparison with results of other authors shows that our method gives significant (by an order of magnitude) economy of computer time and memory versus other known methods while providing the results with the same or better accuracy. The functional measure of the Gaussian type is considered and some of its particular cases, namely conditional Wiener measure in quantum statistical mechanics and functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the
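The trade-off this abstract describes, replacing Monte Carlo sampling with a deterministic Gaussian quadrature once the problem is reduced to an ordinary low-dimensional integral, can be illustrated on a one-dimensional toy integral (not the functional integrals of the paper):

```python
import numpy as np

# Deterministic Gauss-Hermite quadrature vs Monte Carlo on a toy integral:
# I = integral of x^2 * exp(-x^2) dx = sqrt(pi)/2
nodes, weights = np.polynomial.hermite.hermgauss(20)  # exact for polynomials to degree 39
approx = float(np.sum(weights * nodes**2))            # f(x) = x^2
exact = float(np.sqrt(np.pi) / 2)

# crude Monte Carlo estimate of the same integral for comparison
rng = np.random.default_rng(0)
samples = rng.normal(0.0, np.sqrt(0.5), 100_000)      # density e^{-x^2}/sqrt(pi)
mc = float(np.mean(samples**2) * np.sqrt(np.pi))      # reweight by the normalisation
```

The 20-node quadrature is exact to machine precision here, while the 100,000-sample Monte Carlo estimate carries statistical error of order 1/sqrt(N), the gap the paper exploits at scale.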
Numeric invariants from multidimensional persistence
Energy Technology Data Exchange (ETDEWEB)
Skryzalin, Jacek [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlsson, Gunnar [Stanford Univ., Stanford, CA (United States)
2017-05-19
In this paper, we analyze the space of multidimensional persistence modules from the perspectives of algebraic geometry. We first build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, which we construct specifically to capture much of the information which can be gained by using multidimensional persistence over one-dimensional persistence. We argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Lastly, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data.
Deterministic secure communication protocol without using entanglement
Cai, Qing-yu
2003-01-01
We show a deterministic secure direct communication protocol using a single qubit in a mixed state. The security of this protocol is based on the security proof of the BB84 protocol. It can be realized with current technologies.
Deterministic chaos in the processor load
International Nuclear Information System (INIS)
Halbiniak, Zbigniew; Jozwiak, Ireneusz J.
2007-01-01
In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case
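A standard way to test a time series for deterministic chaos, as in the study above, is to estimate the largest Lyapunov exponent. As a self-contained illustration (using the fully chaotic logistic map, whose exponent is known to be ln 2, rather than the paper's processor-load data):

```python
import math

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x) at r = 4,
# estimated as the orbit average of log|f'(x)| = log|r*(1-2x)|.
r, x = 4.0, 0.2
n = 100_000
acc = 0.0
for _ in range(n):
    acc += math.log(abs(r * (1.0 - 2.0 * x)))
    x = r * x * (1.0 - x)
lyap = acc / n  # a positive exponent indicates sensitive dependence (chaos)
```

The estimate converges to ln 2 ≈ 0.693; applying the same idea to measured data (such as a processor-load series) requires phase-space reconstruction first, e.g. delay embedding.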
Zanarini, Alessandro
2018-01-01
The progress of optical systems nowadays makes complex dynamic measurements and modal tests available on lightweight structures, each technique with its own advantages, drawbacks and preferred usage domains. It is thus easier than before to obtain highly spatially defined vibration patterns for many applications in vibration engineering, testing and general product development. The potential of three completely different technologies is here benchmarked on a common test rig and advanced applications. SLDV, dynamic ESPI and hi-speed DIC are here first deployed in a complex and unique test on the estimation of FRFs with high spatial accuracy from a thin vibrating plate. The latter exhibits broad-band dynamics and high modal density in the common frequency domain where the techniques can find an operative intersection. A peculiar point-wise comparison is here addressed by means of discrete geometry transforms to put all three technologies on trial at each physical point of the surface. Full field measurement technologies cannot only estimate displacement fields on a refined grid, but can exploit the spatial consistency of the results through neighbouring locations by means of numerical differentiation operators in the spatial domain to obtain rotational degrees of freedom and superficial dynamic strain distributions, with enhanced quality compared to other technologies in the literature. Approaching the task with the aid of superior quality receptance maps from the three different full field gears, this work calculates and compares rotational and dynamic strain FRFs. Dynamic stress FRFs can be modelled directly from the latter, by means of a constitutive model, avoiding the costly and time-consuming steps of building and tuning a numerical dynamic model of a flexible component or a structure in real life conditions. Once dynamic stress FRFs are obtained, spectral fatigue approaches can try to predict the life of a component in many excitation conditions. Different
Risk-based and deterministic regulation
International Nuclear Information System (INIS)
Fischer, L.E.; Brown, N.W.
1995-07-01
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect the public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose
Multidimensional nonlinear descriptive analysis
Nishisato, Shizuhiko
2006-01-01
Quantification of categorical, or non-numerical, data is a problem that scientists face across a wide range of disciplines. Exploring data analysis in various areas of research, such as the social sciences and biology, Multidimensional Nonlinear Descriptive Analysis presents methods for analyzing categorical data that are not necessarily sampled randomly from a normal population and often involve nonlinear relations. This reference not only provides an overview of multidimensional nonlinear descriptive analysis (MUNDA) of discrete data, it also offers new results in a variety of fields. The first part of the book covers conceptual and technical preliminaries needed to understand the data analysis in subsequent chapters. The next two parts contain applications of MUNDA to diverse data types, with each chapter devoted to one type of categorical data, a brief historical comment, and basic skills peculiar to the data types. The final part examines several problems and then concludes with suggestions for futu...
The multidimensional nucleon structure
Directory of Open Access Journals (Sweden)
Pasquini Barbara
2016-01-01
Full Text Available We discuss different kinds of parton distributions, which allow one to obtain a multidimensional picture of the internal structure of the nucleon. We use the concept of generalized transverse momentum dependent parton distributions and Wigner distributions, which combine the features of transverse-momentum dependent parton distributions and generalized parton distributions. We show examples of these functions within a phenomenological quark model, with focus on the role of the spin-spin and spin-orbit correlations of quarks.
Design of deterministic interleaver for turbo codes
International Nuclear Information System (INIS)
Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.
2008-01-01
The choice of a suitable interleaver for turbo codes can improve the performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The performance and design of a deterministic interleaver for short frame turbo codes is considered in this paper. The main characteristic of this class of deterministic interleaver is that their algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that the interleaver designed in this manner improves the minimum distance or reduces the multiplicity of the first few spectral lines of the minimum distance spectrum. Finally, we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames to improve the decoding capability of the MAP (Maximum A Posteriori) probability decoder. Our solution to design a deterministic interleaver outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
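The paper's specific permutation-generator construction is not reproduced here, but a standard deterministic interleaver of the same family, the quadratic permutation polynomial (QPP) used in LTE turbo codes, illustrates the idea of an algebraically designed, uniformly spreading permutation:

```python
def qpp_interleaver(n, f1, f2):
    """Deterministic quadratic permutation polynomial (QPP) interleaver:
    pi(i) = (f1*i + f2*i^2) mod n.  This is a permutation whenever f1 is
    coprime to n and f2 contains every prime factor of n."""
    perm = [(f1 * i + f2 * i * i) % n for i in range(n)]
    assert len(set(perm)) == n, "parameters do not define a permutation"
    return perm

# n=40, f1=3, f2=10 is the parameter pair used by the LTE turbo-code standard
perm = qpp_interleaver(40, 3, 10)
frame = list(range(40))
interleaved = [frame[j] for j in perm]

# de-interleaving inverts the permutation exactly -- no side information needed,
# which is the practical advantage of deterministic over random interleavers
deinterleaved = [0] * 40
for i, j in enumerate(perm):
    deinterleaved[j] = interleaved[i]
```

Because the permutation is generated algebraically, encoder and decoder need only share `(n, f1, f2)` rather than an explicit permutation table.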
Multidimensional Models of Information Need
Yun-jie (Calvin) Xu; Kai Huang (Joseph) Tan
2009-01-01
User studies in information science have recognised relevance as a multidimensional construct. An implication of multidimensional relevance is that a user's information need should be modeled by multiple data structures to represent different relevance dimensions. While the extant literature has attempted to model multiple dimensions of a user's information need, the fundamental assumption that a multidimensional model is better than a uni-dimensional model has not been addressed. This study ...
Proving Non-Deterministic Computations in Agda
Directory of Open Access Journals (Sweden)
Sergio Antoy
2017-01-01
Full Text Available We investigate proving properties of Curry programs using Agda. First, we address the functional correctness of Curry functions that, apart from some syntactic and semantic differences, are in the intersection of the two languages. Second, we use Agda to model non-deterministic functions with two distinct and competitive approaches incorporating the non-determinism. The first approach eliminates non-determinism by considering the set of all non-deterministic values produced by an application. The second approach encodes every non-deterministic choice that the application could perform. We consider our initial experiment a success. Although proving properties of programs is a notoriously difficult task, the functional logic paradigm does not seem to add any significant layer of difficulty or complexity to the task.
Deterministic dense coding with partially entangled states
Mozes, Shay; Oppenheim, Jonathan; Reznik, Benni
2005-01-01
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d>2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d²−1. We also find that states with less entanglement can have a greater deterministic communication capacity than other more entangled states.
Deterministic methods for multi-control fuel loading optimization
Rahman, Fariz B. Abdul
We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.
DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS
Directory of Open Access Journals (Sweden)
MICULEAC Melania Elena
2014-06-01
Full Text Available The deterministic methods are those quantitative methods whose goal is to capture, through numerical quantification, the mechanisms by which factorial and causal relations of influence and propagation of effects are created and expressed, where the phenomenon can be expressed through a direct functional cause-effect relation. Functional, deterministic relations are causal relations in which a given value of the characteristic corresponds to a well-defined value of the resulting phenomenon. They can directly express the correlation between the phenomenon and its influence factors in the form of a function-type mathematical formula.
Introducing Synchronisation in Deterministic Network Models
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.
2006-01-01
The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models...
Optimal Deterministic Investment Strategies for Insurers
Directory of Open Access Journals (Sweden)
Ulrich Rieder
2013-11-01
Full Text Available We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.
Multidimensional sexual perfectionism.
Stoeber, Joachim; Harvey, Laura N; Almeida, Isabel; Lyons, Emma
2013-11-01
Perfectionism is a multidimensional personality characteristic that can affect all areas of life. This article presents the first systematic investigation of multidimensional perfectionism in the domain of sexuality exploring the unique relationships that different forms of sexual perfectionism show with positive and negative aspects of sexuality. A sample of 272 university students (52 male, 220 female) completed measures of four forms of sexual perfectionism: self-oriented, partner-oriented, partner-prescribed, and socially prescribed. In addition, they completed measures of sexual esteem, sexual self-efficacy, sexual optimism, sex life satisfaction (capturing positive aspects of sexuality) and sexual problem self-blame, sexual anxiety, sexual depression, and negative sexual perfectionism cognitions during sex (capturing negative aspects). Results showed unique patterns of relationships for the four forms of sexual perfectionism, suggesting that partner-prescribed and socially prescribed sexual perfectionism are maladaptive forms of sexual perfectionism associated with negative aspects of sexuality whereas self-oriented and partner-oriented sexual perfectionism emerged as ambivalent forms associated with positive and negative aspects.
A Theory of Deterministic Event Structures
Lee, I.; Rensink, Arend; Smolka, S.A.
1995-01-01
We present an ω-complete algebra of a class of deterministic event structures, which are labelled prime event structures where the labelling function satisfies a certain distinctness condition. The operators of the algebra are summation, sequential composition and join. Each of these gives rise to a
A Numerical Simulation for a Deterministic Compartmental ...
African Journals Online (AJOL)
In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions obtained using Euler's numerical method. Using hypothetical values for the parameters, a program was written in the VISUAL BASIC programming language to generate series for the system of difference equations from the ...
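Euler's method applied to a compartmental model can be sketched as follows. The SIR-type equations and parameter values here are illustrative stand-ins, not the HIV/AIDS model of the paper:

```python
# Forward-Euler iteration of a simple SIR-type compartmental model:
#   ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i
beta, gamma, dt = 0.3, 0.1, 0.1   # transmission rate, recovery rate, step size
s, i, r = 0.99, 0.01, 0.0         # initial susceptible/infected/recovered fractions
for _ in range(int(100 / dt)):    # integrate to t = 100
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    s, i, r = s + dt * ds, i + dt * di, r + dt * dr
```

Because the three derivatives sum to zero, the Euler update conserves the total population exactly, a useful sanity check on any discretisation of a compartmental model.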
[Intraoperative multidimensional visualization].
Sperling, J; Kauffels, A; Grade, M; Alves, F; Kühn, P; Ghadimi, B M
2016-12-01
Modern intraoperative techniques of visualization are increasingly being applied in general and visceral surgery. The combination of diverse techniques provides the possibility of multidimensional intraoperative visualization of specific anatomical structures. Thus, it is possible to differentiate between normal tissue and tumor tissue and therefore exactly define tumor margins. The aim of intraoperative visualization of tissue that is to be resected and tissue that should be spared is to lead to a rational balance between oncological and functional results. Moreover, these techniques help to analyze the physiology and integrity of tissues. Using these methods surgeons are able to analyze tissue perfusion and oxygenation. However, to date it is not clear to what extent these imaging techniques are relevant in the clinical routine. The present manuscript reviews the relevant modern visualization techniques focusing on intraoperative computed tomography and magnetic resonance imaging as well as augmented reality, fluorescence imaging and optoacoustic imaging.
Multidimensional HAM-conditions
DEFF Research Database (Denmark)
Hansen, Ernst Jan de Place
To study multidimensional Heat, Air and Moisture (HAM) conditions, experimental data are needed. Tests were performed in the large climate simulator at SBi involving full-scale wall elements. The elements were exposed to steady-state conditions and to temperature cycles simulating April and September climate in Denmark. The effect of the addition of a vapour barrier and an outer cladding on the moisture and temperature conditions of timber frame walls was studied. The report contains comprehensive appendices documenting the full-scale tests. The tests were performed as part of the project 'Model for Multidimensional Heat, Air and Moisture Conditions in Building Envelope Components', carried out as a co-project between DTU Byg and SBi.
Multidimensional Databases and Data Warehousing
Jensen, Christian
2010-01-01
The present book's subject is multidimensional data models and data modeling concepts as they are applied in real data warehouses. The book aims to present the most important concepts within this subject in a precise and understandable manner. The book's coverage of fundamental concepts includes data cubes and their elements, such as dimensions, facts, and measures and their representation in a relational setting; it includes architecture-related concepts; and it includes the querying of multidimensional databases. The book also covers advanced multidimensional concepts that are considered to b
Piecewise deterministic processes in biological models
Rudnicki, Ryszard
2017-01-01
This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...
Deterministic nonlinear systems a short course
Anishchenko, Vadim S; Strelkova, Galina I
2014-01-01
This text is a short yet complete course on nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures it reflects the many years of teaching experience by the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincare recurrences. Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.
Deterministic nanoparticle assemblies: from substrate to solution
International Nuclear Information System (INIS)
Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J
2014-01-01
The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)
Deterministic dynamics of plasma focus discharges
International Nuclear Information System (INIS)
Gratton, J.; Alabraba, M.A.; Warmate, A.G.; Giudice, G.
1992-04-01
The performance (neutron yield, X-ray production, etc.) of plasma focus discharges fluctuates strongly in series performed with fixed experimental conditions. Previous work suggests that these fluctuations are due to a deterministic ''internal'' dynamics involving degrees of freedom not controlled by the operator, possibly related to adsorption and desorption of impurities from the electrodes. According to these dynamics the yield of a discharge depends on the outcome of the previous ones. We study 8 series of discharges in three different facilities, with various electrode materials and operating conditions. More evidence of a deterministic internal dynamics is found. The fluctuation pattern depends on the electrode materials and other characteristics of the experiment. A heuristic mathematical model that describes adsorption and desorption of impurities from the electrodes and their consequences on the yield is presented. The model predicts steady yield or periodic and chaotic fluctuations, depending on parameters related to the experimental conditions. (author). 27 refs, 7 figs, 4 tabs
Understanding deterministic diffusion by correlated random walks
International Nuclear Information System (INIS)
Klages, R.; Korabel, N.
2002-01-01
Low-dimensional periodic arrays of scatterers with a moving point particle are ideal models for studying deterministic diffusion. For such systems the diffusion coefficient is typically an irregular function under variation of a control parameter. Here we propose a systematic scheme for approximating deterministic diffusion coefficients of this kind in terms of correlated random walks. We apply this approach to two simple examples, a one-dimensional map on the line and the periodic Lorentz gas. Starting from suitable Green-Kubo formulae, we evaluate hierarchies of approximations for their parameter-dependent diffusion coefficients. These approximations converge exactly, yielding a straightforward interpretation of the structure of these irregular diffusion coefficients in terms of dynamical correlations. (author)
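The setting can be illustrated numerically. The piecewise-linear lifted map below is a standard toy form assumed for illustration (not necessarily the paper's exact map), and the diffusion coefficient is estimated from the ensemble mean-square displacement rather than the paper's Green-Kubo hierarchy:

```python
import math, random

def lifted_map(x, a=3.0):
    """Piecewise-linear map of slope a on the unit cell, lifted so that
    M(x + 1) = M(x) + 1 (a common toy model for deterministic diffusion)."""
    n = math.floor(x)
    f = x - n
    return n + a * f if f < 0.5 else n + 1.0 + a * (f - 1.0)

def diffusion_coefficient(a=3.0, n_particles=2000, n_steps=200, seed=0):
    """Estimate D = <(x_t - x_0)^2> / (2 t) over an ensemble of initial points."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_particles):
        x0 = random.random()          # uniform start in the central cell
        x = x0
        for _ in range(n_steps):
            x = lifted_map(x, a)
        total += (x - x0) ** 2
    return total / (2.0 * n_steps * n_particles)

D = diffusion_coefficient()
```

For slope a = 3 the map can move the particle one cell left or right per step, so the lowest-order random-walk picture predicts a small positive D; the irregular parameter dependence discussed in the abstract appears when a is varied continuously.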
Dynamic optimization deterministic and stochastic models
Hinderer, Karl; Stieglitz, Michael
2016-01-01
This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
Deterministic geologic processes and stochastic modeling
International Nuclear Information System (INIS)
Rautman, C.A.; Flint, A.L.
1992-01-01
This paper reports that recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling
Deterministic analyses of severe accident issues
International Nuclear Information System (INIS)
Dua, S.S.; Moody, F.J.; Muralidharan, R.; Claassen, L.B.
2004-01-01
Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena alongside of probability methods to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications, and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined by the subset-construction rules) is used. Past work described only the algorithm for the AND-operator (intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
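The standard (stochastic) EnKF analysis step that the deterministic mean-field variant is measured against can be sketched as follows; the linear observation operator, dimensions, and values are illustrative, and this is the baseline filter, not the paper's DMFEnKF:

```python
import numpy as np

# Standard stochastic EnKF analysis step for a linear observation y = H x + noise.

def enkf_update(ens, H, y, R, rng):
    """ens: (d, N) ensemble; H: (m, d); y: (m,); R: (m, m) obs covariance."""
    d, N = ens.shape
    mean = ens.mean(axis=1, keepdims=True)
    A = ens - mean                                   # ensemble anomalies
    C = A @ A.T / (N - 1)                            # sample covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)     # Kalman gain
    perturbed = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=N).T               # perturbed observations
    return ens + K @ (perturbed - H @ ens)

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(2, 500))          # N(0, I) prior ensemble
H = np.array([[1.0, 0.0]])                           # observe first coordinate
y = np.array([2.0])
R = np.array([[0.25]])
post = enkf_update(prior, H, y, R, rng)
```

The mean-field limit replaces the sampled ensemble by the density it approximates; the paper's deterministic scheme then propagates that density with a PDE solver and quadrature instead of random particles.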
Structure of multidimensional patterns
International Nuclear Information System (INIS)
Smith, S.P.
1982-01-01
The problem of describing the structure of multidimensional data is important in exploratory data analysis, statistical pattern recognition, and image processing. A data set is viewed as a collection of points embedded in a high dimensional space. The primary goal of this research is to determine if the data have any clustering structure; such a structure implies the presence of class information (categories) in the data. A statistical hypothesis test is used in the decision making. To this end, data with no structure are defined as data following the uniform distribution over some compact convex set in K-dimensional space, called the sampling window. This thesis defines two new tests for uniformity along with various sampling window estimators. The first test is a volume-based test which captures density changes in the data. The second test compares a uniformly distributed sample to the data by using the minimal spanning tree (MST) of the pooled samples. Sampling window estimators are provided for simple sampling windows, and the convex hull of the data is used as a general sampling window estimator. For both of the tests for uniformity, theoretical results are provided on their size, and their size and power against clustered alternatives are studied. Simulation is also used to study the efficacy of the sampling window estimators
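The MST-based idea can be sketched as follows: pool the data with a uniform reference sample, build the minimal spanning tree, and count edges joining points from different samples; few cross-sample edges suggest the data are clustered rather than uniform. The decision rule here is a bare comparison, not the thesis's calibrated statistic, and the sampling window is assumed known:

```python
import random

def mst_edges(points):
    """Prim's algorithm on squared Euclidean distance; returns (i, j) pairs."""
    n = len(points)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    in_tree, edges = {0}, []
    best = {i: (dist(points[0], points[i]), 0) for i in range(1, n)}
    while len(in_tree) < n:
        i = min(best, key=lambda k: best[k][0])   # cheapest attachment
        _, j = best.pop(i)
        in_tree.add(i)
        edges.append((i, j))
        for k in best:                            # relax remaining vertices
            dk = dist(points[i], points[k])
            if dk < best[k][0]:
                best[k] = (dk, i)
    return edges

def cross_edge_count(data, reference):
    """Number of MST edges linking a data point to a reference point."""
    pooled = data + reference
    labels = [0] * len(data) + [1] * len(reference)
    return sum(labels[i] != labels[j] for i, j in mst_edges(pooled))

random.seed(1)
uniform = [(random.random(), random.random()) for _ in range(50)]
clustered = [(random.random() * 0.1, random.random() * 0.1) for _ in range(50)]
reference = [(random.random(), random.random()) for _ in range(50)]
c_clustered = cross_edge_count(clustered, reference)
c_uniform = cross_edge_count(uniform, reference)
```

Under uniformity the two samples interleave and cross-edges are plentiful; a clustered sample keeps to itself in the tree, so its cross-edge count drops sharply.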
Karakawa, Ayako; Murata, Hiroshi; Hirasawa, Hiroyo; Mayama, Chihiro; Asaoka, Ryo
2013-01-01
To compare the performance of newly proposed point-wise linear regression (PLR) with the binomial test (binomial PLR) against mean deviation (MD) trend analysis and permutation analyses of PLR (PoPLR), in detecting global visual field (VF) progression in glaucoma. 15 VFs (Humphrey Field Analyzer, SITA standard, 24-2) were collected from 96 eyes of 59 open angle glaucoma patients (mean ± standard deviation follow-up: 6.0 ± 1.5 years). Using the total deviation of each point on the 2nd to 16th VFs (VF2-16), linear regression analysis was carried out. The numbers of VF test points with a significant trend at various probability levels were counted and compared using the one-sided binomial test. A VF series was defined as "significant" if the median p-value from the binomial test was below the significance level. The detection rate of the binomial PLR method (0.14 to 0.86) was significantly higher than that of MD trend analysis (0.04 to 0.89) and PoPLR (0.09 to 0.93). The PIS of the proposed method (0.0 to 0.17) was significantly lower than that of the MD approach (0.0 to 0.67) and PoPLR (0.07 to 0.33). The PBNS of the three approaches were not significantly different. The binomial PLR method gives more consistent results than MD trend analysis and PoPLR, hence it will be helpful as a tool to 'flag' possible VF deterioration.
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives consistent, reliable and clear-cut answers to the question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge data from two selected gauging stations of a mountain river in southern Poland, the Raba River.
Deterministic and probabilistic approach to safety analysis
International Nuclear Information System (INIS)
Heuser, F.W.
1980-01-01
The examples discussed in this paper show that reliability analysis methods can be applied fairly well to interpret deterministic safety criteria in quantitative terms. For a further, improved extension of applied reliability analysis, it has turned out that the influence of operational and control systems and of component protection devices should be considered in detail with the aid of reliability analysis methods. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)
Diffusion in Deterministic Interacting Lattice Systems
Medenjak, Marko; Klobas, Katja; Prosen, Tomaž
2017-09-01
We study reversible deterministic dynamics of classical charged particles on a lattice with hard-core interaction. It is rigorously shown that the system exhibits three types of transport phenomena, ranging from ballistic, through diffusive, to insulating. By obtaining exact expressions for the current time-autocorrelation function we are able to calculate the linear response transport coefficients, such as the diffusion constant and the Drude weight. Additionally, we calculate the long-time charge profile after an inhomogeneous quench and obtain a diffusive profile with the Green-Kubo diffusion constant. Exact analytical results are corroborated by Monte Carlo simulations.
Safety margins in deterministic safety analysis
International Nuclear Information System (INIS)
Viktorov, A.
2011-01-01
The concept of safety margins has acquired certain prominence in the attempts to demonstrate quantitatively the level of the nuclear power plant safety by means of deterministic analysis, especially when considering impacts from plant ageing and discovery issues. A number of international or industry publications exist that discuss various applications and interpretations of safety margins. The objective of this presentation is to bring together and examine in some detail, from the regulatory point of view, the safety margins that relate to deterministic safety analysis. In this paper, definitions of various safety margins are presented and discussed along with the regulatory expectations for them. Interrelationships of analysis input and output parameters with corresponding limits are explored. It is shown that the overall safety margin is composed of several components each having different origins and potential uses; in particular, margins associated with analysis output parameters are contrasted with margins linked to the analysis input. While these are separate, it is possible to influence output margins through the analysis input, and analysis method. Preserving safety margins is tantamount to maintaining safety. At the same time, efficiency of operation requires optimization of safety margins taking into account various technical and regulatory considerations. For this, basic definitions and rules for safety margins must be first established. (author)
A mathematical theory for deterministic quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
't Hooft, Gerard [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)]
2007-05-15
Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.
Design of deterministic OS for SPLC
International Nuclear Information System (INIS)
Son, Choul Woong; Kim, Dong Hoon; Son, Gwang Seop
2012-01-01
Existing safety PLCs for use in nuclear power plants operate on priority-based scheduling, in which the highest-priority task runs first. This type of scheduling scheme determines processing priorities when there are multiple requests for processing or a lack of resources available for processing, guaranteeing the execution of higher-priority tasks. Such scheduling is prone to exhaustion of resources and continuous preemption by devices with high priorities, so there is uncertainty in every period as to whether the overall system runs smoothly. Hence, it is difficult to apply this type of scheme where deterministic operation is required, such as in a nuclear power plant. Also, existing PLCs either have no output logic for redundant device selection or have it set in a fixed way; as a result, they are extremely inefficient for redundant systems such as those of a nuclear power plant, and their use is limited. Therefore, functional modules that can manage and control all devices need to be developed by improving, and making more flexible, the way priorities are assigned among the devices. A management module should be able to schedule all devices of the system, manage resources, analyze the states of the devices, give warnings in abnormal situations such as device failure or resource scarcity, and decide how to handle them. Also, the management module should have output logic for device redundancy, as well as deterministic processing capabilities, such as with regard to device interrupt events
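The deterministic alternative to priority-based preemption can be illustrated with a cyclic executive: every task gets a fixed slot in a repeating frame, so the execution order is identical every cycle. The task names and frame layout below are illustrative, not the paper's design:

```python
# Minimal cyclic-executive sketch: fixed slots, fixed order, no preemption.

class CyclicExecutive:
    def __init__(self, frame_slots):
        self.frame_slots = frame_slots   # list of minor frames (lists of tasks)
        self.log = []

    def run(self, n_frames):
        for _ in range(n_frames):
            for slot in self.frame_slots:
                for task in slot:        # deterministic order every cycle
                    self.log.append(task())

def scan_inputs():   return "scan"
def control_logic(): return "logic"
def set_outputs():   return "out"
def diagnostics():   return "diag"

# I/O and logic every minor frame; diagnostics only in the first one.
sched = CyclicExecutive([
    [scan_inputs, control_logic, set_outputs, diagnostics],
    [scan_inputs, control_logic, set_outputs],
])
sched.run(2)
```

Because the schedule is a static table rather than a priority queue, worst-case timing is known in advance and no task can starve another, which is the property sought for safety PLCs.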
Deterministic prediction of surface wind speed variations
Directory of Open Access Journals (Sweden)
G. V. Drisya
2014-11-01
Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management, such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distributions of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
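Deterministic forecasting of a chaotic series typically works by delay embedding plus nearest-neighbour (analogue) prediction. The sketch below uses a chaotic logistic map as a stand-in for wind-speed records, and embedding dimension 3 is an assumption; the paper's exact method may differ:

```python
# Analogue forecasting: predict the next value from the closest past
# delay vector. Surrogate data from the chaotic logistic map (r = 3.9).

def logistic_series(x0=0.3, n=5000, r=3.9):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def nn_predict(history, m=3):
    """Predict the successor of history[-m:] from its nearest past analogue."""
    target = history[-m:]
    best_d, best_next = float("inf"), None
    for i in range(len(history) - m):   # candidate windows strictly in the past
        d = sum((history[i + k] - target[k]) ** 2 for k in range(m))
        if d < best_d:
            best_d, best_next = d, history[i + m]
    return best_next

series = logistic_series()
actual = series[-1]
prediction = nn_predict(series[:-1])    # forecast the held-out last value
```

On a deterministic attractor, close delay vectors have close futures for a short time, which is why such methods succeed for 1-3 h horizons and then degrade as chaotic divergence takes over.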
Biomedical applications of two- and three-dimensional deterministic radiation transport methods
International Nuclear Information System (INIS)
Nigg, D.W.
1992-01-01
Multidimensional deterministic radiation transport methods are routinely used in support of the Boron Neutron Capture Therapy (BNCT) Program at the Idaho National Engineering Laboratory (INEL). Typical applications of two-dimensional discrete-ordinates methods include neutron filter design, as well as phantom dosimetry. The epithermal-neutron filter for BNCT that is currently available at the Brookhaven Medical Research Reactor (BMRR) was designed using such methods. Good agreement between calculated and measured neutron fluxes was observed for this filter. Three-dimensional discrete-ordinates calculations are used routinely for dose-distribution calculations in three-dimensional phantoms placed in the BMRR beam, as well as for treatment planning verification for live canine subjects. Again, good agreement between calculated and measured neutron fluxes and dose levels is obtained
Contributions to multidimensional quadrature formulas
International Nuclear Information System (INIS)
Guenther, C.
1976-11-01
The general objective of this paper is to construct multidimensional quadrature formulas similar to the Gaussian quadrature formulas in one dimension. The correspondence between these formulas and orthogonal and nonnegative polynomials is established. One part of the paper considers the construction of multidimensional quadrature formulas using only methods of algebraic geometry; in the other part, results are sought on quadrature formulas with real nodes and, if possible, with positive weights. The results include the existence of quadrature formulas, information on the number, or on the maximum possible number, of points in the formulas for a given polynomial degree N, and the construction of formulas. (orig.) [de]
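The one-dimensional Gaussian rule taken as the model above extends to several dimensions most simply by tensor products of nodes and weights (at the cost of more points than the optimal formulas the paper seeks). A sketch, integrating a polynomial over the square:

```python
import numpy as np

# Tensor-product Gauss-Legendre rule on [-1, 1]^2: products of 1-D nodes
# and weights. An n-point 1-D rule is exact to degree 2n - 1 per variable.

def gauss_product_2d(f, n):
    x, w = np.polynomial.legendre.leggauss(n)   # 1-D nodes and weights
    X, Y = np.meshgrid(x, x)
    W = np.outer(w, w)                          # product weights
    return float(np.sum(W * f(X, Y)))

# Integral of x^2 * y^4 over the square; exact value (2/3)*(2/5) = 4/15.
val = gauss_product_2d(lambda x, y: x**2 * y**4, n=3)
```

The n = 3 product rule uses 9 points yet integrates this degree-(2, 4) integrand exactly; genuinely multidimensional formulas of the kind studied in the paper aim for the same degree with fewer points.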
Multi-Dimensional Path Queries
DEFF Research Database (Denmark)
Bækgaard, Lars
1998-01-01
We present the path-relationship model that supports multi-dimensional data modeling and querying. A path-relationship database is composed of sets of paths and sets of relationships. A path is a sequence of related elements (atoms, paths, and sets of paths). A relationship is a binary path that can be used to create nested path structures. We present an SQL-like query language that is based on path expressions and we show how to use it to express multi-dimensional path queries that are suited for advanced data analysis in decision support environments like data warehousing environments
Multidimensional real analysis I differentiation
Duistermaat, J J; van Braam Houckgeest, J P
2004-01-01
Part one of the authors' comprehensive and innovative work on multidimensional real analysis. This book is based on extensive teaching experience at Utrecht University and gives a thorough account of differential analysis in multidimensional Euclidean space. It is an ideal preparation for students who wish to go on to more advanced study. The notation is carefully organized and all proofs are clean, complete and rigorous. The authors have taken care to pay proper attention to all aspects of the theory. In many respects this book presents an original treatment of the subject and it contains man
A Multidimensional Software Engineering Course
Barzilay, O.; Hazzan, O.; Yehudai, A.
2009-01-01
Software engineering (SE) is a multidimensional field that involves activities in various areas and disciplines, such as computer science, project management, and system engineering. Though modern SE curricula include designated courses that address these various subjects, an advanced summary course that synthesizes them is still missing. Such a…
Multidimensional Databases and Data Warehousing
DEFF Research Database (Denmark)
Jensen, Christian S.; Pedersen, Torben Bach; Thomsen, Christian
The present book's subject is multidimensional data models and data modeling concepts as they are applied in real data warehouses. The book aims to present the most important concepts within this subject in a precise and understandable manner. The book's coverage of fundamental concepts includes...
Recycling Behavior: A Multidimensional Approach
Meneses, Gonzalo Diaz; Palacio, Asuncion Beerli
2005-01-01
This work centers on the study of consumer recycling roles to examine the sociodemographic and psychographic profile of the distribution of recycling tasks and roles within the household. With this aim in mind, an empirical work was carried out, the results of which suggest that recycling behavior is multidimensional and comprises the undertaking…
Influence of fusion dynamics on fission observables: A multidimensional analysis
Schmitt, C.; Mazurek, K.; Nadtochy, P. N.
2018-01-01
An attempt to unfold the respective influence of the fusion and fission stages on typical fission observables, namely the neutron prescission multiplicity, is proposed. A four-dimensional dynamical stochastic Langevin model is used to calculate the decay by fission of excited compound nuclei produced in a wide set of heavy-ion collisions. The comparison of the results of such a calculation with experimental data is discussed, guided by predictions of the dynamical deterministic HICOL code for the compound-nucleus formation time. While the dependence of the latter on the entrance-channel properties can straightforwardly explain some observations, a complex interplay between the various parameters of the reaction is found to occur in other cases. A multidimensional analysis of the respective roles of these parameters, including entrance-channel asymmetry, bombarding energy, compound-nucleus fissility, angular momentum, and excitation energy, is proposed. It is shown that, depending on the size of the system, apparent inconsistencies may be deduced when projecting onto specific ordering parameters. The work suggests the possibility of delicate compensation effects governing the measured fission observables, thereby highlighting the necessity of a multidimensional discussion.
Inferring hierarchical clustering structures by deterministic annealing
International Nuclear Information System (INIS)
Hofmann, T.; Buhmann, J.M.
1996-01-01
The unsupervised detection of hierarchical structures is a major topic in unsupervised learning and one of the key questions in data analysis and representation. We propose a novel algorithm for the problem of learning decision trees for data clustering and related problems. In contrast to many other methods based on successive tree growing and pruning, we propose an objective function for tree evaluation and we derive a non-greedy technique for tree growing. Applying the principles of maximum entropy and minimum cross entropy, a deterministic annealing algorithm is derived in a mean-field approximation. This technique allows us to canonically superimpose tree structures and to fit parameters to averaged or 'fuzzified' trees
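The deterministic-annealing idea can be sketched in its simplest, flat (non-hierarchical) form: soft maximum-entropy assignments at temperature T are annealed toward hard clustering as T decreases. The one-dimensional data, cooling schedule, and symmetry-breaking initialisation are illustrative assumptions:

```python
import math

# Deterministic annealing for clustering: mean-field E/M steps at
# decreasing temperature T. Centers start at the data extremes to
# break the symmetric (single-cluster) solution.

def anneal_cluster(points, k=2, T0=5.0, Tmin=0.01, cool=0.8, iters=20):
    centers = [min(points), max(points)]
    T = T0
    while T > Tmin:
        for _ in range(iters):
            # E-step: Gibbs assignment probabilities at temperature T
            probs = []
            for p in points:
                ws = [math.exp(-((p - c) ** 2) / T) for c in centers]
                Z = sum(ws)
                probs.append([w / Z for w in ws])
            # M-step: centers as soft-assignment weighted means
            for j in range(k):
                num = sum(pr[j] * p for pr, p in zip(probs, points))
                den = sum(pr[j] for pr in probs)
                centers[j] = num / den
        T *= cool                      # cooling schedule
    return sorted(centers)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]  # two well-separated groups
centers = anneal_cluster(data)
```

At high T all assignments are nearly uniform (maximum entropy); as T falls the free-energy landscape splits and the centers settle on the group means, without the greedy early commitments that annealing is designed to avoid.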
Mechanics from Newton's laws to deterministic chaos
Scheck, Florian
2018-01-01
This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present 6th edition is updated and revised with more explanations, additional examples and problems with solutions, together with new sections on applications in science. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 150 problems ...
Deterministic Diffusion in Delayed Coupled Maps
International Nuclear Information System (INIS)
Sozanski, M.
2005-01-01
Coupled Map Lattices (CML) are discrete time and discrete space dynamical systems used for modeling phenomena arising in nonlinear systems with many degrees of freedom. In this work, the dynamical and statistical properties of a modified version of the CML with global coupling are considered. The main modification of the model is the extension of the coupling over a set of local map states corresponding to different time iterations. The model with both stochastic and chaotic one-dimensional local maps is studied. Deterministic diffusion in the CML under variation of a control parameter is analyzed for unimodal maps. As a main result, simple relations between statistical and dynamical measures are found for the model and the cases where substituting nonlinear lattices with simpler processes is possible are presented. (author)
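The model class described above, a coupled map lattice whose coupling extends over past time iterations, can be sketched with a globally coupled logistic lattice driven by a delayed mean field. All parameter names and values here are illustrative assumptions, not the author's model; we use r = 3.9 so the chaotic logistic map stays safely inside [0, 1] in floating point.

```python
def iterate_cml(n_sites, n_steps, eps=0.2, r=3.9, delay=2):
    """Globally coupled logistic-map lattice where the coupling acts
    through the lattice mean taken `delay` iterations in the past
    (illustrative sketch of the model class)."""
    f = lambda x: r * x * (1.0 - x)  # chaotic local logistic map
    # seed the history with distinct initial site values in (0, 1)
    hist = [[(i + 1.0) / (n_sites + 1.0) for i in range(n_sites)]]
    for _ in range(delay):           # build up enough history to look back on
        hist.append([f(x) for x in hist[-1]])
    for _ in range(n_steps):
        past_mean = sum(hist[-1 - delay]) / n_sites
        nxt = [(1.0 - eps) * f(x) + eps * f(past_mean) for x in hist[-1]]
        hist.append(nxt)
    return hist[-1]
```

Transport statistics (e.g. deterministic diffusion coefficients) would then be computed from long trajectories of such a lattice under variation of the control parameters.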
Deterministic effects of interventional radiology procedures
International Nuclear Information System (INIS)
Shope, Thomas B.
1997-01-01
The purpose of this paper is to describe deterministic radiation injuries reported to the Food and Drug Administration (FDA) that resulted from therapeutic, interventional procedures performed under fluoroscopic guidance, and to investigate the procedure or equipment-related factors that may have contributed to the injury. Reports submitted to the FDA under both mandatory and voluntary reporting requirements which described radiation-induced skin injuries from fluoroscopy were investigated. Serious skin injuries, including moist desquamation and tissues necrosis, have occurred since 1992. These injuries have resulted from a variety of interventional procedures which have required extended periods of fluoroscopy compared to typical diagnostic procedures. Facilities conducting therapeutic interventional procedures need to be aware of the potential for patient radiation injury and take appropriate steps to limit the potential for injury. (author)
Deterministic Chaos in Radon Time Variation
International Nuclear Information System (INIS)
Planinic, J.; Vukovic, B.; Radolic, V.; Faj, Z.; Stanic, D.
2003-01-01
Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10-minute intervals for a month. The radon time series were analyzed by comparing algorithms to extract phase-space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0<H<0.5) indicated anti-persistent behavior (non-random changes) of the time series, while the positive values of λ pointed out the great sensitivity to initial conditions and the appearance of deterministic chaos in the radon time variations. The calculated fractal dimensions of the attractors indicated that several (meteorological) parameters influence radon in the atmosphere. (author)
Radon time variations and deterministic chaos
Energy Technology Data Exchange (ETDEWEB)
Planinic, J. E-mail: planinic@pedos.hr; Vukovic, B.; Radolic, V
2004-07-01
Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10 min intervals for a month. Radon time series were analyzed by comparing algorithms to extract phase space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0
Radon time variations and deterministic chaos
International Nuclear Information System (INIS)
Planinic, J.; Vukovic, B.; Radolic, V.
2004-01-01
Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10 min intervals for a month. Radon time series were analyzed by comparing algorithms to extract phase space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0<H<0.5) indicated anti-persistent behavior (non-random changes) of the time series, but the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos that appeared in the radon time variations. The calculated fractal dimensions of the attractors indicated that several (meteorological) parameters influence radon in the atmosphere
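The rescaled-range estimate of the Hurst exponent used in these radon studies can be sketched as follows (a simplified textbook version, not the authors' code). The estimate is the slope of log(R/S) against log(window size); for white noise it should come out near H ≈ 0.5, while 0 < H < 0.5 signals the anti-persistent behavior reported for radon.

```python
import math

def hurst_rescaled_range(series, min_chunk=8):
    """Estimate the Hurst exponent H by rescaled-range (R/S) analysis
    over dyadic window sizes."""
    def rs(chunk):
        m = sum(chunk) / len(chunk)
        cum = lo = hi = dev = 0.0
        for x in chunk:
            cum += x - m                       # cumulative deviation from mean
            lo, hi = min(lo, cum), max(hi, cum)
            dev += (x - m) ** 2
        s = math.sqrt(dev / len(chunk))
        return (hi - lo) / s if s > 0 else 0.0  # range / std deviation
    xs, ys = [], []
    n = min_chunk
    while n <= len(series) // 2:
        vals = [rs(series[i:i + n]) for i in range(0, len(series) - n + 1, n)]
        vals = [v for v in vals if v > 0]
        if vals:
            xs.append(math.log(n))
            ys.append(math.log(sum(vals) / len(vals)))
        n *= 2
    # least-squares slope of log(R/S) vs log(n) gives H
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))
```

On real radon series one would also compute Lyapunov exponents and attractor dimensions, but the R/S slope alone already discriminates persistent from anti-persistent dynamics.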
Deterministic SLIR model for tuberculosis disease mapping
Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat
2017-11-01
Tuberculosis (TB) occurs worldwide. It can be transmitted directly to others through the air when persons with active TB sneeze, cough or spit. In Malaysia, TB has been recognized as one of the most infectious diseases leading to death. Disease mapping is one of the methods that can be used in prevention strategies, since it displays a clear picture of high- and low-risk areas. An important issue when studying disease occurrence is relative risk estimation. The transmission of TB is studied through a mathematical model. Therefore, in this study, deterministic SLIR models are used to estimate the relative risk of TB transmission.
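A deterministic SLIR model of this kind reduces to four coupled ODEs for the Susceptible, Latent, Infectious and Recovered compartments. The following forward-Euler sketch uses illustrative rate constants, not the paper's fitted values for Malaysian TB data.

```python
def slir_step(s, l, i, r, beta=0.5, delta=0.2, gamma=0.1, dt=0.1):
    """One forward-Euler step of a deterministic SLIR
    (Susceptible -> Latent -> Infectious -> Recovered) model."""
    n = s + l + i + r
    exposure = beta * s * i / n   # S -> L: new latent infections
    activation = delta * l        # L -> I: latent cases become infectious
    recovery = gamma * i          # I -> R
    return (s - dt * exposure,
            l + dt * (exposure - activation),
            i + dt * (activation - recovery),
            r + dt * recovery)

def simulate_slir(days=200, dt=0.1, pop=1000.0, seed_cases=10.0):
    """Run the epidemic forward from a small number of seed cases."""
    state = (pop - seed_cases, 0.0, seed_cases, 0.0)
    for _ in range(int(days / dt)):
        state = slir_step(*state, dt=dt)
    return state
```

Relative risk for mapping would then be derived by comparing model-expected case counts per district with observed ones; the ODE core above is only the transmission engine.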
Primality deterministic and primality probabilistic tests
Directory of Open Access Journals (Sweden)
Alfredo Rizzi
2007-10-01
Full Text Available In this paper the author comments on the importance of prime numbers in mathematics and in cryptography. He recalls the seminal research of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that generate prime numbers; among them, Mersenne primes have interesting properties. There are also many conjectures that still have to be proved or rejected. Deterministic primality tests are algorithms that establish whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computing time would be too long. Probabilistic primality tests allow verification of the null hypothesis that the number is prime. The paper comments on the most important statistical tests.
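The best-known probabilistic test of the kind discussed is Miller-Rabin, which checks the null hypothesis "n is prime": a composite number survives all rounds with probability at most 4^(-rounds). A standard sketch (one of the class of tests the paper surveys, not code from the paper):

```python
import random

def is_probable_prime(n, rounds=20, rng=random.Random(0)):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):       # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                    # write n - 1 = d * 2**s, d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)                 # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness: n is composite
    return True
```

The Mersenne number 2^31 − 1 mentioned in connection with Mersenne primes passes, while Carmichael numbers such as 561, which fool the simpler Fermat test, are correctly rejected.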
CSL model checking of deterministic and stochastic Petri nets
Martinez Verdugo, J.M.; Haverkort, Boudewijn R.H.M.; German, R.; Heindl, A.
2006-01-01
Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under
Recognition of deterministic ETOL languages in logarithmic space
DEFF Research Database (Denmark)
Jones, Neil D.; Skyum, Sven
1977-01-01
It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian...
Experimental aspects of deterministic secure quantum key distribution
Energy Technology Data Exchange (ETDEWEB)
Walenta, Nino; Korn, Dietmar; Puhlmann, Dirk; Felbinger, Timo; Hoffmann, Holger; Ostermeyer, Martin [Universitaet Potsdam (Germany). Institut fuer Physik; Bostroem, Kim [Universitaet Muenster (Germany)
2008-07-01
Most common protocols for quantum key distribution (QKD) use non-deterministic algorithms to establish a shared key. But deterministic implementations can allow for higher net key transfer rates and eavesdropping detection rates. The Ping-Pong coding scheme by Bostroem and Felbinger [1] employs deterministic information encoding in entangled states, with its characteristic quantum channel running from Bob to Alice and back to Bob. Based on a table-top implementation of this protocol with polarization-entangled photons, fundamental advantages as well as practical issues like transmission losses, photon storage and requirements for progress towards longer transmission distances are discussed and compared to non-deterministic protocols. Modifications of common protocols towards deterministic quantum key distribution are addressed.
Szymanowski, Mariusz; Kryza, Maciej
2017-02-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for spatialization of air temperature, and in many studies their results have proved better than those obtained by various one-dimensional techniques. In most of the previous studies, two main assumptions were used to perform multidimensional spatial interpolation of air temperature: first, that all variables significantly correlated with air temperature should be incorporated into the model; second, that the more spatial variation of air temperature is deterministically explained, the better the quality of spatial interpolation. The main goal of the paper was to examine both of these assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to regression-kriging form (MLRK and GWRK, respectively), were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
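The core of the multivariable strategy, fitting an MLR model on auxiliary predictors and judging it by cross-validated MAE, can be sketched as follows. This is a pure-Python toy with synthetic data, not the study's workflow or its 11 environmental predictors.

```python
def fit_mlr(xrows, y):
    """Ordinary least squares for y ~ 1 + x1 + x2 + ... via the normal
    equations (Gaussian elimination, no external libraries)."""
    rows = [[1.0] + list(r) for r in xrows]     # prepend intercept column
    k = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    atb = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                           # forward elimination
        for j in range(i + 1, k):
            f = ata[j][i] / ata[i][i]
            for c in range(k):
                ata[j][c] -= f * ata[i][c]
            atb[j] -= f * atb[i]
    beta = [0.0] * k
    for i in reversed(range(k)):                 # back substitution
        beta[i] = (atb[i] - sum(ata[i][j] * beta[j]
                                for j in range(i + 1, k))) / ata[i][i]
    return beta

def cv_mae(xrows, y):
    """Leave-one-out cross-validated mean absolute error, the kind of
    criterion used to judge whether an extra auxiliary variable helps."""
    errs = []
    for i in range(len(y)):
        b = fit_mlr(xrows[:i] + xrows[i + 1:], y[:i] + y[i + 1:])
        pred = b[0] + sum(bj * xj for bj, xj in zip(b[1:], xrows[i]))
        errs.append(abs(pred - y[i]))
    return sum(errs) / len(errs)
```

In the study's setting, one would compare `cv_mae` across candidate predictor subsets (stepwise selection) and across MLR, GWR and their kriging extensions; only statistically significant MAE improvements justify keeping a variable.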
Multi-dimensional Fuzzy Euler Approximation
Directory of Open Access Journals (Sweden)
Yangyang Hao
2017-05-01
Full Text Available Multi-dimensional fuzzy differential equations driven by a multi-dimensional Liu process have been intensively applied in many fields. However, the analytic solution of a multi-dimensional fuzzy differential equation cannot always be obtained, so it is necessary to discuss numerical results in most situations. This paper focuses on numerical methods for multi-dimensional fuzzy differential equations. The multi-dimensional fuzzy Taylor expansion is given; based on this expansion, a numerical method designed to give the solution of a multi-dimensional fuzzy differential equation via the multi-dimensional Euler method is presented, and its local convergence is also discussed.
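The crisp (non-fuzzy) skeleton of the scheme generalized in the paper is the familiar vector forward-Euler step; the fuzzy version would propagate level sets of fuzzy states driven by a Liu process rather than single vectors. A minimal sketch:

```python
def euler_system(f, y0, t0, t1, steps):
    """Forward Euler for a system y' = f(t, y) in R^n: repeatedly take
    a step of size h along the current derivative."""
    h = (t1 - t0) / steps
    t, y = t0, list(y0)
    for _ in range(steps):
        dy = f(t, y)
        y = [yi + h * di for yi, di in zip(y, dy)]
        t += h
    return y
```

For example, the harmonic oscillator y'' = -y written as the 2-D system (y, y')' = (y', -y) is recovered to within the method's O(h) global error; the paper's local convergence analysis plays the analogous role for the fuzzy scheme.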
International Nuclear Information System (INIS)
Ford, W.E. III; Diggs, B.R.; Petrie, L.M.; Webster, C.C.; Westfall, R.M.
1982-01-01
A P₃ 227-neutron-group cross-section library has been processed for the subsequent generation of problem-dependent fine- or broad-group cross sections for a broad range of applications, including shipping cask calculations, general criticality safety analyses, and reactor core and shielding analyses. The energy group structure covers the range 10⁻⁵ eV to 20 MeV, including 79 thermal groups below 3 eV. The 129-material library includes processed data for all materials in the ENDF/B-V General Purpose File, several data sets prepared from LENDL data, hydrogen with water- and polyethylene-bound thermal kernels, deuterium with D₂O-bound thermal kernels, carbon with a graphite thermal kernel, a special 1/V data set, and a dose factor data set. The library, which is in AMPX master format, is designated CSRL-V (Criticality Safety Reference Library based on ENDF/B-V data). Also included in CSRL-V is a pointwise total, fission, elastic scattering, and (n,γ) cross-section library containing data sets for all ENDF/B-V resonance materials. Data in the pointwise library were processed with the infinite-dilution approximation at a temperature of 296 K
International Nuclear Information System (INIS)
Chuang, Kuo-Chih; Ma, Chien-Ching; Liao, Heng-Tseng
2012-01-01
In this work, active vibration suppression of a smart cantilever beam subjected to disturbances from multiple impact loadings is investigated with a point-wise fiber Bragg grating (FBG) displacement sensing system. An FBG demodulator is employed in the proposed fiber sensing system to dynamically demodulate the responses obtained by the FBG displacement sensor with high sensitivity. To investigate the ability of the proposed FBG displacement sensor as a feedback sensor, velocity feedback control and delay control are employed to suppress the vibrations of the first three bending modes of the smart cantilever beam. To improve the control performance for the first bending mode when the cantilever beam is subjected to an impact loading, we improve the conventional velocity feedback controller by tuning the control gain online with the aid of information from a higher vibration mode. Finally, active control of vibrations induced by multiple impact loadings due to a plastic ball is performed with the improved velocity feedback control. The experimental results show that active vibration control of smart structures subjected to disturbances such as impact loadings can be achieved by employing the proposed FBG sensing system to feed back out-of-plane point-wise displacement responses with high sensitivity. (paper)
Executive Information Systems' Multidimensional Models
Directory of Open Access Journals (Sweden)
2007-01-01
Full Text Available Executive Information Systems are designed to improve the quality of strategic-level management in an organization through a new type of technology and several techniques for extracting, transforming, processing, integrating and presenting data in such a way that the organizational knowledge filters can easily associate with this data and turn it into information for the organization. These technologies are known as Business Intelligence Tools. But in order to build analytic reports for Executive Information Systems (EIS) in an organization, we need to design a multidimensional model based on the organization's business model. This paper presents some multidimensional models that can be used in EIS development and proposes a new model suitable for strategic business requests.
Lagrangian multiforms and multidimensional consistency
Energy Technology Data Exchange (ETDEWEB)
Lobb, Sarah; Nijhoff, Frank [Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT (United Kingdom)
2009-10-30
We show that well-chosen Lagrangians for a class of two-dimensional integrable lattice equations obey a closure relation when embedded in a higher dimensional lattice. On the basis of this property we formulate a Lagrangian description for such systems in terms of Lagrangian multiforms. We discuss the connection of this formalism with the notion of multidimensional consistency, and the role of the lattice from the point of view of the relevant variational principle.
Cuba: Multidimensional numerical integration library
Hahn, Thomas
2016-08-01
The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
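The baseline that all four Cuba routines improve upon is plain Monte Carlo over the unit hypercube, which already yields the mean-and-standard-error pair that Cuba's chi-square check then assesses. A toy sketch of that baseline (our own code, not Cuba's API):

```python
import random

def mc_integrate(f, ndim, n_samples=200_000, seed=0):
    """Plain Monte Carlo integration of f over the unit hypercube
    [0,1]^ndim, returning (estimate, standard error)."""
    rng = random.Random(seed)
    s = s2 = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(ndim)]
        v = f(x)
        s += v
        s2 += v * v
    mean = s / n_samples
    var = max(s2 / n_samples - mean * mean, 0.0)   # sample variance
    return mean, (var / n_samples) ** 0.5          # standard error ~ 1/sqrt(N)
```

Vegas and Suave accelerate convergence with importance sampling, Divonne with stratification, and Cuhre with deterministic cubature rules; the interface idea (hand over an integrand, get back value and error) is the same.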
Deterministic models for energy-loss straggling
International Nuclear Information System (INIS)
Prinja, A.K.; Gleicher, F.; Dunham, G.; Morel, J.E.
1999-01-01
Inelastic ion interactions with target electrons are dominated by extremely small energy transfers that are difficult to resolve numerically. The continuous-slowing-down (CSD) approximation is then commonly employed, which, however, only preserves the mean energy loss per collision through the stopping power, S(E) = ∫₀^∞ dE′ (E − E′) σₛ(E → E′). To accommodate energy-loss straggling, a Gaussian distribution with the correct mean-squared energy loss (akin to a Fokker-Planck approximation in energy) is commonly used in continuous-energy Monte Carlo codes. Although this model has the unphysical feature that ions can be upscattered, it nevertheless yields accurate results. A multigroup model for energy loss straggling was recently presented for use in multigroup Monte Carlo codes or in deterministic codes that use multigroup data. The method has the advantage that the mean and mean-squared energy loss are preserved without unphysical upscatter and hence is computationally efficient. Results for energy spectra compared extremely well with Gaussian distributions under the idealized conditions for which the Gaussian may be considered to be exact. Here, the authors present more consistent comparisons by extending the method to accommodate upscatter and, further, compare both methods with exact solutions obtained from an analog Monte Carlo simulation, for a straight-ahead transport problem
A Deterministic Approach to Earthquake Prediction
Directory of Open Access Journals (Sweden)
Vittorio Sgrigna
2012-01-01
Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at seeing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship (with explained physical processes) obtained by looking for correlations between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a strong new theoretical effort is necessary to understand the physics of the earthquake.
Deterministic Approach to Detect Heart Sound Irregularities
Directory of Open Access Journals (Sweden)
Richard Mengko
2017-07-01
Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From the analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose heart problems. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
Deterministic dense coding and entanglement entropy
International Nuclear Information System (INIS)
Bourdon, P. S.; Gerjuoy, E.; McDonald, J. P.; Williams, H. T.
2008-01-01
We present an analytical study of the standard two-party deterministic dense-coding protocol, under which communication of perfectly distinguishable messages takes place via a qudit from a pair of nonmaximally entangled qudits in a pure state |ψ>. Our results include the following: (i) We prove that it is possible for a state |ψ> with lower entanglement entropy to support the sending of a greater number of perfectly distinguishable messages than one with higher entanglement entropy, confirming a result suggested via numerical analysis in Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. (ii) By explicit construction of families of local unitary operators, we verify, for dimensions d=3 and d=4, a conjecture of Mozes et al. about the minimum entanglement entropy that supports the sending of d+j messages, 2≤j≤d-1; moreover, we show that the j=2 and j=d-1 cases of the conjecture are valid in all dimensions. (iii) Given that |ψ> allows the sending of K messages and has √λ₀ as its largest Schmidt coefficient, we show that the inequality λ₀ ≤ d/K, established by Wu et al. [Phys. Rev. A 73, 042311 (2006)], must actually take the form λ₀ < d/K if K=d+1, while our constructions of local unitaries show that equality can be realized if K=d+2 or K=2d-1
Analysis of pinching in deterministic particle separation
Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German
2011-11-01
We investigate the problem of spherical particles settling vertically under gravity (parallel to the Y-axis) through a pinching gap created by an obstacle (spherical or cylindrical, centered at the origin) and a wall (normal to the X-axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation: (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately, (2) computationally, using the lattice Boltzmann method for particulate systems, and (3) experimentally, by conducting macroscopic experiments. Both theory and simulations show that for a given initial separation between the particle centre and the Y-axis, the presence of a wall pushes the particles closer to the obstacle than its absence. Experimentally, this is expected to result in an early onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET-0731032, CMMI-0748094, and CBET-0954840.
Energy Technology Data Exchange (ETDEWEB)
Graham, Emily B. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Crump, Alex R. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Resch, Charles T. [Geochemistry Department, Pacific Northwest National Laboratory, Richland WA USA; Fansler, Sarah [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Environmental Compliance and Emergency Preparation, Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Fredrickson, Jim K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA
2017-03-28
Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.
Equivalence relations between deterministic and quantum mechanical systems
International Nuclear Information System (INIS)
Hooft, G.
1988-01-01
Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that apparently indeterministic behavior typical for a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian so that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet no mass term. These observations are suggested to be useful for building theories at the Planck scale
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
Three levels of modeling, microscopic, mesoscopic and macroscopic, are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can …, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations, while for intermediate pressures phenomena …
Operational State Complexity of Deterministic Unranked Tree Automata
Directory of Open Access Journals (Sweden)
Xiaoxue Piao
2010-08-01
Full Text Available We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n − 2^(n−1)) − 1 vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.
ZERODUR: deterministic approach for strength design
Hartmann, Peter
2012-12-01
There is an increasing request for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed not to be feasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, i.e. a minimum value for breakage stress, allowing statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress, or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter approach
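The practical difference between the two- and three-parameter Weibull models is the stress threshold below which the failure probability is exactly zero, which is what turns the statistical treatment into a deterministic design rule. A sketch with purely illustrative numbers (not ZERODUR's published parameters):

```python
import math

def weibull2_failure_prob(stress, scale, shape):
    """Two-parameter Weibull CDF: failure probability is nonzero at
    any positive stress, forcing very conservative design strengths."""
    return 1.0 - math.exp(-((stress / scale) ** shape))

def weibull3_failure_prob(stress, threshold, scale, shape):
    """Three-parameter Weibull CDF: failure probability is exactly zero
    at or below the threshold stress, i.e. the threshold acts as a
    deterministic minimum breakage stress."""
    if stress <= threshold:
        return 0.0
    return 1.0 - math.exp(-(((stress - threshold) / scale) ** shape))
```

With illustrative values (threshold 50, scale 30, shape 5, in arbitrary stress units), a load of 40 has zero failure probability under the three-parameter model while the two-parameter model still predicts a substantial one; stress-corrosion fatigue would then be handled by correcting the threshold for the load duration and rate.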
Measures for a multidimensional multiverse
Chung, Hyeyoun
2015-04-01
We explore the phenomenological implications of generalizing the causal patch and fat geodesic measures to a multidimensional multiverse, where the vacua can have differing numbers of large dimensions. We consider a simple model in which the vacua are nucleated from a D-dimensional parent spacetime through dynamical compactification of the extra dimensions, and compute the geometric contribution to the probability distribution of observations within the multiverse for each measure. We then study how the shape of this probability distribution depends on the time scales for the existence of observers, for vacuum domination, and for curvature domination (t_obs, t_Λ, and t_c, respectively). In this work we restrict ourselves to bubbles with positive cosmological constant, Λ. We find that in the case of the causal patch cutoff, when the bubble universes have p+1 large spatial dimensions with p ≥ 2, the shape of the probability distribution is such that we obtain the coincidence of time scales t_obs ~ t_Λ ~ t_c. Moreover, the size of the cosmological constant is related to the size of the landscape. However, the exact shape of the probability distribution is different in the case p = 2, compared to p ≥ 3. In the case of the fat geodesic measure, the result is even more robust: the shape of the probability distribution is the same for all p ≥ 2, and we once again obtain the coincidence t_obs ~ t_Λ ~ t_c. These results require only very mild conditions on the prior probability of the distribution of vacua in the landscape. Our work shows that the observed double coincidence of time scales is a robust prediction even when the multiverse is generalized to be multidimensional; that this coincidence is not a consequence of our particular Universe being (3+1)-dimensional; and that this observable cannot be used to preferentially select one measure over another in a multidimensional multiverse.
Ordinal Comparison of Multidimensional Deprivation
DEFF Research Database (Denmark)
Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter
This paper develops an ordinal method of comparison of multidimensional inequality. In our model, population distribution g is more unequal than f when the distributions have a common median and g can be obtained from f by one or more shifts in population density that increase inequality. For our benchmark 2x2 case (i.e. the case of two binary outcome variables), we derive an empirical method for making inequality comparisons. As an illustration, we apply the model to childhood poverty in Mozambique.
Deterministic Echo State Networks Based Stock Price Forecasting
Directory of Open Access Journals (Sweden)
Jingpei Dan
2014-01-01
Echo state networks (ESNs), as efficient and powerful computational models for approximating nonlinear dynamical systems, have been successfully applied in financial time series forecasting. Reservoir construction in standard ESNs relies on trial and error in real applications due to a series of randomized model-building stages. A novel form of ESN with a deterministically constructed reservoir is competitive with the standard ESN, offering minimal complexity and the possibility of optimizing ESN specifications. In this paper, the forecasting performance of deterministic ESNs is investigated in stock price prediction applications. Experimental results on two benchmark datasets (Shanghai Composite Index and S&P500) demonstrate that deterministic ESNs outperform the standard ESN in both accuracy and efficiency, which indicates the promise of deterministic ESNs for financial prediction.
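One well-known deterministic reservoir construction of the kind this abstract refers to is a simple cycle reservoir: a ring topology with a single fixed weight, deterministic input signs, and a ridge regression readout. The sketch below uses that construction with illustrative weight values and a toy synthetic series standing in for stock prices; it is not the paper's exact configuration.

```python
import numpy as np

N, r, v = 100, 0.9, 0.5            # reservoir size, cycle weight, input scale
W = np.zeros((N, N))
W[np.arange(1, N), np.arange(N - 1)] = r       # deterministic ring topology
W[0, N - 1] = r                                 # close the cycle
w_in = v * np.where(np.arange(N) % 2 == 0, 1.0, -1.0)  # deterministic signs

t = np.arange(1200)
u = np.sin(0.2 * t) + 0.5 * np.sin(0.311 * t)   # toy stand-in for a price series

x = np.zeros(N)
states = []
for ut in u[:-1]:                   # drive the reservoir with the series
    x = np.tanh(W @ x + w_in * ut)
    states.append(x.copy())
X = np.array(states[200:])          # discard washout transient
y = u[201:]                         # one-step-ahead targets
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)  # ridge readout
rmse = np.sqrt(np.mean((X @ W_out - y) ** 2))
print(f"one-step-ahead RMSE: {rmse:.4f}")
```

No random weights appear anywhere, so the model is fully reproducible, which is the practical advantage the abstract emphasizes.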
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
2017-01-01
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed, basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ²-distributed.
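The additive structure X(t) = Z(t) + Y(t) can be illustrated with a small simulation: a bivariate zero-mean cointegrated VAR Y plus a common deterministic trend Z. All parameter values below are arbitrary illustrations; the point shown is that the cointegrating combination β'X stays stationary when β annihilates Z, while each component of X trends.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000
alpha = np.array([-0.5, 0.0])       # adjustment coefficients
beta = np.array([1.0, -1.0])        # cointegrating vector

# Zero-mean cointegrated VAR(1): ΔY(t) = α β'Y(t-1) + ε(t)
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = Y[t - 1] + alpha * (beta @ Y[t - 1]) + rng.normal(size=2)

Z = np.outer(np.arange(T), [0.05, 0.05])   # common linear trend, so β'Z = 0
X = Z + Y                                   # the additive model

# β'X is stationary (small std) while the components of X trend (large std)
print(np.std(X @ beta), np.std(X[:, 0]))
```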
Method to deterministically study photonic nanostructures in different experimental instruments
Husken, B.H.; Woldering, L.A.; Blum, Christian; Tjerkstra, R.W.; Vos, Willem L.
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim of studying photonic structures. To this end, a detailed map of the spatial surroundings of the
Pseudo-random number generator based on asymptotic deterministic randomness
Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming
2008-06-01
A novel approach to generate the pseudorandom-bit sequence from the asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistic characteristics of the asymptotic deterministic randomness are investigated numerically, such as stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
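A generic chaos-based pseudorandom-bit step of the kind discussed above can be sketched with a piecewise linear (tent) map and a threshold on the symbolic dynamics. This is only an illustration of the basic construction, not the Letter's asymptotic deterministic randomness system; the slope and seed are illustrative.

```python
# Tent map with slope just below 2 (exactly 2 degenerates in binary
# floating point, collapsing trajectories to zero)
def tent(x, mu=1.99999):
    return mu * x if x < 0.5 else mu * (1.0 - x)

def prbg(seed, n, skip=100):
    """Generate n bits by thresholding the tent-map orbit at 0.5."""
    x = seed
    for _ in range(skip):          # discard the initial transient
        x = tent(x)
    bits = []
    for _ in range(n):
        x = tent(x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

bits = prbg(0.123456789, 10000)
print("mean bit value:", sum(bits) / len(bits))  # expected near 0.5
```

A real chaos-based PRBG would go on to test the sequence statistically (e.g. NIST-style suites), which is exactly the kind of analysis the abstract reports.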
Non deterministic finite automata for power systems fault diagnostics
Directory of Open Access Journals (Sweden)
LINDEN, R.
2009-06-01
This paper introduces an application based on finite non-deterministic automata for power systems diagnosis. Automata for the simpler faults are presented and the proposed system is compared with an established expert system.
Transmission power control in WSNs : from deterministic to cognitive methods
Chincoli, M.; Liotta, A.; Gravina, R.; Palau, C.E.; Manso, M.; Liotta, A.; Fortino, G.
2018-01-01
Communications in Wireless Sensor Networks (WSNs) are affected by dynamic environments, variable signal fluctuations and interference. Thus, prompt actions are necessary to achieve dependable communications and meet Quality of Service (QoS) requirements. To this end, the deterministic algorithms
The probabilistic approach and the deterministic licensing procedure
International Nuclear Information System (INIS)
Fabian, H.; Feigel, A.; Gremm, O.
1984-01-01
If safety goals are given, the creativity of the engineers is necessary to transform the goals into actual safety measures. That is, safety goals are not sufficient for the derivation of a safety concept; the licensing process asks ''What does a safe plant look like?'' The answer cannot be given by a probabilistic procedure, but needs definite deterministic statements; the conclusion is that the licensing process needs a deterministic approach. The probabilistic approach should be used in a complementary role in cases where deterministic criteria are incomplete, not detailed enough, or inconsistent, and where additional arguments for decision making in connection with the adequacy of a specific measure are necessary. But also in these cases the probabilistic answer has to be transformed into a clear deterministic statement. (orig.)
Local deterministic theory surviving the violation of Bell's inequalities
International Nuclear Information System (INIS)
Cormier-Delanoue, C.
1984-01-01
Bell's theorem, which asserts that no deterministic theory with hidden variables can give the same predictions as quantum theory, is questioned. Such a deterministic theory is presented and carefully applied to real experiments performed on pairs of correlated photons, derived from the EPR thought experiment. The ensuing predictions violate Bell's inequalities just as quantum mechanics does, and it is further shown that this discrepancy originates in the very nature of radiations. Complete locality is therefore restored while separability remains more limited [fr
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research.
Deterministic chaos in the pitting phenomena of passivable alloys
International Nuclear Information System (INIS)
Hoerle, Stephane
1998-01-01
It was shown that electrochemical noise recorded under stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the severity of the material/solution combination. Thus, electrolyte composition ([Cl-]/[NO3-] ratio, pH), passive film thickness or alloy composition can change the deterministic features. A single pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which hints at a change in pit behavior (propagation speed or mean) over time. Modifications of electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors of different features (bifurcations) are also evidenced. Such bifurcations illuminate various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous models are proposed) of the electrochemical mechanisms inside a pit that describe the experimental behaviors and the effects of the various parameters quite well. The analysis of the chaotic behaviors of a pit leads to a better understanding of propagation mechanisms and gives tools for pit monitoring. (author) [fr
Perceptual Salience and Children's Multidimensional Problem Solving
Odom, Richard D.; Corbin, David W.
1973-01-01
Uni- and multidimensional processing of 6- to 9-year olds was studied using recall tasks in which an array of stimuli was reconstructed to match a model array. Results indicated that both age groups were able to solve multidimensional problems, but that solution rate was retarded by the unidimensional processing of highly salient dimensions.…
Multidimensional fatigue and its correlates in hospitalised advanced cancer patients.
Echteld, M.A.; Passchier, J.; Teunissen, S.; Claessen, S.; Wit, R. de; Rijt, C.C.D. van der
2007-01-01
Although fatigue is a multidimensional concept, multidimensional fatigue is rarely investigated in hospitalised cancer patients. We determined the levels and correlates of multidimensional fatigue in 100 advanced cancer patients admitted for symptom control. Fatigue dimensions were general fatigue
Deterministic effects of the ionizing radiation
International Nuclear Information System (INIS)
Raslawski, Elsa C.
2001-01-01
Full text: The deterministic effect is the somatic damage that appears when the radiation dose exceeds a minimum value, the 'threshold dose'. Above this threshold dose, the frequency and seriousness of the damage increase with the dose given. Sixteen percent of patients younger than 15 years of age with a diagnosis of cancer have the possibility of a cure. The consequences of cancer treatment in children are very serious, as they are physically and emotionally developing. The seriousness of the delayed effects of radiation therapy depends on three factors: a) the treatment (dose of radiation, schedule of treatment, time of treatment, beam energy, treatment volume, distribution of the dose, simultaneous chemotherapy, etc.); b) the patient (state of development, patient predisposition, inherent sensitivity of tissue, the presence of other alterations, etc.); c) the tumor (degree of extension or infiltration, mechanical effects, etc.). The effect of radiation on normal tissue is related to cellular activity and the maturity of the tissue irradiated. Children have a mosaic of tissues in different stages of maturity at different moments in time. On the other hand, each tissue has a different pattern of development, so that sequelae differ among the irradiated tissues of the same patient. We should keep in mind that all tissues are affected to some degree. Bone tissue shows damage through growth delay and altered calcification. Damage is small at 10 Gy; between 10 and 20 Gy growth arrest is partial, whereas at doses larger than 20 Gy growth arrest is complete. The central nervous system is the most affected, because radiation injuries produce demyelination with or without focal or diffuse areas of necrosis in the white matter, causing character alterations, lower IQ and functional level, neurocognitive impairment, etc. The skin is also affected, showing different degrees of erythema as well as ulceration and necrosis, different degrees of
SUSTAINABLE DEVELOPMENT, A MULTIDIMENSIONAL CONCEPT
Directory of Open Access Journals (Sweden)
TEODORESCU ANA MARIA
2015-06-01
Sustainable development imposed itself as a corollary of the economic term "development". Sustainable development is meant to be the summation of economic, environmental and social considerations for the present and especially for the future. The concept of sustainable development has played an important role in European and global meetings since 1972, the year it was first set out. Strategies necessary to achieve the objectives of sustainable development have been developed, indicators meant to show the results of policy implementation have been created, and national plans have been oriented towards achieving the proposed targets. I wanted to highlight the multidimensional character of the concept of sustainable development. Thus, using specialized national and international literature, I have revealed different approaches that favor one pillar to the detriment of another depending on the specific field. Across the different concepts of sustainable development, consensus is undoubtedly agreed on its components: economic, social, environmental. Based on this fact, the concept of sustainability has different connotations depending on the specific content of each discipline: biology, economics, sociology, environmental ethics. The multidimensional valence of sustainable development consists in the ability of its three pillars to act together for the benefit of present and future generations. Being a multidimensional concept, the importance attached to one pillar over another is directed according to the particularities of each field: in economics profit prevails, in ecology care for natural resources is most important, while the social dimension aims at improving human living conditions. The challenge of sustainable development is to combine the economic, environmental and social benefits for present and future generations. The ecological approach is reflected in the acceptance of limited natural resources by preserving natural capital. In terms of the importance of
Heuristics for Multidimensional Packing Problems
DEFF Research Database (Denmark)
Egeblad, Jens
for a minimum height container required for the items. The main contributions of the thesis are three new heuristics for strip-packing and knapsack packing problems where items are both rectangular and irregular. In the first two papers we describe a heuristic for the multidimensional strip-packing problem that is based on a relaxed placement principle. The heuristic starts with a random overlapping placement of items and large container dimensions. From the overlapping placement, overlap is reduced iteratively until a non-overlapping placement is found, and a new problem is solved with a smaller container size. The results of this heuristic are among the best published in the literature for both two- and three-dimensional strip-packing problems with irregular shapes. In the third paper, we introduce a heuristic for two- and three-dimensional rectangular knapsack packing problems. The two-dimensional heuristic uses the sequence pair
Applied multidimensional scaling and unfolding
Borg, Ingwer; Mair, Patrick
2018-01-01
This book introduces multidimensional scaling (MDS) and unfolding as data analysis techniques for applied researchers. MDS is used for the analysis of proximity data on a set of objects, representing the data as distances between points in a geometric space (usually of two dimensions). Unfolding is a related method that maps preference data (typically evaluative ratings of different persons on a set of objects) as distances between two sets of points (representing the persons and the objects, resp.). This second edition has been completely revised to reflect new developments and the coverage of unfolding has also been substantially expanded. Intended for applied researchers whose main interests are in using these methods as tools for building substantive theories, it discusses numerous applications (classical and recent), highlights practical issues (such as evaluating model fit), presents ways to enforce theoretical expectations for the scaling solutions, and addresses the typical mistakes that MDS/unfoldin...
Minimal models of multidimensional computations.
Directory of Open Access Journals (Sweden)
Jeffrey D Fitzgerald
2011-03-01
The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
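For a binary output, the second-order model described above is a logistic function of first- and second-order stimulus terms. The sketch below fits such a model to synthetic data standing in for neural recordings; the stimulus dimensionality, weights, and learning settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.normal(size=(5000, 2))                           # 2-D stimulus samples
# First-order terms plus all second-order terms (cross and squared)
feats = np.column_stack([s, s[:, 0] * s[:, 1], s ** 2])
true_w = np.array([1.0, -0.5, 0.8, -0.3, 0.2])
p_spike = 1 / (1 + np.exp(-(feats @ true_w - 0.5)))
y = rng.random(5000) < p_spike                           # binary "spike" responses

# Maximum-likelihood fit of the logistic model by gradient ascent
w = np.zeros(5)
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    w += 0.5 * (feats.T @ (y - p)) / len(y)
    b += 0.5 * np.mean(y - p)
print("recovered weights:", np.round(w, 2))
```

The fitted weights recover the generating first- and second-order terms, mirroring the abstract's point that a logistic function of these moments suffices.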
The dialectical thinking about deterministic and probabilistic safety analysis
International Nuclear Information System (INIS)
Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong
2005-01-01
There are two methods for designing and analysing the safety performance of a nuclear power plant: the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plants is based on the deterministic method. It has been proved in practice that the deterministic method is effective for current nuclear power plants. However, the probabilistic method (Probabilistic Safety Assessment, PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the plant's systems and structures. PSA can be seen, in principle, to provide a broader and more realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trends of the above two methods are reviewed and summarized in brief. Based on the discussion of two application cases - one is the changes to specific design provisions of the general design criteria (GDC) and the other is the risk-informed categorization of structures, systems and components - it can be concluded that the deterministic method and probabilistic method are dialectical and unified, that they are being merged into each other gradually, and that they are being used in coordination. (authors)
Learning to Act: Qualitative Learning of Deterministic Action Models
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2017-01-01
In this article we study learnability of fully observable, universally applicable action models of dynamic epistemic logic. We introduce a framework for actions seen as sets of transitions between propositional states and we relate them to their dynamic epistemic logic representations as action models. We distinguish finite identifiability (conclusive learning) from identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, i.e. learning via update, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and unconditional deterministic action models. We propose update learning mechanisms for the aforementioned classes of actions and analyse
Deterministic and stochastic CTMC models from Zika disease transmission
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes, including Aedes aegypti. Pregnant women infected with the Zika virus are at risk of having a fetus or infant with a congenital defect such as microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain (CTMC) stochastic model. The basic reproduction ratio is constructed from the deterministic model, while the CTMC stochastic model yields an estimate of the probability of extinction and of outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for both the deterministic and stochastic models.
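The two modeling approaches named in the abstract can be contrasted in miniature. The sketch below uses a plain SIR model rather than the full host-vector Zika model, with illustrative parameters: the deterministic version yields R0 and a final epidemic size, while a CTMC simulation (here only the embedded jump chain, since event times are not needed for extinction or final size) yields an extinction probability, approximately 1/R0 by branching-process theory.

```python
import random

beta, gamma, N = 0.5, 0.25, 1000
R0 = beta / gamma            # basic reproduction ratio of the deterministic model

def deterministic(S, I, R, dt=0.01, T=100):
    """Euler integration of the deterministic SIR equations."""
    for _ in range(int(T / dt)):
        new_inf = beta * S * I / N * dt
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    return S, I, R

def ctmc(S, I, R, rng):
    """Embedded jump chain of the SIR CTMC; returns final epidemic size."""
    while I > 0:
        rate_inf = beta * S * I / N
        rate_rec = gamma * I
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            S, I = S - 1, I + 1          # infection event
        else:
            I, R = I - 1, R + 1          # recovery event
    return R

rng = random.Random(42)
extinctions = sum(ctmc(N - 1, 1, 0, rng) <= 5 for _ in range(200))
print(f"R0 = {R0}, deterministic final size ≈ {deterministic(N - 1, 1, 0)[2]:.0f}")
print(f"estimated extinction probability: {extinctions / 200:.2f}")
```

The deterministic model always predicts a major outbreak from one infective when R0 > 1, whereas the stochastic model assigns it a probability, which is precisely the distinction the abstract draws.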
Multidimensional singular integrals and integral equations
Mikhlin, Solomon Grigorievich; Stark, M; Ulam, S
1965-01-01
Multidimensional Singular Integrals and Integral Equations presents the results of the theory of multidimensional singular integrals and of equations containing such integrals. Emphasis is on singular integrals taken over Euclidean space or in the closed manifold of Liapounov and equations containing such integrals. This volume is comprised of eight chapters and begins with an overview of some theorems on linear equations in Banach spaces, followed by a discussion on the simplest properties of multidimensional singular integrals. Subsequent chapters deal with compounding of singular integrals
Directory of Open Access Journals (Sweden)
Wenying Yue
2014-01-01
Cloud computing has come to be a significant commercial infrastructure offering utility-oriented IT services to users worldwide. However, data centers hosting cloud applications consume huge amounts of energy, leading to high operational cost and greenhouse gas emission. Therefore, green cloud computing solutions are needed not only to achieve high level service performance but also to minimize energy consumption. This paper studies the dynamic placement of virtual machines (VMs) with deterministic and stochastic demands. In order to ensure a quick response to VM requests and improve the energy efficiency, a two-phase optimization strategy has been proposed, in which VMs are deployed at runtime and consolidated into servers periodically. Based on an improved multidimensional space partition model, a modified energy efficient algorithm with balanced resource utilization (MEAGLE) and a live migration algorithm based on the basic set (LMABBS) are, respectively, developed for each phase. Experimental results have shown that under different VMs' stochastic demand variations, MEAGLE guarantees the availability of stochastic resources with a defined probability and reduces the number of required servers by 2.49% to 20.40% compared with the benchmark algorithms. Also, the difference between the LMABBS solution and the Gurobi solution is fairly small, but LMABBS significantly excels in computational efficiency.
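The multidimensional packing at the heart of VM placement can be illustrated with a simple first-fit-decreasing heuristic over CPU and memory demands. This is a generic sketch, not the paper's MEAGLE or LMABBS algorithms; capacities and demands are made up.

```python
def place(vms, cap=(1.0, 1.0)):
    """First-fit-decreasing placement of (cpu, mem) demands onto servers."""
    servers = []  # each server tracked as [cpu_used, mem_used]
    # Sort VMs by their largest resource demand, descending
    for cpu, mem in sorted(vms, key=lambda v: max(v), reverse=True):
        for srv in servers:
            if srv[0] + cpu <= cap[0] and srv[1] + mem <= cap[1]:
                srv[0] += cpu
                srv[1] += mem
                break
        else:
            servers.append([cpu, mem])   # open a new server
    return servers

vms = [(0.5, 0.3), (0.4, 0.6), (0.3, 0.3), (0.2, 0.5), (0.6, 0.2)]
print("servers used:", len(place(vms)))
```

Algorithms like the paper's improve on this baseline by also balancing utilization across dimensions and handling stochastic demands, but the underlying bin-packing structure is the same.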
International Nuclear Information System (INIS)
Santamarina, A.
1991-01-01
A criticality-safety calculational scheme using the automated deterministic code system APOLLO-BISTRO has been developed. The cell/assembly code APOLLO is used mainly in LWR and HCR design calculations, and its validation spans a wide range of moderation ratios, including voided configurations. Its recent 99-group library and self-shielded cross-sections have been extensively qualified through critical experiments and PWR spent fuel analysis. The PIC self-shielding formalism enables a rigorous treatment of the fuel double heterogeneity in dissolver medium calculations. BISTRO is an optimized multidimensional SN code, part of the modular CCRR package used mainly in FBR calculations. The APOLLO-BISTRO scheme was applied to the 18 experimental benchmarks selected by the OECD/NEACRP Criticality Calculation Working Group. The calculation-experiment discrepancy was within ±1% in ΔK/K and always looked consistent with the experimental uncertainty margin. In the critical experiments corresponding to a dissolver-type benchmark, our tools computed a satisfactory Keff. In the VALDUC fuel storage experiments, with hafnium plates, the computed Keff ranged between 0.994 and 1.003 for the various water gaps separating the fuel clusters from the absorber plates. The APOLLO-KENOEUR statistical calculational scheme, based on the same self-shielded multigroup library, supplied consistent results within 0.3% in ΔK/K. (Author)
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because the means and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniformly safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and on its improvement and transition to absolute reliability.
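The error accumulation described above can be seen in a two-line computation: adding k-sigma tolerance limits at every step (serial, worst-case addition) versus propagating variances and applying the k-sigma margin once (root-sum-square). The numbers are illustrative, not from the report.

```python
import math

means = [100.0, 80.0, 60.0]      # illustrative stress contributions
sigmas = [5.0, 4.0, 3.0]         # their standard deviations
k = 3.0                          # 3-sigma tolerance limit

# Serial addition of tolerance limits: margins accumulate at every step
worst_case = sum(m + k * s for m, s in zip(means, sigmas))

# Error propagation: combine variances, apply the k-sigma margin once
rss = sum(means) + k * math.sqrt(sum(s * s for s in sigmas))

print(worst_case, rss)   # the serially added limit exceeds the statistical one
```

The gap between the two numbers grows with the number of combined terms, which is the "serially cumulative" conservatism the abstract identifies.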
Towards deterministic optical quantum computation with coherently driven atomic ensembles
International Nuclear Information System (INIS)
Petrosyan, David
2005-01-01
Scalable and efficient quantum computation with photonic qubits requires (i) deterministic sources of single photons, (ii) giant nonlinearities capable of entangling pairs of photons, and (iii) reliable single-photon detectors. In addition, an optical quantum computer would need a robust reversible photon storage device. Here we discuss several related techniques, based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency, that are capable of implementing all of the above prerequisites for deterministic optical quantum computation with single photons
Deterministic and efficient quantum cryptography based on Bell's theorem
International Nuclear Information System (INIS)
Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg
2006-01-01
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology
Multidimensionally encoded magnetic resonance imaging.
Lin, Fa-Hsuan
2013-07-01
Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. Copyright © 2012 Wiley Periodicals, Inc.
Discovering Multidimensional Structure in Relational Data
DEFF Research Database (Denmark)
Jensen, Mikael Rune; Holmgren, Thomas; Pedersen, Torben Bach
2004-01-01
On-Line Analytical Processing (OLAP) systems based on multidimensional databases are essential elements of decision support. However, most existing data is stored in ordinary relational OLTP databases, i.e., data has to be (re-)modeled as multidimensional cubes before the advantages of OLAP tools are available. In this paper we present an approach for the automatic construction of multidimensional OLAP database schemas from existing relational OLTP databases, enabling easy OLAP design and analysis for most existing data sources. This is achieved through a set of practical and effective algorithms for discovering multidimensional schemas from relational databases. The algorithms take a wide range of available metadata into account in the discovery process, including functional and inclusion dependencies, and key and cardinality information.
Two multi-dimensional uncertainty relations
International Nuclear Information System (INIS)
Skala, L; Kapsa, V
2008-01-01
Two multi-dimensional uncertainty relations, one related to the probability density and the other one related to the probability density current, are derived and discussed. Both relations are stronger than the usual uncertainty relations for the coordinates and momentum
Multidimensional artificial field embedding with spatial sensitivity
CSIR Research Space (South Africa)
Lunga, D
2013-06-01
Full Text Available Multidimensional embedding is a technique useful for characterizing spectral signature relations in hyperspectral images. However, such images consist of disjoint similar spectral classes that are spatially sensitive, thus presenting challenges...
CAMS: OLAPing Multidimensional Data Streams Efficiently
Cuzzocrea, Alfredo
In the context of data stream research, taming the multidimensionality of real-life data streams in order to efficiently support OLAP analysis/mining tasks is a critical challenge. Inspired by this fundamental motivation, in this paper we introduce CAMS (Cube-based Acquisition model for Multidimensional Streams), a model for efficiently OLAPing multidimensional data streams. CAMS combines a set of data stream processing methodologies, namely (i) the OLAP dimension flattening process, which allows us to obtain dimensionality reduction of multidimensional data streams, and (ii) the OLAP stream aggregation scheme, which aggregates data stream readings according to an OLAP-hierarchy-based membership approach. We complete our analytical contribution by means of experimental assessment and analysis of both the efficiency and the scalability of OLAPing capabilities of CAMS on synthetic multidimensional data streams. Both analytical and experimental results clearly connote CAMS as an enabling component for next-generation Data Stream Management Systems.
Multidimensional Poverty and Child Survival in India
Mohanty, Sanjay K.
2011-01-01
Background Though the concept of multidimensional poverty has been acknowledged across disciplines (among economists, public health professionals, development thinkers, social scientists, policy makers and international organizations) and included in the development agenda, its measurement and application are still limited. Objectives and Methodology Using unit data from the National Family and Health Survey 3, India, this paper measures poverty in multidimensional space and examines the linkages of multidimensional poverty with child survival. Multidimensional poverty is measured in the dimensions of knowledge, health and wealth, and child survival is measured with respect to infant mortality and under-five mortality. Descriptive statistics, principal component analyses and life table methods are used in the analyses. Results The estimates of multidimensional poverty are robust and the inter-state differentials are large. While the infant mortality rate and under-five mortality rate are disproportionately higher among the abject poor compared to the non-poor, there are no significant differences in child survival among the educationally, economically and health poor at the national level. State patterns in child survival among the educationally, economically and health poor are mixed. Conclusion Use of multidimensional poverty measures helps to identify the abject poor, who are unlikely to come out of the poverty trap. Child survival is significantly lower among the abject poor compared to the moderately poor and non-poor. We urge popularizing the concept of multiple deprivations in research and programs so as to reduce poverty and inequality in the population. PMID:22046384
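The wealth dimension of such an index is commonly built from the first principal component of household asset indicators. A hedged sketch on synthetic data (not the NFHS-3 microdata; the indicator count and cut-off are illustrative):

```python
import numpy as np

# Illustrative sketch of one ingredient of the method: a first-
# principal-component "wealth index" from household asset indicators.
rng = np.random.default_rng(1)
latent = rng.normal(size=200)                      # latent household wealth
assets = np.column_stack([latent + 0.5 * rng.normal(size=200)
                          for _ in range(5)])      # 5 noisy asset indicators

z = (assets - assets.mean(0)) / assets.std(0)      # standardise indicators
cov = np.cov(z, rowvar=False)
vals, vecs = np.linalg.eigh(cov)                   # eigenvalues ascending
index = z @ vecs[:, -1]                            # score on the first PC

# households in the bottom tercile of the index -> "wealth poor"
cut = np.quantile(index, 1 / 3)
poor = index < cut
print(f"{poor.mean():.2f} of households classified wealth-poor")
```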
Deterministic Predictions of Vessel Responses Based on Past Measurements
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam; Jensen, Jørgen Juncher
2017-01-01
The paper deals with a prediction procedure from which global wave-induced responses can be deterministically predicted a short time, 10-50 s, ahead of current time. The procedure relies on the autocorrelation function and takes into account prior measurements only; i.e. knowledge about wave...
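A minimal version of an autocorrelation-based predictor, assuming a zero-mean stationary response and linear predictor weights solved from the normal equations (an illustrative stand-in for the paper's procedure, shown on a synthetic narrow-banded signal rather than measured vessel responses):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "response": an AR(2) process resembling a lightly damped
# oscillation (stand-in for a measured wave-induced response).
n = 5000
x = np.zeros(n)
for i in range(2, n):
    x[i] = 1.8 * x[i - 1] - 0.9 * x[i - 2] + rng.normal(scale=0.1)

def autocorr(sig, maxlag):
    """Biased sample autocovariance at lags 0..maxlag."""
    sig = sig - sig.mean()
    full = np.correlate(sig, sig, mode="full") / len(sig)
    mid = len(full) // 2
    return full[mid:mid + maxlag + 1]

p, h = 20, 3                       # memory length, prediction horizon (steps)
r = autocorr(x, p + h)
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
rhs = np.array([r[h + i] for i in range(p)])
w = np.linalg.solve(R, rhs)        # predictor weights from the normal equations

# predict x[t+h] from x[t], x[t-1], ..., x[t-p+1]
t = 4000
pred = w @ x[t - np.arange(p)]
print(pred, x[t + h])
```

The same construction extends to physical horizons of 10-50 s by choosing h accordingly relative to the sampling rate.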
About the Possibility of Creation of a Deterministic Unified Mechanics
International Nuclear Information System (INIS)
Khomyakov, G.K.
2005-01-01
The possibility of creating a unified deterministic scheme of classical and quantum mechanics that preserves their achievements is discussed. It is shown that the canonical system of ordinary differential equations of Hamiltonian classical mechanics can be augmented with a vector system of ordinary differential equations for the variables of the equations. The interpretational problems of quantum mechanics are considered
Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Niels Jacob
1994-01-01
An analysis has been made of the uncertainty of input parameters to deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be decreased, but not eliminated and has to be considered for engineering application. Stochastic models have a potential for ...
The State of Deterministic Thinking among Mothers of Autistic Children
Directory of Open Access Journals (Sweden)
Mehrnoush Esbati
2011-10-01
Full Text Available Objectives: The purpose of the present study was to investigate the effectiveness of cognitive-behavioral education on decreasing deterministic thinking in mothers of children with autism spectrum disorders. Methods: Participants were 24 mothers of autistic children who were referred to counseling centers of Tehran and whose children's disorder had been diagnosed by at least a psychiatrist and a counselor. They were randomly selected and assigned to control and experimental groups. The measurement tool was the Deterministic Thinking Questionnaire; both groups answered it before and after the education, and the answers were analyzed by analysis of covariance. Results: The results indicated that cognitive-behavioral education decreased deterministic thinking among mothers of autistic children; it also decreased all four subscales of deterministic thinking: interaction with others, absolute thinking, prediction of the future, and negative events (P<0.05). Discussion: By learning cognitive and behavioral techniques, parents of children with autism can reach a higher level of psychological well-being, and it is likely that these cognitive-behavioral skills would have a positive impact on the general life satisfaction of mothers of children with autism.
Deterministic multimode photonic device for quantum-information processing
DEFF Research Database (Denmark)
Nielsen, Anne E. B.; Mølmer, Klaus
2010-01-01
We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by exci...
Deterministic Chaos - Complex Chance out of Simple Necessity ...
Indian Academy of Sciences (India)
This is a very lucid and lively book on deterministic chaos. Chaos is very common in nature. However, the understanding and realisation of its potential applications is very recent. Thus this book is a timely addition to the subject. There are several books on chaos and several more are being added every day. In spite of this ...
Nonlinear deterministic structures and the randomness of protein sequences
Huang Yan Zhao
2003-01-01
To clarify the randomness of protein sequences, we make a detailed analysis of a set of typical protein sequences representing each structural class using a nonlinear prediction method. No deterministic structures are found in these protein sequences, which implies that they behave as random sequences. We also give an explanation for the controversial results obtained in previous investigations.
Line and lattice networks under deterministic interference models
Goseling, Jasper; Gastpar, Michael; Weber, Jos H.
Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, either of which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capabilities of the Monte Carlo and deterministic methods in day-by-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
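The deterministic point-kernel idea being compared can be sketched in a few lines: uncollided flux attenuated through a slab, corrected by a build-up factor. The linear build-up form used here is a crude illustrative choice, not MicroShield's model, and all source parameters are invented:

```python
import math

# Minimal point-kernel sketch of a deterministic shielding estimate:
# attenuated point-source flux with a build-up correction.
def point_kernel_flux(S, mu, x, r):
    """Photon flux at distance r (cm) behind a slab of thickness x (cm)."""
    buildup = 1.0 + mu * x                   # crude illustrative build-up factor
    return S * buildup * math.exp(-mu * x) / (4.0 * math.pi * r * r)

S = 3.7e7          # source strength (photons/s), illustrative
mu = 0.6           # linear attenuation coefficient (1/cm), illustrative
for x in (0.0, 5.0, 10.0):
    print(x, point_kernel_flux(S, mu, x, r=100.0))
```

The sensitivity issue named in the abstract shows up directly here: the answer depends multiplicatively on the build-up factor, so an extrapolated or mismatched B(mu*x) biases the whole result.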
Deterministic teleportation using single-photon entanglement as a resource
DEFF Research Database (Denmark)
Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.
2012-01-01
We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...
Empirical and deterministic accuracies of across-population genomic prediction
Wientjes, Y.C.J.; Veerkamp, R.F.; Bijma, P.; Bovenhuis, H.; Schrooten, C.; Calus, M.P.L.
2015-01-01
Background: Differences in linkage disequilibrium and in allele substitution effects of QTL (quantitative trait loci) may hinder genomic prediction across populations. Our objective was to develop a deterministic formula to estimate the accuracy of across-population genomic prediction, for which
A Deterministic Approach to the Synchronization of Cellular Automata
Garcia, J.; Garcia, P.
2011-01-01
In this work we introduce a deterministic scheme for synchronizing linear and nonlinear cellular automata (CA) with complex behavior, connected through a master-slave coupling. Using a definition of the Boolean derivative, we use the linear approximation of the automata to determine a coupling function that promotes synchronization without perturbing all the sites of the slave system.
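For a CA that is exactly linear over GF(2), the pinning idea can be demonstrated in a toy form: couple only a subset of sites and let linearity drive the error field to zero. The choice of rule 90 and even-site pinning below is an illustrative special case, not the paper's general coupling function:

```python
import numpy as np

# Toy sketch of deterministic master-slave synchronization for a linear
# CA (elementary rule 90: next = left XOR right on a ring).  Coupling:
# after each update, copy the master into the slave on even sites only.
# Because rule 90 is linear over GF(2), the error field obeys the same
# rule, and pinning alternate sites annihilates it without ever
# touching the odd sites.
def rule90(state):
    return np.roll(state, 1) ^ np.roll(state, -1)

rng = np.random.default_rng(3)
n = 32                                 # even ring size keeps the parity argument valid
master = rng.integers(0, 2, n)
slave = rng.integers(0, 2, n)          # independent initial condition

for step in range(4):
    master = rule90(master)
    slave = rule90(slave)
    slave[::2] = master[::2]           # couple only the even sites

print("synchronized:", np.array_equal(master, slave))
```

After one pinned update the error vanishes on even sites; after the next, every odd site reads only even neighbours, so the error is identically zero — synchronization in two steps while perturbing only half the lattice.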
Deterministic and Stochastic Study of Wind Farm Harmonic Currents
DEFF Research Database (Denmark)
Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus
2010-01-01
Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic char...
Mixed motion in deterministic ratchets due to anisotropic permeability
Kulrattanarak, T.; Sman, van der R.G.M.; Lubbersen, Y.S.; Schroën, C.G.P.H.; Pham, H.T.M.; Sarro, P.M.; Boom, R.M.
2011-01-01
Nowadays microfluidic devices are becoming popular for cell/DNA sorting and fractionation. One class of these devices, namely deterministic ratchets, seems most promising for continuous fractionation applications of suspensions (Kulrattanarak et al., 2008 [1]). Next to the two main types of particle
Simulation of Quantum Computation : A Deterministic Event-Based Approach
Michielsen, K.; Raedt, K. De; Raedt, H. De
2005-01-01
We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and
Using a satisfiability solver to identify deterministic finite state automata
Heule, M.J.H.; Verwer, S.
2009-01-01
We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we
Deterministic mean-variance-optimal consumption and investment
DEFF Research Database (Denmark)
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies.
Simulation of photonic waveguides with deterministic aperiodic nanostructures for biosensing
DEFF Research Database (Denmark)
Neustock, Lars Thorben; Paulsen, Moritz; Jahns, Sabrina
2016-01-01
Photonic waveguides with deterministic aperiodic corrugations offer rich spectral characteristics under surface-normal illumination. The finite-element method (FEM), the finite-difference time-domain (FDTD) method and a rigorous coupled wave algorithm (RCWA) are compared for computing the near...
Langevin equation with the deterministic algebraically correlated noise
International Nuclear Information System (INIS)
Ploszajczak, M.; Srokowski, T.
1995-01-01
Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author)
Deterministic dense coding and faithful teleportation with multipartite graph states
International Nuclear Information System (INIS)
Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.
2009-01-01
We propose schemes to perform deterministic dense coding and faithful teleportation with multipartite graph states. We also find the necessary and sufficient condition for a viable graph state in the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
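The stated viability condition is directly checkable: test whether the sender-receiver biadjacency matrix is invertible over GF(2). A sketch with Gaussian elimination mod 2 (the example graphs and partitions are invented for illustration):

```python
import numpy as np

# Sketch of the viability check from the abstract: is the reduced
# adjacency matrix between senders and receivers invertible over GF(2)?
def gf2_rank(M):
    """Rank of an integer matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # swap pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                  # eliminate mod 2
        rank += 1
    return rank

def viable(biadjacency):
    B = np.array(biadjacency, dtype=np.int64)
    return B.shape[0] == B.shape[1] and gf2_rank(B) == B.shape[0]

# 4-vertex path graph 1-2-3-4.  Senders {1,2}, receivers {3,4}: the only
# cross edge is 2-3, so B = [[0,0],[1,0]] is singular -> not viable.
print(viable([[0, 0], [1, 0]]))    # False
# Senders {1,3}, receivers {2,4}: cross edges 1-2, 3-2, 3-4 give an
# invertible B = [[1,0],[1,1]] -> viable.
print(viable([[1, 0], [1, 1]]))    # True
```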
Deterministic algorithms for multi-criteria Max-TSP
Manthey, Bodo
2012-01-01
We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of
A Deterministic Annealing Approach to Clustering AIRS Data
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the deterministic annealing technique.
Issanchou, Clara; Bilbao, Stefan; Le Carrou, Jean-Loïc; Touzé, Cyril; Doaré, Olivier
2017-04-01
This article is concerned with the vibration of a stiff linear string in the presence of a rigid obstacle. A numerical method for unilateral and arbitrary-shaped obstacles is developed, based on a modal approach in order to take into account the frequency dependence of losses in strings. The contact force of the barrier interaction is treated using a penalty approach, while a conservative scheme is derived for time integration, in order to ensure long-term numerical stability. In this way, the linear behaviour of the string when not in contact with the barrier can be controlled via a mode by mode fitting, so that the model is particularly well suited for comparisons with experiments. An experimental configuration is used with a point obstacle either centered or near an extremity of the string. In this latter case, such a pointwise obstruction approximates the end condition found in the tanpura, an Indian stringed instrument. The second polarisation of the string is also analysed and included in the model. Numerical results are compared against experiments, showing good accuracy over a long time scale.
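The penalty treatment of the contact force can be illustrated with a much cruder scheme than the paper's modal conservative one: an explicit finite-difference string where a point obstacle pushes back with a stiff spring whenever the string penetrates it. All parameters below are illustrative, not the tanpura measurements:

```python
import numpy as np

# Toy sketch of the penalty approach to string/obstacle contact
# (explicit finite differences, NOT the modal conservative scheme of
# the paper).  A point obstacle at height -b under one interior node
# exerts a restoring force K * penetration when the string dips below it.
n, L, c = 101, 1.0, 100.0          # nodes, string length (m), wave speed (m/s)
dx = L / (n - 1)
dt = 0.5 * dx / c                  # CFL-stable time step
K = 1e7                            # penalty stiffness (illustrative)
b = 0.002                          # obstacle 2 mm below the rest position
obs = n // 4                       # obstacle sits under this node

x = np.linspace(0.0, L, n)
u = 0.005 * np.sin(np.pi * x / L)  # initial pluck shape, fixed ends
u_prev = u.copy()                  # zero initial velocity

min_at_obstacle = 0.0
for _ in range(2000):
    lap = np.zeros(n)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    force = np.zeros(n)
    pen = -(u[obs] + b)                       # penetration depth below the obstacle
    if pen > 0.0:
        force[obs] = K * pen                  # penalty pushes the string back up
    u_next = 2.0 * u - u_prev + dt**2 * (c**2 * lap + force)
    u_next[0] = u_next[-1] = 0.0
    u_prev, u = u, u_next
    min_at_obstacle = min(min_at_obstacle, u[obs])

print(f"deepest excursion at the obstacle node: {min_at_obstacle * 1000:.3f} mm")
```

The penalty keeps penetration small but finite; the conservative time integration developed in the article exists precisely because naive explicit penalty stepping like this does not guarantee long-term energy stability.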
Improved multidimensional semiclassical tunneling theory.
Wagner, Albert F
2013-12-12
We show that the analytic multidimensional semiclassical tunneling formula of Miller et al. [Miller, W. H.; Hernandez, R.; Handy, N. C.; Jayatilaka, D.; Willets, A. Chem. Phys. Lett. 1990, 172, 62] is qualitatively incorrect for deep tunneling at energies well below the top of the barrier. The origin of this deficiency is that the formula uses an effective barrier weakly related to the true energetics but correctly adjusted to reproduce the harmonic description and anharmonic corrections of the reaction path at the saddle point as determined by second order vibrational perturbation theory. We present an analytic improved semiclassical formula that correctly includes energetic information and allows a qualitatively correct representation of deep tunneling. This is done by constructing a three segment composite Eckart potential that is continuous everywhere in both value and derivative. This composite potential has an analytic barrier penetration integral from which the semiclassical action can be derived and then used to define the semiclassical tunneling probability. The middle segment of the composite potential by itself is superior to the original formula of Miller et al. because it incorporates the asymmetry of the reaction barrier produced by the known reaction exoergicity. Comparison of the semiclassical and exact quantum tunneling probability for the pure Eckart potential suggests a simple threshold multiplicative factor to the improved formula to account for quantum effects very near threshold not represented by semiclassical theory. The deep tunneling limitations of the original formula are echoed in semiclassical high-energy descriptions of bound vibrational states perpendicular to the reaction path at the saddle point. However, typically ab initio energetic information is not available to correct it. The Supporting Information contains a Fortran code, test input, and test output that implements the improved semiclassical tunneling formula.
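The ingredients named above — a barrier penetration integral defining a semiclassical action, from which a tunneling probability follows — can be sketched generically (hedged: this is a plain WKB treatment of a symmetric Eckart-type barrier in reduced units, not the improved composite-Eckart formula of the paper):

```python
import numpy as np

# Generic WKB sketch: for a symmetric barrier V = V0 / cosh^2(a*x),
# the barrier-penetration integral gives the action theta(E), and the
# uniform semiclassical tunneling probability is 1 / (1 + exp(2*theta)).
hbar, m = 1.0, 1.0                 # reduced units
V0, a = 10.0, 1.0

def V(x):
    return V0 / np.cosh(a * x) ** 2

def theta(E):
    """Barrier penetration integral between the classical turning points."""
    xt = np.arccosh(np.sqrt(V0 / E)) / a        # turning point: V(xt) = E
    x = np.linspace(-xt, xt, 20001)
    f = np.sqrt(2.0 * m * np.clip(V(x) - E, 0.0, None)) / hbar
    dxs = x[1] - x[0]
    return dxs * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule

def p_tunnel(E):
    return 1.0 / (1.0 + np.exp(2.0 * theta(E)))

for E in (2.0, 5.0, 8.0):
    print(E, p_tunnel(E))
```

For this particular barrier the integral is analytic, theta(E) = pi*(sqrt(2*m*V0) - sqrt(2*m*E))/(hbar*a), which makes a convenient check on the quadrature; deep tunneling corresponds to the large-theta tail where, as the abstract argues, an effective barrier with the wrong energetics fails qualitatively.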
Data matching for free-surface multiple attenuation by multidimensional deconvolution
van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald
2012-09-01
A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.
A deterministic-probabilistic model for contaminant transport. User manual
Energy Technology Data Exchange (ETDEWEB)
Schwartz, F W; Crowe, A
1980-08-01
This manual describes a deterministic-probabilistic contaminant transport (DPCT) computer model designed to simulate mass transfer by ground-water movement in a vertical section of the earth's crust. The model can account for convection, dispersion, radioactive decay, and cation exchange for a single component. A velocity is calculated from the convective transport of the ground water for each reference particle in the modeled region; dispersion is accounted for in the particle motion by adding a random component to the deterministic motion. The model is sufficiently general to enable the user to specify virtually any type of water table or geologic configuration, and a variety of boundary conditions. A major emphasis in the model development has been placed on making the model simple to use, and information provided in the User Manual will permit changes to the computer code to be made relatively easily for those that might be required for specific applications. (author)
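The deterministic-plus-random particle idea can be sketched in one dimension: each particle takes a deterministic convective step, a Gaussian random-walk step for dispersion, and survives radioactive decay with the appropriate probability. This is an illustrative stand-in, not the DPCT code, and all coefficients are invented:

```python
import numpy as np

# 1-D sketch of the deterministic-probabilistic particle method:
# convection (deterministic) + random-walk dispersion + first-order decay.
rng = np.random.default_rng(4)

v = 1.0          # pore velocity (m/day), illustrative
D = 0.05         # dispersion coefficient (m^2/day), illustrative
lam = 0.01       # decay constant (1/day), illustrative
dt, steps = 0.1, 500
n0 = 20000

x = np.zeros(n0)                       # all particles released at x = 0
alive = np.ones(n0, dtype=bool)
for _ in range(steps):
    # deterministic convective step + random dispersive step
    x += v * dt + rng.normal(scale=np.sqrt(2.0 * D * dt), size=n0)
    # each particle decays with probability lam*dt per step
    alive &= rng.random(n0) >= lam * dt

t = steps * dt                                    # 50 days
print("mean position:   ", x[alive].mean())       # ~ v*t = 50
print("spread (std):    ", x[alive].std())        # ~ sqrt(2*D*t) ~ 2.24
print("surviving frac.: ", alive.mean())          # ~ exp(-lam*t) ~ 0.61
```

The ensemble statistics reproduce the advection-dispersion-decay solution, which is exactly why the particle picture is a convenient numerical proxy for the transport equation.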
Deterministic chaos at the ocean surface: applications and interpretations
Directory of Open Access Journals (Sweden)
A. J. Palmer
1998-01-01
Full Text Available Ocean surface, grazing-angle radar backscatter data from two separate experiments, one of which provided coincident time series of measured surface winds, were found to exhibit signatures of deterministic chaos. Evidence is presented that the lowest dimensional underlying dynamical system responsible for the radar backscatter chaos is that which governs the surface wind turbulence. Block-averaging time was found to be an important parameter for determining the degree of determinism in the data as measured by the correlation dimension, and by the performance of an artificial neural network in retrieving wind and stress from the radar returns, and in radar detection of an ocean internal wave. The correlation dimensions are lowered and the performance of the deterministic retrieval and detection algorithms are improved by averaging out the higher dimensional surface wave variability in the radar returns.
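The correlation dimension used here as a measure of determinism is typically estimated with the Grassberger–Procaccia correlation sum. A hedged sketch on delay-embedded iterates of the chaotic logistic map (a stand-in for the radar time series; since the map's support is one-dimensional, the fitted slope should come out near 1):

```python
import numpy as np

# Sketch of a correlation-dimension estimate (Grassberger-Procaccia):
# C(r) ~ r^D, so D is the slope of log C against log r.
rng = np.random.default_rng(5)

# chaotic scalar series from the fully chaotic logistic map
x = np.empty(4000)
x[0] = 0.3
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

# delay embedding in dimension 2, subsampled to keep pair counts modest
emb = np.column_stack([x[:-1], x[1:]])
sub = emb[rng.choice(len(emb), 1000, replace=False)]

d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
d = d[np.triu_indices_from(d, k=1)]               # distinct pairwise distances

radii = np.logspace(-2.5, -1.0, 8)
C = np.array([(d < r).mean() for r in radii])     # correlation sum
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print(f"correlation dimension estimate: {slope:.2f}")
```

A low estimated dimension from measured returns, as reported in the abstract, is the signature that a low-dimensional deterministic system (here, the surface wind turbulence) underlies the signal.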
Deterministic Properties of Serially Connected Distributed Lag Models
Directory of Open Access Journals (Sweden)
Piotr Nowak
2013-01-01
Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the independent variable is also the dependent variable in the preceding model). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory, such as probability distributions and the central limit theorem. (original abstract)
Deterministic Brownian motion generated from differential delay equations.
Lei, Jinzhi; Mackey, Michael C
2011-10-01
This paper addresses the question of how Brownian-like motion can arise from the solution of a deterministic differential delay equation. To study this we analytically study the bifurcation properties of an apparently simple differential delay equation and then numerically investigate the probabilistic properties of chaotic solutions of the same equation. Our results show that solutions of the deterministic equation with randomly selected initial conditions display a Gaussian-like density for long time, but the densities are supported on an interval of finite measure. Using these chaotic solutions as velocities, we are able to produce Brownian-like motions, which show statistical properties akin to those of a classical Brownian motion over both short and long time scales. Several conjectures are formulated for the probabilistic properties of the solution of the differential delay equation. Numerical studies suggest that these conjectures could be "universal" for similar types of "chaotic" dynamics, but we have been unable to prove this.
Progress in nuclear well logging modeling using deterministic transport codes
International Nuclear Information System (INIS)
Kodeli, I.; Aldama, D.L.; Maucec, M.; Trkov, A.
2002-01-01
Further studies, in continuation of the work presented in 2001 in Portoroz, were performed in order to study and improve the performance, precision and domain of application of deterministic transport codes with respect to oil well logging analysis. These codes are in particular expected to complement the Monte Carlo solutions, since they can provide a detailed particle flux distribution in the whole geometry in a very reasonable CPU time. Real-time calculation can be envisaged. The performance of deterministic transport methods was compared to that of the Monte Carlo method. The IRTMBA generic benchmark was analysed using the codes MCNP-4C and DORT/TORT. Centric as well as eccentric casings were considered, using a 14 MeV point neutron source and NaI scintillation detectors. Neutron and gamma spectra were compared at two detector positions. (author)
Deterministic blade row interactions in a centrifugal compressor stage
Kirtley, K. R.; Beach, T. A.
1991-01-01
The three-dimensional viscous flow in a low speed centrifugal compressor stage is simulated using an average passage Navier-Stokes analysis. The impeller discharge flow is of the jet/wake type with low momentum fluid in the shroud-pressure side corner coincident with the tip leakage vortex. This nonuniformity introduces periodic unsteadiness in the vane frame of reference. The effect of such deterministic unsteadiness on the time-mean is included in the analysis through the average passage stress, which allows the analysis of blade row interactions. The magnitude of the divergence of the deterministic unsteady stress is of the order of the divergence of the Reynolds stress over most of the span, from the impeller trailing edge to the vane throat. Although the potential effects on the blade trailing edge from the diffuser vane are small, strong secondary flows generated by the impeller degrade the performance of the diffuser vanes.
One-step deterministic multipartite entanglement purification with linear optics
Energy Technology Data Exchange (ETDEWEB)
Sheng, Yu-Bo [Department of Physics, Tsinghua University, Beijing 100084 (China); Long, Gui Lu, E-mail: gllong@tsinghua.edu.cn [Department of Physics, Tsinghua University, Beijing 100084 (China); Center for Atomic and Molecular NanoSciences, Tsinghua University, Beijing 100084 (China); Key Laboratory for Quantum Information and Measurements, Beijing 100084 (China); Deng, Fu-Guo [Department of Physics, Applied Optics Beijing Area Major Laboratory, Beijing Normal University, Beijing 100875 (China)
2012-01-09
We present a one-step deterministic multipartite entanglement purification scheme for an N-photon system in a Greenberger–Horne–Zeilinger state with linear optical elements. The parties in quantum communication can in principle obtain a maximally entangled state from each N-photon system with a success probability of 100%. That is, the scheme does not largely consume the less-entangled photon systems, in contrast to other multipartite entanglement purification schemes. This feature may make the scheme more feasible in practical applications. -- Highlights: ► We proposed a deterministic entanglement purification scheme for GHZ states. ► The scheme uses only linear optical elements and has a success probability of 100%. ► The scheme gives a purified GHZ state in just one step.
Relationship of Deterministic Thinking With Loneliness and Depression in the Elderly
Directory of Open Access Journals (Sweden)
Mehdi Sharifi
2017-12-01
Conclusion According to the results, it can be said that deterministic thinking has a significant relationship with depression and the sense of loneliness in older adults, so deterministic thinking acts as a predictor of depression and loneliness in this group. Therefore, psychological interventions challenging the cognitive distortion of deterministic thinking, and attention to mental health in older adults, are very important.
Ordinal optimization and its application to complex deterministic problems
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
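The goal-softening and ordinal-comparison idea described in this abstract can be sketched in a few lines. Everything below (a quadratic "true" cost standing in for an expensive FEM run, the noise level of the crude pseudo-model, and the design grid) is an illustrative assumption, not the thesis's actual model:

```python
import random

random.seed(0)

def true_cost(x):
    # Expensive "true" evaluation (stand-in for a full FEM run).
    return (x - 0.3) ** 2

def crude_cost(x):
    # Cheap but noisy pseudo-model: true cost plus a white-noise-like error.
    return true_cost(x) + random.gauss(0.0, 0.05)

designs = [i / 100.0 for i in range(100)]

# Ordinal comparison: rank designs using the crude model only.
ranked = sorted(designs, key=crude_cost)

# Goal softening: accept any of the top-g crude designs, not the single best.
g = 10
selected = set(ranked[:g])

# Check alignment with the true top-10 set.
true_top = set(sorted(designs, key=true_cost)[:10])
overlap = len(selected & true_top)
print(f"{overlap} of the 10 truly best designs are in the selected set")
```

The point of the ordinal approach is that ranking under noise is far more reliable than estimating cost values under noise, so a softened goal (any of the top g) is met with high probability even with a very crude model.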
Evaluation of Deterministic and Stochastic Components of Traffic Counts
Directory of Open Access Journals (Sweden)
Ivan Bošnjak
2012-10-01
Full Text Available Traffic counts, or statistical evidence of the traffic process, are often a characteristic of time-series data. In this paper the fundamental problem of estimating the deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
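As an illustration of the trend and seasonal identification step mentioned in this abstract, here is a minimal classical-decomposition sketch on a synthetic count series (the series, period and amplitude are assumptions for illustration, not the paper's data): a centered moving average removes the trend, and per-period averages of the detrended values estimate the seasonal component.

```python
import math

period = 24                       # assumed daily cycle of hourly counts
n = period * 10
series = [100 + 0.5 * t + 20 * math.sin(2 * math.pi * t / period) for t in range(n)]

half = period // 2
trend = [None] * n
for t in range(half, n - half):
    # Centered moving average over one full period (even period: half-weight ends).
    window = (0.5 * series[t - half]
              + sum(series[t - half + 1:t + half])
              + 0.5 * series[t + half])
    trend[t] = window / period

detrended = [series[t] - trend[t] for t in range(half, n - half)]

# Seasonal component: average the detrended values at each position in the cycle.
seasonal = [0.0] * period
counts = [0] * period
for i, t in enumerate(range(half, n - half)):
    seasonal[t % period] += detrended[i]
    counts[t % period] += 1
seasonal = [s / c for s, c in zip(seasonal, counts)]

# The estimated seasonal component recovers the injected sine pattern (amplitude 20).
print(max(seasonal), min(seasonal))
```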
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture, and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code drafting, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with a log-normal distribution of PGA or response spectrum. The main strength of this approach is that it is presently a standard for the majority of countries, but it has weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at a site, such as site effects and source characteristics like the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for a magnitude of less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions: the looser the soil and the higher the liquefaction potential, the more suitable the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long since been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. The case history of 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
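The Poissonian occurrence assumption underlying PSHA, mentioned in this abstract, reduces for a single ground-motion level to a one-line hazard formula. The sketch below shows the familiar return-period arithmetic; the numbers are generic code values, not results from the study:

```python
import math

# Under the Poisson assumption, if ground motion exceeding level a has an
# annual rate lam(a), the probability of at least one exceedance in T years
# is 1 - exp(-lam * T).
def prob_exceedance(annual_rate, years):
    return 1.0 - math.exp(-annual_rate * years)

# A 475-year return period (annual rate 1/475) gives the familiar ~10%
# probability of exceedance in 50 years used by many seismic codes.
p = prob_exceedance(1.0 / 475.0, 50.0)
print(f"P(exceedance in 50 yr) = {p:.4f}")
```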
Langevin equation with the deterministic algebraically correlated noise
Energy Technology Data Exchange (ETDEWEB)
Ploszajczak, M. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Srokowski, T. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France)]|[Institute of Nuclear Physics, Cracow (Poland)
1995-12-31
Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author). 58 refs.
Beeping a Deterministic Time-Optimal Leader Election
Dufoulon , Fabien; Burman , Janna; Beauquier , Joffroy
2018-01-01
The beeping model is an extremely restrictive broadcast communication model that relies only on carrier sensing. In this model, we solve the leader election problem with an asymptotically optimal round complexity of O(D + log n), for a network of unknown size n and unknown diameter D (but with unique identifiers). Contrary to the best previously known algorithms in the same setting, the proposed one is deterministic. The techniques we introduce give a new insight as to how local constraints o...
Are deterministic methods suitable for short term reserve planning?
International Nuclear Information System (INIS)
Voorspools, Kris R.; D'haeseleer, William D.
2005-01-01
Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks if the N-1 reserve is a logical fixed value for minutes reserve. The second test procedure investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as minutes reserve. The expected jump in reliability, resulting in low reliability for reserve margins lower than the largest unit and high reliability above, is not observed. The second test shows that both the N-1 reserve and the percentage reserve methods do not provide a stable reliability level that is independent of power demand. For the N-1 reserve, the reliability increases with decreasing maximum demand. For the percentage reserve, the reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that the probability based methods are to be preferred over the deterministic methods
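The loss-of-load expectation used in this abstract as the objective stochastic criterion can be approximated with a small Monte Carlo sketch. The unit fleet, forced-outage rate and demand below are invented for illustration, not taken from the paper:

```python
import random

random.seed(1)

units = [400, 400, 300, 300, 200, 200, 100, 100]  # assumed unit capacities (MW)
forced_outage_rate = 0.05                          # assumed per-unit outage probability
demand = 1500.0                                    # assumed demand (MW)
trials = 20000

# Sample random unit outages and count trials where available capacity
# falls short of demand.
loss_of_load = 0
for _ in range(trials):
    available = sum(u for u in units if random.random() > forced_outage_rate)
    if available < demand:
        loss_of_load += 1

lole = loss_of_load / trials
print(f"loss-of-load probability ≈ {lole:.4f}")

# The two deterministic reserve rules discussed in the abstract, for comparison:
n_minus_1_reserve = max(units)      # N-1 rule: reserve equal to the largest unit
percentage_reserve = 0.10 * demand  # percentage rule: e.g. 10% of demand
```

The deterministic rules fix a reserve margin regardless of the outage statistics; the stochastic criterion above is what the paper uses to show that neither rule yields a demand-independent reliability level.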
Deterministic hazard quotients (HQs): Heading down the wrong road
International Nuclear Information System (INIS)
Wilde, L.; Hunter, C.; Simpson, J.
1995-01-01
The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in the remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates that further risk evaluation is needed, while an HQ < 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify the uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measurements of the associated uncertainty. Sensitivity analyses were used to identify the most important factors significantly influencing risk determination. Understanding the uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches
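The deterministic-versus-probabilistic contrast in this abstract can be sketched numerically: a conservative point-estimate HQ versus a Monte Carlo HQ distribution. All concentrations, benchmark values and distribution parameters below are invented for illustration:

```python
import random

random.seed(2)

# Deterministic screening: conservative point estimates.
epc_conservative = 80.0   # exposure point concentration (mg/kg), biased high
ebv_conservative = 100.0  # ecological benchmark value (mg/kg), biased low
hq_deterministic = epc_conservative / ebv_conservative  # 0.8 -> site screened out

# Probabilistic screening: sample EPC and EBV from assumed lognormal
# distributions and look at the probability that HQ exceeds 1.
trials = 50000
exceed = 0
for _ in range(trials):
    epc = random.lognormvariate(3.5, 0.6)  # median ~33 mg/kg (assumed)
    ebv = random.lognormvariate(5.0, 0.4)  # median ~148 mg/kg (assumed)
    if epc / ebv > 1.0:
        exceed += 1

p_hq_over_1 = exceed / trials
print(f"deterministic HQ = {hq_deterministic:.2f}, P(HQ > 1) ≈ {p_hq_over_1:.3f}")
```

The probabilistic result attaches a quantified exceedance probability to the same screening decision that the deterministic HQ reduces to a single pass/fail number.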
Distinguishing deterministic and noise components in ELM time series
International Nuclear Information System (INIS)
Zvejnieks, G.; Kuzovkov, V.N
2004-01-01
Full text: One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components in experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial-engineering principles which allows us to distinguish deterministic and noise components. We extended the linear auto-regression (AR) method by including non-linearity (the NAR method). As a starting point we chose the nonlinearity in polynomial form; however, the NAR method can be extended to any other type of non-linear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we have analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The results obtained indicate that a linear AR model can describe the ELM behavior. In turn, this means that type I ELM behavior is of a relaxation or random type.
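The AR-versus-polynomial-NAR selection described in this abstract can be sketched as a least-squares fit over increasing polynomial orders, with BIC picking the winner. The synthetic linear-AR(1) series below is an assumption for illustration, not ELM data:

```python
import math

import numpy as np

rng = np.random.default_rng(3)
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal(0.0, 1.0)  # a truly linear AR(1) process

def fit_bic(order):
    # Fit x[t] = c0 + c1*x[t-1] + ... + c_order*x[t-1]**order by least squares.
    X = np.column_stack([x[:-1] ** p for p in range(order + 1)])
    y = x[1:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    m = len(y)
    sigma2 = np.mean(resid ** 2)
    k = order + 1  # number of fitted parameters
    return m * math.log(sigma2) + k * math.log(m)

bics = {order: fit_bic(order) for order in (1, 2, 3)}
best = min(bics, key=bics.get)
print(f"selected polynomial order: {best}")
```

For a genuinely linear process, the BIC penalty outweighs the marginal residual reduction of the spurious higher-order terms, mirroring the paper's conclusion that a linear AR model suffices.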
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
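The slow-convergence problem this abstract describes, and the flavor of the fission matrix fix, can be shown on a toy eigenproblem. The 2x2 matrix below is an invented stand-in for a real fission matrix, chosen only to have a dominance ratio near 1:

```python
import numpy as np

F = np.array([[1.00, 0.02],
              [0.02, 1.00]])  # eigenvalues 1.02 and 0.98 -> dominance ratio ~0.96

exact = np.array([1.0, 1.0]) / np.sqrt(2.0)  # dominant eigenvector

# Unaccelerated power iteration from a deliberately bad starting source:
# the error shrinks only by the dominance ratio per sweep.
s = np.array([1.0, 0.0])
for _ in range(20):
    s = F @ s
    s /= np.linalg.norm(s)
err_power = np.linalg.norm(s - exact)

# Fission-matrix-style acceleration in miniature: solve the estimated
# matrix's eigenproblem directly instead of iterating.
vals, vecs = np.linalg.eigh(F)
s_direct = np.abs(vecs[:, np.argmax(vals)])
err_direct = np.linalg.norm(s_direct - exact)

print(f"power iteration error after 20 sweeps: {err_power:.3f}")
print(f"direct fission-matrix solve error:     {err_direct:.2e}")
```

In the thesis the fission matrix is estimated from Monte Carlo tallies rather than given exactly, but the payoff is the same: the direct eigensolve bypasses the per-iteration error reduction that the high dominance ratio makes painfully slow.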
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as the season of the year, the day of the week or the time of day. For deterministic radial distribution load flow studies, load is taken as constant; but load varies continually and with a high degree of uncertainty, so there is a need to model probable realistic load. Monte-Carlo Simulation is used to model probable realistic load by generating random values of active and reactive power from the mean and standard deviation of the load, and a deterministic radial load flow is solved with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
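The Monte-Carlo idea in this abstract can be sketched on a single branch: sample random loads from their mean and standard deviation, run a deterministic calculation per sample, and reconstruct the probabilistic voltage from the deterministic results. A 2-bus feeder with assumed per-unit values stands in for a full radial network:

```python
import random
import statistics

random.seed(4)

R, X = 0.05, 0.10          # per-unit branch impedance (assumed)
V_source = 1.0             # per-unit slack voltage
P_mean, P_std = 0.8, 0.08  # active load in p.u. (assumed mean and std)
Q_mean, Q_std = 0.4, 0.04  # reactive load in p.u. (assumed mean and std)

def deterministic_voltage(p, q):
    # Approximate voltage-drop equation for a short radial branch.
    return V_source - (R * p + X * q) / V_source

# One deterministic solve per random load sample.
samples = [deterministic_voltage(random.gauss(P_mean, P_std),
                                 random.gauss(Q_mean, Q_std))
           for _ in range(10000)]

# Probabilistic solution reconstructed from the deterministic results.
v_mean = statistics.mean(samples)
v_std = statistics.stdev(samples)
print(f"V2 ≈ {v_mean:.4f} p.u. ± {v_std:.4f}")
```

A real study would replace `deterministic_voltage` with a full backward/forward-sweep radial load flow and a ZIP load model; the sampling-and-reconstruction loop is unchanged.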
Convergence studies of deterministic methods for LWR explicit reflector methodology
International Nuclear Information System (INIS)
Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.
2013-01-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of error for core analyses of the Swiss operating LWRs, which all belong to GII designs. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
Precision production: enabling deterministic throughput for precision aspheres with MRF
Maloney, Chris; Entezarian, Navid; Dumas, Paul
2017-10-01
Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest MERS outbreak outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between infected and non-infected individuals, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of free virus in the environment. Mathematical modeling is used to illustrate the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state condition. The stochastic model, based on a Continuous-Time Markov Chain (CTMC) approach, is used to predict future states by means of random variables. For the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed with the stochastic model. Simulations of both models using several different parameters are shown, and the probability of disease extinction is compared for several initial conditions.
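The deterministic/stochastic pairing described in this abstract can be sketched with a simplified SIR-type model: forward-Euler integration for the deterministic part and Gillespie-style CTMC runs for the stochastic part. The rates, population size and two-compartment structure are illustrative assumptions; the actual MERS model has more compartments and an environmental free-virus pool:

```python
import random

random.seed(5)

beta, gamma = 0.3, 0.2   # transmission and recovery rates (assumed)
R0 = beta / gamma        # shared threshold value (1.5 > 1 here)
N, I0 = 1000, 2

# Deterministic model: forward-Euler integration of the SIR equations.
s, i, r = (N - I0) / N, I0 / N, 0.0
dt = 0.01
for _ in range(int(200 / dt)):
    ds, di, dr = -beta * s * i, beta * s * i - gamma * i, gamma * i
    s, i, r = s + dt * ds, i + dt * di, r + dt * dr

# Stochastic model: CTMC event chains. Branching-process theory predicts an
# extinction probability of about (1/R0)**I0 when R0 > 1.
def outbreak_dies_out():
    S, I = N - I0, I0
    while I > 0:
        infection_rate = beta * S * I / N
        if random.random() < infection_rate / (infection_rate + gamma * I):
            S, I = S - 1, I + 1
        else:
            I -= 1
    return (N - S) < 20  # fewer than 20 ever infected: early extinction

runs = 2000
p_extinct = sum(outbreak_dies_out() for _ in range(runs)) / runs
print(f"deterministic final epidemic size: {r:.3f}")
print(f"P(extinction) ≈ {p_extinct:.3f} vs branching value {(1 / R0) ** I0:.3f}")
```

This shows the abstract's key point in miniature: both models share the same threshold quantity, but only the stochastic one yields an extinction probability.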
Applicability of deterministic methods in seismic site effects modeling
International Nuclear Information System (INIS)
Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.
2005-01-01
The up-to-date information on the local geological structure in the Bucharest urban area has been integrated into complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations over the whole city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong-motion records in the frequency domain (0.05-1) Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique, applied to model the seismic wave propagation between the seismic source and the studied sites, with the mode coupling approach, used to model the seismic wave propagation through the local sedimentary structure of the target site, allow the modelling to be extended to higher frequencies of earthquake-engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last 3 Vrancea strong events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong-motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)
Intuitionistic fuzzy (IF) evaluations of multidimensional model
International Nuclear Information System (INIS)
Valova, I.
2012-01-01
There are different logical methods for data structuring, but none is perfect. The multidimensional model (MD) of data presents data in the form of a cube (also referred to as an info-cube or hypercube) or in the form of a 'star'-type scheme (referred to as a multidimensional scheme), using F-structures (Facts) and a set of D-structures (Dimensions), based on the notion of a hierarchy of D-structures. The data under analysis in a specific multidimensional model is located in a Cartesian space restricted by the D-structures. In fact, the data is either dispersed or 'concentrated', and therefore the data cells are not distributed evenly within the respective space. The moment of occurrence of any event is difficult to predict, and the data is concentrated by time period, location of the performed business event, etc. To process such dispersed or concentrated data, various technical strategies are needed, and the basic methods for presenting such data should be selected. The approaches to data processing and the respective calculations are connected with different options for data representation. The use of intuitionistic fuzzy evaluations (IFE) provides new possibilities for the alternative presentation and processing of data under analysis in any OLAP application. The use of IFE in the evaluation of multidimensional models will result in the following advantages: analysts will have more complete information for processing and analysis of the respective data; the benefit for managers is that final decisions will be more effective; and the design of more functional multidimensional schemes is enabled. The purpose of this work is to apply intuitionistic fuzzy evaluations to a multidimensional model of data. (authors)
Multi-Dimensional Aggregation for Temporal Data
DEFF Research Database (Denmark)
Böhlen, M. H.; Gamper, J.; Jensen, Christian Søndergaard
2006-01-01
Business Intelligence solutions, encompassing technologies such as multi-dimensional data modeling and aggregate query processing, are being applied increasingly to non-traditional data. This paper extends multi-dimensional aggregation to apply to data with associated interval values that capture...... that the data holds for each point in the interval, as well as the case where the data holds only for the entire interval, but must be adjusted to apply to sub-intervals. The paper reports on an implementation of the new operator and on an empirical study that indicates that the operator scales to large data...
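The interval-valued aggregation this abstract sketches can be illustrated with a tiny example: split the timeline at interval endpoints and aggregate, per resulting sub-interval, the tuples whose validity interval covers it (the "data holds at each point" semantics). The tuples below are invented for illustration:

```python
# (valid [start, end), value) tuples with interval-stamped data.
rows = [
    ((1, 10), 100),
    ((5, 15), 200),
    ((12, 20), 50),
]

# Split the timeline at every interval endpoint; within each resulting
# sub-interval the set of covering tuples is constant.
endpoints = sorted({t for (s, e), _ in rows for t in (s, e)})
result = []
for lo, hi in zip(endpoints, endpoints[1:]):
    vals = [v for (s, e), v in rows if s <= lo and hi <= e]
    if vals:
        result.append(((lo, hi), sum(vals)))

for interval, total in result:
    print(interval, total)
```

The paper's operator generalizes this point-semantics splitting to multiple dimensions and to the adjusted semantics where a value that holds for a whole interval must be apportioned over sub-intervals.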
Simulation of a Multidimensional Input Quantum Perceptron
Yamamoto, Alexandre Y.; Sundqvist, Kyle M.; Li, Peng; Harris, H. Rusty
2018-06-01
In this work, we demonstrate the improved data separation capabilities of the Multidimensional Input Quantum Perceptron (MDIQP), a fundamental cell for the construction of more complex Quantum Artificial Neural Networks (QANNs). This is done by using input controlled alterations of ancillary qubits in combination with phase estimation and learning algorithms. The MDIQP is capable of processing quantum information and classifying multidimensional data that may not be linearly separable, extending the capabilities of the classical perceptron. With this powerful component, we get much closer to the achievement of a feedforward multilayer QANN, which would be able to represent and classify arbitrary sets of data (both quantum and classical).
Multi-dimensional Laplace transforms and applications
International Nuclear Information System (INIS)
Mughrabi, T.A.
1988-01-01
In this dissertation we establish new theorems for computing certain types of multidimensional Laplace transform pairs from known one-dimensional Laplace transforms. The theorems are applied to the most commonly used special functions, and so we obtain many two- and three-dimensional Laplace transform pairs. As applications, some boundary value problems involving linear partial differential equations are solved by the use of multidimensional Laplace transformation. We also establish some relations between the Laplace transformation and other integral transformations in two variables.
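The dissertation's theme of building multidimensional pairs from one-dimensional transforms can be spot-checked symbolically. One classical relation of this kind (stated here as an assumption to verify, not quoted from the dissertation) is L2{f(t1 + t2)}(s1, s2) = (F(s1) - F(s2)) / (s2 - s1); the sketch checks it for f(t) = exp(-t):

```python
import sympy as sp

t1, t2 = sp.symbols("t1 t2", positive=True)
s1, s2 = sp.symbols("s1 s2", positive=True)

# Direct double integral: the 2-D Laplace transform of exp(-(t1 + t2)).
lhs = sp.integrate(
    sp.exp(-(t1 + t2)) * sp.exp(-s1 * t1 - s2 * t2),
    (t1, 0, sp.oo), (t2, 0, sp.oo),
)

# The same pair built from the 1-D transform F(s) = 1/(s + 1).
F = lambda s: 1 / (s + 1)
rhs = (F(s1) - F(s2)) / (s2 - s1)

print(sp.simplify(lhs - rhs))  # 0
```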
Application of multidimensional IRT models to longitudinal data
te Marvelde, J.M.; Glas, Cornelis A.W.; Van Landeghem, Georges; Van Damme, Jan
2006-01-01
The application of multidimensional item response theory (IRT) models to longitudinal educational surveys where students are repeatedly measured is discussed and exemplified. A marginal maximum likelihood (MML) method to estimate the parameters of a multidimensional generalized partial credit model is presented.
The emergence and evolution of the multidimensional organization
Strikwerda, J.; Stoelhorst, J.W.
2009-01-01
The article discusses multidimensional organizations and the evolution of complex organizations. The six characteristics of multidimensional organizations, disadvantages of the successful organizational structure that is categorized as a multidivisional, multi-unit or M-form, research by the
Multidimensional Screening as a Pharmacology Laboratory Experience.
Malone, Marvin H.; And Others
1979-01-01
A multidimensional pharmacodynamic screening experiment that addresses drug interaction is included in the pharmacology-toxicology laboratory experience of pharmacy students at the University of the Pacific. The student handout with directions for the procedure is reproduced, drug compounds tested are listed, and laboratory evaluation results are…
Continued validation of the Multidimensional Perfectionism Scale.
Clavin, S L; Clavin, R H; Gayton, W F; Broida, J
1996-06-01
Scores on the Multidimensional Perfectionism Scale have been correlated with measures of obsessive-compulsive tendencies for women, so the validity of scores on this scale for 41 men was examined. Scores on the Perfectionism Scale were significantly correlated (.47-.03) with scores on the Maudsley Obsessive-Compulsive Inventory.
Multi-dimensional indoor location information model
Xiong, Q.; Zhu, Q.; Zlatanova, S.; Huang, L.; Zhou, Y.; Du, Z.
2013-01-01
Aiming at the increasing requirements of seamless indoor and outdoor navigation and location service, a Chinese standard of Multidimensional Indoor Location Information Model is being developed, which defines ontology of indoor location. The model is complementary to 3D concepts like CityGML and
Multi-dimensional quasitoeplitz Markov chains
Directory of Open Access Journals (Sweden)
Alexander N. Dudin
1999-01-01
Full Text Available This paper deals with multi-dimensional quasitoeplitz Markov chains. We establish a sufficient equilibrium condition and derive a functional matrix equation for the corresponding vector-generating function, whose solution is given algorithmically. The results are demonstrated in the form of examples and applications in queues with BMAP-input, which operate in synchronous random environment.
Multidimensional human dynamics in mobile phone communications.
Quadri, Christian; Zignani, Matteo; Capra, Lorenzo; Gaito, Sabrina; Rossi, Gian Paolo
2014-01-01
In today's technology-assisted society, social interactions may be expressed through a variety of techno-communication channels, including online social networks, email and mobile phones (calls, text messages). Consequently, a clear grasp of human behavior through the diverse communication media is considered a key factor in understanding the formation of today's information society. So far, previous research on user communication behavior has focused on a single communication activity. In this paper we move forward another step on this research path by performing a multidimensional study of human sociality as an expression of the use of mobile phones. The paper focuses on users' temporal communication behavior in the interplay between the two complementary communication media, text messages and phone calls, which represent the bi-dimensional scenario of analysis. Our study provides a theoretical framework for analyzing multidimensional bursts as the most general burst category, which includes one-dimensional bursts as the simplest case, and offers empirical evidence of their nature by following the combined phone call/text message communication patterns of approximately one million people over a three-month period. This quantitative approach enables the design of a generative model, rooted in the three most significant features of the multidimensional burst - the number of dimensions, prevalence and interleaving degree - able to reproduce the main media usage attitudes. The other findings of the paper include a novel multidimensional burst detection algorithm and an in-depth analysis of the human media selection process.
Multidimensional stochastic approximation using locally contractive functions
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
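The Robbins-Monro scheme summarized above can be sketched in a few lines. The following is a minimal one-dimensional illustration (the paper's setting is multidimensional, with a mixture-of-normals application); the contractive function, step-size rule and noise level here are illustrative assumptions.

```python
import random

def robbins_monro(g, x0, n_steps=20000, noise=0.1, seed=0):
    """Robbins-Monro iteration x_{n+1} = x_n + a_n*(y_n - x_n), where
    y_n = g(x_n) + noise, seeking the fixed point x* = g(x*) of a
    contractive regression function g observed with noise."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, n_steps + 1):
        a_n = 1.0 / n                      # steps: sum a_n = inf, sum a_n^2 < inf
        y = g(x) + rng.gauss(0.0, noise)   # noisy observation of g at x
        x = x + a_n * (y - x)              # move toward the observed fixed point
    return x

# Example: g(x) = 0.5*x + 1 is contractive (slope 1/2); fixed point x* = 2.
x_star = robbins_monro(lambda x: 0.5 * x + 1.0, x0=10.0)
```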
Multidimensional human dynamics in mobile phone communications.
Directory of Open Access Journals (Sweden)
Christian Quadri
MCMC estimation of multidimensional IRT models
Beguin, Anton; Glas, Cornelis A.W.
1998-01-01
A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will
Multidimensional Data Model and Query Language for Informetrics.
Niemi, Timo; Hirvonen, Lasse; Jarvelin, Kalervo
2003-01-01
Discusses multidimensional data analysis, or online analytical processing (OLAP), which offers a single subject-oriented source for analyzing summary data based on various dimensions. Develops a conceptual/logical multidimensional model for supporting the needs of informetrics, including a multidimensional query language whose basic idea is to…
Progress in multidimensional neutron transport computation
International Nuclear Information System (INIS)
Lewis, E.E.
1977-01-01
The methods available for solution of the time-independent neutron transport problems arising in the analysis of nuclear systems are examined. The merits of deterministic and Monte Carlo methods are briefly compared. The capabilities of deterministic computational methods derived from the first-order form of the transport equation, from the second-order even-parity form of this equation, and from integral transport formulations are discussed in some detail. Emphasis is placed on the approaches for dealing with the related problems of computer memory requirements, computational cost, and achievable accuracy. Attention is directed to some areas where problems exist currently and where the need for further work appears to be particularly warranted
Deterministic quantum state transfer and remote entanglement using microwave photons.
Kurpiers, P; Magnard, P; Walter, T; Royer, B; Pechal, M; Heinsoo, J; Salathé, Y; Akin, A; Storz, S; Besse, J-C; Gasparinetti, S; Blais, A; Wallraff, A
2018-06-01
Sharing information coherently between nodes of a quantum network is fundamental to distributed quantum information processing. In this scheme, the computation is divided into subroutines and performed on several smaller quantum registers that are connected by classical and quantum channels [1]. A direct quantum channel, which connects nodes deterministically rather than probabilistically, achieves larger entanglement rates between nodes and is advantageous for distributed fault-tolerant quantum computation [2]. Here we implement deterministic state-transfer and entanglement protocols between two superconducting qubits fabricated on separate chips. Superconducting circuits [3] constitute a universal quantum node [4] that is capable of sending, receiving, storing and processing quantum information [5-8]. Our implementation is based on an all-microwave cavity-assisted Raman process [9], which entangles or transfers the qubit state of a transmon-type artificial atom [10] with a time-symmetric itinerant single photon. We transfer qubit states by absorbing these itinerant photons at the receiving node, with a probability of 98.1 ± 0.1 per cent, achieving a transfer-process fidelity of 80.02 ± 0.07 per cent for a protocol duration of only 180 nanoseconds. We also prepare remote entanglement on demand with a fidelity as high as 78.9 ± 0.1 per cent at a rate of 50 kilohertz. Our results are in excellent agreement with numerical simulations based on a master-equation description of the system. This deterministic protocol has the potential to be used for quantum computing distributed across different nodes of a cryogenic network.
Statistical methods of parameter estimation for deterministically chaotic time series
Pisarenko, V. F.; Sornette, D.
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for parameter estimation) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler and less biased than, the “multiple shooting” method proposed previously. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is also discussed. This appears to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
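As a toy illustration of parameter estimation for the logistic map, the sketch below fits the control parameter r by one-step conditional least squares, which has a closed form. This is the simple least-squares baseline mentioned in the abstract, not the authors' segmentation-fitting ML method, and the values of r, x1 and the sample size are illustrative assumptions.

```python
import random

def logistic_series(r, x1, n, noise=0.0, seed=1):
    """Iterate x_{k+1} = r*x_k*(1-x_k); return observations y_k = x_k + e_k."""
    rng = random.Random(seed)
    ys, x = [], x1
    for _ in range(n):
        ys.append(x + rng.gauss(0.0, noise))
        x = r * x * (1.0 - x)
    return ys

def fit_r_least_squares(y):
    """One-step conditional least squares: minimize sum (y_{k+1} - r*u_k)^2
    with u_k = y_k*(1-y_k); closed form r_hat = sum(y_{k+1}*u_k)/sum(u_k^2).
    For noisy data this estimator is biased -- one motivation for the ML
    approaches discussed in the abstract."""
    u = [yk * (1.0 - yk) for yk in y[:-1]]
    num = sum(ynext * uk for ynext, uk in zip(y[1:], u))
    den = sum(uk * uk for uk in u)
    return num / den

y = logistic_series(r=3.8, x1=0.3, n=2000, noise=0.0)  # noise-free check
r_hat = fit_r_least_squares(y)
```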
Comparison of probabilistic and deterministic fiber tracking of cranial nerves.
Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H
2017-09-01
OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided with this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, which were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p cranial nerves. Probabilistic tracking with a gradual
Narayanan, Kiran; Samtaney, Ravi
2018-04-01
We obtain numerical solutions of the two-fluid fluctuating compressible Navier-Stokes (FCNS) equations, which consistently account for thermal fluctuations from meso- to macroscales, in order to study the effect of such fluctuations on the mixing behavior in the Richtmyer-Meshkov instability (RMI). The numerical method used was successfully verified in two stages: for the deterministic fluxes, by comparison against an air-SF6 RMI experiment, and for the stochastic terms, by comparison against direct simulation Monte Carlo results for He-Ar RMI. We present results from fluctuating hydrodynamic RMI simulations for three He-Ar systems having length scales of decreasing order of magnitude that span from macroscopic to mesoscopic, with different levels of thermal fluctuations characterized by a nondimensional Boltzmann number (Bo). For a multidimensional FCNS system on a regular Cartesian grid with spatial interval h and time interval Δt, a discretization of the space-time stochastic flux Z(x,t) of the form Z(x,t) → (1/√(h³Δt)) N(ih, nΔt), with N a Gaussian noise, requires h to be greater than h₀, where h₀ corresponds to a cell volume that contains a sufficient number of molecules of the fluid such that the fluctuations are physically meaningful and produce the right equilibrium spectrum. For the mesoscale RMI systems simulated, it was desirable to use a cell size smaller than this limit in order to resolve the viscous shock. This was achieved by using a modified regularization of the noise term via Z(x,t) → (1/√(max(h³, h₀³)Δt)) N(ih, nΔt), with h₀ = ξh. For Bo ≪ 1, deterministic mixing behavior emerges as the ensemble-averaged behavior of several fluctuating instances, whereas when Bo ≈ 1, a deviation from deterministic behavior is observed. For all cases, the FCNS solution provides bounds on the growth rate of the amplitude of the mixing layer.
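A minimal sketch of the regularized noise amplitude described above. The 3-D cell volume h³ and the cap max(h³, h₀³) are a reconstruction from the abstract and should be checked against the paper; the function name and the demonstration values are illustrative assumptions.

```python
import math

def stochastic_flux_amplitude(h, dt, h0):
    """Amplitude multiplying the unit Gaussian N(ih, n*dt) in the discretized
    stochastic flux. Plain scheme: 1/sqrt(h^3 * dt), with h^3 the cell volume;
    the regularized scheme caps the amplitude's growth for cells finer than h0
    by replacing h^3 with max(h^3, h0^3)."""
    return 1.0 / math.sqrt(max(h ** 3, h0 ** 3) * dt)

# For h >= h0 the regularization is inactive; below h0 the amplitude saturates
# instead of diverging as the cell volume shrinks.
a_coarse = stochastic_flux_amplitude(2.0, 1.0, 1.0)   # 1/sqrt(8)
a_fine = stochastic_flux_amplitude(0.5, 1.0, 1.0)     # saturated at 1.0
```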
Deterministic nonlinear phase gates induced by a single qubit
Park, Kimin; Marek, Petr; Filip, Radim
2018-05-01
We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and only a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated faithfully in the harmonic oscillator by our method. We numerically analyze the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
CALTRANS: A parallel, deterministic, 3D neutronics code
Energy Technology Data Exchange (ETDEWEB)
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS, using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface), are provided in appendices.
MIMO capacity for deterministic channel models: sublinear growth
DEFF Research Database (Denmark)
Bentosela, Francois; Cornean, Horia; Marchetti, Nicola
2013-01-01
… In the current paper, we apply those results in order to study the (Shannon-Foschini) capacity behavior of a MIMO system as a function of the deterministic spread function of the environment and the number of transmitting and receiving antennas. The antennas are assumed to fill a given fixed volume. Under some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict a sublinear behavior.
Deterministic Single-Photon Source for Distributed Quantum Networking
International Nuclear Information System (INIS)
Kuhn, Axel; Hennrich, Markus; Rempe, Gerhard
2002-01-01
A sequence of single photons is emitted on demand from a single three-level atom strongly coupled to a high-finesse optical cavity. The photons are generated by an adiabatically driven stimulated Raman transition between two atomic ground states, with the vacuum field of the cavity stimulating one branch of the transition, and laser pulses deterministically driving the other branch. This process is unitary and therefore intrinsically reversible, which is essential for quantum communication and networking, and the photons should be appropriate for all-optical quantum information processing
On the progress towards probabilistic basis for deterministic codes
International Nuclear Information System (INIS)
Ellyin, F.
1975-01-01
Fundamental arguments for a probabilistic basis of codes are presented. A class of code formats is outlined in which explicit statistical measures of the uncertainty of design variables are incorporated. The format looks very much like that of present (deterministic) codes, except for having a probabilistic background. An example is provided whereby the design factors are plotted against the safety index, the probability of failure, and the risk of mortality. The safety level of the present codes is also indicated. A decision regarding the new probabilistically based code parameters could thus be made with full knowledge of the implied consequences.
The deterministic optical alignment of the HERMES spectrograph
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four-channel, VPH-grating spectrograph fed by two 400-fiber slit assemblies, whose construction and commissioning have now been completed at the Anglo-Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles by which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Enhanced deterministic phase retrieval using a partially developed speckle field
DEFF Research Database (Denmark)
Almoro, Percival F.; Waller, Laura; Agour, Mostafa
2012-01-01
A technique for enhanced deterministic phase retrieval using a partially developed speckle field (PDSF) and a spatial light modulator (SLM) is demonstrated experimentally. A smooth test wavefront impinges on a phase diffuser, forming a PDSF that is directed to a 4f setup. Two defocused speckle intensity measurements are recorded at the output plane, corresponding to axially-propagated representations of the PDSF in the input plane. The speckle intensity measurements are then used in a conventional transport of intensity equation (TIE) to reconstruct directly the test wavefront. The PDSF in our…
Deterministic and efficient quantum cryptography based on Bell's theorem
International Nuclear Information System (INIS)
Chen, Z.-B.; Zhang, Q.; Bao, X.-H.; Schmiedmayer, J.; Pan, J.-W.
2005-01-01
Full text: We propose a novel double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish a key bit with the help of classical communications. Eavesdropping can be detected by checking the violation of local realism for the detected events. We also show that our protocol allows a robust implementation under current technology. (author)
Use of deterministic methods in survey calculations for criticality problems
International Nuclear Information System (INIS)
Hutton, J.L.; Phenix, J.; Course, A.F.
1991-01-01
A code package using deterministic methods for solving the Boltzmann transport equation is the WIMS suite. This has been very successful for a range of situations. In particular, it has been used with great success to analyse trends in reactivity with a range of changes in state. The WIMS suite of codes offers a range of methods and is very flexible in the way these can be combined. A wide variety of situations can be modelled, ranging from all the current thermal reactor variants to storage systems and items of chemical plant. These methods have recently been enhanced by the introduction of the CACTUS method. This is based on a characteristics technique for solving the transport equation and has the advantage that complex geometrical situations can be treated. In this paper the basis of the method is outlined and examples of its use are illustrated. In parallel with these developments, the validation for out-of-pile situations has been extended to include experiments with relevance to criticality situations. The paper summarises this evidence and shows how these results point to a partial re-adoption of deterministic methods for some areas of criticality. The paper also presents results to illustrate the use of WIMS in criticality situations and, in particular, shows how it can complement codes such as MONK when used for surveying the reactivity effect due to changes in geometry or materials. (Author)
Strongly Deterministic Population Dynamics in Closed Microbial Communities
Directory of Open Access Journals (Sweden)
Zak Frentz
2015-10-01
Full Text Available Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES) as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of the three species. The observation of strongly deterministic dynamics, together with the stable structure of correlations in response to external perturbations, points towards the possibility of simple macroscopic laws governing microbial systems despite the numerous stochastic events present at microscopic levels.
Bayesian analysis of deterministic and stochastic prisoner's dilemma games
Directory of Open Access Journals (Sweden)
Howard Kunreuther
2009-08-01
Full Text Available This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD) game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not) about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. So the distribution of translocation times of a given monomer is controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation. We refer to this concept as “fingerprinting”. The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt_conf ∼ m^1.5, is stronger than the thermal broadening, δt_therm ∼ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.
Stochastic and deterministic causes of streamer branching in liquid dielectrics
International Nuclear Information System (INIS)
Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl
2013-01-01
Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make the branching inevitable depending on shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the streamer head propagating even in perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether the branching occurs under particular inhomogeneous circumstances. Estimated number, diameter, and velocity of the born branches agree qualitatively with experimental images of the streamer branching
Deterministic sensitivity analysis for the numerical simulation of contaminants transport
International Nuclear Information System (INIS)
Marchand, E.
2007-12-01
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
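The deterministic approach described above, based on the singular value decomposition of the model derivative, can be sketched on a toy model: build a finite-difference Jacobian and read off the singular values, whose magnitudes rank the input directions by their effect on the outputs. The toy model, the step size and the 2×2 closed-form SVD are illustrative assumptions, not ANDRA's flow/transport model.

```python
import math

def jacobian_fd(f, p, eps=1e-6):
    """Forward finite-difference Jacobian J[i][j] = df_i/dp_j of f at p."""
    f0 = f(p)
    J = [[0.0] * len(p) for _ in f0]
    for j in range(len(p)):
        q = list(p)
        q[j] += eps
        fj = f(q)
        for i in range(len(f0)):
            J[i][j] = (fj[i] - f0[i]) / eps
    return J

def singular_values_2x2(J):
    """Singular values of a 2x2 Jacobian via the eigenvalues of J^T J."""
    a, b = J[0]
    c, d = J[1]
    g11, g22, g12 = a * a + c * c, b * b + d * d, a * b + c * d
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return math.sqrt((tr + disc) / 2.0), math.sqrt(max((tr - disc) / 2.0, 0.0))

# Toy "model": outputs depend strongly on p[0], weakly on p[1], so the
# SVD identifies p[0] as the sensitive input direction.
model = lambda p: (10.0 * p[0] + 0.1 * p[1], 10.0 * p[0] - 0.1 * p[1])
s_max, s_min = singular_values_2x2(jacobian_fd(model, [1.0, 1.0]))
```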
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature in both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
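The two frameworks contrasted above can be illustrated without MATLAB. The sketch below integrates the reaction-rate ODE of a simple birth-death system and draws one realisation of its chemical master equation via the Gillespie algorithm; the reaction system and rate constants are illustrative assumptions, not a pathway from the paper.

```python
import random

def deterministic(k, gamma, x0, t_end, dt=0.001):
    """Euler integration of the reaction-rate ODE dx/dt = k - gamma*x
    for the birth-death system 0 ->(k) A, A ->(gamma) 0."""
    x, t = float(x0), 0.0
    while t < t_end:
        x += dt * (k - gamma * x)
        t += dt
    return x

def gillespie(k, gamma, x0, t_end, seed=0):
    """One Gillespie SSA realisation of the same birth-death CME."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while True:
        a_birth, a_death = k, gamma * x    # reaction propensities
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)      # time to the next reaction
        if t > t_end:
            return x
        if rng.random() < a_birth / a_total:
            x += 1                         # birth fired
        else:
            x -= 1                         # death fired

k, gamma = 10.0, 1.0                       # steady state k/gamma = 10 molecules
x_det = deterministic(k, gamma, x0=0, t_end=20.0)
x_ssa = gillespie(k, gamma, x0=0, t_end=20.0)
```

The ODE relaxes to the mean copy number; the SSA sample fluctuates around it, which is the qualitative difference between the two frameworks.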
A study of deterministic models for quantum mechanics
International Nuclear Information System (INIS)
Sutherland, R.
1980-01-01
A theoretical investigation is made into the difficulties encountered in constructing a deterministic model for quantum mechanics and into the restrictions that can be placed on the form of such a model. The various implications of the known impossibility proofs are examined. A possible explanation for the non-locality required by Bell's proof is suggested in terms of backward-in-time causality. The efficacy of the Kochen and Specker proof is brought into doubt by showing that there is a possible way of avoiding its implications in the only known physically realizable situation to which it applies. A new thought experiment is put forward to show that a particle's predetermined momentum and energy values cannot satisfy the laws of momentum and energy conservation without conflicting with the predictions of quantum mechanics. Attention is paid to a class of deterministic models for which the individual outcomes of measurements are not dependent on hidden variables associated with the measuring apparatus and for which the hidden variables of a particle do not need to be randomized after each measurement
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-03
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells to successfully and synchronously reprogram remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, result in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to establishing pluripotency at unprecedented flexibility and resolution.
Using MCBEND for neutron or gamma-ray deterministic calculations
Directory of Open Access Journals (Sweden)
Geoff Dobson
2017-01-01
Full Text Available MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler’s ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with Splitting/Russian Roulette. MCBEND has a well established automated tool to generate this importance map, commonly referred to as the MAGIC module using a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.
Using MCBEND for neutron or gamma-ray deterministic calculations
Geoff, Dobson; Adam, Bird; Brendan, Tollit; Paul, Smith
2017-09-01
MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with Splitting/Russian Roulette. MCBEND has a well established automated tool to generate this importance map, commonly referred to as the MAGIC module using a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
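The residual-reintroduction idea in this abstract can be illustrated with a minimal sketch (the linear model, data, and noise levels below are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" runoff driven by precipitation plus noise.
precip = rng.gamma(shape=2.0, scale=5.0, size=2000)
observed = 0.6 * precip + rng.normal(0.0, 2.0, size=2000)

# Calibrate a simple linear model; its output is the deterministic simulation.
slope, intercept = np.polyfit(precip, observed, 1)
deterministic = slope * precip + intercept          # simulated responses
residuals = observed - deterministic                # calibration residuals

# Reintroducing resampled residuals turns the deterministic output into
# a stochastic simulation whose spread matches the observations.
stochastic = deterministic + rng.choice(residuals, size=observed.size)
```

The deterministic output understates the variance of the observations, while the stochastic version recovers it, which is the distributional bias the paper documents.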
Shock-induced explosive chemistry in a deterministic sample configuration.
Energy Technology Data Exchange (ETDEWEB)
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Narayanan, Kiran
2018-04-19
We obtain numerical solutions of the two-fluid fluctuating compressible Navier-Stokes (FCNS) equations, which consistently account for thermal fluctuations from meso- to macroscales, in order to study the effect of such fluctuations on the mixing behavior in the Richtmyer-Meshkov instability (RMI). The numerical method used was successfully verified in two stages: for the deterministic fluxes by comparison against an air-SF6 RMI experiment, and for the stochastic terms by comparison against the direct simulation Monte Carlo results for He-Ar RMI. We present results from fluctuating hydrodynamic RMI simulations for three He-Ar systems having length scales with decreasing order of magnitude that span from macroscopic to mesoscopic, with different levels of thermal fluctuations characterized by a nondimensional Boltzmann number (Bo). For a multidimensional FCNS system on a regular Cartesian grid, when using a discretization of the space-time stochastic flux Z(x,t) of the form Z(x,t) → (1/√(h³Δt)) N(ih, nΔt), with spatial interval h, time interval Δt, and Gaussian noise N, the spatial interval h should be greater than h₀, with h₀ corresponding to a cell volume that contains a sufficient number of molecules of the fluid such that the fluctuations are physically meaningful and produce the right equilibrium spectrum. For the mesoscale RMI systems simulated, it was desirable to use a cell size smaller than this limit in order to resolve the viscous shock. This was achieved by using a modified regularization of the noise term via Z(x,t) → (1/√(max(h³, h₀³)Δt)) N(ih, nΔt), with h₀ = ξh
Narayanan, Kiran; Samtaney, Ravi
2018-01-01
We obtain numerical solutions of the two-fluid fluctuating compressible Navier-Stokes (FCNS) equations, which consistently account for thermal fluctuations from meso- to macroscales, in order to study the effect of such fluctuations on the mixing behavior in the Richtmyer-Meshkov instability (RMI). The numerical method used was successfully verified in two stages: for the deterministic fluxes by comparison against an air-SF6 RMI experiment, and for the stochastic terms by comparison against the direct simulation Monte Carlo results for He-Ar RMI. We present results from fluctuating hydrodynamic RMI simulations for three He-Ar systems having length scales with decreasing order of magnitude that span from macroscopic to mesoscopic, with different levels of thermal fluctuations characterized by a nondimensional Boltzmann number (Bo). For a multidimensional FCNS system on a regular Cartesian grid, when using a discretization of the space-time stochastic flux Z(x,t) of the form Z(x,t) → (1/√(h³Δt)) N(ih, nΔt), with spatial interval h, time interval Δt, and Gaussian noise N, the spatial interval h should be greater than h₀, with h₀ corresponding to a cell volume that contains a sufficient number of molecules of the fluid such that the fluctuations are physically meaningful and produce the right equilibrium spectrum. For the mesoscale RMI systems simulated, it was desirable to use a cell size smaller than this limit in order to resolve the viscous shock. This was achieved by using a modified regularization of the noise term via Z(x,t) → (1/√(max(h³, h₀³)Δt)) N(ih, nΔt), with h₀ = ξh
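The regularized noise amplitude described above can be sketched in a few lines (parameter values are illustrative; the square-root scaling follows the reconstructed formula Z → N/√(max(h³, h₀³)Δt)):

```python
import numpy as np

def flux_amplitude(h, dt, h0):
    """Amplitude of the discretized space-time stochastic flux,
    1/sqrt(max(h^3, h0^3) * dt): below the cutoff h0 the usual
    1/sqrt(h^3 * dt) scaling is frozen, so the noise stays physically
    meaningful even when the cell is refined to resolve the viscous shock."""
    return 1.0 / np.sqrt(max(h**3, h0**3) * dt)

def stochastic_flux(h, dt, h0, rng):
    # One Gaussian sample of the regularized flux at a grid point (ih, n*dt).
    return flux_amplitude(h, dt, h0) * rng.standard_normal()

rng = np.random.default_rng(1)
dt, h0 = 1e-3, 0.1
sample = stochastic_flux(0.05, dt, h0, rng)   # a cell below the cutoff
```

Above the cutoff the amplitude decays as h^(-3/2) with cell size; below it the amplitude saturates at its value at h = h₀.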
Multidimensional integral representations problems of analytic continuation
Kytmanov, Alexander M
2015-01-01
The monograph is devoted to integral representations for holomorphic functions in several complex variables, such as Bochner-Martinelli, Cauchy-Fantappiè, Koppelman, multidimensional logarithmic residue etc., and their boundary properties. The applications considered are problems of analytic continuation of functions from the boundary of a bounded domain in C^n. In contrast to the well-known Hartogs-Bochner theorem, this book investigates functions with the one-dimensional property of holomorphic extension along complex lines, and includes the problems of receiving multidimensional boundary analogs of the Morera theorem. This book is a valuable resource for specialists in complex analysis, theoretical physics, as well as graduate and postgraduate students with an understanding of standard university courses in complex, real and functional analysis, as well as algebra and geometry.
Applications of Convex Analysis to Multidimensional Scaling
Jan de Leeuw
2011-01-01
In this paper we discuss the convergence of an algorithm for metric and nonmetric multidimensional scaling that is very similar to the C-matrix algorithm of Guttman. The paper improves some earlier results in two respects. In the first place the analysis is extended to cover general Minkovski metrics, in the second place a more elementary proof of convergence based on results of Robert is presented.
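The majorization algorithm the paper analyzes is closely related to what is now called SMACOF; a minimal sketch of the Guttman transform for unweighted raw stress (synthetic data, Euclidean metric only, not the general Minkowski case the paper covers) is:

```python
import numpy as np

def stress(X, D):
    """Raw stress: squared mismatch between target dissimilarities D and
    the pairwise Euclidean distances of the configuration X."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return ((D - d) ** 2).sum() / 2.0

def guttman_transform(X, D):
    """One majorization step for unweighted raw stress over n points."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        B = np.where(d > 0, -D / d, 0.0)      # off-diagonal entries of B(X)
    B[np.arange(n), np.arange(n)] = -B.sum(axis=1)  # zero row sums
    return B @ X / n

rng = np.random.default_rng(2)
P = rng.normal(size=(10, 2))                            # ground-truth points
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)    # target dissimilarities

X0 = rng.normal(size=(10, 2))                           # random start
X, history = X0, [stress(X0, D)]
for _ in range(100):
    X = guttman_transform(X, D)
    history.append(stress(X, D))
```

The convergence result discussed in the paper shows up here as a monotonically non-increasing stress sequence.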
Multidimensional Scaling Visualization using Parametric Similarity Indices
Machado, J. A. Tenreiro; Lopes, António M.; Galhano, A.M.
2015-01-01
In this paper, we apply multidimensional scaling (MDS) and parametric similarity indices (PSI) in the analysis of complex systems (CS). Each CS is viewed as a dynamical system, exhibiting an output time-series to be interpreted as a manifestation of its behavior. We start by adopting a sliding window to sample the original data into several consecutive time periods. Second, we define a given PSI for tracking pieces of data. We then compare the windows for different values of the parameter, an...
Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide (Russian Edition)
International Nuclear Information System (INIS)
2014-01-01
The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References
International Nuclear Information System (INIS)
Bor-Jing Chang; Yen-Wan H. Liu
1992-01-01
The HYBRID, or mixed group and point, method was developed to solve the neutron transport equation deterministically using detailed treatment at cross section minima for deep penetration calculations. Its application so far is limited to one-dimensional calculations due to the enormous computing time involved in multi-dimensional calculations. In this article, a collapsing method is developed for the mixed group and point cross section sets to provide a more direct and practical way of using the HYBRID method in multi-dimensional calculations. A test problem is run. The method is then applied to the calculation of a deep penetration benchmark experiment. It is observed that half of the window effect is smeared by the collapsing treatment, but it still provides a better cross section set than the VITAMIN-C cross sections for deep penetration calculations
Frost Multidimensional Perfectionism Scale: the portuguese version
Directory of Open Access Journals (Sweden)
Ana Paula Monteiro Amaral
2013-01-01
Full Text Available BACKGROUND: The Frost Multidimensional Perfectionism Scale is one of the most widely used measures of perfectionism worldwide. OBJECTIVE: To analyze the psychometric properties of the Portuguese version of the Frost Multidimensional Perfectionism Scale. METHODS: Two hundred and seventeen students (178 female) from two Portuguese universities completed the scale, and a subgroup (n = 166) completed a retest after a four-week interval. RESULTS: The scale reliability was good (Cronbach alpha = .857). Corrected item-total correlations ranged from .019 to .548. Test-retest reliability suggested good temporal stability, with a test-retest correlation of .765. A principal component analysis with Varimax rotation was performed and, based on the scree plot, two robust factorial structures were found (four and six factors). The principal component analyses, using Monte Carlo PCA for parallel analysis, confirmed the six-factor solution. Concurrent validity with the Hewitt and Flett MPS was high, as was the discriminant validity of positive and negative affect (Profile of Mood States, POMS). DISCUSSION: The two factorial structures (of four and six dimensions) of the Portuguese version of the Frost Multidimensional Perfectionism Scale replicate results from different authors, samples and cultures. This suggests the scale is a robust instrument for assessing perfectionism in clinical and research settings as well as in transcultural studies.
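For readers unfamiliar with the reliability statistic reported above, here is a minimal sketch of Cronbach's alpha on synthetic item scores (the data are invented; only the formula follows standard usage):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(3)
trait = rng.normal(size=500)
# Five items sharing a common trait plus independent noise -> high alpha.
consistent = trait[:, None] + 0.5 * rng.normal(size=(500, 5))
# Five unrelated items -> alpha near zero.
unrelated = rng.normal(size=(500, 5))

a_good = cronbach_alpha(consistent)
a_bad = cronbach_alpha(unrelated)
```

Internally consistent items yield alpha close to 1, as in the scale's reported .857.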
On the implementation of a deterministic secure coding protocol using polarization entangled photons
Ostermeyer, Martin; Walenta, Nino
2007-01-01
We demonstrate a prototype-implementation of deterministic information encoding for quantum key distribution (QKD) following the ping-pong coding protocol [K. Bostroem, T. Felbinger, Phys. Rev. Lett. 89 (2002) 187902-1]. Due to the deterministic nature of this protocol the need for post-processing the key is distinctly reduced compared to non-deterministic protocols. In the course of our implementation we analyze the practicability of the protocol and discuss some security aspects of informat...
Classification and unification of the microscopic deterministic traffic models.
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
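A minimal sketch of the optimal velocity (OV) model family discussed above, integrated from a uniform-flow ground state on a ring road (the tanh-shaped OV function and all parameters are illustrative choices, not the paper's):

```python
import numpy as np

def ov_step(x, v, dt, a=1.0, L=100.0):
    """One Euler step of the optimal velocity model on a ring of length L:
    dv_i/dt = a * (V(headway_i) - v_i), with a tanh-shaped OV function V."""
    headway = np.roll(x, -1) - x
    headway[-1] += L                                   # periodic boundary
    V = 0.5 * (np.tanh(headway - 2.0) + np.tanh(2.0))  # optimal velocity
    return x + dt * v, v + dt * a * (V - v)

n, L = 20, 100.0
x = np.linspace(0.0, L, n, endpoint=False)  # uniform spacing: a ground state
v = np.zeros(n)
for _ in range(2000):
    x, v = ov_step(x, v, 0.05)
```

Started exactly in a ground state, every car relaxes to the same optimal velocity V(L/n) and the uniform spacing is preserved, which is the kind of well-defined ground state the paper expands around.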
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko
2011-03-17
We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production, and a large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism and explicitly accounts for advection of pollutants using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem, as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
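The advection equation that the statistical model builds on can be sketched with a first-order upwind discretization (a generic numerical illustration, not the paper's Bayesian machinery; grid and blob parameters are invented):

```python
import numpy as np

def upwind_advect(c, u, dx, dt, steps):
    """First-order upwind update for dc/dt + u * dc/dx = 0 with u > 0 on a
    periodic domain; the CFL number u*dt/dx must be <= 1 for stability."""
    nu = u * dt / dx
    assert nu <= 1.0
    for _ in range(steps):
        c = c - nu * (c - np.roll(c, 1))
    return c

n, u = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / u                       # CFL number 0.5
x = np.arange(n) * dx
c0 = np.exp(-((x - 0.5) / 0.1) ** 2)    # a pollutant blob centered at x = 0.5
c1 = upwind_advect(c0, u, dx, dt, steps=100)  # transported by u*dt*100 = 0.25
```

The scheme conserves total mass exactly and transports the blob at the wind speed, at the cost of some numerical diffusion.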
International Nuclear Information System (INIS)
Zio, Enrico
2014-01-01
Highlights: • IDPSA contributes to robust risk-informed decision making in nuclear safety. • IDPSA considers time-dependent interactions among component failures and system process. • Also, IDPSA considers time-dependent interactions among control and operator actions. • Computational efficiency by advanced Monte Carlo and meta-modelling simulations. • Efficient post-processing of IDPSA output by clustering and data mining. - Abstract: Integrated deterministic and probabilistic safety assessment (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, like nuclear, aerospace and process ones, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these and discuss the related implications in terms of research perspectives
Minaret, a deterministic neutron transport solver for nuclear core calculations
International Nuclear Information System (INIS)
Moller, J-Y.; Lautard, J-J.
2011-01-01
We present here MINARET a deterministic transport solver for nuclear core calculations to solve the steady state Boltzmann equation. The code follows the multi-group formalism to discretize the energy variable. It uses discrete ordinate method to deal with the angular variable and a DGFEM to solve spatially the Boltzmann equation. The mesh is unstructured in 2D and semi-unstructured in 3D (cylindrical). Curved triangles can be used to fit the exact geometry. For the curved elements, two different sets of basis functions can be used. Transport solver is accelerated with a DSA method. Diffusion and SPN calculations are made possible by skipping the transport sweep in the source iteration. The transport calculations are parallelized with respect to the angular directions. Numerical results are presented for simple geometries and for the C5G7 Benchmark, JHR reactor and the ESFR (in 2D and 3D). Straight and curved finite element results are compared. (author)
Analysis of deterministic cyclic gene regulatory network models with delays
Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian
2015-01-01
This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays and a special case of a homogenous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.
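A minimal sketch of the kind of Hill-function GRN dynamics the brief analyzes, here a one-gene negative-feedback loop without the time delay (parameters are chosen so the fixed point is exactly x = 1; the delayed versions studied in the book can oscillate):

```python
def hill_repression(x, K=1.0, n=4):
    """Repressive Hill function: high protein level x shuts the gene off."""
    return K**n / (K**n + x**n)

def simulate_grn(T=200.0, dt=0.01, beta=2.0, gamma=1.0):
    """Euler integration of dx/dt = beta * H(x) - gamma * x.
    With beta = 2, gamma = 1, K = 1, n = 4 the steady state solves
    x * (1 + x^4) = 2, i.e. x = 1."""
    x = 0.0
    for _ in range(int(T / dt)):
        x += dt * (beta * hill_repression(x) - gamma * x)
    return x

x_star = simulate_grn()
```

Without delay the negative-feedback loop settles monotonically onto its unique equilibrium; adding a transcriptional delay is what opens the door to the oscillations analyzed in the book.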
Distributed Design of a Central Service to Ensure Deterministic Behavior
Directory of Open Access Journals (Sweden)
Imran Ali Jokhio
2012-10-01
Full Text Available A central authentication service for the EPC (Electronic Product Code) system architecture was proposed in our previous work. A recurring challenge for any central service is ensuring a bounded delay while processing emergent data. In the EPC system architecture, the growing data are tag data. We therefore investigate authenticating an increasing number of tags in the central authentication service with a deterministic response time, and design a distributed authentication service in a layered approach. A distributed design of tag searching services in the SOA (Service Oriented Architecture) style is also presented. Using the SOA architectural style, a self-adaptive authentication service over the Cloud is also proposed for the central authentication service, which may be extended to other applications.
Deterministic Evolutionary Trajectories Influence Primary Tumor Growth: TRACERx Renal
DEFF Research Database (Denmark)
Turajlic, Samra; Xu, Hang; Litchfield, Kevin
2018-01-01
The evolutionary features of clear-cell renal cell carcinoma (ccRCC) have not been systematically studied to date. We analyzed 1,206 primary tumor regions from 101 patients recruited into the multi-center prospective study TRACERx Renal. We observe up to 30 driver events per tumor and show that subclonal diversification is associated with known prognostic parameters. By resolving the patterns of driver event ordering, co-occurrence, and mutual exclusivity at clone level, we show the deterministic nature of clonal evolution. ccRCC can be grouped into seven evolutionary subtypes, ranging from tumors ... outcome. Our insights reconcile the variable clinical behavior of ccRCC and suggest evolutionary potential as a biomarker for both intervention and surveillance.
Minaret, a deterministic neutron transport solver for nuclear core calculations
Energy Technology Data Exchange (ETDEWEB)
Moller, J-Y.; Lautard, J-J., E-mail: jean-yves.moller@cea.fr, E-mail: jean-jacques.lautard@cea.fr [CEA - Centre de Saclay , Gif sur Yvette (France)
2011-07-01
We present here MINARET a deterministic transport solver for nuclear core calculations to solve the steady state Boltzmann equation. The code follows the multi-group formalism to discretize the energy variable. It uses discrete ordinate method to deal with the angular variable and a DGFEM to solve spatially the Boltzmann equation. The mesh is unstructured in 2D and semi-unstructured in 3D (cylindrical). Curved triangles can be used to fit the exact geometry. For the curved elements, two different sets of basis functions can be used. Transport solver is accelerated with a DSA method. Diffusion and SPN calculations are made possible by skipping the transport sweep in the source iteration. The transport calculations are parallelized with respect to the angular directions. Numerical results are presented for simple geometries and for the C5G7 Benchmark, JHR reactor and the ESFR (in 2D and 3D). Straight and curved finite element results are compared. (author)
Molecular dynamics with deterministic and stochastic numerical methods
Leimkuhler, Ben
2015-01-01
This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
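One of the Langevin integrators covered by this literature is the BAOAB splitting; here is a minimal sketch for a single harmonic degree of freedom (units with kT = m = 1; an illustrative configuration, not code from the book):

```python
import numpy as np

def baoab_step(q, p, dt, force, gamma=1.0, kT=1.0, m=1.0, rng=None):
    """One BAOAB step of Langevin dynamics: kick (B), drift (A),
    exact Ornstein-Uhlenbeck noise update (O), drift (A), kick (B)."""
    p += 0.5 * dt * force(q)                       # B
    q += 0.5 * dt * p / m                          # A
    c = np.exp(-gamma * dt)                        # O: exact OU solution
    p = c * p + np.sqrt((1 - c**2) * m * kT) * rng.standard_normal()
    q += 0.5 * dt * p / m                          # A
    p += 0.5 * dt * force(q)                       # B
    return q, p

rng = np.random.default_rng(4)
force = lambda q: -q                               # harmonic well, U = q^2 / 2
q, p = 0.0, 0.0
samples = []
for i in range(200000):
    q, p = baoab_step(q, p, 0.1, force, rng=rng)
    if i > 10000:                                  # discard equilibration
        samples.append(q)
samples = np.asarray(samples)
```

For the harmonic well the configurational distribution should approach the Gibbs density with variance kT, and BAOAB is notable for its small configurational sampling bias at moderate step sizes.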
HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks
Directory of Open Access Journals (Sweden)
Luca Marchetti
2017-01-01
Full Text Available HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementation of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA. HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA.
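The stochastic side of such simulators is typically Gillespie's direct-method SSA; here is a minimal sketch for a single decay reaction X → ∅ (this is not HSimulator's Java API; the rate and population are illustrative):

```python
import numpy as np

def ssa_decay(x0, k, t_end, rng):
    """Gillespie direct-method SSA for the decay reaction X -> 0
    with mass-action propensity a(x) = k * x."""
    t, x = 0.0, x0
    while x > 0:
        a = k * x
        t += rng.exponential(1.0 / a)   # waiting time to the next firing
        if t > t_end:
            break
        x -= 1                          # fire the reaction
    return x

rng = np.random.default_rng(5)
k, t_end = 1.0, 1.0
finals = [ssa_decay(1000, k, t_end, rng) for _ in range(200)]
mean_final = float(np.mean(finals))
# The deterministic mass-action ODE gives x(t) = x0 * exp(-k * t).
```

Averaged over runs, the exact stochastic simulation agrees with the deterministic ODE solution, which is the consistency a hybrid simulator exploits when it partitions reactions into fast (deterministic) and slow (stochastic) subsets.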
Deterministic secure communications using two-mode squeezed states
International Nuclear Information System (INIS)
Marino, Alberto M.; Stroud, C. R. Jr.
2006-01-01
We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state
Deterministically entangling multiple remote quantum memories inside an optical cavity
Yan, Zhihui; Liu, Yanhong; Yan, Jieli; Jia, Xiaojun
2018-01-01
Quantum memory for nonclassical states of light and entanglement among multiple remote quantum nodes hold promise for a large-scale quantum network; however, continuous-variable (CV) memory efficiency and the degree of entanglement are limited by imperfect implementations. Here we propose a scheme to deterministically entangle multiple distant atomic ensembles based on CV cavity-enhanced quantum memory. The memory efficiency can be improved with the help of cavity-enhanced electromagnetically induced transparency dynamics. A high degree of entanglement among multiple atomic ensembles can be obtained by mapping the quantum state from multiple entangled optical modes into a collection of atomic spin waves inside optical cavities. Besides being of interest in terms of unconditional entanglement among multiple macroscopic objects, our scheme paves the way towards the practical application of quantum networks.
Energy Technology Data Exchange (ETDEWEB)
Zio, Enrico, E-mail: enrico.zio@ecp.fr [Ecole Centrale Paris and Supelec, Chair on System Science and the Energetic Challenge, European Foundation for New Energy – Electricite de France (EDF), Grande Voie des Vignes, 92295 Chatenay-Malabry Cedex (France); Dipartimento di Energia, Politecnico di Milano, Via Ponzio 34/3, 20133 Milano (Italy)
2014-12-15
Highlights: • IDPSA contributes to robust risk-informed decision making in nuclear safety. • IDPSA considers time-dependent interactions among component failures and system process. • Also, IDPSA considers time-dependent interactions among control and operator actions. • Computational efficiency by advanced Monte Carlo and meta-modelling simulations. • Efficient post-processing of IDPSA output by clustering and data mining. - Abstract: Integrated deterministic and probabilistic safety assessment (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, like nuclear, aerospace and process ones, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these and discuss the related implications in terms of research perspectives.
A deterministic model of nettle caterpillar life cycle
Syukriyah, Y.; Nuraini, N.; Handayani, D.
2018-03-01
Palm oil is a flagship product of the plantation sector in Indonesia. Palm oil productivity has the potential to increase every year; however, actual productivity remains below this potential. Pests and diseases are the main factors that can reduce production levels, by up to 40%. The presence of pests can be caused by various factors, so measures for controlling pest attacks should be prepared as early as possible. Caterpillars are the main pests in oil palm, and the nettle caterpillars are leaf eaters that can significantly decrease palm productivity. We construct a deterministic model that describes the life cycle of the caterpillar and its mitigation by a caterpillar predator. The equilibrium points of the model are analyzed. Numerical simulations are constructed to show how the predator, as a natural enemy, affects the nettle caterpillar life cycle.
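A minimal sketch of a deterministic pest-predator model of the general kind described (logistic pest growth plus mass-action predation; the structure and all parameters are illustrative guesses, not the paper's equations):

```python
def step(c, p, dt, r=0.5, K=100.0, a=0.02, e=0.3, m=0.2):
    """One Euler step: caterpillars c grow logistically toward capacity K
    and are eaten at rate a*c*p; predators p convert prey with efficiency e
    and die at rate m. Interior equilibrium: c* = m/(e*a), p* = (r/a)(1 - c*/K)."""
    dc = r * c * (1 - c / K) - a * c * p
    dp = e * a * c * p - m * p
    return c + dt * dc, p + dt * dp

c, p = 80.0, 1.0          # pest outbreak, a few predators released
c_no = 80.0               # control case: no predator at all
for _ in range(40000):    # integrate to t = 400
    c, p = step(c, p, 0.01)
    c_no, _ = step(c_no, 0.0, 0.01)
```

Without the predator the pest saturates at carrying capacity; with the predator the system spirals into a coexistence equilibrium with a much lower pest level, which is the mitigation effect the abstract describes.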
Location deterministic biosensing from quantum-dot-nanowire assemblies
International Nuclear Information System (INIS)
Liu, Chao; Kim, Kwanoh; Fan, D. L.
2014-01-01
Semiconductor quantum dots (QDs), with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes onto the QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity in addition to predictable positioning. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices.
Absorbing phase transitions in deterministic fixed-energy sandpile models
Park, Su-Chan
2018-03-01
We investigate the origin of the difference, which was noticed by Fey et al. [Phys. Rev. Lett. 104, 145703 (2010), 10.1103/PhysRevLett.104.145703], between the steady state density of an Abelian sandpile model (ASM) and the transition point of its corresponding deterministic fixed-energy sandpile model (DFES). Because its dynamics are deterministic, the configuration space of a DFES can be divided into two disjoint classes such that every configuration in one class should evolve into one of the absorbing states, whereas no configuration in the other class can reach an absorbing state. Since the two classes are separated in terms of toppling dynamics, the system can be made to exhibit an absorbing phase transition (APT) at various points that depend on the initial probability distribution of the configurations. Furthermore, we show that in general the transition point also depends on whether the infinite-size limit is taken before or after the infinite-time limit. To demonstrate, we numerically study the two-dimensional DFES with the Bak-Tang-Wiesenfeld toppling rule (BTW-FES). We confirm that there are indeed many thresholds. Nonetheless, the critical phenomena at the various transition points are found to be universal. We furthermore discuss a microscopic absorbing phase transition, or a so-called spreading dynamics, of the BTW-FES, and find that the phase transition in this setting is related to the dynamical isotropic percolation process rather than self-organized criticality. In particular, we argue that choosing recurrent configurations of the corresponding ASM as an initial configuration does not allow for a nontrivial APT in the DFES.
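The BTW toppling rule on a fixed-energy (grain-conserving) lattice can be sketched directly. A minimal simulation, assuming parallel deterministic updates on a periodic lattice (lattice size and initial condition are illustrative):

```python
import numpy as np

def relax(z, max_sweeps=10000):
    """Deterministic parallel BTW toppling on a periodic L x L lattice.
    All sites with z >= 4 topple simultaneously, sending one grain to
    each of the four neighbours; the total grain number is conserved."""
    sweeps = 0
    while sweeps < max_sweeps:
        active = z >= 4
        if not active.any():
            return z, sweeps, True          # reached an absorbing state
        t = active.astype(int)
        z = (z - 4 * t
             + np.roll(t, 1, 0) + np.roll(t, -1, 0)
             + np.roll(t, 1, 1) + np.roll(t, -1, 1))
        sweeps += 1
    return z, sweeps, False                 # still active after max_sweeps

# One tall pile on an empty lattice topples twice, then absorbs.
z0 = np.zeros((16, 16), dtype=int)
z0[8, 8] = 8
z_final, sweeps, absorbed = relax(z0.copy())
```

Grain conservation is exact by construction, which is the defining feature separating fixed-energy sandpiles from the driven-dissipative ASM.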
Realization of deterministic quantum teleportation with solid state qubits
International Nuclear Information System (INIS)
Andreas Wallraff
2014-01-01
Using modern micro- and nano-fabrication techniques combined with superconducting materials, we realize electronic circuits whose dynamics are governed by the laws of quantum mechanics. Making use of the strong interaction of photons with superconducting quantum two-level systems realized in these circuits, we investigate both fundamental quantum effects of light and applications in quantum information processing. In this talk I will discuss the deterministic teleportation of a quantum state in a macroscopic quantum system. Teleportation may be used for distributing entanglement between distant qubits in a quantum network and for realizing universal and fault-tolerant quantum computation. Previously, we have demonstrated the implementation of a teleportation protocol, up to the single-shot measurement step, with three superconducting qubits coupled to a single microwave resonator. Using full quantum state tomography and calculating the projection of the measured density matrix onto the basis of two qubits has allowed us to reconstruct the teleported state with an average output state fidelity of 86%. Now we have realized a new device in which four qubits are coupled pair-wise to three resonators. Making use of parametric amplifiers coupled to the output of two of the resonators, we are able to perform high-fidelity single-shot read-out. This has allowed us to demonstrate teleportation by individually post-selecting on any Bell state and by deterministically distinguishing between all four Bell states measured by the sender. In addition, we have recently implemented fast feed-forward to complete the teleportation process. In all instances, we demonstrate that the fidelity of the teleported states is above the threshold imposed by classical physics. The presented experiments are expected to contribute towards realizing quantum communication with microwave photons in the foreseeable future. (author)
Measures of thermodynamic irreversibility in deterministic and stochastic dynamics
International Nuclear Information System (INIS)
Ford, Ian J
2015-01-01
It is generally observed that if a dynamical system is sufficiently complex, then as time progresses it will share out energy and other properties amongst its component parts to eliminate any initial imbalances, retaining only fluctuations. This is known as energy dissipation and it is closely associated with the concept of thermodynamic irreversibility, measured by the increase in entropy according to the second law. It is of interest to quantify such behaviour from a dynamical rather than a thermodynamic perspective and to this end stochastic entropy production and the time-integrated dissipation function have been introduced as analogous measures of irreversibility, principally for stochastic and deterministic dynamics, respectively. We seek to compare these measures. First we modify the dissipation function to allow it to measure irreversibility in situations where the initial probability density function (pdf) of the system is asymmetric as well as symmetric in velocity. We propose that it tests for failure of what we call the obversibility of the system, to be contrasted with reversibility, the failure of which is assessed by stochastic entropy production. We note that the essential difference between stochastic entropy production and the time-integrated modified dissipation function lies in the sequence of procedures undertaken in the associated tests of irreversibility. We argue that an assumed symmetry of the initial pdf with respect to velocity inversion (within a framework of deterministic dynamics) can be incompatible with the Past Hypothesis, according to which there should be a statistical distinction between the behaviour of certain properties of an isolated system as it evolves into the far future and the remote past. Imposing symmetry on a velocity distribution is acceptable for many applications of statistical physics, but can introduce difficulties when discussing irreversible behaviour. (paper)
Deterministic Earthquake Hazard Assessment by Public Agencies in California
Mualchin, L.
2005-12-01
Even in its short recorded history, California has experienced a number of damaging earthquakes that have resulted in new codes and other legislation for public safety. In particular, the 1971 San Fernando earthquake produced some of the most lasting results, such as the Hospital Safety Act, the Strong Motion Instrumentation Program, the Alquist-Priolo Special Studies Zone Act, and the California Department of Transportation's (Caltrans') fault-based deterministic seismic hazard (DSH) map. The latter product provides values for earthquake ground motions based on Maximum Credible Earthquakes (MCEs), defined as the largest earthquakes that can reasonably be expected on faults in the current tectonic regime. For surface fault rupture displacement hazards, detailed study of the same faults applies. Originally, hospitals, dams, and other critical facilities used seismic design criteria based on deterministic seismic hazard analyses (DSHA). However, probabilistic methods grew and took hold by introducing earthquake design criteria based on time factors and by quantifying "uncertainties" through procedures such as logic trees. These probabilistic seismic hazard analyses (PSHA) ignored the DSH approach, and some agencies were influenced to adopt only the PSHA method. However, deficiencies in the PSHA method are becoming recognized, and the use of the method is now the focus of strong debate. Caltrans is in the process of producing the fourth edition of its DSH map. The reason for preferring the DSH method is that Caltrans believes it is more realistic than the probabilistic method for assessing earthquake hazards that may affect critical facilities, and is the best available method for ensuring public safety. Its time-invariant values help to produce robust design criteria that are soundly based on physical evidence, and it is the method with the least opportunity for unwelcome surprises.
Deterministic calculations of radiation doses from brachytherapy seeds
International Nuclear Information System (INIS)
Reis, Sergio Carneiro dos; Vasconcelos, Vanderley de; Santos, Ana Maria Matildes dos
2009-01-01
Brachytherapy is used for treating certain types of cancer by inserting radioactive sources into tumours. CDTN/CNEN is developing brachytherapy seeds to be used mainly in prostate cancer treatment. Dose calculations play a very significant role in the characterization of the developed seeds. The current state of the art in computational dosimetry relies on Monte Carlo methods using, for instance, MCNP codes. However, deterministic calculations have some advantages, such as short computing times. This paper presents software developed to calculate doses in a two-dimensional space surrounding the seed, using a deterministic algorithm. The analysed seeds consist of capsules similar to IMC6711 (OncoSeed), which are commercially available. The exposure rates and absorbed doses are computed using the Sievert integral and the Meisberger third-order polynomial, respectively. The software also allows isodose visualization on the surface plane. The user can choose between four different radionuclides (192Ir, 198Au, 137Cs and 60Co) and must also enter as input data: the exposure rate constant; the source activity; the active length of the source; the number of segments into which the source will be divided; the total source length; the source diameter; and the actual and effective source thickness. The computed results were benchmarked against results from the literature, and the developed software will be used to support the characterization process of the source being developed at CDTN. The software was implemented using Borland Delphi in a Windows environment and is an alternative to Monte Carlo based codes. (author)
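The Sievert integral for a filtered line source can be evaluated by simple quadrature. A sketch assuming the standard form of the integral (function name, parameters, and the midpoint rule are illustrative choices, not the authors' Delphi code):

```python
import math

def sievert_exposure_rate(gamma, activity, L, h, mu, t, n=200):
    """Evaluate the Sievert integral for a line source of active
    length L (cm) behind a filter of thickness t (cm), at perpendicular
    distance h (cm) from the source midpoint:

        X = (gamma * A / (L * h)) * INT exp(-mu * t / cos(theta)) dtheta

    integrated over the angle subtended by the source (midpoint rule)."""
    theta_max = math.atan((L / 2) / h)
    dtheta = 2 * theta_max / n
    s = 0.0
    for i in range(n):
        theta = -theta_max + (i + 0.5) * dtheta
        s += math.exp(-mu * t / math.cos(theta)) * dtheta
    return gamma * activity / (L * h) * s

# Sanity check: with no filtration (t = 0) the integrand is 1, so the
# integral reduces to the subtended angle 2*atan(L/(2h)) exactly.
x_unfiltered = sievert_exposure_rate(1.0, 1.0, 2.0, 1.0, 0.5, 0.0)
```

Segmenting the source and summing per-segment contributions, as the abstract describes, is equivalent to this quadrature with n equal to the number of segments.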
Asinari, Pietro
2010-10-01
Programming language: Tested with Matlab version ⩽6.5; however, in principle, any recent version of Matlab or Octave should work. Computer: All supporting Matlab or Octave. Operating system: All supporting Matlab or Octave. RAM: 300 MBytes. Classification: 23. Nature of problem: The problem consists in integrating the homogeneous Boltzmann equation for a generic collisional kernel in the case of isotropic symmetry, by a deterministic direct method. Difficulties arise from the multi-dimensionality of the collisional operator and from satisfying the conservation of particle number and energy (momentum is trivial for this test case) as accurately as possible, in order to preserve the late dynamics. Solution method: The solution is based on the method proposed by Aristov (2001) [1], but with two substantial improvements: (a) the original problem is reformulated in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy the conservation laws exactly at the macroscopic level, which is particularly important for describing the late dynamics of the relaxation towards equilibrium). Both corrections make it possible to derive very accurate reference solutions for this test case. Restrictions: The nonlinear Boltzmann equation is extremely challenging from the computational point of view, in particular for deterministic methods, despite the increased computational power of recent hardware. In this work, only the homogeneous isotropic case is considered, making possible the development of a minimal program (in a simple scripting language) and allowing the user to check the advantages of the proposed improvements beyond Aristov's (2001) method [1].
The initial conditions are supposed parameterized according to a fixed analytical expression, but this can be
The emergence and evolution of the multidimensional organization
Strikwerda, J.; Stoelhorst, J.W.
2009-01-01
The article discusses multidimensional organizations and the evolution of complex organizations. The six characteristics of multidimensional organizations, disadvantages of the successful organizational structure that is categorized as a multidivisional, multi-unit or M-form, research by the Foundation for Management Studies which suggests that synergies across business divisions can be exploited by the M-form, a team approach to creating economic value, examples of multidimensional firms suc...
An Improved Multidimensional MPA Procedure for Bidirectional Earthquake Excitations
Wang, Feng; Sun, Jian-Gang; Zhang, Ning
2014-01-01
Presently, the modal pushover analysis procedure is extended to multidimensional analysis of structures subjected to multidimensional earthquake excitations. An improved multidimensional modal pushover analysis (IMMPA) method is presented in the paper in order to estimate the response demands of structures subjected to bidirectional earthquake excitations, in which the unidirectional earthquake excitation applied on the equivalent SDOF system is replaced by the direct superposition of two compone...
Multidimensional Risk Management for Underground Electricity Networks
Directory of Open Access Journals (Sweden)
Garcez Thalles V.
2014-08-01
Full Text Available In the paper we consider an electricity provider company that makes decisions on allocating resources for electric network maintenance. The investments decrease the malfunction rate of network nodes. An accidental event (explosion, fire, etc.) or a malfunction in an underground system can have various consequences from different perspectives, such as deaths and injuries of pedestrians, fires in nearby locations, disturbances in the flow of vehicular traffic, loss to the company image, and operating and financial losses. For this reason it is necessary to apply a risk management approach that considers the multidimensional view of the consequences. Furthermore, the analysis of decision making should consider network dependencies between the nodes of the electricity distribution system. In the paper we propose the use of simulation to assess the network effects (such as the increased probability of further accidental events and the occurrence of blackouts of dependent nodes) in the multidimensional risk assessment of an electricity grid. The analyzed effects include node overloading due to malfunction of adjacent nodes and blackouts that take place where there is temporarily no path in the grid between the power plant and a node. The simulation results show that network effects play a crucial role in decisions on network maintenance: the outcome of a decision to repair a particular node can have significant influence on the performance of other nodes. However, those dependencies are non-linear. The effects of network connectivity (number of connections between nodes) on the multidimensional performance assessment depend heavily on the level of the overloading effect. The simulation results do not depend on the type of network structure (random or small-world); however, simulation outcomes for random networks have shown higher variance compared to small-world networks.
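The blackout effect described above (a node loses supply when there is no surviving path between it and the power plant) can be sketched with a breadth-first search over the grid graph; the toy radial grid below is hypothetical:

```python
from collections import deque

def blackout_set(adj, plant, failed):
    """Return the set of nodes that lose supply: those with no path to
    the power plant once the failed nodes are removed from the grid."""
    if plant in failed:
        return set(adj) - failed          # everything downstream is dark
    reached, queue = {plant}, deque([plant])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in failed and v not in reached:
                reached.add(v)
                queue.append(v)
    return set(adj) - failed - reached

# Toy radial grid: plant 0 feeds 1, which feeds 2 and 3; 4 hangs off 3.
grid = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
dark_if_1_fails = blackout_set(grid, plant=0, failed={1})  # {2, 3, 4}
dark_if_3_fails = blackout_set(grid, plant=0, failed={3})  # {4}
```

This illustrates the non-linearity the abstract reports: the consequence of a single node failure depends on that node's position in the topology, not only on its own malfunction rate.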
Trust and credibility: measured by multidimensional scaling
International Nuclear Information System (INIS)
Warg, L.E.; Bodin, L.
1998-01-01
Full text of publication follows: In focus of much of today's research interest in risk communication is the fact that communities do not trust policy and decision makers such as politicians, government, or industry people. This is especially serious in the years to come, when we expect risk issues concerning, for example, the nuclear industry, global warming, and hazardous waste to be even higher on the political and social agenda all over the world. Despite the research efforts devoted to trust, society needs an in-depth understanding of trust for conducting successful communication regarding environmental hazards. The present abstract concerns an experimental study in psychology focused on the possibility of using the multidimensional scaling technique to explore the characteristics people consider important when they say that certain persons are credible. In the study, a total of 61 students of the University of Oerebro, Sweden, were required to compare the similarity of 12 well-known Swedish persons from politics, science, media, industry, television, and literature (two persons at a time) regarding their credibility when making statements about risks in society. In addition, the subjects rated the importance of 19 factors for the credibility of a source. The 61 persons comprised three groups of students: pedagogists, business economists, and chemists. There were 61% women and 39% men, and the mean age was 23 years. The results will be analyzed using the multidimensional scaling technique. Differences between the three groups will be analyzed and presented, as well as those between men and women. In addition, the 19 factors will be discussed and considered when trying to label the dimensions accounted for by the multidimensional scaling technique. The results from this study will contribute to our understanding of important factors behind human judgments concerning trust and credibility. It will also point to a
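Classical (Torgerson) multidimensional scaling, the technique the study relies on, recovers coordinates from pairwise similarity judgments. A generic sketch (this is the textbook algorithm, not the study's analysis code; the four-point example is synthetic):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) multidimensional scaling: embed n objects
    in k dimensions from an n x n matrix of pairwise dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest
    w, V = np.clip(w[idx], 0, None), V[:, idx]
    return V * np.sqrt(w)                    # n x k coordinate matrix

# Four points on a unit square: their pairwise distances are recovered
# exactly (up to rotation/reflection of the embedding).
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D, k=2)
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
```

In the study's setting, D would hold the aggregated pairwise credibility dissimilarities of the 12 public figures, and labeling the recovered axes is the interpretive step the abstract describes.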
Multidimensional flux-limited advection schemes
International Nuclear Information System (INIS)
Thuburn, J.
1996-01-01
A general method for building multidimensional shape preserving advection schemes using flux limiters is presented. The method works for advected passive scalars in either compressible or incompressible flow and on arbitrary grids. With a minor modification it can be applied to the equation for fluid density. Schemes using the simplest form of the flux limiter can cause distortion of the advected profile, particularly sideways spreading, depending on the orientation of the flow relative to the grid. This is partly because the simple limiter is too restrictive. However, some straightforward refinements lead to a shape-preserving scheme that gives satisfactory results, with negligible grid-flow angle-dependent distortion
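A one-dimensional analogue illustrates the limiter mechanics described above: the limiter blends shape-preserving first-order upwind with accurate second-order Lax-Wendroff fluxes. This minmod sketch is illustrative only; the paper's scheme is multidimensional and grid-general:

```python
import numpy as np

def advect(q, c, nsteps,
           limiter=lambda r: np.maximum(0.0, np.minimum(1.0, r))):
    """Flux-limited advection of a passive scalar on a periodic 1D grid,
    Courant number 0 < c <= 1; minmod limiter by default."""
    for _ in range(nsteps):
        dq = np.roll(q, -1) - q                  # q[i+1] - q[i]
        dq_up = q - np.roll(q, 1)                # q[i]   - q[i-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            r = np.where(dq != 0, dq_up / dq, 0.0)   # smoothness ratio
        phi = limiter(r)
        # flux at i+1/2: upwind part + limited antidiffusive correction
        F = c * q + 0.5 * c * (1 - c) * phi * dq
        q = q + (np.roll(F, 1) - F)              # conservative update
    return q

q0 = np.zeros(100)
q0[40:60] = 1.0                                  # square pulse
q1 = advect(q0.copy(), c=0.5, nsteps=50)         # advected 25 cells
```

With phi = 0 the scheme is diffusive upwind; with phi = 1 it is oscillatory Lax-Wendroff; the limiter keeps the update total-variation diminishing, so no new extrema appear, which is the shape-preserving property the abstract refers to.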
Point Information Gain and Multidimensional Data Analysis
Directory of Open Access Journals (Sweden)
Renata Rychtáriková
2016-10-01
Full Text Available We generalize the point information gain (PIG) and derived quantities, i.e., point information gain entropy (PIE) and point information gain entropy density (PIED), for the case of the Rényi entropy and simulate the behavior of PIG for typical distributions. We also use these methods for the analysis of multidimensional datasets. We demonstrate the main properties of PIE/PIED spectra for real data with the examples of several images and discuss further possible utilizations in other fields of data processing.
New method for solving multidimensional scattering problem
International Nuclear Information System (INIS)
Melezhik, V.S.
1991-01-01
A new method is developed for solving the quantum mechanical problem of scattering of a particle with internal structure. The multichannel scattering problem is formulated as a system of nonlinear functional equations for the wave function and reaction matrix. The method is successfully tested for the scattering from a nonspherical potential well and a long-range nonspherical scatterer. The method is also applicable to solving the multidimensional Schroedinger equation with a discrete spectrum. As an example the known problem of a hydrogen atom in a homogeneous magnetic field is analyzed
An example of multidimensional analysis: Discriminant analysis
International Nuclear Information System (INIS)
Lutz, P.
1990-01-01
Among the approaches to multidimensional data analysis, lectures on discriminant analysis covering theoretical and practical aspects are presented. The discrimination problem, the analysis steps, and the discrimination categories are stressed. Examples are given of descriptive historical analysis, discrimination for decision making, and the demonstration and separation of the top quark. In linear discriminant analysis the following subjects are discussed: Huyghens' theorem, projection, the discriminant variable, geometrical interpretation, the case g=2, the classification method, and the separation of top events. Criteria allowing relevant results to be obtained are included [fr]
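Fisher's two-class linear discriminant, the g=2 case discussed above, can be sketched as follows. The "signal"/"background" labels echo the top-event separation example, but the data here are synthetic Gaussians:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher linear discriminant: direction w maximizing
    between-class over within-class scatter, w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter = sum of per-class scatter matrices
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = 0.5 * (m0 + m1) @ w          # midpoint decision boundary
    return w, threshold

rng = np.random.default_rng(1)
bkg = rng.normal([0, 0], 1.0, (200, 2))      # "background" class
sig = rng.normal([4, 4], 1.0, (200, 2))      # "signal" class, e.g. top events
w, thr = fisher_lda(bkg, sig)

labels = np.r_[np.zeros(200), np.ones(200)]
acc = ((np.vstack([bkg, sig]) @ w > thr) == labels).mean()
```

Projecting onto w is exactly the "discriminant variable" of the lecture: a single axis on which the two populations are maximally separated relative to their spread.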
S. Boldyreva; S. Fehr (Serge); A. O'Neill; D. Wagner
2008-01-01
The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO '07), who provided the "strongest possible" notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic
Aspects of cell calculations in deterministic reactor core analysis
International Nuclear Information System (INIS)
Varvayanni, M.; Savva, P.; Catsaros, N.
2011-01-01
The capability of achieving optimum utilization of deterministic neutronic codes is very important since, although elaborate tools, they are still widely used for nuclear reactor core analyses due to specific advantages they present compared to Monte Carlo codes. The user of a deterministic neutronic code system has to make some significant physical assumptions if correct results are to be obtained. A decisive first step at which such assumptions are required is the one-dimensional cell calculations, which provide the neutronic properties of the homogenized core cells and collapse the cross sections into user-defined energy groups. One of the most crucial determinations required at this stage, which significantly influences the subsequent three-dimensional calculations of reactivity, concerns the transverse leakages associated with each one-dimensional, user-defined core cell. For the appropriate definition of the transverse leakages, several parameters concerning the core configuration must be taken into account. Moreover, the suitability of the assumptions made for the transverse cell leakages depends on earlier user decisions, such as those made for the core partition into homogeneous cells. In the present work, the sensitivity of the calculated core reactivity to the determined leakages of the individual cells constituting the core is studied. Moreover, appropriate assumptions concerning the transverse leakages in the one-dimensional cell calculations are sought. The study also examines the influence of the core size and the presence of a reflector, while the effect of the decisions made for the core partition into homogeneous cells is investigated. In addition, the effect of broadened moderator channels formed within the core (e.g., by removing fuel plates to create space for control rod hosting) is examined. Since the study required a large number of conceptual core configurations, experimental data could not be available for
Benchmarking the Multidimensional Stellar Implicit Code MUSIC
Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.
2017-04-01
We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
MULTIDIMENSIONAL MODELING OF CORONAL RAIN DYNAMICS
Energy Technology Data Exchange (ETDEWEB)
Fang, X.; Xia, C.; Keppens, R. [Centre for mathematical Plasma Astrophysics, Department of Mathematics, KU Leuven, B-3001 Leuven (Belgium)
2013-07-10
We present the first multidimensional, magnetohydrodynamic simulations that capture the initial formation and long-term sustainment of the enigmatic coronal rain phenomenon. We demonstrate how thermal instability can induce a spectacular display of in situ forming blob-like condensations which then start their intimate ballet on top of initially linear force-free arcades. Our magnetic arcades host a chromospheric, transition region, and coronal plasma. Following coronal rain dynamics for over 80 minutes of physical time, we collect enough statistics to quantify blob widths, lengths, velocity distributions, and other characteristics which directly match modern observational knowledge. Our virtual coronal rain displays the deformation of blobs into V-shaped features, interactions of blobs due to mostly pressure-mediated levitations, and gives the first views of blobs that evaporate in situ or are siphoned over the apex of the background arcade. Our simulations pave the way for systematic surveys of coronal rain showers in true multidimensional settings to connect parameterized heating prescriptions with rain statistics, ultimately allowing us to quantify the coronal heating input.
Multidimensional Learner Model In Intelligent Learning System
Deliyska, B.; Rozeva, A.
2009-11-01
The learner model in an intelligent learning system (ILS) has to ensure the personalization (individualization) and adaptability of e-learning in an online learner-centered environment. An ILS is a distributed e-learning system whose modules can be independent and located in different nodes (servers) on the Web. This kind of e-learning is achieved through the resources of the Semantic Web and is designed and developed around a course, group of courses, or specialty. An essential part of an ILS is the learner model database, which contains structured data about the learner profile and temporal status in the learning process of one or more courses. In the paper, the position of the learner model in an ILS is considered and a relational database is designed from the learner's domain ontology. A multidimensional modeling agent for the source database is designed and the resultant learner data cube is presented. The agent's modules are proposed with corresponding algorithms and procedures. Multidimensional (OLAP) analysis guidelines on the resultant learner model for designing a dynamic learning strategy are highlighted.
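A learner data cube of the kind described can be sketched as a roll-up aggregation over a fact table. The fact table, dimension names, and measure below are hypothetical, chosen only to illustrate the OLAP cube structure:

```python
from itertools import product
from collections import defaultdict

def build_cube(facts, dims, measure):
    """Aggregate a fact table into an OLAP-style cube: one total per
    combination of dimension values, including 'ALL' roll-up cells."""
    cube = defaultdict(float)
    for row in facts:
        # each fact contributes to its own cell and to every roll-up
        keys = [(row[d], "ALL") for d in dims]
        for cell in product(*keys):
            cube[cell] += row[measure]
    return dict(cube)

# hypothetical learner-model facts: (course, activity) -> score
facts = [
    {"course": "math", "activity": "quiz", "score": 80},
    {"course": "math", "activity": "exam", "score": 70},
    {"course": "cs",   "activity": "quiz", "score": 90},
]
cube = build_cube(facts, ["course", "activity"], "score")
```

Slicing the cube (e.g. `cube[("math", "ALL")]`) answers the kind of OLAP query the guidelines target: aggregate learner status per course, per activity type, or overall.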
Multidimensional biochemical information processing of dynamical patterns.
Hasegawa, Yoshihiko
2018-02-01
Cells receive signaling molecules by receptors and relay information via sensory networks so that they can respond properly depending on the type of signal. Recent studies have shown that cells can extract multidimensional information from dynamical concentration patterns of signaling molecules. We herein study how biochemical systems can process multidimensional information embedded in dynamical patterns. We model the decoding networks by linear response functions, and optimize the functions with the calculus of variations to maximize the mutual information between patterns and output. We find that, when the noise intensity is lower, decoders with different linear response functions, i.e., distinct decoders, can extract much information. However, when the noise intensity is higher, distinct decoders do not provide the maximum amount of information. This indicates that, when transmitting information by dynamical patterns, embedding information in multiple patterns is not optimal when the noise intensity is very large. Furthermore, we explore the biochemical implementations of these decoders using control theory and demonstrate that these decoders can be implemented biochemically through the modification of cascade-type networks, which are prevalent in actual signaling pathways.
Testlet-Based Multidimensional Adaptive Testing.
Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen
2016-01-01
Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.
A Multidimensional Theory of Suicide.
Leenaars, Antoon A; Dieserud, Gudrun; Wenckstern, Susanne; Dyregrov, Kari; Lester, David; Lyke, Jennifer
2018-04-05
Theory is the foundation of science; this is true in suicidology. Over decades of studies of suicide notes, Leenaars developed a multidimensional model of suicide, with international (cross-cultural) studies and independent verification. The aim was to corroborate Leenaars's theory with a psychological autopsy (PA) study, examining the age and sex of the decedent and the survivor's relationship to the deceased. A PA study with 120 survivors/informants was undertaken in Norway. Leenaars's theoretical-conceptual (protocol) analysis was applied to the survivors' narratives and in-depth interviews combined. Substantial interjudge reliability was noted (κ = .632). Overall, there was considerable confirmatory evidence of Leenaars's intrapsychic and interpersonal factors in suicide survivors' narratives. Differences were found in the age of the decedent, but not in sex, nor in the closeness of the survivor's relationship. Older deceased people were perceived to exhibit more heightened unbearable intrapsychic pain associated with the suicide. Leenaars's theory thus has corroborative verification, through both the decedents' suicide notes and the survivors' narratives. However, the multidimensional model needs further testing to develop a better evidence-based way of understanding suicide.
[Multidimensional family therapy: which influences, which specificities?].
Bonnaire, C; Bastard, N; Couteron, J-P; Har, A; Phan, O
2014-10-01
Among illegal psychoactive drugs, cannabis is the most consumed by French adolescents. Multidimensional family therapy (MDFT) is a family-based outpatient therapy developed for adolescents with drug and behavioral problems. MDFT has shown its effectiveness in adolescents with substance abuse disorders (notably cannabis abuse) not only in the United States but also in Europe (International Cannabis Need of Treatment project). MDFT is a multidisciplinary approach and an evidence-based treatment, at the crossroads of developmental psychology, ecological theories, and family therapy. Its psychotherapeutic techniques find their roots in a variety of approaches, including systemic family therapy and cognitive therapy. The aims of this paper are: to describe the background of MDFT by highlighting its characteristics; to explain how structural and strategic therapies have influenced this approach; to explore the links between MDFT, brief strategic family therapy, and multisystemic family therapy; and to underline the specificities of this family therapy method. Multidimensional family therapy was created on the basis of 1) the integration of multiple therapeutic techniques stemming from various family therapy theories, and 2) studies that have shown the efficacy of family therapy. Several trials have shown greater efficacy of MDFT compared with group treatment, cognitive-behavioral therapy, and home-based treatment. Studies have also highlighted that MDFT led to superior treatment outcomes, especially among young people with severe drug use and psychiatric comorbidities. In the field of systemic family therapies, MDFT was influenced by: 1) structural family therapy (S. Minuchin), 2) strategic family therapy (J. Haley), and 3) intergenerational family therapy (Bowen and Boszormenyi-Nagy). MDFT has specific aspects: MDFT therapists think in a multidimensional perspective (because an adolescent's drug abuse is a multidimensional disorder), they…
A deterministic seismic hazard map of India and adjacent areas
International Nuclear Information System (INIS)
Parvez, Imtiyaz A.; Vaccari, Franco; Panza, Giuliano
2001-09-01
A seismic hazard map of the territory of India and adjacent areas has been prepared using a deterministic approach based on the computation of synthetic seismograms complete with all main phases. The input data set consists of structural models, seismogenic zones, focal mechanisms, and an earthquake catalogue. The synthetic seismograms have been generated by the modal summation technique. The seismic hazard, expressed in terms of maximum displacement (DMAX), maximum velocity (VMAX), and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid of 0.2 deg. x 0.2 deg. over the studied territory. The estimated values of the peak ground acceleration are compared with the observed data available for the Himalayan region and found to be in good agreement. Many parts of the Himalayan region have DGA values exceeding 0.6 g. The epicentral areas of the great Assam earthquakes of 1897 and 1950 represent the maximum hazard, with DGA values reaching 1.2-1.3 g. (author)
Deterministic and fuzzy-based methods to evaluate community resilience
Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo
2018-04-01
Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from the literature, and each indicator is linked to a measure allowing the analytical computation of the indicator's performance. The first method proposed in this paper takes data on previous disasters as input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data, including the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open-source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided.
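As a rough illustration of the fuzzy-based idea, the sketch below maps indicator scores in [0, 1] to a weighted community index and to memberships in three fuzzy resilience classes. The membership shapes, class names, and weighting scheme are assumptions for illustration, not the PEOPLES framework's actual rules.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function supported on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_resilience_index(indicator_scores, weights):
    """Aggregate indicator scores into a weighted community index, then
    express that index as memberships in low/medium/high resilience classes
    (class boundaries here are hypothetical)."""
    total = sum(weights)
    index = sum(w * s for w, s in zip(weights, indicator_scores)) / total
    memberships = {
        "low": triangular(index, -0.001, 0.0, 0.5),
        "medium": triangular(index, 0.0, 0.5, 1.0),
        "high": triangular(index, 0.5, 1.0, 1.001),
    }
    return index, memberships
```

Descriptive knowledge ("infrastructure recovery is medium") can then be handled by working with the memberships instead of a single crisp score.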
Entrepreneurs, Chance, and the Deterministic Concentration of Wealth
Fargione, Joseph E.; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs (individuals with ownership in for-profit enterprises) comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to the inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
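The core mechanism, compounding multiplicative random returns, can be reproduced in a few lines: every entrepreneur's wealth is multiplied each year by an independent random return, and the share held by the top 1% is tracked. The return values and horizon below are hypothetical, chosen only to make the concentration effect visible; they are not the paper's parameters.

```python
import random

def simulate_wealth(n=1000, years=200, seed=0):
    """Wealth compounds by an i.i.d. random return each year; chance plus
    compounding alone concentrates wealth in a few hands over time.
    Returns the share of total wealth held by the top 1% of entrepreneurs."""
    rng = random.Random(seed)
    wealth = [1.0] * n  # everyone starts equal
    for _ in range(years):
        # Hypothetical good/bad year multipliers; variance drives concentration.
        wealth = [w * rng.choice([0.8, 1.3]) for w in wealth]
    total = sum(wealth)
    top_share = sum(sorted(wealth, reverse=True)[: n // 100]) / total
    return top_share
```

With zero years everyone still holds an equal share; as the horizon grows, the top-1% share climbs toward 1, illustrating the "unlimited concentration" result.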
Deterministic and Probabilistic Analysis against Anticipated Transient Without Scram
International Nuclear Information System (INIS)
Choi, Sun Mi; Kim, Ji Hwan; Seok, Ho
2016-01-01
An Anticipated Transient Without Scram (ATWS) is an Anticipated Operational Occurrence (AOO) accompanied by a failure of the reactor trip when required. By a suitable combination of inherent characteristics and diverse systems, the reactor design needs to reduce the probability of an ATWS and, should one occur, to limit any core damage and prevent loss of integrity of the reactor coolant pressure boundary. This study focuses on the deterministic analysis of ATWS events with respect to Reactor Coolant System (RCS) over-pressure and fuel integrity for the EU-APR. Additionally, this report presents the Probabilistic Safety Assessment (PSA) reflecting those diverse systems. The analysis performed for the ATWS event indicates that the NSSS could reach a controlled, safe state owing to the addition of boron into the core via the EBS pump flow upon the EBAS signal from the DPS. Decay heat is removed through the MSADVs and the auxiliary feedwater. During the ATWS event, the RCS pressure boundary is maintained by the operation of the primary and secondary safety valves. Consequently, the acceptance criteria were satisfied by installing the DPS and EBS in addition to the inherent safety characteristics
Deterministic versus evidence-based attitude towards clinical diagnosis.
Soltani, Akbar; Moayyeri, Alireza
2007-08-01
Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability that the event is related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances of medical decision making. While 'probabilistic or evidence-based' reasoning seems at first glance to involve more mathematical formulas, this attitude is more dynamic and less imprisoned by the rigidity of mathematics than the 'deterministic or mathematical' attitude. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and utilization of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests to refine probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
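The likelihood-ratio update mentioned above has a compact closed form: post-test odds equal pre-test odds times the likelihood ratio, after which the odds are converted back to a probability. A minimal sketch:

```python
def post_test_probability(pretest_p, lr):
    """Bayesian update of a diagnostic probability with a likelihood ratio:
    post-odds = pre-odds * LR; then convert odds back to a probability."""
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# e.g. a 20% pretest probability and a positive test with LR+ = 10
p = post_test_probability(0.20, 10.0)  # -> ~0.714
```

Chaining several independent tests amounts to multiplying their likelihood ratios, which is the "series of tests for refining probability" the abstract refers to.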
Method to deterministically study photonic nanostructures in different experimental instruments.
Husken, B H; Woldering, L A; Blum, C; Vos, W L
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. To this end, a detailed map of the spatial surroundings of the nanostructure is made during its fabrication. These maps are made using a series of micrographs with successively decreasing magnifications. The micrographs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups to act as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation, a scanning electron microscope (SEM), a wide-field optical microscope, and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM within a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, like samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.
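The cross-correlation recovery step can be illustrated with a brute-force template match: slide the small (SEM-scale) patch over the large (optical-scale) image and keep the placement with the highest correlation score. This is a toy sketch of the principle, not the authors' processing pipeline; real imagery would call for normalized correlation and FFT acceleration.

```python
def best_match(large, small):
    """Locate a small image patch inside a larger image by maximizing the
    raw cross-correlation score over all placements (brute force).
    Images are lists of rows of pixel intensities."""
    H, W = len(large), len(large[0])
    h, w = len(small), len(small[0])
    best, best_pos = float("-inf"), None
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            score = sum(large[i + di][j + dj] * small[di][dj]
                        for di in range(h) for dj in range(w))
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos  # top-left corner of the best placement
```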
Prospects in deterministic three dimensional whole-core transport calculations
International Nuclear Information System (INIS)
Sanchez, Richard
2012-01-01
The point we make in this paper is that, although detailed and precise three-dimensional (3D) whole-core transport calculations may be obtained in the future with massively parallel computers, they would apply to only some of the problems of the nuclear industry, more precisely those regarding multiphysics, methodology validation, or nuclear safety calculations. On the other hand, typical design reactor cycle calculations, comprising many one-point core calculations, can have very strict constraints on computing time and will not directly benefit from advances in large-scale computers. Consequently, in this paper we review some of the deterministic 3D transport methods which in the very near future may have potential for industrial applications and which, even with low-order approximations such as a low resolution in energy, might represent an advantage compared with present industrial methodology, for which one of the main approximations is due to power reconstruction. These methods comprise the response-matrix method and methods based on the two-dimensional (2D) method of characteristics, such as the fusion method.
Conversion of dependability deterministic requirements into probabilistic requirements
International Nuclear Information System (INIS)
Bourgade, E.; Le, P.
1993-02-01
This report concerns the on-going survey conducted jointly by the DAM/CCE and NRE/SR branches on the inclusion of dependability requirements in control and instrumentation projects. Its purpose is to enable a customer (the prime contractor) to convert into probabilistic terms deterministic dependability requirements expressed in the form 'a maximum permissible number of failures, of maximum duration d, in a period t'. The customer selects a confidence level for each previously defined undesirable event by assigning a maximum probability of occurrence. Using the formulae we propose for two repair policies, constant rate or constant time, these probabilized requirements can then be transformed into equivalent failure rates. It is shown that the same formula can be used for both policies, provided certain realistic assumptions hold, and that for a constant-time repair policy, the correct result can always be obtained. The equivalent failure rates thus determined can be included in the specifications supplied to the contractors, who will then be able to proceed to their previsional justification. (author), 8 refs., 3 annexes
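Under the common assumption that failures follow a Poisson process, the conversion sketched above can be illustrated by solving for the largest failure rate whose probability of exceeding n failures in period t stays below the assigned maximum probability. The Poisson model and the bisection bounds are my assumptions for illustration, not the report's actual formulae.

```python
import math

def poisson_tail(lam_t, n):
    """P(N > n) for N ~ Poisson(lam_t): one minus the CDF up to n."""
    cdf = sum(math.exp(-lam_t) * lam_t**k / math.factorial(k)
              for k in range(n + 1))
    return 1.0 - cdf

def equivalent_failure_rate(n, t, p_max):
    """Largest failure rate lambda such that the probability of more than
    n failures in period t stays at or below p_max (bisection on lambda*t;
    the tail probability is monotone increasing in lambda)."""
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_tail(mid, n) <= p_max:
            lo = mid
        else:
            hi = mid
    return lo / t
```

For n = 0 this reduces to the closed form lambda = -ln(1 - p_max) / t, which the bisection reproduces.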
Fisher-Wright model with deterministic seed bank and selection.
Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel
2017-04-01
Seed banks are common to many plant species, allowing the storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection coupled with a deterministic seed bank, assuming the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path of approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection on the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached.
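A minimal Fisher-Wright step with selection (no seed bank) looks as follows; the seed-bank coupling and the diffusion limit of the paper are beyond this sketch, and the parameterization of selection is a standard textbook form rather than the authors' exact model.

```python
import random

def wright_fisher(p0, N, s, generations, seed=1):
    """Allele frequency under Fisher-Wright sampling with selection: each
    generation, selection reweights the focal allele (coefficient s), then
    N offspring are drawn binomially at the adjusted frequency (drift)."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))      # selection step
        p = sum(rng.random() < p_sel for _ in range(N)) / N  # drift step
        if p in (0.0, 1.0):
            break  # absorption: allele lost or fixed
    return p
```

A deterministic seed bank would, roughly, replace the drift step's sampling frequency with a weighted average over the allele frequencies of the last few generations, which is what slows the approach to equilibrium.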
Deterministic network interdiction optimization via an evolutionary approach
International Nuclear Information System (INIS)
Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel
2009-01-01
This paper introduces an evolutionary optimization approach that can be readily applied to solve deterministic network interdiction problems. The network interdiction problem solved considers the minimization of the maximum flow that can be transmitted between a source node and a sink node for a fixed network design when there is a limited amount of resources available to interdict network links. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with their interdiction can change from link to link. For this problem, the solution approach developed is based on three steps that use: (1) Monte Carlo simulation, to generate potential network interdiction strategies, (2) Ford-Fulkerson algorithm for maximum s-t flow, to analyze strategies' maximum source-sink flow and, (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks and network behavior are used throughout the paper to illustrate the approach. In terms of computational effort, the results illustrate that solutions are obtained from a significantly restricted solution search space. Finally, the authors discuss the need for a reliability perspective to network interdiction, so that solutions developed address more realistic scenarios of such problem
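Step (2) of the approach, evaluating a candidate interdiction strategy by computing the residual maximum s-t flow, can be sketched with a plain Edmonds-Karp implementation (a BFS-based variant of Ford-Fulkerson). The dictionary-of-dictionaries graph encoding below is an assumption for illustration.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest s-t paths in the
    residual graph until no augmenting path remains."""
    residual = {u: dict(vs) for u, vs in cap.items()}
    for u in list(residual):                      # ensure reverse edges exist
        for v in residual[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:              # BFS for a shortest path
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                           # walk back to find the path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[u][v] for u, v in path)
        for u, v in path:                         # push flow, update residuals
            residual[u][v] -= aug
            residual[v][u] += aug
        flow += aug

def interdicted_flow(cap, s, t, removed_links):
    """Evaluate an interdiction strategy by zeroing the chosen links."""
    cap2 = {u: {v: (0 if (u, v) in removed_links else c) for v, c in vs.items()}
            for u, vs in cap.items()}
    return max_flow(cap2, s, t)
```

An evolutionary search as in the paper would generate candidate `removed_links` sets subject to the interdiction budget and score each one with `interdicted_flow`.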
Is there a sharp phase transition for deterministic cellular automata?
International Nuclear Information System (INIS)
Wootters, W.K.
1990-01-01
Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider. 18 refs., 6 figs
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
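For contrast with the backward-depth method described above, a classic Moore-style partition refinement (repeatedly splitting blocks by transition signatures until stable) fits in a few lines. This is the quadratic-style baseline that such papers improve on, not the paper's algorithm.

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style partition refinement: start from the accepting/rejecting
    split, then refine until all states in a block agree, for every symbol,
    on which block their transition leads to."""
    partition = {q: (q in accepting) for q in states}
    while True:
        # A state's signature: its own block plus the blocks it moves to.
        signature = {q: (partition[q],
                         tuple(partition[delta[q][a]] for a in alphabet))
                     for q in states}
        blocks = {}
        for q in states:
            blocks.setdefault(signature[q], []).append(q)
        new_partition = {}
        for i, block in enumerate(blocks.values()):
            for q in block:
                new_partition[q] = i
        # Signatures only ever refine blocks; equal block counts mean stable.
        if len(set(new_partition.values())) == len(set(partition.values())):
            return new_partition  # maps each state to its equivalence class
        partition = new_partition
```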
Deterministic and Probabilistic Analysis against Anticipated Transient Without Scram
Energy Technology Data Exchange (ETDEWEB)
Choi, Sun Mi; Kim, Ji Hwan [KHNP Central Research Institute, Daejeon (Korea, Republic of); Seok, Ho [KEPCO Engineering and Construction, Daejeon (Korea, Republic of)
2016-10-15
An Anticipated Transient Without Scram (ATWS) is an Anticipated Operational Occurrence (AOO) accompanied by a failure of the reactor trip when required. By a suitable combination of inherent characteristics and diverse systems, the reactor design needs to reduce the probability of an ATWS and, should one occur, to limit any core damage and prevent loss of integrity of the reactor coolant pressure boundary. This study focuses on the deterministic analysis of ATWS events with respect to Reactor Coolant System (RCS) over-pressure and fuel integrity for the EU-APR. Additionally, this report presents the Probabilistic Safety Assessment (PSA) reflecting those diverse systems. The analysis performed for the ATWS event indicates that the NSSS could reach a controlled, safe state owing to the addition of boron into the core via the EBS pump flow upon the EBAS signal from the DPS. Decay heat is removed through the MSADVs and the auxiliary feedwater. During the ATWS event, the RCS pressure boundary is maintained by the operation of the primary and secondary safety valves. Consequently, the acceptance criteria were satisfied by installing the DPS and EBS in addition to the inherent safety characteristics.
Rapid detection of small oscillation faults via deterministic learning.
Wang, Cong; Chen, Tianrui
2011-08-01
Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally and accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the test phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the trained normal and fault modes. By comparing the set of estimators with the monitored system under test, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the trained normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest-residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in the fact that the modeling uncertainty and nonlinear fault functions are accurately approximated, and the knowledge is then utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach.
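The smallest-residual decision rule can be isolated into a tiny sketch: compare the monitored signal against each stored mode estimate by average L1 residual norm and pick the mode with the smallest residual. The RBF estimators themselves are omitted; the plain lists below stand in for their outputs.

```python
def detect_mode(test_signal, mode_estimates):
    """Smallest-residual detection: for each trained mode, compute the
    average L1 norm of the residual between the monitored signal and that
    mode's estimate, then return the mode with the smallest residual."""
    def avg_l1(xs, ys):
        return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

    residuals = {name: avg_l1(test_signal, est)
                 for name, est in mode_estimates.items()}
    return min(residuals, key=residuals.get), residuals
```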
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. Differently, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
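A toy version of the deterministic ripple idea: once the ripple-related parameters are fixed (here, node positions drawn with a fixed seed, plus an initial ripple energy decaying as 1/distance), the same topology is produced on every run. The decay law and energy threshold are assumptions for illustration, not the paper's exact model.

```python
import math
import random

def ripple_network(n, initial_energy=10.0, seed=42):
    """Sketch of ripple-spreading network generation: nodes are random
    points in the unit square; a ripple spreads from each node with energy
    decaying as 1/distance, and a link u-v is created when the ripple still
    carries energy >= 1 on reaching v. With the seed fixed, the resulting
    edge set is uniquely determined (the model's deterministic property)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            d = math.dist(pts[u], pts[v])
            if d > 0 and initial_energy / d >= 1.0:
                edges.add((u, v))
    return edges
```

Shrinking `initial_energy` sparsifies the network, mimicking how the ripple-related parameters compactly encode a whole topology instead of an adjacency matrix.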
Improving personality facet scores with multidimensional computer adaptive testing
DEFF Research Database (Denmark)
Makransky, Guido; Mortensen, Erik Lykke; Glas, Cees A W
2013-01-01
…personality tests contain many highly correlated facets. This article investigates the possibility of increasing the precision of the NEO PI-R facet scores by scoring items with multidimensional item response theory and by efficiently administering and scoring items with multidimensional computer adaptive testing…
Multidimensional Computerized Adaptive Testing for Indonesia Junior High School Biology
Kuo, Bor-Chen; Daud, Muslem; Yang, Chih-Wei
2015-01-01
This paper describes a curriculum-based multidimensional computerized adaptive test that was developed for Indonesian junior high school Biology. In adherence to the Indonesian curriculum's different Biology dimensions, 300 items were constructed and then administered to 2238 students. A multidimensional random coefficients multinomial logit model was…
The Tunneling Method for Global Optimization in Multidimensional Scaling.
Groenen, Patrick J. F.; Heiser, Willem J.
1996-01-01
A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
Multidimensional Physical Self-Concept of Athletes with Physical Disabilities
Shapiro, Deborah R.; Martin, Jeffrey J.
2010-01-01
The purposes of this investigation were first to predict reported PA (physical activity) behavior and self-esteem using a multidimensional physical self-concept model and second to describe perceptions of multidimensional physical self-concept (e.g., strength, endurance, sport competence) among athletes with physical disabilities. Athletes (N =…
Multidimensional filter banks and wavelets research developments and applications
Levy, Bernard
1997-01-01
Multidimensional Filter Banks and Wavelets: Research Developments and Applications brings together in one place important contributions and up-to-date research results in this area. It serves as an excellent reference, providing insight into some of the most important research issues in the field.
Multidimensional First-Order Dominance Comparisons of Population Wellbeing
DEFF Research Database (Denmark)
Siersbæk, Nikolaj; Østerdal, Lars Peter Raahave; Arndt, Thomas Channing
2017-01-01
This chapter conveys the concept of first-order dominance (FOD) with particular focus on applications to multidimensional population welfare comparisons. It gives an account of the fundamental equivalent definitions of FOD both in the one-dimensional and multidimensional setting, illustrated...
Supervised and Unsupervised Learning of Multidimensional Acoustic Categories
Goudbeek, Martijn; Swingley, Daniel; Smits, Roel
2009-01-01
Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is…
Multidimensional quantum entanglement with large-scale integrated optics.
Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G
2018-04-20
The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies.
Directory of Open Access Journals (Sweden)
Seyed Jalal Younesi
2015-06-01
Full Text Available Objective: The current research investigates the relation between deterministic thinking and mental health among drug abusers, in which the role of cognitive distortions is considered and clarified by focusing on deterministic thinking. Methods: The present study is descriptive and correlational. All individuals with experience of drug abuse who had been referred to the Shafagh Rehabilitation Center (Kahrizak) were considered as the statistical population. 110 individuals addicted to drugs (stimulants and methamphetamine) were selected from this population by purposeful sampling to answer questionnaires about deterministic thinking and general health. For data analysis, Pearson correlation coefficients and regression analysis were used. Results: The results showed a positive and significant relationship between deterministic thinking and the lack of mental health (r = .22, P < 0.05), with deterministic thinking most closely related to the mental health factors of anxiety and depression. The two factors of deterministic thinking that function as the strongest predictors of the lack of mental health are definitiveness in predicting tragic events and future anticipation. Discussion: It seems that drug abusers suffer from deterministic thinking when they are confronted with difficult situations, so they are more affected by depression and anxiety. This way of thinking may play a major role in impelling or restraining drug addiction.
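The Pearson correlation used in the analysis above is straightforward to compute from raw scores; a self-contained sketch (the data passed in would be the questionnaire scores, which are of course not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists:
    covariance divided by the product of the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```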
Deterministic one-way simulation of two-way, real-time cellular automata and its related problems
Energy Technology Data Exchange (ETDEWEB)
Umeo, H; Morita, K; Sugata, K
1982-06-13
The authors show that for any deterministic two-way, real-time cellular automaton M, there exists a deterministic one-way cellular automaton which can simulate M in twice real time. Moreover, the authors present a new type of deterministic one-way cellular automata, called circular cellular automata, which are computationally equivalent to deterministic two-way cellular automata. 7 references.
Optical Multidimensional Switching for Data Center Networks
DEFF Research Database (Denmark)
Kamchevska, Valerija
2017-01-01
First, the Hi-Ring data center architecture is proposed. It is based on optical multidimensional switching nodes that provide switching in hierarchically layered space, wavelength and time domains. The performance of the Hi-Ring architecture is evaluated experimentally, and successful switching of both high-capacity wavelength connections and time-shared subwavelength connections is demonstrated. Error-free performance is also achieved when transmitting 7 Tbit/s using multicore fiber, confirming the ability to scale the network. Moreover, software-controlled switching using an on-chip integrated fiber switch is demonstrated, and the enabling of additional network functionalities such as multicast and optical grooming is experimentally confirmed. Altogether this work demonstrates the potential of optical switching technologies for the purpose of deploying optical switching within the network.
A complete set of multidimensional Bell inequalities
International Nuclear Information System (INIS)
Arnault, François
2012-01-01
We give a multidimensional generalization of the complete set of Bell-correlation inequalities given by Werner and Wolf (2001 Phys. Rev. A 64 032112) and by Zukowski and Brukner (2002 Phys. Rev. Lett. 88 210401) for the two-dimensional case. Our construction applies to the n-party, two-observable case, where each observable is d-valued. The d^(d^n) inequalities obtained involve homogeneous polynomials. They define the facets of a polytope in a complex vector space of dimension d^n. We detail the inequalities obtained in the case d = 3 and, from them, we recover known inequalities. We finally explain how the violations of our inequalities by quantum mechanics can be computed and could be observed when using unitary observables. (paper)
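For orientation, in the lowest-dimensional case (d = 2, n = 2) the Werner-Wolf family that this construction generalizes contains the familiar CHSH facet; this is a standard fact stated here for context, not a result taken from the paper:

```latex
% CHSH facet recovered in the d = 2, n = 2 case
\left| \langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle
     + \langle A_2 B_1 \rangle - \langle A_2 B_2 \rangle \right| \le 2
```

For d = 2 the counting formula d^(d^n) gives 2^(2^2) = 16 inequalities, matching the known two-party Werner-Wolf set.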
The simulation of multidimensional multiphase flows
International Nuclear Information System (INIS)
Lahey, Richard T.
2005-01-01
This paper presents an assessment of various models which can be used for the multidimensional simulation of multiphase flows, such as may occur in nuclear reactors. In particular, a model appropriate for the direct numerical simulation (DNS) of multiphase flows and a mechanistically based, three-dimensional, four-field, turbulent, two-fluid computational multiphase fluid dynamics (CMFD) model are discussed. A two-fluid bubbly flow model, which was derived using potential flow theory, can be extended to other flow regimes, but this will normally involve ensemble-averaging the results from direct numerical simulations (DNS) of various flow regimes to provide the detailed numerical data necessary for the development of flow-regime-specific interfacial and wall closure laws
Constraint theory multidimensional mathematical model management
Friedman, George J
2017-01-01
Packed with new material and research, this second edition of George Friedman’s bestselling Constraint Theory remains an invaluable reference for all engineers, mathematicians, and managers concerned with modeling. As in the first edition, this text analyzes the way Constraint Theory employs bipartite graphs and presents the process of locating the “kernel of constraint” trillions of times faster than brute-force approaches, determining model consistency and computational allowability. Unique in its abundance of topological pictures of the material, this book balances left- and right-brain perceptions to provide a thorough explanation of multidimensional mathematical models. Much of the extended material in this new edition also comes from Phan Phan’s PhD dissertation in 2011, titled “Expanding Constraint Theory to Determine Well-Posedness of Large Mathematical Models.” Praise for the first edition: "Dr. George Friedman is indisputably the father of the very powerful methods of constraint theory...
Path integral approach to multidimensional quantum tunnelling
International Nuclear Information System (INIS)
Balantekin, A.B.; Takigawa, N.
1985-01-01
The path integral formulation of the coupled channel problem in the case of multidimensional quantum tunneling is presented and two-time influence functionals are introduced. The two-time influence functionals are calculated explicitly for the three simplest cases: harmonic oscillators linearly or quadratically coupled to the translational motion, and a system with a finite number of equidistant energy levels linearly coupled to the translational motion. The effects of these couplings on the transmission probability are studied for two limiting cases: the adiabatic case, and the case in which the internal system has a degenerate energy spectrum. The condition for the transmission probability to show a resonant structure is discussed and exemplified. Finally, the properties of the dissipation factor in the adiabatic limit and its correlation with the friction coefficient in the classically accessible region are studied
Security Contents: Politico-Military or Multidimensional?
Directory of Open Access Journals (Sweden)
Pere Vilanova
1997-12-01
Full Text Available The description of security problems has dramatically changed since the end of the bipolar system, and there are difficulties in building new concepts to comprehend a new and not yet defined international system. In the bipolar world, based on the North-South and East-West axes, security was described as systemic stability built upon deterrence and the defense of the status quo. After the end of the Cold War, a new concept of multidimensional security was formulated. It laid emphasis on the political, social (economic development) and international (peaceful international relations, democracy and the rule of law) dimensions, putting aside too rapidly the military dimension. Vilanova argues that what have been identified as sources of new threats (narcotrafficking, ecology, migration, terrorism and fundamentalism) are not really new. There is a need to formulate political responses to these risk factors by means of public policies and intergovernmental and supranational action.
Multidimensional splines for modeling FET nonlinearities
Energy Technology Data Exchange (ETDEWEB)
Barby, J A
1986-01-01
Circuit simulators like SPICE and timing simulators like MOTIS are used extensively for critical path verification of integrated circuits. MOSFET model evaluation dominates the run time of these simulators. Changes in technology result in costly updates, since modifications require reprogramming of the functions and their derivatives. The computational cost of MOSFET models can be reduced by using multidimensional polynomial splines. Since simulators based on the Newton-Raphson algorithm require the function and first derivative, quadratic splines are sufficient for this purpose. The cost of updating the MOSFET model due to technology changes is greatly reduced, since splines are derived from a set of points. Crucial for the convergence speed of simulators is the fact that MOSFET characteristic equations are monotonic; this must be maintained by any simulation model. The splines the author designed do maintain monotonicity.
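The point about Newton-Raphson needing the function and its first derivative can be sketched directly: a piecewise-quadratic coefficient table yields both in a single evaluation. The knots and coefficients below are invented for illustration (they happen to reproduce f(x) = x²), not a real device characteristic.

```python
# Hypothetical sketch: evaluate a piecewise-quadratic spline and its first
# derivative in one pass, which is all a Newton-Raphson-based simulator needs.
# Knots and coefficients are illustrative, not a real MOSFET characteristic.
import bisect

knots = [0.0, 1.0, 2.0, 3.0]          # breakpoints x_k
coefs = [(0.0, 0.0, 1.0),             # (a, b, c) on [x_k, x_{k+1}):
         (1.0, 2.0, 1.0),             #   f(x) = a + b*t + c*t^2,
         (4.0, 4.0, 1.0)]             #   with t = x - x_k

def spline_eval(x):
    """Return (value, first derivative) of the spline at x."""
    k = min(bisect.bisect_right(knots, x) - 1, len(coefs) - 1)
    k = max(k, 0)
    a, b, c = coefs[k]
    t = x - knots[k]
    return a + b * t + c * t * t, b + 2.0 * c * t
```

Because each segment's value and slope match at the knots, the table is C¹-continuous, which keeps Newton iterations well behaved; monotonicity is then a constraint on the fitted coefficients.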
Multidimensional Scaling for Orthodontic Root Resorption
Directory of Open Access Journals (Sweden)
Cristina Teodora Preoteasa
2013-01-01
Full Text Available The paper investigates the risk factors for the severity of orthodontic root resorption. The multidimensional scaling (MDS visualization method is used to investigate the experimental data from patients who received orthodontic treatment at the Department of Orthodontics and Dentofacial Orthopedics, Faculty of Dentistry, “Carol Davila” University of Medicine and Pharmacy, during a period of 4 years. The clusters emerging in the MDS plots reveal features and properties not easily captured by classical statistical tools. The results support the adoption of MDS for tackling the dentistry information and overcoming noise embedded into the data. The method introduced in this paper is rapid, efficient, and very useful for treating the risk factors for the severity of orthodontic root resorption.
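For readers unfamiliar with MDS, the classical (Torgerson) variant can be sketched in a few lines of NumPy. This is the generic algorithm on a toy distance matrix, not the paper's clinical data or software.

```python
# Minimal classical (Torgerson) MDS sketch: double-center the squared
# distance matrix and embed along its top eigenvectors. Toy data only.
import numpy as np

def classical_mds(D, dim=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]            # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Three points on a line embed exactly in one dimension:
D = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])
X = classical_mds(D, dim=1)
```

The clusters the paper reads off MDS plots come from inspecting such low-dimensional coordinates; the embedding preserves the pairwise distances as faithfully as the chosen dimension allows.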
Multidimensional student skills with collaborative filtering
Bergner, Yoav; Rayyan, Saif; Seaton, Daniel; Pritchard, David E.
2013-01-01
Despite the fact that a physics course typically culminates in one final grade for the student, many instructors and researchers believe that there are multiple skills that students acquire to achieve mastery. Assessment validation and data analysis in general may thus benefit from extension to multidimensional ability. This paper introduces an approach for model determination and dimensionality analysis using collaborative filtering (CF), which is related to factor analysis and item response theory (IRT). Model selection is guided by machine learning perspectives, seeking to maximize the accuracy in predicting which students will answer which items correctly. We apply the CF to response data for the Mechanics Baseline Test and combine the results with prior analysis using unidimensional IRT.
Gender Ideologies in Europe: A Multidimensional Framework.
Grunow, Daniela; Begall, Katia; Buchler, Sandra
2018-02-01
The authors argue, in line with recent research, that operationalizing gender ideology as a unidimensional construct ranging from traditional to egalitarian is problematic and propose an alternative framework that takes the multidimensionality of gender ideologies into account. Using latent class analysis, they operationalize their gender ideology framework based on data from the 2008 European Values Study, of which eight European countries reflecting the spectrum of current work-family policies were selected. The authors examine the form in which gender ideologies cluster in the various countries. Five ideology profiles were identified: egalitarian, egalitarian essentialism, intensive parenting, moderate traditional, and traditional. The five ideology profiles were found in all countries, but with pronounced variation in size. Ideologies mixing gender essentialist and egalitarian views appear to have replaced traditional ideologies, even in countries offering some institutional support for gendered separate spheres.
Multidimensional scaling of musical time estimations.
Cocenas-Silva, Raquel; Bueno, José Lino Oliveira; Molin, Paul; Bigand, Emmanuel
2011-06-01
The aim of this study was to identify the psycho-musical factors that govern time evaluation in Western music from baroque, classic, romantic, and modern repertoires. The excerpts were previously found to represent variability in musical properties and to induce four main categories of emotions. 48 participants (musicians and nonmusicians) freely listened to 16 musical excerpts (lasting 20 sec. each) and grouped those that seemed to have the same duration. Then, participants associated each group of excerpts to one of a set of sine wave tones varying in duration from 16 to 24 sec. Multidimensional scaling analysis generated a two-dimensional solution for these time judgments. Musical excerpts with high arousal produced an overestimation of time, and affective valence had little influence on time perception. The duration was also overestimated when tempo and loudness were higher, and to a lesser extent, timbre density. In contrast, musical tension had little influence.
Multidimensional fractional Schrödinger equation
Rodrigues, M. M.; Vieira, N.
2012-11-01
This work investigates the multidimensional space-time fractional Schrödinger equation of the form (^C D_{0+}^α u)(t,x) = (iħ/2m)(^C ∇^β u)(t,x), with ħ the Planck constant divided by 2π, m the mass and u(t,x) the wave function of the particle. Here ^C D_{0+}^α and ^C ∇^β are Caputo fractional derivative operators, where α ∈ ]0,1] and β ∈ ]1,2]. The wave function is obtained using Laplace and Fourier transform methods, and a symbolic operational form of the solutions in terms of the Mittag-Leffler functions is exhibited. An expression is presented for the wave function and for the quantum mechanical probability density. Using the Banach fixed point theorem, the existence and uniqueness of solutions is studied for this kind of fractional differential equation.
Multidimensional evaluation on FR cycle systems
International Nuclear Information System (INIS)
Nakai, Ryodai; Fujii, Sumio; Takakuma, Katsuyuki; Katoh, Atsushi; Ono, Kiyoshi; Ohtaki, Akira; Shiotani, Hiroki
2004-01-01
This report presents some results of the multidimensional evaluation of various fast reactor cycle system concepts from an interim report of the 2nd phase of the ''Feasibility Study on Commercialized FR Cycle System''. The method is designed to give more objective and more quantitative evaluations to clarify candidate concepts for a commercialized system. Here we outline the current evaluation method from the five viewpoints of safety, economy, environment, resources and non-proliferation, with some trial evaluation results for cycles consisting of promising technologies in reactor, core and fuel, reprocessing and fuel manufacture. Moreover, we describe FR cycle deployment scenarios which set out the advantages and disadvantages of the cycles from the viewpoints of uranium resources and radioactive waste, based on long-term nuclear material mass flow analyses, as well as the advantages of deploying the FR cycle itself in comparison with alternative power supplies in terms of cost and benefit. (author)
Gender Ideologies in Europe: A Multidimensional Framework
Begall, Katia; Buchler, Sandra
2018-01-01
The authors argue, in line with recent research, that operationalizing gender ideology as a unidimensional construct ranging from traditional to egalitarian is problematic and propose an alternative framework that takes the multidimensionality of gender ideologies into account. Using latent class analysis, they operationalize their gender ideology framework based on data from the 2008 European Values Study, of which eight European countries reflecting the spectrum of current work–family policies were selected. The authors examine the form in which gender ideologies cluster in the various countries. Five ideology profiles were identified: egalitarian, egalitarian essentialism, intensive parenting, moderate traditional, and traditional. The five ideology profiles were found in all countries, but with pronounced variation in size. Ideologies mixing gender essentialist and egalitarian views appear to have replaced traditional ideologies, even in countries offering some institutional support for gendered separate spheres. PMID:29491532
Anti-deterministic behaviour of discrete systems that are less predictable than noise
Urbanowicz, Krzysztof; Kantz, Holger; Holyst, Janusz A.
2005-05-01
We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in recurrence plots for small thresholds. We show that although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions, and although some properties of AD data are similar to white noise, the AD dynamics is, in fact, less predictable than noise and hence different from pseudo-random number generators.
Quantum deterministic key distribution protocols based on the authenticated entanglement channel
International Nuclear Information System (INIS)
Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua
2010-01-01
Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.
Quantum deterministic key distribution protocols based on the authenticated entanglement channel
Energy Technology Data Exchange (ETDEWEB)
Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua [Department of Electronic Information Engineering, Nanchang University, Nanchang 330031 (China)], E-mail: znr21@163.com, E-mail: znr21@hotmail.com
2010-04-15
Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.
Deterministic and heuristic models of forecasting spare parts demand
Directory of Open Access Journals (Sweden)
Ivan S. Milojević
2012-04-01
Full Text Available Knowing the demand for spare parts is the basis for successful spare parts inventory management. Inventory management has two aspects. The first one is operational management: acting according to certain models and making decisions in specific situations which could not have been foreseen or have not been encompassed by the models. The second aspect is optimization of the model parameters by means of inventory management. Supply item demand (asset demand) is the expression of customers' needs in units in the desired time, and it is one of the most important parameters in inventory management. The basic task of the supply system is demand fulfillment. In practice, demand is expressed through requisition or request. Given the conditions in which inventory management is considered, demand can be: deterministic or stochastic, stationary or nonstationary, continuous or discrete, satisfied or unsatisfied. The applicable maintenance concept is determined by the technological level of development of the assets being maintained. For example, it is hard to imagine that the concept of self-maintenance can be applied to assets developed and put into use 50 or 60 years ago. Even less complex concepts cannot be applied to those vehicles that only have indicators of engine temperature, reacting only when the engine is overheated. This means that the maintenance concepts that can be applied are traditional preventive maintenance and corrective maintenance. In order to be applied in a real system, modeling and simulation methods require a completely regulated system, and that is not the case with this spare parts supply system. Therefore, this method, which also enables model development, cannot be applied. Deterministic models of forecasting are almost exclusively related to the concept of preventive maintenance. Maintenance procedures are planned in advance, in accordance with exploitation and time resources. Since the timing
Activity modes selection for project crashing through deterministic simulation
Directory of Open Access Journals (Sweden)
Ashok Mohanty
2011-12-01
Full Text Available Purpose: The time-cost trade-off problem addressed by CPM-based analytical approaches assumes unlimited resources and the existence of a continuous time-cost function. However, given the discrete nature of most resources, activities can often be crashed only stepwise. Activity crashing for a discrete time-cost function is also known as the activity modes selection problem in project management. This problem is known to be NP-hard. Sophisticated optimization techniques such as Dynamic Programming, Integer Programming, Genetic Algorithms, and Ant Colony Optimization have been used to find efficient solutions to the activity modes selection problem. The paper presents a simple method that can provide an efficient solution to the activity modes selection problem for project crashing. Design/methodology/approach: A simulation-based method implemented on an electronic spreadsheet is used to determine activity modes for project crashing. The method is illustrated with the help of an example. Findings: The paper shows that a simple approach based on a simple heuristic and deterministic simulation can give good results, comparable to sophisticated optimization techniques. Research limitations/implications: The simulation-based crashing method presented in this paper is developed to return satisfactory solutions but not necessarily an optimal solution. Practical implications: The use of spreadsheets for solving Management Science and Operations Research problems makes the techniques more accessible to practitioners. Spreadsheets provide a natural interface for model building, are easy to use in terms of inputs, solutions and report generation, and allow users to perform what-if analysis. Originality/value: The paper presents the application of simulation implemented on a spreadsheet to determine an efficient solution to the discrete time-cost trade-off problem.
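The spreadsheet-style deterministic approach can be illustrated by brute-force what-if enumeration on a toy three-activity project (data invented for the example): evaluate every combination of activity modes with a CPM forward pass and keep the cheapest one that meets the deadline.

```python
# Toy deterministic "what-if" enumeration for activity mode selection
# (invented data): each activity has discrete (duration, cost) modes; a CPM
# forward pass gives the project duration for each combination.
from itertools import product

# activity -> (predecessors, [(duration, cost), ...] from normal to crashed)
acts = {
    "A": ([],    [(4, 100), (3, 160)]),
    "B": (["A"], [(6, 200), (4, 300)]),
    "C": (["A"], [(5, 150), (4, 260)]),
}
order = ["A", "B", "C"]  # topological order

def duration(modes):
    """CPM forward pass: earliest-finish times; project duration is the max."""
    ef = {}
    for k in order:
        preds, opts = acts[k]
        ef[k] = opts[modes[k]][0] + max((ef[p] for p in preds), default=0)
    return max(ef.values())

def cheapest_within(deadline):
    """Enumerate all mode combinations; keep the cheapest meeting the deadline."""
    best = None
    for combo in product(*(range(len(acts[k][1])) for k in order)):
        modes = dict(zip(order, combo))
        cost = sum(acts[k][1][modes[k]][1] for k in order)
        if duration(modes) <= deadline and (best is None or cost < best[0]):
            best = (cost, modes)
    return best

best_cost, best_modes = cheapest_within(9)   # crashing A alone suffices: cost 510
```

Exhaustive enumeration is exponential in the number of activities, which is why the paper resorts to a heuristic; for spreadsheet-sized examples, however, the deterministic sweep above is exactly the kind of what-if analysis described.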
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
Directory of Open Access Journals (Sweden)
Ortega Antonio
2005-01-01
Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design, using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to give near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16,483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
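The Gibbs soft-assignment step that the standard DA method iterates, and that the paper approximates with cheaper distributions, can be sketched as follows; the data, codebook size, and cooling schedule here are illustrative only.

```python
# Simplified sketch of the Gibbs soft-assignment step at the core of DA for
# VQ design: at temperature T, each sample is softly assigned to codevectors
# with probabilities p(j|x) proportional to exp(-||x - c_j||^2 / T), and
# codevectors are re-estimated as the weighted means. Illustrative data only;
# the paper's reduced-complexity distributions replace this exact Gibbs form.
import numpy as np

def da_step(X, C, T):
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # squared distances
    logp = -d2 / T
    logp -= logp.max(axis=1, keepdims=True)               # stabilize exp
    P = np.exp(logp)
    P /= P.sum(axis=1, keepdims=True)                     # Gibbs posteriors
    return (P.T @ X) / P.sum(axis=0)[:, None]             # weighted means

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 0.3, (50, 1)), rng.normal(3, 0.3, (50, 1))])
C = np.array([[-0.1], [0.1]])
for T in (10.0, 1.0, 0.1, 0.01):                          # cooling schedule
    for _ in range(20):
        C = da_step(X, C, T)
```

At high temperature the assignments are nearly uniform; as T decreases they sharpen toward hard nearest-neighbor assignment, which is what lets DA evade the poor local minima that trap the generalized Lloyd algorithm.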
The necessity-concerns framework: a multidimensional theory benefits from multidimensional analysis.
Phillips, L Alison; Diefenbach, Michael A; Kronish, Ian M; Negron, Rennie M; Horowitz, Carol R
2014-08-01
Patients' medication-related concerns and necessity-beliefs predict adherence. Evaluation of the potentially complex interplay of these two dimensions has been limited because of methods that reduce them to a single dimension (difference scores). We use polynomial regression to assess the multidimensional effect of stroke-event survivors' medication-related concerns and necessity-beliefs on their adherence to stroke-prevention medication. Survivors (n = 600) rated their concerns, necessity-beliefs, and adherence to medication. Confirmatory and exploratory polynomial regression determined the best-fitting multidimensional model. As posited by the necessity-concerns framework (NCF), the greatest and lowest adherence was reported by those with strong necessity-beliefs/weak concerns and strong concerns/weak necessity-beliefs, respectively. However, as could not be assessed using a difference-score model, patients with ambivalent beliefs were less adherent than those exhibiting indifference. Polynomial regression allows for assessment of the multidimensional nature of the NCF. Clinicians/researchers should be aware that concerns and necessity dimensions are not polar opposites.
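The difference between a difference-score model and a full polynomial surface can be sketched on synthetic data (invented here, not the survivors' data): the quadratic design matrix retains separate necessity, concerns, and interaction terms instead of collapsing them into one score.

```python
# Synthetic illustration (invented data): regress adherence on necessity (nb),
# concerns (cc), and second-order terms instead of the single difference nb - cc.
import numpy as np

rng = np.random.default_rng(1)
nb = rng.uniform(1, 5, 500)                    # necessity beliefs
cc = rng.uniform(1, 5, 500)                    # concerns
y = 2.0 + 0.8 * nb - 0.5 * cc - 0.3 * nb * cc + rng.normal(0, 0.05, 500)

# Full quadratic response surface: [1, nb, cc, nb^2, nb*cc, cc^2]
X = np.column_stack([np.ones_like(nb), nb, cc, nb**2, nb * cc, cc**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[4] estimates the nb*cc interaction, which a difference score collapses away.
```

A difference-score model forces the nb and cc coefficients to be equal and opposite and discards the interaction, which is exactly why ambivalence and indifference become indistinguishable under it.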
The Necessity-Concerns-Framework: A Multidimensional Theory Benefits from Multidimensional Analysis
Phillips, L. Alison; Diefenbach, Michael; Kronish, Ian M.; Negron, Rennie M.; Horowitz, Carol R.
2014-01-01
Background Patients’ medication-related concerns and necessity-beliefs predict adherence. Evaluation of the potentially complex interplay of these two dimensions has been limited because of methods that reduce them to a single dimension (difference scores). Purpose We use polynomial regression to assess the multidimensional effect of stroke-event survivors’ medication-related concerns and necessity-beliefs on their adherence to stroke-prevention medication. Methods Survivors (n=600) rated their concerns, necessity-beliefs, and adherence to medication. Confirmatory and exploratory polynomial regression determined the best-fitting multidimensional model. Results As posited by the Necessity-Concerns Framework (NCF), the greatest and lowest adherence was reported by those with strong necessity-beliefs/weak concerns and strong concerns/weak necessity-beliefs, respectively. However, as could not be assessed using a difference-score model, patients with ambivalent beliefs were less adherent than those exhibiting indifference. Conclusions Polynomial regression allows for assessment of the multidimensional nature of the NCF. Clinicians/Researchers should be aware that concerns and necessity dimensions are not polar opposites. PMID:24500078
International Nuclear Information System (INIS)
Chen, Chang-Kuo; Hou, Yi-You; Luo, Cheng-Long
2012-01-01
Highlights: ► An efficient design procedure for the deterministic response time design of nuclear I and C systems. ► We model the concurrent operations based on sequence diagrams and Petri nets. ► The model can achieve deterministic behavior by using symbolic time representation. ► An illustrative example of the bistable processor logic is given. - Abstract: This study is concerned with a deterministic response time design for computer-based systems in the nuclear industry. In the current approach, Petri nets are used to model the requirements of a system specified with sequence diagrams. Also, linear logic is proposed to characterize state changes in the Petri net model accurately, using symbolic time representation for the purpose of acquiring deterministic behavior. An illustrative example of the bistable processor logic is provided to demonstrate the practicability of the proposed approach.
Recent achievements of the neo-deterministic seismic hazard assessment in the CEI region
International Nuclear Information System (INIS)
Panza, G.F.; Vaccari, F.; Kouteva, M.
2008-03-01
A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales: regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contributions of the earthquake source and of the seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study, some examples from the CEI region, concerning both regional seismic hazard assessment and seismic microzonation of selected metropolitan areas, are shown. (author)
Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data
U.S. Environmental Protection Agency — This dataset documents the source of the data analyzed in the manuscript "Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII...
Implemented state automorphisms within the logico-algebraic approach to deterministic mechanics
Energy Technology Data Exchange (ETDEWEB)
Barone, F [Naples Univ. (Italy). Ist. di Matematica della Facolta di Scienze]
1981-01-31
The new notion of S₁-implemented state automorphism is introduced and characterized in quantum logic. Implemented pure state automorphisms are then characterized in deterministic mechanics as automorphisms of the Borel structure on the phase space.
International Nuclear Information System (INIS)
Azadeh, A.; Ghaderi, S.F.; Omrani, H.
2009-01-01
This paper presents a deterministic approach for the performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units have been considered for the purpose of this study. Previous studies have generally used input-output DEA models for the benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach, since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. Moreover, the DEA models are verified and validated by PCA, Spearman and Kendall's Tau correlation techniques, whereas previous studies lack these verification and validation features. Also, both input- and output-oriented DEA models are used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for the assessment and optimization of power distribution units in Iran
Daciuk, J; Champarnaud, JM; Maurel, D
2003-01-01
This paper compares various methods for constructing minimal, deterministic, acyclic, finite-state automata (recognizers) from sets of words. Incremental, semi-incremental, and non-incremental methods have been implemented and evaluated.
Handbook of EOQ inventory problems stochastic and deterministic models and applications
Choi, Tsan-Ming
2013-01-01
This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
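As context for the book's starting point, the basic deterministic EOQ model balances fixed ordering cost against holding cost; the closed-form optimum is standard, and the numbers below are illustrative.

```python
# Classic deterministic EOQ: with annual demand D, fixed ordering cost K, and
# per-unit annual holding cost h, total cost is minimized at Q* = sqrt(2*D*K/h).
import math

def eoq(D, K, h):
    """Optimal order quantity for the deterministic single-echelon EOQ model."""
    return math.sqrt(2.0 * D * K / h)

Q = eoq(D=1200, K=50, h=6)   # -> about 141.42 units per order
```

At Q*, the annual ordering cost D*K/Q and the annual holding cost h*Q/2 are equal, which is the balance the stochastic and multi-echelon extensions in the book build upon.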
National Research Council Canada - National Science Library
Michalowicz, Joseph V; Nichols, Jonathan M; Bucholtz, Frank
2008-01-01
Understanding the limitations to detecting deterministic signals in the presence of noise, especially additive, white Gaussian noise, is of importance for the design of LPI systems and anti-LPI signal defense...
Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992
Energy Technology Data Exchange (ETDEWEB)
Rice, A.F.; Roussin, R.W. [eds.]
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Phase conjugation with random fields and with deterministic and random scatterers
International Nuclear Information System (INIS)
Gbur, G.; Wolf, E.
1999-01-01
The theory of distortion correction by phase conjugation, developed since the discovery of this phenomenon many years ago, applies to situations when the field that is conjugated is monochromatic and the medium with which it interacts is deterministic. In this Letter a generalization of the theory is presented that applies to phase conjugation of partially coherent waves interacting with either deterministic or random weakly scattering nonabsorbing media. copyright 1999 Optical Society of America
Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads
Directory of Open Access Journals (Sweden)
Králik Juraj
2014-12-01
This paper presents experiences from the deterministic and probabilistic analysis of the reliability of a communication bridge structure's resistance to extreme loads - wind and earthquake. The efficiency of the bracing systems is assessed using the example of the steel bridge between two NPP buildings. The advantages and disadvantages of deterministic and probabilistic analysis of structural resistance are discussed, and the advantages of utilizing the LHS method to analyze the safety and reliability of structures are presented
A study of multidimensional modeling approaches for data warehouse
Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani
2016-08-01
Data warehouse systems are used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art review of multidimensional modeling design.
A Conceptual Model for Multidimensional Analysis of Documents
Ravat, Franck; Teste, Olivier; Tournier, Ronan; Zurlfluh, Gilles
Data warehousing and OLAP are mainly used for the analysis of transactional data. Nowadays, with the evolution of Internet, and the development of semi-structured data exchange format (such as XML), it is possible to consider entire fragments of data such as documents as analysis sources. As a consequence, an adapted multidimensional analysis framework needs to be provided. In this paper, we introduce an OLAP multidimensional conceptual model without facts. This model is based on the unique concept of dimensions and is adapted for multidimensional document analysis. We also provide a set of manipulation operations.
Deterministic Modeling of the High Temperature Test Reactor
International Nuclear Information System (INIS)
Ortensi, J.; Cogliati, J.J.; Pope, M.A.; Ferrer, R.M.; Ougouag, A.M.
2010-01-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Power (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal z full core solver used in this study and is based on the Green's Function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control
A DYNAMIC INDEXING SCHEME FOR MULTIDIMENSIONAL DATA
Directory of Open Access Journals (Sweden)
Manuk G. Manukyan
2018-03-01
We present a new dynamic index structure for multidimensional data. The index structure is based on an extended grid file concept. Strengths and weaknesses of grid files were analyzed; based on that analysis, we propose to strengthen the grid file concept by treating its stripes as linear hash tables, introducing the concept of a chunk, and representing the grid file structure as a graph. As a result, we significantly reduce the number of disk operations. Efficient algorithms for storing and accessing the index directory are proposed in order to minimize memory usage and lookup complexity, and complexity estimates for these algorithms are presented. A comparison of our approach with other known approaches to supporting an effective grid file structure shows the effectiveness of the suggested metadata storage environment. An estimation of directory size is presented. A prototype supporting our grid file concept has been created and experimentally compared with MongoDB (a renowned NoSQL database). Comparison results show the effectiveness of our approach for point lookup, lookup over wide ranges, and closest-object lookup when considering more than one dimension, as well as better memory usage.
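The core bucketing idea behind grid-style multidimensional indexes can be sketched as follows; the cell size and points are illustrative, and the paper's actual structure adds linear-hash stripes and a graph-shaped directory on top of this:

```python
# Toy 2-D grid index: points are bucketed into cells ("chunks") keyed by
# integer cell coordinates, so point and range lookups touch only the
# relevant cells instead of scanning all data.

from collections import defaultdict

class GridIndex:
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)   # cell coords -> list of points

    def _cell(self, point):
        return tuple(int(c // self.cell_size) for c in point)

    def insert(self, point):
        self.cells[self._cell(point)].append(point)

    def range_query(self, low, high):
        """All stored points p with low[i] <= p[i] <= high[i] in each dimension."""
        lo, hi = self._cell(low), self._cell(high)
        out = []
        for cx in range(lo[0], hi[0] + 1):
            for cy in range(lo[1], hi[1] + 1):
                for p in self.cells.get((cx, cy), []):
                    if all(l <= c <= h for l, c, h in zip(low, p, high)):
                        out.append(p)
        return out

idx = GridIndex(cell_size=10.0)
for p in [(3, 4), (12, 7), (25, 25), (14, 18)]:
    idx.insert(p)
hits = idx.range_query((0, 0), (15, 20))
```

A wide-range query visits only the cells overlapping the query box, which is the access pattern the paper optimizes at the disk level.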
Statistical segmentation of multidimensional brain datasets
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques like partial volume effects (PVE), processing speed and difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold-standard. Our results were more robust and closer to the gold-standard.
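Stage 2 of the pipeline rests on EM estimation of a Gaussian mixture. A 1-D toy version with synthetic intensities shows the mechanics (the paper's version is multivariate over co-registered T1/T2 with full covariance matrices):

```python
# EM for a two-component 1-D Gaussian mixture, followed by classification of
# each sample by its weighted likelihood - a toy stand-in for tissue-class
# segmentation. Data are synthetic, not MRI intensities.

import math, random

random.seed(0)
data = [random.gauss(30, 3) for _ in range(200)] + \
       [random.gauss(70, 4) for _ in range(200)]

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

mus, sigmas, weights = [20.0, 80.0], [10.0, 10.0], [0.5, 0.5]
for _ in range(50):                      # EM iterations
    # E-step: posterior responsibility of each component for each sample
    resp = []
    for x in data:
        p = [w * gauss_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
        tot = sum(p)
        resp.append([pi / tot for pi in p])
    # M-step: re-estimate parameters from responsibilities
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mus[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var = sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, data)) / nk
        sigmas[k] = math.sqrt(var)
        weights[k] = nk / len(data)

labels = [0 if gauss_pdf(x, mus[0], sigmas[0]) * weights[0] >=
               gauss_pdf(x, mus[1], sigmas[1]) * weights[1] else 1
          for x in data]
```

With well-separated components the estimated means converge near the generating values (30 and 70) and the classification recovers the two groups.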
Proposed empirical gas geothermometer using multidimensional approach
Energy Technology Data Exchange (ETDEWEB)
Supranto; Sudjatmiko; Toha, Budianto; Wintolo, Djoko; Alhamid, Idrus
1996-01-24
Several formulas of surface gas geothermometer have been developed for use in geothermal exploration, e.g. by D'Amore and Panichi (1980) and by Darling and Talbot (1992). This paper presents an empirical gas geothermometer formula using a multidimensional approach. The formula was derived from 37 selected chemical data of the 5 production wells from the Awibengkok Geothermal Volcanic Field in West Java. Seven components, i.e., gas volume percentage, CO_{2}, H_{2}S, CH_{4}, H_{2}, N_{2}, and NH_{3}, from these data are utilized to develop three model equations which represent the relationship between temperature and gas composition. These formulas are then tested against fumarolic chemical data from the Sibual-buali Area (North Sumatera) and the Ringgit Area (South Sumatera). Preliminary results indicate that gas volume percentage and the H_{2}S and CO_{2} concentrations play a significant role as gas geothermometer inputs. Further verification is currently in progress.
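In spirit, the "multidimensional approach" is a multivariate regression of temperature on gas-composition variables. A minimal ordinary-least-squares sketch with made-up log-ratio predictors (not the Awibengkok data, and only two predictors instead of the paper's seven components):

```python
# Fit T = b0 + b1*r1 + b2*r2 by OLS via the normal equations, using a small
# Gaussian-elimination solver. Predictors r1, r2 stand in for log gas-ratio
# terms; the "wells" and the coefficients 150/40/25 are invented.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_ols(X, y):
    """Least squares via normal equations X^T X b = X^T y (X has a 1s column)."""
    n = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    return solve(XtX, Xty)

# synthetic "wells": design rows [1, r1, r2]; temperatures from an exact
# linear law so the fit should recover the coefficients
X = [[1.0, r1, r2] for r1, r2 in
     [(0.2, 1.1), (0.5, 0.9), (0.8, 1.4), (1.1, 0.7), (1.5, 2.0)]]
y = [150 + 40 * row[1] + 25 * row[2] for row in X]
b = fit_ols(X, y)
```

Because the synthetic temperatures follow the linear law exactly, the recovered coefficients match it; with real well data the residuals would measure the geothermometer's calibration error.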
Multi-dimensional cosmology and GUP
Energy Technology Data Exchange (ETDEWEB)
Zeynali, K.; Motavalli, H. [Department of Theoretical Physics and Astrophysics, University of Tabriz, 51666-16471, Tabriz (Iran, Islamic Republic of); Darabi, F., E-mail: k.zeinali@arums.ac.ir, E-mail: f.darabi@azaruniv.edu, E-mail: motavalli@tabrizu.ac.ir [Department of Physics, Azarbaijan Shahid Madani University, 53714-161, Tabriz (Iran, Islamic Republic of)
2012-12-01
We consider a multidimensional cosmological model with FRW type metric having 4-dimensional space-time and d-dimensional Ricci-flat internal space sectors with a higher dimensional cosmological constant. We study the classical cosmology in commutative and GUP cases and obtain the corresponding exact solutions for negative and positive cosmological constants. It is shown that for negative cosmological constant, the commutative and GUP cases result in finite size universes with smaller size and longer ages, and larger size and shorter age, respectively. For positive cosmological constant, the commutative and GUP cases result in infinite size universes having late time accelerating behavior in good agreement with current observations. The accelerating phase starts in the GUP case sooner than the commutative case. In both commutative and GUP cases, and for both negative and positive cosmological constants, the internal space is stabilized to the sub-Planck size, at least within the present age of the universe. Then, we study the quantum cosmology by deriving the Wheeler-DeWitt equation, and obtain the exact solutions in the commutative case and the perturbative solutions in GUP case, to first order in the GUP small parameter, for both negative and positive cosmological constants. It is shown that good correspondence exists between the classical and quantum solutions.
Multi-dimensional cosmology and GUP
International Nuclear Information System (INIS)
Zeynali, K.; Motavalli, H.; Darabi, F.
2012-01-01
We consider a multidimensional cosmological model with FRW type metric having 4-dimensional space-time and d-dimensional Ricci-flat internal space sectors with a higher dimensional cosmological constant. We study the classical cosmology in commutative and GUP cases and obtain the corresponding exact solutions for negative and positive cosmological constants. It is shown that for negative cosmological constant, the commutative and GUP cases result in finite size universes with smaller size and longer ages, and larger size and shorter age, respectively. For positive cosmological constant, the commutative and GUP cases result in infinite size universes having late time accelerating behavior in good agreement with current observations. The accelerating phase starts in the GUP case sooner than the commutative case. In both commutative and GUP cases, and for both negative and positive cosmological constants, the internal space is stabilized to the sub-Planck size, at least within the present age of the universe. Then, we study the quantum cosmology by deriving the Wheeler-DeWitt equation, and obtain the exact solutions in the commutative case and the perturbative solutions in GUP case, to first order in the GUP small parameter, for both negative and positive cosmological constants. It is shown that good correspondence exists between the classical and quantum solutions
Convergence almost everywhere of multidimensional vectors
International Nuclear Information System (INIS)
El Berdan, Kassem; Zeineddine, Hassan
2000-01-01
Let X be a reflexive Banach space, Ω a measure space, and T_1, ..., T_d linear, non-commuting operators on L^1(Ω, X) = L^1(X) which are strictly contracting in L^1(X) (i.e., there exist α_j ∈ ]0,1[ such that ||T_j f|| ≤ α_j ||f|| for all j = 1, ..., d and f ∈ L^1(X)) and contracting in L^∞(X). We prove a maximal inequality for the averages A_n(T_1, ..., T_d)f = (1/n^d) Σ_{i_1=0}^{n-1} ... Σ_{i_d=0}^{n-1} T_1^{i_1} ... T_d^{i_d} f and their convergence almost everywhere for all f in L^1(X). This result generalizes Chacon's theorem (Chacon 1962) to the multidimensional case for this class of operators. Finally, we give two operators which are strictly contracting in L^1(X) and contracting in L^∞(X) such that the convergence of the averages is not trivial. (author)
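For intuition, the averages can be evaluated numerically in a scalar toy case where each T_j is multiplication by a constant a_j with |a_j| < 1 (a stand-in for strict contraction, with d = 2): they collapse to a product of finite geometric sums and tend to 0 as n grows.

```python
# A_n f = n^{-2} * sum_{i1=0}^{n-1} sum_{i2=0}^{n-1} T1^{i1} T2^{i2} f
# with T_j f = a_j * f, checked against the closed geometric-sum form.

def average(a1, a2, f, n):
    total = 0.0
    for i1 in range(n):
        for i2 in range(n):
            total += (a1 ** i1) * (a2 ** i2) * f
    return total / n ** 2

def closed_form(a1, a2, f, n):
    geo = lambda a: (1 - a ** n) / (1 - a)   # finite geometric sum
    return geo(a1) * geo(a2) * f / n ** 2

approx = average(0.5, 0.8, 1.0, 200)   # -> small, O(1/n^2)
```

The contracting case is the trivial regime (averages vanish); the abstract's point is that a maximal inequality still controls the averages for genuinely non-commuting operator families where convergence is not obvious.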
Multidimensional Scaling Visualization Using Parametric Similarity Indices
Directory of Open Access Journals (Sweden)
J. A. Tenreiro Machado
2015-03-01
In this paper, we apply multidimensional scaling (MDS) and parametric similarity indices (PSI) in the analysis of complex systems (CS). Each CS is viewed as a dynamical system, exhibiting an output time-series to be interpreted as a manifestation of its behavior. We start by adopting a sliding window to sample the original data into several consecutive time periods. Second, we define a given PSI for tracking pieces of data. We then compare the windows for different values of the parameter, and we generate the corresponding MDS maps of 'points'. Third, we use Procrustes analysis to linearly transform the MDS charts for maximum superposition and to build a global MDS map of 'shapes'. This final plot captures the time evolution of the phenomena and is sensitive to the PSI adopted. The generalized correlation, the Minkowski distance and four entropy-based indices are tested. The proposed approach is applied to the Dow Jones Industrial Average stock market index and the Europe Brent Spot Price FOB time-series.
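The two geometric steps of the pipeline can be sketched with classical (Torgerson) MDS and an orthogonal Procrustes rotation; synthetic points stand in for the windowed financial data, and numpy is assumed available:

```python
# Classical MDS embeds a Euclidean distance matrix as 2-D "points"; a
# Procrustes rotation then superposes one configuration onto another, as
# used in the paper to align successive MDS charts.

import numpy as np

def classical_mds(D, k=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]          # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

def procrustes_rotation(A, B):
    """Orthogonal R minimizing ||A @ R - B||_F (configurations pre-centered)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))
X -= X.mean(axis=0)                        # centered "true" configuration
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D)                       # recovers X up to rotation/reflection
R = procrustes_rotation(Y, X)
err = np.linalg.norm(Y @ R - X)
```

Since the distances are exactly Euclidean distances of 2-D points, the embedding recovers the configuration up to an orthogonal transform, and the Procrustes step removes that ambiguity.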
Energy Poverty in Europe: A Multidimensional Approach
Directory of Open Access Journals (Sweden)
Carlo Andrea Bollino
2017-12-01
With the European Commission’s “Third Energy Package”, the challenges posed by energy poverty have been recently acknowledged by European legislation. The paper develops a synthetic indicator of energy poverty for the purpose of assessing households’ well-being across different domains of inequality in access to energy services and to a healthy domestic environment. These dimensions are broadly defined in terms of energy affordability and thermal efficiency, two of the main manifestations of energy poverty. The analysis focuses on Europe and expands on existing economic literature by employing a fuzzy analysis for the definition of a multidimensional energy poverty index, which is then used to investigate the role of individual and household characteristics in shaping energy poverty. We find that during the European crisis energy poverty has been more stable than monetary poverty, and that thermal efficiency plays a crucial role in shaping individual and countries’ average degrees of energy poverty. JEL codes: I32; Q41; D10; D63
Control of multidimensional systems on complex network
Bagnoli, Franco; Battistelli, Giorgio; Chisci, Luigi; Fanelli, Duccio
2017-01-01
Multidimensional systems coupled via complex networks are widespread in nature and thus frequently invoked for a large plethora of interesting applications. From ecology to physics, individual entities in mutual interactions are grouped in families, homogeneous in kind. These latter interact selectively, through a sequence of self-consistently regulated steps, whose deeply rooted architecture is stored in the assigned matrix of connections. The asymptotic equilibrium eventually attained by the system, and its associated stability, can be assessed by employing standard nonlinear dynamics tools. For many practical applications, it is however important to externally drive the system towards a desired equilibrium, which is resilient, hence stable, to external perturbations. To this end we here consider a system made up of N interacting populations which evolve according to general rate equations, bearing attributes of universality. One species is added to the pool of interacting families and used as a dynamical controller to induce novel stable equilibria. Use can be made of the root locus method to shape the needed control, in terms of intrinsic reactivity and adopted protocol of injection. The proposed method is tested on both synthetic and real data, enabling us to demonstrate its robustness and versatility. PMID:28892493
Indexación multidimensional configurable
Directory of Open Access Journals (Sweden)
José L. Zechinelli M.
2004-01-01
There is a large number of indexing methods for multidimensional data. Their fundamental idea is to generate dynamic structures for organizing complex objects in such a way that they can be queried quickly and effectively. Although taxonomies exist that define the properties of each indexing method, it is difficult for a non-expert user to decide which method is appropriate for a particular data set. In this article we describe the architecture of a framework that offers tools for the analysis and implementation of various multidimensional indexing methods and that helps a user determine the most suitable method for a given data set. We also analyze certain properties of these methods and the types of queries that will be performed on them.
Phase space eigenfunctions of multidimensional quadratic Hamiltonians
International Nuclear Information System (INIS)
Dodonov, V.V.; Man'ko, V.I.
1986-01-01
We obtain explicit expressions for phase space eigenfunctions (PSE), i.e. Weyl symbols of dyadic operators like |n⟩⟨m|, which solve the Schroedinger equation with a Hamiltonian that is a quite arbitrary multidimensional quadratic form in the operators of Cartesian coordinates and their conjugate momenta, with time-dependent coefficients. It is shown that for an arbitrary quadratic Hamiltonian one can always construct a set of completely factorized PSE which are products of N factors, each factor depending only on two arguments for n ≠ m and on a single argument for n = m. These arguments are nothing but constants of motion of the corresponding classical system. The PSE are expressed in terms of associated Laguerre polynomials in the case of a discrete spectrum and in terms of Airy functions in the continuous spectrum case. Three examples are considered: a harmonic oscillator with a time-dependent frequency, a charged particle in a nonstationary uniform magnetic field, and a particle in a time-dependent uniform potential field. (orig.)
Experimental verification of multidimensional quantum steering
Li, Che-Ming; Lo, Hsin-Pin; Chen, Liang-Yu; Yabushita, Atsushi
2018-03-01
Quantum steering enables one party to communicate with another remote party even if the sender is untrusted. Such characteristics of quantum systems not only provide direct applications to quantum information science, but are also conceptually important for distinguishing between quantum and classical resources. While concrete illustrations of steering have been shown in several experiments, quantum steering has not been certified for higher dimensional systems. Here, we introduce a simple method to experimentally certify two different kinds of quantum steering: Einstein-Podolsky-Rosen (EPR) steering and single-system (SS) steering (i.e., temporal steering), for dimensionality (d) up to d = 16. The former reveals the steerability among bipartite systems, whereas the latter manifests itself in single quantum objects. We use multidimensional steering witnesses to verify EPR steering of polarization-entangled pairs and SS steering of single photons. The ratios between the measured witnesses and the maximum values achieved by classical mimicries are observed to increase with d for both EPR and SS steering. The designed scenario offers a new method to study further the genuine multipartite steering of large dimensionality and potential uses in quantum information processing.
Analysis of Multidimensional Poverty: Theory and Case Studies ...
International Development Research Centre (IDRC) Digital Library (Canada)
2009-08-18
Aug 18, 2009 ... of applying a factorial technique, Multiple Correspondence Analysis, to poverty analysis.
Van der Zee, KI; Van Oudenhoven, JP
2000-01-01
In today's global business environment, executive work is becoming more international in orientation. Several skills and traits may underlie executive success in an international environment. The Multicultural Personality Questionnaire was developed as a multidimensional instrument aimed at
Capturing Complex Multidimensional Data in Location-Based Data Warehouses
DEFF Research Database (Denmark)
Timko, Igor; Pedersen, Torben Bach
2004-01-01
Motivated by the increasing need to handle complex multidimensional data in location-based data warehouses, this paper proposes a powerful data model that is able to capture the complexities of such data. The model provides a foundation for handling complex transportation infrastructures...
Benefits of Multidimensional Measures of Child Well Being in China.
Gatenio Gabel, Shirley; Zhang, Yiwei
2017-11-06
In recent decades, measures of child well-being have evolved from single dimension to multidimensional measures. Multi-dimensional measures deepen and broaden our understanding of child well-being and inform us of areas of neglect. Child well-being in China today is measured through proxy measures of household need. This paper discusses the evolution of child well-being measures more generally, explores the benefits of positive indicators and multiple dimensions in formulating policy, and then reviews efforts to date by the Chinese government, researchers, and non-governmental and intergovernmental organizations to develop comprehensive multidimensional measures of child well-being in China. The domains and their potential interactions, as well as data sources and availability, are presented. The authors believe that child well-being in China would benefit from the development of a multidimensional index and that there is sufficient data to develop such an index.
A multidimensional subdiffusion model: An arbitrage-free market
International Nuclear Information System (INIS)
Li Guo-Hua; Zhang Hong; Luo Mao-Kang
2012-01-01
To capture the subdiffusive characteristics of financial markets, the subordinated process, directed by the inverse α-stable subordinator S_α(t) for 0 < α < 1, has been employed as the model of asset prices. In this article, we introduce a multidimensional subdiffusion model that has a bond and K correlated stocks. The stock price process is a multidimensional subdiffusion process directed by the inverse α-stable subordinator. This model describes the period of stagnation for each stock and the behavior of the dependency between multiple stocks. Moreover, we derive the multidimensional fractional backward Kolmogorov equation for the subordinated process using the Laplace transform technique. Finally, using a martingale approach, we prove that the multidimensional subdiffusion model is arbitrage-free, and also give an arbitrage-free pricing rule for contingent claims associated with the martingale measure. (interdisciplinary physics and related areas of science and technology)
On new physics searches with multidimensional differential shapes
Ferreira, Felipe; Fichet, Sylvain; Sanz, Veronica
2018-03-01
In the context of upcoming new physics searches at the LHC, we investigate the impact of multidimensional differential rates in typical LHC analyses. We discuss the properties of shape information, and argue that multidimensional rates bring limited information in the scope of a discovery, but can have a large impact on model discrimination. We also point out subtleties about systematic uncertainties cancellations and the Cauchy-Schwarz bound on interference terms.
An Analysis of Multi-dimensional Gender Inequality in Pakistan
Abdul Hamid; Aisha M. Ahmed
2011-01-01
Women make almost half of the population of Pakistan. They also contribute significantly to economic and social growth. However, in developing countries like Pakistan, women usually suffer from multidimensional inequality of opportunities leading to multidimensional poverty. The dimensions of family, women identity, health, education and women access to economic resources and employment contribute significantly to the discrimination of women. The provision of more opportunities to women in th...
On multidimensional item response theory -- a coordinate free approach
Antal, Tamás
2007-01-01
A coordinate system free definition of complex structure multidimensional item response theory (MIRT) for dichotomously scored items is presented. The point of view taken emphasizes the possibilities and subtleties of understanding MIRT as a multidimensional extension of the ``classical'' unidimensional item response theory models. The main theorem of the paper is that every monotonic MIRT model looks the same; they are all trivial extensions of univariate item response theory.
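A compensatory multidimensional extension of the 2PL model, of the kind the paper treats coordinate-free, makes the response probability depend on the latent vector θ only through a linear combination a·θ + d; the parameters below are illustrative:

```python
# Multidimensional 2PL item response function: P(correct | theta) is a
# logistic function of the linear combination a . theta + d, so the model
# is monotonic along the direction of the discrimination vector a.

import math

def m2pl_prob(a, d, theta):
    z = sum(ai * ti for ai, ti in zip(a, theta)) + d
    return 1.0 / (1.0 + math.exp(-z))

a = [1.2, 0.4]          # hypothetical discrimination vector (2 latent dims)
d = -0.5                # hypothetical easiness intercept
p_low  = m2pl_prob(a, d, [-1.0, -1.0])
p_high = m2pl_prob(a, d, [ 1.0,  1.0])
```

Any invertible change of coordinates in θ-space, absorbed into a, leaves the probabilities unchanged, which is the coordinate-free point the paper emphasizes.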
Background elimination methods for multidimensional coincidence γ-ray spectra
International Nuclear Information System (INIS)
Morhac, M.
1997-01-01
In the paper new methods to separate useful information from background in one, two, three and multidimensional spectra (histograms) measured in large multidetector γ-ray arrays are derived. The sensitive nonlinear peak clipping algorithm is the basis of the methods for estimation of the background in multidimensional spectra. The derived procedures are simple and therefore have a very low cost in terms of computing time. (orig.)
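The peak-clipping idea can be illustrated in 1-D: repeatedly clip each channel against the average of its neighbours at growing distance, which erodes peaks while preserving a slowly varying background. This is a simplified SNIP-style pass on a synthetic spectrum, not the authors' exact algorithm:

```python
# 1-D peak clipping: at clipping distance p, each channel is replaced by
# min(value, average of the two channels p away). Growing p removes
# progressively wider peaks; what remains estimates the background.

def clip_background(spectrum, iterations):
    bg = list(spectrum)
    n = len(bg)
    for p in range(1, iterations + 1):       # growing clipping window
        new = bg[:]
        for i in range(p, n - p):
            new[i] = min(bg[i], (bg[i - p] + bg[i + p]) / 2)
        bg = new
    return bg

# flat background of 10 counts plus one sharp synthetic "peak" at channel 20
spectrum = [10.0] * 40
for i, extra in zip(range(18, 23), [5, 40, 100, 40, 5]):
    spectrum[i] += extra
background = clip_background(spectrum, iterations=8)
net = [s - b for s, b in zip(spectrum, background)]
```

Subtracting the clipped estimate leaves the net peak on a near-zero baseline; the paper's contribution is the extension of this cheap nonlinear filter to two-, three- and higher-dimensional coincidence spectra.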
Modelling of multidimensional quantum systems by the numerical functional integration
International Nuclear Information System (INIS)
Lobanov, Yu.Yu.; Zhidkov, E.P.
1990-01-01
The employment of numerical functional integration for the description of multidimensional systems in quantum and statistical physics is considered. For multiple functional integrals with respect to Gaussian measures in complete separable metric spaces, new approximation formulas are constructed that are exact on a class of polynomial functionals of a given total degree. The use of the formulas is demonstrated on the example of computing the Green function and the ground-state energy in the multidimensional Calogero model. 15 refs.; 2 tabs
Fatigue and multidimensional disease severity in chronic obstructive pulmonary disease
Directory of Open Access Journals (Sweden)
Inal-Ince Deniz
2010-06-01
Abstract Background and aims: Fatigue is associated with longitudinal ratings of health in patients with chronic obstructive pulmonary disease (COPD). Although the degree of airflow obstruction is often used to grade disease severity in patients with COPD, multidimensional grading systems have recently been developed. The aim of this study was to investigate the relationship between perceived and actual fatigue level and multidimensional disease severity in patients with COPD. Materials and methods: Twenty-two patients with COPD (aged 52-74 years) took part in the study. Multidimensional disease severity was measured using the SAFE and BODE indices. Perceived fatigue was assessed using the Fatigue Severity Scale (FSS) and the Fatigue Impact Scale (FIS). Peripheral muscle endurance was evaluated using the number of sit-ups, squats, and modified push-ups that each patient could do. Results: Thirteen patients (59%) had severe fatigue, and their St George's Respiratory Questionnaire scores were significantly higher. Conclusions: Peripheral muscle endurance and fatigue perception in patients with COPD was related to multidimensional disease severity measured with both the SAFE and BODE indices. Improvements in perceived and actual fatigue levels may positively affect multidimensional disease severity and health status in COPD patients. Further research is needed to investigate the effects of fatigue perception and exercise training on patients with different stages of multidimensional COPD severity.
Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.
Kang, Yun; Lanchier, Nicolas
2011-06-01
We investigate the impact of Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the
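The deterministic two-patch model can be sketched as two coupled ODEs with cubic Allee growth g(x) = x(1-x)(x-a) plus linear dispersal, integrated by forward Euler; parameter values are illustrative, but the weak/strong-dispersal contrast mirrors the regimes discussed above:

```python
# Two-patch Allee model: du/dt = g(u) + D*(v-u), dv/dt = g(v) + D*(u-v),
# with Allee threshold a. Patch 1 starts established (u=1), patch 2 empty.

def simulate(u0, v0, a, dispersal, dt=0.05, steps=4000):
    g = lambda x: x * (1 - x) * (x - a)
    u, v = u0, v0
    for _ in range(steps):
        du = g(u) + dispersal * (v - u)
        dv = g(v) + dispersal * (u - v)
        u, v = u + dt * du, v + dt * dv
    return u, v

# weak dispersal: high- and low-density populations coexist across patches
u_weak, v_weak = simulate(1.0, 0.0, a=0.3, dispersal=0.01)
# strong dispersal: patches synchronize and (here) expand globally
u_strong, v_strong = simulate(1.0, 0.0, a=0.3, dispersal=0.2)
```

With weak dispersal the empty patch settles at a small density below the Allee threshold, sustained only by immigration; with strong dispersal the two patches synchronize at the high equilibrium, the "global expansion" outcome.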
When to conduct probabilistic linkage vs. deterministic linkage? A simulation study.
Zhu, Ying; Matsuyama, Yutaka; Ohashi, Yasuo; Setoguchi, Soko
2015-08-01
When unique identifiers are unavailable, successful record linkage depends greatly on data quality and on the types of variables available. While probabilistic linkage theoretically captures more true matches than deterministic linkage by allowing imperfection in identifiers, studies have shown inconclusive results, likely due to variations in data quality, in the implementation of the linkage methodology, and in the validation method. This simulation study aimed to understand the data characteristics that affect the performance of probabilistic vs. deterministic linkage. We created ninety-six scenarios that represent real-life situations using non-unique identifiers, systematically varying discriminative power, rates of missing data and error, and file size to cover a range of linkage patterns and difficulties. We assessed the performance difference between the linkage methods using standard validity measures and computation time. Across scenarios, deterministic linkage showed an advantage in PPV while probabilistic linkage showed an advantage in sensitivity. Probabilistic linkage uniformly outperformed deterministic linkage, generating linkages with a better trade-off between sensitivity and PPV regardless of data quality; however, with low rates of missing data and error, deterministic linkage performed only marginally worse. The implementation of deterministic linkage in SAS took less than 1 min, while probabilistic linkage took 2 min to 2 h depending on file size. Our simulation study demonstrated that the intrinsic rate of missing data and error in the linkage variables is key to choosing between linkage methods. In general, probabilistic linkage was the better choice, but for exceptionally good quality data (<5% error), deterministic linkage was the more resource-efficient choice. Copyright © 2015 Elsevier Inc. All rights reserved.
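The trade-off the study measures can be illustrated with a minimal sketch of the two linkage styles: exact agreement on all identifiers (deterministic) versus a Fellegi-Sunter-style match weight that tolerates disagreement (probabilistic). The field names and m/u-probabilities below are invented for illustration, not taken from the study.

```python
import math

# Hypothetical m- and u-probabilities per field (assumed values):
# m = P(field agrees | records truly match), u = P(field agrees | non-match).
FIELDS = {
    "last_name":  {"m": 0.95, "u": 0.01},
    "birth_year": {"m": 0.98, "u": 0.05},
    "zip_code":   {"m": 0.90, "u": 0.02},
}

def deterministic_match(a, b):
    """All identifiers must agree exactly: high PPV, misses noisy true matches."""
    return all(a[f] == b[f] for f in FIELDS)

def probabilistic_score(a, b):
    """Fellegi-Sunter log2 match weight: sum of agreement/disagreement weights."""
    score = 0.0
    for f, p in FIELDS.items():
        if a[f] == b[f]:
            score += math.log2(p["m"] / p["u"])              # agreement weight
        else:
            score += math.log2((1 - p["m"]) / (1 - p["u"]))  # disagreement weight
    return score

rec1 = {"last_name": "smith", "birth_year": 1980, "zip_code": "48104"}
rec2 = {"last_name": "smith", "birth_year": 1980, "zip_code": "48103"}  # one typo
```

Deterministic linkage rejects the pair outright, while the probabilistic weight stays positive here and can still link the pair if it exceeds a chosen threshold, which is exactly the sensitivity-vs-PPV trade-off the scenarios probe.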
Wild immunology assessed by multidimensional mass cytometry.
Japp, Alberto Sada; Hoffmann, Kerstin; Schlickeiser, Stephan; Glauben, Rainer; Nikolaou, Christos; Maecker, Holden T; Braun, Julian; Matzmohr, Nadine; Sawitzki, Birgit; Siegmund, Britta; Radbruch, Andreas; Volk, Hans-Dieter; Frentsch, Marco; Kunkel, Desiree; Thiel, Andreas
2017-01-01
A great part of our knowledge of mammalian immunology has been established in laboratory settings. The use of inbred mouse strains enabled controlled studies of immune cell and molecule functions in defined settings. These studies were usually performed in specific-pathogen-free (SPF) environments providing standardized conditions. In contrast, mammals, including humans, living in their natural habitat continuously face pathogen encounters throughout their lives. The influences of environmental conditions on the signatures of the immune system and on experimental outcomes are not yet well defined. Thus, the transferability of results obtained in current experimental systems to the physiological human situation has always been a matter of debate. Studies elucidating the diversity of "wild immunology" imprintings in detail and comparing them with those of "clean" lab mice are sparse. Here, we applied multidimensional mass cytometry to dissect phenotypic and functional differences between distinct groups of laboratory and pet shop mice, the latter serving as a source of "wild mice". For this purpose, we developed a 31-antibody panel for murine leukocyte subset identification and a 35-antibody panel assessing various cytokines. Established murine leukocyte populations were easily identified, and diverse immune signatures indicative of numerous pathogen encounters were classified, particularly in pet shop mice and to a lesser extent in quarantine and non-SPF mice as compared with SPF mice. In addition, unsupervised analysis identified distinct clusters that associated strongly with the degree of pathogenic priming, including increased frequencies of activated NK cells and antigen-experienced B- and T-cell subsets. Our study unravels the complexity of immune signatures altered under physiological pathogen challenges and highlights the importance of carefully adapting laboratory settings for immunological studies in mice, including drug and therapy testing. © 2016 International Society
Multidimensional analysis: B-tagging at LEP
International Nuclear Information System (INIS)
de la Vaissiere, C.; Palma-Lopes, S.
1989-01-01
At the Z⁰, the cross-section for e⁺e⁻ → bb̄ is large (6.5 nb), as is the fraction of hadronic events leading to bb̄ (22%). A jet topology allows one to distinguish naturally the products of the b and b̄ fragmentation and decays. The Z⁰ therefore looks like an attractive place to pursue B physics. Techniques previously used at PEP and PETRA to tag the b-flavor have provided reasonable b-purities, at the cost of poor efficiencies. A first technique, originally proposed to measure the b-lifetime, was to use leptonic decays, but the corresponding branching ratios are at the 10% level. At Z⁰ energies, P. Roudeau shows that a 91% purity and 6% efficiency can be obtained. The TASSO collaboration was the first to use a vertex detector for b-enrichment; they achieved a b-purity of about 68% with a 16% efficiency. The best way to increase these low yields is to improve the resolution of vertex detectors on impact parameters. DELPHI will be equipped with a silicon microstrip vertex detector providing an asymptotic accuracy of 20 μm on impact parameters in the plane transverse to the beam, to be compared with the 150 μm quoted by TASSO. However, even this 20 μm, combined with limited coverage, cannot disentangle the multiple decays occurring in a bb̄ event. In this intermediate situation, multidimensional analysis may provide tagging of bb̄ events with high purity and good efficiency. 11 refs., 2 figs., 2 tabs
A MULTIDIMENSIONAL PERSPECTIVE ON WORK IN CONTEMPORARY SOCIETY
Directory of Open Access Journals (Sweden)
Lilian Carminatti
2015-12-01
Full Text Available The aim of this study is to analyze the evolution of the theme of work and the new forms of flexibilization of labor relations that provide identity, meaning and well-being. Work, as a pleasurable activity, involves a change in people's attitude toward occupational activity, grounded in self-knowledge, career planning and a working schedule that envisions an improved quality of life. The promotion of well-being presupposes articulating labor relations, shaped by the capitalist system, with the importance of balancing individual needs and organizational competitiveness. Guiding elements emerge: the relationship with the meanings of work, career and post-career planning, and the connection between personal and organizational values. The research adopted a qualitative methodological strategy that investigates, through exploration and description, the dimensions of the theme of work in contemporary society. Accordingly, the promotion of well-being and career planning should be treated as life planning throughout the entire working period, from a multidimensional view of the employee embedded in a broad cultural and social context that goes beyond the organizational environment.
Timing and related artifacts in multidimensional NMR
International Nuclear Information System (INIS)
Marion, Dominique
2012-01-01
The information content of multidimensional NMR spectra is limited by the presence of several kinds of artifacts that originate from incorrect timing of evolution periods. The objective of this review is to provide tools for the successful implementation of published pulse sequences, in which timings and pulse compensations are often implicit. We analyze the constraints set by the use of the Fourier transform, the spin precession during rectangular or shaped pulses, the Bloch-Siegert effects due to pulses applied to other spins, and the delay introduced by the filters in the acquisition dimension. A frequency-dependent phase correction or an incorrect scaling of the first data point leads to baseline offsets or curvature, due to the properties of the Fourier transform. Because any r.f. pulse has a finite length, chemical shift is always active during excitation, flip-back, inversion, and refocusing pulses. Rectangular or selective shaped pulses can be split into three periods: an ideal rotation surrounded by two chemical shift evolution periods, which should be subtracted from the adjacent delays to avoid a linear phase correction. Bloch-Siegert effects originate from irradiation at frequencies near those observed in the spectrum and can lead to phase or frequency shifts; they can be minimized by simultaneous irradiation on both sides of the observed spins. In terms of timing, the acquisition period at the very end of the pulse sequence behaves differently, since the data are filtered by either analog or digital means. This additional delay is filter- and spectrometer-specific and should be tuned to minimize the required phase correction. Combined together, all these adjustments lead to perfectly phased spectra with flat baselines and no peak shifts or distortions. (author)
SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE
Davies, C. B.
1994-01-01
SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
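The tension-spring analogy has a particularly simple one-dimensional form when torsion is ignored: at equilibrium, the tension w·Δx is constant along the coordinate line, so spacing is inversely proportional to the local stiffness. The sketch below is our own toy version, not SAGE's actual algebra (SAGE's torsional coupling is what leads to the tridiagonal system mentioned above); it clusters points where a gradient-based weight is large.

```python
def adapt_1d(x, w):
    """One-dimensional spring-analogy redistribution (tension only).
    Treat interval i as a spring with stiffness w[i] ~ 1 + |local gradient|;
    at equilibrium the tension w[i]*dx[i] is constant, so dx[i] ~ 1/w[i].
    Endpoints stay fixed. Names and scalings are ours, not SAGE's."""
    n = len(x) - 1                                # number of intervals
    inv = [1.0 / w[i] for i in range(n)]          # unnormalised target spacings
    total = sum(inv)
    span = x[-1] - x[0]
    new_x = [x[0]]
    for v in inv:
        new_x.append(new_x[-1] + span * v / total)
    new_x[-1] = x[-1]                             # pin the end point exactly
    return new_x

# Uniform grid with a strong gradient (e.g. a shock) in the middle intervals:
grid = [i / 10 for i in range(11)]
stiff = [1.0] * 10
stiff[4] = stiff[5] = 10.0                        # high |gradient| near x = 0.5
adapted = adapt_1d(grid, stiff)
```

The adapted grid keeps its endpoints but concentrates points in the two stiff intervals, the 1-D analogue of drawing grid lines into a shock region.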
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
International Nuclear Information System (INIS)
Wang Zhi-Gang; Gao Rui-Mei; Fan Xiao-Ming; Han Qi-Xing
2014-01-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ₀, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ₀ is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infection persists and the endemic state is asymptotically stable in a feasible region. If ℛ₀ is less than or equal to 1, then the infection disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending the deterministic model to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. In addition, regarding the value of ℛ₀, when the stochastic system obeys some conditions and ℛ₀ is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations. (general)
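The ℛ₀ threshold behavior is easy to reproduce numerically. The sketch below integrates a single-group SIR model with a constant vaccination rate by forward Euler, a deliberately simplified stand-in for the paper's multi-group MSIR system; all parameter names and values are our illustrative choices.

```python
def simulate_sir(beta, gamma, vac=0.0, s0=0.99, i0=0.01, dt=0.01, steps=20000):
    """Forward-Euler integration of a one-group SIR model with a constant
    vaccination rate `vac` moving susceptibles directly to R. Returns the
    final (S, I, R) state and the peak infective fraction, which exhibits
    the R0 = beta/gamma threshold: for R0 <= 1 the infection declines
    monotonically, while for R0 > 1 it first grows to an epidemic peak."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(steps):
        ds = -beta * s * i - vac * s
        di = beta * s * i - gamma * i
        dr = gamma * i + vac * s
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        peak = max(peak, i)
    return (s, i, r), peak

final_sub, peak_sub = simulate_sir(beta=0.1, gamma=0.3)  # R0 = 1/3 < 1
final_sup, peak_sup = simulate_sir(beta=0.5, gamma=0.1)  # R0 = 5   > 1
```

With ℛ₀ < 1 the infective fraction never exceeds its initial value and decays toward zero; with ℛ₀ > 1 it rises to a substantial epidemic peak before burning out, mirroring the deterministic threshold result.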
Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.
Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R
2018-05-01
A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
Optimization of structures subjected to dynamic load: deterministic and probabilistic methods
Directory of Open Access Journals (Sweden)
Élcio Cassimiro Alves
Full Text Available Abstract This paper deals with the deterministic and probabilistic optimization of structures against bending when subjected to dynamic loads. The deterministic optimization problem considers the plate submitted to a time-varying load, while the probabilistic one takes into account a random loading defined by a power spectral density function. The two problems are related through a Fourier transform. The finite element method is used to model the structures. The sensitivity analysis is performed through the analytical method, and the optimization problem is solved by an interior point method. A comparison between the deterministic optimization and the probabilistic one, with a power spectral density function compatible with the time-varying load, shows very good results.
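The Fourier-transform link between the two formulations can be sketched numerically: a deterministic time-varying load is transformed and a one-sided power spectral density is estimated from it, which is the kind of "compatible" PSD the probabilistic problem would use. PSD normalization conventions vary; the one below is a common choice, not necessarily the authors', and the load signal is invented for illustration.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)                 # 10 s record, 1000 samples
# Illustrative deterministic load: two harmonic components (1.5 Hz and 4 Hz).
load = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 4.0 * t)

F = np.fft.rfft(load) * dt                   # approximation of the continuous FT
freqs = np.fft.rfftfreq(len(t), dt)          # frequency axis in Hz
psd = np.abs(F) ** 2 / (t[-1] - t[0])        # one-sided PSD estimate (up to a constant factor)
```

The estimated PSD concentrates its energy at the two driving frequencies, with the dominant peak at the stronger 1.5 Hz component; a PSD built this way is, by construction, compatible with the deterministic load.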
Deterministic and stochastic evolution equations for fully dispersive and weakly nonlinear waves
DEFF Research Database (Denmark)
Eldeberky, Y.; Madsen, Per A.
1999-01-01
This paper presents a new and more accurate set of deterministic evolution equations for the propagation of fully dispersive, weakly nonlinear, irregular, multidirectional waves. The equations are derived directly from the Laplace equation with leading order nonlinearity in the surface boundary [...] is significantly underestimated for larger wave numbers. In the present work we correct this inconsistency. In addition to the improved deterministic formulation, we present improved stochastic evolution equations in terms of the energy spectrum and the bispectrum for multidirectional waves. The deterministic and stochastic formulations are solved numerically for the case of cross shore motion of unidirectional waves and the results are verified against laboratory data for wave propagation over submerged bars and over a plane slope. Outside the surf zone the two model predictions are generally in good agreement [...]
International Nuclear Information System (INIS)
Kutkov, V; Buglova, E; McKenna, T
2011-01-01
Lessons learned from responses to past events have shown that more guidance is needed for the response to radiation emergencies (in this context, a 'radiation emergency' means the same as a 'nuclear or radiological emergency') which could lead to severe deterministic effects. The International Atomic Energy Agency (IAEA) requirements for preparedness and response for a radiation emergency, inter alia, require that arrangements shall be made to prevent, to a practicable extent, severe deterministic effects and to provide the appropriate specialised treatment for these effects. These requirements apply to all exposure pathways, both internal and external, and all reasonable scenarios, to include those resulting from malicious acts (e.g. dirty bombs). This paper briefly describes the approach used to develop the basis for emergency response criteria for protective actions to prevent severe deterministic effects in the case of external exposure and intake of radioactive material.
Multidimensional scaling for large genomic data sets
Directory of Open Access Journals (Sweden)
Lu Henry
2008-04-01
Full Text Available Abstract Background Multi-dimensional scaling (MDS) aims to represent high-dimensional data in a low-dimensional space with preservation of the similarities between data points. This reduction in dimensionality is crucial for analyzing and revealing the genuine structure hidden in the data. For noisy data, dimension reduction can effectively reduce the effect of noise on the embedded structure; for large data sets, it can effectively reduce information retrieval complexity. Thus, MDS techniques are used in many applications of data mining and gene network research. However, although a number of studies have applied MDS techniques to genomics research, the number of analyzed data points has been restricted by the high computational complexity of MDS. In general, a non-metric MDS method is faster than a metric MDS, but it does not preserve the true relationships. The computational complexity of most metric MDS methods is over O(N²), so it is difficult to process a data set with a large number of genes N, such as whole genome microarray data. Results We developed a new rapid metric MDS method with a low computational complexity, making metric MDS applicable for large data sets. Computer simulation showed that the new method of split-and-combine MDS (SC-MDS) is fast, accurate and efficient. Our empirical studies using microarray data on the yeast cell cycle showed that the performance of K-means in the reduced dimensional space is similar to or slightly better than that of K-means in the original space, but the clustering results are obtained about three times faster. Our clustering results using SC-MDS are more stable than those in the original space. Hence, the proposed SC-MDS is useful for analyzing whole genome data. Conclusion Our new method reduces the computational complexity from O(N³) to O(N) when the dimension of the feature space is far less than the number of genes N, and it successfully
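For reference, the classical (Torgerson) metric MDS that SC-MDS accelerates can be written in a few lines; the eigendecomposition of the double-centred matrix is the O(N³) step the split-and-combine strategy avoids by embedding overlapping subsets and stitching them together. This is textbook classical MDS, not the SC-MDS algorithm itself.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) metric MDS: double-centre the squared distance
    matrix, then embed using the top-k eigenpairs. The eigendecomposition
    costs O(N^3) for N points, which is the scalability bottleneck."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]             # take the k largest
    L = np.sqrt(np.clip(vals[idx], 0.0, None))
    return vecs[:, idx] * L                      # coordinates, one row per point

# Round-trip check: embed exact Euclidean distances, recompute them.
pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D, k=2)
D2 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
```

On exact Euclidean distances the embedding reproduces the distance matrix (up to rotation and reflection of the coordinates), which is the property SC-MDS must preserve while cutting the cost toward O(N).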
International Nuclear Information System (INIS)
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-01-01
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and the memory requirements of their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modeling uncertainties, there are still scalability and pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is the potential market impact. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch
Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis
2018-01-01
Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) was initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series, and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics during the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early-stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more
International Nuclear Information System (INIS)
Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David
2006-01-01
We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
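A standard way to build such surrogates is phase randomization: keep the amplitude spectrum (and hence all linear properties such as the autocorrelation) while scrambling the Fourier phases, destroying any deterministic structure. The recipe below is the generic linear-surrogate construction, not necessarily the exact variant used by the authors.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Linear surrogate of x: identical amplitude spectrum, randomised
    Fourier phases. A discriminating statistic (e.g. algorithmic complexity)
    computed on x versus an ensemble of such surrogates can flag a
    statistically significant deterministic component in the data."""
    n = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    Y = np.abs(X) * np.exp(1j * phases)
    Y[0] = X[0]                  # keep the mean untouched
    if n % 2 == 0:
        Y[-1] = X[-1]            # keep the (real) Nyquist bin untouched
    return np.fft.irfft(Y, n)
```

Comparing the statistic on the original data against its distribution over, say, 99 surrogates gives a nonparametric test at the 1% level: if the original lies outside the surrogate distribution, the null hypothesis of purely linear stochastic structure is rejected.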
Directory of Open Access Journals (Sweden)
Tim ePalmer
2015-10-01
Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
A continuous variable quantum deterministic key distribution based on two-mode squeezed states
International Nuclear Information System (INIS)
Gong, Li-Hua; Song, Han-Chong; Liu, Ye; Zhou, Nan-Run; He, Chao-Sheng
2014-01-01
The distribution of deterministic keys is of significance in personal communications, but the existing continuous variable quantum key distribution protocols can only generate random keys. By exploiting the entanglement properties of two-mode squeezed states, a continuous variable quantum deterministic key distribution (CVQDKD) scheme is presented for handing over the pre-determined key to the intended receiver. The security of the CVQDKD scheme is analyzed in detail from the perspective of information theory. It shows that the scheme can securely and effectively transfer pre-determined keys under ideal conditions. The proposed scheme can resist both the entanglement and beam splitter attacks under a relatively high channel transmission efficiency. (paper)
International Nuclear Information System (INIS)
Yokose, Yoshio; Noguchi, So; Yamashita, Hideo
2002-01-01
Stochastic methods and deterministic methods are both used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) serve as a stochastic method for multivariable designs, while the deterministic method uses the gradient method, which exploits the sensitivity of the objective function. These two techniques have benefits and drawbacks. In this paper, the characteristics of these techniques are described, and we then evaluate a technique in which the two methods are used together. Finally, the results of the comparison are presented by applying each method to electromagnetic devices. (Author)
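A minimal version of the combined technique runs a crude GA to locate a promising basin of a multimodal objective, then hands the best candidate to gradient descent for fast local refinement. The objective function, GA settings, and step size below are our illustrative choices, not the authors'.

```python
import math
import random

def f(x):
    """Toy multimodal objective (ours, for illustration): global minimum at x = 0."""
    return x * x + 3.0 * (1.0 - math.cos(2.0 * x))

def df(x):
    """Analytic gradient of f, playing the role of the sensitivity."""
    return 2.0 * x + 6.0 * math.sin(2.0 * x)

def ga_then_gradient(seed=0):
    """Stochastic global search (a crude GA) followed by deterministic
    gradient descent, i.e. a two-stage combination of the methods."""
    rng = random.Random(seed)
    pop = [rng.uniform(-3.0, 3.0) for _ in range(40)]
    for _ in range(30):                       # GA stage: selection + mutation
        pop.sort(key=f)
        parents = pop[:10]                    # elitist truncation selection
        pop = parents + [p + rng.gauss(0.0, 0.5)
                         for p in parents for _ in range(3)]
    x = min(pop, key=f)                       # best candidate from the GA
    for _ in range(200):                      # gradient stage: local refinement
        x -= 0.05 * df(x)                     # step < 2/L with L = max|f''| = 14
    return x
```

The GA alone would need many generations to polish the solution, and gradient descent alone can be trapped in a local minimum far from the optimum; the hybrid gets the global reach of the former and the fast local convergence of the latter.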
International Nuclear Information System (INIS)
Jepps, Owen G; Rondoni, Lamberto
2010-01-01
Deterministic 'thermostats' are mathematical tools used to model nonequilibrium steady states of fluids. The resulting dynamical systems correctly represent the transport properties of these fluids and are easily simulated on modern computers. More recently, the connection between such thermostats and entropy production has been exploited in the development of nonequilibrium fluid theories. The purpose and limitations of deterministic thermostats are discussed in the context of irreversible thermodynamics and the development of theories of nonequilibrium phenomena. We draw parallels between the development of such nonequilibrium theories and the development of notions of ergodicity in equilibrium theories. (topical review)
Palmer, Tim N; O'Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
Visual modeling in an analysis of multidimensional data
Zakharova, A. A.; Vekhter, E. V.; Shklyar, A. V.; Pak, A. J.
2018-01-01
The article proposes an approach to solving visualization problems and the subsequent analysis of multidimensional data. Requirements on the properties of visual models created to solve analysis problems are described. The active use of factors of subjective perception and of dynamic visualization is suggested as a promising direction for the development of visual analysis tools for multidimensional and voluminous data. Practical results of solving the problem of multidimensional data analysis are shown using the example of a visual model of empirical data on the current state of research into processes for obtaining silicon carbide by an electric arc method. Several results follow from solving this problem: first, an idea of the possibilities of determining a development strategy for the domain; second, an assessment of the reliability of the published data on this subject; and third, insight into how the areas of attention of researchers have changed over time.
A new multidimensional model with text dimensions: definition and implementation
Directory of Open Access Journals (Sweden)
MariaJ. Martin-Bautista
2013-02-01
We present a new multidimensional model with textual dimensions based on a knowledge structure extracted from the texts, in which any textual attribute in a database can be processed, not only XML texts. This dimension allows textual data to be treated in the same way as non-textual data, automatically and without user intervention, so all the classical operations of the multidimensional model can be defined for the textual dimension. While most of the models dealing with texts that can be found in the literature are not implemented, in this proposal both the multidimensional model and the OLAP system have been implemented in a software tool, so they can be tested on real data. A case study with medical data is included in this work.
Multidimensional poverty: an alternative measurement approach for the United States?
Waglé, Udaya R
2008-06-01
International poverty research has increasingly underscored the need to use multidimensional approaches to measure poverty. Largely embraced in Europe and elsewhere, this has not had much impact on the way poverty is measured in the United States. In this paper, I use a comprehensive multidimensional framework including economic well-being, capability, and social inclusion to examine poverty in the US. Data from the 2004 General Social Survey support the interconnectedness among these poverty dimensions, indicating that the multidimensional framework utilizing a comprehensive set of information provides a compelling value added to poverty measurement. The suggested demographic characteristics of the various categories of the poor are somewhat similar between this approach and other traditional approaches. But the more comprehensive and accurate measurement outcomes from this approach help policymakers target resources at the specific groups.
Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Zhang Dongyang
2014-02-01
Because localization algorithms in large-scale wireless sensor networks fall short of traditional localization algorithms in both positioning accuracy and time complexity, this paper presents a fast multidimensional scaling (MDS) localization algorithm. The algorithm proceeds in four steps: fast-mapping initialization, fast mapping, and coordinate transformation yield schematic node coordinates that initialize the MDS algorithm; an accurate estimate of the node coordinates is then computed; and Procrustes analysis is used to align the coordinates and obtain the final node positions. The paper gives the concrete implementation steps of the algorithm and, finally, compares it experimentally with stochastic algorithms and the classical MDS algorithm on specific examples. The experimental results show that the proposed localization algorithm preserves the positioning accuracy of multidimensional scaling under the stated conditions while greatly improving the speed of operation.
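The classical multidimensional scaling step at the heart of such localization schemes can be sketched as follows. This is a generic illustration (double centering of the squared-distance matrix followed by an eigendecomposition), not the paper's fast-mapping variant, and the node layout is invented for the example:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover coordinates (up to rotation, reflection and translation)
    from a matrix of pairwise Euclidean distances via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # Gram matrix of centered points
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]            # keep the top-`dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Invented example: four nodes on a unit square
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D)
# Inter-node distances are reproduced up to a rigid motion of the layout
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

In a localization setting, a final Procrustes alignment against a few anchor nodes with known positions removes the remaining rotation and translation ambiguity.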
Quantum and Multidimensional Explanations in a Neurobiological Context of Mind.
Korf, Jakob
2015-08-01
This article examines the possible relevance of physical-mathematical multidimensional or quantum concepts aiming at understanding the (human) mind in a neurobiological context. Some typical features of the quantum and multidimensional concepts are briefly introduced, including entanglement, superposition, holonomic, and quantum field theories. Next, we consider neurobiological principles, such as the brain and its emerging (physical) mind, evolutionary and ontological origins, entropy, syntropy/neg-entropy, causation, and brain energy metabolism. In many biological processes, including biochemical conversions, protein folding, and sensory perception, the ubiquitous involvement of quantum mechanisms is well recognized. Quantum and multidimensional approaches might be expected to help describe and model both brain and mental processes, but an understanding of their direct involvement in mental activity, that is, without mediation by molecular processes, remains elusive. More work has to be done to bridge the gap between current neurobiological and physical-mathematical concepts with their associated quantum-mind theories. © The Author(s) 2014.
Conservative Initial Mapping For Multidimensional Simulations of Stellar Explosions
International Nuclear Information System (INIS)
Chen, Ke-Jung; Heger, Alexander; Almgren, Ann
2012-01-01
Mapping one-dimensional stellar profiles onto multidimensional grids as initial conditions for hydrodynamics calculations can lead to numerical artifacts, one of the most severe of which is the violation of conservation laws for physical quantities such as energy and mass. Here we introduce a numerical scheme for mapping one-dimensional spherically-symmetric data onto multidimensional meshes so that these physical quantities are conserved. We validate our scheme by porting a realistic 1D Lagrangian stellar profile to the new multidimensional Eulerian hydro code CASTRO. Our results show that all important features in the profiles are reproduced on the new grid and that conservation laws are enforced at all resolutions after mapping.
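The key idea, conserving integral quantities by remapping through the cumulative integral rather than the pointwise values, can be illustrated in one dimension. This sketch is a generic mass-conserving remap, not the CASTRO mapping scheme itself; the grids and density profile are invented for the example:

```python
import numpy as np

def conservative_remap(edges_src, dens_src, edges_dst):
    """Remap a piecewise-constant 1D density profile onto a new grid so
    that mass is conserved: build the cumulative mass function, interpolate
    it at the destination cell edges, and difference it back to densities."""
    widths = np.diff(edges_src)
    cum_mass = np.concatenate([[0.0], np.cumsum(dens_src * widths)])
    cum_dst = np.interp(edges_dst, edges_src, cum_mass)
    return np.diff(cum_dst) / np.diff(edges_dst)

edges_src = np.linspace(0.0, 1.0, 11)        # 10 coarse cells
dens_src = 1.0 + np.arange(10) * 0.1         # invented density profile
edges_dst = np.linspace(0.0, 1.0, 101)       # 100 fine cells
dens_dst = conservative_remap(edges_src, dens_src, edges_dst)

mass_src = np.sum(dens_src * np.diff(edges_src))
mass_dst = np.sum(dens_dst * np.diff(edges_dst))
```

Because the cumulative mass is interpolated rather than the density itself, the total mass on the destination grid matches the source grid to machine precision at any resolution.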
SM4MQ: A Semantic Model for Multidimensional Queries
DEFF Research Database (Denmark)
Varga, Jovan; Dobrokhotova, Ekaterina; Romero, Oscar
2017-01-01
On-Line Analytical Processing (OLAP) is a data analysis approach to support decision-making. On top of that, Exploratory OLAP is a novel initiative for the convergence of OLAP and the Semantic Web (SW) that enables the use of OLAP techniques on SW data. Moreover, OLAP approaches exploit different......, sharing, and reuse on the SW. As OLAP is based on the underlying multidimensional (MD) data model we denote such queries as MD queries and define SM4MQ: A Semantic Model for Multidimensional Queries. Furthermore, we propose a method to automate the exploitation of queries by means of SPARQL. We apply...
Multidimensional quantum entanglement with large-scale integrated optics
DEFF Research Database (Denmark)
Wang, Jianwei; Paesani, Stefano; Ding, Yunhong
2018-01-01
The ability to control multidimensional quantum systems is key for the investigation of fundamental science and for the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimension up to 15 × 15 on a large-scale silicon-photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality...
Multi-Dimensional Customer Data Analysis in Online Auctions
Institute of Scientific and Technical Information of China (English)
LAO Guoling; XIONG Kuan; QIN Zheng
2007-01-01
In this paper, we design a customer-centered data warehouse system with five subjects (listing, bidding, transaction, accounts, and customer contact) based on the business process of online auction companies. For each subject, we analyze its fact indexes and dimensions. Taking the transaction subject as an example, we then analyze the data warehouse model in detail and obtain the multidimensional analysis structure of the transaction subject. Finally, using data mining for customer segmentation, we divide customers into four types: impulse, prudent, potential, and ordinary customers. With the results of multidimensional customer data analysis, online auction companies can better target their marketing and increase customer loyalty.
International Nuclear Information System (INIS)
Soriano Pena, A.; Lopez Arroyo, A.; Roesset, J.M.
1976-01-01
The probabilistic and deterministic approaches for calculating the seismic risk of nuclear power plants are both applied to a particular case in Southern Spain. The results obtained by both methods, when varying the input data, are presented and some conclusions drawn in relation to the applicability of the methods, their reliability and their sensitivity to change
Degli Esposti, M.; Giardinà, C.; Graffi, S.; Isola, S.
2001-01-01
We consider the zero-temperature dynamics for the infinite-range, non translation invariant one-dimensional spin model introduced by Marinari, Parisi and Ritort to generate glassy behaviour out of a deterministic interaction. It is argued that there can be a large number of metastable (i.e.,
DEFF Research Database (Denmark)
Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto
1974-01-01
The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...
On competition in a Stackelberg location-design model with deterministic supplier choice
Hendrix, E.M.T.
2016-01-01
We study a market situation where two firms maximize market capture by deciding on the location in the plane and investing in a competing quality against investment cost. Clients choose one of the suppliers; i.e. deterministic supplier choice. To study this situation, a game theoretic model is
DEFF Research Database (Denmark)
Hansen, Lisbet Sneftrup; Borup, Morten; Moller, Arne
2014-01-01
drainage models and reduce a number of unavoidable discrepancies between the model and reality. The latter can be achieved partly by inserting measured water levels from the sewer system into the model. This article describes how deterministic updating of model states in this manner affects a simulation...
The development of the deterministic nonlinear PDEs in particle physics to stochastic case
Abdelrahman, Mahmoud A. E.; Sohaly, M. A.
2018-06-01
In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used to solve the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. The control of the randomness input is also studied for the stability of the stochastic process solution.
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The methods are applicable to low-level radioactive waste disposal system performance assessment
Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.
Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.
1996-01-01
Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate
A deterministic algorithm for fitting a step function to a weighted point-set
Fournier, Hervé
2013-01-01
Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance
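The paper's optimal O(n log n) algorithm is more involved, but the underlying decision problem has a simple greedy structure that a binary search over the candidate distance can exploit. The sketch below is that simpler scheme, with an invented point set; it is an illustration of the problem, not the paper's algorithm:

```python
def feasible(pts, k, eps):
    """Greedy check: can a step function with at most k steps stay within
    weighted vertical distance eps of every point? A point (x, y, w)
    constrains its step's value to [y - eps/w, y + eps/w]; a step extends
    while those intervals keep a nonempty intersection."""
    steps = 0
    lo, hi = 1.0, -1.0                 # empty interval forces a first step
    for _, y, w in pts:
        a, b = y - eps / w, y + eps / w
        if steps and max(lo, a) <= min(hi, b):
            lo, hi = max(lo, a), min(hi, b)
        else:
            steps += 1
            lo, hi = a, b
    return steps <= k

def fit_step_function(pts, k, iters=60):
    """Binary search on the optimal max weighted vertical distance."""
    pts = sorted(pts)                  # order points by x-coordinate
    ys = [y for _, y, _ in pts]
    lo, hi = 0.0, (max(ys) - min(ys)) * max(w for _, _, w in pts)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(pts, k, mid):
            hi = mid
        else:
            lo = mid
    return hi

# Invented example: alternating heights, unit weights
pts = [(0, 0.0, 1.0), (1, 1.0, 1.0), (2, 0.0, 1.0), (3, 1.0, 1.0)]
best = fit_step_function(pts, 2)       # any 2-step split must span {0, 1}
```

With two steps, some step must cover points at both heights 0 and 1, so the optimal max distance is 0.5; with four steps every point gets its own step and the error vanishes.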
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
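The first-order idea behind DUA, propagating input variances through output sensitivities, can be sketched as follows. GRESS and ADGEN obtain the derivatives by automated differentiation of the model source; this illustration substitutes central finite differences and an invented three-parameter model in place of the borehole flow function:

```python
import math

def dua_first_order(f, mu, sigma, h=1e-6):
    """First-order deterministic uncertainty propagation: output variance
    approximated by sum_i (df/dx_i)^2 * sigma_i^2, with the sensitivities
    df/dx_i taken by central differences at the mean point."""
    var = 0.0
    for i, s in enumerate(sigma):
        xp, xm = list(mu), list(mu)
        xp[i] += h
        xm[i] -= h
        var += (((f(xp) - f(xm)) / (2 * h)) * s) ** 2
    return f(mu), math.sqrt(var)

# Invented stand-in model (the paper's sample problem is borehole flow)
def model(x):
    a, b, c = x
    return a * b / (1.0 + c)

mean, std = dua_first_order(model, mu=[2.0, 3.0, 1.0], sigma=[0.1, 0.2, 0.05])
```

Only two sensitivity-enabled model executions are needed in the derivative-based approach described above, versus the tens of executions a statistical sampling study typically requires.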
2D deterministic radiation transport with the discontinuous finite element method
International Nuclear Information System (INIS)
Kershaw, D.; Harte, J.
1993-01-01
This report provides a complete description of the analytic and discretized equations for 2D deterministic radiation transport. This computational model has been checked against a wide variety of analytic test problems and found to give excellent results. We make extensive use of the discontinuous finite element method
On the effect of deterministic terms on the bias in stable AR models
van Giersbergen, N.P.A.
2004-01-01
This paper compares the first-order bias approximation for the autoregressive (AR) coefficients in stable AR models in the presence of deterministic terms. It is shown that the bias due to inclusion of an intercept and trend is twice as large as the bias due to an intercept. For the AR(1) model, the
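The intercept-versus-trend comparison can be checked by simulation. The following Monte Carlo sketch (sample size, replication count and AR coefficient invented for illustration) fits the AR(1) coefficient by OLS under both deterministic-term specifications and compares the average bias:

```python
import numpy as np

def ar1_bias(phi, n, reps, trend=False, seed=0):
    """Monte Carlo estimate of the OLS bias of the AR(1) coefficient when
    the regression includes an intercept (and optionally a linear trend)."""
    rng = np.random.default_rng(seed)
    bias = 0.0
    for _ in range(reps):
        e = rng.standard_normal(n)
        y = np.zeros(n + 1)
        for t in range(n):
            y[t + 1] = phi * y[t] + e[t]
        cols = [np.ones(n), y[:-1]]
        if trend:
            cols.insert(1, np.arange(n, dtype=float))
        X = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
        bias += coef[-1] - phi          # lagged-y coefficient is last
    return bias / reps

b_const = ar1_bias(0.5, n=50, reps=2000)              # intercept only
b_trend = ar1_bias(0.5, n=50, reps=2000, trend=True)  # intercept + trend
# Both biases are downward; adding a trend makes the bias noticeably larger
```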
Czech Academy of Sciences Publication Activity Database
Lin, Qiang; De Vrieze, J.; Li, Ch.; Li, J.; Li, J.; Yao, M.; Heděnec, Petr; Li, H.; Li, T.; Rui, J.; Frouz, Jan; Li, X.
2017-01-01
Roč. 123, October (2017), s. 134-143 ISSN 0043-1354 Institutional support: RVO:60077344 Keywords : anaerobic digestion * deterministic process * microbial interactions * modularity * temperature gradient Subject RIV: DJ - Water Pollution ; Quality OBOR OECD: Water resources Impact factor: 6.942, year: 2016
In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...
Using the deterministic factor systems in the analysis of return on ...
African Journals Online (AJOL)
Using the deterministic factor systems in the analysis of return on equity. ... or equal the profitability of bank deposits, the business of the organization is not efficient. ... Application of quantitative and qualitative indicators in the analysis allows to ...
Deterministic linear-optics quantum computing based on a hybrid approach
International Nuclear Information System (INIS)
Lee, Seung-Woo; Jeong, Hyunseok
2014-01-01
We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources
Moreland, James D., Jr
2013-01-01
This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…
A new recursive incremental algorithm for building minimal acyclic deterministic finite automata
Watson, B.W.; Martin-Vide, C.; Mitrana, V.
2003-01-01
This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcao
2015-01-01
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with
Deterministic Model for Rubber-Metal Contact Including the Interaction Between Asperities
Deladi, E.L.; de Rooij, M.B.; Schipper, D.J.
2005-01-01
Rubber-metal contact involves relatively large deformations and large real contact areas compared to metal-metal contact. Here, a deterministic model is proposed for the contact between rubber and metal surfaces, which takes into account the interaction between neighboring asperities. In this model,
Pfaff, W.; Vos, A.; Hanson, R.
2013-01-01
Metal nanostructures can be used to harvest and guide the emission of single photon emitters on-chip via surface plasmon polaritons. In order to develop and characterize photonic devices based on emitter-plasmon hybrid structures, a deterministic and scalable fabrication method for such structures
From Ordinary Differential Equations to Structural Causal Models: the deterministic case
Mooij, J.M.; Janzing, D.; Schölkopf, B.; Nicholson, A.; Smyth, P.
2013-01-01
We show how, and under which conditions, the equilibrium states of a first-order Ordinary Differential Equation (ODE) system can be described with a deterministic Structural Causal Model (SCM). Our exposition sheds more light on the concept of causality as expressed within the framework of
Baldwin, Eric; Johnson, Karin; Berthoud, Heidi; Dublin, Sascha
2015-01-01
To compare probabilistic and deterministic algorithms for linking mothers and infants within electronic health records (EHRs) to support pregnancy outcomes research. The study population was women enrolled in Group Health (Washington State, USA) delivering a liveborn infant from 2001 through 2008 (N = 33,093 deliveries) and infant members born in these years. We linked women to infants by surname, address, and dates of birth and delivery using deterministic and probabilistic algorithms. In a subset previously linked using "gold standard" identifiers (N = 14,449), we assessed each approach's sensitivity and positive predictive value (PPV). For deliveries with no "gold standard" linkage (N = 18,644), we compared the algorithms' linkage proportions. We repeated our analyses in an independent test set of deliveries from 2009 through 2013. We reviewed medical records to validate a sample of pairs apparently linked by one algorithm but not the other (N = 51 or 1.4% of discordant pairs). In the 2001-2008 "gold standard" population, the probabilistic algorithm's sensitivity was 84.1% (95% CI, 83.5-84.7) and PPV 99.3% (99.1-99.4), while the deterministic algorithm had sensitivity 74.5% (73.8-75.2) and PPV 95.7% (95.4-96.0). In the test set, the probabilistic algorithm again had higher sensitivity and PPV. For deliveries in 2001-2008 with no "gold standard" linkage, the probabilistic algorithm found matched infants for 58.3% and the deterministic algorithm, 52.8%. On medical record review, 100% of linked pairs appeared valid. A probabilistic algorithm improved linkage proportion and accuracy compared to a deterministic algorithm. Better linkage methods can increase the value of EHRs for pregnancy outcomes research. Copyright © 2014 John Wiley & Sons, Ltd.
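Given a gold-standard set of mother-infant pairs, the sensitivity and PPV figures reported above reduce to simple set arithmetic over the pairs an algorithm proposes. A minimal sketch with invented identifiers:

```python
def linkage_metrics(predicted, gold):
    """Sensitivity and positive predictive value of a record-linkage
    algorithm against a gold-standard set of (mother_id, infant_id) pairs.
    Sensitivity = true links found / all true links;
    PPV = true links found / all links proposed."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    return tp / len(gold), tp / len(predicted)

# Invented example: 4 true deliveries, algorithm proposes 4 pairs
gold = {("m1", "i1"), ("m2", "i2"), ("m3", "i3"), ("m4", "i4")}
found = {("m1", "i1"), ("m2", "i2"), ("m3", "i3"), ("m5", "i9")}
sens, ppv = linkage_metrics(found, gold)
# 3 of 4 true pairs recovered, 3 of 4 proposed pairs correct
```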
Deterministic factor analysis: methods of integro-differentiation of non-integral order
Directory of Open Access Journals (Sweden)
Valentina V. Tarasova
2016-12-01
Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for the integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with the standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed, namely the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method of integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes
Directory of Open Access Journals (Sweden)
Feng HE
2017-12-01
State-of-the-art avionics systems adopt switched networks for airborne communications. A major concern in the design of these networks is the ability to guarantee end-to-end delays. Analytic methods such as network calculus and the trajectory approach have been developed to compute worst-case delays from the detailed configurations of flows and networks in the avionics context. What is still lacking is a method for rapid performance estimation based on typical switched-networking features such as networking scale, bandwidth utilization and average flow rate. The goal of this paper is to establish a deterministic upper-bound analysis method that uses these networking features instead of the complete network configuration. Two deterministic upper bounds are proposed from the network calculus perspective: one for a basic estimation, and another that captures the benefit of a grouping strategy. In addition, a mathematical expression for grouping ability is established based on the concept of network connecting degree, which gives the minimal possible grouping benefit. For a fully connected network with 4 switches and 12 end systems, the grouping benefit predicted by the grouping strategy is 15-20%, which coincides with the statistical data (18-22%) from the actual grouping advantage. Compared with the complete network calculus analysis of individual flows, the effectiveness of the two deterministic upper bounds is no less than 38% even with markedly varied packet lengths. Finally, the paper illustrates the design process for an industrial Avionics Full DupleX switched Ethernet (AFDX) networking case based on the two deterministic upper bounds and shows that better control of network connectivity when designing a switched network can improve the worst-case delays dramatically. Keywords: Deterministic bound, Grouping ability, Network calculus, Networking features, Switched networks
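For a single flow, the per-node building block such bounds rest on is the classic network-calculus result: a token-bucket flow (burst b, sustained rate r) served by a rate-latency node (rate R, latency T) sees delay at most T + b/R whenever r <= R. A minimal sketch with invented AFDX-like numbers (not taken from the paper):

```python
def delay_bound(burst, rate, R, T):
    """Network-calculus delay bound for a token-bucket arrival curve
    a(t) = burst + rate*t through a rate-latency service curve
    b(t) = R*(t - T): D <= T + burst/R, provided rate <= R (stability)."""
    assert rate <= R, "flow rate must not exceed the service rate"
    return T + burst / R

# Invented example: a 4000-bit burst at 1 Mbit/s through a 100 Mbit/s
# switch port with 16 us technological latency
d = delay_bound(burst=4000, rate=1e6, R=100e6, T=16e-6)
# 16 us latency + 40 us burst drain time = 56 us worst-case delay
```

End-to-end bounds for a path then follow by concatenating the per-node service curves, which is where the flow-grouping strategy discussed above yields its gain.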
Balanced sensitivity functions for tuning multi-dimensional Bayesian network classifiers
Bolt, J.H.; van der Gaag, L.C.
Multi-dimensional Bayesian network classifiers are Bayesian networks of restricted topological structure, which are tailored to classifying data instances into multiple dimensions. Like more traditional classifiers, multi-dimensional classifiers are typically learned from data and may include
Analysis of Local Dependence and Multidimensionality in Graphical Loglinear Rasch Models
DEFF Research Database (Denmark)
Kreiner, Svend; Christensen, Karl Bang
2004-01-01
Local independence; Multidimensionality; Differential item functioning; Uniform local dependence and DIF; Graphical Rasch models; Loglinear Rasch model
Energy Technology Data Exchange (ETDEWEB)
Morhac, M. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)]. E-mail: fyzimiro@savba.sk; Matousek, V. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia); Turzo, I. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia); Kliman, J. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)
2006-04-01
Multidimensional data acquisition, processing and visualization system to analyze experimental data in nuclear physics is described. It includes a large number of sophisticated algorithms of the multidimensional spectra processing, including background elimination, deconvolution, peak searching and fitting.
International Nuclear Information System (INIS)
Boustani, Ehsan; Amirkabir University of Technology, Tehran; Khakshournia, Samad
2016-01-01
In this paper two different computational approaches, a deterministic and a stochastic one, were used to calculate the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies some efforts were made, and finally it was discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for the different regions of the core such that the results of the deterministic approach based on the optimized mesh points agree well with those obtained by the Monte Carlo approach.
Conservation laws for multidimensional systems and related linear algebra problems
Igonin, Sergei
2002-01-01
We consider multidimensional systems of PDEs of generalized evolution form with t-derivatives of arbitrary order on the left-hand side and with the right-hand side dependent on lower order t-derivatives and arbitrary space derivatives. For such systems we find an explicit necessary condition for the
The Measurement of Multidimensional Gender Inequality: Continuing the Debate
Permanyer, Inaki
2010-01-01
The measurement of multidimensional gender inequality is an increasingly important topic that has very relevant policy applications and implications but which has not received much attention from the academic literature. In this paper I make a comprehensive and critical review of the indices proposed in recent years in order to systematise the…
The Structure and Validity of the Multidimensional Social Support Questionnaire
Hardesty, Patrick H.; Richardson, George B.
2012-01-01
The factor structure and concurrent validity of the Multidimensional Social Support Questionnaire, a brief measure of perceived social support for use with adolescents, was examined. Findings suggest that four dimensions of perceived social support may yield more information than assessments of the unitary construct of support. (Contains 8 tables…
Multidimensional Poverty in China: Findings Based on the CHNS
Yu, Jiantuo
2013-01-01
This paper estimates multidimensional poverty in China by applying the Alkire-Foster methodology to the China Health and Nutrition Survey 2000-2009 data. Five dimensions are included: income, living standard, education, health and social security. Results suggest that rapid economic growth has resulted not only in a reduction in income poverty but…
Integral and Multidimensional Linear Distinguishers with Correlation Zero
DEFF Research Database (Denmark)
Bogdanov, Andrey; Leander, Gregor; Nyberg, Kaisa
2012-01-01
Zero-correlation cryptanalysis uses linear approximations holding with probability exactly 1/2. In this paper, we reveal fundamental links of zero-correlation distinguishers to integral distinguishers and multidimensional linear distinguishers. We show that an integral implies zero-correlation li… weak key assumptions.
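The correlation that zero-correlation cryptanalysis exploits is defined as c = 2·Pr[a·x = b·S(x)] − 1; a zero-correlation approximation has c exactly 0. A minimal sketch over a single 4-bit S-box (PRESENT's, used here only as a familiar example, unrelated to the ciphers in the paper):

```python
def correlation(sbox, a, b):
    """Correlation of the linear approximation a.x = b.S(x) over all inputs.

    a, b are input/output bit masks; '.' is the GF(2) inner product (parity).
    """
    n = len(sbox)
    matches = sum(1 for x in range(n)
                  if (bin(a & x).count("1") + bin(b & sbox[x]).count("1")) % 2 == 0)
    return 2 * matches / n - 1   # probability 1/2 gives correlation exactly 0

# PRESENT's 4-bit S-box; since it is a bijection, any mask pair (0, b) with
# b != 0 is balanced and therefore has correlation exactly 0.
PRESENT_SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
```

Over a full cipher, finding mask pairs whose correlation is 0 for every key is the hard part; this snippet only shows what the quantity being driven to zero is.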
Theme section: Multi-dimensional modelling, analysis and visualization
DEFF Research Database (Denmark)
Guilbert, Éric; Coltekin, Arzu; Antón Castro, Francesc/François
2016-01-01
(Biljecki et al., 2015) as well as the temporal, but also the scale dimension (Van Oosterom and Stoter, 2010) or, as mentioned by (Lu et al., 2016), multi-spectral and multi-sensor data. Such a view provides an organisation of multidimensional data around these different axes and it is time to explore each…
Income and beyond: Multidimensional Poverty in Six Latin American Countries
Battiston, Diego; Cruces, Guillermo; Lopez-Calva, Luis Felipe; Lugo, Maria Ana; Santos, Maria Emma
2013-01-01
This paper studies multidimensional poverty for Argentina, Brazil, Chile, El Salvador, Mexico and Uruguay for the period 1992-2006. The approach overcomes the limitations of the two traditional methods of poverty analysis in Latin America (income-based and unmet basic needs) by combining income with five other dimensions: school attendance for…
Nonparametric Bayesian inference for multidimensional compound Poisson processes
Gugushvili, S.; van der Meulen, F.; Spreij, P.
2015-01-01
Given a sample from a discretely observed multidimensional compound Poisson process, we study the problem of nonparametric estimation of its jump size density r0 and intensity λ0. We take a nonparametric Bayesian approach to the problem and determine posterior contraction rates in this context,
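The observation scheme in this record (a compound Poisson process seen only on a discrete time grid) is easy to reproduce in simulation. A minimal sketch, with an assumed exponential jump density standing in for the unknown r0:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_compound_poisson(T, intensity, jump_sampler, rng):
    """Jump times and running sum X_t of a compound Poisson process on [0, T]."""
    n_jumps = rng.poisson(intensity * T)            # Poisson(lambda0 * T) jump count
    times = np.sort(rng.uniform(0.0, T, n_jumps))   # given the count, jump times are uniform
    jumps = jump_sampler(n_jumps, rng)              # i.i.d. jump sizes from density r0
    return times, np.cumsum(jumps)

def discrete_observations(times, path, T, delta):
    """Observe X only on the grid delta, 2*delta, ..., as in the estimation setting."""
    grid = np.arange(delta, T + 1e-12, delta)
    x = np.concatenate([[0.0], path])               # X starts at 0 before the first jump
    idx = np.searchsorted(times, grid, side="right")
    return x[idx]

# Illustrative parameters: T = 10, intensity lambda0 = 2, exponential(1) jumps
times, path = simulate_compound_poisson(
    10.0, 2.0, lambda n, rng: rng.exponential(1.0, n), rng)
obs = discrete_observations(times, path, 10.0, 1.0)
```

The nonparametric Bayesian estimator in the paper works from increments of `obs`; the point of the sketch is only the data-generating setup.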
Evidence for a Multidimensional Self-Efficacy for Exercise Scale
Rodgers, W. M.; Wilson, P. M.; Hall, C. R.; Fraser, S. N.; Murray, T. C.
2008-01-01
This series of three studies considers the multidimensionality of exercise self-efficacy by examining the psychometric characteristics of an instrument designed to assess three behavioral subdomains: task, scheduling, and coping. In Study 1, exploratory factor analysis revealed the expected factor structure in a sample of 395 students…
Multidimensional adaptive testing with a minimum error-variance criterion
van der Linden, Willem J.
1997-01-01
The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple
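The selection rule described in this record (pick the item that minimizes the asymptotic variance of the ML estimator of a linear combination c'θ of abilities) amounts to minimizing c' I⁻¹ c over candidate items, where I is the accumulated Fisher information. A minimal sketch with made-up information matrices, not the item parameters of any real pool:

```python
import numpy as np

def select_item(info_so_far, candidate_infos, c):
    """Greedy minimum-error-variance item selection.

    info_so_far: accumulated Fisher information matrix from items already given.
    candidate_infos: list of per-item Fisher information matrices.
    c: weights of the linear combination of abilities of interest.
    Returns the index j minimizing the asymptotic variance c' (I + I_j)^{-1} c.
    """
    best, best_var = None, np.inf
    for j, I_j in enumerate(candidate_infos):
        I_new = info_so_far + I_j
        var = c @ np.linalg.solve(I_new, c)   # c' I_new^{-1} c without an explicit inverse
        if var < best_var:
            best, best_var = j, var
    return best

# Two-dimensional toy pool: item 0 informs ability 1, item 1 informs ability 2
pool = [np.diag([5.0, 0.0]), np.diag([0.0, 5.0])]
chosen = select_item(np.eye(2), pool, np.array([1.0, 0.0]))  # picks the item informative for ability 1
```

In an operational CAT the per-item information matrices depend on the current ability estimate and are recomputed after each response; the greedy step itself is as above.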
Psychometric properties of the Multidimensional Anxiety Scale for Children (MASC)
African Journals Online (AJOL)
Aim: To determine the psychometric properties of the Multidimensional Anxiety Scale for Children (MASC) in Nairobi public secondary school children, Kenya. Method: Concurrent self-administration of the MASC and Children's Depression Inventory (CDI) to students in Nairobi public secondary schools. Results: The MASC ...
Cognitive Age: A New Multidimensional Approach to Measuring Age Identity.
Barak, Benny
1987-01-01
This exploratory field study examined how age-concepts are experienced and assessed the relationship of age identities to each other. It proposes Cognitive Age as a new multidimensional age scale that merges the standard scale, Identity Age, and Personal Age. Study results attest to the Cognitive Age scale's reliability and validity.
Decay rate in a multi-dimensional fission problem
Energy Technology Data Exchange (ETDEWEB)
Brink, D M; Canto, L F
1986-06-01
The multi-dimensional diffusion approach of Zhang Jing Shang and Weidenmüller (1983, Phys. Rev. C 28, 2190) is used to study a simplified model for induced fission. In this model it is shown that the coupling of the fission coordinate to the intrinsic degrees of freedom is equivalent to an extra friction and a mass correction in the corresponding one-dimensional problem.
A comparison of multidimensional scaling methods for perceptual mapping
Bijmolt, T.H.A.; Wedel, M.
Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare
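Multidimensional scaling takes a matrix of pairwise dissimilarities and recovers point coordinates whose distances approximate it. A minimal sketch of classical (Torgerson) scaling, the eigendecomposition baseline rather than the maximum-likelihood methods this article compares:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions from dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered squared dissimilarities
    vals, vecs = np.linalg.eigh(B)           # eigh: ascending eigenvalues for symmetric B
    idx = np.argsort(vals)[::-1][:k]         # keep the top-k eigenpairs
    L = np.sqrt(np.clip(vals[idx], 0, None)) # clip guards tiny negative eigenvalues
    return vecs[:, idx] * L                  # coordinates; distances reproduce D when D is Euclidean

# Toy example: three collinear points at 0, 1 and 3 on a line, so k=1 is exact
D = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])
X = classical_mds(D, k=1)
```

ML-based MDS replaces this algebraic fit with a probabilistic model of the dissimilarity judgments, which is what enables the model comparisons the article discusses.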
The Multidimensionality of Child Poverty: Evidence from Afghanistan
Trani, Jean-Francois; Biggeri, Mario; Mauro, Vincenzo
2013-01-01
This paper examines multidimensional poverty among children in Afghanistan using the Alkire-Foster method. Several previous studies have underlined the need to separate children from their adult nexus when studying poverty and treat them according to their own specificities. From the capability approach, child poverty is understood to be the lack…
Multidimensional Data Modeling For Location-Based Services
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Kligys, Augustas; Pedersen, Torben Bach
2004-01-01
and requests of their users in multidimensional databases, i.e., data warehouses, and content delivery may be based on the results of complex queries on these data warehouses. Such queries aggregate detailed data in order to find useful patterns, e.g., in the interaction of a particular user with the services...