International Nuclear Information System (INIS)
Hoisie, A.; Lubeck, O.; Wasserman, H.
1998-01-01
The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, the authors analyze two problem sizes. Their model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck. Single-node efficiency is the dominant factor.
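The scaling behavior the abstract describes can be illustrated with a toy pipelined-wavefront timing model. This is a sketch under simplifying assumptions; the paper's actual LogGP-based model separates latency, overhead, gap, and bandwidth contributions that this example lumps into a single per-stage message cost, and all parameter names here are hypothetical.

```python
def wavefront_time(nx, ny, px, py, t_cell, t_msg, nsweeps=1):
    """Toy timing model for 2-D wavefront sweeps on a px-by-py processor
    grid over an nx-by-ny cell domain.

    Each processor owns an (nx/px) x (ny/py) block; a diagonal sweep
    traverses px + py - 1 pipeline stages, each costing one block of
    computation plus one message exchange.  Illustrative only: the
    paper's LogGP model distinguishes L, o, g, and G contributions.
    """
    block = (nx // px) * (ny // py) * t_cell  # compute time per block
    step = block + t_msg                      # one pipeline stage
    return nsweeps * (px + py - 1) * step
```

For large per-processor blocks the pipeline-fill overhead (the extra px + py - 2 stages) is amortized, which is consistent with the abstract's conclusion that single-node efficiency, not communication, dominates at the largest problem size.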
The pointwise Hellmann-Feynman theorem
Directory of Open Access Journals (Sweden)
David Carfì
2010-02-01
Full Text Available In this paper we study the Hellmann-Feynman theorem of Quantum Mechanics from a topological point of view. The goal of the paper is twofold: on the one hand, we emphasize the role of the strong topology in the classic version of the theorem in Hilbert spaces, as regards the kind of convergence required on the space of continuous linear endomorphisms, which contains the space of (continuous) observables. On the other hand, we state and prove a new pointwise version of the classic Hellmann-Feynman theorem. This new version is not yet present in the literature and follows the idea of A. Bohm concerning the topology that is desirable to use in Quantum Mechanics. It is indeed beyond question that this non-trivial new version of the Hellmann-Feynman theorem is the ideal one for continuous observables on Hilbert spaces, both from a theoretical point of view, since it is the strongest version obtainable in this context (the pointwise topology is the coarsest one compatible with the linear structure of the space of continuous observables), and from a practical point of view, because the pointwise topology is the easiest to use among topologies: it brings the problems back to the Hilbert space topology. Moreover, we remark that this basic theorem of Quantum Mechanics, in its most desirable form, is deeply interlaced with two cornerstones of Functional Analysis: the Banach-Steinhaus theorem and the Baire theorem.
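For reference, the classical Hilbert-space statement that both versions of the theorem refine is the standard textbook form: for a parameter-dependent Hamiltonian with a normalized eigenstate,

```latex
% Hellmann-Feynman theorem (standard Hilbert-space form):
% for H_\lambda with normalized eigenstate \psi_\lambda,
H_\lambda \psi_\lambda = E_\lambda \psi_\lambda,
\qquad \langle \psi_\lambda , \psi_\lambda \rangle = 1
\;\Longrightarrow\;
\frac{dE_\lambda}{d\lambda}
  = \left\langle \psi_\lambda ,\,
      \frac{\partial H_\lambda}{\partial \lambda}\, \psi_\lambda \right\rangle .
```

The paper's contribution concerns which topology on the space of observables makes the differentiation step legitimate, not the algebraic identity itself.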
Pointwise probability reinforcements for robust statistical inference.
Frénay, Benoît; Verleysen, Michel
2014-02-01
Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR, and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.
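As an illustration of the reinforcement idea, here is a minimal sketch for robustly estimating a Gaussian mean. The penalized objective, the closed-form reinforcement update, and the parameter names are illustrative assumptions, not the authors' exact formulation.

```python
import math

def robust_mean(xs, lam=5.0, sigma=1.0, iters=50):
    """Estimate a Gaussian mean with pointwise reinforcements (sketch).

    Each observation's likelihood is reinforced by r_i >= 0 under an L1
    penalty lam * r_i: points whose Gaussian density falls below 1/lam
    receive a positive reinforcement and thereby lose influence on the
    mean.  Update rules are illustrative, not the paper's algorithm.
    """
    mu = sum(xs) / len(xs)
    for _ in range(iters):
        weights = []
        for x in xs:
            dens = (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
                    / (sigma * math.sqrt(2 * math.pi)))
            r = max(0.0, 1.0 / lam - dens)     # optimal reinforcement given mu
            weights.append(dens / (dens + r))  # reinforced points weigh < 1
        mu = sum(w * x for w, x in zip(weights, xs)) / sum(weights)
    return mu
```

The per-point reinforcement r doubles as an abnormality degree, mirroring the manual outlier-filtering use mentioned in the abstract.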
Eckmann, Jean-Pierre
1999-01-01
In these lectures, I will give an overview of the mathematical and physical aspects of deterministic chaotic systems. Starting from simple examples, I plan to cover some crucial notions of the theory, such as hyperbolicity, shadowing, and ergodic properties.
Rapid pointwise stabilization of vibrating strings and beams
Directory of Open Access Journals (Sweden)
Alia BARHOUMI
2009-11-01
Full Text Available Applying a general construction and using earlier results on observability, we prove, under rather general assumptions, rapid pointwise stabilization of vibrating strings and beams.
Morales, Esteban; de Leon, John Mark S; Abdollahi, Niloufar; Yu, Fei; Nouri-Mahdavi, Kouros; Caprioli, Joseph
2016-03-01
The study was conducted to evaluate threshold smoothing algorithms to enhance prediction of the rates of visual field (VF) worsening in glaucoma. We studied 798 patients with primary open-angle glaucoma and 6 or more years of follow-up who underwent 8 or more VF examinations. Thresholds at each VF location for the first 4 years or the first half of the follow-up time (whichever was greater) were smoothed with clusters defined by the nearest neighbor (NN), Garway-Heath, Glaucoma Hemifield Test (GHT), and weighting by the correlation of rates at all other VF locations. Thresholds were regressed with a pointwise exponential regression (PER) model and a pointwise linear regression (PLR) model. Smaller root mean square error (RMSE) values of the differences between the observed and predicted thresholds at the last two follow-ups indicated better model predictions. The mean (SD) follow-up times for the smoothing and prediction phases were 5.3 (1.5) and 10.5 (3.9) years. The mean RMSE values for the PER and PLR models were: unsmoothed data, 6.09 and 6.55; NN, 3.40 and 3.42; Garway-Heath, 3.47 and 3.48; GHT, 3.57 and 3.74; and correlation of rates, 3.59 and 3.64. Smoothed VF data predicted better than unsmoothed data. Nearest neighbor provided the best predictions; PER also predicted consistently more accurately than PLR. Smoothing algorithms should be used when forecasting VF results with PER or PLR. The application of smoothing algorithms to VF data can improve forecasting at VF points to assist in treatment decisions.
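The PER-versus-PLR comparison at a single VF location can be sketched as follows. The data handling (fit the early visits, hold out the last two, compare RMSE) is simplified relative to the study, and the function names are hypothetical.

```python
import math

def linfit(ts, ys):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return my - slope * mt, slope

def forecast_rmse(ts, dbs, n_hold=2):
    """Fit PLR (linear in dB) and PER (linear in log dB) on the early
    visits; return (RMSE_PER, RMSE_PLR) on the last n_hold visits."""
    fit_t, fit_y = ts[:-n_hold], dbs[:-n_hold]
    a, b = linfit(fit_t, fit_y)                           # PLR
    la, lb = linfit(fit_t, [math.log(y) for y in fit_y])  # PER
    obs = dbs[-n_hold:]
    plr = [a + b * t for t in ts[-n_hold:]]
    per = [math.exp(la + lb * t) for t in ts[-n_hold:]]
    rmse = lambda pred: math.sqrt(
        sum((p - o) ** 2 for p, o in zip(pred, obs)) / n_hold)
    return rmse(per), rmse(plr)
```

On a series that decays exponentially, the PER fit extrapolates exactly while the linear fit drifts, which is the qualitative pattern the study's RMSE table reports.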
A computer program for the pointwise functions generation
International Nuclear Information System (INIS)
Caldeira, Alexandre D.
1995-01-01
A computer program that was developed with the objective of generating pointwise functions, by a combination of tabulated values and/or mathematical expressions, to be used as weighting functions for nuclear data is presented. This simple program can be an important tool for researchers involved in group constants generation. (author). 5 refs, 2 figs
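A minimal sketch of such a generator might combine interpolation of tabulated values with an analytic fallback expression. The report's actual combination rules are not given in the abstract, so this scheme is an assumption.

```python
def weight_function(tabulated, expr, x):
    """Pointwise weighting function: linear interpolation inside a
    sorted table of (x, w) pairs, an analytic expression elsewhere.
    Illustrative; the program's real combination rules may differ."""
    xs = [p[0] for p in tabulated]
    if xs[0] <= x <= xs[-1]:
        for (x0, w0), (x1, w1) in zip(tabulated, tabulated[1:]):
            if x0 <= x <= x1:
                return w0 + (w1 - w0) * (x - x0) / (x1 - x0)
    return expr(x)
```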
Pointwise intensity-based dynamic speckle analysis with binary patterns
Stoykova, Elena; Mateev, Georgy; Nazarova, Dimana; Berberova, Nataliya; Ivanov, Branimir
2017-06-01
Non-destructive detection of physical or biological activity through statistical processing of speckle patterns on the surface of diffusely reflecting objects is an area of active research. Many pointwise intensity-based algorithms have been proposed in recent years. The efficiency of these algorithms is degraded by signal-dependent speckle data and by non-uniform illumination or varying reflectivity across the object, especially when the number of acquired speckle patterns is limited. Pointwise processing of a sequence of 2D images is also time-consuming. In this paper, we propose to transform the acquired speckle images into binary patterns by using, as a sign threshold, the mean intensity value estimated at each spatial point from the temporal sequence of intensities at that point. Activity is characterized by the 2D distribution of a temporal polar correlation function estimated at a given time lag from the binary patterns. Processing of synthetic and experimental data confirmed that the algorithm provides correct activity determination with the same accuracy as the temporal normalized correlation function. It is effective without requiring normalization for a non-uniform intensity distribution in the illuminating laser beam, and it accelerates computation.
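The binarisation-and-correlation scheme can be sketched pointwise as follows. Frame handling and normalisation details are simplifying assumptions.

```python
def binary_activity_map(frames, lag=1):
    """Pointwise activity from a temporal stack of speckle frames
    (list of 2-D lists): each pixel's intensity series is binarised
    against its own temporal mean, then a polar correlation at the
    given lag is estimated from sign agreement.  Illustrative version
    of the algorithm described above."""
    nt, ny, nx = len(frames), len(frames[0]), len(frames[0][0])
    act = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            series = [frames[t][j][i] for t in range(nt)]
            mean = sum(series) / nt
            bits = [1 if v >= mean else -1 for v in series]  # binary pattern
            pairs = zip(bits, bits[lag:])
            act[j][i] = sum(b0 * b1 for b0, b1 in pairs) / (nt - lag)
    return act  # near 1: static point; lower values: higher activity
```

Because each pixel is thresholded against its own temporal mean, a spatially non-uniform illumination profile cancels out, which is the property the abstract highlights.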
Refinement of pointwise linear regression criteria for determining glaucoma progression.
Kummet, Colleen M; Zamba, K D; Doyle, Carrie K; Johnson, Chris A; Wall, Michael
2013-09-19
A variety of pointwise linear regression (PLR) criteria have been proposed for determining glaucomatous visual field progression. However, alternative PLR criteria have been assessed only on a limited basis. The purpose of this study was to evaluate a range of PLR slope and significance criteria to define a clinically useful progression decision rule for longitudinal visual field examinations. Visual field data for each of 140 eyes (one per participant among 96 cases and 44 controls) were evaluated using the Humphrey Field Analyzer II program 24-2 Swedish interactive thresholding algorithm standard test strategy and Goldmann size III stimuli. The pointwise linear regression A2 (PLRA2) method was used to analyze the data, which included nine visual field examinations performed every 6 months for 4 years. Data from the Ocular Hypertension Treatment Study (OHTS) were used to validate the decision rule. Several slope criteria produced specificities of 0.90 or higher, particularly slope criteria of less than -1.2 dB/y. The decision rule could be refined by requiring a stricter slope criterion, such as less than -1.2 dB/y, and relaxing the significance criterion to P < 0.04. Increasing the hit rate of PLR will be useful for early detection and treatment of glaucoma.
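A PLRA2-style decision rule combining a slope criterion with a significance criterion can be sketched as below. The thresholds echo the values discussed in the abstract, but the function, its interface, and the minimum-locations parameter are illustrative assumptions.

```python
def plr_progressing(points, slope_crit=-1.2, p_crit=0.04, min_points=1):
    """Flag an eye as progressing if at least `min_points` VF locations
    have a regression slope below slope_crit (dB/y) with significance
    p below p_crit.  `points` is a list of (slope, p) pairs, one per VF
    location; all parameter defaults here are illustrative."""
    hits = sum(1 for s, p in points if s < slope_crit and p < p_crit)
    return hits >= min_points
```

Tightening the slope criterion raises specificity (fewer stable points qualify) while relaxing the p-value criterion raises the hit rate, which is the trade-off the study explores.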
Directory of Open Access Journals (Sweden)
W. Łenski
2015-01-01
Full Text Available Results generalizing some theorems on (N, p_n)(E, γ) summability are shown. The same degrees of pointwise approximation as in earlier papers are obtained under weaker assumptions on the functions considered and the summability methods examined. From the pointwise results presented, an estimate on norm approximation is derived. Some special cases are also formulated as corollaries.
A posteriori pointwise error estimates for the boundary element method
Energy Technology Data Exchange (ETDEWEB)
Paulino, G.H. [Cornell Univ., Ithaca, NY (United States). School of Civil and Environmental Engineering; Gray, L.J. [Oak Ridge National Lab., TN (United States); Zarikian, V. [Univ. of Central Florida, Orlando, FL (United States). Dept. of Mathematics
1995-01-01
This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g., potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two-dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.
A Point-Wise Quantification of Asymmetry Using Deformation Fields
DEFF Research Database (Denmark)
Ólafsdóttir, Hildur; Lanche, Stephanie; Darvann, Tron Andre
2007-01-01
of the resulting displacement vectors on the left and right side of the symmetry plane, gives a point-wise measure of asymmetry. The asymmetry measure was applied to the study of Crouzon syndrome using Micro CT scans of genetically modified mice. Crouzon syndrome is characterised by the premature fusion of cranial...... sutures, which gives rise to a highly asymmetric growth. Quantification and localisation of this asymmetry is of high value with respect to surgery planning and treatment evaluation. Using the proposed method, asymmetry was calculated in each point of the surface of Crouzon mice and wild-type mice...... (controls). Asymmetry appeared in similar regions for the two groups but the Crouzon mice were found significantly more asymmetric. The localisation ability of the method was in good agreement with ratings from a clinical expert. Validating the quantification ability is a less trivial task due to the lack...
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical...... games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem....
Zanni, Martin Thomas; Damrauer, Niels H.
2010-07-20
A multidimensional spectrometer for the infrared, visible, and ultraviolet regions of the electromagnetic spectrum, and a method for making multidimensional spectroscopic measurements in the infrared, visible, and ultraviolet regions of the electromagnetic spectrum. The multidimensional spectrometer facilitates measurements of inter- and intra-molecular interactions.
A Pointwise Dimension Analysis of the Las Campanas Redshift Survey
Best, J. S.
1999-12-01
The modern motivation for fractal geometry may best be summed up by this quote of Benoit Mandelbrot: ``Mountains are not cones, clouds are not spheres, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.'' Fractals are, in simplest terms, ``objects which are (approximately) self-similar on all scales.'' The renewed modern interest in fractals has found as one of its applications the study of large-scale structure, giving a quantitative descriptive scheme to ideas that had been expressed qualitatively as early as the 1920s. This paper presents the preliminary results of an analysis of the structure of the Las Campanas Redshift Survey, or LCRS. LCRS is an approximately 26000 galaxy survey (surveyed as six declination slices) that has been studied extensively over the past few years, with an eye towards understanding large-scale structure. For this analysis, I have used the pointwise dimension, an easy-to-apply fractal statistic which has been previously used to study cluster interiors, galactic distributions, and cluster distributions. The present analysis has been performed to serve as a guide for the study of future large redshift surveys. This research has been funded by National Science Foundation grant AST-9808608.
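The pointwise dimension can be estimated at a chosen galaxy as the slope of log-count versus log-radius. The following sketch assumes 2-D positions and hand-picked radii; the survey analysis itself works with 3-D redshift-space positions and more careful radius selection.

```python
import math

def pointwise_dimension(points, center, radii):
    """Estimate the pointwise dimension at `center` as the
    least-squares slope of log N(r) versus log r, where N(r) counts
    points within distance r.  A simple estimator in the spirit of the
    statistic described above."""
    logs = []
    for r in radii:
        n = sum(1 for p in points if math.dist(p, center) <= r)
        logs.append((math.log(r), math.log(n)))
    mx = sum(x for x, _ in logs) / len(logs)
    my = sum(y for _, y in logs) / len(logs)
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))
```

For a uniform 2-D distribution the estimate approaches 2; fractal clustering shows up as a value below the embedding dimension.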
Czech Academy of Sciences Publication Activity Database
Světlák, M.; Bob, P.; Roman, R.; Ježek, S.; Damborská, A.; Chládek, Jan; Shaw, D. J.; Kukleta, M.
2013-01-01
Roč. 62, č. 6 (2013), s. 711-719 ISSN 0862-8408 Institutional support: RVO:68081731 Keywords: electrodermal activity * pointwise transinformation * autonomic nervous system * asymmetry * stress Subject RIV: CE - Biochemistry Impact factor: 1.487, year: 2013
Advances in stochastic and deterministic global optimization
Zhigljavsky, Anatoly; Žilinskas, Julius
2016-01-01
Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas, a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function and the development and implementation of efficient algorithms for global optimization with single and multiple objectives.
Pointwise functions for flexible implementation of crustal deformation physics in PyLith
Aagaard, B.; Knepley, M.; Williams, C. A.
2015-12-01
The next stage of development for PyLith, a flexible, open-source finite-element code (http://geodynamics.org/cig/software/pylith/) for modeling quasi-static and dynamic crustal deformation with an emphasis on earthquake faulting, focuses on refactoring the code to provide greater flexibility in support of a broader range of physics, discretizations, and optimizations for a variety of computer hardware. We separate the finite-element integration into a discretization-specific portion and discretization-independent pointwise functions associated with the governing equations. The discretization-specific portion is designed to accommodate arbitrary-order finite elements and multiple implementations for optimization targeting specific hardware (e.g., CPU and GPU). The pointwise functions encapsulate the physics, including the governing equations and rheologies. Users can easily extend the code by adding new pointwise functions to implement different rheologies and/or governing equations. PyLith currently includes pointwise functions for quasi-static and dynamic elasticity for several elastic, viscoelastic, and elastoplastic rheologies. We plan to add pointwise functions for coupling of elasticity with fluid flow and incompressible elasticity. Tight integration with the Portable, Extensible Toolkit for Scientific Computation (PETSc) provides support for a wide range of linear and nonlinear solvers and time-stepping algorithms.
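The separation between a discretization-specific integration loop and discretization-independent pointwise kernels might be sketched as follows. Names and signatures here are illustrative assumptions, not PyLith's or PETSc's actual API.

```python
def integrate_residual(mesh_quadrature, f0):
    """Discretization-specific loop that sums a
    discretization-independent pointwise residual kernel f0 over
    quadrature points.  mesh_quadrature yields
    (point, weight, field value, field gradient) tuples; swapping the
    loop (CPU, GPU, higher-order elements) leaves f0 untouched."""
    total = 0.0
    for x, w, u, grad_u in mesh_quadrature:
        total += w * f0(u, grad_u, x)
    return total

# A hypothetical pointwise kernel: a strain-energy-like density for a
# toy 1-D problem.  Real kernels would encode a governing equation and
# rheology.
hooke = lambda u, grad_u, x: grad_u * grad_u
```

The point of the design is that new physics means writing a new `f0`, never touching the integration loop.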
International Nuclear Information System (INIS)
1990-01-01
In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether for specific tissues the present dose limits or annual limits of intake based on Q values, are adequate to prevent deterministic effects. (author)
Pointwise Multipliers on Spaces of Homogeneous Type in the Sense of Coifman and Weiss
Directory of Open Access Journals (Sweden)
Yanchang Han
2014-01-01
homogeneous type in the sense of Coifman and Weiss, pointwise multipliers of inhomogeneous Besov and Triebel-Lizorkin spaces are obtained. We make no additional assumptions on the quasi-metric or the doubling measure. Hence, the results of this paper extend earlier related results to a more general setting.
Deterministic Global Optimization
Scholz, Daniel
2012-01-01
This monograph deals with a general class of solution approaches in deterministic global optimization, namely the geometric branch-and-bound methods which are popular algorithms, for instance, in Lipschitzian optimization, d.c. programming, and interval analysis.It also introduces a new concept for the rate of convergence and analyzes several bounding operations reported in the literature, from the theoretical as well as from the empirical point of view. Furthermore, extensions of the prototype algorithm for multicriteria global optimization problems as well as mixed combinatorial optimization
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them......, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...... for finding optimal strategies in such games. The existence of a linear time comparison-based algorithm remains an open problem....
Directory of Open Access Journals (Sweden)
Alexis Cedeño Trujillo
2006-04-01
Full Text Available Data Warehousing is a technology for storing large volumes of data over a broad time horizon to support decision making. Because of its analytical orientation, it imposes a different kind of processing from that of operational systems and requires a database design closer to the end users' view, making information retrieval and navigation easier. This database design is known as the multidimensional model; this article covers its main characteristics.
Height-Deterministic Pushdown Automata
DEFF Research Database (Denmark)
Nowotka, Dirk; Srba, Jiri
2007-01-01
We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class...... of regular languages and still closed under boolean language operations, are considered. Several of such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata...
Deterministic methods in radiation transport
International Nuclear Information System (INIS)
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Balsara, Dinshaw S.; Nkonga, Boniface
2017-10-01
Just as the quality of a one-dimensional approximate Riemann solver is improved by the inclusion of internal sub-structure, the quality of a multidimensional Riemann solver is also similarly improved. Such multidimensional Riemann problems arise when multiple states come together at the vertex of a mesh. The interaction of the resulting one-dimensional Riemann problems gives rise to a strongly-interacting state. We wish to endow this strongly-interacting state with physically-motivated sub-structure. The fastest way of endowing such sub-structure consists of making a multidimensional extension of the HLLI Riemann solver for hyperbolic conservation laws. Presenting such a multidimensional analogue of the HLLI Riemann solver with linear sub-structure for use on structured meshes is the goal of this work. The multidimensional MuSIC Riemann solver documented here is universal in the sense that it can be applied to any hyperbolic conservation law. The multidimensional Riemann solver is made to be consistent with constraints that emerge naturally from the Galerkin projection of the self-similar states within the wave model. When the full eigenstructure in both directions is used in the present Riemann solver, it becomes a complete Riemann solver in a multidimensional sense. I.e., all the intermediate waves are represented in the multidimensional wave model. The work also presents, for the very first time, an important analysis of the dissipation characteristics of multidimensional Riemann solvers. The present Riemann solver results in the most efficient implementation of a multidimensional Riemann solver with sub-structure. Because it preserves stationary linearly degenerate waves, it might also help with well-balancing. Implementation-related details are presented in pointwise fashion for the one-dimensional HLLI Riemann solver as well as the multidimensional MuSIC Riemann solver.
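For context, the classical one-dimensional HLL flux that HLLI-type solvers refine with sub-structure can be sketched for a scalar conservation law. The HLLI and multidimensional MuSIC solvers described above add the intermediate-wave terms that this sketch deliberately omits.

```python
def hll_flux(uL, uR, f, sL, sR):
    """HLL numerical flux for a 1-D scalar conservation law
    u_t + f(u)_x = 0, given left/right states and wave-speed estimates
    sL <= sR.  The single intermediate state carries no sub-structure;
    HLLI-type solvers add linear sub-structure between the waves."""
    if sL >= 0.0:
        return f(uL)   # all waves move right
    if sR <= 0.0:
        return f(uR)   # all waves move left
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)
```

The extra dissipation of the structureless intermediate state is exactly what the sub-structure terms reduce, which is why the dissipation analysis mentioned in the abstract matters.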
Identifiability for the pointwise source detection in Fisher’s reaction–diffusion equation
International Nuclear Information System (INIS)
Belgacem, Faker Ben
2012-01-01
We are interested in the detection of a pointwise source in a class of semi-linear advection–diffusion–reaction equations of Fisher type. The source is determined by its location, which may be steady or unsteady, and its time-dependent intensity. Observations recorded at a couple of points are the available data. One observing station is located upstream of the source and the other downstream. This is a severely ill-posed nonlinear inverse problem. In this paper, we pursue an identifiability result. The process we follow has been developed earlier for the linear model and may be sharpened to operate for the semi-linear equation. It is based on the uniqueness for a parabolic (semi-linear) sideways problem, which is obtained by a suitable unique continuation theorem. We state a maximum principle that turns out to be necessary for our proof. The identifiability is finally obtained for a stationary or a moving source. Many applications may be found in biology, chemical physiology or environmental science. The problem we deal with is the detection of pointwise organic pollution sources in rivers and channels. The basic equation to consider is the one-dimensional biochemical oxygen demand equation, with a nonlinear power growth inhibitor and/or the Michaelis–Menten reaction coefficient. (paper)
Converting point-wise nuclear cross sections to pole representation using regularized vector fitting
Peng, Xingjie; Ducru, Pablo; Liu, Shichang; Forget, Benoit; Liang, Jingang; Smith, Kord
2018-03-01
Direct Doppler broadening of nuclear cross sections in Monte Carlo codes has been widely sought for coupled reactor simulations. One recent approach proposed analytical broadening using a pole representation of the commonly used resonance models and the introduction of a local windowing scheme to improve performance (Hwang, 1987; Forget et al., 2014; Josey et al., 2015, 2016). This pole representation has been achieved in the past by converting resonance parameters in the evaluation nuclear data library into poles and residues. However, cross sections of some isotopes are only provided as point-wise data in ENDF/B-VII.1 library. To convert these isotopes to pole representation, a recent approach has been proposed using the relaxed vector fitting (RVF) algorithm (Gustavsen and Semlyen, 1999; Gustavsen, 2006; Liu et al., 2018). This approach however needs to specify ahead of time the number of poles. This article addresses this issue by adding a poles and residues filtering step to the RVF procedure. This regularized VF (ReV-Fit) algorithm is shown to efficiently converge the poles close to the physical ones, eliminating most of the superfluous poles, and thus enabling the conversion of point-wise nuclear cross sections.
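The pole-representation evaluation and the residue-filtering step can be sketched as follows. The normalisation of the pole expansion varies by convention, so the evaluation formula here is schematic, and the filtering rule (drop poles with small residues) is a simplified stand-in for the regularization in ReV-Fit.

```python
import math

def sigma_from_poles(E, poles, residues):
    """Evaluate a pole-representation cross section at energy E.
    Schematic form sigma(E) ~ (1/E) * sum_j Re[r_j / (p_j - sqrt(E))];
    the exact normalisation is convention-dependent."""
    u = math.sqrt(E)
    return sum((r / (p - u)).real for p, r in zip(poles, residues)) / E

def filter_poles(poles, residues, tol):
    """Drop superfluous poles whose residue magnitude falls below tol,
    mimicking (in simplified form) the filtering step added to vector
    fitting by the ReV-Fit procedure."""
    kept = [(p, r) for p, r in zip(poles, residues) if abs(r) >= tol]
    return [p for p, _ in kept], [r for _, r in kept]
```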
Multidimensional Heat Conduction
DEFF Research Database (Denmark)
Rode, Carsten
1998-01-01
Analytical theory of multidimensional heat conduction. General heat conduction equation in three dimensions. Steay state, analytical solutions. The Laplace equation. Method of separation of variables. Principle of superposition. Shape factors. Transient, multidimensional heat conduction....
Deterministic indexing for packed strings
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye
2017-01-01
Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time...... or query time). In the packed variant the strings are stored with several character in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ......, we show how to preprocess S in O(n) (deterministic) time and space O(n) such that given a packed pattern string of length m we can support queries in (deterministic) time O (m/α + log m + log log σ), where α = w/log σ is the number of characters packed in a word of size w = θ(log n). Our query time...
Nonlinear Markov processes: Deterministic case
International Nuclear Information System (INIS)
Frank, T.D.
2008-01-01
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution
Directory of Open Access Journals (Sweden)
Uysal Gumrah
2016-01-01
Full Text Available In this paper we present some theorems concerning existence and Fatou-type weighted pointwise convergence of nonlinear singular integral operators of the form $(T_\lambda f)(x) = \int_{\mathbb{R}} K_\lambda(t - x;\, f(t))\, dt$, $x \in \mathbb{R}$, $\lambda \in \Lambda$, where $\Lambda \neq \emptyset$ is a set of non-negative indices, at a common generalized Lebesgue point of the function $f \in L_{1,\varphi}(\mathbb{R})$ and the positive weight function $\varphi$. Here, $L_{1,\varphi}(\mathbb{R})$ is the space of all measurable functions for which $\left| f/\varphi \right|$ is integrable on $\mathbb{R}$.
Frankowska, Hélène; Hoehener, Daniel
2017-06-01
This paper is devoted to pointwise second-order necessary optimality conditions for the Mayer problem arising in optimal control theory. We first show that with every optimal trajectory it is possible to associate a solution p(·) of the adjoint system (as in the Pontryagin maximum principle) and a matrix solution W(·) of an adjoint matrix differential equation that satisfy a second-order transversality condition and a second-order maximality condition. These conditions seem to be a natural second-order extension of the maximum principle. We then prove a Jacobson-like necessary optimality condition for general control systems and measurable optimal controls that may be only "partially singular" and may take values on the boundary of control constraints. Finally we investigate the second-order sensitivity relations along optimal trajectories involving both p(·) and W(·).
A fast pointwise strategy for anisotropic wave-mode separation in TI media
Liu, Qiancheng
2017-08-17
The multi-component wavefield contains both compressional and shear waves. Separating wave-modes has many applications in seismic workflows. Conventionally, anisotropic wave-mode separation is implemented by either directly filtering in the wavenumber domain or nonstationary filtering in the space domain, which are computationally expensive. These methods could be categorized into the pseudo-derivative family and only work well within Finite Difference (FD) methods. In this paper, we establish a relationship between group-velocity direction and polarity direction and propose a method, which could go beyond modeling by FD. In particular, we are interested in performing wave-mode separation in a Spectral Element Method (SEM), which is widely used for seismic wave propagation on various scales. The separation is implemented pointwise, independent of its neighbor points, suitable for running in parallel. Moreover, no correction for amplitude and phase changes caused by the derivative operator is required. We have verified our scheme using numerical examples.
Directory of Open Access Journals (Sweden)
Irwin Yousept
2010-07-01
Full Text Available An optimal control problem arising in the context of 3D electromagnetic induction heating is investigated. The state equation is given by a quasilinear stationary heat equation coupled with a semilinear time-harmonic eddy current equation. The temperature-dependent electrical conductivity and the presence of pointwise inequality state-constraints represent the main challenge of the paper. In the first part of the paper, the existence and regularity of the state are addressed. The second part of the paper deals with the analysis of the corresponding linearized equation. Some sufficient conditions are presented which guarantee the solvability of the linearized system. The final part of the paper is concerned with the optimal control. The aim of the optimization is to find the optimal voltage such that a desired temperature can be achieved optimally. The corresponding first-order necessary optimality condition is presented.
Measuring global oil trade dependencies: An application of the point-wise mutual information method
International Nuclear Information System (INIS)
Kharrazi, Ali; Fath, Brian D.
2016-01-01
Oil trade is one of the most vital networks in the global economy. In this paper, we analyze the 1998–2012 oil trade networks using the point-wise mutual information (PMI) method and determine the pairwise trade preferences and dependencies. Using examples of the USA's trade partners, this research demonstrates the usefulness of the PMI method as an additional methodological tool to evaluate the outcomes of countries' decisions to engage preferred trading partners. A positive PMI value indicates a trade preference, where trade is larger than would be expected. For example, in 2012 the USA imported 2,548.7 kbpd of oil from Canada, against an expected 358.5 kbpd. Conversely, a negative PMI value indicates a trade dis-preference, where the amount of trade is smaller than would be expected. For example, the 15-year average of annual PMI between Saudi Arabia and the USA is −0.130, and between Russia and the USA −1.596. We argue that discrepancies between actual trade and the neutral model can be related to three primary factors: position, price, and politics. The PMI can quantify the political success or failure of trade preferences and can more accurately account for temporal variation of interdependencies. - Highlights: • We analyzed global oil trade networks using the point-wise mutual information method. • We identified position, price, & politics as drivers of oil trade preference. • The PMI method is useful in research on complex trade networks and dependency theory. • A time-series analysis of PMI can track dependencies & evaluate policy decisions.
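The abstract does not spell out the exact normalization used; the following is a minimal, hypothetical sketch of the point-wise mutual information idea applied to a trade matrix, taking observed trade shares as the joint distribution and products of marginal shares as the neutral expectation. The `flows` data are invented for illustration only.

```python
import math

def pmi(flows):
    """Point-wise mutual information for a weighted trade network.

    flows: dict mapping (exporter, importer) -> trade volume.
    Returns a dict mapping each pair to log(observed share / expected share),
    where the expected share is the product of the marginal shares.
    """
    total = sum(flows.values())
    out_share, in_share = {}, {}
    for (e, i), v in flows.items():
        out_share[e] = out_share.get(e, 0.0) + v / total
        in_share[i] = in_share.get(i, 0.0) + v / total
    return {
        (e, i): math.log((v / total) / (out_share[e] * in_share[i]))
        for (e, i), v in flows.items()
    }

# Toy network: Canada ships a disproportionate share of its exports to the USA.
flows = {
    ("CAN", "USA"): 8.0,
    ("CAN", "CHN"): 1.0,
    ("SAU", "USA"): 2.0,
    ("SAU", "CHN"): 7.0,
}
scores = pmi(flows)
# PMI > 0: trade preference; PMI < 0: trade dis-preference.
```

With these invented numbers the Canada–USA pair comes out positive (preferred) and the Saudi Arabia–USA pair negative, mirroring the sign convention described in the abstract.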
Deterministic extraction from weak random sources
Gabizon, Ariel
2011-01-01
In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.
Deterministic hydrodynamics: Taking blood apart
Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.
2006-10-01
We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. We use the deterministic array technology we developed, which separates white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. cells | plasma | separation | microfabrication
Stochastic and deterministic trend models
Estela Bee Dagum; Camilo Dagum
2008-01-01
In this paper we provide an overview of some trend models formulated for global and local estimation. Global trend models are based on the assumption that the trend or nonstationary mean of a time series can be approximated closely by simple functions of time over the entire span of the series. The most common representations of deterministic and stochastic trends are introduced. In particular, for the former we analyze polynomial and transcendental functions, whereas for the latter we assume t...
Deterministic Function Computation with Chemical Reaction Networks*
Chen, Ho-Lin; Doty, David; Soloveichik, David
2013-01-01
Chemical reaction networks (CRNs) formally model chemistry in a well-mixed solution. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising language for the design of artificial molecular control circuitry. Nonetheless, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors exhibited by CRNs is not well understood. CRNs have been shown to be efficiently Turing-universal (i.e., able to simulate arbitrary algorithms) when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates (a multi-dimensional generalization of “eventually periodic” sets). We introduce the notion of function, rather than predicate, computation by representing the output of a function f : ℕk → ℕl by a count of some molecular species, i.e., if the CRN starts with x1, …, xk molecules of some “input” species X1, …, Xk, the CRN is guaranteed to converge to having f(x1, …, xk) molecules of the “output” species Y1, …, Yl. We show that a function f : ℕk → ℕl is deterministically computed by a CRN if and only if its graph {(x, y) ∈ ℕk × ℕl ∣ f(x) = y} is a semilinear set. Finally, we show that each semilinear function f (a function whose graph is a semilinear set) can be computed by a CRN on input x in expected time O(polylog ∥x∥1). PMID:25383068
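As a toy illustration (not taken from the paper), the single reaction X → 2Y deterministically computes the semilinear function f(x) = 2x: no matter in which order applicable reactions fire, the system converges to 2x molecules of the output species Y. A minimal pure-Python simulator, with invented function and species names:

```python
import random

def simulate_crn(reactions, state, rng, max_steps=10_000):
    """Run a CRN until no reaction is applicable (stable convergence).

    reactions: list of (consumed, produced) pairs, each a dict of
               species -> molecule counts.
    state:     dict species -> current molecule count (mutated in place).
    """
    for _ in range(max_steps):
        applicable = [
            r for r in reactions
            if all(state.get(s, 0) >= n for s, n in r[0].items())
        ]
        if not applicable:
            break  # no reaction can fire: the output is final
        consumed, produced = rng.choice(applicable)
        for s, n in consumed.items():
            state[s] -= n
        for s, n in produced.items():
            state[s] = state.get(s, 0) + n
    return state

# X -> 2Y computes f(x) = 2x in the count of output species Y:
final = simulate_crn([({"X": 1}, {"Y": 2})], {"X": 5, "Y": 0}, random.Random(0))
# final == {"X": 0, "Y": 10}
```

The graph of f(x) = 2x is a semilinear set, so by the paper's main theorem this function is exactly the kind a CRN can compute without error.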
A deterministic width function model
Directory of Open Access Journals (Sweden)
C. E. Puente
2003-01-01
Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States, that the FM approach may also be used to closely approximate existing width functions.
Vsevolozhskaya, Olga A; Greenwood, Mark C; Powell, Scott L; Zaykin, Dmitri V
2015-03-01
In this paper we describe a coherent multiple testing procedure for correlated test statistics such as are encountered in functional linear models. The procedure makes use of two different p-value combination methods: the Fisher combination method and the Šidák correction-based method. P-values for Fisher's and Šidák's test statistics are estimated through resampling to cope with the correlated tests. Building upon these two existing combination methods, we propose the smallest p-value as a new test statistic for each hypothesis. The closure principle is incorporated along with the new test statistic to obtain the overall p-value and appropriately adjust the individual p-values. Furthermore, a shortcut version for the proposed procedure is detailed, so that individual adjustments can be obtained even for a large number of tests. The motivation for developing the procedure comes from a problem of point-wise inference with smooth functional data where tests at neighboring points are related. A simulation study verifies that the methodology performs well in this setting. We illustrate the proposed method with data from a study on the aerial detection of the spectral effect of below ground carbon dioxide leakage on vegetation stress via spectral responses.
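The procedure estimates its p-values by resampling, which is not reproduced here; as a self-contained sketch, the two standard combination rules it builds on can be written as follows (using the closed-form chi-square tail for even degrees of freedom; the function names are ours):

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) ~ chi-square with 2k df.

    For even degrees of freedom 2k, the chi-square survival function has
    the closed form exp(-x/2) * sum_{j<k} (x/2)^j / j!.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= half / j
        total += term
    return math.exp(-half) * total

def sidak_combined_pvalue(pvalues):
    """Sidak correction applied to the minimum p-value of k tests."""
    k = len(pvalues)
    return 1.0 - (1.0 - min(pvalues)) ** k

# Sanity check: with a single test, both rules are the identity.
assert abs(fisher_combined_pvalue([0.5]) - 0.5) < 1e-12
assert abs(sidak_combined_pvalue([0.5]) - 0.5) < 1e-12
```

These closed forms assume independent tests; the paper's contribution is precisely to recalibrate them by resampling when the tests are correlated, as with neighboring points of a smooth functional response.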
Světlák, M; Bob, P; Roman, R; Ježek, S; Damborská, A; Chládek, J; Shaw, D J; Kukleta, M
2013-01-01
In this study, we tested the hypothesis that experimental stress induces a specific change of left-right electrodermal activity (EDA) coupling pattern, as indexed by pointwise transinformation (PTI). Further, we hypothesized that this change is associated with scores on psychometric measures of the chronic stress-related psychopathology. Ninety-nine university students underwent bilateral measurement of EDA during rest and stress-inducing Stroop test and completed a battery of self-report measures of chronic stress-related psychopathology. A significant decrease in the mean PTI value was the prevalent response to the stress conditions. No association between chronic stress and PTI was found. Raw scores of psychometric measures of stress-related psychopathology had no effect on either the resting levels of PTI or the amount of stress-induced PTI change. In summary, acute stress alters the level of coupling pattern of cortico-autonomic influences on the left and right sympathetic pathways to the palmar sweat glands. Different results obtained using the PTI, EDA laterality coefficient, and skin conductance level also show that the PTI algorithm represents a new analytical approach to EDA asymmetry description.
Zhao, Yang; Wang, Junlan; Wu, Xiaoping; Williams, Fred W.; Schmidt, Richard J.
1997-12-01
Based on multi-scattering speckle theory, the speckle fields generated by plant specimens irradiated by laser light have been studied using a pointwise method. In addition, a whole-field method has been developed with which entire botanical specimens may be studied. Results are reported from measurements made on tomato and apple fruits, orange peel, leaves of tobacco seedlings, leaves of shihu seedlings (a Chinese medicinal herb), soy-bean sprouts, and leaves from an unidentified trailing houseplant. Although differences were observed in the temporal fluctuations of speckles that could be ascribed to differences in age and vitality, the growing tip of the bean sprout and the shihu seedling both generated virtually stationary speckles such as were observed from boiled orange peel and from localised heat-damaged regions on apple fruit. Our results suggest that both the identity of the botanical specimen and the site at which measurements are taken are likely to critically affect the observation or otherwise of temporal fluctuations of laser speckles.
Directory of Open Access Journals (Sweden)
Mihaela MUNTEAN
2006-01-01
Full Text Available Using SQL you can manipulate multidimensional data and extract that data into a relational table. There are many PL/SQL packages that you can use directly in SQL*Plus or indirectly in Analytic Workspace Manager and OLAP Worksheet. In this article I discuss some methods that you can use for manipulating and extracting multidimensional data.
Deterministic global optimization an introduction to the diagonal approach
Sergeyev, Yaroslav D
2017-01-01
This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...
Energy Technology Data Exchange (ETDEWEB)
Goreac, Dan, E-mail: Dan.Goreac@u-pem.fr; Kobylanski, Magdalena, E-mail: Magdalena.Kobylanski@u-pem.fr; Martinez, Miguel, E-mail: Miguel.Martinez@u-pem.fr [Université Paris-Est, LAMA (UMR 8050), UPEMLV, UPEC, CNRS (France)
2016-10-15
We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending the networks and Krylov's "shaking the coefficients" method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron's preconization for the (unique) candidate to viscosity solution.
Deterministic Bragg Coherent Diffraction Imaging.
Pavlov, Konstantin M; Punegov, Vasily I; Morgan, Kaye S; Schmalz, Gerd; Paganin, David M
2017-04-25
A deterministic variant of Bragg Coherent Diffraction Imaging is introduced in its kinematical approximation, for X-ray scattering from an imperfect crystal whose imperfections span no more than half of the volume of the crystal. This approach provides a unique analytical reconstruction of the object's structure factor and displacement fields from the 3D diffracted intensity distribution centred around any particular reciprocal lattice vector. The simple closed-form reconstruction algorithm, which requires only one multiplication and one Fourier transformation, is not restricted by assumptions of smallness of the displacement field. The algorithm performs well in simulations incorporating a variety of conditions, including both realistic levels of noise and departures from ideality in the reference (i.e. imperfection-free) part of the crystal.
Directory of Open Access Journals (Sweden)
Daniel M Spagnolo
2016-01-01
Full Text Available Background: Measures of spatial intratumor heterogeneity are potentially important diagnostic biomarkers for cancer progression, proliferation, and response to therapy. Spatial relationships among cells including cancer and stromal cells in the tumor microenvironment (TME are key contributors to heterogeneity. Methods: We demonstrate how to quantify spatial heterogeneity from immunofluorescence pathology samples, using a set of 3 basic breast cancer biomarkers as a test case. We learn a set of dominant biomarker intensity patterns and map the spatial distribution of the biomarker patterns with a network. We then describe the pairwise association statistics for each pattern within the network using pointwise mutual information (PMI and visually represent heterogeneity with a two-dimensional map. Results: We found a salient set of 8 biomarker patterns to describe cellular phenotypes from a tissue microarray cohort containing 4 different breast cancer subtypes. After computing PMI for each pair of biomarker patterns in each patient and tumor replicate, we visualize the interactions that contribute to the resulting association statistics. Then, we demonstrate the potential for using PMI as a diagnostic biomarker, by comparing PMI maps and heterogeneity scores from patients across the 4 different cancer subtypes. Estrogen receptor positive invasive lobular carcinoma patient, AL13-6, exhibited the highest heterogeneity score among those tested, while estrogen receptor negative invasive ductal carcinoma patient, AL13-14, exhibited the lowest heterogeneity score. Conclusions: This paper presents an approach for describing intratumor heterogeneity, in a quantitative fashion (via PMI, which departs from the purely qualitative approaches currently used in the clinic. PMI is generalizable to highly multiplexed/hyperplexed immunofluorescence images, as well as spatial data from complementary in situ methods including FISSEQ and CyTOF, sampling many different
Multidimensional Risk Analysis: MRISK
McCollum, Raymond; Brown, Douglas; O'Shea, Sarah Beth; Reith, William; Rabulan, Jennifer; Melrose, Graeme
2015-01-01
Multidimensional Risk (MRISK) calculates the combined multidimensional score using Mahalanobis distance. MRISK accounts for covariance between consequence dimensions, which de-conflicts the interdependencies of consequence dimensions, providing a clearer depiction of risks. Additionally, in the event the dimensions are not correlated, Mahalanobis distance reduces to Euclidean distance normalized by the variance and, therefore, represents the most flexible and optimal method to combine dimensions. MRISK is currently being used in NASA's Environmentally Responsible Aviation (ERA) project to assess risk and prioritize scarce resources.
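The MRISK scoring details are internal to the project; a minimal sketch of the distance it is built on, assuming just two consequence dimensions and invented numbers, shows the reduction to variance-normalized Euclidean distance when the covariance is diagonal:

```python
import math

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance sqrt((x-m)^T C^-1 (x-m)) in two dimensions."""
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # Explicit inverse of the 2x2 covariance matrix.
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    return math.sqrt(dx * (ia * dx + ib * dy) + dy * (ic * dx + id_ * dy))

# Uncorrelated dimensions (diagonal covariance): the distance reduces to
# Euclidean distance with each axis scaled by its standard deviation:
# sqrt((2/2)^2 + (3/3)^2) = sqrt(2).
d_uncorr = mahalanobis_2d((2.0, 3.0), (0.0, 0.0), [[4.0, 0.0], [0.0, 9.0]])
```

With off-diagonal covariance the same formula automatically discounts the part of a high score on one dimension that is already explained by the other, which is the "de-conflicting" behaviour the abstract describes.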
Applied multidimensional systems theory
Bose, Nirmal K
2017-01-01
Revised and updated, this concise new edition of the pioneering book on multidimensional signal processing is ideal for a new generation of students. Multidimensional systems or m-D systems are the necessary mathematical background for modern digital image processing with applications in biomedicine, X-ray technology and satellite communications. Serving as a firm basis for graduate engineering students and researchers seeking applications in mathematical theories, this edition eschews detailed mathematical theory not useful to students. Presentation of the theory has been revised to make it more readable for students, and introduce some new topics that are emerging as multidimensional DSP topics in the interdisciplinary fields of image processing. New topics include Groebner bases, wavelets, and filter banks.
Deterministic equation solving over finite fields
Woestijne, Christiaan Evert van de
2006-01-01
It is shown how to solve diagonal forms in many variables over finite fields by means of a deterministic efficient algorithm. Applications to norm equations, quadratic forms, and elliptic curves are given.
A Deterministic and Polynomial Modified Perceptron Algorithm
Directory of Open Access Journals (Sweden)
Olof Barr
2006-01-01
Full Text Available We construct a modified perceptron algorithm that is deterministic, polynomial and also as fast as previously known algorithms. The algorithm runs in time O(mn³ log n log(1/ρ)), where m is the number of examples, n the number of dimensions and ρ is approximately the size of the margin. We also construct a non-deterministic modified perceptron algorithm running in time O(mn² log n log(1/ρ)).
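The authors' modified algorithm is not reproduced here; for contrast, the classic perceptron update it modifies, whose mistake bound scales as 1/ρ² rather than polynomially in log(1/ρ), can be sketched as:

```python
def perceptron(examples, max_sweeps=1000):
    """Classic perceptron on linearly separable data.

    examples: list of (x, y) with x a tuple of floats and y in {-1, +1}.
    Returns a weight vector w with sign(w . x) == y for every example,
    or None if no separator is found within max_sweeps passes.
    """
    n = len(examples[0][0])
    w = [0.0] * n
    for _ in range(max_sweeps):
        mistakes = 0
        for x, y in examples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                # Update rule: move w toward (or away from) the mistake.
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:
            return w
    return None

# Separable toy data: the sign of the first coordinate gives the label.
data = [((1.0, 0.2), 1), ((0.8, -0.5), 1), ((-1.0, 0.3), -1), ((-0.7, -0.4), -1)]
w = perceptron(data)
```

The dependence on the margin ρ is what the modified algorithms attack: the classic version above needs O(1/ρ²) updates in the worst case, while the abstract's variants pay only a log(1/ρ) factor.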
Deterministic chaos in the processor load
International Nuclear Information System (INIS)
Halbiniak, Zbigniew; Jozwiak, Ireneusz J.
2007-01-01
In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case
Javidi, Bahram; Andres, Pedro
2014-01-01
Provides a broad overview of advanced multidimensional imaging systems with contributions from leading researchers in the field Multi-dimensional Imaging takes the reader from the introductory concepts through to the latest applications of these techniques. Split into 3 parts covering 3D image capture, processing, visualization and display, using 1) a Multi-View Approach and 2) a Holographic Approach, followed by a 3rd part addressing other 3D systems approaches, applications and signal processing for advanced 3D imaging. This book describes recent developments, as well as the prospects and
Risk-based and deterministic regulation
International Nuclear Information System (INIS)
Fischer, L.E.; Brown, N.W.
1995-07-01
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect the public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.
Zanarini, Alessandro
2018-01-01
Advances in optical systems nowadays make complex dynamic measurements and modal tests available for lightweight structures, each technique with its own advantages, drawbacks and preferred usage domains. It is thus easier than before to obtain highly spatially defined vibration patterns for many applications in vibration engineering, testing and general product development. The potential of three completely different technologies is here benchmarked on a common test rig and advanced applications. SLDV, dynamic ESPI and hi-speed DIC are here first deployed in a complex and unique test on the estimation of FRFs with high spatial accuracy from a thin vibrating plate. The latter exhibits broad-band dynamics and high modal density in the common frequency domain where the techniques find an operative intersection. A peculiar point-wise comparison is here addressed by means of discrete geometry transforms to put all three technologies on trial at each physical point of the surface. Full field measurement technologies can estimate not only displacement fields on a refined grid, but can also exploit the spatial consistency of the results through neighbouring locations by means of numerical differentiation operators in the spatial domain, to obtain rotational degrees of freedom and superficial dynamic strain distributions with enhanced quality compared to other technologies in the literature. Approaching the task with the aid of superior-quality receptance maps from the three different full field gears, this work calculates and compares rotational and dynamic strain FRFs. Dynamic stress FRFs can be modelled directly from the latter, by means of a constitutive model, avoiding the costly and time-consuming steps of building and tuning a numerical dynamic model of a flexible component or a structure in real-life conditions. Once dynamic stress FRFs are obtained, spectral fatigue approaches can try to predict the life of a component under many excitation conditions. Different
Symbolic Multidimensional Scaling
P.J.F. Groenen (Patrick); Y. Terada
2015-01-01
Multidimensional scaling (MDS) is a technique that visualizes dissimilarities between pairs of objects as distances between points in a low-dimensional space. In symbolic MDS, a dissimilarity is not just a value but can represent an interval or even a histogram. Here,
Numeric invariants from multidimensional persistence
Energy Technology Data Exchange (ETDEWEB)
Skryzalin, Jacek [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlsson, Gunnar [Stanford Univ., Stanford, CA (United States)
2017-05-19
In this paper, we analyze the space of multidimensional persistence modules from the perspectives of algebraic geometry. We first build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, which we construct specifically to capture much of the information which can be gained by using multidimensional persistence over one-dimensional persistence. We argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Lastly, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data.
Deterministic dense coding with partially entangled states
Mozes, Shay; Oppenheim, Jonathan; Reznik, Benni
2005-01-01
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d > 2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d² − 1. We also find that states with less entanglement can have a greater deterministic communication capacity than other, more entangled states.
Multidimensional nonlinear descriptive analysis
Nishisato, Shizuhiko
2006-01-01
Quantification of categorical, or non-numerical, data is a problem that scientists face across a wide range of disciplines. Exploring data analysis in various areas of research, such as the social sciences and biology, Multidimensional Nonlinear Descriptive Analysis presents methods for analyzing categorical data that are not necessarily sampled randomly from a normal population and often involve nonlinear relations. This reference not only provides an overview of multidimensional nonlinear descriptive analysis (MUNDA) of discrete data, it also offers new results in a variety of fields. The first part of the book covers conceptual and technical preliminaries needed to understand the data analysis in subsequent chapters. The next two parts contain applications of MUNDA to diverse data types, with each chapter devoted to one type of categorical data, a brief historical comment, and basic skills peculiar to the data types. The final part examines several problems and then concludes with suggestions for futu...
Visualization of multidimensional database
Lee, Chung
2008-01-01
The concept of multidimensional databases has been extensively researched and widely used in actual database application. It plays an important role in contemporary information technology, but due to the complexity of its inner structure, the database design is a complicated process and users are having a hard time fully understanding and using the database. An effective visualization tool for higher dimensional information system helps database designers and users alike. Most visualization techniques focus on displaying dimensional data using spreadsheets and charts. This may be sufficient for the databases having three or fewer dimensions but for higher dimensions, various combinations of projection operations are needed and a full grasp of total database architecture is very difficult. This study reviews existing visualization techniques for multidimensional database and then proposes an alternate approach to visualize a database of any dimension by adopting the tool proposed by Kiviat for software engineering processes. In this diagramming method, each dimension is represented by one branch of concentric spikes. This paper documents a C++ based visualization tool with extensive use of OpenGL graphics library and GUI functions. Detailed examples of actual databases demonstrate the feasibility and effectiveness in visualizing multidimensional databases.
Introducing Synchronisation in Deterministic Network Models
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.
2006-01-01
The paper addresses performance analysis for distributed real-time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models.
DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS
Directory of Open Access Journals (Sweden)
MICULEAC Melania Elena
2014-06-01
Full Text Available The deterministic methods are quantitative methods whose goal is to capture, through numerical quantification, the mechanisms by which factorial and causal influence relations arise and propagate their effects, where the phenomenon can be expressed through a direct functional cause-effect relation. Functional, deterministic relations are causal relations in which a given value of the characteristic corresponds to a well-defined value of the resulting phenomenon. They can directly express the correlation between the phenomenon and its influence factors, in the form of a function-type mathematical formula.
Deterministic geologic processes and stochastic modeling
International Nuclear Information System (INIS)
Rautman, C.A.; Flint, A.L.
1991-01-01
Recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial distribution of measured values and geostatistical measures of spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. These deterministic features have their origin in the complex, yet logical, interplay of a number of deterministic geologic processes, including magmatic evolution; volcanic eruption, transport, and emplacement; post-emplacement cooling and alteration; and late-stage (diagenetic) alteration. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling. It is unlikely that any single representation of physical properties at the site will be suitable for all modeling purposes. Instead, the same underlying physical reality will need to be described many times, each in a manner conducive to assessing specific performance issues.
Deterministic seismic hazard macrozonation of India
Indian Academy of Sciences (India)
Rock level peak horizontal acceleration (PHA) and spectral accelerations for periods 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in hazard definition has been tackled within a logic-tree framework considering two types of ...
Deterministic algorithms for multi-criteria TSP
Manthey, Bodo; Ogihara, Mitsunori; Tarui, Jun
2011-01-01
We present deterministic approximation algorithms for the multi-criteria traveling salesman problem (TSP). Our algorithms are faster and simpler than the existing randomized algorithms. First, we devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of
LQ control without Riccati equations: deterministic systems
D.D. Yao (David); S. Zhang (Shuzhong); X.Y. Zhou (Xun Yu)
1999-01-01
We study a deterministic linear-quadratic (LQ) control problem over an infinite horizon, and develop a general approach to the problem based on semi-definite programming (SDP) and related duality analysis. This approach allows the control cost matrix R to be non-negative (semi-definite), a
A Numerical Simulation for a Deterministic Compartmental ...
African Journals Online (AJOL)
In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions obtained using Euler's numerical method. Using hypothetical values for the parameters, a program was written in VISUAL BASIC programming language to generate series for the system of difference equations from the ...
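As a sketch of the Euler approach the abstract describes, the following toy integration generates a difference-equation series for a minimal two-compartment susceptible-infected model. The SI structure and all parameter values here are illustrative assumptions, not the paper's actual HIV/AIDS model.

```python
# Hypothetical sketch: forward-Euler discretisation of a minimal
# susceptible-infected (SI) compartmental model. The two-compartment
# structure and the parameter values are assumptions for illustration.
def euler_si(s0, i0, beta, mu, dt, steps):
    """Generate the difference-equation series (S_k, I_k) by Euler's method."""
    s, i = s0, i0
    series = [(s, i)]
    for _ in range(steps):
        n = s + i
        ds = -beta * s * i / n           # new infections leave S
        di = beta * s * i / n - mu * i   # infections enter I, removals leave
        s, i = s + dt * ds, i + dt * di
        series.append((s, i))
    return series

series = euler_si(s0=990.0, i0=10.0, beta=0.3, mu=0.1, dt=0.1, steps=100)
```

Each iteration replaces the derivative by a finite difference over the step `dt`, which is exactly how such models are turned into the "series for the system of difference equations" mentioned above.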
Deterministic dynamics of plasma focus discharges
International Nuclear Information System (INIS)
Gratton, J.; Alabraba, M.A.; Warmate, A.G.; Giudice, G.
1992-04-01
The performance (neutron yield, X-ray production, etc.) of plasma focus discharges fluctuates strongly in series performed with fixed experimental conditions. Previous work suggests that these fluctuations are due to a deterministic ''internal'' dynamics involving degrees of freedom not controlled by the operator, possibly related to adsorption and desorption of impurities from the electrodes. According to these dynamics the yield of a discharge depends on the outcome of the previous ones. We study 8 series of discharges in three different facilities, with various electrode materials and operating conditions. More evidence of a deterministic internal dynamics is found. The fluctuation pattern depends on the electrode materials and other characteristics of the experiment. A heuristic mathematical model that describes adsorption and desorption of impurities from the electrodes and their consequences on the yield is presented. The model predicts steady yield or periodic and chaotic fluctuations, depending on parameters related to the experimental conditions. (author). 27 refs, 7 figs, 4 tabs
Dynamic optimization deterministic and stochastic models
Hinderer, Karl; Stieglitz, Michael
2016-01-01
This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
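The discrete-time deterministic problems that open the book are solved by backward induction: compute the value function at the final stage and step backwards, choosing a maximizing action at each state. A minimal sketch, with a toy state space, actions, and reward of our own choosing (not an example from the book), could look like this:

```python
# Sketch of finite-horizon deterministic dynamic optimization by backward
# induction. States, actions, transition and reward below are toy assumptions.
def backward_induction(states, actions, transition, reward, horizon):
    """Return the value function V[t][s] and a greedy policy for each stage."""
    V = {horizon: {s: 0.0 for s in states}}   # terminal value is zero
    policy = {}
    for t in range(horizon - 1, -1, -1):
        V[t], policy[t] = {}, {}
        for s in states:
            best = max(actions,
                       key=lambda a: reward(s, a) + V[t + 1][transition(s, a)])
            policy[t][s] = best
            V[t][s] = reward(s, best) + V[t + 1][transition(s, best)]
    return V, policy

# Toy example: walk on states 0..4, move +1 or -1 (clamped to the range),
# reward equal to the state reached.
states = range(5)
clamp = lambda s: min(4, max(0, s))
V, pol = backward_induction(
    states, actions=(-1, 1),
    transition=lambda s, a: clamp(s + a),
    reward=lambda s, a: clamp(s + a),
    horizon=3)
```

Starting from state 0 with three stages, always moving right collects rewards 1 + 2 + 3, so `V[0][0]` is 6 and the greedy policy at stage 0 is the +1 action.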
Piecewise deterministic processes in biological models
Rudnicki, Ryszard
2017-01-01
This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...
Deterministic nanoparticle assemblies: from substrate to solution
International Nuclear Information System (INIS)
Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J
2014-01-01
The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)
Deterministic properties of mine tremor aftershocks
CSIR Research Space (South Africa)
Kgarume, TE
2010-10-01
Full Text Available in earthquake generation and rupture mechanisms (Persh and Houston, 2004). Yang and Ben-Zion (2009) found that aftershock productivity has an inverse relationship with the mean heat flow. 2 Deterministic analysis of mine tremor aftershocks 2.1 Mining.... and Houston, H. (2004) Strongly depth-dependent aftershock production in deep earthquakes, Bulletin of the Seismological Society of America, 94, pp. 1808 - 1816. Spottiswoode, S. M. (2000) Aftershocks and foreshocks of mine seismic events, 3rd International...
Introducing Synchronisation in Deterministic Network Models
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.
2006-01-01
The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading... The suggested models are intended for incorporation into an existing analysis tool a.k.a. CyNC based on the MATLAB/SimuLink framework for graphical system analysis and design.
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not defined by the subset construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
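The standard route to the AND-operator is the product construction: run both automata in lockstep and accept when both accept. The sketch below shows this for two DFAs; the product construction is textbook material, and the two example automata over {'a', 'b'} are our own, not the paper's.

```python
# Product construction for the intersection of two DFAs.
# Each DFA is a triple (start, accepting_set, delta) with delta(state, ch) -> state.
def intersect_dfa(d1, d2):
    s1, f1, t1 = d1
    s2, f2, t2 = d2
    start = (s1, s2)
    def delta(q, ch):
        # advance both component automata simultaneously
        return (t1(q[0], ch), t2(q[1], ch))
    accept = lambda q: q[0] in f1 and q[1] in f2   # accept iff both accept
    return start, accept, delta

def runs(product_dfa, word):
    """Run the product DFA on a word and report acceptance."""
    q, accept, delta = product_dfa[0], product_dfa[1], product_dfa[2]
    for ch in word:
        q = delta(q, ch)
    return accept(q)

# DFA 1: even number of 'a'; DFA 2: word ends with 'b'.
even_a = (0, {0}, lambda q, c: (q ^ 1) if c == 'a' else q)
ends_b = (0, {1}, lambda q, c: 1 if c == 'b' else 0)
both = intersect_dfa(even_a, ends_b)
```

The intersection automaton accepts exactly the words in both languages, e.g. "aab" (two a's, ends in b) but not "ab" or "aa".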
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
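For context, the baseline that the deterministic mean-field variant approximates is the standard stochastic EnKF analysis step with perturbed observations. The sketch below shows only that baseline update; the dimensions, noise levels, and linear observation operator are toy choices of ours, not the paper's setting.

```python
import numpy as np

# Minimal stochastic EnKF analysis step for a linear observation y = H x + noise.
# Shown as the baseline that the deterministic mean-field EnKF approximates;
# all dimensions and noise levels below are toy assumptions.
def enkf_update(ensemble, H, R, y, rng):
    """ensemble: (N, d) array of forecast members; returns the analysis ensemble."""
    N = ensemble.shape[0]
    xm = ensemble.mean(axis=0)
    A = ensemble - xm                                   # ensemble anomalies
    P = A.T @ A / (N - 1)                               # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    # perturbed observations: one noisy copy of y per member
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(size=(200, 2))        # forecast ensemble, state dimension d = 2
H = np.array([[1.0, 0.0]])             # observe the first coordinate only
R = np.array([[0.1]])                  # observation noise covariance
y = np.array([1.0])                    # the observation
analysis = enkf_update(ens, H, R, y, rng)
```

After the update the ensemble mean of the observed coordinate is pulled toward the observation, while the unobserved coordinate is adjusted only through the sample cross-covariance.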
Deterministic Search Methods for Computational Protein Design.
Traoré, Seydou; Allouche, David; André, Isabelle; Schiex, Thomas; Barbe, Sophie
2017-01-01
One main challenge in Computational Protein Design (CPD) lies in the exploration of the amino-acid sequence space, while considering, to some extent, side chain flexibility. The exorbitant size of the search space urges for the development of efficient exact deterministic search methods enabling identification of low-energy sequence-conformation models, corresponding either to the global minimum energy conformation (GMEC) or an ensemble of guaranteed near-optimal solutions. In contrast to stochastic local search methods that are not guaranteed to find the GMEC, exact deterministic approaches always identify the GMEC and prove its optimality in finite but exponential worst-case time. After a brief overview on these two classes of methods, we discuss the grounds and merits of four deterministic methods that have been applied to solve CPD problems. These approaches are based either on the Dead-End-Elimination theorem combined with A* algorithm (DEE/A*), on Cost Function Networks algorithms (CFN), on Integer Linear Programming solvers (ILP) or on Markov Random Fields solvers (MRF). The way two of these methods (DEE/A* and CFN) can be used in practice to identify low-energy sequence-conformation models starting from a pairwise decomposed energy matrix is detailed in this review.
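The original Dead-End-Elimination criterion mentioned above can be stated compactly: a rotamer r at a position is a dead end if some competitor t at the same position beats it even when r gets its best-case partners and t its worst-case partners. The sketch below implements that simple criterion on a pairwise energy matrix; the tiny two-position energy tables are invented for illustration only.

```python
# Sketch of the original (simple) Dead-End-Elimination criterion on a
# pairwise-decomposed energy matrix. The example energy tables are invented.
def dee_eliminate(self_e, pair_e):
    """self_e[p][r]: self energy of rotamer r at position p.
    pair_e[(p, q)][r][s]: pairwise energy between rotamer r at p and s at q.
    Returns, per position, the set of rotamers provably not in the GMEC."""
    positions = list(self_e)
    dead = {p: set() for p in positions}
    for p in positions:
        others = [q for q in positions if q != p]
        for r in range(len(self_e[p])):
            for t in range(len(self_e[p])):
                if t == r:
                    continue
                # best case for r minus worst case for t over all partners:
                # if even then r loses to t, r can never be in the GMEC.
                gap = self_e[p][r] - self_e[p][t]
                gap += sum(min(pair_e[(p, q)][r]) - max(pair_e[(p, q)][t])
                           for q in others)
                if gap > 0:
                    dead[p].add(r)   # r is a dead end
                    break
    return dead

# Two positions, two rotamers each (invented energies).
self_e = {0: [5.0, 0.0], 1: [0.0, 0.0]}
pair_e = {(0, 1): [[1.0, 1.0], [0.0, 0.0]],
          (1, 0): [[1.0, 0.0], [1.0, 0.0]]}   # transpose of (0, 1)
dead = dee_eliminate(self_e, pair_e)
```

Here rotamer 0 at position 0 is eliminated because rotamer 1 dominates it regardless of the partner chosen at position 1; in practice this pruning is iterated and combined with A* search as the review describes.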
Karakawa, Ayako; Murata, Hiroshi; Hirasawa, Hiroyo; Mayama, Chihiro; Asaoka, Ryo
2013-01-01
To compare the performance of the newly proposed point-wise linear regression (PLR) with the binomial test (binomial PLR) against mean deviation (MD) trend analysis and permutation analyses of PLR (PoPLR), in detecting global visual field (VF) progression in glaucoma. 15 VFs (Humphrey Field Analyzer, SITA standard, 24-2) were collected from 96 eyes of 59 open angle glaucoma patients (6.0 ± 1.5 [mean ± standard deviation] years). Using the total deviation of each point on the 2nd to 16th VFs (VF2-16), linear regression analysis was carried out. The numbers of VF test points with a significant trend at various probability levels (pbinomial test (one-side). A VF series was defined as "significant" if the median p-value from the binomial test was binomial PLR method (0.14 to 0.86) was significantly higher than MD trend analysis (0.04 to 0.89) and PoPLR (0.09 to 0.93). The PIS of the proposed method (0.0 to 0.17) was significantly lower than the MD approach (0.0 to 0.67) and PoPLR (0.07 to 0.33). The PBNS of the three approaches were not significantly different. The binomial PLR method gives more consistent results than MD trend analysis and PoPLR, hence it will be helpful as a tool to 'flag' possible VF deterioration.
[Intraoperative multidimensional visualization].
Sperling, J; Kauffels, A; Grade, M; Alves, F; Kühn, P; Ghadimi, B M
2016-12-01
Modern intraoperative techniques of visualization are increasingly being applied in general and visceral surgery. The combination of diverse techniques provides the possibility of multidimensional intraoperative visualization of specific anatomical structures. Thus, it is possible to differentiate between normal tissue and tumor tissue and therefore exactly define tumor margins. The aim of intraoperative visualization of tissue that is to be resected and tissue that should be spared is to lead to a rational balance between oncological and functional results. Moreover, these techniques help to analyze the physiology and integrity of tissues. Using these methods surgeons are able to analyze tissue perfusion and oxygenation. However, to date it is not clear to what extent these imaging techniques are relevant in the clinical routine. The present manuscript reviews the relevant modern visualization techniques focusing on intraoperative computed tomography and magnetic resonance imaging as well as augmented reality, fluorescence imaging and optoacoustic imaging.
Multidimensional HAM-conditions
DEFF Research Database (Denmark)
Hansen, Ernst Jan de Place
Heat, Air and Moisture (HAM) conditions, experimental data are needed. Tests were performed in the large climate simulator at SBi involving full-scale wall elements. The elements were exposed to steady-state conditions, and temperature cycles simulating April and September climate in Denmark. The effect on the moisture and temperature conditions of the addition of a vapour barrier and an outer cladding on timber frame walls was studied. The report contains comprehensive appendices documenting the full-scale tests. The tests were performed as a part of the project 'Model for Multidimensional Heat, Air and Moisture Conditions in Building Envelope Components' carried out as a co-project between DTU Byg and SBi.
Multidimensional Databases and Data Warehousing
Jensen, Christian
2010-01-01
The present book's subject is multidimensional data models and data modeling concepts as they are applied in real data warehouses. The book aims to present the most important concepts within this subject in a precise and understandable manner. The book's coverage of fundamental concepts includes data cubes and their elements, such as dimensions, facts, and measures and their representation in a relational setting; it includes architecture-related concepts; and it includes the querying of multidimensional databases. The book also covers advanced multidimensional concepts that are considered to b
Deterministic and probabilistic approach to safety analysis
International Nuclear Information System (INIS)
Heuser, F.W.
1980-01-01
The examples discussed in this paper show that reliability analysis methods fairly well can be applied in order to interpret deterministic safety criteria in quantitative terms. For further improved extension of applied reliability analysis it has turned out that the influence of operational and control systems and of component protection devices should be considered with the aid of reliability analysis methods in detail. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)
Deterministic multi-player Dynkin games
Solan, Eilon; Vieille, Nicolas
2015-01-01
A multi-player Dynkin game is a sequential game in which at every stage one of the players is chosen, and that player can decide whether to continue the game or to stop it, in which case all players receive some terminal payoff. We study a variant of this model, where the order by which players are chosen is deterministic, and the probability that the game terminates once the chosen player decides to stop may be strictly less than one. We prove that a subgame-perfect ε-equilibrium in Markovia...
Nine challenges for deterministic epidemic models
DEFF Research Database (Denmark)
Roberts, Mick G; Andreasen, Viggo; Lloyd, Alun
2015-01-01
Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections, infections with time-varying infectivity, and those where superinfection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern
Structure of multidimensional patterns
International Nuclear Information System (INIS)
Smith, S.P.
1982-01-01
The problem of describing the structure of multidimensional data is important in exploratory data analysis, statistical pattern recognition, and image processing. A data set is viewed as a collection of points embedded in a high dimensional space. The primary goal of this research is to determine if the data have any clustering structure; such a structure implies the presence of class information (categories) in the data. A statistical hypothesis is used in the decision making. To this end, data with no structure are defined as data following the uniform distribution over some compact convex set in K-dimensional space, called the sampling window. This thesis defines two new tests for uniformity along with various sampling window estimators. The first test is a volume-based test which captures density changes in the data. The second test compares a uniformly distributed sample to the data by using the minimal spanning tree (MST) of the pooled samples. Sampling window estimators are provided for simple sampling windows, and the convex hull of the data is used as a general sampling window estimator. For both tests for uniformity, theoretical results on their size are provided, and their size and power against clustered alternatives are studied. Simulation is also used to study the efficacy of the sampling window estimators
Multidimensional Databases and Data Warehousing
DEFF Research Database (Denmark)
Jensen, Christian S.; Pedersen, Torben Bach; Thomsen, Christian
The present book's subject is multidimensional data models and data modeling concepts as they are applied in real data warehouses. The book aims to present the most important concepts within this subject in a precise and understandable manner. The book's coverage of fundamental concepts includes data cubes and their elements, such as dimensions, facts, and measures and their representation in a relational setting; it includes architecture-related concepts; and it includes the querying of multidimensional databases. The book also covers advanced multidimensional concepts that are considered... techniques that are particularly important to multidimensional databases, including materialized views, bitmap indices, join indices, and star join processing. The book ends with a chapter that presents the literature on which the book is based and offers further readings for those readers who wish to engage...
XML Multidimensional Modelling and Querying
Boucher, Serge; Verhaegen, Boris; Zimányi, Esteban
2009-01-01
As XML becomes ubiquitous and XML storage and processing becomes more efficient, the range of use cases for these technologies widens daily. One promising area is the integration of XML and data warehouses, where an XML-native database stores multidimensional data and processes OLAP queries written in the XQuery query language. This paper explores issues arising in the implementation of such a data warehouse. We first compare approaches for multidimensional data modelling in XML, then...
Kostrzewa, Daniel; Josiński, Henryk
2016-06-01
The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for the selection of individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions, Griewank, Rastrigin, and Rosenbrock, are frequently used as benchmarks because of their characteristics.
A mathematical theory for deterministic quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
't Hooft, Gerard [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)]
2007-05-15
Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.
Deterministic prediction of surface wind speed variations
Directory of Open Access Journals (Sweden)
G. V. Drisya
2014-11-01
Full Text Available Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
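A common deterministic forecasting scheme in this chaotic-dynamics spirit is the method of analogues: embed the scalar series in a delay space and predict by following the closest past trajectory segment. The sketch below uses a synthetic noise-free signal and embedding parameters of our own choosing; it is not the paper's specific method or data.

```python
import math

# Sketch of analogue (nearest-neighbour) forecasting in a delay-embedding
# space. The synthetic series and the embedding choices are assumptions.
def analogue_forecast(series, dim=3, lag=1):
    """Predict the next value by following the nearest past delay vector."""
    vectors = [tuple(series[i + j * lag] for j in range(dim))
               for i in range(len(series) - (dim - 1) * lag)]
    target = vectors[-1]                 # the current delay vector
    best, best_d = None, float("inf")
    for i, v in enumerate(vectors[:-1]):
        nxt = i + (dim - 1) * lag + 1    # index of the value that followed v
        if nxt >= len(series):
            continue
        d = math.dist(v, target)
        if d < best_d:
            best, best_d = series[nxt], d
    return best

# Noise-free quasi-periodic toy signal: the analogue prediction is near-exact.
series = [math.sin(0.3 * k) for k in range(200)]
pred = analogue_forecast(series)
```

On clean deterministic data the nearest analogue tracks the true continuation closely; on real wind data one would average several neighbours and tune `dim` and `lag`, e.g. via false-nearest-neighbour and mutual-information criteria.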
Mechanics from Newton's laws to deterministic chaos
Scheck, Florian
2018-01-01
This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present 6th edition is updated and revised with more explanations, additional examples and problems with solutions, together with new sections on applications in science. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 150 problems ...
Deterministic SLIR model for tuberculosis disease mapping
Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat
2017-11-01
Tuberculosis (TB) occurs worldwide. It can be transmitted to others directly through air when active TB persons sneeze, cough or spit. In Malaysia, it was reported that TB cases had been recognized as one of the most infectious disease that lead to death. Disease mapping is one of the methods that can be used as the prevention strategies since it can displays clear picture for the high-low risk areas. Important thing that need to be considered when studying the disease occurrence is relative risk estimation. The transmission of TB disease is studied through mathematical model. Therefore, in this study, deterministic SLIR models are used to estimate relative risk for TB disease transmission.
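A deterministic SLIR (susceptible-latent-infectious-recovered) system can be sketched as four coupled rate equations integrated by forward Euler. The compartment flows below follow the generic SLIR pattern; the rate constants are illustrative assumptions, not estimates for Malaysian TB.

```python
# Hypothetical sketch of a deterministic SLIR model integrated by forward
# Euler. The rate constants below are illustrative, not fitted TB parameters.
def slir_step(state, beta, sigma, gamma, dt):
    s, l, i, r = state
    n = s + l + i + r
    new_inf = beta * s * i / n        # force of infection on susceptibles
    ds = -new_inf
    dl = new_inf - sigma * l          # latent progress to infectious at rate sigma
    di = sigma * l - gamma * i        # infectious recover at rate gamma
    dr = gamma * i
    return (s + dt * ds, l + dt * dl, i + dt * di, r + dt * dr)

state = (990.0, 0.0, 10.0, 0.0)       # initial S, L, I, R
for _ in range(1000):
    state = slir_step(state, beta=0.4, sigma=0.2, gamma=0.1, dt=0.1)
```

Because the four flow terms cancel exactly, the total population is conserved by the scheme, which is a quick sanity check on any implementation of this kind.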
Deterministic quantum annealing expectation-maximization algorithm
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
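For background, the classical (non-quantum) deterministic annealing EM, a precursor of the DQAEM idea, smooths the likelihood surface by flattening the E-step responsibilities at high temperature and cooling toward standard EM. The sketch below applies it to a two-component 1-D Gaussian mixture with fixed unit variances and equal weights; the data, schedule, and these simplifications are our assumptions, not the paper's quantum formulation.

```python
import math, random

# Classical deterministic annealing EM (DAEM) for a two-component 1-D
# Gaussian mixture with unit variances and equal weights: likelihoods are
# raised to the power 1/T and T is lowered toward 1 (standard EM).
def daem(data, steps=50, temps=(4.0, 2.0, 1.0)):
    mu = [min(data), max(data)]                    # crude initialisation
    for T in temps:
        for _ in range(steps):
            # E-step at temperature T: tempered responsibilities
            resp = []
            for x in data:
                w = [math.exp(-((x - m) ** 2) / (2 * T)) for m in mu]
                z = w[0] + w[1]
                resp.append((w[0] / z, w[1] / z))
            # M-step: responsibility-weighted means
            for k in (0, 1):
                num = sum(r[k] * x for r, x in zip(resp, data))
                den = sum(r[k] for r in resp)
                mu[k] = num / den
    return sorted(mu)

random.seed(1)
data = [random.gauss(-2, 0.5) for _ in range(200)] + \
       [random.gauss(3, 0.5) for _ in range(200)]
mu = daem(data)
```

At high T the responsibilities are nearly uniform, so the means move freely past poor local optima; as T reaches 1 the updates coincide with ordinary EM and the means settle near the true cluster centres.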
Extreme events in multivariate deterministic systems
Nicolis, C.; Nicolis, G.
2012-05-01
The probabilistic properties of extreme values in multivariate deterministic dynamical systems are analyzed. It is shown that owing to the intertwining of unstable and stable modes the effect of dynamical complexity on the extremes tends to be masked, in the sense that the cumulative probability distribution of typical variables is differentiable and its associated probability density is continuous. Still, there exist combinations of variables probing the dominant unstable modes displaying singular behavior in the form of nondifferentiability of the cumulative distributions of extremes on certain sets of phase space points. Analytic evaluations and extensive numerical simulations are carried out for characteristic examples of Kolmogorov-type systems, for low-dimensional chaotic flows, and for spatially extended systems.
Inferring hierarchical clustering structures by deterministic annealing
International Nuclear Information System (INIS)
Hofmann, T.; Buhmann, J.M.
1996-01-01
The unsupervised detection of hierarchical structures is a major topic in unsupervised learning and one of the key questions in data analysis and representation. We propose a novel algorithm for the problem of learning decision trees for data clustering and related problems. In contrast to many other methods based on successive tree growing and pruning, we propose an objective function for tree evaluation and we derive a non-greedy technique for tree growing. Applying the principles of maximum entropy and minimum cross entropy, a deterministic annealing algorithm is derived in a mean-field approximation. This technique allows us to canonically superimpose tree structures and to fit parameters to averaged or 'fuzzified' trees
Deterministic effects of interventional radiology procedures
International Nuclear Information System (INIS)
Shope, Thomas B.
1997-01-01
The purpose of this paper is to describe deterministic radiation injuries reported to the Food and Drug Administration (FDA) that resulted from therapeutic, interventional procedures performed under fluoroscopic guidance, and to investigate the procedure or equipment-related factors that may have contributed to the injury. Reports submitted to the FDA under both mandatory and voluntary reporting requirements which described radiation-induced skin injuries from fluoroscopy were investigated. Serious skin injuries, including moist desquamation and tissues necrosis, have occurred since 1992. These injuries have resulted from a variety of interventional procedures which have required extended periods of fluoroscopy compared to typical diagnostic procedures. Facilities conducting therapeutic interventional procedures need to be aware of the potential for patient radiation injury and take appropriate steps to limit the potential for injury. (author)
Mechanics From Newton's Laws to Deterministic Chaos
Scheck, Florian
2010-01-01
This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present fifth edition is updated and revised with more explanations, additional examples and sections on Noether's theorem. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 120 problems with complete solutions, as well as some practical exa...
Primality deterministic and primality probabilistic tests
Directory of Open Access Journals (Sweden)
Alfredo Rizzi
2007-10-01
Full Text Available In this paper the author comments on the importance of prime numbers in mathematics and in cryptography, recalling the fundamental research of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that generate prime numbers; among them, Mersenne primes have interesting properties. There are also many conjectures that still have to be proved or rejected. Deterministic primality tests are algorithms that establish whether a number is prime or not. They are not applicable in many practical situations, for instance in public key cryptography, because the computing time would be too long. Probabilistic primality tests allow one to test the null hypothesis that the number is prime. The paper comments on the most important statistical tests.
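The workhorse probabilistic test in public key cryptography is Miller-Rabin: each random round either proves the number composite or supports the hypothesis "n is prime" with per-round error probability at most 1/4. A compact sketch (the round count and seeded generator are our choices):

```python
import random

# Sketch of the Miller-Rabin probabilistic primality test. Each round either
# certifies "composite" or supports the hypothesis "n is prime" with error
# probability at most 1/4 per round.
def is_probable_prime(n, rounds=20, rng=random.Random(0)):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):       # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                     # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)       # random witness candidate
        x = pow(a, d, n)                  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                  # a witnesses compositeness
    return True                           # probably prime
```

For example, the Mersenne number 2^31 - 1 passes (it is prime), while 2^29 - 1 = 233 x 1103 x 2089 is rejected, matching the remark above that Mersenne primes are a rich source of examples.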
Deterministic-random separation in nonstationary regime
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA have been proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
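The classical synchronous average that the paper generalizes is simple under a stationary regime: slice the signal into revolutions of known period and average them sample by sample, so that random components cancel while periodic ones survive. A minimal sketch with a synthetic gear-tone-plus-noise signal of our own making:

```python
import math
import random

# Minimal synchronous average under a stationary regime: average the signal
# revolution by revolution. The synthetic signal below is an assumption.
def synchronous_average(signal, period):
    """period: samples per revolution (taken as an integer for simplicity)."""
    cycles = len(signal) // period
    return [sum(signal[c * period + k] for c in range(cycles)) / cycles
            for k in range(period)]

random.seed(0)
period = 64
periodic = [math.sin(2 * math.pi * k / period) for k in range(period)]
# 200 revolutions of the periodic component buried in broadband noise
signal = [periodic[k % period] + random.gauss(0, 0.5)
          for k in range(period * 200)]
avg = synchronous_average(signal, period)
```

Averaging over C revolutions attenuates the random part by a factor of sqrt(C) (about 14 here), which is why the recovered cycle closely matches the underlying periodic waveform; speed variation breaks the fixed `period` assumption, motivating the GSA above.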
Multidimensional persistence in biomolecular data.
Xia, Kelin; Wei, Guo-Wei
2015-07-30
Persistent homology has emerged as a popular technique for the topological simplification of big data, including biomolecular data. Multidimensional persistence bears considerable promise to bridge the gap between geometry and topology. However, its practical and robust construction has been a challenge. We introduce two families of multidimensional persistence, namely pseudomultidimensional persistence and multiscale multidimensional persistence. The former is generated via repeated applications of persistent homology filtration to high-dimensional data, such as results from molecular dynamics or partial differential equations. The latter is constructed via isotropic and anisotropic scales that create new simplicial complexes and associated topological spaces. The utility, robustness, and efficiency of the proposed topological methods are demonstrated via protein folding, protein flexibility analysis, the topological denoising of cryo-electron microscopy data, and the scale dependence of nanoparticles. A topological transition between partially folded and unfolded proteins has been observed in multidimensional persistence. The separation between noise topological signatures and molecular topological fingerprints is achieved by the Laplace-Beltrami flow. The multiscale multidimensional persistent homology reveals relative local features in Betti-0 invariants and the relatively global characteristics of Betti-1 and Betti-2 invariants. © 2015 Wiley Periodicals, Inc.
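The degree-0 part of the filtration idea behind persistent homology can be illustrated without any topology library: track the number of connected components (Betti-0) of a point cloud as the connection scale grows. A minimal union-find sketch under invented data (this shows one slice of a one-parameter filtration, not the paper's multidimensional construction):

```python
import numpy as np

def betti0(points, radius):
    """Betti-0 (number of connected components) of the graph linking every
    pair of points closer than `radius` -- the degree-0 information of a
    Vietoris-Rips filtration evaluated at a single scale."""
    n = len(points)
    parent = list(range(n))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < radius:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

# Two well-separated clusters: Betti-0 falls from "many" (isolated points)
# to 2 (the clusters) to 1 (everything merged) as the scale grows.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
curve = [betti0(cloud, r) for r in (0.01, 1.0, 10.0)]
print(curve)
```

The persistence of the two-component stage across a wide range of radii is the kind of robust "topological fingerprint" that the abstract contrasts with short-lived noise signatures.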
Bektaş, Burcu; Dursun, Uğur
2015-01-01
In this work, we focus on a class of timelike rotational surfaces in Minkowski space E-1(4) with a 2-dimensional axis. There are three types of rotational surfaces with a 2-dimensional axis, called rotational surfaces of elliptic, hyperbolic or parabolic type. We obtain all flat timelike rotational surfaces of elliptic and hyperbolic types with pointwise 1-type Gauss map of the first and second kind. We also prove that there exists no flat timelike rotational surface of parabolic type in E-1(4) wi...
Region-specific deterministic and probabilistic seismic hazard ...
Indian Academy of Sciences (India)
Region-specific deterministic and probabilistic seismic hazard analysis of Kanpur city. A seismic hazard map of Kanpur city has been developed considering the region-specific seismotectonic parameters within a 500-km radius by deterministic and probabilistic approaches.
Deterministic Chaos in the X-ray Sources
Indian Academy of Sciences (India)
2016-01-27
When a resonant behaviour takes place, quasi-periodic oscillations (QPOs) appear. If the global structure of the flow and its non-linear hydrodynamics affect the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to solve the problem of stochastic versus deterministic ...
Safety Verification of Piecewise-Deterministic Markov Processes
DEFF Research Database (Denmark)
Wisniewski, Rafael; Sloth, Christoffer; Bujorianu, Manuela
2016-01-01
We consider the safety problem of piecewise-deterministic Markov processes (PDMP). These are systems that have deterministic dynamics and stochastic jumps, where both the time and the destination of the jumps are stochastic. Specifically, we solve a p-safety problem, where we identify the set...
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...
D2-Tree: A New Overlay with Deterministic Bounds
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Sioutas, Spyros; Tsichlas, Kostas
2010-01-01
We present a new overlay, called the Deterministic Decentralized tree (D2-tree). The D2-tree compares favourably to other overlays for the following reasons: (a) it provides matching and better complexities, which are deterministic for the supported operations; (b) the management of nodes (peers...
Multidimensional real analysis I differentiation
Duistermaat, J J; van Braam Houckgeest, J P
2004-01-01
Part one of the authors' comprehensive and innovative work on multidimensional real analysis. This book is based on extensive teaching experience at Utrecht University and gives a thorough account of differential analysis in multidimensional Euclidean space. It is an ideal preparation for students who wish to go on to more advanced study. The notation is carefully organized and all proofs are clean, complete and rigorous. The authors have taken care to pay proper attention to all aspects of the theory. In many respects this book presents an original treatment of the subject and it contains many ...
MIMO capacity for deterministic channel models: sublinear growth
DEFF Research Database (Denmark)
Bentosela, Francois; Cornean, Horia; Marchetti, Nicola
2013-01-01
This is the second paper by the authors in a series concerned with the development of a deterministic model for the transfer matrix of a MIMO system. In our previous paper, we started from the Maxwell equations and described the generic structure of such a deterministic transfer matrix. In the current paper, we apply those results in order to study the (Shannon-Foschini) capacity behavior of a MIMO system as a function of the deterministic spread function of the environment and the number of transmitting and receiving antennas. The antennas are assumed to fill in a given fixed volume. Under...
Szymanowski, Mariusz; Kryza, Maciej
2017-02-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for the spatialization of air temperature, and in many studies their results have proved better than those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of the spatial interpolation. The main goal of the paper was to examine both of the above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated at different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging form, MLRK and GWRK respectively, were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to the rejection of both assumptions considered. Usually, including more than two or three of the most significantly
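The validation logic described here, comparing predictor sets by cross-validated MAE, can be sketched compactly. The following Python toy (all station data, coefficients, and the uninformative extra predictor are invented; it illustrates the MAE-based comparison, not the paper's GWR/kriging models) shows how an extra, merely chance-correlated variable fails to improve leave-one-out MAE:

```python
import numpy as np

def loo_mae(X, y):
    """Leave-one-out mean absolute error of an ordinary least-squares fit."""
    errs = []
    for i in range(len(y)):
        mask = np.ones(len(y), dtype=bool)
        mask[i] = False
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs.append(abs(y[i] - X[i] @ beta))
    return float(np.mean(errs))

# Synthetic "stations": temperature driven by elevation and latitude only.
rng = np.random.default_rng(2)
n = 250
elev = rng.uniform(0.0, 1500.0, n)
lat = rng.uniform(49.0, 55.0, n)
junk = rng.normal(size=n)                       # an uninformative predictor
temp = 15.0 - 0.0065 * elev - 0.6 * (lat - 49.0) + rng.normal(0.0, 0.3, n)

ones = np.ones(n)
mae_2 = loo_mae(np.column_stack([ones, elev, lat]), temp)
mae_3 = loo_mae(np.column_stack([ones, elev, lat, junk]), temp)
print(round(mae_2, 2), round(mae_3, 2))
```

The two MAE values come out essentially equal, which mirrors the paper's finding that piling on additional correlated predictors does not, by itself, improve interpolation quality.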
A Multidimensional Software Engineering Course
Barzilay, O.; Hazzan, O.; Yehudai, A.
2009-01-01
Software engineering (SE) is a multidimensional field that involves activities in various areas and disciplines, such as computer science, project management, and system engineering. Though modern SE curricula include designated courses that address these various subjects, an advanced summary course that synthesizes them is still missing. Such a…
Multidimensional child deprivation in Iran
Yousefzadeh Faal Daghati, Sepideh; Mideros-Mora, Andrés; De Neubourg, Chris; Minujin, Alberto; Nandy, Shailen
The chapter analyses children's multidimensional deprivation in Iran in 2009 and explores inequalities in different regions. The study focused on outcome indicators, with the level of analysis focusing on the individual child as well as the household. A child rights approach is applied to define
Ordinal Comparison of Multidimensional Deprivation
DEFF Research Database (Denmark)
Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter
This paper develops an ordinal method of comparison of multidimensional inequality. In our model, population distribution g is more unequal than f when the distributions have common median and can be obtained from f by one or more shifts in population density that increase inequality. For our...
Deterministic Approach to Detect Heart Sound Irregularities
Directory of Open Access Journals (Sweden)
Richard Mengko
2017-07-01
Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From the analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician to diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting some initial parameters such as the noise threshold accurately, taking into account the recording equipment and also the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
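A rule-based detector of this kind can be sketched with an envelope plus timing-gate heuristic. This Python toy (the burst timings, frequencies, threshold, and refractory gap are invented; it illustrates deterministic rule-based event picking, not the paper's S-transform pipeline) labels synthetic S1/S2-like bursts:

```python
import numpy as np

def detect_sounds(signal, fs, threshold, min_gap_s=0.2):
    """Deterministic event picking: smooth the rectified signal into an
    envelope, then declare an event whenever the envelope crosses
    `threshold`, refusing re-triggers within `min_gap_s` seconds."""
    k = int(0.02 * fs)                               # 20 ms smoothing window
    env = np.convolve(np.abs(signal), np.ones(k) / k, "same")
    events, last = [], -np.inf
    for i, v in enumerate(env):
        if v > threshold and (i - last) / fs >= min_gap_s:
            events.append(i / fs)
            last = i
    return events

# Synthetic S1/S2-like 50 Hz bursts at 0.1 s and 0.4 s of each 1 s cycle.
fs = 1000
t = np.arange(0.0, 3.0, 1.0 / fs)
sig = np.zeros_like(t)
for cycle in range(3):
    for onset in (0.1, 0.4):
        m = (t >= cycle + onset) & (t < cycle + onset + 0.05)
        sig[m] = np.sin(2 * np.pi * 50 * t[m])

events = detect_sounds(sig, fs, threshold=0.2)
print(len(events))   # one event per burst: 6
```

Timing rules (e.g. S2 must follow S1 within a plausible systolic interval) would be layered on top of this in the same deterministic spirit.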
A Deterministic Approach to Earthquake Prediction
Directory of Open Access Journals (Sweden)
Vittorio Sgrigna
2012-01-01
Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at seeing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship, with explained physical processes, obtained by looking for correlations between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a new strong theoretical scientific effort is necessary to try to understand the physics of the earthquake.
Energy Technology Data Exchange (ETDEWEB)
Graham, Emily B. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Crump, Alex R. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Resch, Charles T. [Geochemistry Department, Pacific Northwest National Laboratory, Richland WA USA; Fansler, Sarah [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Environmental Compliance and Emergency Preparation, Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Fredrickson, Jim K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA
2017-03-28
Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.
Operational State Complexity of Deterministic Unranked Tree Automata
Directory of Open Access Journals (Sweden)
Xiaoxue Piao
2010-08-01
Full Text Available We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n - 2^(n-1)) - 1 vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.
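The bound as reconstructed from the abstract can be evaluated directly, and compared against the classical tight bound m·2^n − 2^(n−1) for concatenation of regular string languages to see the "different order" the authors mention. A small Python check (the formula reading is a reconstruction of the garbled original text):

```python
def concat_bound(m, n):
    """Vertical-state bound for the concatenation of tree languages, as
    stated in the abstract: (n+1)*((m+1)*2**n - 2**(n-1)) - 1."""
    return (n + 1) * ((m + 1) * 2**n - 2**(n - 1)) - 1

def string_concat_bound(m, n):
    """Known tight bound for concatenation of regular string languages,
    for comparison: m*2**n - 2**(n-1)."""
    return m * 2**n - 2**(n - 1)

print(concat_bound(3, 2))          # 41
print(string_concat_bound(3, 2))   # 10
```

Even at m = 3, n = 2 the tree bound already exceeds the string bound by the extra multiplicative factor of roughly (n+1), consistent with the abstract's claim of a different order of growth.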
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
Three levels of modeling, microscopic, mesoscopic and macroscopic, are discussed for the CO oxidation on low-index platinum single crystal surfaces. The models introduced on the microscopic and mesoscopic levels are stochastic, while the model on the macroscopic level is deterministic. The mesoscopic model can be obtained from the microscopic one by coarse-graining, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations, while for intermediate pressures phenomena...
Surface plasmon field enhancements in deterministic aperiodic structures.
Shugayev, Roman
2010-11-22
In this paper we analyze the optical properties and plasmonic field enhancements of large aperiodic nanostructures. We introduce an extension of the Generalized Ohm's Law approach to estimate the electromagnetic properties of Fibonacci, Rudin-Shapiro, cluster-cluster aggregate and random deterministic clusters. Our results suggest that deterministic aperiodic structures produce field enhancements comparable to random morphologies while offering better understanding of field localizations and improved substrate design controllability. Generalized Ohm's Law results for deterministic aperiodic structures are in good agreement with simulations obtained using the discrete dipole method.
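The Fibonacci structures mentioned here are built from a deterministic aperiodic substitution sequence. A minimal Python sketch of that sequence (used only as a 1-D stand-in for the aperiodic placement rule; the actual nanostructures in the paper are 2-D scatterer arrangements):

```python
def fibonacci_word(n):
    """First n letters of the Fibonacci word, generated by the
    substitution rules A -> AB, B -> A. The result is deterministic
    but never periodic."""
    w = "A"
    while len(w) < n:
        w = "".join("AB" if c == "A" else "A" for c in w)
    return w[:n]

w = fibonacci_word(100)
print(w[:13])   # ABAABABAABAAB
ratio = w.count("A") / w.count("B")
print(round(ratio, 2))   # approaches the golden ratio ~1.618
```

Mapping A/B to two scatterer spacings (or presence/absence) yields exactly the kind of structure whose field localizations are more analyzable than a random morphology while remaining non-periodic.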
Deterministic mode representation of random stationary media for scattering problems.
Li, Jia; Korotkova, Olga
2017-06-01
Deterministic mode representation (DMR) is introduced for a three-dimensional random medium with a statistically stationary refractive index distribution. The DMR allows for the design and fine-tuning of novel random media by adjusting the weights of individual deterministic modes. To illustrate its usefulness, we have applied the decomposition to the problem of weak light scattering from a Gaussian Schell-model medium. In particular, we have shown how individual deterministic modes of the medium contribute to the scattered far-field spectral density distribution.
Equivalence relations between deterministic and quantum mechanical systems
International Nuclear Information System (INIS)
Hooft, G.
1988-01-01
Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that apparently indeterministic behavior typical for a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian so that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet no mass term. These observations are suggested to be useful for building theories at the Planck scale
Multi-dimensional Fuzzy Euler Approximation
Directory of Open Access Journals (Sweden)
Yangyang Hao
2017-05-01
Full Text Available Multi-dimensional fuzzy differential equations driven by a multi-dimensional Liu process have been intensively applied in many fields. However, the analytic solution of a multi-dimensional fuzzy differential equation cannot be obtained in general, so it is necessary to discuss numerical solutions in most situations. This paper focuses on numerical methods for multi-dimensional fuzzy differential equations. The multi-dimensional fuzzy Taylor expansion is given; based on this expansion, a numerical method designed to give the solution of a multi-dimensional fuzzy differential equation via the multi-dimensional Euler method is presented, and its local convergence is also discussed.
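The crisp Euler iteration that the fuzzy scheme builds on is worth recalling. A minimal Python sketch (this is the classical deterministic method, not the paper's fuzzy generalization; the test equation is chosen for the example):

```python
def euler(f, y0, t0, t1, steps):
    """Classical (crisp) Euler iteration y_{k+1} = y_k + h*f(t_k, y_k) --
    the deterministic backbone that the fuzzy scheme generalizes."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# dy/dt = y with y(0) = 1 has y(1) = e; Euler converges at first order in h.
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100000)
print(round(approx, 3))   # 2.718
```

In the fuzzy setting the state y becomes a fuzzy quantity driven by a Liu process, but the step structure of the recursion is the same, which is why local convergence can be analyzed via the fuzzy Taylor expansion.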
Executive Information Systems' Multidimensional Models
Directory of Open Access Journals (Sweden)
2007-01-01
Full Text Available Executive Information Systems are designed to improve the quality of strategic-level management in an organization through a new type of technology and several techniques for extracting, transforming, processing, integrating and presenting data in such a way that the organizational knowledge filters can easily associate with this data and turn it into information for the organization. These technologies are known as Business Intelligence tools. In order to build analytic reports for Executive Information Systems (EIS) in an organization, we need to design a multidimensional model based on the business model of the organization. This paper presents some multidimensional models that can be used in EIS development and proposes a new model that is suitable for strategic business requests.
Method to deterministically study photonic nanostructures in different experimental instruments
Husken, B.H.; Woldering, L.A.; Blum, Christian; Tjerkstra, R.W.; Vos, Willem L.
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. Therefore, a detailed map of the spatial surroundings of the
Deterministic oscillatory search: a new meta-heuristic optimization ...
Indian Academy of Sciences (India)
The paper proposes a new optimization algorithm that is extremely robust in solving mathematical and engineering problems. The algorithm combines the deterministic nature of classical ...
Active Chaotic Flows, Deterministic Modeling, and Communication with Chaos
National Research Council Canada - National Science Library
Grebogi, Celso
2001-01-01
...) to establish to what extent a natural chaotic system can be modeled deterministically; and (3) to demonstrate theoretically and experimentally that we can encode a message in a power oscillator...
Cheiloscopy ‑ A diagnostic and deterministic mirror for ...
African Journals Online (AJOL)
Cheiloscopy ‑ A diagnostic and deterministic mirror for establishment of person identification and gender discrimination: A study participated by Indian Medical students to aid legal proceedings and criminal investigations.
Non deterministic finite automata for power systems fault diagnostics
Directory of Open Access Journals (Sweden)
LINDEN, R.
2009-06-01
Full Text Available This paper introduces an application based on non-deterministic finite automata for power system fault diagnosis. Automata for the simpler faults are presented, and the proposed system is compared with an established expert system.
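The core mechanism, simulating an NFA over a stream of observed events, fits in a few lines. A hedged Python sketch (the states, events, and transition table below are an invented toy, not the fault automata from the paper):

```python
def nfa_accepts(transitions, start, accepting, symbols):
    """Subset simulation of a non-deterministic finite automaton: track the
    set of states reachable after each observed symbol."""
    current = {start}
    for sym in symbols:
        current = {t for s in current for t in transitions.get((s, sym), ())}
    return bool(current & accepting)

# Toy diagnosis automaton (hypothetical): a breaker trip followed at some
# later point by a relay alarm indicates a line fault.
delta = {
    ("idle", "trip"): {"idle", "tripped"},
    ("idle", "alarm"): {"idle"},
    ("tripped", "alarm"): {"fault"},
    ("tripped", "trip"): {"tripped"},
    ("fault", "trip"): {"fault"},
    ("fault", "alarm"): {"fault"},
}
print(nfa_accepts(delta, "idle", {"fault"}, ["alarm", "trip", "alarm"]))  # True
print(nfa_accepts(delta, "idle", {"fault"}, ["alarm", "alarm"]))          # False
```

The non-determinism (the "idle" state both staying and advancing on "trip") lets one automaton cover several interleavings of the same fault signature, which is the attraction of NFAs over rule lists for this task.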
Pseudo-random number generator based on asymptotic deterministic randomness
International Nuclear Information System (INIS)
Wang Kai; Pei Wenjiang; Xia Haishan; Cheung Yiuming
2008-01-01
A novel approach to generate the pseudorandom-bit sequence from an asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness are investigated numerically, such as the stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
Pseudo-random number generator based on asymptotic deterministic randomness
Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming
2008-06-01
A novel approach to generate the pseudorandom-bit sequence from an asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness are investigated numerically, such as the stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
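The general idea of extracting bits from a piecewise linear map can be sketched simply. The following Python toy is a generic chaos-based PRBG (the tent map, seed, and parameter are invented for illustration; the paper's construction adds a noninvertible transform and is considerably stronger):

```python
def tent_bits(x, n, mu=1.99999):
    """Illustrative pseudorandom-bit generator: iterate a piecewise linear
    (tent) map and threshold the state at 1/2. A generic chaos-based PRBG
    sketch, not the asymptotic-deterministic-randomness system."""
    bits = []
    for _ in range(n):
        x = mu * min(x, 1.0 - x)          # tent map on [0, 1]
        bits.append(1 if x > 0.5 else 0)
    return bits

stream = tent_bits(0.123456789, 10000)
balance = sum(stream) / len(stream)
print(round(balance, 2))   # close to 0.5 for a well-behaved orbit
```

Plain thresholded maps like this one leak symbolic-dynamics structure, which is precisely the attack surface that the multi-value correspondence in the paper is designed to close.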
The probabilistic approach and the deterministic licensing procedure
International Nuclear Information System (INIS)
Fabian, H.; Feigel, A.; Gremm, O.
1984-01-01
If safety goals are given, the creativity of the engineers is necessary to transform the goals into actual safety measures. That is, safety goals are not sufficient for the derivation of a safety concept; the licensing process asks "What does a safe plant look like?" The answer cannot be given by a probabilistic procedure, but needs definite deterministic statements; the conclusion is that the licensing process needs a deterministic approach. The probabilistic approach should be used in a complementary role in cases where deterministic criteria are not complete, not detailed enough or not consistent, and where additional arguments for decision making in connection with the adequacy of a specific measure are necessary. But also in these cases the probabilistic answer has to be transformed into a clear deterministic statement. (orig.)
Cuba: Multidimensional numerical integration library
Hahn, Thomas
2016-08-01
The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods, yet all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
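The cross-checking idea, running two algorithmically unrelated integrators on the same integrand and comparing, can be illustrated without Cuba itself. A Python sketch (the integrand is invented; plain Monte Carlo stands in for Vegas-style sampling, and the midpoint rule for a deterministic cubature like Cuhre):

```python
import random

def midpoint_2d(f, n):
    """Deterministic midpoint rule on the unit square with an n-by-n grid."""
    h = 1.0 / n
    return sum(f((i + 0.5) * h, (j + 0.5) * h)
               for i in range(n) for j in range(n)) * h * h

def monte_carlo_2d(f, n, seed=0):
    """Plain Monte Carlo estimate on the unit square (no importance
    sampling, unlike Vegas/Suave)."""
    rng = random.Random(seed)
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

f = lambda x, y: x * x + y * y         # exact integral over [0,1]^2 is 2/3
a = midpoint_2d(f, 200)
b = monte_carlo_2d(f, 200000)
print(round(a, 3), round(b, 3))        # both land near 0.667
```

Agreement between methods with very different failure modes is exactly the safeguard that Cuba's uniform interfaces are designed to make cheap.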
Modelling Emotions with Multidimensional Logic
Gershenson, Carlos
1999-01-01
One of the objectives of Artificial Intelligence has been the modelling of "human" characteristics, such as emotions, behaviour, conscience, etc. But in such characteristics we might find a certain degree of contradiction. Previous work on modelling emotions and its problems is reviewed. A model for emotions is proposed using multidimensional logic, which handles the degree of contradiction that emotions might have. The model is oriented to simulate emotions in artificial societies...
Generalized multidimensional dynamic allocation method.
Lebowitsch, Jonathan; Ge, Yan; Young, Benjamin; Hu, Feifang
2012-12-10
Dynamic allocation has received considerable attention since it was first proposed in the 1970s as an alternative means of allocating treatments in clinical trials that helps to secure the balance of prognostic factors across treatment groups. The purpose of this paper is to present a generalized multidimensional dynamic allocation method that simultaneously balances treatment assignments at three key levels: within the overall study, within each level of each prognostic factor, and within each stratum, that is, each combination of levels of different factors. Further, it offers capabilities for unbalanced and adaptive designs for trials. The treatment-balancing performance of the proposed method is investigated through simulations which compare multidimensional dynamic allocation with traditional stratified block randomization and the Pocock-Simon method. On the basis of these results, we conclude that this generalized multidimensional dynamic allocation method is an improvement over conventional dynamic allocation methods and is flexible enough to be applied in most trial settings, including Phase I, II and III trials. Copyright © 2012 John Wiley & Sons, Ltd.
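The Pocock-Simon method used here as a comparator can be sketched in Python. This toy (patients, factors, the bias probability and seeds are all invented; it illustrates marginal-imbalance minimization, not the paper's generalized method) assigns 200 simulated patients and checks the resulting balance:

```python
import random
from collections import defaultdict

def minimization_assign(counts, patient, treatments=(0, 1), p_best=0.8,
                        rng=random.Random(3)):
    """Pocock-Simon-style minimization: pick the arm that minimizes total
    marginal imbalance over the patient's factor levels, then apply a
    biased coin so the allocation is not fully deterministic."""
    scores = []
    for t in treatments:
        score = 0
        for factor, level in patient.items():
            c = dict(counts[(factor, level)])
            c[t] += 1                        # imbalance if we assigned t
            score += abs(c[0] - c[1])
        scores.append(score)
    best = treatments[scores.index(min(scores))]
    other = treatments[1 - best]
    return best if rng.random() < p_best else other

# Simulate 200 patients with two prognostic factors (hypothetical data).
counts = defaultdict(lambda: {0: 0, 1: 0})   # (factor, level) -> per-arm tallies
gen = random.Random(4)
arms = []
for _ in range(200):
    patient = {"sex": gen.choice(["M", "F"]), "age": gen.choice(["<65", ">=65"])}
    arm = minimization_assign(counts, patient)
    arms.append(arm)
    for key in patient.items():
        counts[key][arm] += 1

# Within every factor level, the two arms stay closely balanced.
max_imbalance = max(abs(c[0] - c[1]) for c in counts.values())
print(max_imbalance)
```

The generalized method of the paper extends this scoring to strata (combinations of levels) and to unbalanced target ratios, but the assign-then-update loop is the same.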
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research...
A Review of Deterministic Optimization Methods in Engineering and Management
Directory of Open Access Journals (Sweden)
Ming-Hua Lin
2012-01-01
Full Text Available With the increasing reliance on modeling optimization problems in practical applications, a number of theoretical and algorithmic contributions of optimization have been proposed. The approaches developed for treating optimization problems can be classified into deterministic and heuristic. This paper aims to introduce recent advances in deterministic methods for solving signomial programming problems and mixed-integer nonlinear programming problems. A number of important applications in engineering and management are also reviewed to reveal the usefulness of the optimization methods.
Deterministic chaos in the pitting phenomena of passivable alloys
International Nuclear Information System (INIS)
Hoerle, Stephane
1998-01-01
It was shown that electrochemical noise recorded in stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the material/solution severity. Thus, electrolyte composition ([Cl-]/[NO3-] ratio, pH), passive film thickness or alloy composition can change the deterministic features. Only one pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which is a hint of a change with time in the pit behavior (propagation speed or mean). Modifications of electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors of different features (bifurcations) are also evidenced. Such bifurcations reveal various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous models are proposed) of the electrochemical mechanisms inside a pit that describe the experimental behaviors and the effects of the various parameters quite well. The analysis of the chaotic behaviors of a pit leads to a better understanding of propagation mechanisms and gives tools for pit monitoring. (author) [fr
Deterministic effects of the ionizing radiation
International Nuclear Information System (INIS)
Raslawski, Elsa C.
2001-01-01
Full text: The deterministic effect is the somatic damage that appears when the radiation dose exceeds a minimum value, the 'threshold dose'. Above this threshold dose, the frequency and seriousness of the damage increase with the dose given. Sixteen percent of patients younger than 15 years of age with the diagnosis of cancer have the possibility of a cure. The consequences of cancer treatment in children are very serious, as they are physically and emotionally developing. The seriousness of the delayed effects of radiation therapy depends on three factors: a) the treatment (dose of radiation, schedule of treatment, time of treatment, beam energy, treatment volume, distribution of the dose, simultaneous chemotherapy, etc.); b) the patient (state of development, patient predisposition, inherent sensitivity of tissue, the presence of other alterations, etc.); c) the tumor (degree of extension or infiltration, mechanical effects, etc.). The effect of radiation on normal tissue is related to cellular activity and the maturity of the tissue irradiated. Children have a mosaic of tissues in different stages of maturity at different moments in time. On the other hand, each tissue has a different pattern of development, so that sequelae are different in different irradiated tissues of the same patient. We should keep in mind that all tissues are affected to some degree. Bone tissue evidences damage through growth delay and altered calcification. Damage is small at 10 Gy; between 10 and 20 Gy growth arrest is partial, whereas at doses larger than 20 Gy growth arrest is complete. The central nervous system is the most affected, because radiation injuries produce demyelination with or without focal or diffuse areas of necrosis in the white matter, causing character alterations, lower IQ and functional level, neurocognitive impairment, etc. The skin is also affected, showing different degrees of erythema as well as ulceration and necrosis, different degrees of
Multi-Dimensional Path Queries
DEFF Research Database (Denmark)
Bækgaard, Lars
1998-01-01
that connects a pair of paths. A path expression is a function that maps a set of path sets into a path set. Path sets can be joined, filtering conditions can restrict the set of qualifying paths, and aggregation functions can be applied to path elements. In particular, the aggregation function SET can be used...... to create nested path structures. We present an SQL-like query language that is based on path expressions and we show how to use it to express multi-dimensional path queries that are suited for advanced data analysis in decision support environments like data warehousing environments...
The dialectical thinking about deterministic and probabilistic safety analysis
International Nuclear Information System (INIS)
Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong
2005-01-01
There are two methods for designing and analysing the safety performance of a nuclear power plant: the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plants has been based on the deterministic method, and practice has proved it effective for current plants. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the systems and constructions of the plant. PSA can be seen, in principle, to provide a broader and more realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trends of the two methods are briefly reviewed and summarized. Based on the discussion of two application cases - one concerning changes to specific design provisions of the general design criteria (GDC), and the other concerning the risk-informed categorization of structures, systems and components - it can be concluded that the deterministic method and the probabilistic method are dialectically unified, are gradually merging into each other, and are being used in coordination. (authors)
Deterministic and stochastic CTMC models from Zika disease transmission
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
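As a toy illustration of the deterministic half of such a formulation (NOT the authors' model), the sketch below integrates a minimal host-vector system by forward Euler and computes a basic reproduction ratio of the usual next-generation form; all parameter values are assumptions chosen for illustration:

```python
import math

# Minimal deterministic host-vector sketch (assumed parameter values,
# not fitted Zika data).
BETA_HV = 0.3   # vector-to-host transmission rate (assumed)
BETA_VH = 0.25  # host-to-vector transmission rate (assumed)
GAMMA = 0.1     # host recovery rate (assumed)
MU_V = 0.07     # vector death/renewal rate (assumed)

def simulate(days=200, dt=0.1):
    """Return the infected-host fraction after `days` days (forward Euler)."""
    s_h, i_h = 0.999, 0.001   # susceptible/infected host fractions
    s_v, i_v = 0.99, 0.01     # susceptible/infected vector fractions
    for _ in range(int(days / dt)):
        new_h = BETA_HV * s_h * i_v   # new host infections per unit time
        new_v = BETA_VH * s_v * i_h   # new vector infections per unit time
        s_h += -new_h * dt
        i_h += (new_h - GAMMA * i_h) * dt
        s_v += (MU_V * i_v - new_v) * dt  # dead infected vectors replaced
        i_v += (new_v - MU_V * i_v) * dt
    return i_h

# Basic reproduction ratio for a host-vector model (geometric-mean form):
r0 = math.sqrt((BETA_HV / MU_V) * (BETA_VH / GAMMA))
```

The CTMC counterpart would replace the Euler updates with exponentially distributed event times, which is what yields extinction probabilities rather than a single trajectory.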
Learning to Act: Qualitative Learning of Deterministic Action Models
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2017-01-01
in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power—they are identifiable in the limit. We then move on to a particular learning method, i.e. learning via update......, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and unconditional deterministic action models. We propose update learning mechanisms for the aforementioned classes of actions and analyse...... their computational complexity. Finally, we study a parametrized learning method which makes use of the upper bound on the number of propositions relevant for a given learning scenario. We conclude by describing related work and numerous directions of further work....
Hybrid deterministic/stochastic simulation of complex biochemical systems.
Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina
2017-11-21
In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explorable with wet-lab experiments. To serve their purpose, computational models should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively enlarging the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which is often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundances fluctuate by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is
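The hysteresis switching idea can be sketched in a few lines; this is an illustration, NOT the MoBioS implementation: the propensity thresholds are invented, and the moderate regime is omitted for brevity, leaving only a fast (deterministic) and a slow (stochastic) class:

```python
# Hysteresis switching between simulation regimes (illustrative sketch).
# A reaction classified "fast" is handled deterministically, "slow"
# stochastically. The band between FAST_DOWN and FAST_UP keeps a reaction
# whose propensity hovers near the threshold from flipping regime on
# every step. Threshold values are assumptions.
FAST_UP = 100.0    # propensity above which a slow reaction becomes fast
FAST_DOWN = 80.0   # propensity below which a fast reaction becomes slow

def classify(propensity, previous):
    """Return 'fast' or 'slow' given the current propensity and the
    regime the reaction was in on the previous step."""
    if previous == "fast":
        return "fast" if propensity > FAST_DOWN else "slow"
    return "fast" if propensity > FAST_UP else "slow"
```

With these thresholds, a reaction with propensity 90 keeps whichever regime it already had, which is precisely the point of the hysteresis band.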
Multi-Dimensional Aggregation for Temporal Data
DEFF Research Database (Denmark)
Böhlen, M. H.; Gamper, J.; Jensen, Christian Søndergaard
2006-01-01
Business Intelligence solutions, encompassing technologies such as multi-dimensional data modeling and aggregate query processing, are being applied increasingly to non-traditional data. This paper extends multi-dimensional aggregation to apply to data with associated interval values that capture...... sets and is competitive with respect to other temporal aggregation algorithms....
SUSTAINABLE DEVELOPMENT, A MULTIDIMENSIONAL CONCEPT
Directory of Open Access Journals (Sweden)
TEODORESCU ANA MARIA
2015-06-01
Full Text Available Sustainable development imposed itself as a corollary of the economic term "development". Sustainable development is meant to be the sum of economic, environmental and social considerations for the present and especially for the future. The concept of sustainable development has played an important role in European and global meetings since 1972, the year it was first formulated. Strategies necessary to achieve the objectives of sustainable development have been developed, indicators meant to measure the results of policy implementation have been created, and national plans have been oriented towards achieving the proposed targets. I wanted to highlight the multidimensional character of the concept of sustainable development. Thus, using specialized national and international literature, I have revealed different approaches that favour one pillar to the detriment of another depending on the specific field. Across the different conceptions of sustainable development, there is undoubted consensus on its components: economic, social, environmental. Consequently, the concept of sustainability has different connotations depending on the specific content of each discipline: biology, economics, sociology, environmental ethics. The multidimensional valence of sustainable development consists in the ability of its three pillars to act together for the benefit of present and future generations. Being a multidimensional concept, the importance attached to one pillar over another depends on the particularities of each field: in economics profit prevails, in ecology care for natural resources is most important, and the social dimension aims at improving human living conditions. The challenge of sustainable development is to combine the economic, environmental and social benefits for both the present generation and those to come. The ecological approach is reflected in the acceptance of limited natural resources and the preservation of natural capital. In terms of the importance of
Andersen, Søren B.; Enemark, Søren; Santos, Ilmar F.
2013-12-01
A stable rotor—supported laterally by passive magnetic bearings and longitudinally by magnetic forces and a clutch—suddenly loses its contact to the clutch and abruptly executes longitudinal movements away from its original equilibrium position as a result of small increases in angular velocity. Such abrupt unstable behaviour and its causes are thoroughly investigated in this work, both theoretically and experimentally. In this context, this paper makes theoretical as well as experimental contributions to the problem of two-dimensional passive magnetic levitation and one-dimensional pointwise contact stability dictated by mechanical-magnetic interaction. Load capacity and stiffness of passive multicylinder magnetic bearings (MCMB) are thoroughly investigated using two theoretical approaches followed by experimental validation. The contact dynamics between the clutch and the rotor supported by MCMB, using several configurations of magnet distribution, are described based on an accurate nonlinear model able to reliably reproduce the rotor-bearing dynamic behaviour. These investigations lead to: (a) a clear physical explanation of the reasons for the rotor's unstable behaviour, losing its contact to the clutch, and (b) an accurate prediction of the threshold of stability based on the nonlinear rotor-bearing model, i.e. the maximum angular velocity before the rotor loses its contact to the clutch, as a function of rotor, bearing and clutch design parameters. [Figure components: passive cylinder-magnet bearings, imbalance ring with a screw, passive rotating cylinder-magnets, rotor, pointwise contact clutch, and DC motor.] The rotor (4) is levitated in the two horseshoe-shaped bearing houses (1), which contain several cylinder magnets arranged in a circular pattern. These permanent magnets form a magnetic field around the rotor which repels similar cylinder magnets (3) embedded in the rotor, thereby counteracting the gravity forces. As the shape of the magnetic field generated by the
A PROPOSAL OF FUZZY MULTIDIMENSIONAL ASSOCIATION RULES
Directory of Open Access Journals (Sweden)
Rolly Intan
2006-01-01
Full Text Available Association rules that involve two or more dimensions or predicates can be referred to as multidimensional association rules. Rather than searching for frequent itemsets (as is done in mining single-dimensional association rules), in multidimensional association rules we search for frequent predicate sets. In general, there are two types of multidimensional association rules, namely interdimension association rules and hybrid-dimension association rules. Interdimension association rules are multidimensional association rules with no repeated predicates. This paper introduces a method for generating interdimension association rules. More meaningful association rules can be obtained by generalizing crisp attribute values to fuzzy values. To generate multidimensional association rules implying fuzzy values, this paper introduces an alternative method for mining the rules by searching for the predicate sets.
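The shift from frequent itemsets to frequent predicate sets can be illustrated with a toy sketch; the records, attribute names, and support threshold below are invented, and crisp values are used (the paper's fuzzification step is omitted):

```python
from collections import Counter
from itertools import combinations

# Invented example records: each row assigns a value to several
# dimensions (predicates). Interdimension rules use no repeated predicate,
# which holds automatically since each dimension appears once per row.
records = [
    {"age": "young", "income": "high", "buys": "laptop"},
    {"age": "young", "income": "high", "buys": "phone"},
    {"age": "old",   "income": "high", "buys": "laptop"},
]

def frequent_predicate_sets(rows, min_support=2, size=2):
    """Count predicate sets of the given size; keep those meeting support."""
    counts = Counter()
    for row in rows:
        preds = sorted(row.items())           # canonical predicate order
        for combo in combinations(preds, size):
            counts[combo] += 1
    return {c: n for c, n in counts.items() if n >= min_support}
```

For these rows, only {age=young, income=high} and {buys=laptop, income=high} reach support 2.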
Multi-dimensional model order selection
Directory of Open Access Journals (Sweden)
Roemer Florian
2011-01-01
Full Text Available Abstract Multi-dimensional model order selection (MOS techniques achieve an improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.
Heuristics for Multidimensional Packing Problems
DEFF Research Database (Denmark)
Egeblad, Jens
costs significantly. For packing problems in general, we are given a set of items and one or more containers. The items must be placed within the container such that some objective is optimized and the items do not overlap. Items and containers may be rectangular or irregular (e.g. polygons and polyhedra...... methods. Two important problem variants are the knapsack packing problem and the strip-packing problem. In the knapsack packing problem, each item is given a profit value, and the problem asks for the subset with maximal profit that can be placed within one container. The strip-packing problem asks...... for a minimum-height container required for the items. The main contributions of the thesis are three new heuristics for the strip-packing and knapsack packing problems where items are both rectangular and irregular. In the first two papers we describe a heuristic for the multidimensional strip-packing problem...
Deterministic multimode photonic device for quantum-information processing
DEFF Research Database (Denmark)
Nielsen, Anne E. B.; Mølmer, Klaus
2010-01-01
We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by exci...
Testing for converging deterministic seasonal variation in European industrial production
Ph.H.B.F. Franses (Philip Hans); R.M. Kunst (Robert)
1999-01-01
textabstractIn this paper we consider deterministic seasonal variation in quarterly production for several European countries, and we address the question whether this variation has become more similar across countries over time. Due to economic and institutional factors, one may expect convergence
A Deterministic Annealing Approach to Clustering AIRS Data
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method called the Deterministic Annealing technique
Mixed motion in deterministic ratchets due to anisotropic permeability
Kulrattanarak, T.; Sman, van der R.G.M.; Lubbersen, Y.S.; Schroën, C.G.P.H.; Pham, H.T.M.; Sarro, P.M.; Boom, R.M.
2011-01-01
Nowadays microfluidic devices are becoming popular for cell/DNA sorting and fractionation. One class of these devices, namely deterministic ratchets, seems most promising for continuous fractionation applications of suspensions (Kulrattanarak et al., 2008 [1]). Next to the two main types of particle
Deterministic control of ferroelastic switching in multiferroic materials
Balke, N.; Choudhury, S.; Jesse, S.; Huijben, Mark; Chu, Y.H.; Baddorf, A.P.; Chen, L.Q.; Ramesh, R.; Kalinin, S.V.
2009-01-01
Multiferroic materials showing coupled electric, magnetic and elastic orderings provide a platform to explore complexity and new paradigms for memory and logic devices. Until now, the deterministic control of non-ferroelectric order parameters in multiferroics has been elusive. Here, we demonstrate
Deterministic event-based simulation of quantum phenomena
De Raedt, K; De Raedt, H; Michielsen, K
2005-01-01
We propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities. We demonstrate that locally connected networks of these machines can be used to perform blind classification on an event-by-event basis, without storing the
Using a satisfiability solver to identify deterministic finite state automata
Heule, M.J.H.; Verwer, S.
2009-01-01
We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we
Deterministic oscillatory search: a new meta-heuristic optimization ...
Indian Academy of Sciences (India)
The paper proposes a new optimization algorithm that is extremely robust in solving mathematical and engineering problems. The algorithm combines the deterministic nature of classical methods of optimization and global converging characteristics of meta-heuristic algorithms. Common traits of nature-inspired algorithms ...
Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Niels Jacob
1994-01-01
An analysis has been made of the uncertainty of input parameters to deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be decreased, but not eliminated and has to be considered for engineering application. Stochastic models have a potential for ...
About the Possibility of Creation of a Deterministic Unified Mechanics
International Nuclear Information System (INIS)
Khomyakov, G.K.
2005-01-01
The possibility of creating a unified deterministic scheme of classical and quantum mechanics that preserves their achievements is discussed. It is shown that the canonical system of ordinary differential equations of Hamiltonian classical mechanics can be supplemented with a vector system of ordinary differential equations for the variables of those equations. The interpretational problems of quantum mechanics are considered
Nonlinear deterministic structures and the randomness of protein sequences
Huang Yan Zhao
2003-01-01
To clarify the randomness of protein sequences, we make a detailed analysis of a set of typical protein sequences representing each structural class by using a nonlinear prediction method. No deterministic structures are found in these protein sequences, which implies that they behave as random sequences. We also give an explanation for the controversial results obtained in previous investigations.
Risk-based versus deterministic explosives safety criteria
Energy Technology Data Exchange (ETDEWEB)
Wright, R.E.
1996-12-01
The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions will be couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. The deterministic criteria used in this report are those in DoD 6055.9-STD, 'DoD Ammunition and Explosives Safety Standard.' The risk-based criteria selected for comparison are those used by the government of Switzerland, 'Technical Requirements for the Storage of Ammunition (TLM 75).' The Swiss risk-based criteria were selected because they have been successfully applied for over twenty-five years.
Practical deterministic secure quantum communication in a lossy channel
Qaisar, Saad; Rehman, Junaid ur; Jeong, Youngmin; Shin, Hyundong
2017-04-01
Losses in a quantum channel do not allow deterministic communication. We propose a two-way six-state deterministic secure quantum communication scheme that is robust in a lossy channel. Our protocol can be used for two purposes: (a) establishment of a deterministic key, and (b) direct communication of a secret message. Our protocol is directly integrable with the decoy state method while achieving deterministic communication without using a quantum memory. In our protocol, a legitimate party has the control to assign a desired bit value to a successfully transmitted qubit in the public discussion step. Before the public discussion, no information is leaked to the eavesdropper (Eve) even if all the qubits are measured or prepared by her. Hence, our scheme is used as a quantum direct communication (QDC) protocol, to meet the quality of service requirement of swift data communication. We compare the security of our protocol against the photon number splitting attack in the absence of the decoy state method with two QDC protocols. We compute the success probability of Eve when our protocol is used as a multiparty key distribution scheme. We also propose the criteria to compute the efficiency of QDC protocols.
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, either of which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capabilities of the Monte Carlo and deterministic methods in day-by-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
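A minimal sketch of the kind of point-source calculation a deterministic code performs follows; the attenuation coefficient and the simple linear build-up model are assumed example values, not MicroShield's actual correlations:

```python
import math

# Deterministic slab-shield estimate: exponential attenuation times a
# build-up factor B that accounts for scattered photons reaching the
# detector. mu (1/cm) and the build-up coefficient are illustrative
# assumptions, not library data.
def dose_rate(d0, mu, x, b_coeff=0.5):
    """Dose rate behind a slab of thickness x (cm) for unshielded rate d0."""
    b = 1.0 + b_coeff * mu * x        # simple linear build-up model (assumed)
    return d0 * b * math.exp(-mu * x)

# Thicker shields still attenuate more despite the growing build-up term:
assert dose_rate(100.0, 0.6, 10.0) < dose_rate(100.0, 0.6, 5.0)
```

The extrapolation problem the abstract mentions appears here directly: if `mu` or `b_coeff` is taken from the wrong energy range, the error is multiplied through the exponential.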
Algorithms for Computing Nash Equilibria in Deterministic LQ Games
Engwerda, J.C.
2006-01-01
In this paper we review a number of algorithms to compute Nash equilibria in deterministic linear quadratic differential games. We will review the open-loop and feedback information case. In both cases we address both the finite and the infinite planning horizon.
Deterministic entanglement of Rydberg ensembles by engineered dissipation
DEFF Research Database (Denmark)
Dasari, Durga; Mølmer, Klaus
2014-01-01
We propose a scheme that employs dissipation to deterministically generate entanglement in an ensemble of strongly interacting Rydberg atoms. With a combination of microwave driving between different Rydberg levels and a resonant laser coupling to a short lived atomic state, the ensemble can be d...
The State of Deterministic Thinking among Mothers of Autistic Children
Directory of Open Access Journals (Sweden)
Mehrnoush Esbati
2011-10-01
Full Text Available Objectives: The purpose of the present study was to investigate the effectiveness of cognitive-behavioral education in decreasing deterministic thinking in mothers of children with autism spectrum disorders. Methods: Participants were 24 mothers of autistic children who had been referred to counseling centers in Tehran and whose children's disorder had been diagnosed by at least a psychiatrist and a counselor. They were randomly selected and assigned to control and experimental groups. The measurement tool was the Deterministic Thinking Questionnaire; both groups answered it before and after the education, and the answers were analyzed by analysis of covariance. Results: The results indicated that cognitive-behavioral education decreased deterministic thinking among mothers of autistic children; it also decreased four subscales of deterministic thinking: interaction with others, absolute thinking, prediction of the future, and negative events (P<0.05). Discussion: By learning cognitive and behavioral techniques, parents of children with autism can reach a higher level of psychological well-being, and it is likely that these cognitive-behavioral skills would have a positive impact on the general life satisfaction of mothers of children with autism.
Multidirectional sorting modes in deterministic lateral displacement devices
DEFF Research Database (Denmark)
Long, B.R.; Heller, Martin; Beech, J.P.
2008-01-01
Deterministic lateral displacement (DLD) devices separate micrometer-scale particles in solution based on their size using a laminar microfluidic flow in an array of obstacles. We investigate array geometries with rational row-shift fractions in DLD devices by use of a simple model including both...
Deterministic teleportation using single-photon entanglement as a resource
DEFF Research Database (Denmark)
Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.
2012-01-01
We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...
Deterministic algorithms for multi-criteria Max-TSP
Manthey, Bodo
2012-01-01
We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of
Demonstration of deterministic and high fidelity squeezing of quantum information
DEFF Research Database (Denmark)
Yoshikawa, J-I.; Hayashi, T-; Akiyama, T.
2007-01-01
By employing a recent proposal [R. Filip, P. Marek, and U.L. Andersen, Phys. Rev. A 71, 042308 (2005)] we experimentally demonstrate a universal, deterministic, and high-fidelity squeezing transformation of an optical field. It relies only on linear optics, homodyne detection, feedforward, and an...
Discovering Multidimensional Structure in Relational Data
DEFF Research Database (Denmark)
Jensen, Mikael Rune; Holmgren, Thomas; Pedersen, Torben Bach
2004-01-01
On-Line Analytical Processing (OLAP) systems based on multidimensional databases are essential elements of decision support. However, most existing data is stored in ordinary relational OLTP databases, i.e., data has to be (re-) modeled as multidimensional cubes before the advantages of OLAP...... tools are available. In this paper we present an approach for the automatic construction of multidimensional OLAP database schemas from existing relational OLTP databases, enabling easy OLAP design and analysis for most existing data sources. This is achieved through a set of practical and effective...
Multidimensionally encoded magnetic resonance imaging.
Lin, Fa-Hsuan
2013-07-01
Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. Copyright © 2012 Wiley Periodicals, Inc.
Discovering Multidimensional Structure in Relational Data
DEFF Research Database (Denmark)
Jensen, Mikael Rune; Holmgren, Thomas; Pedersen, Torben Bach
2004-01-01
tools are available. In this paper we present an approach for the automatic construction of multidimensional OLAP database schemas from existing relational OLTP databases, enabling easy OLAP design and analysis for most existing data sources. This is achieved through a set of practical and effective......On-Line Analytical Processing (OLAP) systems based on multidimensional databases are essential elements of decision support. However, most existing data is stored in ordinary relational OLTP databases, i.e., data has to be (re-) modeled as multidimensional cubes before the advantages of OLAP...... algorithms for discovering multidimensional schemas from relational databases. The algorithms take a wide range of available metadata into account in the discovery process, including functional and inclusion dependencies, and key and cardinality information....
CAMS: OLAPing Multidimensional Data Streams Efficiently
Cuzzocrea, Alfredo
In the context of data stream research, taming the multidimensionality of real-life data streams in order to efficiently support OLAP analysis/mining tasks is a critical challenge. Inspired by this fundamental motivation, in this paper we introduce CAMS (Cube-based Acquisition model for Multidimensional Streams), a model for efficiently OLAPing multidimensional data streams. CAMS combines a set of data stream processing methodologies, namely (i) the OLAP dimension flattening process, which allows us to obtain dimensionality reduction of multidimensional data streams, and (ii) the OLAP stream aggregation scheme, which aggregates data stream readings according to an OLAP-hierarchy-based membership approach. We complete our analytical contribution by means of experimental assessment and analysis of both the efficiency and the scalability of the OLAPing capabilities of CAMS on synthetic multidimensional data streams. Both analytical and experimental results clearly connote CAMS as an enabling component for next-generation Data Stream Management Systems.
Difference-Huffman Coding of Multidimensional Databases
Szépkúti, István
2011-01-01
A new compression method called difference-Huffman coding (DHC) is introduced in this paper. It is verified empirically that DHC results in a smaller multidimensional physical representation than those for other previously published techniques (single count header compression, logical position compression, base-offset compression and difference sequence compression). The article examines how caching influences the expected retrieval time of the multidimensional and table representations of re...
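The core idea behind difference-Huffman coding, storing the differences between consecutive logical positions of non-empty cells and Huffman-coding the typically small, repetitive gaps, can be sketched as follows. This is a minimal illustration of the principle only; the sample positions, the 32-bit baseline, and the helper name are hypothetical, not the paper's implementation.

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Build a Huffman tree over symbol frequencies; return {symbol: code length}."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate: one distinct symbol
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tie-breaker id, {symbol: depth so far}).
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in c1.items()}
        merged.update({s: d + 1 for s, d in c2.items()})
        heapq.heappush(heap, (n1 + n2, next_id, merged))
        next_id += 1
    return heap[0][2]

# Hypothetical logical positions of non-empty cells in a sparse cube.
positions = [3, 4, 5, 9, 10, 11, 12, 40, 41, 42]
# Difference sequence: first position, then gaps between consecutive positions.
diffs = [positions[0]] + [b - a for a, b in zip(positions, positions[1:])]
lengths = huffman_code_lengths(diffs)
compressed_bits = sum(lengths[d] for d in diffs)
raw_bits = len(positions) * 32               # baseline: 32-bit absolute positions
```

Because the gap of 1 dominates the difference sequence, it receives a very short code and the compressed size ends up a small fraction of the 320-bit absolute-position baseline in this toy example.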
Multi-Dimensional Games (MD-Games)
Ruiz Estrada, M.A.
2009-01-01
This paper introduces the concept of Multi-Dimensional games (MD-games) based on the application of an alternative mathematical and graphical modeling approach to study the game theory from a multi-dimensional perspective. In fact, the MD-Games request the application of the mega-space coordinate system to visualize a large number of games, players, strategies and pay-offs functions into the same graphical space.
A statistical model for multidimensional irreversible electroporation cell death in tissue
Directory of Open Access Journals (Sweden)
Rubinsky Boris
2010-02-01
Full Text Available Abstract Background Irreversible electroporation (IRE) is a minimally invasive tissue ablation technique which utilizes electric pulses delivered by electrodes to a targeted area of tissue to produce high-amplitude electric fields, thus inducing irreversible damage to the cell membrane lipid bilayer. An important application of this technique is cancer tissue ablation. Mathematical modelling is considered important in IRE treatment planning. In the past, IRE mathematical modelling used a deterministic single value for the amplitude of the electric field required to cause cell death. However, tissue, particularly cancerous tissue, comprises a population of different cells of different sizes and orientations, which in conventional IRE are exposed to complex electric fields; therefore, using a deterministic single value is overly simplistic. Methods We introduce and describe a new methodology for evaluating IRE-induced cell death in tissue. Our approach employs a statistical Peleg-Fermi model to correlate the probability of cell death in heterogeneous tissue to the parameters of the electroporation pulses, such as the number of pulses, the electric field amplitude and the pulse length. For treatment planning, the Peleg-Fermi model is combined with a numerical solution of the multidimensional electric field equation cast in a dimensionless form. This is the first time this concept has been used for evaluating IRE cell death in multidimensional situations. Results We illustrate the methodology using data reported in the literature for prostate cancer cell death by IRE. We show how to fit this data to a Fermi function in order to calculate the critical statistical parameters. To illustrate the use of the methodology, we simulated 2-D irreversible electroporation protocols and produced 2-D maps of the statistical distribution of cell death in the treated region. These plots were compared to plots produced using a deterministic model of cell death by IRE and
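The statistical core of the approach is the Fermi (sigmoid) survival curve. The sketch below assumes the common Peleg-Fermi form S(E) = 1 / (1 + exp((E - E_c) / A)); the parameter values are illustrative stand-ins, not fitted prostate-cancer data.

```python
import math

def fermi_survival(E, E_c, A):
    """Peleg-Fermi survival fraction S(E) = 1 / (1 + exp((E - E_c) / A)):
    E is the local electric field amplitude, E_c the field at which half the
    cells die, and A sets the width of the transition."""
    return 1.0 / (1.0 + math.exp((E - E_c) / A))

E_c, A = 1000.0, 150.0        # V/cm, hypothetical parameter values
prob_death = {E: 1.0 - fermi_survival(E, E_c, A) for E in (200.0, 1000.0, 2500.0)}
```

Fields well below E_c leave the death probability near zero, E = E_c gives exactly 0.5, and strong fields drive it toward one; treatment planning pairs this curve with the numerically computed field map to produce the 2-D cell-death distributions described above.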
Multidimensional poverty and child survival in India.
Directory of Open Access Journals (Sweden)
Sanjay K Mohanty
Full Text Available BACKGROUND: Though the concept of multidimensional poverty has been acknowledged across the disciplines (among economists, public health professionals, development thinkers, social scientists, policy makers and international organizations) and included in the development agenda, its measurement and application are still limited. OBJECTIVES AND METHODOLOGY: Using unit data from the National Family and Health Survey 3, India, this paper measures poverty in multidimensional space and examines the linkages of multidimensional poverty with child survival. Multidimensional poverty is measured in the dimensions of knowledge, health and wealth, and child survival is measured with respect to infant mortality and under-five mortality. Descriptive statistics, principal component analyses and life table methods are used in the analyses. RESULTS: The estimates of multidimensional poverty are robust and the inter-state differentials are large. While the infant mortality rate and under-five mortality rate are disproportionately higher among the abject poor compared to the non-poor, there are no significant differences in child survival among the educationally, economically and health poor at the national level. State patterns in child survival among the education, economic and health poor are mixed. CONCLUSION: Use of multidimensional poverty measures helps to identify the abject poor, who are unlikely to come out of the poverty trap. Child survival is significantly lower among the abject poor compared to the moderate poor and non-poor. We urge popularizing the concept of multiple deprivations in research and programs so as to reduce poverty and inequality in the population.
Information-Theoretic Analysis of Memoryless Deterministic Systems
Directory of Open Access Journals (Sweden)
Bernhard C. Geiger
2016-11-01
Full Text Available The information loss in deterministic, memoryless systems is investigated by evaluating the conditional entropy of the input random variable given the output random variable. It is shown that for a large class of systems the information loss is finite, even if the input has a continuous distribution. For systems with infinite information loss, a relative measure is defined and shown to be related to Rényi information dimension. As deterministic signal processing can only destroy information, it is important to know how this information loss affects the solution of inverse problems. Hence, we connect the probability of perfectly reconstructing the input to the information lost in the system via Fano-type bounds. The theoretical results are illustrated by example systems commonly used in discrete-time, nonlinear signal processing and communications.
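For a deterministic, memoryless map g, the information loss H(X|Y) with Y = g(X) reduces to H(X) - H(Y). A minimal discrete sketch follows; the uniform input and the map g(x) = x mod 3 are illustrative choices, not examples from the paper.

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy in bits of a {outcome: probability} map."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Illustrative deterministic, memoryless system: g(x) = x mod 3 applied to a
# uniform input over {0, ..., 7}.
p_x = {x: 1.0 / 8 for x in range(8)}
p_y = Counter()
for x in p_x:
    p_y[x % 3] += p_x[x]

# For deterministic g, H(X|Y) = H(X) - H(Y): the information destroyed by g.
info_loss = entropy(p_x) - entropy(dict(p_y))
```

Here H(X) = 3 bits and the loss works out to 3/4·log2(3) + 1/4 ≈ 1.44 bits, the average uncertainty about which preimage produced the observed output; this is exactly the quantity the Fano-type reconstruction bounds act on.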
Deterministic Hydraulic Load Analysis on Reactor Internals of APR1400
International Nuclear Information System (INIS)
Kim, Kyu Hyung; Ko, Do Young; Gu, Ja Yeong
2011-01-01
The structural integrity of the reactor vessel internals (RVI) of the nuclear power plants that have been constructed should be verified in accordance with the US Nuclear Regulatory Commission Regulatory Guide 1.20 (RG1.20) comprehensive vibration assessment program (CVAP) during preoperational and initial startup testing. The program consists of a vibration and stress analysis, a vibration measurement, an inspection, and an assessment of each program. The vibration and stress analysis program is comprised of a hydraulic load analysis and a structural response analysis. The hydraulic loads include the random hydraulic loads induced by turbulent flow and deterministic hydraulic loads induced by pump pulsation. This paper describes a developed full scope 3-D model and the deterministic hydraulic loads for the RVI of the APR1400
Deterministic Brownian motion generated from differential delay equations.
Lei, Jinzhi; Mackey, Michael C
2011-10-01
This paper addresses the question of how Brownian-like motion can arise from the solution of a deterministic differential delay equation. To study this we analytically study the bifurcation properties of an apparently simple differential delay equation and then numerically investigate the probabilistic properties of chaotic solutions of the same equation. Our results show that solutions of the deterministic equation with randomly selected initial conditions display a Gaussian-like density for long time, but the densities are supported on an interval of finite measure. Using these chaotic solutions as velocities, we are able to produce Brownian-like motions, which show statistical properties akin to those of a classical Brownian motion over both short and long time scales. Several conjectures are formulated for the probabilistic properties of the solution of the differential delay equation. Numerical studies suggest that these conjectures could be "universal" for similar types of "chaotic" dynamics, but we have been unable to prove this.
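The construction can be imitated numerically: integrate a scalar delay equation with the Euler method, then use the chaotic solution as a velocity. The specific equation, parameters, and step sizes below are illustrative stand-ins, not the equation analyzed in the paper.

```python
import math

def integrate_dde(a=1.0, b=4.0, tau=2.0, dt=0.01, t_end=200.0, x0=0.1):
    """Euler integration of the illustrative delay equation
    x'(t) = -a*x(t) + b*sin(2*pi*x(t - tau)) with constant history x0."""
    n_delay = round(tau / dt)
    hist = [x0] * (n_delay + 1)          # the initial function on [-tau, 0]
    for _ in range(round(t_end / dt)):
        x_now = hist[-1]
        x_lag = hist[-1 - n_delay]
        hist.append(x_now + dt * (-a * x_now + b * math.sin(2 * math.pi * x_lag)))
    return hist

velocity = integrate_dde()
# Use the bounded, irregular solution as a velocity and integrate once more
# to obtain a Brownian-like position process.
position = [0.0]
for v in velocity:
    position.append(position[-1] + 0.01 * v)
```

The velocity stays on a bounded interval (here |x| ≤ b/a = 4), matching the paper's observation that the density is supported on a set of finite measure even though the integrated motion looks Brownian.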
On the secure obfuscation of deterministic finite automata.
Energy Technology Data Exchange (ETDEWEB)
Anderson, William Erik
2008-06-01
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
Deterministic Properties of Serially Connected Distributed Lag Models
Directory of Open Access Journals (Sweden)
Piotr Nowak
2013-01-01
Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the independent variable is also the dependent variable in the preceding model). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory such as probability distributions and the central limit theorem. (original abstract)
Relationship of Deterministic Thinking With Loneliness and Depression in the Elderly
Directory of Open Access Journals (Sweden)
Mehdi Sharifi
2017-12-01
Conclusion According to the results, deterministic thinking has a significant relationship with depression and sense of loneliness in older adults, and acts as a predictor of both. Therefore, psychological interventions challenging the cognitive distortion of deterministic thinking, and attention to mental health in older adults, are very important.
Evaluation of Deterministic and Stochastic Components of Traffic Counts
Directory of Open Access Journals (Sweden)
Ivan Bošnjak
2012-10-01
Full Text Available Traffic counts or statistical evidence of the traffic process are often a characteristic of time-series data. In this paper the fundamental problem of estimating the deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
Efficient deterministic secure quantum communication protocols using multipartite entangled states
Joy, Dintomon; Surendran, Supin P.; Sabir, M.
2017-06-01
We propose two deterministic secure quantum communication protocols employing three-qubit GHZ-like states and five-qubit Brown states as quantum channels for secure transmission of information in units of two bits and three bits using multipartite teleportation schemes developed here. In these schemes, the sender's capability in selecting quantum channels and the measuring bases leads to improved qubit efficiency of the protocols.
The deterministic SIS epidemic model in a Markovian random environment.
Economou, Antonis; Lopez-Herrero, Maria Jesus
2016-07-01
We consider the classical deterministic susceptible-infective-susceptible epidemic model, where the infection and recovery rates depend on a background environmental process that is modeled by a continuous time Markov chain. This framework is able to capture several important characteristics that appear in the evolution of real epidemics in large populations, such as seasonality effects and environmental influences. We propose computational approaches for the determination of various distributions that quantify the evolution of the number of infectives in the population.
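A minimal numerical sketch of the setup, deterministic SIS dynamics whose rates are switched by a two-state continuous-time Markov environment, might look as follows; all rates, the population size, and the Euler discretisation of the environment jumps are illustrative assumptions, not values from the paper.

```python
import random

def simulate_sis(beta, gamma, N=1000.0, i0=10.0, q=(0.2, 0.4),
                 dt=0.01, t_end=400.0, seed=1):
    """Deterministic SIS flow dI/dt = beta_e*I*(N - I)/N - gamma_e*I whose
    rates (beta_e, gamma_e) follow a 2-state Markov environment e with
    jump rates q. Euler stepping for both the flow and the environment."""
    rng = random.Random(seed)
    env, I, t = 0, i0, 0.0
    while t < t_end:
        if rng.random() < q[env] * dt:   # environment jump ~ rate * dt
            env = 1 - env
        b, g = beta[env], gamma[env]
        I += dt * (b * I * (N - I) / N - g * I)
        t += dt
    return I

# Environment 0 favours spreading (beta/gamma = 5); environment 1 dampens it (1.5).
I_final = simulate_sis(beta=(0.5, 0.3), gamma=(0.1, 0.2))
```

The trajectory wanders between the endemic equilibria N(1 - gamma/beta) of the two frozen environments (800 and about 333 infectives here), which is precisely the environmentally driven variability in the number of infectives that the paper's computational approach quantifies distributionally.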
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TX; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
A note on controllability of deterministic context-free systems
Czech Academy of Sciences Publication Activity Database
Masopust, Tomáš
2012-01-01
Roč. 48, č. 8 (2012), s. 1934-1937 ISSN 0005-1098 R&D Projects: GA ČR(CZ) GPP202/11/P028 Institutional support: RVO:67985840 Keywords : discrete-event systems * controllability * deterministic context-free systems Subject RIV: BA - General Mathematics Impact factor: 2.919, year: 2012 http://www.sciencedirect.com/science/article/pii/S0005109812002543
Design Optimization of a Speed Reducer Using Deterministic Techniques
Lin, Ming-Hua; Tsai, Jung-Fa; Hu, Nian-Ze; Chang, Shu-Chuan
2013-01-01
The optimal design problem of minimizing the total weight of a speed reducer under constraints is a generalized geometric programming problem. Since the metaheuristic approaches cannot guarantee to find the global optimum of a generalized geometric programming problem, this paper applies an efficient deterministic approach to globally solve speed reducer design problems. The original problem is converted by variable transformations and piecewise linearization techniques. The reformulated prob...
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
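Why a high dominance ratio makes unaccelerated source iteration slow can be seen in a toy power iteration on a 2x2 "fission matrix"; the matrix and its eigenvalue ratio of 0.99 are illustrative inventions. The fission matrix acceleration idea is, in spirit, to solve the small coarse eigenproblem directly instead of waiting out this slow decay.

```python
def power_iteration(A, v, n_iter):
    """Repeated application of A with L1 normalisation, the analogue of
    iterating a fission source to convergence."""
    for _ in range(n_iter):
        w = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        s = sum(abs(x) for x in w)
        v = [x / s for x in w]
    return v

# Illustrative matrix with eigenvalues 1.0 and 0.99 (dominance ratio 0.99);
# the true fundamental mode is [0.5, 0.5].
A = [[0.995, 0.005], [0.005, 0.995]]
v_50 = power_iteration(A, [1.0, 0.0], 50)      # still strongly biased
v_2000 = power_iteration(A, [1.0, 0.0], 2000)  # finally near [0.5, 0.5]
```

After 50 iterations the error component has only decayed by 0.99^50 ≈ 0.6, so the iterate is still far from the fundamental mode; thousands of iterations are needed, which is the slowness the acceleration methods in the thesis attack.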
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest outbreak of MERS outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between infected and non-infected individuals, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of the free virus in the environment. Mathematical modeling is used to illustrate the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state condition. The stochastic model, using a Continuous Time Markov Chain (CTMC) approach, is used to predict future states by means of random variables. From the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations of both models using several different parameters are shown, and the probability of disease extinction is compared across several initial conditions.
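The extinction probability delivered by a CTMC epidemic model has a simple closed form in the standard branching-process approximation near the disease-free state: each of the I0 initial infectives founds a line that dies out independently with probability 1/R0 when R0 > 1. The sketch below illustrates that textbook relation, not the paper's fitted MERS parameters.

```python
def extinction_probability(R0, I0):
    """Branching-process approximation near the disease-free equilibrium:
    P(extinction) = (1/R0)**I0 for R0 > 1, and 1 (certain extinction)
    in the subcritical case R0 <= 1."""
    return 1.0 if R0 <= 1.0 else (1.0 / R0) ** int(I0)

p_one = extinction_probability(2.0, 1)    # a single introduced case
p_three = extinction_probability(2.0, 3)  # three simultaneous introductions
p_sub = extinction_probability(0.8, 5)    # subcritical outbreak
```

This is the qualitative difference the abstract highlights: the deterministic model only yields the threshold, while the stochastic model additionally assigns a probability (here 0.5 versus 0.125) that an outbreak fizzles out depending on the initial condition.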
Distinguishing deterministic and noise components in ELM time series
International Nuclear Information System (INIS)
Zvejnieks, G.; Kuzovkov, V.N.
2004-01-01
Full text: One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components in experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial engineering principles which allows us to distinguish deterministic and noise components. We extended the linear autoregression method (AR) by including non-linearity (the NAR method). As a starting point we have chosen the non-linearity in polynomial form; however, the NAR method can be extended to any other type of non-linear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we have analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The results obtained indicate that a linear AR model can describe the ELM behavior. In turn, this means that type I ELM behavior is of a relaxation or random type.
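The AR-versus-NAR selection step rests on fitting candidate models by least squares and comparing their Bayesian Information Criterion. A minimal sketch for the linear AR(1) case is shown below, using synthetic data rather than ELM measurements and the usual Gaussian least-squares form of the BIC, n*log(RSS/n) + k*log(n).

```python
import math
import random

def fit_ar1(x):
    """Least-squares fit of x[t] = a*x[t-1] + c, plus the Gaussian BIC
    n*log(RSS/n) + k*log(n) used for model selection."""
    y, z = x[1:], x[:-1]
    n = len(y)
    zbar, ybar = sum(z) / n, sum(y) / n
    a = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
         / sum((zi - zbar) ** 2 for zi in z))
    c = ybar - a * zbar
    rss = sum((yi - (a * zi + c)) ** 2 for zi, yi in zip(z, y))
    bic = n * math.log(rss / n) + 2 * math.log(n)   # k = 2 parameters
    return a, c, bic

# Synthetic AR(1) series (true a = 0.7) standing in for an ELM time series.
rng = random.Random(0)
series = [1.0]
for _ in range(499):
    series.append(0.7 * series[-1] + 0.3 + rng.gauss(0.0, 0.05))
a_hat, c_hat, bic = fit_ar1(series)
```

Model selection then amounts to fitting each candidate the same way (higher AR orders, polynomial NAR terms) and keeping the one with the lowest BIC, which penalises the extra parameters of the non-linear terms.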
Deterministic hazard quotients (HQs): Heading down the wrong road
International Nuclear Information System (INIS)
Wilde, L.; Hunter, C.; Simpson, J.
1995-01-01
The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in the remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates further risk evaluation is needed, whereas an HQ < 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify the uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measurements of associated uncertainty. Sensitivity analyses were used to identify the most important factors significantly influencing risk determination. Understanding the uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches.
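The deterministic-versus-probabilistic contrast can be made concrete: a point HQ below 1 can coexist with a non-trivial probability that HQ ≥ 1 once the EPC and EBV are treated as distributions. The lognormal parameters below are hypothetical, chosen only to illustrate the effect.

```python
import math
import random

rng = random.Random(42)
n = 10_000
# Hypothetical lognormal distributions for the exposure point concentration
# (EPC) and the ecological benchmark value (EBV); each median is exp(mu).
hq_samples = [rng.lognormvariate(0.0, 0.5) / rng.lognormvariate(0.7, 0.3)
              for _ in range(n)]

hq_point = math.exp(0.0) / math.exp(0.7)            # deterministic HQ of the medians
p_exceed = sum(hq >= 1.0 for hq in hq_samples) / n  # Monte Carlo P(HQ >= 1)
```

The point estimate (about 0.5) would screen the site out, yet on the order of one draw in ten exceeds 1, exactly the uncertainty that a single deterministic HQ hides from the risk manager.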
Precision production: enabling deterministic throughput for precision aspheres with MRF
Maloney, Chris; Entezarian, Navid; Dumas, Paul
2017-10-01
Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.
Are deterministic methods suitable for short term reserve planning?
International Nuclear Information System (INIS)
Voorspools, Kris R.; D'haeseleer, William D.
2005-01-01
Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks if the N-1 reserve is a logical fixed value for minutes reserve. The second test procedure investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as minutes reserve. The expected jump in reliability, resulting in low reliability for reserve margins lower than the largest unit and high reliability above, is not observed. The second test shows that both the N-1 reserve and the percentage reserve methods do not provide a stable reliability level that is independent of power demand. For the N-1 reserve, the reliability increases with decreasing maximum demand. For the percentage reserve, the reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that the probability based methods are to be preferred over the deterministic methods
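The loss-of-load probability used here as the objective stochastic criterion can be computed exactly for a tiny system by enumerating unit-outage states, which also shows why a fixed N-1 margin yields demand-dependent reliability. The unit sizes and forced outage rates are illustrative.

```python
from itertools import product

# Illustrative generating units: (capacity in MW, forced outage rate).
units = [(100.0, 0.02), (100.0, 0.02), (50.0, 0.04)]

def lolp(units, demand):
    """Loss-of-load probability: enumerate all outage states of the
    independent units and sum the probability of states whose remaining
    capacity cannot cover the demand."""
    total = 0.0
    for state in product((False, True), repeat=len(units)):  # True = on outage
        p, cap = 1.0, 0.0
        for (c, q), out in zip(units, state):
            p *= q if out else 1.0 - q
            cap += 0.0 if out else c
        if cap < demand:
            total += p
    return total

# With a fixed N-1 margin (100 MW, the largest unit) the reliability still
# depends on the demand level:
lolp_high = lolp(units, 150.0)   # demand 150 MW of 250 MW installed
lolp_low = lolp(units, 100.0)    # demand 100 MW, same N-1 margin
```

Here lolp_high ≈ 2.0e-3 while lolp_low ≈ 4.0e-4: the same deterministic margin buys different reliability at different demand levels, which is the paper's core objection to the N-1 and percentage-reserve rules.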
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and on temporal factors such as the season of the year, the day of the week or the time of day. In deterministic radial distribution load flow studies the load is taken as constant; however, load varies continually with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and a deterministic radial load flow is solved for each set of sampled values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
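The ZIP model referenced above represents each load as a mix of constant-impedance (Z), constant-current (I) and constant-power (P) fractions of the nominal demand. A minimal sketch follows; the coefficient split is illustrative (the three coefficients must sum to 1).

```python
def zip_load(P0, V, V0=1.0, a_z=0.3, a_i=0.4, a_p=0.3):
    """ZIP load model: P(V) = P0 * (a_z*(V/V0)**2 + a_i*(V/V0) + a_p),
    i.e. constant-impedance, constant-current and constant-power fractions
    of the nominal load P0 (coefficient values here are illustrative)."""
    v = V / V0
    return P0 * (a_z * v * v + a_i * v + a_p)

P_nominal = zip_load(100.0, 1.00)   # at nominal voltage the demand equals P0
P_sag = zip_load(100.0, 0.95)       # demand relief under a 5% voltage sag
```

In the Monte Carlo load flow, the nominal active and reactive demands are drawn from each load's mean and standard deviation, and a relation like zip_load then couples the drawn demand to the solved bus voltage at every iteration.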
Combining Deterministic structures and stochastic heterogeneity for transport modeling
Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg
2017-04-01
Contaminant transport in highly heterogeneous aquifers is extremely challenging and the subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network, in contrast to the information available for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information with a stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests and may differ by orders of magnitude. A sub-scale heterogeneity is introduced within every block; this heterogeneity can be modeled as bimodal or log-normally distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful implementation of the model was achieved for the well-known MADE site.
Disruptive behavior disorders: Multidimensional analysis
Directory of Open Access Journals (Sweden)
José Antonio López-Villalobos
2012-01-01
Full Text Available The study aims to analyze the contribution of sociodemographic, clinical, family and academic variables to the probability of presenting a disruptive behavior disorder (DBD). An ex post facto, retrospective, cross-sectional, comparative design with two groups (DBD cases and clinical controls) is used. The sample is incidental and consists of 1,847 clinical cases aged between 6 and 16 years. Cases and controls were defined by clinical interview according to DSM-IV-TR criteria. The procedure includes a descriptive phase and a multivariable logistic regression estimation to address the main objective. The proposed logistic regression model is significant and classifies 87.2% of the cases. The variables male sex (OR = 1.82; p = 0.00), comorbidity (OR = 7.68; p = 0.00), borderline IQ (OR = 3.15; p = 0.00), lower maternal educational level (OR = 1.57; p = 0.04) and grade repetition (OR = 2; p = 0.00) significantly increase the probability of DBD. The variables age, psychiatric history, separated parents and parental education are not significant in the model. DBD shows a multidimensional association with clinical, academic and family variables that could be included in preventive programs.
Comparison of probabilistic and deterministic fiber tracking of cranial nerves.
Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H
2017-09-01
OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery, and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods for eliminating noise from the resulting depictions. The authors hypothesized that probabilistic tracking could lead to more accurate results, because it extracts information from the underlying data more efficiently. Moreover, the authors adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, this work provides a comparison of the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, whose data were obtained from 2 public databases: the Kirby repository (KR) and the Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CNs. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic than with deterministic tracking.
Parallel Time 0(log N) Acceptance of Deterministic CFLs.
1984-03-01
…algorithm may be used to simulate a space-bounded auxiliary pushdown automaton. In Section 7, we give a complementary result […] of P-RAMs by deterministic auxiliary PDAs. In Section […], we mention some related work, and in Section […], we identify […]bounded, t(n) time-bounded deterministic auxiliary pushdown automaton M with a stack discipline satisfying the assumptions of Section 1. Each surface…
CALTRANS: A parallel, deterministic, 3D neutronics code
Energy Technology Data Exchange (ETDEWEB)
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS, using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface), are provided in the appendices.
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
A deterministic global optimization using smooth diagonal auxiliary functions
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.
2015-04-01
In many practical decision-making problems it happens that functions involved in optimization process are black-box with unknown analytical representations and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f (x) and its gradient f‧ (x) are black-box functions. It is supposed that f‧ (x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied and numerical experiments executed on eight hundred test functions are presented.
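The "Divide-the-Best" diagonal algorithm itself is involved, but the underlying idea of using a Lipschitz bound to discard regions can be illustrated with the classical one-dimensional Piyavskii-Shubert scheme (a simplified sketch, not the authors' method; the test function and the overestimate K below are illustrative assumptions):

```python
import bisect
import math

def piyavskii(f, a, b, K, iters=200):
    """Global minimization of f on [a, b], assuming Lipschitz constant K,
    via the Piyavskii-Shubert saw-tooth lower bound."""
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(iters):
        # On each subinterval the piecewise-linear lower bound is minimized
        # at xc, with bound value lb; pick the subinterval with the lowest lb.
        best_lb, best_xc = float("inf"), None
        for x1, f1, x2, f2 in zip(xs, fs, xs[1:], fs[1:]):
            xc = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * K)
            lb = 0.5 * (f1 + f2) - 0.5 * K * (x2 - x1)
            if lb < best_lb:
                best_lb, best_xc = lb, xc
        i = bisect.bisect(xs, best_xc)       # evaluate at the most promising point
        xs.insert(i, best_xc)
        fs.insert(i, f(best_xc))
    i = min(range(len(xs)), key=fs.__getitem__)
    return xs[i], fs[i]

# Illustrative black-box objective; its global minimum on [2.7, 7.5] is near x = 5.15.
f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)
x_best, f_best = piyavskii(f, 2.7, 7.5, K=6.0)
print(x_best, f_best)  # approaches the global minimum (~5.15, ~-1.90)
```

The guarantee rests on K being a valid overestimate of the true Lipschitz constant; the paper's contribution lies in extending this style of bound to diagonal partitions with smooth auxiliary functions and gradient information.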
Deterministic computational modeling of the radioactive decay phenomenon
International Nuclear Information System (INIS)
Dias, Hugo Rafael; Barros, Ricardo C.
2007-01-01
Based on a deterministic mathematical model, we develop a computational model of the radioactive decay phenomenon, emphasizing the development of a computational application, i.e., the construction of algorithms, their programming, and the presentation of results for this mathematical model. The application models single or composite radioactive decay using classical numerical methods, such as the implicit trapezoidal method, as well as more recent numerical methods that are free of time truncation error, which means greater reliability of the calculated values, together with speed and efficiency in obtaining the results. (author)
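As a minimal sketch of the classical approach mentioned above (not the authors' code; parameter values are arbitrary), the implicit trapezoidal rule applied to dN/dt = -λN yields the closed-form update N_{k+1} = N_k (1 - λh/2)/(1 + λh/2), which can be checked against the exact exponential solution:

```python
import math

def decay_trapezoidal(n0, lam, h, steps):
    """Implicit trapezoidal (Crank-Nicolson) integration of dN/dt = -lam*N.
    Solving N_{k+1} = N_k - (lam*h/2)*(N_k + N_{k+1}) for N_{k+1} gives a
    constant per-step growth factor."""
    factor = (1.0 - 0.5 * lam * h) / (1.0 + 0.5 * lam * h)
    n = n0
    for _ in range(steps):
        n *= factor
    return n

lam, h, steps = 0.1, 0.1, 100   # decay constant, time step, 100 steps -> t = 10
n_num = decay_trapezoidal(1000.0, lam, h, steps)
n_exact = 1000.0 * math.exp(-lam * h * steps)
print(n_num, n_exact)  # second-order accurate: agreement to ~1e-5 relative error
```

The method is unconditionally stable for decay problems, but it carries the time-truncation error that the "free of time truncation" methods cited in the abstract are designed to eliminate.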
Enhanced deterministic phase retrieval using a partially developed speckle field
DEFF Research Database (Denmark)
Almoro, Percival F.; Waller, Laura; Agour, Mostafa
2012-01-01
A technique for enhanced deterministic phase retrieval using a partially developed speckle field (PDSF) and a spatial light modulator (SLM) is demonstrated experimentally. A smooth test wavefront impinges on a phase diffuser, forming a PDSF that is directed to a 4f setup. Two defocused speckle...... intensity measurements are recorded at the output plane corresponding to axially-propagated representations of the PDSF in the input plane. The speckle intensity measurements are then used in a conventional transport of intensity equation (TIE) to reconstruct directly the test wavefront. The PDSF in our...
Calculating Certified Compilers for Non-deterministic Languages
DEFF Research Database (Denmark)
Bahr, Patrick
2015-01-01
Reasoning about programming languages with non-deterministic semantics entails many difficulties. For instance, to prove correctness of a compiler for such a language, one typically has to split the correctness property into a soundness and a completeness part, and then prove these two parts...... be used to formally derive -- from the semantics of the source language -- a compiler that is correct by construction. For such a derivation to succeed it is crucial that the underlying correctness argument proceeds as a single calculation, as opposed to separate calculations of the two directions...... of the correctness property. We demonstrate our technique by deriving a compiler for a simple language with interrupts....
The deterministic optical alignment of the HERMES spectrograph
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Separation of parasites from human blood using deterministic lateral displacement.
Holm, Stefan H; Beech, Jason P; Barrett, Michael P; Tegenfeldt, Jonas O
2011-04-07
We present the use of a simple microfluidic technique to separate living parasites from human blood. Parasitic trypanosomatids cause a range of human and animal diseases. African trypanosomes, responsible for human African trypanosomiasis (sleeping sickness), live free in the blood and other tissue fluids. Diagnosis relies on detection, and because the parasites are often present in low numbers against an overwhelming background of predominantly red blood cells, it is crucial to separate them from the blood. By modifying the method of deterministic lateral displacement, confining parasites and red blood cells in channels of optimized depth that accentuate morphological differences, we were able to achieve separation, thus offering a potential route to diagnostics.
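Separation in a deterministic lateral displacement array is governed by a critical particle diameter; a commonly used empirical estimate is Davis's formula D_c ≈ 1.4·G·N^(-0.48), where G is the gap between posts and N the array periodicity. The sketch below illustrates that published rule of thumb with assumed geometry values, not the device parameters of this particular paper:

```python
def dld_critical_diameter(gap, periodicity):
    """Davis's empirical formula for the critical diameter of a DLD array,
    in the same units as the gap. Particles larger than D_c are laterally
    displaced along the post rows; smaller particles zigzag with the flow."""
    return 1.4 * gap * periodicity ** (-0.48)

# Example: a 10 um gap with a row-shift fraction of 1/10 (periodicity N = 10).
dc = dld_critical_diameter(10.0, 10)
print(round(dc, 2))  # 4.64 -> critical size of ~4.6 um for this geometry
```

The paper's key addition is the channel depth as an extra design parameter, which changes the effective size that deformable, non-spherical cells and parasites present to the array.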
Synchronization of linearly coupled networks of deterministic ratchets
International Nuclear Information System (INIS)
Lu Pingli; Yang Ying; Huang Lin
2008-01-01
This Letter focuses on synchronization in a class of dynamical complex networks in which each node is a deterministic ratchet. By virtue of techniques derived from pendulum-like nonlinear analysis theory and the Kalman-Yakubovich-Popov (KYP) lemma, simple linear matrix inequality (LMI) formulations are established to guarantee stable synchronization of such networks. An interesting conclusion is reached: the stability of synchronization of the whole coupled N-dimensional network can be reduced to that of the simplest 2-dimensional system.
Intuitionistic fuzzy (IF) evaluations of multidimensional model
International Nuclear Information System (INIS)
Valova, I.
2012-01-01
There are different logical methods for data structuring, but none is perfect. The multidimensional model (MD) of data presents data in the form of a cube (also referred to as an info-cube or hypercube), or in the form of a 'star'-type scheme (referred to as a multidimensional scheme), using F-structures (Facts) and a set of D-structures (Dimensions), based on the notion of a hierarchy of D-structures. The data being analyzed in a specific multidimensional model are located in a Cartesian space restricted by the D-structures. In practice, the data are either dispersed or 'concentrated', so the data cells are not distributed evenly within the respective space. The moment of occurrence of any event is difficult to predict, and the data are concentrated by time period, location of the business event, etc. Processing such dispersed or concentrated data requires various technical strategies, and appropriate basic methods for presenting such data must be selected. The approaches to data processing and the respective calculations depend on the chosen data representation. The use of intuitionistic fuzzy evaluations (IFE) provides new possibilities for alternative presentation and processing of the data analyzed in any OLAP application. Using IFE in the evaluation of multidimensional models has the following advantages: analysts have more complete information for processing and analyzing the respective data; managers benefit from more effective final decisions; and more functional multidimensional schemes can be designed. The purpose of this work is to apply intuitionistic fuzzy evaluations to a multidimensional data model. (authors)
Strongly Deterministic Population Dynamics in Closed Microbial Communities
Directory of Open Access Journals (Sweden)
Zak Frentz
2015-10-01
Full Text Available Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards a possibility of simple macroscopic laws governing microbial systems despite numerous stochastic events present on microscopic levels.
Deterministic doping and the exploration of spin qubits
Energy Technology Data Exchange (ETDEWEB)
Schenkel, T.; Weis, C. D.; Persaud, A. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lo, C. C. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States); London Centre for Nanotechnology (United Kingdom); Chakarov, I. [Global Foundries, Malta, NY 12020 (United States); Schneider, D. H. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Bokor, J. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States)
2015-01-09
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.
Deterministic extinction effect of parasites on host populations.
Hwang, Tzy-Wei; Kuang, Yang
2003-01-01
Experimental studies have shown that parasites can reduce host density and even drive host populations to extinction. Conventional mathematical models for parasite-host interactions, while able to address the host density reduction scenario, fail to explain such deterministic extinction phenomena. In order to understand parasite-induced host extinction, Ebert et al. (2000) formulated a plausible but ad hoc epidemiological microparasite model and a stochastic variation of it. The deterministic model, which resembles a simple SI-type model, predicts the existence of a globally attractive positive steady state. Their simulation of the stochastic model indicates that extinction of the host is a likely outcome in some parameter regions. A careful examination of their ad hoc model suggests an alternative and plausible model assumption. With this modification, we show that the revised parasite-host model can exhibit the observed parasite-induced host extinction. This finding strengthens and complements that of Ebert et al. (2000), since all continuous models are likely to break down when all population densities are small. This extinction dynamics resembles that of ratio-dependent predator-prey models. We report here a complete global study of the revised parasite-host model. Biological implications and limitations of our findings are also presented.
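A generic SI-type host-parasite model already shows the basic density-reduction effect that such models capture (this is a hedged illustration in the spirit of the models discussed, not the authors' revised system; all parameter values are arbitrary):

```python
def si_host_parasite(s0=0.8, i0=0.01, t_end=500.0, dt=0.01,
                     b=1.0, d=0.1, k=1.0, beta=2.0, alpha=0.5):
    """Forward-Euler integration of a simple SI host-parasite model:
      S' = b*S*(1 - (S+I)/k) - d*S - beta*S*I   (only susceptibles reproduce)
      I' = beta*S*I - (d + alpha)*I             (alpha = parasite-induced death)
    """
    s, i = s0, i0
    for _ in range(int(t_end / dt)):
        ds = b * s * (1.0 - (s + i) / k) - d * s - beta * s * i
        di = beta * s * i - (d + alpha) * i
        s += dt * ds
        i += dt * di
    return s, i

s, i = si_host_parasite()
# Without the parasite, host density would settle at S = k*(1 - d/b) = 0.9;
# with it, total density S + I settles near 0.5 (here S -> 0.3, I -> 0.2).
print(s, i)
```

Note what the abstract points out: this class of model settles at a positive steady state (density reduction) but never reaches zero, which is why a structural modification is needed to produce genuine deterministic extinction.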
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. The distribution of translocation times of a given monomer is therefore controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation. We refer to this concept as "fingerprinting". The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt_m ~ m^1.5, is stronger than the thermal broadening, δt_m ~ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.
A survey of deterministic solvers for rarefied flows (Invited)
Mieussens, Luc
2014-12-01
Numerical simulations of rarefied gas flows are generally made with DSMC methods. Until recently, deterministic numerical methods based on a discretization of the Boltzmann equation were restricted to simple problems (1D, linearized flows, or simple geometries, for instance). In the last decade, several deterministic solvers have been developed by different teams to tackle more complex problems like 2D and 3D flows. Some of them are based on the full Boltzmann equation. Solving this equation numerically is still very challenging, and 3D solvers are still restricted to monoatomic gases, even if recent works have shown that it is possible to simulate simple flows of polyatomic gases. Other solvers are based on simpler BGK-like models: they allow for much more intensive simulations of 3D flows in realistic geometries, but treating complex gases requires extended BGK models that are still under development. In this paper, we discuss the main features of these existing solvers and focus on their strengths and inefficiencies. We also review some recent results that show how these solvers can be improved: higher accuracy (higher-order finite volume methods, discontinuous Galerkin approaches); lower memory and CPU costs with special velocity discretizations (adaptive grids, spectral methods); multi-scale simulations using hybrid and asymptotic-preserving schemes; and efficient implementation on high-performance computers (parallel computing, hybrid parallelization). Finally, we propose some perspectives to make these solvers more efficient and more popular.
A deterministic model of electron transport for electron probe microanalysis
Bünger, J.; Richter, S.; Torrilhon, M.
2018-01-01
Within the last decades significant improvements in the spatial resolution of electron probe microanalysis (EPMA) were obtained by instrumental enhancements. In contrast, the quantification procedures essentially remained unchanged. As the classical procedures assume either homogeneity or a multi-layered structure of the material, they limit the spatial resolution of EPMA. The possibilities of improving the spatial resolution through more sophisticated quantification procedures are therefore almost untouched. We investigate a new analytical model (M1-model) for the quantification procedure based on fast and accurate modelling of electron-X-ray-matter interactions in complex materials using a deterministic approach to solve the electron transport equations. We outline the derivation of the model from the Boltzmann equation for electron transport using the method of moments with a minimum entropy closure and present first numerical results for three different test cases (homogeneous, thin film and interface). Taking Monte Carlo as a reference, the results for the three test cases show that the M1-model is able to reproduce the electron dynamics in EPMA applications very well. Compared to classical analytical models like XPP and PAP, the M1-model is more accurate and far more flexible, which indicates the potential of deterministic models of electron transport to further increase the spatial resolution of EPMA.
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules of chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
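The two frameworks can be contrasted on the simplest possible pathway, an irreversible decay A → ∅ with rate constant c. This is a Python sketch of the general idea (deterministic ODE solution versus one exact Gillespie realisation of the CME), not the authors' MATLAB code:

```python
import math
import random

def deterministic(n0, c, t):
    """ODE view: dA/dt = -c*A, so A(t) = n0 * exp(-c*t)."""
    return n0 * math.exp(-c * t)

def gillespie(n0, c, t_end, rng):
    """Stochastic view: exact SSA realisation of the CME for A -> 0.
    With a single reaction channel, the total propensity is a = c * n."""
    t, n = 0.0, n0
    while n > 0:
        t += rng.expovariate(c * n)   # exponential waiting time to the next decay
        if t > t_end:
            break
        n -= 1
    return n

rng = random.Random(42)
mean = deterministic(1000, 1.0, 1.0)      # ~367.9 molecules on average at t = 1
sample = gillespie(1000, 1.0, 1.0, rng)   # one realisation, fluctuating around the mean
print(mean, sample)
```

For large copy numbers the SSA realisations cluster tightly around the ODE solution; the stochastic description matters precisely when molecule counts are small.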
Forecasting project schedule performance using probabilistic and deterministic models
Directory of Open Access Journals (Sweden)
S.A. Abdel Azeem
2014-04-01
Full Text Available Earned value management (EVM) was originally developed for cost management and has not been widely used for forecasting project duration. In addition, EVM-based formulas for cost or schedule forecasting are still deterministic and do not provide any information about the range of possible outcomes or the probability of meeting the project objectives. The objective of this paper is to develop three models to forecast the estimated duration at completion. Two of these models are deterministic: the earned value (EV) and earned schedule (ES) models. The third model is probabilistic and is developed based on the Kalman filter algorithm and earned schedule management. The accuracies of the EV, ES, and Kalman Filter Forecasting Model (KFFM) through the different project periods are assessed and compared with other forecasting methods such as the Critical Path Method (CPM), which makes the time forecast at activity level by revising the actual reporting data for each activity at a certain data date. A case study project is used to validate the results of the three models, and the best model is selected based on the lowest average percentage error. The results showed that the KFFM developed in this study provides probabilistic prediction bounds of project duration at completion and can be applied through the different project periods with smaller errors than those observed in the EV and ES forecasting models.
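The deterministic ES forecast used as a baseline in such studies can be sketched as follows (illustrative planned-value data, not from the paper): earned schedule ES is found by linear interpolation on the cumulative planned-value curve, SPI(t) = ES / AT, and the estimated duration at completion is EAC(t) = PD / SPI(t), assuming current schedule performance continues:

```python
def earned_schedule(pv, ev):
    """Earned schedule by linear interpolation: the time t at which the
    cumulative planned value PV(t) equals the earned value EV.
    pv[t] is the cumulative planned value at the end of period t."""
    for t in range(len(pv) - 1):
        if pv[t] <= ev < pv[t + 1]:
            return t + (ev - pv[t]) / (pv[t + 1] - pv[t])
    return float(len(pv) - 1)  # EV at or beyond the final planned value

def eac_time(pv, ev, actual_time):
    """Estimated duration at completion from the ES method."""
    pd = len(pv) - 1                 # planned duration, in periods
    es = earned_schedule(pv, ev)
    spi_t = es / actual_time         # schedule performance index (time)
    return pd / spi_t

# Cumulative planned value for periods 0..5 (planned duration = 5 periods).
pv = [0, 10, 25, 45, 70, 100]
# At period 3 only 30 units of value are earned -> ES = 2.25, SPI(t) = 0.75.
print(eac_time(pv, ev=30, actual_time=3))  # about 6.67 periods, i.e. ~1.7 periods late
```

The probabilistic KFFM discussed in the abstract would wrap such a point estimate in a state-space model, yielding prediction bounds rather than a single number.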
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-03
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of the molecular dynamics leading to the establishment of pluripotency with unprecedented flexibility and resolution.
Shock-induced explosive chemistry in a deterministic sample configuration.
Energy Technology Data Exchange (ETDEWEB)
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III (,; ); Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Bayesian analysis of deterministic and stochastic prisoner's dilemma games
Directory of Open Access Journals (Sweden)
Howard Kunreuther
2009-08-01
Full Text Available This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not learn about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.
Application of multidimensional IRT models to longitudinal data
te Marvelde, J.M.; Glas, Cornelis A.W.; Van Landeghem, Georges; Van Damme, Jan
2006-01-01
The application of multidimensional item response theory (IRT) models to longitudinal educational surveys where students are repeatedly measured is discussed and exemplified. A marginal maximum likelihood (MML) method to estimate the parameters of a multidimensional generalized partial credit model is presented.
Solvable nonlinear evolution PDEs in multidimensional space involving elliptic functions
Energy Technology Data Exchange (ETDEWEB)
Calogero, F [Dipartimento di Fisica, Universita di Roma 'La Sapienza', 00185 Roma (Italy)]; Francoise, J-P [Laboratoire J-L Lions, UMR CNRS, Universite P-M Curie, Paris 6 (France)]; Sommacal, M [Dipartimento di Matematica e Informatica, Universita di Perugia, Perugia (Italy)]
2007-07-27
A solvable nonlinear (system of) evolution PDEs in multidimensional space, involving elliptic functions, is identified, and certain of its solutions are exhibited. An isochronous version of this (system of) evolution PDEs in multidimensional space is also reported. (fast track communication)
Solvable nonlinear evolution PDEs in multidimensional space involving trigonometric functions
Energy Technology Data Exchange (ETDEWEB)
Calogero, F [Dipartimento di Fisica, Universita di Roma 'La Sapienza', 00185 Rome (Italy)]; Francoise, J-P [Laboratoire J.-L. Lions, UMR CNRS, Universite P.-M. Curie, Paris 6 (France)]; Sommacal, M [Dipartimento di Matematica e Informatica, Universita di Perugia (Italy)]
2007-05-04
A solvable nonlinear (system of) evolution PDEs in multidimensional space, involving trigonometric (or hyperbolic) functions, is identified. An isochronous version of this (system of) evolution PDEs in multidimensional space is also reported. (fast track communication)
Quantum and Multidimensional Explanations in a Neurobiological Context of Mind
Korf, Jakob
This article examines the possible relevance of physical-mathematical multidimensional or quantum concepts aiming at understanding the (human) mind in a neurobiological context. Some typical features of the quantum and multidimensional concepts are briefly introduced, including entanglement,
Multilevel Architecture for Multidimensional Data Base
Salahli, M.
2003-01-01
A Multidimensional Data Base (MDDB) is an essential element of decision support, allowing complex queries to be processed. In this paper, a multilevel distributed data model for MDDB is presented. Metadata for MDDB over relations is introduced. To improve the efficiency of query processing, a fuzzy cache fact table is proposed.
A Multidimensional Construct of Self-Esteem
Norem-Hebeisen, Ardyth A.
1976-01-01
Evidence for the construct validity of this multidimensional concept of self-esteem includes the relative congruence of the factor structure with the theoretical construct, the stability of the structure when subjected to a series of empirical tests, increasingly positive self-referent responses with increasing age, and willingness to become more…
Continued validation of the Multidimensional Perfectionism Scale.
Clavin, S L; Clavin, R H; Gayton, W F; Broida, J
1996-06-01
Scores on the Multidimensional Perfectionism Scale have been correlated with measures of obsessive-compulsive tendencies for women, so the validity of scores on this scale for 41 men was examined. Scores on the Perfectionism Scale were significantly correlated (.47-.03) with scores on the Maudsley Obsessive-Compulsive Inventory.
Multidimensional Perfectionism and Obsessive-Compulsive Behaviors
Ashby, Jeffrey S.; Bruner, Linda Pak
2005-01-01
This study examined the relationship between adaptive and maladaptive perfectionism and obsessive-compulsive behaviors. One hundred and forty-four undergraduate psychology students completed a measure of multidimensional perfectionism and two measures of obsessive-compulsive behaviors. The authors found that maladaptive perfectionists engaged in…
Independence of Dimensions in Multidimensional Scaling.
Wender, Karl
Models for multidimensional scaling use metric spaces with additive difference metrics. Two important properties of additive difference metrics are decomposability and intradimensional subtractivity. A prediction was derived from these properties and tested experimentally. Eleven non-psychology students were used as subjects. Rectangles varying in…
Multidimensional human dynamics in mobile phone communications.
Directory of Open Access Journals (Sweden)
Christian Quadri
Full Text Available In today's technology-assisted society, social interactions may be expressed through a variety of techno-communication channels, including online social networks, email and mobile phones (calls, text messages). Consequently, a clear grasp of human behavior through the diverse communication media is considered a key factor in understanding the formation of today's information society. So far, research on user communication behavior has focused on a single communication activity at a time. In this paper we take another step forward on this research path by performing a multidimensional study of human sociality as an expression of the use of mobile phones. The paper focuses on users' temporal communication behavior in the interplay between two complementary communication media, text messages and phone calls, which represent the bi-dimensional scenario of our analysis. Our study provides a theoretical framework for analyzing multidimensional bursts as the most general burst category, which includes one-dimensional bursts as the simplest case, and offers empirical evidence of their nature by following the combined phone call/text message communication patterns of approximately one million people over a three-month period. This quantitative approach enables the design of a generative model, rooted in the three most significant features of a multidimensional burst (the number of dimensions, prevalence and interleaving degree), that is able to reproduce the main media usage behavior. The other findings of the paper include a novel multidimensional burst detection algorithm and an in-depth analysis of the human media selection process.
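The notion of a one-dimensional burst, a run of closely spaced events, can be made concrete with a small sketch. The threshold-based grouping below is a generic illustration, not the paper's multidimensional detection algorithm; the function name `detect_bursts` and the gap threshold `delta` are chosen here for illustration.

```python
# A generic sketch of one-dimensional burst detection: events whose
# inter-event gap is below a threshold `delta` are grouped into one burst.
def detect_bursts(timestamps, delta):
    """Group sorted event times into bursts (gap < delta keeps a burst going)."""
    if not timestamps:
        return []
    bursts, current = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] < delta:
            current.append(t)
        else:
            bursts.append(current)
            current = [t]
    bursts.append(current)
    return bursts

events = [0.0, 0.5, 0.9, 10.0, 10.2, 25.0]
print(detect_bursts(events, delta=2.0))
# three bursts: [0.0, 0.5, 0.9], [10.0, 10.2], [25.0]
```

A multidimensional burst would additionally track which channel (call or text) each event belongs to and how the channels interleave.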
MCMC estimation of multidimensional IRT models
Beguin, Anton; Glas, Cornelis A.W.
1998-01-01
A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will
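For intuition, Bayesian ability estimation in a two-parameter normal ogive model, where P(correct) = Φ(aθ − b), can be sketched with a simple Metropolis sampler over the ability parameter θ. This is an illustrative caricature, not the Gibbs-style procedure of the paper; the item parameters and response pattern below are hypothetical.

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_post(theta, a, b, y):
    """Standard normal prior on theta plus the normal-ogive Bernoulli likelihood."""
    lp = -0.5 * theta * theta
    for aj, bj, yj in zip(a, b, y):
        p = min(max(phi(aj * theta - bj), 1e-12), 1.0 - 1e-12)
        lp += math.log(p) if yj else math.log(1.0 - p)
    return lp

def metropolis_theta(a, b, y, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for the ability parameter theta."""
    random.seed(seed)
    theta, samples = 0.0, []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)
        if math.log(random.random()) < log_post(prop, a, b, y) - log_post(theta, a, b, y):
            theta = prop
        samples.append(theta)
    return samples

# Hypothetical item parameters and one examinee's responses
a, b = [1.0, 1.2, 0.8], [0.0, -0.5, 0.5]
samples = metropolis_theta(a, b, y=[1, 1, 0])
print(sum(samples[1000:]) / len(samples[1000:]))  # posterior mean of ability
```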
Multidimensional Perceptions of the 1972 Presidential Election
Shikiar, Richard
1976-01-01
Five separate multidimensional scaling analyses, with measurement occasions varying from election day to about 14 months after, resulted in two stable dimensions of political perception. These dimensions were identified as Republican and Democratic evaluative dimensions. The publicity surrounding Watergate apparently did not affect the stability of…
White Dialectics as Multidimensional, Contextual, and Transformational
Abrams, Elizabeth M.; Todd, Nathan R.
2011-01-01
This rejoinder provides a response to reactions by Ponterotto, Sue, and Toporek to the White dialectics framework presented in Todd and Abrams's article. The present response focuses on incorporating multidimensionality, the multilevel nature of context, and the potential for transformation in the White dialectics framework. The authors expand…
Multidimensional human dynamics in mobile phone communications.
Quadri, Christian; Zignani, Matteo; Capra, Lorenzo; Gaito, Sabrina; Rossi, Gian Paolo
2014-01-01
The Multidimensional Curriculum Model (MdCM)
Vidergor, Hava E.
2010-01-01
The Multidimensional Curriculum Model (MdCM) helps teachers better prepare gifted and able students for our changing world and acquire much-needed skills. It is influenced by the general learning theory of constructivism, notions of preparing students for the 21st century, the Teaching the Future Model, and current comprehensive curriculum models for…
New bounds for multi-dimensional packing
S. Seiden; R. van Stee (Rob)
2001-01-01
New upper and lower bounds are presented for a multi-dimensional generalization of bin packing called box packing. Several variants of this problem, including bounded-space box packing, square packing, variable-sized box packing and resource-augmented box packing, are also studied. The
Multidimensional Data Model and Query Language for Informetrics.
Niemi, Timo; Hirvonen, Lasse; Jarvelin, Kalervo
2003-01-01
Discusses multidimensional data analysis, or online analytical processing (OLAP), which offers a single subject-oriented source for analyzing summary data based on various dimensions. Develops a conceptual/logical multidimensional model to support the needs of informetrics, including a multidimensional query language whose basic idea is to…
Deterministic learning enhanced neural network control of unmanned helicopter
Directory of Open Access Journals (Sweden)
Yiming Jiang
2016-11-01
Full Text Available In this article, a neural network-based tracking controller is developed for an unmanned helicopter system with guaranteed global stability in the presence of uncertain system dynamics. Owing to the coupling and modeling uncertainties of helicopter systems, neural network approximation techniques are employed to compensate for the unknown dynamics of each subsystem. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is also integrated into the control design, such that the resulting neural controller is always valid without any concern about initial conditions or the range of state variables. In addition, deterministic learning is applied to the neural network learning control, such that the adaptive neural networks are able to store the learned knowledge, which can be reused to construct a neural network controller with improved control performance. Simulation studies are carried out on a helicopter model to illustrate the effectiveness of the proposed control design.
HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks
Directory of Open Access Journals (Sweden)
Luca Marchetti
2017-01-01
Full Text Available HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
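The stochastic side of such hybrid strategies is typically the Gillespie direct method. A minimal sketch of plain SSA (not HSimulator's HRSSA) on a hypothetical toy system A + B → C:

```python
import math
import random

def ssa(state, propensities, changes, t_end, seed=42):
    """Gillespie direct-method SSA for a well-mixed reaction system.

    propensities: per-reaction functions state -> rate
    changes: per-reaction state-change vectors
    """
    random.seed(seed)
    t, state = 0.0, list(state)
    while t < t_end:
        a = [p(state) for p in propensities]
        a0 = sum(a)
        if a0 == 0.0:          # no reaction can fire any more
            break
        t += -math.log(random.random()) / a0   # exponential waiting time
        r, acc, mu = random.random() * a0, 0.0, 0
        for mu, ai in enumerate(a):            # pick reaction mu with prob a[mu]/a0
            acc += ai
            if acc >= r:
                break
        state = [s + d for s, d in zip(state, changes[mu])]
    return state

# Toy irreversible reaction A + B -> C with rate constant k (hypothetical)
k = 0.01
final = ssa([100, 100, 0],
            [lambda s: k * s[0] * s[1]],
            [(-1, -1, +1)],
            t_end=1e9)
print(final)  # all A, B converted: [0, 0, 100]
```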
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko
2011-03-17
We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production, and a large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective on the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem, as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
Deterministically entangling multiple remote quantum memories inside an optical cavity
Yan, Zhihui; Liu, Yanhong; Yan, Jieli; Jia, Xiaojun
2018-01-01
Quantum memory for the nonclassical state of light and entanglement among multiple remote quantum nodes hold promise for a large-scale quantum network; however, continuous-variable (CV) memory efficiency and the achievable degree of entanglement are limited by imperfect implementations. Here we propose a scheme to deterministically entangle multiple distant atomic ensembles based on CV cavity-enhanced quantum memory. The memory efficiency can be improved with the help of cavity-enhanced electromagnetically induced transparency dynamics. A high degree of entanglement among multiple atomic ensembles can be obtained by mapping the quantum state from multiple entangled optical modes into a collection of atomic spin waves inside optical cavities. Besides being of interest in terms of unconditional entanglement among multiple macroscopic objects, our scheme paves the way towards the practical application of quantum networks.
Molecular dynamics with deterministic and stochastic numerical methods
Leimkuhler, Ben
2015-01-01
This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
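A representative deterministic building block from this toolbox is the symplectic velocity Verlet integrator. The sketch below is the standard textbook scheme (not code from the book), demonstrated on a harmonic oscillator where its near-conservation of energy is easy to check.

```python
def velocity_verlet(q, p, force, dt, n_steps, mass=1.0):
    """Symplectic velocity Verlet integration of one degree of freedom."""
    f = force(q)
    traj = [(q, p)]
    for _ in range(n_steps):
        p += 0.5 * dt * f          # half kick
        q += dt * p / mass         # drift
        f = force(q)
        p += 0.5 * dt * f          # half kick
        traj.append((q, p))
    return traj

# Harmonic oscillator F = -q: total energy should be conserved to O(dt^2)
traj = velocity_verlet(1.0, 0.0, lambda q: -q, dt=0.01, n_steps=1000)
energy = lambda q, p: 0.5 * p * p + 0.5 * q * q
drift = abs(energy(*traj[-1]) - energy(*traj[0]))
print(drift)  # remains small: bounded, not secularly growing
```

The bounded energy error, rather than a steady drift, is the hallmark of symplectic methods discussed in the book.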
Properties of the Statistical Complexity Functional and Partially Deterministic HMMs
Directory of Open Access Journals (Sweden)
Wolfgang Löhr
2009-08-01
Full Text Available Statistical complexity is a measure of the complexity of discrete-time stationary stochastic processes, which has many applications. We investigate its more abstract properties as a non-linear function on the space of processes and show its close relation to Knight's prediction process. We prove lower semi-continuity, concavity, and a formula for the ergodic decomposition of statistical complexity. Along the way, we show that the discrete version of the prediction process has a continuous Markov transition. We also prove that, given the past output of a partially deterministic hidden Markov model (HMM), the uncertainty of the internal state is constant over time and knowledge of the internal state gives no additional information on the future output. Using this fact, we show that the causal state distribution is the unique stationary representation on prediction space that may have finite entropy.
International Nuclear Information System (INIS)
Zio, Enrico
2014-01-01
Highlights: • IDPSA contributes to robust risk-informed decision making in nuclear safety. • IDPSA considers time-dependent interactions among component failures and system processes. • IDPSA also considers time-dependent interactions among control and operator actions. • Computational efficiency is achieved by advanced Monte Carlo and meta-modelling simulations. • Efficient post-processing of IDPSA output is performed by clustering and data mining. - Abstract: Integrated deterministic and probabilistic safety assessment (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, such as nuclear, aerospace and process systems, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, and the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these challenges and discuss the related implications in terms of research perspectives.
Energy Technology Data Exchange (ETDEWEB)
Zio, Enrico, E-mail: enrico.zio@ecp.fr [Ecole Centrale Paris and Supelec, Chair on System Science and the Energetic Challenge, European Foundation for New Energy – Electricite de France (EDF), Grande Voie des Vignes, 92295 Chatenay-Malabry Cedex (France); Dipartimento di Energia, Politecnico di Milano, Via Ponzio 34/3, 20133 Milano (Italy)
2014-12-15
Sensitivity analysis in a Lassa fever deterministic mathematical model
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
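The normalized forward sensitivity index commonly used in such analyses, Υ_p = (∂R₀/∂p)·(p/R₀), can be approximated by finite differences. The sketch below uses a hypothetical toy expression R₀ = β/γ, not the Lassa model's actual reproduction number:

```python
def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index (df/dp)*(p/f) via central difference.

    Illustrative only; the Lassa model's actual R0 expression is in the paper.
    """
    p = dict(params)
    base = f(p)
    p[name] = params[name] * (1 + h)
    up = f(p)
    p[name] = params[name] * (1 - h)
    down = f(p)
    dfdp = (up - down) / (2 * h * params[name])
    return dfdp * params[name] / base

# Hypothetical toy reproduction number R0 = beta / gamma
R0 = lambda p: p["beta"] / p["gamma"]
pars = {"beta": 0.4, "gamma": 0.1}
print(sensitivity_index(R0, pars, "beta"))   # ≈ +1.0 (R0 rises with beta)
print(sensitivity_index(R0, pars, "gamma"))  # ≈ -1.0 (R0 falls with gamma)
```

An index of +1 means a 10% increase in the parameter produces roughly a 10% increase in R₀; the sign gives the direction of influence.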
A Deterministic Entropy Based on the Instantaneous Phase Space Volume
Diebner, Hans H.; Rössler, Otto E.
1998-02-01
A deterministic entropic measure is derived for the time evolution of Newtonian N-particle systems based on the volume of the instantaneously occupied phase space (IOPS). This measure is found as a natural extension of Boltzmann's entropy. The instantaneous arrangement of the particles is exploited in the form of spatial correlations. The new entropy is a bridge between the time-dependent Boltzmann entropy, formulated on the basis of densities in the one-particle phase space, and the static Gibbs entropy which uses densities in the full phase space. We apply the new concept in a molecular dynamics simulation (MDS) using an exactly time reversible "discrete Newtonian equation of motion" recently derived from the fundamental principle of least action in discretized space-time. The simulation therefore is consistent with micro-time-reversibility. Entropy becomes an exact momentary observable in both time directions in fulfillment of a dream of Boltzmann.
Distributed Design of a Central Service to Ensure Deterministic Behavior
Directory of Open Access Journals (Sweden)
Imran Ali Jokhio
2012-10-01
Full Text Available A central authentication service for the EPC (Electronic Product Code) system architecture was proposed in our previous work. A challenge for any central service is how to ensure a bounded delay while processing emergent data. The growing data in the EPC system architecture are tag data. Therefore, authenticating an increasing number of tags in the central authentication service with a deterministic response time is investigated, and a distributed authentication service is designed in a layered approach. A distributed design of tag-searching services in the SOA (Service-Oriented Architecture) style is also presented. Using the SOA architectural style, a self-adaptive authentication service over the Cloud is also proposed for the central authentication service, which may also be extended to other applications.
A deterministic model of nettle caterpillar life cycle
Syukriyah, Y.; Nuraini, N.; Handayani, D.
2018-03-01
Palm oil is a flagship product of the plantation sector in Indonesia. Palm oil productivity has the potential to increase every year; however, actual productivity remains below this potential. Pests and diseases are the main factors that can reduce production levels by up to 40%. Since the presence of pests in plants can be caused by various factors, measures to control pest attacks should be prepared as early as possible. Caterpillars are the main pests of oil palm. Nettle caterpillars are leaf eaters that can significantly decrease palm productivity. We construct a deterministic model that describes the life cycle of the caterpillar and its mitigation by a caterpillar predator. The equilibrium points of the model are analyzed, and numerical simulations are constructed to show how the predator, as a natural enemy, affects the nettle caterpillar life cycle.
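A generic pest-predator sketch of this kind of deterministic model (Lotka-Volterra with logistic prey; the equations and all parameter values below are illustrative, not the paper's) can be integrated with a simple forward-Euler scheme:

```python
def simulate(pest0, pred0, r, K, a, c, m, dt=0.01, t_end=50.0):
    """Forward-Euler integration of a hypothetical pest-predator model:
       pest' = r*pest*(1 - pest/K) - a*pest*pred
       pred' = c*a*pest*pred - m*pred
    (a generic Lotka-Volterra sketch with logistic prey growth)."""
    pest, pred = pest0, pred0
    for _ in range(int(t_end / dt)):
        dpest = r * pest * (1 - pest / K) - a * pest * pred
        dpred = c * a * pest * pred - m * pred
        pest += dt * dpest
        pred += dt * dpred
    return pest, pred

# Hypothetical parameter values chosen only to illustrate coexistence
pest, pred = simulate(pest0=50.0, pred0=5.0, r=0.8, K=100.0,
                      a=0.02, c=0.5, m=0.3)
print(pest, pred)  # both populations stay positive and bounded
```

Equilibria of such a system (here pest* = m/(c·a)) are what the stability analysis in the paper characterizes.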
Analysis of deterministic cyclic gene regulatory network models with delays
Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian
2015-01-01
This brief examines a deterministic, ODE-based model for gene regulatory networks (GRNs) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays, and a special case of a homogeneous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.
On integration of probabilistic and deterministic safety analysis
International Nuclear Information System (INIS)
Cepin, M.; Wardzinski, A.
1996-01-01
The paper presents a case study on the probabilistic and deterministic safety analysis of an Engineered Safety Features Actuation System. The Fault Tree, as a Probabilistic Safety Assessment tool, is developed and analysed. The same Fault Tree is then specified in a formal way. When formalized, it can include the time requirements of the analysed system, which cannot be included in the probabilistic approach to Fault Tree Analysis. This inclusion of time is the main advantage of the formalized Fault Tree, which extends it into a dynamic tool. Its results are Minimal Cut Sets with time relations, which are the basis for the definition of safety requirements. The definition of safety requirements is one of the early phases of the software lifecycle and is of special importance in designing safety-related computer systems. (author)
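The Minimal Cut Set computation at the heart of such Fault Tree analysis can be sketched by Boolean expansion with absorption. The tree encoding (nested AND/OR gates over basic-event names) and the example tree below are illustrative, not the paper's actuation-system model:

```python
def cut_sets(node):
    """Expand a fault tree (nested AND/OR gates over basic events) into cut sets."""
    if isinstance(node, str):              # leaf: a basic event
        return [frozenset([node])]
    gate, *children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                       # OR: union of children's cut sets
        return [cs for sets in child_sets for cs in sets]
    result = [frozenset()]                 # AND: cross-product union
    for sets in child_sets:
        result = [r | cs for r in result for cs in sets]
    return result

def minimal_cut_sets(node):
    """Keep only cut sets not absorbed by a strictly smaller cut set."""
    sets = set(cut_sets(node))
    return {s for s in sets if not any(t < s for t in sets)}

# Hypothetical example: TOP = (A OR B) AND (A OR C)
tree = ("AND", ("OR", "A", "B"), ("OR", "A", "C"))
print(minimal_cut_sets(tree))  # minimal cut sets: {A} and {B, C}
```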
Design Optimization of a Speed Reducer Using Deterministic Techniques
Directory of Open Access Journals (Sweden)
Ming-Hua Lin
2013-01-01
Full Text Available The optimal design problem of minimizing the total weight of a speed reducer under constraints is a generalized geometric programming problem. Since metaheuristic approaches cannot guarantee finding the global optimum of a generalized geometric programming problem, this paper applies an efficient deterministic approach to globally solve speed reducer design problems. The original problem is converted by variable transformations and piecewise linearization techniques. The reformulated problem is a convex mixed-integer nonlinear programming problem solvable to an approximate global solution within an acceptable error. Experimental results from solving a practical speed reducer design problem indicate that this study obtains a better solution compared with other existing methods.
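The piecewise linearization step mentioned above can be illustrated on a single nonlinear term; here x³ on an interval, with uniform breakpoints. This is a generic sketch of the idea, not the paper's full convexification of the speed reducer model:

```python
def piecewise_linearize(f, lo, hi, n_segments):
    """Breakpoints and a piecewise-linear evaluator for f on [lo, hi],
    the kind of reformulation step used to linearize nonlinear terms."""
    xs = [lo + (hi - lo) * i / n_segments for i in range(n_segments + 1)]
    ys = [f(x) for x in xs]

    def approx(x):
        for i in range(n_segments):
            if xs[i] <= x <= xs[i + 1]:
                w = (x - xs[i]) / (xs[i + 1] - xs[i])
                return (1 - w) * ys[i] + w * ys[i + 1]  # linear interpolation
        raise ValueError("x outside [lo, hi]")

    return xs, approx

# Approximating x^3 on [1, 3] with 8 segments (illustrative term, not the
# full speed-reducer weight function)
xs, approx = piecewise_linearize(lambda x: x ** 3, 1.0, 3.0, 8)
print(abs(approx(2.1) - 2.1 ** 3))  # small approximation error
```

In an actual MINLP reformulation each segment becomes a binary choice, and the error shrinks as the number of segments grows.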
Deterministic calculations of radiation doses from brachytherapy seeds
International Nuclear Information System (INIS)
Reis, Sergio Carneiro dos; Vasconcelos, Vanderley de; Santos, Ana Maria Matildes dos
2009-01-01
Brachytherapy is used for treating certain types of cancer by inserting radioactive sources into tumours. CDTN/CNEN is developing brachytherapy seeds to be used mainly in prostate cancer treatment. Dose calculations play a very significant role in the characterization of the developed seeds. The current state of the art in computational dosimetry relies on Monte Carlo methods using, for instance, MCNP codes. However, deterministic calculations have some advantages, such as short computation times. This paper presents software developed to calculate doses in a two-dimensional space surrounding the seed, using a deterministic algorithm. The analysed seeds consist of capsules similar to IMC6711 (OncoSeed), which are commercially available. The exposure rates and absorbed doses are computed using the Sievert integral and the Meisberger third-order polynomial, respectively. The software also allows isodose visualization at the surface plane. The user can choose between four different radionuclides (¹⁹²Ir, ¹⁹⁸Au, ¹³⁷Cs and ⁶⁰Co) and must also enter as input data: the exposure rate constant; the source activity; the active length of the source; the number of segments into which the source will be divided; the total source length; the source diameter; and the actual and effective source thickness. The computed results were benchmarked against results from the literature, and the developed software will be used to support the characterization process of the source being developed at CDTN. The software was implemented using Borland Delphi in a Windows environment and is an alternative to Monte Carlo based codes. (author)
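The Sievert integral mentioned above, ∫ exp(−μd / cos θ) dθ for a line source behind a filter of thickness d and attenuation coefficient μ, is straightforward to evaluate numerically. A midpoint-rule sketch (illustrative, not the CDTN software):

```python
import math

def sievert_integral(theta1, theta2, mu_d, n=1000):
    """Evaluate the Sievert integral ∫ exp(-mu_d / cos(theta)) dtheta over
    [theta1, theta2] by the midpoint rule; mu_d is the product of the
    filter attenuation coefficient and its thickness."""
    h = (theta2 - theta1) / n
    total = 0.0
    for i in range(n):
        theta = theta1 + (i + 0.5) * h
        total += math.exp(-mu_d / math.cos(theta)) * h
    return total

# With zero filtration the integrand is 1, so the integral reduces to the
# angular range itself:
print(sievert_integral(0.0, math.pi / 4, mu_d=0.0))  # ≈ pi/4 ≈ 0.7854
print(sievert_integral(0.0, math.pi / 4, mu_d=0.5))  # attenuated, smaller
```

The deterministic dose calculation then scales this angular integral by the source strength and geometry factors.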
Absorbing phase transitions in deterministic fixed-energy sandpile models
Park, Su-Chan
2018-03-01
We investigate the origin of the difference, noticed by Fey et al. [Phys. Rev. Lett. 104, 145703 (2010), 10.1103/PhysRevLett.104.145703], between the steady state density of an Abelian sandpile model (ASM) and the transition point of its corresponding deterministic fixed-energy sandpile model (DFES). Because the dynamics are deterministic, the configuration space of a DFES can be divided into two disjoint classes such that every configuration in one class evolves into an absorbing state, whereas no configuration in the other class can reach one. Since the two classes are separated in terms of toppling dynamics, the system can be made to exhibit an absorbing phase transition (APT) at various points that depend on the initial probability distribution of the configurations. Furthermore, we show that in general the transition point also depends on whether the infinite-size limit is taken before or after the infinite-time limit. To demonstrate, we numerically study the two-dimensional DFES with the Bak-Tang-Wiesenfeld toppling rule (BTW-FES). We confirm that there are indeed many thresholds. Nonetheless, the critical phenomena at the various transition points are found to be universal. We furthermore discuss a microscopic absorbing phase transition, or so-called spreading dynamics, of the BTW-FES, and find that the phase transition in this setting is related to the dynamical isotropic percolation process rather than to self-organized criticality. In particular, we argue that choosing recurrent configurations of the corresponding ASM as initial configurations does not allow for a nontrivial APT in the DFES.
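The BTW toppling rule referenced above is easy to state in code: any site holding at least four grains gives one grain to each neighbour, and grains falling off an open boundary leave the system. A minimal relaxation sketch (an open-boundary ASM relaxation; the fixed-energy variant would instead use closed boundaries):

```python
def relax(grid):
    """Topple a square BTW sandpile until every site holds fewer than 4 grains.
    Grains pushed past the boundary are lost (open boundary conditions)."""
    n = len(grid)
    unstable = True
    while unstable:
        unstable = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= 4:
                    unstable = True
                    grid[i][j] -= 4
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < n and 0 <= j + dj < n:
                            grid[i + di][j + dj] += 1
    return grid

g = [[0, 0, 0],
     [0, 5, 0],
     [0, 0, 0]]
relax(g)
print(g)  # centre topples once: [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```

Because the model is Abelian, the final stable configuration does not depend on the order in which unstable sites are toppled.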
Deterministic Safety Analysis of Kozloduy Units 3 and 4
International Nuclear Information System (INIS)
Ivanova, A.
2002-01-01
During the development of the SAR for Kozloduy NPP, the regulatory basis, guides and recommendations are used, such as Regulation Order No. 3 of CUAEPP, Regulation Order No. 5 of CUAEPP, the Guidelines for Accident Analyses of WWER NPPs, the Guidance for Accident Analyses of Commercial Nuclear Power Plants, and many others. The list of initiating events (IEs) is evaluated on the basis of IAEA requirements, generic WWER data and statistical data from the NPP. The final categorisation is carried out according to the highest frequency in the above sources. Within DBA, anticipated operational occurrences (AOOs) and postulated accidents are defined. The list of IEs considered in the SAR of KNPP 3 and 4 is presented. In the process of developing the SAR, 11 DBA cases and 9 BDBA cases are investigated. The acceptance criteria are chosen from the above-mentioned references and depend on the categorisation of the event. The main approaches to the deterministic safety analysis are the use of best-estimate codes with conservatively selected initial and boundary conditions for DBA, and best-estimate codes with relaxed conservatism in the selection of initial and boundary conditions for BDBA. The computer codes RELAP5/Mod 3.2, MELCOR 1.8.3, DYN3D, SPPS and SMART are used for the SAR KNPP evaluation. The results show that the new SARs of KNPP 3 and 4 cover the whole spectrum of IEs defined in the regulatory documents and IAEA guidelines. The deterministic analyses of the IEs are performed using best-estimate codes with conservative sets of initial and boundary conditions. The worst single failure is selected for each individual IE, and different scenarios are specified for the different aspects of the analysis. The analyses show a sufficient margin to the fulfilment of the applicable acceptance criteria and reflect all major plant upgrades except the modification of the SG collectors
Deterministic Earthquake Hazard Assessment by Public Agencies in California
Mualchin, L.
2005-12-01
Even in its short recorded history, California has experienced a number of damaging earthquakes that have resulted in new codes and other legislation for public safety. In particular, the 1971 San Fernando earthquake produced some of the most lasting results such as the Hospital Safety Act, the Strong Motion Instrumentation Program, the Alquist-Priolo Special Studies Zone Act, and the California Department of Transportation (Caltrans') fault-based deterministic seismic hazard (DSH) map. The latter product provides values for earthquake ground motions based on Maximum Credible Earthquakes (MCEs), defined as the largest earthquakes that can reasonably be expected on faults in the current tectonic regime. For surface fault rupture displacement hazards, detailed study of the same faults apply. Originally, hospital, dam, and other critical facilities used seismic design criteria based on deterministic seismic hazard analyses (DSHA). However, probabilistic methods grew and took hold by introducing earthquake design criteria based on time factors and quantifying "uncertainties", by procedures such as logic trees. These probabilistic seismic hazard analyses (PSHA) ignored the DSH approach. Some agencies were influenced to adopt only the PSHA method. However, deficiencies in the PSHA method are becoming recognized, and the use of the method is now becoming a focus of strong debate. Caltrans is in the process of producing the fourth edition of its DSH map. The reason for preferring the DSH method is that Caltrans believes it is more realistic than the probabilistic method for assessing earthquake hazards that may affect critical facilities, and is the best available method for insuring public safety. Its time-invariant values help to produce robust design criteria that are soundly based on physical evidence. And it is the method for which there is the least opportunity for unwelcome surprises.
Lera, Daniela; Sergeyev, Yaroslav D.
2015-06-01
In this paper, the global optimization problem min_{y∈S} F(y), with S being a hyperinterval in R^N and F(y) satisfying the Lipschitz condition with an unknown Lipschitz constant, is considered. It is supposed that the function F(y) can be multiextremal, non-differentiable, and given as a 'black box'. To attack the problem, a new global optimization algorithm based on the following two ideas is proposed and studied both theoretically and numerically. First, the new algorithm uses numerical approximations to space-filling curves to reduce the original Lipschitz multi-dimensional problem to a univariate one satisfying the Hölder condition. Second, at each iteration the algorithm applies a new geometric technique working with a number of possible Hölder constants chosen from a set of values varying from zero to infinity, so that ideas introduced in the popular DIRECT method can be used in Hölder global optimization. Convergence conditions of the resulting deterministic global optimization method are established. Numerical experiments carried out on several hundred test functions show quite promising performance of the new algorithm in comparison with its direct competitors.
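The curve-reduction idea can be caricatured with the standard Hilbert-curve index-to-coordinate map and a brute-force scan along the curve. This uniform scan is only an illustration of reducing a 2-D search to a 1-D one; the paper's method is an adaptive Hölder scheme, and the function names here are chosen for the sketch:

```python
def hilbert_point(order, d):
    """Map index d along a Hilbert curve of the given order to (x, y) on a
    2**order x 2**order grid (standard index-to-coordinate conversion)."""
    n = 1 << order
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def minimize_on_curve(f, order=7):
    """Reduce 2-D minimization over [0,1]^2 to a 1-D scan along the curve
    (a brute-force caricature of the space-filling-curve reduction)."""
    n = 1 << order
    best = min(range(n * n),
               key=lambda d: f(*(c / (n - 1) for c in hilbert_point(order, d))))
    bx, by = hilbert_point(order, best)
    return bx / (n - 1), by / (n - 1)

# Minimum of (x-0.5)^2 + (y-0.25)^2 is at (0.5, 0.25)
x, y = minimize_on_curve(lambda x, y: (x - 0.5) ** 2 + (y - 0.25) ** 2)
print(round(x, 2), round(y, 2))  # 0.5 0.25
```

Because the curve preserves locality, nearby curve indices map to nearby grid points, which is what lets Lipschitz information survive the dimension reduction.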
Asinari, Pietro
2010-10-01
Programming language: Tested with Matlab version ⩽6.5; however, in principle, any recent version of Matlab or Octave should work. Computer: All supporting Matlab or Octave. Operating system: All supporting Matlab or Octave. RAM: 300 MBytes. Classification: 23. Nature of problem: The problem consists in integrating the homogeneous Boltzmann equation for a generic collisional kernel in the case of isotropic symmetry by a deterministic direct method. Difficulties arise from the multi-dimensionality of the collisional operator and from satisfying the conservation of particle number and energy (momentum is trivial for this test case) as accurately as possible, in order to preserve the late dynamics. Solution method: The solution is based on the method proposed by Aristov (2001) [1], but with two substantial improvements: (a) the original problem is reformulated in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy the conservation laws exactly at the macroscopic level, which is particularly important for describing the late dynamics of the relaxation towards equilibrium). Both corrections make it possible to derive very accurate reference solutions for this test case. Restrictions: The nonlinear Boltzmann equation is extremely challenging from the computational point of view, in particular for deterministic methods, despite the increased computational power of recent hardware. In this work, only the homogeneous isotropic case is considered, to make possible the development of a minimal program (in a simple scripting language) and to allow the user to check the advantages of the proposed improvements over Aristov's (2001) method [1].
The initial conditions are assumed to be parameterized according to a fixed analytical expression, but this can be
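The idea of enforcing exact particle-number and energy conservation on a discrete grid can be illustrated with a generic two-moment correction on an energy grid. This sketch is illustrative only; the function name and the linear-rescaling form are assumptions, not the paper's energy reformulation or its DVM-like correction:

```python
import numpy as np

def conserve_moments(f, E, n_target, e_target):
    """Rescale f -> f * (a + b*E) so that the discrete particle number
    (sum f) and energy (sum E*f) match the prescribed targets exactly."""
    m0, m1, m2 = f.sum(), (f * E).sum(), (f * E**2).sum()
    # Two linear conditions on (a, b): a*m0 + b*m1 = N, a*m1 + b*m2 = Etot.
    a, b = np.linalg.solve(np.array([[m0, m1], [m1, m2]]),
                           np.array([n_target, e_target]))
    return f * (a + b * E)
```

After each discrete collision step, applying such a correction removes the slow drift of the conserved moments that would otherwise contaminate the late-time relaxation.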
International Nuclear Information System (INIS)
Bor-Jing Chang; Yen-Wan H. Liu
1992-01-01
The HYBRID, or mixed group and point, method was developed to solve the neutron transport equation deterministically using detailed treatment at cross section minima for deep penetration calculations. Its application so far is limited to one-dimensional calculations due to the enormous computing time involved in multi-dimensional calculations. In this article, a collapsing method is developed for the mixed group and point cross section sets to provide a more direct and practical way of using the HYBRID method in multi-dimensional calculations. A test problem is run. The method is then applied to the calculation of a deep penetration benchmark experiment. It is observed that half of the window effect is smeared in the collapsing treatment, but the result still provides a better cross section set than the VITAMIN-C cross sections for deep penetration calculations.
Multi-Dimensional Aggregation for Temporal Data
DEFF Research Database (Denmark)
Böhlen, M. H.; Gamper, J.; Jensen, Christian Søndergaard
2006-01-01
Business Intelligence solutions, encompassing technologies such as multi-dimensional data modeling and aggregate query processing, are being applied increasingly to non-traditional data. This paper extends multi-dimensional aggregation to apply to data with associated interval values that capture...... when the data hold. In temporal databases, intervals typically capture the states of reality that the data apply to, or capture when the data are, or were, part of the current database state. This paper proposes a new aggregation operator that addresses several challenges posed by interval data. First......, the intervals to be associated with the result tuples may not be known in advance, but depend on the actual data. Such unknown intervals are accommodated by allowing result groups that are specified only partially. Second, the operator contends with the case where an interval associated with data expresses...
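The core difficulty the operator addresses, result intervals that depend on the data rather than being fixed in advance, already shows up in the simplest temporal aggregate. A minimal constant-interval sum (a common textbook formulation, not the paper's partial-group operator) splits time at tuple endpoints and aggregates over each maximal interval where the set of overlapping tuples is constant:

```python
def temporal_sum(tuples):
    """tuples: iterable of (start, end, value) with half-open [start, end).
    Returns (start, end, sum) over the maximal intervals on which the set
    of overlapping tuples stays constant."""
    instants = sorted({t for s, e, _ in tuples for t in (s, e)})
    result = []
    for s, e in zip(instants, instants[1:]):
        active = [v for ts, te, v in tuples if ts < e and te > s]
        if active:  # skip gaps covered by no tuple
            result.append((s, e, sum(active)))
    return result
```

For the tuples (1, 5, 10) and (3, 8, 5), the result intervals (1, 3), (3, 5), and (5, 8) are derived from the data, which is exactly the "unknown result intervals" phenomenon the abstract describes.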
Multidimensional integral representations: problems of analytic continuation
Kytmanov, Alexander M
2015-01-01
The monograph is devoted to integral representations for holomorphic functions in several complex variables, such as the Bochner-Martinelli, Cauchy-Fantappiè, and Koppelman representations and the multidimensional logarithmic residue, and to their boundary properties. The applications considered are problems of analytic continuation of functions from the boundary of a bounded domain in C^n. In contrast to the well-known Hartogs-Bochner theorem, this book investigates functions with the one-dimensional property of holomorphic extension along complex lines, and includes the problems of obtaining multidimensional boundary analogues of the Morera theorem. This book is a valuable resource for specialists in complex analysis and theoretical physics, as well as graduate and postgraduate students with an understanding of standard university courses in complex, real and functional analysis, as well as algebra and geometry.
Multidimensional entropy landscape of quantum criticality
Grube, K.; Zaum, S.; Stockert, O.; Si, Q.; Löhneysen, H. V.
2017-08-01
The third law of thermodynamics states that the entropy of any system in equilibrium has to vanish at absolute zero temperature. At nonzero temperatures, on the other hand, matter is expected to accumulate entropy near a quantum critical point, where it undergoes a continuous transition from one ground state to another. Here, we determine, based on general thermodynamic principles, the spatial-dimensional profile of the entropy S near a quantum critical point and its steepest descent in the corresponding multidimensional stress space. We demonstrate this approach for the canonical quantum critical compound CeCu6-xAux near its onset of antiferromagnetic order. We are able to link the directional stress dependence of S to the previously determined geometry of quantum critical fluctuations. Our demonstration of the multidimensional entropy landscape provides the foundation to understand how quantum criticality nucleates novel phases such as high-temperature superconductivity.
Multidimensional incremental parsing for universal source coding.
Bae, Soo Hyun; Juang, Biing-Hwang
2008-10-01
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, the underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions: the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, while the images produced with the local minimax distortion have good perceptual fidelity compared with other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
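For orientation, the one-dimensional ancestor that MDIP generalizes can be sketched in a few lines. The following LZ78-style incremental parse (illustrative, not the authors' multidimensional implementation) emits (dictionary index, symbol) pairs while growing the dictionary by one phrase per coding step:

```python
def lz78_parse(seq):
    """Parse seq into (phrase_index, symbol) pairs, LZ78 style.
    Index 0 denotes the empty phrase; the dictionary grows by one
    phrase per emitted pair, mirroring incremental parsing."""
    dictionary = {(): 0}
    out, phrase = [], ()
    for s in seq:
        if phrase + (s,) in dictionary:
            phrase = phrase + (s,)      # extend the current match
        else:
            out.append((dictionary[phrase], s))
            dictionary[phrase + (s,)] = len(dictionary)
            phrase = ()
    if phrase:                          # flush a trailing partial match
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out
```

MDIP replaces the one-dimensional "longest match" here with maximum decimation matching over m-dimensional patches.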
Multidimensional Scaling Visualization Using Parametric Similarity Indices
Machado, J. Tenreiro; Lopes, António; Galhano, Alexandra
2015-01-01
In this paper, we apply multidimensional scaling (MDS) and parametric similarity indices (PSI) in the analysis of complex systems (CS). Each CS is viewed as a dynamical system, exhibiting an output time-series to be interpreted as a manifestation of its behavior. We start by adopting a sliding window to sample the original data into several consecutive time periods. Second, we define a given PSI for tracking pieces of data. We then compare the windows for different values of the parameter, an...
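The MDS step underlying the analysis can be sketched with classical (Torgerson) scaling: double-center the squared distance matrix and embed using the top eigenpairs. This is a generic sketch, not the paper's PSI-based pipeline:

```python
import numpy as np

def classical_mds(D, k=2):
    """D: (n, n) symmetric matrix of pairwise distances.
    Returns an (n, k) embedding whose distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D**2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # ascending eigenvalues
    idx = np.argsort(w)[::-1][:k]              # keep the top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

For distances that are exactly Euclidean, the embedding reproduces them; for the similarity-derived dissimilarities used in the paper, it yields the low-dimensional maps that are then inspected visually.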
Multidimensionality of centripetal and centrifugal nationalisms
Directory of Open Access Journals (Sweden)
Filipe Vasconcelos Romão
2013-11-01
Full Text Available Traditionally, authors focus on the discourse of political actors and how these actors define themselves in order to identify the presence of nationalist political trends. This paper aims to present a wider analytical grid that also includes how nationalism is manifested. In line with this multidimensional proposal, we aim to identify differences in how nationalisms are made manifest according to their relation with power.
Charged particle trajectories and multidimensional analysis
International Nuclear Information System (INIS)
Benayoun, M.; Leruste, P.
1980-03-01
We examine here the simplified physical problem of straight-line trajectories of particles through three wire chambers. Working out the covariance matrix of the data, we compare the physical model to the one deduced from multidimensional analysis. We also examine the stability of the results, and especially the stability of the constraints with respect to errors in the metric induced by the error matrix of the measurements. The results obtained appear general and can be applied in particular to elementary particle beams.
Frost Multidimensional Perfectionism Scale: the portuguese version
Directory of Open Access Journals (Sweden)
Ana Paula Monteiro Amaral
2013-01-01
Full Text Available BACKGROUND: The Frost Multidimensional Perfectionism Scale is one of the most widely used measures of perfectionism worldwide. OBJECTIVE: To analyze the psychometric properties of the Portuguese version of the Frost Multidimensional Perfectionism Scale. METHODS: Two hundred and seventeen students (178 females) from two Portuguese universities filled in the scale, and a subgroup (n = 166) completed a retest after a four-week interval. RESULTS: The scale reliability was good (Cronbach alpha = .857). Corrected item-total correlations ranged from .019 to .548. The scale's test-retest reliability suggested good temporal stability, with a test-retest correlation of .765. A principal component analysis with Varimax rotation was performed and, based on the scree plot, two robust factorial structures were found (four and six factors). The principal component analyses, using Monte Carlo PCA for parallel analysis, confirmed the six-factor solution. The concurrent validity with the Hewitt and Flett MPS was high, as was the discriminant validity of positive and negative affect (Profile of Mood States, POMS). DISCUSSION: The two factorial structures (of four and six dimensions) of the Portuguese version of the Frost Multidimensional Perfectionism Scale replicate the results of different authors, with different samples and cultures. This suggests the scale is a robust instrument to assess perfectionism in several clinical and research settings, as well as in transcultural studies.
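The reliability coefficient reported above can be computed directly from an item-score matrix. A minimal sketch (illustrative only, not the authors' analysis scripts):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, n_items) score matrix.
    Returns Cronbach's alpha: k/(k-1) * (1 - sum of item variances
    divided by the variance of the total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

Perfectly correlated items give alpha = 1; uncorrelated items drive it toward 0.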
S. Boldyreva; S. Fehr (Serge); A. O'Neill; D. Wagner
2008-01-01
The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO ’07), who provided the “strongest possible” notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic
Using EFDD as a Robust Technique for Deterministic Excitation in Operational Modal Analysis
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2007-01-01
carried out on a plate structure excited by respectively a pure stochastic signal and the same stochastic signal superimposed by a deterministic signal. Good agreement was found in terms of both natural frequencies, damping ratios and mode shapes. Even the influence of a deterministic signal located...
Deterministic direct aperture optimization using multiphase piecewise constant segmentation.
Nguyen, Dan; O'Connor, Daniel; Ruan, Dan; Sheng, Ke
2017-11-01
Direct aperture optimization (DAO) attempts to incorporate machine constraints in the inverse optimization to eliminate the post-processing steps in fluence map optimization (FMO) that degrade plan quality. Current commercial DAO methods utilize a stochastic or greedy approach to search a small aperture solution space. In this study, we propose a novel deterministic direct aperture optimization that integrates the segmentation of the fluence map into the optimization problem using the multiphase piecewise constant Mumford-Shah formulation. The Mumford-Shah based direct aperture optimization problem was formulated to include an L2-norm dose fidelity term to penalize differences between the projected dose and the prescribed dose, an anisotropic total variation term to promote piecewise continuity in the fluence maps, and the multiphase piecewise constant Mumford-Shah function to partition the fluence into pairwise discrete segments. A proximal-class, first-order primal-dual solver was implemented to solve the large-scale optimization problem, and an alternating module strategy was implemented to update fluence and delivery segments. Three patients of varying complexity, one glioblastoma multiforme (GBM) patient, one lung (LNG) patient, and one bilateral head and neck (H&N) patient with 3 PTVs, were selected to test the new DAO method. For each patient, 20 non-coplanar beams were first selected using column generation, followed by the Mumford-Shah based DAO (DAO-MS). For comparison, a popular and successful stochastic approach to DAO, simulated annealing, was replicated. The simulated annealing DAO (DAO-SA) plans were then created using the same beam angles and maximum number of segments per beam. PTV coverage, PTV homogeneity (D95/D5), and OAR sparing were assessed for each plan. In addition, high dose spillage, defined as the 50% isodose volume divided by the tumor volume, as well as conformity, defined as the van't Riet conformation number, were evaluated
Multi-dimensional hybrid Fourier continuation-WENO solvers for conservation laws
Shahbazi, Khosro; Hesthaven, Jan S.; Zhu, Xueyu
2013-11-01
We introduce a multi-dimensional, point-wise, multi-domain hybrid Fourier-Continuation/WENO technique (FC-WENO) that enables high-order and non-oscillatory solution of systems of nonlinear conservation laws, an essentially dispersionless, spectral solution away from discontinuities, as well as mild CFL constraints for explicit time-stepping schemes. The hybrid scheme conjugates the expensive, shock-capturing WENO method in small regions containing discontinuities with the efficient FC method in the rest of the computational domain, yielding a highly effective overall scheme for applications with a mix of discontinuities and complex smooth structures. The smooth and discontinuous solution regions are distinguished using the multi-resolution procedure of Harten [A. Harten, Adaptive multiresolution schemes for shock computations, J. Comput. Phys. 115 (1994) 319-338]. We consider a WENO scheme of formal order nine and an FC method of order five. The accuracy, stability and efficiency of the new hybrid method for conservation laws are investigated for problems with both smooth and non-smooth solutions. The Euler equations for gas dynamics are solved for the Mach 3 and Mach 1.25 shock wave interaction with a small, plane, oblique entropy wave using the hybrid FC-WENO, the pure WENO and the hybrid central difference-WENO (CD-WENO) schemes. We demonstrate considerable computational advantages of the new FC-based method over the two alternatives. Moreover, in solving a challenging two-dimensional Richtmyer-Meshkov instability (RMI), the hybrid solver results in a seven-fold speedup over the pure WENO scheme. Thanks to the multi-domain formulation of the solver, the scheme is straightforwardly implemented on parallel processors using the Message Passing Interface (MPI), as well as on Graphics Processing Units (GPUs) using the CUDA programming language. The performance of the solver on parallel CPUs yields almost perfect scaling, illustrating the minimal communication requirements of the multi
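The smooth/discontinuous flagging idea can be illustrated in one dimension: predict fine-grid values by interpolating coarse-grid data and flag cells where the prediction error is large. This is a simplified sketch of a Harten-style multi-resolution indicator, not the solver's actual implementation:

```python
import numpy as np

def flag_discontinuities(u, tol=0.1):
    """u: 1-D array of cell values on a fine grid (even length assumed).
    Predict odd-indexed points by midpoint interpolation of the coarse
    (every-other-point) grid; large prediction error marks non-smoothness."""
    coarse = u[::2]
    pred = 0.5 * (coarse[:-1] + coarse[1:])   # interpolated odd points
    err = np.abs(u[1:-1:2] - pred)
    return err > tol * (np.abs(u).max() + 1e-15)
```

In the hybrid solver, cells flagged this way would be handled by the shock-capturing scheme while unflagged regions use the cheap spectral method.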
DETERMINISTICALLY-MODIFIED INTEGRAL ESTIMATORS OF GRAVITATIONAL TENSOR
Directory of Open Access Journals (Sweden)
Mohsen Romeshkani
Full Text Available The Earth's global gravity field modelling is an important subject in physical geodesy. For this purpose different satellite gravimetry missions have been designed and launched. Satellite gravity gradiometry (SGG) is a technique to measure the second-order derivatives of the gravity field. The Gravity field and steady-state Ocean Circulation Explorer (GOCE) is the first satellite mission to use this technique and is dedicated to recovering Earth gravity models (EGMs) up to medium wavelengths. The existing terrestrial gravimetric data and EGMs can be used for validation of the GOCE data prior to their use. In this research, the tensor of gravitation in the local north-oriented frame is generated using deterministically-modified integral estimators involving terrestrial data and EGMs. The paper shows that the SGG data can be validated with an accuracy of 1-2 mE in Fennoscandia using an integral estimator modified by the Molodensky method. A degree of modification of 100 and a suitable integration cap size for integrating terrestrial data are proper parameters for the estimator.
A Modified Deterministic Model for Reverse Supply Chain in Manufacturing
Directory of Open Access Journals (Sweden)
R. N. Mahapatra
2013-01-01
Full Text Available Technology is becoming pervasive across all facets of our lives today. Technology innovation leading to the development of new products and the enhancement of features in existing products is happening at a faster pace than ever. It is becoming difficult for customers to keep up with the deluge of new technology. This trend has resulted in a gross increase in the use of new materials and decreased customers' interest in relatively older products. This paper deals with a novel model in which stationary demand is fulfilled by remanufactured products along with newly manufactured products. The current model is based on the assumption that the returned items from the customers can be remanufactured at a fixed rate. The remanufactured products are assumed to be as good as the new ones in terms of features, quality, and worth. A methodology is used for the simultaneous calculation of the optimum level of newly manufactured items and the optimum level of remanufactured products. The model is formulated depending on the relationship between the different parameters. An interpretive-modelling-based approach has been employed to model the reverse logistics variables typically found in supply chains (SCs). For simplicity of calculation, a deterministic approach is implemented for the proposed model.
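As a point of reference for deterministic inventory reasoning of this kind, the classic economic order quantity under constant demand is a one-line computation. This textbook formula is illustrative only; it is not the paper's combined manufacturing/remanufacturing model:

```python
import math

def eoq(demand_rate, setup_cost, holding_cost):
    """Classic economic order quantity: the lot size minimizing the sum of
    setup and holding costs under a constant (deterministic) demand rate."""
    return math.sqrt(2 * demand_rate * setup_cost / holding_cost)
```

Models like the one in the paper extend this logic with a second decision variable for the remanufactured lot and a linking return rate.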
Conversion of dependability deterministic requirements into probabilistic requirements
International Nuclear Information System (INIS)
Bourgade, E.; Le, P.
1993-02-01
This report concerns the ongoing survey conducted jointly by the DAM/CCE and NRE/SR branches on the inclusion of dependability requirements in control and instrumentation projects. Its purpose is to enable a customer (the prime contractor) to convert deterministic dependability requirements, expressed in the form "a maximum permissible number of failures, of maximum duration d, in a period t", into probabilistic terms. The customer shall select a confidence level for each previously defined undesirable event by assigning a maximum probability of occurrence. Using the formulae we propose for two repair policies, constant rate or constant time, these probabilistic requirements can then be transformed into equivalent failure rates. It is shown that the same formula can be used for both policies, provided certain realistic assumptions are confirmed, and that for a constant-time repair policy the correct result can always be obtained. The equivalent failure rates thus determined can be included in the specifications supplied to the contractors, who will then be able to proceed to their predictive justification. (author), 8 refs., 3 annexes
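Under a simple Poisson failure model, the requirement "at most k failures in period t, with probability of violation at most alpha" converts to an equivalent failure rate by inverting the Poisson tail. The following bisection sketch is an assumption-laden illustration of that conversion (the Poisson model, function name, and search bounds are choices made here, not the report's repair-policy formulae):

```python
import math

def max_failure_rate(k, t, alpha):
    """Largest failure rate lam such that P(more than k failures in
    time t) <= alpha, assuming failures follow a Poisson process."""
    def p_exceed(lam):
        mu = lam * t
        cdf = sum(math.exp(-mu) * mu**i / math.factorial(i)
                  for i in range(k + 1))
        return 1.0 - cdf
    lo, hi = 0.0, 10.0 / t        # hi chosen so p_exceed(hi) > alpha
    for _ in range(100):          # bisection on the monotone tail
        mid = 0.5 * (lo + hi)
        if p_exceed(mid) <= alpha:
            lo = mid
        else:
            hi = mid
    return lo
```

For k = 0 the tail is 1 - exp(-lam*t), so the answer reduces to -ln(1 - alpha)/t, a handy sanity check.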
A deterministic seismic hazard map of India and adjacent areas
International Nuclear Information System (INIS)
Parvez, Imtiyaz A.; Vaccari, Franco; Panza, Giuliano
2001-09-01
A seismic hazard map of the territory of India and adjacent areas has been prepared using a deterministic approach based on the computation of synthetic seismograms complete with all main phases. The input data set consists of structural models, seismogenic zones, focal mechanisms and an earthquake catalogue. The synthetic seismograms have been generated by the modal summation technique. The seismic hazard, expressed in terms of maximum displacement (DMAX), maximum velocity (VMAX), and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid of 0.2 deg. x 0.2 deg. over the studied territory. The estimated values of the peak ground acceleration are compared with the observed data available for the Himalayan region and found to be in good agreement. Many parts of the Himalayan region have DGA values exceeding 0.6 g. The epicentral areas of the great Assam earthquakes of 1897 and 1950 represent the maximum hazard, with DGA values reaching 1.2-1.3 g. (author)
Entrepreneurs, chance, and the deterministic concentration of wealth.
Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels.
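The chance-plus-compounding mechanism is easy to reproduce. The toy simulation below (the two-point return distribution and population size are arbitrary illustrative choices, not the paper's calibration) tracks the share of total wealth held by the richest 1% after repeated multiplicative returns:

```python
import random

def simulate_wealth(n=1000, t=200, seed=0):
    """Each entrepreneur's wealth compounds by an i.i.d. random return
    per period; returns the share of total wealth held by the top 1%."""
    rng = random.Random(seed)
    w = [1.0] * n
    for _ in range(t):
        w = [x * rng.choice([0.7, 1.5]) for x in w]  # chance alone
    w.sort(reverse=True)
    top = max(1, n // 100)
    return sum(w[:top]) / sum(w)
```

Because log-wealth performs a random walk, the dispersion of wealth across entrepreneurs grows without bound and the top share tends toward one, which is the concentration effect the paper analyzes.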
Is there a sharp phase transition for deterministic cellular automata?
International Nuclear Information System (INIS)
Wootters, W.K.
1990-01-01
Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider. 18 refs., 6 figs
Rapid detection of small oscillation faults via deterministic learning.
Wang, Cong; Chen, Tianrui
2011-08-01
Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally and accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the test (diagnosis) phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the trained normal and fault modes. By comparing the set of estimators with the monitored system, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the trained normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest-residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in the fact that the modeling uncertainty and nonlinear fault functions are accurately approximated, and this knowledge is then utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach.
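The smallest-residual decision rule of the test phase can be sketched as follows. The RBF-network dynamics approximation is omitted here; `mode_models` simply holds precomputed reference trajectories, which is an illustrative simplification of the paper's bank of dynamical estimators:

```python
import numpy as np

def detect_mode(signal, mode_models):
    """signal: monitored trajectory (1-D array).
    mode_models: dict name -> reference trajectory of the same length.
    Returns the mode whose average L1 residual is smallest."""
    residuals = {name: np.mean(np.abs(signal - pred))
                 for name, pred in mode_models.items()}
    return min(residuals, key=residuals.get)
```

A small fault shows up as the fault-mode residual dipping below the normal-mode residual, which is exactly the smallest-residual principle described above.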
Multidimensional First-Order Dominance Comparisons of Population Wellbeing
DEFF Research Database (Denmark)
Siersbæk, Nikolaj; Østerdal, Lars Peter Raahave; Arndt, Thomas Channing
2017-01-01
This chapter conveys the concept of first-order dominance (FOD) with particular focus on applications to multidimensional population welfare comparisons. It gives an account of the fundamental equivalent definitions of FOD both in the one-dimensional and multidimensional setting, illustrated...... and weaknesses of FOD compared to other multidimensional population comparison concepts, and describes practical tools that enable the reader to easily use it....
An Improved Multidimensional MPA Procedure for Bidirectional Earthquake Excitations
Wang, Feng; Sun, Jian-Gang; Zhang, Ning
2014-01-01
Presently, the modal pushover analysis procedure is extended to the multidimensional analysis of structures subjected to multidimensional earthquake excitations. An improved multidimensional modal pushover analysis (IMMPA) method is presented in the paper in order to estimate the response demands of structures subjected to bidirectional earthquake excitations, in which the unidirectional earthquake excitation applied to the equivalent SDOF system is replaced by the direct superposition of two compone...
Directory of Open Access Journals (Sweden)
Seyed Jalal Younesi
2015-06-01
Full Text Available Objective: The current research investigates the relation between deterministic thinking and mental health among drug abusers, in which the role of cognitive distortions is considered and clarified by focusing on deterministic thinking. Methods: The present study is descriptive and correlational. All individuals with experience of drug abuse who had been referred to the Shafagh Rehabilitation Center (Kahrizak) were considered as the statistical population. 110 individuals who were addicted to drugs (stimulants and methamphetamine) were selected from this population by purposive sampling to answer questionnaires about deterministic thinking and general health. For data analysis, Pearson correlation coefficients and regression analysis were used. Results: The results showed a positive and significant relationship between deterministic thinking and lack of mental health (r = 0.22, p < 0.05); among the factors of mental health, anxiety and depression had the closest relation to deterministic thinking. It was found that the two factors of deterministic thinking that function as the strongest predictors of lack of mental health are definitiveness in predicting tragic events and future anticipation. Discussion: It seems that drug abusers suffer from deterministic thinking when they are confronted with difficult situations, so they are more affected by depression and anxiety. This way of thinking may play a major role in impelling or restraining drug addiction.
Multidimensional Risk Management for Underground Electricity Networks
Directory of Open Access Journals (Sweden)
Garcez Thalles V.
2014-08-01
Full Text Available In the paper we consider an electricity provider company that makes decisions on allocating resources to electric network maintenance. The investments decrease the malfunction rate of network nodes. An accidental event (explosion, fire, etc.) or a malfunction in an underground system can have various consequences from different perspectives, such as deaths and injuries of pedestrians, fires in nearby locations, disturbances in the flow of vehicular traffic, loss to the company image, and operating and financial losses. For this reason it is necessary to apply an approach to risk management that considers the multidimensional view of the consequences. Furthermore, an analysis of decision making should consider network dependencies between the nodes of the electricity distribution system. In the paper we propose the use of simulation to assess the network effects (such as the increased probability of other accidental events and the occurrence of blackouts of the dependent nodes) in the multidimensional risk assessment of an electricity grid. The analyzed effects include node overloading due to malfunction of adjacent nodes, and blackouts that take place where there is temporarily no path in the grid between the power plant and a node. The simulation results show that network effects have a crucial role for decisions in network maintenance: outcomes of decisions to repair a particular node can have a significant influence on the performance of other nodes. However, those dependencies are non-linear. The effects of network connectivity (number of connections between nodes) on its multidimensional performance assessment depend heavily on the overloading effect level. The simulation results do not depend on the network type structure (random or small-world); however, simulation outcomes for random networks have shown higher variance compared to small-world networks.
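The blackout criterion, no remaining path between the power plant and a node, is a reachability computation. A minimal sketch (the node indexing and single-plant assumption are illustrative choices, not the paper's simulation model):

```python
from collections import deque

def blacked_out(n_nodes, edges, failed, plant=0):
    """Return the nodes that are not failed themselves but have no path
    to the power plant once the failed nodes are removed (BFS)."""
    adj = {i: [] for i in range(n_nodes)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen = {plant} - set(failed)
    q = deque(seen)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen and v not in failed:
                seen.add(v)
                q.append(v)
    return [i for i in range(n_nodes) if i not in seen and i not in failed]
```

Running this after each simulated malfunction gives the set of blacked-out nodes whose consequences then feed the multidimensional risk assessment.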
Trust and credibility: measured by multidimensional scaling
International Nuclear Information System (INIS)
Warg, L.E.; Bodin, L.
1998-01-01
Full text of publication follows: in focus of much of today's research interest in risk communication is the fact that communities do not trust policy and decision makers such as politicians, government, or industry representatives. This is especially serious in the years to come, when we expect risk issues concerning, for example, the nuclear industry, global warming, and hazardous waste to be even higher on the political and social agenda all over the world. Despite the research efforts devoted to trust, society needs an in-depth understanding of trust for conducting successful communication regarding environmental hazards. The present abstract concerns an experimental study in psychology focused on the possibility of using the multidimensional scaling technique to explore the characteristics people consider important when they say that certain persons are credible. In the study, a total of 61 students of the University of Oerebro, Sweden, were required to compare the similarity between 12 well-known Swedish persons from politics, science, media, industry, the 'TV world', and literature (two persons at a time) regarding their credibility when making statements about risks in society. In addition, the subjects rated the importance of 19 factors for the credibility of a source. These 61 persons comprised three groups of students: pedagogues, business economists, and chemists. There were 61% women and 39% men, and the mean age was 23 years. The results will be analyzed using the multidimensional scaling technique. Differences between the three groups will be analyzed and presented, as well as those between men and women. In addition, the 19 factors will be discussed and considered when trying to label the dimensions accounted for by the multidimensional scaling technique. The results from this study will contribute to our understanding of important factors behind human judgments concerning trust and credibility. It will also point to a
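The planned analysis rests on multidimensional scaling of pairwise dissimilarity judgments. A minimal sketch of classical (Torgerson) scaling via double centering, with a hypothetical four-source dissimilarity matrix rather than the study's data, is:

```python
import numpy as np

def classical_mds(d, k=2):
    """Embed a symmetric dissimilarity matrix d into k dimensions
    using classical scaling (double centering + eigendecomposition)."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)               # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]           # keep the k largest
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Hypothetical credibility dissimilarities among four sources.
d = np.array([[0, 1, 4, 3],
              [1, 0, 3, 4],
              [4, 3, 0, 1],
              [3, 4, 1, 0]], dtype=float)
coords = classical_mds(d)
print(coords.shape)  # (4, 2)
```

Interpreting the recovered axes (e.g. as an "expertise" or "honesty" dimension) is then the labeling step the abstract refers to.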
Construct continuity in the presence of multidimensionality
Staniewska, Dorota
Unidimensionality -- a condition under which only one dominant construct is measured by a test -- is a fundamental assumption of most modern-day psychometric models. However, some tests are multidimensional by design. A test, for instance, might measure physics, biology, and chemistry subscales combined into a "general science" composite. The relative magnitudes of those subscales sometimes shift from administration to administration, which results in an altered composite. This study examined the conditions under which two different forms of a multidimensional test measure the same composite construct to a degree that allows them to be equated, i.e., used interchangeably. IRT true-score equating was used in a simulation study to assess the closeness of the scores on the forms. Conditions examined included the correlations between subscales, varying numbers of items per subscale from form to form, and different subpopulation ability estimates on the subscales. Differences in the equating errors due to the generating model (1PL or 3PL) were also examined. A way of calculating a unidimensional composite from a two-dimensional ability was devised and compared to the unidimensional composite obtained from Parscale. It was found that, in general, the errors increase with decreasing correlation between traits and increasing divergence of the two forms to be equated, with the latter being the main predictor of the equating errors. However, the magnitude of those errors was small for the population as a whole, especially when all examinee abilities are drawn from the same distribution. It was concluded that IRT true-score equating is relatively robust to multidimensionality for the conditions examined, especially if the overall population score is desired. However, when an accurate estimate of the equated score for individuals at the extremes of the population is needed, or whenever population abilities are drawn from more than one distribution, the unidimensional true score
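IRT true-score equating, the method used in the study, maps a score through the inverse of one form's test characteristic curve and then through the other form's curve. A minimal sketch under the 3PL model with two hypothetical three-item forms (not the study's simulated forms) is:

```python
import numpy as np

def tcc(theta, a, b, c):
    """Test characteristic curve: expected true score at ability theta
    under the 3PL model (discrimination a, difficulty b, guessing c)."""
    th = np.atleast_1d(theta)[:, None]
    p = c + (1 - c) / (1 + np.exp(-1.7 * a * (th - b)))
    return p.sum(axis=1)

def true_score_equate(score_x, params_x, params_y,
                      grid=np.linspace(-4, 4, 801)):
    """Invert form X's TCC to recover theta, then read off form Y's
    expected score at that theta (TCCs are increasing in theta)."""
    tx = tcc(grid, *params_x)
    theta = np.interp(score_x, tx, grid)
    return float(tcc(theta, *params_y)[0])

# Hypothetical 1PL-style forms (a=1, c=0); form Y is slightly harder.
px = (np.ones(3), np.array([-1.0, 0.0, 1.0]), np.zeros(3))
py = (np.ones(3), np.array([-0.5, 0.5, 1.5]), np.zeros(3))
print(true_score_equate(1.5, px, py))  # harder form Y -> equated score < 1.5
```

When the forms load on different mixes of subscales, the single theta in this sketch no longer captures the composite, which is exactly the robustness question the study investigates.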
Point Information Gain and Multidimensional Data Analysis
Directory of Open Access Journals (Sweden)
Renata Rychtáriková
2016-10-01
Full Text Available We generalize the point information gain (PIG) and derived quantities, i.e., point information gain entropy (PIE) and point information gain entropy density (PIED), for the case of the Rényi entropy and simulate the behavior of PIG for typical distributions. We also use these methods for the analysis of multidimensional datasets. We demonstrate the main properties of PIE/PIED spectra for real data with the examples of several images and discuss further possible utilizations in other fields of data processing.
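A sketch of the underlying idea: the point information gain of an occurrence is the change in entropy of the empirical distribution when that single point is removed from the histogram. The sign convention and the choice of Rényi order alpha below are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy (base 2) of a discrete distribution p, alpha != 1."""
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def point_information_gain(counts, bin_idx, alpha=2.0):
    """Change in Rényi entropy when one occurrence in bin `bin_idx`
    is removed from the histogram (illustrative sign convention)."""
    p_with = counts / counts.sum()
    reduced = counts.copy()
    reduced[bin_idx] -= 1
    p_without = reduced / reduced.sum()
    return renyi_entropy(p_without, alpha) - renyi_entropy(p_with, alpha)

counts = np.array([50.0, 30.0, 15.0, 5.0])
print(point_information_gain(counts, 0))  # frequent bin: small gain
print(point_information_gain(counts, 3))  # rare bin: larger magnitude
```

Summing such gains over all points yields PIE, and normalizing per bin yields PIED, giving the spectra discussed in the abstract.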
Multidimensional Perceptions Of The 1972 Presidential Election.
Shikiar, R
1976-04-01
Five separate multidimensional scaling analyses, with a total of 2231 subjects and with measurement occasions varying from election day to about 14 months after election day, resulted in two stable dimensions of political perception. These dimensions were identified as Republican and Democratic evaluative dimensions. Significant changes in the saliencies of these dimensions over time were noted for the pro-McGovern subjects, but no such changes were found for the pro-Nixon subjects. Most of these findings were consistent with the previous literature in political perception. The publicity surrounding Watergate apparently did not affect the stability of the political perceptions.
Multidimensional rare event probability estimation algorithm
Directory of Open Access Journals (Sweden)
Leonidas Sakalauskas
2013-09-01
Full Text Available This work presents a Monte Carlo Markov chain algorithm for estimating the frequencies of multidimensional rare events. Rare-event counts are modeled with a Poisson distribution whose parameters are governed by a multivariate normal law with unknown mean vector and covariance matrix. The unknown parameters are estimated by the maximum likelihood method. Equations are derived that must be satisfied by the model's maximum-likelihood parameter estimates. Positive definiteness of the estimated covariance matrices is controlled by the ratio between the maximum and minimum eigenvalues of the matrix.
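A sketch of the generative model described (Poisson counts driven by normally distributed log-rates) and of the eigenvalue-ratio check on the covariance estimate; the dimensions and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-dimensional rare-event model: log-rates follow a
# multivariate normal law, observed counts are Poisson.
mu = np.array([-3.0, -2.5, -4.0])            # mean of log-rates
cov = np.array([[0.4, 0.1, 0.0],
                [0.1, 0.3, 0.05],
                [0.0, 0.05, 0.5]])

log_rates = rng.multivariate_normal(mu, cov, size=1000)
counts = rng.poisson(np.exp(log_rates))      # simulated rare-event counts

# Positive definiteness of an estimated covariance matrix can be
# monitored via its extreme eigenvalues (their ratio is the
# condition number; all eigenvalues must stay positive).
est_cov = np.cov(log_rates, rowvar=False)
eig = np.linalg.eigvalsh(est_cov)
print(eig[0] > 0, eig[-1] / eig[0])
```

In the paper's setting the MCMC iterates would re-estimate `mu` and `cov` from latent log-rates rather than simulate them, with the same eigenvalue check applied at each step.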
Coset Group Construction of Multidimensional Number Systems
Directory of Open Access Journals (Sweden)
Horia I. Petrache
2014-07-01
Full Text Available Extensions of real numbers in more than two dimensions, in particular quaternions and octonions, are finding applications in physics because they naturally capture symmetries of physical systems. However, in the conventional mathematical construction of complex and multicomplex numbers, multiplication rules are postulated instead of being derived from a general principle. A more transparent and systematic approach is proposed here based on the concept of the coset product from group theory. It is shown that extensions of real numbers in two or more dimensions follow naturally from the closure property of finite coset groups, adding insight into the utility of multidimensional number systems in describing symmetries in nature.
Cluster dynamics modelling of materials: A new hybrid deterministic/stochastic coupling approach
Terrier, Pierre; Athènes, Manuel; Jourdan, Thomas; Adjanor, Gilles; Stoltz, Gabriel
2017-12-01
Deterministic simulations of the rate equations governing cluster dynamics in materials are limited by the number of equations to integrate. Stochastic simulations are limited by the high frequency of certain events. We propose a coupling method combining deterministic and stochastic approaches. It allows handling different time scale phenomena for cluster dynamics. This method, based on a splitting of the dynamics, is generic and we highlight two different hybrid deterministic/stochastic methods. These coupling schemes are highly parallelizable and specifically designed to treat large size cluster problems. The proof of concept is made on a simple model of vacancy clustering under thermal ageing.
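The splitting idea can be illustrated on a toy one-species model: a deterministic Euler substep handles the fast, smooth part of the dynamics, and a Poisson (tau-leap) substep handles the rare events. This is an illustrative sketch of operator splitting, not the authors' cluster-dynamics scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def hybrid_step(c, dt, k_det=0.5, k_sto=0.05):
    """One Lie-splitting step for dc/dt = -(k_det + k_sto) * c:
    the k_det part is integrated deterministically (explicit Euler),
    the rare k_sto events are drawn stochastically (tau-leap)."""
    c = c * (1.0 - k_det * dt)                 # deterministic substep
    events = rng.poisson(k_sto * c * dt)       # stochastic substep
    return max(c - events, 0.0)

c, dt = 1000.0, 0.01
for _ in range(200):                           # integrate to t = 2
    c = hybrid_step(c, dt)
print(c)  # close to the exact decay 1000 * exp(-0.55 * 2) ~ 333
```

The appeal of the splitting is that each substep can use the method (and time step) best suited to its own scale, which is what makes the hybrid scheme parallelizable for large cluster populations.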
Anti-deterministic behaviour of discrete systems that are less predictable than noise
Urbanowicz, Krzysztof; Kantz, Holger; Holyst, Janusz A.
2005-05-01
We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in recurrence plots for small thresholds. We show that although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions, and although some properties of AD data are similar to white noise, the AD dynamics is, in fact, less predictable than noise and hence differs from pseudo-random number generators.
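The recurrence-plot diagnostic used above (deterministic dynamics produces long diagonal lines of recurrences, noise does not) can be sketched as follows; the signals and threshold are illustrative:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Thresholded recurrence plot: R[i, j] = 1 when |x[i] - x[j]| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def longest_diagonal(r):
    """Length of the longest off-diagonal line of recurrences -- the
    signature of determinism in a recurrence plot."""
    n, best = len(r), 0
    for k in range(1, n):
        run = 0
        for i in range(n - k):
            run = run + 1 if r[i, i + k] else 0
            best = max(best, run)
    return best

t = np.arange(200)
periodic = np.sin(2 * np.pi * t / 25)          # deterministic signal
noise = np.random.default_rng(2).normal(size=200)
print(longest_diagonal(recurrence_matrix(periodic, 0.1)),
      longest_diagonal(recurrence_matrix(noise, 0.1)))
```

An AD system, by the paper's definition, would behave like the noise column here despite being generated by a deterministic rule.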
Deterministic and heuristic models of forecasting spare parts demand
Directory of Open Access Journals (Sweden)
Ivan S. Milojević
2012-04-01
Full Text Available Knowing the demand for spare parts is the basis for successful spare parts inventory management. Inventory management has two aspects. The first is operational management: acting according to certain models and making decisions in specific situations which could not have been foreseen or are not encompassed by the models. The second aspect is optimization of the model parameters by means of inventory management. Supply item demand (asset demand) is the expression of customers' needs in units in the desired time, and it is one of the most important parameters in inventory management. The basic task of the supply system is demand fulfillment. In practice, demand is expressed through a requisition or request. Given the conditions in which inventory management is considered, demand can be: deterministic or stochastic; stationary or nonstationary; continuous or discrete; satisfied or unsatisfied. The application of a maintenance concept is determined by the technological level of development of the assets being maintained. For example, it is hard to imagine that the concept of self-maintenance can be applied to assets developed and put into use 50 or 60 years ago. Even less complex concepts cannot be applied to vehicles that only have an engine temperature indicator - one that reacts only when the engine is overheated. This means that the maintenance concepts that can be applied are traditional preventive maintenance and corrective maintenance. In order to be applied to a real system, modeling and simulation methods require a completely regulated system, which is not the case with this spare parts supply system. Therefore, this method, which also enables model development, cannot be applied. Deterministic forecasting models are almost exclusively related to the concept of preventive maintenance. Maintenance procedures are planned in advance, in accordance with exploitation and time resources. Since the timing
Activity modes selection for project crashing through deterministic simulation
Directory of Open Access Journals (Sweden)
Ashok Mohanty
2011-12-01
Full Text Available Purpose: The time-cost trade-off problem addressed by CPM-based analytical approaches assumes unlimited resources and the existence of a continuous time-cost function. However, given the discrete nature of most resources, activities can often be crashed only stepwise. Activity crashing for a discrete time-cost function is also known as the activity modes selection problem in project management. This problem is known to be NP-hard. Sophisticated optimization techniques such as dynamic programming, integer programming, genetic algorithms, and ant colony optimization have been used to find efficient solutions to the activity modes selection problem. The paper presents a simple method that can provide an efficient solution to the activity modes selection problem for project crashing. Design/methodology/approach: A simulation-based method implemented on an electronic spreadsheet determines activity modes for project crashing. The method is illustrated with the help of an example. Findings: The paper shows that a simple approach based on a simple heuristic and deterministic simulation can give good results comparable to sophisticated optimization techniques. Research limitations/implications: The simulation-based crashing method presented in this paper is developed to return satisfactory solutions, but not necessarily an optimal solution. Practical implications: The use of spreadsheets for solving Management Science and Operations Research problems makes the techniques more accessible to practitioners. Spreadsheets provide a natural interface for model building, are easy to use in terms of inputs, solutions, and report generation, and allow users to perform what-if analysis. Originality/value: The paper presents the application of simulation implemented on a spreadsheet to determine an efficient solution to the discrete time-cost trade-off problem.
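As an illustration of stepwise crashing with discrete modes, the hypothetical greedy heuristic below (simpler than the paper's spreadsheet simulation, and assuming a serial project so the critical path is the whole chain) repeatedly crashes the activity with the cheapest cost per day saved:

```python
def crash_schedule(activities, deadline):
    """Greedy mode selection for a serial project: crash the activity
    offering the cheapest cost per day saved until the total duration
    meets the deadline. `activities` maps name -> list of
    (duration, cost) modes ordered from normal to fully crashed."""
    mode = {a: 0 for a in activities}
    total = lambda: sum(activities[a][mode[a]][0] for a in activities)
    cost = lambda: sum(activities[a][mode[a]][1] for a in activities)
    while total() > deadline:
        options = []
        for a, modes in activities.items():
            if mode[a] + 1 < len(modes):
                d0, c0 = modes[mode[a]]
                d1, c1 = modes[mode[a] + 1]
                if d0 > d1:
                    options.append(((c1 - c0) / (d0 - d1), a))
        if not options:
            break                      # cannot crash any further
        mode[min(options)[1]] += 1     # cheapest cost-per-day-saved first
    return mode, total(), cost()

acts = {"A": [(5, 100), (4, 140), (3, 200)],
        "B": [(6, 120), (4, 180)],
        "C": [(4, 90), (3, 160)]}
print(crash_schedule(acts, deadline=12))  # -> ({'A': 1, 'B': 1, 'C': 0}, 12, 410)
```

For a general activity network the cost-per-day comparison would be restricted to activities on the current critical path, which is where a simulation-based recalculation (as in the paper) earns its keep.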
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
Directory of Open Access Journals (Sweden)
Ortega Antonio
2005-01-01
Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to yield near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms significantly improve the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16,483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Beyond VQ design, DA techniques are applicable to problems such as classification, clustering, and resource allocation.
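The core of DA is the soft (Gibbs) assignment of each sample to every codevector at a temperature that is gradually lowered; the paper's contribution is replacing this Gibbs measure with cheaper approximations. A minimal sketch of the standard Gibbs assignment (codebook values are hypothetical) is:

```python
import numpy as np

def soft_assign(x, codebook, temperature):
    """Gibbs (soft) assignment probabilities of sample x to each
    codevector at the given temperature; as T -> 0 this hardens into
    the nearest-neighbour rule of the generalized Lloyd algorithm."""
    d2 = np.sum((codebook - x) ** 2, axis=1)   # squared distortions
    logits = -d2 / temperature
    logits -= logits.max()                     # numerical stability
    p = np.exp(logits)
    return p / p.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
x = np.array([0.2, 0.2])
print(soft_assign(x, codebook, temperature=5.0))   # soft: both cells share x
print(soft_assign(x, codebook, temperature=0.01))  # cold: nearly hard
```

At high temperature every sample influences every codevector, which is what lets DA escape the local minima that trap hard-assignment methods.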
An alternate protocol to achieve stochastic and deterministic resonances
Tiwari, Ishant; Dave, Darshil; Phogat, Richa; Khera, Neev; Parmananda, P.
2017-10-01
Periodic and Aperiodic Stochastic Resonance (SR) and Deterministic Resonance (DR) are studied in this paper. To check for the ubiquitousness of the phenomena, two unrelated systems, namely, FitzHugh-Nagumo and a particle in a bistable potential well, are studied. Instead of the conventional scenario of noise amplitude (in the case of SR) or chaotic signal amplitude (in the case of DR) variation, a tunable system parameter ("a" in the case of FitzHugh-Nagumo model and the damping coefficient "j" in the bistable model) is regulated. The operating values of these parameters are defined as the "setpoint" of the system throughout the present work. Our results indicate that there exists an optimal value of the setpoint for which maximum information transfer between the input and the output signals takes place. This information transfer from the input sub-threshold signal to the output dynamics is quantified by the normalised cross-correlation coefficient ( | CCC | ). | CCC | as a function of the setpoint exhibits a unimodal variation which is characteristic of SR (or DR). Furthermore, | CCC | is computed for a grid of noise (or chaotic signal) amplitude and setpoint values. The heat map of | CCC | over this grid yields the presence of a resonance region in the noise-setpoint plane for which the maximum enhancement of the input sub-threshold signal is observed. This resonance region could be possibly used to explain how organisms maintain their signal detection efficacy with fluctuating amounts of noise present in their environment. Interestingly, the method of regulating the setpoint without changing the noise amplitude was not able to induce Coherence Resonance (CR). A possible, qualitative reasoning for this is provided.
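The |CCC| measure is a normalised zero-lag cross-correlation between the sub-threshold input and the system output. The sketch below illustrates it on a hypothetical hard-threshold detector with varying noise amplitude (the paper instead varies a system setpoint at fixed noise); all signal parameters are invented:

```python
import numpy as np

def ccc(signal_in, signal_out):
    """Absolute normalised zero-lag cross-correlation coefficient
    between the sub-threshold input and the system output."""
    a = signal_in - signal_in.mean()
    b = signal_out - signal_out.mean()
    return abs(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 2000)
weak = 0.2 * np.sin(2 * np.pi * t)             # sub-threshold input
for noise_amp in (0.05, 0.3, 2.0):
    noisy = weak + noise_amp * rng.normal(size=t.size)
    output = (noisy > 0.25).astype(float)      # threshold detector
    print(noise_amp, ccc(weak, output))
```

The characteristic unimodal profile (low correlation for too little or too much noise, a peak in between) is the resonance signature the abstract quantifies over the noise-setpoint grid.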
Parkinson's disease classification using gait analysis via deterministic learning.
Zeng, Wei; Liu, Fenglin; Wang, Qinghui; Wang, Ying; Ma, Limin; Zhang, Yu
2016-10-28
Gait analysis plays an important role in maintaining the well-being of human mobility and health care, and is a valuable tool for obtaining quantitative information on motor deficits in Parkinson's disease (PD). In this paper, we propose a method to classify (diagnose) patients with PD and healthy control subjects using gait analysis via deterministic learning theory. The classification approach consists of two phases: a training phase and a classification phase. In the training phase, gait characteristics represented by the gait dynamics are derived from the vertical ground reaction forces under the usual and self-selected paces of the subjects. The gait dynamics underlying gait patterns of healthy controls and PD patients are locally accurately approximated by radial basis function (RBF) neural networks. The obtained knowledge of approximated gait dynamics is stored in constant RBF networks. The gait patterns of healthy controls and PD patients constitute a training set. In the classification phase, a bank of dynamical estimators is constructed for all the training gait patterns. Prior knowledge of gait dynamics represented by the constant RBF networks is embedded in the estimators. By comparing the set of estimators with a test gait pattern of a certain PD patient to be classified (diagnosed), a set of classification errors are generated. The average L 1 norms of the errors are taken as the classification measure between the dynamics of the training gait patterns and the dynamics of the test PD gait pattern according to the smallest error principle. When the gait patterns of 93 PD patients and 73 healthy controls are classified with five-fold cross-validation method, the accuracy, sensitivity and specificity of the results are 96.39%, 96.77% and 95.89%, respectively. Based on the results, it may be claimed that the features and the classifiers used in the present study could effectively separate the gait patterns between the groups of PD patients and healthy
Development of a Deterministic Ethernet Building blocks for Space Applications
Fidi, C.; Jakovljevic, Mirko
2015-09-01
The benefits of using commercially based networking standards and protocols have been widely discussed and are expected to include reductions in overall mission cost, shortened integration and test (I&T) schedules, increased operations flexibility, and hardware and software upgradeability/scalability in step with developments in the commercial world. The deterministic Ethernet technology TTEthernet [1] deployed on the NASA Orion spacecraft demonstrated the use of TTEthernet for a safety-critical human space flight application during Exploration Flight Test 1 (EFT-1). The TTEthernet technology used within the NASA Orion program was matured for that mission but did not lead to broader use in space applications or an international space standard. TTTech has therefore developed a new version which allows the technology to be scaled for different applications, not only high-end missions, decreasing the size of the building blocks and thereby reducing size, weight, and power, which enables use in smaller applications. TTTech is currently developing a full space product offering for its TTEthernet technology to allow its use in a range of space applications, not restricted to launchers and human spaceflight. A broad space market assessment and the current ESA TRP7594 led to the development of a space-grade TTEthernet controller ASIC based on the ESA-qualified Atmel AT1C8RHA95 process [2]. In this paper we describe our current TTEthernet controller development towards a space-qualified network component that will allow future spacecraft to operate in significant radiation environments while using a single onboard network for reliable commanding and data transfer.
Benchmarking the Multidimensional Stellar Implicit Code MUSIC
Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.
2017-04-01
We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
Multidimensional Conservation Laws and Low Regularity Solutions
Energy Technology Data Exchange (ETDEWEB)
Barbara Lee Keyfitz
2007-06-16
This is the concluding report for the project, a continuation of research by Keyfitz and co-workers on multidimensional conservation laws, and applications of nonhyperbolic conservation laws in the two-fluid model for multiphase flow. The multidimensional research project was started with Suncica Canic, at the University of Houston and with Eun Heui Kim, now at California State University Long Beach. Two postdoctoral researchers, Katarina Jegdic and Allen Tesdall, also worked on this research. Jegdic's research was supported (for a total of one year) by this grant. Work on nonhyperbolic models for two-phase flows is being pursued jointly with Michael Sever, Hebrew University. Background for the project is contained in earlier reports. Note that in 2006, the project received a one-year no-cost extension that will end in September, 2007. A new proposal, for continuation of the research and for new projects, will be submitted in the Fall of 2007, with funding requested to begin in the summer of 2008. The reason for the 'funding gap' is Keyfitz's four-year stint as Director of the Fields Institute in Toronto, Canada. The research has continued, but has been supported by Canadian grant funds, as seems appropriate during this period.
Multidimensional biochemical information processing of dynamical patterns.
Hasegawa, Yoshihiko
2018-02-01
Cells receive signaling molecules by receptors and relay information via sensory networks so that they can respond properly depending on the type of signal. Recent studies have shown that cells can extract multidimensional information from dynamical concentration patterns of signaling molecules. We herein study how biochemical systems can process multidimensional information embedded in dynamical patterns. We model the decoding networks by linear response functions, and optimize the functions with the calculus of variations to maximize the mutual information between patterns and output. We find that, when the noise intensity is lower, decoders with different linear response functions, i.e., distinct decoders, can extract much information. However, when the noise intensity is higher, distinct decoders do not provide the maximum amount of information. This indicates that, when transmitting information by dynamical patterns, embedding information in multiple patterns is not optimal when the noise intensity is very large. Furthermore, we explore the biochemical implementations of these decoders using control theory and demonstrate that these decoders can be implemented biochemically through the modification of cascade-type networks, which are prevalent in actual signaling pathways.
Testlet-based Multidimensional Adaptive Testing
Directory of Open Access Journals (Sweden)
Andreas Frey
2016-11-01
Full Text Available Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.
MULTIDIMENSIONAL MODELING OF CORONAL RAIN DYNAMICS
International Nuclear Information System (INIS)
Fang, X.; Xia, C.; Keppens, R.
2013-01-01
We present the first multidimensional, magnetohydrodynamic simulations that capture the initial formation and long-term sustainment of the enigmatic coronal rain phenomenon. We demonstrate how thermal instability can induce a spectacular display of in situ forming blob-like condensations which then start their intimate ballet on top of initially linear force-free arcades. Our magnetic arcades host a chromospheric, transition region, and coronal plasma. Following coronal rain dynamics for over 80 minutes of physical time, we collect enough statistics to quantify blob widths, lengths, velocity distributions, and other characteristics which directly match modern observational knowledge. Our virtual coronal rain displays the deformation of blobs into V-shaped features, interactions of blobs due to mostly pressure-mediated levitations, and gives the first views of blobs that evaporate in situ or are siphoned over the apex of the background arcade. Our simulations pave the way for systematic surveys of coronal rain showers in true multidimensional settings to connect parameterized heating prescriptions with rain statistics, ultimately allowing us to quantify the coronal heating input.
Multidimensional Learner Model In Intelligent Learning System
Deliyska, B.; Rozeva, A.
2009-11-01
The learner model in an intelligent learning system (ILS) has to ensure the personalization (individualization) and adaptability of e-learning in an online learner-centered environment. An ILS is a distributed e-learning system whose modules can be independent and located in different nodes (servers) on the Web. This kind of e-learning is achieved through the resources of the Semantic Web and is designed and developed around a course, a group of courses or a specialty. An essential part of an ILS is the learner model database, which contains structured data about the learner profile and temporal status in the learning process of one or more courses. In this paper, the position of the learner model in an ILS is considered and a relational database is designed from the learner's domain ontology. A multidimensional modeling agent for the source database is designed and the resultant learner data cube is presented. The agent's modules are proposed with corresponding algorithms and procedures. Multidimensional (OLAP) analysis guidelines on the resultant learner model for designing a dynamic learning strategy are highlighted.
[Multidimensional family therapy: which influences, which specificities?].
Bonnaire, C; Bastard, N; Couteron, J-P; Har, A; Phan, O
2014-10-01
Among illegal psychoactive drugs, cannabis is the most consumed by French adolescents. Multidimensional family therapy (MDFT) is a family-based outpatient therapy which has been developed for adolescents with drug and behavioral problems. MDFT has shown its effectiveness in adolescents with substance abuse disorders (notably cannabis abuse) not only in the United States but also in Europe (International Cannabis Need of Treatment project). MDFT is a multidisciplinary approach and an evidence-based treatment, at the crossroads of developmental psychology, ecological theories and family therapy. Its psychotherapeutic techniques find their roots in a variety of approaches which include systemic family therapy and cognitive therapy. The aims of this paper are: to describe the background of MDFT by highlighting its characteristics; to explain how structural and strategic therapies have influenced this approach; to explore the links between MDFT, brief strategic family therapy and multisystemic family therapy; and to underline the specificities of this family therapy method. Multidimensional family therapy was created on the basis of 1) the integration of multiple therapeutic techniques stemming from various family therapy theories; and 2) studies which have shown the efficacy of family therapy. Several trials have shown greater efficacy of MDFT compared with group treatment, cognitive-behavioral therapy and home-based treatment. Studies have also highlighted that MDFT led to superior treatment outcomes, especially among young people with severe drug use and psychiatric co-morbidities. In the field of systemic family therapies, MDFT was influenced by: 1) structural family therapy (S. Minuchin), 2) strategic family therapy (J. Haley), and 3) intergenerational family therapy (Bowen and Boszormenyi-Nagy). MDFT has specific aspects: MDFT therapists think in a multidimensional perspective (because an adolescent's drug abuse is a multidimensional disorder), they
A plateau–valley separation method for textured surfaces with a deterministic pattern
DEFF Research Database (Denmark)
Godi, Alessandro; Kühle, Anders; De Chiffre, Leonardo
2014-01-01
The effective characterization of textured surfaces presenting a deterministic pattern of lubricant reservoirs is an issue with which many researchers are nowadays struggling. Existing standards are not suitable for the characterization of such surfaces, providing at times values without physical...
Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data
U.S. Environmental Protection Agency — This dataset documents the source of the data analyzed in the manuscript " Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII...
International Nuclear Information System (INIS)
Azadeh, A.; Ghaderi, S.F.; Omrani, H.
2009-01-01
This paper presents a deterministic approach for performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units have been considered for the purpose of this study. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach, since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. Moreover, the DEA models are verified and validated by PCA, Spearman and Kendall's tau correlation techniques, whereas previous studies lacked such verification and validation. Also, both input- and output-oriented DEA models are used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for the assessment and optimization of power distribution units in Iran.
National Research Council Canada - National Science Library
Michalowicz, Joseph V; Nichols, Jonathan M; Bucholtz, Frank
2008-01-01
Understanding the limitations to detecting deterministic signals in the presence of noise, especially additive, white Gaussian noise, is of importance for the design of LPI systems and anti-LPI signal defense...
Handbook of EOQ inventory problems stochastic and deterministic models and applications
Choi, Tsan-Ming
2013-01-01
This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
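The single-echelon EOQ model at the core of such deterministic analyses reduces to a closed-form optimal lot size, Q* = sqrt(2DK/h). A minimal sketch using the standard textbook formula (parameter values are illustrative):

```python
from math import sqrt

def eoq(demand_rate: float, order_cost: float, holding_cost: float) -> float:
    """Classic economic order quantity: Q* = sqrt(2*D*K / h)."""
    return sqrt(2 * demand_rate * order_cost / holding_cost)

def total_cost(q: float, demand_rate: float, order_cost: float,
               holding_cost: float) -> float:
    """Annual ordering cost D/q * K plus average holding cost h*q/2."""
    return demand_rate / q * order_cost + holding_cost * q / 2

# Example: D = 1200 units/yr, K = $100 per order, h = $6 per unit per yr
q_star = eoq(1200, 100, 6)  # sqrt(2*1200*100/6) = 200 units per order
```

At Q* the two cost components are equal ($600 each here), the familiar balance condition that stochastic and multi-echelon extensions of the model build on.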
Recent achievements of the neo-deterministic seismic hazard assessment in the CEI region
International Nuclear Information System (INIS)
Panza, G.F.; Vaccari, F.; Kouteva, M.
2008-03-01
A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales: regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contributions of the earthquake source and of the seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study, some examples focused on the CEI region, concerning both regional seismic hazard assessment and seismic microzonation of the selected metropolitan areas, are shown. (author)
On the application of deterministic optimization methods to stochastic control problems
Kramer, L. C.; Athans, M.
1974-01-01
A technique is presented by which deterministic optimization techniques, for example, the maximum principle of Pontriagin, can be applied to stochastic optimal control problems formulated around linear systems with Gaussian noises and general cost criteria. Using this technique, the stochastic nature of the problem is suppressed but for two expectation operations, the optimization being deterministic. The use of the technique in treating problems with quadratic and nonquadratic costs is illustrated.
Yildirim, Necmettin; Kazanci, Caner
2011-01-01
A brief introduction to mathematical modeling of biochemical regulatory reaction networks is presented. Both deterministic and stochastic modeling techniques are covered with examples from enzyme kinetics, coupled reaction networks with oscillatory dynamics and bistability. The Yildirim-Mackey model for lactose operon is used as an example to discuss and show how deterministic and stochastic methods can be used to investigate various aspects of this bacterial circuit. © 2011 Elsevier Inc. All rights reserved.
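The deterministic-versus-stochastic contrast can be illustrated on the simplest regulatory motif, a birth-death process: the ODE description gives a steady state n* = k/γ, while a Gillespie simulation of the same reactions fluctuates around it. This is a generic sketch with made-up rates, not the Yildirim-Mackey lactose operon model itself:

```python
import random

K_PROD, GAMMA = 10.0, 0.5  # production and degradation rates (illustrative)

def deterministic_steady_state() -> float:
    # dn/dt = K_PROD - GAMMA*n  =>  n* = K_PROD / GAMMA
    return K_PROD / GAMMA

def gillespie(t_end: float, seed: int = 1) -> list:
    """Stochastic simulation (Gillespie SSA) of the same birth-death process."""
    rng = random.Random(seed)
    t, n, samples = 0.0, 0, []
    while t < t_end:
        a_birth, a_death = K_PROD, GAMMA * n
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)         # exponential waiting time
        if rng.random() < a_birth / a_total:  # pick which reaction fires
            n += 1
        else:
            n -= 1
        samples.append(n)
    return samples

traj = gillespie(t_end=200.0)
tail = traj[len(traj) // 2:]                  # discard the initial transient
stochastic_mean = sum(tail) / len(tail)       # fluctuates around n* = 20
```

The deterministic model predicts the mean behavior; the stochastic trajectory additionally exhibits the copy-number fluctuations that matter for phenomena such as the bistability discussed in the chapter.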
Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads
Directory of Open Access Journals (Sweden)
Králik Juraj
2014-12-01
Full Text Available This paper presents the experiences from the deterministic and probability analysis of the reliability of communication bridge structure resistance due to extreme loads - wind and earthquake. On the example of the steel bridge between two NPP buildings is considered the efficiency of the bracing systems. The advantages and disadvantages of the deterministic and probabilistic analysis of the structure resistance are discussed. The advantages of the utilization the LHS method to analyze the safety and reliability of the structures is presented
Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992
Energy Technology Data Exchange (ETDEWEB)
Rice, A.F.; Roussin, R.W. [eds.]
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
f-MAC: A Deterministic Media Access Control Protocol Without Time Synchronization
Roedig, Utz; Barroso, Andre; Sreenan, Cormac J.
2006-01-01
Nodes in a wireless network transmit messages through a shared medium. Thus, a Media Access Control (MAC) protocol is necessary to regulate and coordinate medium access. For some application areas it is necessary to have a deterministic MAC protocol which can give guarantees on message delay and channel throughput. Schedule-based MAC protocols, which rely on time synchronization among nodes, are currently used to implement deterministic MAC protocols. Time synchronization is difficult and costly, ...
Multidimensional Physical Self-Concept of Athletes with Physical Disabilities
Shapiro, Deborah R.; Martin, Jeffrey J.
2010-01-01
The purposes of this investigation were first to predict reported PA (physical activity) behavior and self-esteem using a multidimensional physical self-concept model and second to describe perceptions of multidimensional physical self-concept (e.g., strength, endurance, sport competence) among athletes with physical disabilities. Athletes (N =…
Multidimensional Computerized Adaptive Testing for Indonesia Junior High School Biology
Kuo, Bor-Chen; Daud, Muslem; Yang, Chih-Wei
2015-01-01
This paper describes a curriculum-based multidimensional computerized adaptive test that was developed for Indonesian junior high school Biology. In adherence to the Indonesian curriculum of different Biology dimensions, 300 items were constructed and then administered to 2238 students. A multidimensional random coefficients multinomial logit model was…
Analysis of Multidimensional Poverty: Theory and Case Studies ...
International Development Research Centre (IDRC) Digital Library (Canada)
Analysis of Multidimensional Poverty: Theory and Case Studies. Author(s): Louis-Marie Asselin. Publisher(s): Springer, IDRC. 18 August 2009. ISBN: 9781441909053. 228 pages. e-ISBN: 9781552504604.
Multidimensional filter banks and wavelets research developments and applications
Levy, Bernard
1997-01-01
Multidimensional Filter Banks and Wavelets: Research Developments and Applications brings together in one place important contributions and up-to-date research results in this important area. Multidimensional Filter Banks and Wavelets: Research Developments and Applications serves as an excellent reference, providing insight into some of the most important research issues in the field.
Supervised and Unsupervised Learning of Multidimensional Acoustic Categories
Goudbeek, Martijn; Swingley, Daniel; Smits, Roel
2009-01-01
Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is…
Multidimensional First-Order Dominance Comparisons of Population Wellbeing
DEFF Research Database (Denmark)
Siersbæk, Nikolaj; Østerdal, Lars Peter Raahave; Arndt, Thomas Channing
2017-01-01
This chapter conveys the concept of first-order dominance (FOD) with particular focus on applications to multidimensional population welfare comparisons. It gives an account of the fundamental equivalent definitions of FOD both in the one-dimensional and multidimensional setting, illustrated...
DEFF Research Database (Denmark)
Antón Castro, Francesc/François; Musiige, Deogratius; Mioc, Darka
2016-01-01
This paper presents a case study for comparing different multidimensional mathematical modeling methodologies used in multidimensional spatial big data modeling and proposing a new technique. An analysis of multidimensional modeling approaches (neural networks, polynomial interpolation and homoto...
Application of tabu search to deterministic and stochastic optimization problems
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made in the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, Tabu Search Monte Carlo (TSMC) method, is
Deterministic Modeling of the High Temperature Test Reactor
Energy Technology Data Exchange (ETDEWEB)
Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with those of other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the
Order and Chaos in Some Deterministic Infinite Trigonometric Products
Albert, Leif; Kiessling, Michael K.-H.
2017-08-01
It is shown that the deterministic infinite trigonometric products $\prod_{n\in\mathbb{N}}\bigl[1-p+p\cos(n^{-s}t)\bigr] =: \mathrm{Cl}_{p;s}(t)$ with parameters $p\in(0,1]$ and $s>1/2$, and variable $t\in\mathbb{R}$, are inverse Fourier transforms of the probability distributions for certain random series $\Omega_p^{\zeta}(s)$ taking values on the real $\omega$ line; i.e., the $\mathrm{Cl}_{p;s}(t)$ are characteristic functions of the $\Omega_p^{\zeta}(s)$. The special case $p=1=s$ yields the familiar random harmonic series, while in general $\Omega_p^{\zeta}(s)$ is a "random Riemann-$\zeta$ function," a notion which will be explained and illustrated, and connected to the Riemann hypothesis. It will be shown that $\Omega_p^{\zeta}(s)$ is a very regular random variable, having a probability density function (PDF) on the $\omega$ line which is a Schwartz function. More precisely, an elementary proof is given that there exist some $K_{p;s}>0$, a function $F_{p;s}(|t|)$ bounded by $|F_{p;s}(|t|)|\le\exp\bigl(K_{p;s}\,|t|^{1/(s+1)}\bigr)$, and $C_{p;s}=-\frac{1}{s}\int_0^{\infty}\ln\bigl|1-p+p\cos\xi\bigr|\,\xi^{-1-1/s}\,\mathrm{d}\xi$, such that for all $t\in\mathbb{R}$: $\mathrm{Cl}_{p;s}(t)=\exp\bigl(-C_{p;s}|t|^{1/s}\bigr)\,F_{p;s}(|t|)$; the regularity of $\Omega_p^{\zeta}(s)$ follows. Incidentally, this theorem confirms a surmise by Benoit Cloitre, that $\ln\mathrm{Cl}_{1/3;2}(t)\sim -C\sqrt{t}$ as $t\to\infty$ for some $C>0$. Graphical evidence suggests that $\mathrm{Cl}_{1/3;2}(t)$ is an empirically unpredictable (chaotic) function of $t$. This is reflected in the rich structure of the pertinent PDF (the Fourier transform of $\mathrm{Cl}_{1/3;2}$), and illustrated by random sampling of the Riemann-$\zeta$ walks, whose branching rules allow the build-up of fractal-like structures.
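Since the factors approach 1 like $n^{-2s}$, the infinite product converges quickly for $s>1/2$, and $\mathrm{Cl}_{p;s}(t)$ can be explored numerically. A small sketch (the truncation level is an arbitrary choice):

```python
from math import cos

def Cl(t: float, p: float = 1/3, s: float = 2.0, n_terms: int = 200_000) -> float:
    """Partial-product approximation of the infinite trigonometric product
    Cl_{p;s}(t) = prod_{n>=1} [1 - p + p*cos(t / n^s)]."""
    prod = 1.0
    for n in range(1, n_terms + 1):
        prod *= 1 - p + p * cos(t / n ** s)
    return prod

# Cl_{p;s} is a characteristic function: Cl(0) = 1, and for p <= 1/2 every
# factor lies in (0, 1], so the value stays in (0, 1] for all t.
value_at_zero = Cl(0.0)
```

Plotting `Cl(t)` for `p = 1/3`, `s = 2` over a range of `t` reproduces the empirically irregular behavior the abstract describes for $\mathrm{Cl}_{1/3;2}$.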
Nuclear Forensic Inferences Using Iterative Multidimensional Statistics
Energy Technology Data Exchange (ETDEWEB)
Robel, M; Kristo, M J; Heller, M A
2009-06-09
Nuclear forensics involves the analysis of interdicted nuclear material for specific material characteristics (referred to as 'signatures') that imply specific geographical locations, production processes, culprit intentions, etc. Predictive signatures rely on expert knowledge of physics, chemistry, and engineering to develop inferences from these material characteristics. Comparative signatures, on the other hand, rely on comparison of the material characteristics of the interdicted sample (the 'questioned sample' in FBI parlance) with those of a set of known samples. In the ideal case, the set of known samples would be a comprehensive nuclear forensics database, a database which does not currently exist. In fact, our ability to analyze interdicted samples and produce an extensive list of precise material characteristics far exceeds our ability to interpret the results. Therefore, as we seek to develop the extensive databases necessary for nuclear forensics, we must also develop the methods needed to draw inferences from comparison of our analytical results with these large, multidimensional datasets. In the work reported here, we used a large, multidimensional dataset of results from quality control analyses of uranium ore concentrate (UOC, sometimes called 'yellowcake'). We have found that traditional multidimensional techniques, such as principal components analysis (PCA), are especially useful for understanding such datasets and drawing relevant conclusions. In particular, we have developed an iterative partial least squares-discriminant analysis (PLS-DA) procedure that has proven especially adept at identifying the production location of unknown UOC samples. By removing classes which fell far outside the initial decision boundary, and then rebuilding the PLS-DA model, we have consistently produced better and more definitive attributions than with a single-pass classification approach. Performance of the
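The comparative workflow described above, projecting many measured characteristics onto a few discriminating axes, can be sketched in miniature. The following is a toy PCA illustration with synthetic two-analyte data standing in for real UOC measurements (the actual procedure is an iterative PLS-DA, and all values here are invented):

```python
import math
import random

random.seed(0)

# Synthetic stand-in for sample characteristics from two "production sites":
# two measured analytes, with site B offset from site A.
site_a = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(40)]
site_b = [(random.gauss(4, 1), random.gauss(1, 1)) for _ in range(40)]
data = site_a + site_b

# Center the data
mx = sum(x for x, _ in data) / len(data)
my = sum(y for _, y in data) / len(data)
centered = [(x - mx, y - my) for x, y in data]

# 2x2 covariance matrix
n = len(centered)
cxx = sum(x * x for x, _ in centered) / n
cyy = sum(y * y for _, y in centered) / n
cxy = sum(x * y for x, y in centered) / n

# First principal component: leading eigenvector of the covariance matrix,
# given for the 2x2 symmetric case by the axis angle tan(2*theta) = 2b/(a-c)
theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
pc1 = (math.cos(theta), math.sin(theta))

# Project each sample onto PC1; the between-site offset dominates the
# variance, so the two classes separate along this single derived axis.
scores_a = [x * pc1[0] + y * pc1[1] for x, y in centered[:40]]
scores_b = [x * pc1[0] + y * pc1[1] for x, y in centered[40:]]
gap = abs(sum(scores_a) / 40 - sum(scores_b) / 40)
```

The iterative PLS-DA of the abstract refines exactly this kind of projection, discarding classes far outside the decision boundary and refitting to sharpen the attribution.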
Multidimensional Data Modeling For Location-Based Services
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Kligys, Augustas; Pedersen, Torben Bach
2004-01-01
and requests of their users in multidimensional databases, i.e., data warehouses, and content delivery may be based on the results of complex queries on these data warehouses. Such queries aggregate detailed data in order to find useful patterns, e.g., in the interaction of a particular user with the services....... The application of multidimensional technology in this context poses a range of new challenges. The specific challenge addressed here concerns the provision of an appropriate multidimensional data model. In particular, the paper extends an existing multidimensional data model and algebraic query language...... to accommodate spatial values that exhibit partial containment relationships instead of the total containment relationships normally assumed in multidimensional data models. Partial containment introduces imprecision in aggregation paths. The paper proposes a method for evaluating the imprecision of such paths...
Multidimensional quantum entanglement with large-scale integrated optics.
Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G
2018-04-20
The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
The simulation of multidimensional multiphase flows
International Nuclear Information System (INIS)
Lahey, Richard T.
2005-01-01
This paper presents an assessment of various models which can be used for the multidimensional simulation of multiphase flows, such as may occur in nuclear reactors. In particular, a model appropriate for the direct numerical simulation (DNS) of multiphase flows and a mechanistically based, three-dimensional, four-field, turbulent, two-fluid computational multiphase fluid dynamics (CMFD) model are discussed. A two-fluid bubbly flow model, which was derived using potential flow theory, can be extended to other flow regimes, but this will normally involve ensemble-averaging the results from direct numerical simulations (DNS) of various flow regimes to provide the detailed numerical data necessary for the development of flow-regime-specific interfacial and wall closure laws
Constraint theory multidimensional mathematical model management
Friedman, George J
2017-01-01
Packed with new material and research, this second edition of George Friedman’s bestselling Constraint Theory remains an invaluable reference for all engineers, mathematicians, and managers concerned with modeling. As in the first edition, this text analyzes the way Constraint Theory employs bipartite graphs and presents the process of locating the “kernel of constraint” trillions of times faster than brute-force approaches, determining model consistency and computational allowability. Unique in its abundance of topological pictures of the material, this book balances left- and right-brain perceptions to provide a thorough explanation of multidimensional mathematical models. Much of the extended material in this new edition also comes from Phan Phan’s PhD dissertation in 2011, titled “Expanding Constraint Theory to Determine Well-Posedness of Large Mathematical Models.” Praise for the first edition: "Dr. George Friedman is indisputably the father of the very powerful methods of constraint theory...
Biological evolution in a multidimensional fitness landscape.
Saakian, David B; Kirakosyan, Zara; Hu, Chin-Kun
2012-09-01
We considered a multiblock molecular model of biological evolution, in which fitness is a function of the mean types of alleles located at different parts (blocks) of the genome. We formulated an infinite-population model with selection and mutation, and calculated the mean fitness. For the case of recombination, we formulated a model with a multidimensional fitness landscape (the dimension of the space is equal to the number of blocks) and derived a theorem about the dynamics of an initially narrow distribution. We also considered the case of lethal mutations and formulated the finite-population version of the model for that case. Our models, derived for virus evolution, are also of interest for statistical mechanics and the Hamilton-Jacobi equation.
Gender Ideologies in Europe: A Multidimensional Framework.
Grunow, Daniela; Begall, Katia; Buchler, Sandra
2018-02-01
The authors argue, in line with recent research, that operationalizing gender ideology as a unidimensional construct ranging from traditional to egalitarian is problematic and propose an alternative framework that takes the multidimensionality of gender ideologies into account. Using latent class analysis, they operationalize their gender ideology framework based on data from the 2008 European Values Study, of which eight European countries reflecting the spectrum of current work-family policies were selected. The authors examine the form in which gender ideologies cluster in the various countries. Five ideology profiles were identified: egalitarian, egalitarian essentialism, intensive parenting, moderate traditional, and traditional. The five ideology profiles were found in all countries, but with pronounced variation in size. Ideologies mixing gender essentialist and egalitarian views appear to have replaced traditional ideologies, even in countries offering some institutional support for gendered separate spheres.
A complete set of multidimensional Bell inequalities
International Nuclear Information System (INIS)
Arnault, François
2012-01-01
We give a multidimensional generalization of the complete set of Bell-correlation inequalities given by Werner and Wolf (2001 Phys. Rev. A 64 032112) and by Zukowski and Brukner (2002 Phys. Rev. Lett. 88 210401) for the two-dimensional case. Our construction applies to the n-party, two-observable case, where each observable is d-valued. The d^(d^n) inequalities obtained involve homogeneous polynomials. They define the facets of a polytope in a complex vector space of dimension d^n. We detail the inequalities obtained in the case d = 3 and, from them, we recover known inequalities. We finally explain how the violations of our inequalities by quantum mechanics can be computed and could be observed when using unitary observables. (paper)
Crisis multidimensional y sostenibilidad de la vida
Directory of Open Access Journals (Sweden)
Amaia Pérez Orozco
2012-03-01
Full Text Available This text proposes a view of the crisis from the standpoint of the sustainability of life, as an alternative to the hegemonic perspective focused on markets. It argues that the crisis is multidimensional and cumulative, a crisis of civilization, and that it precedes the financial crash. Its consequences for vital processes are analyzed, and two debates are proposed for confronting it: what should be understood by a life worth living, and how to build a collective responsibility for generating its conditions of possibility. Finally, it argues that the economy must be returned to the terrain of political discussion.
Multidimensional evaluation on FR cycle systems
International Nuclear Information System (INIS)
Nakai, Ryodai; Fujii, Sumio; Takakuma, Katsuyuki; Katoh, Atsushi; Ono, Kiyoshi; Ohtaki, Akira; Shiotani, Hiroki
2004-01-01
This report presents some results of the multidimensional evaluation of various fast reactor (FR) cycle system concepts from an interim report of the second phase of the "Feasibility Study on Commercialized FR Cycle System". The method is designed to give more objective and more quantitative evaluations to clarify candidate concepts for a commercialized system. Here we outline the current evaluation method from the five viewpoints of safety, economy, environment, resources and non-proliferation, with some trial evaluation results for cycles consisting of promising technologies in reactor, core and fuel, reprocessing and fuel manufacture. Moreover, we describe FR cycle deployment scenarios which set out the advantages and disadvantages of the cycles from the viewpoints of uranium resources and radioactive waste, based on long-term nuclear material mass flow analyses, and the advantages of deploying the FR cycle itself from the viewpoint of comparison with alternative power supplies as well as cost and benefit. (author)
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of favorability scores for each variable and the fire occurrence probability, as well as the validation of each model resulting from the integration of the different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this allowed us to identify the most relevant variables conditioning the presence of wildfire and to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
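The favorability-score step of the deterministic approach can be sketched as follows; the class counts below are hypothetical, while the study derives them from mapped burnt areas and conditioning-variable layers.

```python
import numpy as np

# Hypothetical burnt/total cell counts for the three classes of one
# conditioning variable (e.g. land-cover classes).
burnt = np.array([120, 30, 50])
total = np.array([1000, 600, 400])

class_ratio = burnt / total           # burnt fraction within each class
overall = burnt.sum() / total.sum()   # burnt fraction over the whole map
favorability = class_ratio / overall  # >1 means the class favors fire
```

Scores above 1 flag classes over-represented among burnt cells; combining the per-variable scores yields the susceptibility map.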
When to conduct probabilistic linkage vs. deterministic linkage? A simulation study.
Zhu, Ying; Matsuyama, Yutaka; Ohashi, Yasuo; Setoguchi, Soko
2015-08-01
When unique identifiers are unavailable, successful record linkage depends greatly on data quality and on the types of variables available. While probabilistic linkage theoretically captures more true matches than deterministic linkage by allowing imperfection in identifiers, studies have shown inconclusive results, likely due to variations in data quality, implementation of the linkage methodology and validation method. This simulation study aimed to understand the data characteristics that affect the performance of probabilistic vs. deterministic linkage. We created ninety-six scenarios that represent real-life situations using non-unique identifiers. We systematically introduced a range of discriminative power, rates of missingness and error, and file sizes to vary linkage patterns and difficulty. We assessed the performance difference of the linkage methods using standard validity measures and computation time. Across scenarios, deterministic linkage showed an advantage in PPV while probabilistic linkage showed an advantage in sensitivity. Probabilistic linkage uniformly outperformed deterministic linkage, as the former generated linkages with a better trade-off between sensitivity and PPV regardless of data quality. However, with low rates of missingness and error in the data, deterministic linkage performed only slightly worse. The implementation of deterministic linkage in SAS took less than 1 min, and probabilistic linkage took 2 min to 2 h depending on file size. Our simulation study demonstrated that the intrinsic rates of missingness and error in the linkage variables are key to choosing between linkage methods. In general, probabilistic linkage was the better choice, but for exceptionally good quality data (<5% error), deterministic linkage was the more resource-efficient choice.
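The contrast between the two linkage methods can be sketched on a toy pair of records; the records, field weights, and threshold below are illustrative, whereas real probabilistic linkage estimates agreement weights from the data (e.g. via a Fellegi-Sunter model).

```python
def deterministic_link(a, b, keys):
    # Deterministic linkage: link only on exact agreement of all identifiers.
    return all(a[k] == b[k] for k in keys)

def probabilistic_link(a, b, weights, threshold):
    # Probabilistic linkage: sum per-field agreement weights and link when
    # the total score clears a threshold, tolerating imperfect fields.
    score = sum(w for k, w in weights.items() if a[k] == b[k])
    return score >= threshold

rec_a = {"name": "smith", "dob": "1970-01-02", "zip": "10001"}
rec_b = {"name": "smith", "dob": "1970-01-02", "zip": "10002"}  # zip error

keys = ["name", "dob", "zip"]
weights = {"name": 4.0, "dob": 5.0, "zip": 2.0}  # illustrative weights

exact = deterministic_link(rec_a, rec_b, keys)          # False: zip differs
fuzzy = probabilistic_link(rec_a, rec_b, weights, 7.0)  # True: 4 + 5 >= 7
```

A single erroneous field breaks the deterministic match but still leaves enough agreement weight for the probabilistic link, mirroring the sensitivity/PPV trade-off described above.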
A study of multidimensional modeling approaches for data warehouse
Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani
2016-08-01
A data warehouse system is used to support organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision-making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art account of multidimensional modeling design.
A Conceptual Model for Multidimensional Analysis of Documents
Ravat, Franck; Teste, Olivier; Tournier, Ronan; Zurlfluh, Gilles
Data warehousing and OLAP are mainly used for the analysis of transactional data. Nowadays, with the evolution of the Internet and the development of semi-structured data exchange formats (such as XML), it is possible to consider entire fragments of data, such as documents, as analysis sources. As a consequence, an adapted multidimensional analysis framework needs to be provided. In this paper, we introduce an OLAP multidimensional conceptual model without facts. This model is based on the unique concept of dimensions and is adapted for multidimensional document analysis. We also provide a set of manipulation operations.
Experimental verification of multidimensional quantum steering
Li, Che-Ming; Lo, Hsin-Pin; Chen, Liang-Yu; Yabushita, Atsushi
2018-03-01
Quantum steering enables one party to communicate with another remote party even if the sender is untrusted. Such characteristics of quantum systems not only provide direct applications to quantum information science, but are also conceptually important for distinguishing between quantum and classical resources. While concrete illustrations of steering have been shown in several experiments, quantum steering has not been certified for higher dimensional systems. Here, we introduce a simple method to experimentally certify two different kinds of quantum steering: Einstein-Podolsky-Rosen (EPR) steering and single-system (SS) steering (i.e., temporal steering), for dimensionality (d) up to d = 16. The former reveals the steerability among bipartite systems, whereas the latter manifests itself in single quantum objects. We use multidimensional steering witnesses to verify EPR steering of polarization-entangled pairs and SS steering of single photons. The ratios between the measured witnesses and the maximum values achieved by classical mimicries are observed to increase with d for both EPR and SS steering. The designed scenario offers a new method to study further the genuine multipartite steering of large dimensionality and potential uses in quantum information processing.
Multidimensional point transform for public health practice.
AbdelMalik, P; Kamel Boulos, M N
2012-01-01
With increases in spatial information and enabling technologies, location-privacy concerns have been on the rise. A commonly proposed solution in public health involves random perturbation; however, consideration for individual dimensions (attributes) has been weak. The current study proposes a multidimensional point transform (MPT) that integrates the spatial dimension with other dimensions of interest to comprehensively anonymise data. The MPT relies on the availability of a base population, a subset patient dataset, and shared dimensions of interest. Perturbation distance and anonymity thresholds are defined, as are allowable dimensional perturbations. A preliminary implementation is presented using sex, age and location as the three dimensions of interest, with a maximum perturbation distance of 1 kilometre and an anonymity threshold of 20%. A synthesised New York county population is used for testing, with 1000 iterations for each of 25, 50, 100, 200 and 400 patient dataset sizes. The MPT consistently yielded a mean perturbation distance of 46 metres with no sex or age perturbation required. Displacement of the spatial mean decreased with patient dataset size and averaged 5.6 metres overall. The MPT presents a flexible, customisable and adaptive algorithm for perturbing datasets for public health, allowing tweaking and optimisation of the trade-offs for different datasets and purposes. It is not, however, a substitute for secure and ethical conduct, and a public health framework for the appropriate disclosure, use and dissemination of data containing personal identifiable information is required. The MPT presents an important component of such a framework.
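The spatial step of such a transform can be sketched as a uniform displacement within the maximum perturbation distance; the function below is a hypothetical illustration and deliberately omits the attribute (sex/age) dimensions and the anonymity-threshold check against the base population.

```python
import math
import random

def perturb_location(x, y, max_dist, rng=random):
    """Displace a point uniformly within a disc of radius max_dist (metres)."""
    r = max_dist * math.sqrt(rng.random())  # sqrt gives uniform area density
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return x + r * math.cos(theta), y + r * math.sin(theta)

random.seed(42)
px, py = perturb_location(0.0, 0.0, 1000.0)
displacement = math.hypot(px, py)  # never exceeds the 1 km bound
```

In the full MPT, a displacement would be accepted only if the resulting record still blends into a sufficiently large share of the base population sharing its dimensions.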
Multidimensional Scaling Visualization Using Parametric Similarity Indices
Directory of Open Access Journals (Sweden)
J. A. Tenreiro Machado
2015-03-01
Full Text Available In this paper, we apply multidimensional scaling (MDS) and parametric similarity indices (PSI) in the analysis of complex systems (CS). Each CS is viewed as a dynamical system, exhibiting an output time-series to be interpreted as a manifestation of its behavior. We start by adopting a sliding window to sample the original data into several consecutive time periods. Second, we define a given PSI for tracking pieces of data. We then compare the windows for different values of the parameter, and we generate the corresponding MDS maps of ‘points’. Third, we use Procrustes analysis to linearly transform the MDS charts for maximum superposition and to build a global MDS map of “shapes”. This final plot captures the time evolution of the phenomena and is sensitive to the PSI adopted. The generalized correlation, the Minkowski distance and four entropy-based indices are tested. The proposed approach is applied to the Dow Jones Industrial Average stock market index and the Europe Brent Spot Price FOB time-series.
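The MDS-then-Procrustes pipeline can be sketched as follows on a synthetic configuration; the paper applies the same two steps to distance matrices built from PSI comparisons of time-series windows.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed a distance matrix D into k dimensions (Torgerson's method)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # keep the k largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def procrustes_align(X, Y):
    """Rotate/reflect Y onto X (orthogonal Procrustes) for max superposition."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return Y @ U @ Vt

rng = np.random.default_rng(0)
P = rng.normal(size=(6, 2))
P -= P.mean(axis=0)                          # centre, as MDS output is centred
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
X = classical_mds(D)                         # recovered up to rotation/reflection
aligned = procrustes_align(P, X)             # superposed onto the reference map
```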
Proposed empirical gas geothermometer using multidimensional approach
Energy Technology Data Exchange (ETDEWEB)
Supranto; Sudjatmiko; Toha, Budianto; Wintolo, Djoko; Alhamid, Idrus
1996-01-24
Several formulas for surface gas geothermometers have been developed for use in geothermal exploration, e.g. by D'Amore and Panichi (1980) and by Darling and Talbot (1992). This paper presents an empirical gas geothermometer formula using a multidimensional approach. The formula was derived from 37 selected chemical data of the 5 production wells from the Awibengkok Geothermal Volcanic Field in West Java. Seven components, i.e., gas volume percentage, CO_{2}, H_{2}S, CH_{4}, H_{2}, N_{2}, and NH_{3}, from these data are utilized to develop three model equations that represent the relationship between temperature and gas composition. These formulas are then tested against several fumarolic chemical data from the Sibual-buali Area (North Sumatera) and from the Ringgit Area (South Sumatera). Preliminary results indicate that gas volume percentage and H_{2}S and CO_{2} concentrations play a significant role in the gas geothermometer. Further verification is currently in progress.
Construct validity of multidimensional personality questionnaire (MPQ)
Directory of Open Access Journals (Sweden)
Mitrović Dušanka
2007-01-01
Full Text Available The paper presents two studies aimed at examining the factor structure of the Multidimensional Personality Questionnaire (MPQ) and the joint factor structure of the scales of the MPQ and the SPSRQ (The Sensitivity to Punishment and Sensitivity to Reward Questionnaire). In the first study, conducted on a sample of 1127 participants of both sexes, aged 18 to 67, the results of a principal component analysis of the MPQ scales point to the existence of three higher-order dimensions, named General Adaptedness, Psychopathic Tendencies and Negative Emotionality. These dimensions correspond to the dimensions of Eysenck's PEN model to a greater extent than they show the assumed similarity with the dimensions of the Reinforcement Sensitivity Theory. In the second study, conducted on a sample of 199 respondents of both sexes, aged 18 to 59, the results of a joint principal component analysis of the MPQ and SPSRQ scales point to the existence of three higher-order dimensions, which correspond to Tellegen's Positive Emotionality, Negative Emotionality, and Constraint.
Imaging Multidimensional Therapeutically Relevant Circadian Relationships
Directory of Open Access Journals (Sweden)
Jamil Singletary
2009-01-01
Full Text Available Circadian clocks gate cellular proliferation and, thereby, therapeutic target availability within proliferative pathways. This temporal coordination occurs within both cancerous and noncancerous proliferating tissues. The timing within the circadian cycle of the administration of drugs targeting proliferative pathways necessarily impacts the amount of damage done to proliferating tissues and cancers. Concurrently measuring target levels and associated key pathway components in normal and malignant tissues around the circadian clock provides a path toward a fuller understanding of the temporal relationships among the physiologic processes governing the therapeutic index of antiproliferative anticancer therapies. The temporal ordering among these relationships, paramount to determining causation, is less well understood using two- or three-dimensional representations. We have created multidimensional multimedia depictions of the temporal unfolding of putatively causative and resultant therapeutic effects of a drug that specifically targets these ordered processes at specific times of the day. The systems and methods used to create these depictions are provided, as well as three example supplementary movies.
Multidimensional Hybridization of Dark Surface Plasmons.
Yankovich, Andrew B; Verre, Ruggero; Olsén, Erik; Persson, Anton E O; Trinh, Viet; Dovner, Gudrun; Käll, Mikael; Olsson, Eva
2017-04-25
Synthetic three-dimensional (3D) nanoarchitectures are providing more control over light-matter interactions and rapidly progressing photonic-based technology. These applications often utilize the strong synergy between electromagnetic fields and surface plasmons (SPs) in metallic nanostructures. However, many of the SP interactions hosted by complex 3D nanostructures are poorly understood because they involve dark hybridized states that are typically undetectable with far-field optical spectroscopy. Here, we use experimental and theoretical electron energy loss spectroscopy to elucidate dark SPs and their interactions in layered metal-insulator-metal disc nanostructures. We go beyond the established dipole SP hybridization analysis by measuring breathing and multipolar SP hybridization. In addition, we reveal multidimensional SP hybridization that simultaneously utilizes in-plane and out-of-plane SP coupling. Near-field classic electrodynamics calculations provide excellent agreement with all experiments. These results advance the fundamental understanding of SP hybridization in 3D nanostructures and provide avenues to further tune the interaction between electromagnetic fields and matter.
Energy Poverty in Europe: A Multidimensional Approach
Directory of Open Access Journals (Sweden)
Carlo Andrea Bollino
2017-12-01
Full Text Available With the European Commission’s “Third Energy Package”, the challenges posed by energy poverty have been recently acknowledged by European legislation. The paper develops a synthetic indicator of energy poverty for the purpose of assessing households’ well-being across different domains of inequality in access to energy services and to a healthy domestic environment. These dimensions are broadly defined in terms of energy affordability and thermal efficiency, two of the main manifestations of energy poverty. The analysis focuses on Europe and expands on existing economic literature by employing a fuzzy analysis for the definition of a multidimensional energy poverty index, which is then used to investigate the role of individual and household characteristics in shaping energy poverty. We find that during the European crisis energy poverty has been more stable than monetary poverty, and that thermal efficiency plays a crucial role in shaping individual and countries’ average degrees of energy poverty. JEL codes: I32; Q41; D10; D63
Correlative visualization techniques for multidimensional data
Treinish, Lloyd A.; Goettsche, Craig
1989-01-01
Critical to the understanding of data is the ability to provide pictorial or visual representations of those data, particularly in support of correlative data analysis. Despite the advancement of visualization techniques for scientific data over the last several years, there are still significant problems in bringing today's hardware and software technology into the hands of the typical scientist. For example, computer science domains outside of computer graphics, such as data management, are required to make visualization effective. Well-defined, flexible mechanisms for data access and management must be combined with rendering algorithms, data transformation, etc. to form a generic visualization pipeline. A generalized approach to data visualization is critical for the correlative analysis of distinct, complex, multidimensional data sets in the space and Earth sciences. Different classes of data representation techniques must be used within such a framework, ranging from simple, static two- and three-dimensional line plots to animation, surface rendering, and volumetric imaging. Static examples of actual data analyses illustrate the importance of an effective pipeline in a data visualization system.
A multidimensional subdiffusion model: An arbitrage-free market
International Nuclear Information System (INIS)
Li Guo-Hua; Zhang Hong; Luo Mao-Kang
2012-01-01
To capture the subdiffusive characteristics of financial markets, the subordinated process, directed by the inverse α-stable subordinator S_α(t) for 0 < α < 1, has been employed as a model of asset prices. In this article, we introduce a multidimensional subdiffusion model that has a bond and K correlated stocks. The stock price process is a multidimensional subdiffusion process directed by the inverse α-stable subordinator. This model describes periods of stagnation for each stock and the behavior of the dependency between multiple stocks. Moreover, we derive the multidimensional fractional backward Kolmogorov equation for the subordinated process using the Laplace transform technique. Finally, using a martingale approach, we prove that the multidimensional subdiffusion model is arbitrage-free, and we also give an arbitrage-free pricing rule for contingent claims associated with the martingale measure. (interdisciplinary physics and related areas of science and technology)
Exactly soluble multidimensional Fokker-Planck equations with nonlinear drift
International Nuclear Information System (INIS)
Brand, H.; Schenzle, A.
1981-01-01
The time-dependent analytic solutions of three classes of multidimensional Fokker-Planck equations with nonlinear drift are presented together with eigenvalues which are complex and depend essentially on the correlation functions of the fluctuations. (orig.)
Multidimensional scaling technique for analysis of magnetic storms ...
Indian Academy of Sciences (India)
Abstract. Multidimensional scaling is a powerful technique for analysis of data. The latitudinal dependence of geomagnetic field variation in horizontal component (H) during magnetic storms is analysed in this paper by employing this technique.
Van der Zee, KI; Van Oudenhoven, JP
2000-01-01
In today's global business environment, executive work is becoming more international in orientation. Several skills and traits may underlie executive success in an international environment. The Multicultural Personality Questionnaire was developed as a multidimensional instrument aimed at
Multidimensional Data Modeling For Location-Based Services
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Kligys, Augustas; Pedersen, Torben Bach
2004-01-01
With the recent and continuing advances in areas such as wireless communications and positioning technologies, mobile, location-based services are becoming possible. Such services deliver location-dependent content to their users. More specifically, these services may capture the movements and requests of their users in multidimensional databases, i.e., data warehouses, and content delivery may be based on the results of complex queries on these data warehouses. Such queries aggregate detailed data in order to find useful patterns, e.g., in the interaction of a particular user with the services. The application of multidimensional technology in this context poses a range of new challenges. The specific challenge addressed here concerns the provision of an appropriate multidimensional data model. In particular, the paper extends an existing multidimensional data model and algebraic query language…
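The kind of aggregation such a model supports can be sketched in miniature; the movement facts and granularities below are hypothetical, standing in for detailed location-based service logs rolled up to find usage patterns.

```python
from collections import Counter

# Hypothetical user-movement facts: (user, road_segment, hour_of_day).
facts = [
    ("u1", "seg_a", 8), ("u1", "seg_a", 9), ("u1", "seg_b", 9),
    ("u2", "seg_a", 8), ("u2", "seg_a", 8), ("u2", "seg_c", 17),
]

# Roll detailed facts up to the (segment, hour) granularity.
by_segment_hour = Counter((seg, hour) for _, seg, hour in facts)
peak_cell, peak_count = by_segment_hour.most_common(1)[0]
```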
A vector-based, multidimensional scanpath similarity measure
Jarodzka, Halszka; Holmqvist, Kenneth; Nyström, Marcus
2011-01-01
Jarodzka, H., Holmqvist, K., & Nyström, M. (2010, March). A vector-based, multidimensional scanpath similarity measure. Presentation at the Eye Tracking Research & Application Symposium (ETRA), Austin, Texas, USA.
Nonparametric Bayesian drift estimation for multidimensional stochastic differential equations
Gugushvili, S.; Spreij, P.
2014-01-01
We consider nonparametric Bayesian estimation of the drift coefficient of a multidimensional stochastic differential equation from discrete-time observations on the solution of this equation. Under suitable regularity conditions, we establish posterior consistency in this context.
Multidimensional quantum entanglement with large-scale integrated optics
DEFF Research Database (Denmark)
Wang, Jianwei; Paesani, Stefano; Ding, Yunhong
2018-01-01
The ability to control multidimensional quantum systems is key for the investigation of fundamental science and for the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimension up to 15 × 15 on a large-scale silicon-photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality and controllability of our multidimensional technology, and further exploit these abilities to demonstrate key quantum applications experimentally unexplored before, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development…
International Nuclear Information System (INIS)
2007-03-01
Computer codes are being used to analyse operational events in nuclear power plants, but until now no special attention has been given to disseminating the benefits of these analyses. The IAEA's Incident Reporting System contains more than 3000 reported operational events. Even though deterministic analyses were certainly performed for some of them, only a few reports are supplemented by the results of computer code analysis. From 23-26 May 2005, a Technical Meeting on Deterministic Analysis of Operational Events in Nuclear Power Plants was organized by the IAEA and held at the International Centre of Croatian Universities in Dubrovnik, Croatia. The objective of the meeting was to provide an international forum for presentations and discussions on how deterministic analysis can be utilized for the evaluation of operational events at nuclear power plants, in addition to the traditional root cause evaluation methods.
International Nuclear Information System (INIS)
Kutkov, V; Buglova, E; McKenna, T
2011-01-01
Lessons learned from responses to past events have shown that more guidance is needed for the response to radiation emergencies (in this context, a 'radiation emergency' means the same as a 'nuclear or radiological emergency') which could lead to severe deterministic effects. The International Atomic Energy Agency (IAEA) requirements for preparedness and response for a radiation emergency, inter alia, require that arrangements shall be made to prevent, to a practicable extent, severe deterministic effects and to provide the appropriate specialised treatment for these effects. These requirements apply to all exposure pathways, both internal and external, and all reasonable scenarios, to include those resulting from malicious acts (e.g. dirty bombs). This paper briefly describes the approach used to develop the basis for emergency response criteria for protective actions to prevent severe deterministic effects in the case of external exposure and intake of radioactive material.
On new physics searches with multidimensional differential shapes
Ferreira, Felipe; Fichet, Sylvain; Sanz, Veronica
2018-03-01
In the context of upcoming new physics searches at the LHC, we investigate the impact of multidimensional differential rates in typical LHC analyses. We discuss the properties of shape information, and argue that multidimensional rates bring limited information in the scope of a discovery, but can have a large impact on model discrimination. We also point out subtleties concerning cancellations of systematic uncertainties and the Cauchy-Schwarz bound on interference terms.
Multidimensional first-order dominance comparisons of population wellbeing
DEFF Research Database (Denmark)
Arndt, Thomas Channing; Siersbæk, Nikolaj; Østerdal, Lars Peter Raahave
In this paper, we convey the concept of first-order dominance (FOD) with particular focus on applications to multidimensional population welfare comparisons. We give an account of the fundamental equivalent definitions of FOD, illustrated with simple numerical examples. An implementable method for detecting dominances is explained, along with a bootstrapping procedure that yields additional information relative to what can be obtained from dominance comparisons alone. We discuss strengths and weaknesses of FOD compared to other multidimensional population comparison concepts, and describe practical…
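One of the equivalent definitions of FOD can be sketched on a tiny binary welfare space via the upper-set characterization; the two distributions below are illustrative, and since upper-set enumeration grows exponentially with the outcome space, practical detection methods instead recast the check as a linear or network-flow program.

```python
from itertools import product

outcomes = list(product([0, 1], repeat=2))  # deprived/non-deprived in 2 dims

def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def upper_sets(points):
    # Enumerate all upper (comprehensive) sets of the outcome lattice.
    sets = []
    for mask in range(1 << len(points)):
        s = {o for i, o in enumerate(points) if mask >> i & 1}
        if all(b in s for a in s for b in points if leq(a, b)):
            sets.append(s)
    return sets

def fod(f, g):
    """True if f first-order dominates g: f puts at least as much
    probability mass on every upper set of outcomes."""
    return all(sum(f[o] for o in s) >= sum(g[o] for o in s)
               for s in upper_sets(outcomes))

f = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.5}
g = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.3}
result = fod(f, g)  # f shifts mass toward better outcomes, so f FOD g
```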
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad hoc, rule-of-thumb methods to determine adequate reserves in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modelling uncertainties, there remain scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch
Fatigue and multidimensional disease severity in chronic obstructive pulmonary disease
Directory of Open Access Journals (Sweden)
Inal-Ince Deniz
2010-06-01
Full Text Available Abstract. Background and aims: Fatigue is associated with longitudinal ratings of health in patients with chronic obstructive pulmonary disease (COPD). Although the degree of airflow obstruction is often used to grade disease severity in patients with COPD, multidimensional grading systems have recently been developed. The aim of this study was to investigate the relationship between perceived and actual fatigue level and multidimensional disease severity in patients with COPD. Materials and methods: Twenty-two patients with COPD (aged 52-74 years) took part in the study. Multidimensional disease severity was measured using the SAFE and BODE indices. Perceived fatigue was assessed using the Fatigue Severity Scale (FSS) and the Fatigue Impact Scale (FIS). Peripheral muscle endurance was evaluated using the number of sit-ups, squats, and modified push-ups that each patient could do. Results: Thirteen patients (59%) had severe fatigue, and their St George's Respiratory Questionnaire scores were significantly higher (p …). Conclusions: Peripheral muscle endurance and fatigue perception in patients with COPD were related to multidimensional disease severity measured with both the SAFE and BODE indices. Improvements in perceived and actual fatigue levels may positively affect multidimensional disease severity and health status in COPD patients. Further research is needed to investigate the effects of fatigue perception and exercise training on patients with different stages of multidimensional COPD severity.
Energy Technology Data Exchange (ETDEWEB)
Zhao Yi; Small, Michael [Hong Kong Polytechnic University, Kowloon, Hong Kong (China)]; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David [School of Physics, University of Western Australia, Crawley, WA 6009 (Australia)]
2006-03-07
We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
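A minimal version of the surrogate-data test can be sketched as follows; the sinusoid-plus-noise signal is synthetic, and lag-1 autocorrelation stands in for the algorithmic-complexity discriminating statistic used in the paper.

```python
import math
import random

random.seed(0)
n = 512
# Synthetic "deterministic" component (a sinusoid) buried in white noise.
data = [math.sin(0.3 * i) + random.gauss(0.0, 1.0) for i in range(n)]

def lag1_autocorr(x):
    # Simple discriminating statistic: lag-1 sample autocorrelation.
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

stat = lag1_autocorr(data)

# Shuffled surrogates keep the amplitude distribution but destroy any
# temporal (deterministic) structure.
surrogate_stats = []
for _ in range(99):
    s = data[:]
    random.shuffle(s)
    surrogate_stats.append(lag1_autocorr(s))

# Determinism is flagged when the data statistic lies outside the
# surrogate distribution.
deterministic_detected = stat > max(surrogate_stats)
```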
International Nuclear Information System (INIS)
Yokose, Yoshio; Noguchi, So; Yamashita, Hideo
2002-01-01
Stochastic and deterministic methods are both used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) are a stochastic method suited to multivariable designs, while the deterministic method considered here is the gradient method, which exploits the sensitivity of the objective function. The two techniques have complementary strengths and weaknesses. In this paper, the characteristics of these techniques are described, and a hybrid technique in which the two methods are used together is evaluated. The results of the comparison obtained by applying each method to electromagnetic devices are then presented. (Author)
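The hybrid idea, a GA for the coarse global search followed by gradient refinement, can be sketched as below. The objective function, GA parameters, and step-size rule are illustrative assumptions, not the paper's devices or codes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative multimodal objective (a stand-in for a device-design cost).
def f(x):
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def df(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

# Stage 1: genetic algorithm for a coarse global search.
pop = rng.uniform(-5.0, 5.0, 40)
for _ in range(30):
    parents = pop[np.argsort(f(pop))[:20]]             # truncation selection
    children = parents[rng.integers(0, 20, 20)] \
               + 0.3 * rng.standard_normal(20)         # mutation
    pop = np.concatenate([parents, children])
x_ga = pop[np.argmin(f(pop))]

# Stage 2: gradient descent with backtracking, refining the GA result;
# the sensitivity (derivative) of the objective drives the local search.
x = x_ga
for _ in range(200):
    g = df(x)
    step = 0.1
    while step > 1e-12 and f(x - step * g) > f(x):
        step *= 0.5                                    # backtrack: never increase f
    if f(x - step * g) <= f(x):
        x = x - step * g
x_refined = x
```

The GA supplies a basin of attraction; the gradient stage then converges quickly within it, which is the usual rationale for combining the two.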
Palmer, Tim N; O'Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
Directory of Open Access Journals (Sweden)
Tim ePalmer
2015-10-01
Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
Deterministic and stochastic trends in the Lee-Carter mortality model
DEFF Research Database (Denmark)
Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene
2015-01-01
The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model with age...... mortality data. We find empirical evidence that this feature of the Lee–Carter model overly restricts the system dynamics and we suggest separating the deterministic and stochastic time series components to the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee...
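The Lee-Carter decomposition referred to above can be sketched numerically. The following is an illustrative reconstruction on synthetic data, not the authors' code: the SVD extracts the age loadings and the mortality index, and the index is then forecast as a random walk with drift (the stochastic trend).

```python
import numpy as np

n_ages, n_years = 10, 40

# Synthetic log-mortality surface built to satisfy the Lee-Carter form
# log m(x,t) = a_x + b_x * k_t exactly (illustrative, not real data).
a = np.linspace(-8.0, -2.0, n_ages)            # age profile
b = np.linspace(0.05, 0.15, n_ages)            # age-specific loadings
k = -0.5 * np.arange(n_years)                  # declining mortality index
k = k - k.mean()                               # identification: sum(k_t) = 0
log_m = a[:, None] + np.outer(b, k)

# Estimation: a_x as row means, then a rank-1 SVD of the residuals.
a_hat = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat, k_hat = U[:, 0] * s[0], Vt[0]
scale = b_hat.sum()                            # constraint: sum(b_x) = 1
b_hat, k_hat = b_hat / scale, k_hat * scale

recon = a_hat[:, None] + np.outer(b_hat, k_hat)

# The stochastic trend: forecast k_t as a random walk with drift.
drift = (k_hat[-1] - k_hat[0]) / (n_years - 1)
k_forecast = k_hat[-1] + drift * np.arange(1, 11)
```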
Deterministic and stochastic trends in the Lee-Carter mortality model
DEFF Research Database (Denmark)
Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene
The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model...... that characterizes mortality data. We find empirical evidence that this feature of the Lee-Carter model overly restricts the system dynamics and we suggest separating the deterministic and stochastic time series components to the benefit of improved fit and forecasting performance. In fact, we find...
Timing and related artifacts in multidimensional NMR
International Nuclear Information System (INIS)
Marion, Dominique
2012-01-01
The information content of multidimensional NMR spectra is limited by the presence of several kinds of artifacts that originate from incorrect timing of evolution periods. The objective of this review is to provide tools for successful implementation of published pulse sequences, in which timing and pulse compensations are often implicit. We will analyze the constraints set by the use of Fourier transformation, the spin precession during rectangular or shaped pulses, the Bloch-Siegert effects due to pulses applied to other spins, and the delay introduced by the filters in the acquisition dimension. A frequency-dependent phase correction or an incorrect scaling of the first data point leads to baseline offsets or curvature due to the properties of the Fourier transform. Because any r.f. pulse has a finite length, chemical shift is always active during excitation, flip-back, inversion, and refocusing pulses. Rectangular or selective shaped pulses can be split into three periods: an ideal rotation surrounded by two chemical shift evolution periods, which should be subtracted from the adjacent delays to avoid a linear phase correction. Bloch-Siegert effects originate from irradiation at frequencies near those observed in the spectrum and can lead to phase or frequency shifts. They can be minimized by simultaneous irradiation on both sides of the observed spins. In terms of timing, the very end of the pulse sequence, the acquisition period, behaves differently, since the data are filtered by either analog or digital means. This additional delay is filter- and spectrometer-specific and should be tuned to minimize the required phase correction. Taken together, all these adjustments lead to perfectly phased spectra with a flat baseline and no peak shifts or distortion. (author)
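The first-point scaling issue can be demonstrated numerically: by the Euler-Maclaurin formula, summing the FID with the first point at full weight adds a constant offset of half the first point to every spectral value, so halving the first point flattens the baseline. The synthetic one-resonance FID below uses illustrative parameters, not any particular spectrometer's data.

```python
import numpy as np

# Synthetic FID: one decaying complex exponential (a single resonance).
N, dt = 1024, 1.0
n = np.arange(N)
fid = np.exp((1j * 0.3 - 0.01) * n * dt)

spec_raw = np.fft.fft(fid).real

# Halving the first data point before the FFT removes the constant
# baseline offset of fid[0]/2 introduced by the discrete summation.
fid_scaled = fid.copy()
fid_scaled[0] *= 0.5
spec_fixed = np.fft.fft(fid_scaled).real

baseline_raw = np.median(spec_raw)      # median is robust to the peak
baseline_fixed = np.median(spec_fixed)
```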
Deterministic Price Setting Rules to Guarantee Profitability of Unbundling in the Airline Industry
Van Diepen, G.; Curran, R.
2011-01-01
Unbundling the traditional airfare is one of the airline industry’s practices to generate ancillary revenue in its struggle for profitability. However, unbundling might just as well negatively affect profit. In this paper, deterministic price setting rules are established to guarantee profitability.
Moreland, James D., Jr
2013-01-01
This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…
DEFF Research Database (Denmark)
Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto
1974-01-01
The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...
Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.
Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.
1996-01-01
Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate
2D deterministic radiation transport with the discontinuous finite element method
International Nuclear Information System (INIS)
Kershaw, D.; Harte, J.
1993-01-01
This report provides a complete description of the analytic and discretized equations for 2D deterministic radiation transport. This computational model has been checked against a wide variety of analytic test problems and found to give excellent results. We make extensive use of the discontinuous finite element method
Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System
Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.
2016-01-01
Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…
Pfaff, W.; Vos, A.; Hanson, R.
2013-01-01
Metal nanostructures can be used to harvest and guide the emission of single photon emitters on-chip via surface plasmon polaritons. In order to develop and characterize photonic devices based on emitter-plasmon hybrid structures, a deterministic and scalable fabrication method for such structures
Czech Academy of Sciences Publication Activity Database
Lin, Qiang; De Vrieze, J.; Li, Ch.; Li, J.; Li, J.; Yao, M.; Heděnec, Petr; Li, H.; Li, T.; Rui, J.; Frouz, Jan; Li, X.
2017-01-01
Roč. 123, October (2017), s. 134-143 ISSN 0043-1354 Institutional support: RVO:60077344 Keywords : anaerobic digestion * deterministic process * microbial interactions * modularity * temperature gradient Subject RIV: DJ - Water Pollution ; Quality OBOR OECD: Water resources Impact factor: 6.942, year: 2016
In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
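The derivative-based propagation idea behind DUA can be illustrated on a toy flow model: first-order variance propagation needs only the sensitivities at the nominal point, while the statistical approach needs many model executions. The formula and parameter values below are illustrative assumptions, not the GRESS/ADGEN implementation or the paper's borehole data.

```python
import numpy as np

# Toy model (an illustrative stand-in for the borehole flow problem):
# flow rate q = 2*pi*T*H / ln(r/rw), linear in transmissivity T and head H.
LOG_RATIO = np.log(100.0)

def flow(T, H):
    return 2.0 * np.pi * T * H / LOG_RATIO

T0, H0 = 85.0, 1000.0           # nominal parameter values
sT, sH = 8.5, 50.0              # parameter standard deviations

# Deterministic (derivative-based) propagation: first-order variance formula.
dq_dT = 2.0 * np.pi * H0 / LOG_RATIO
dq_dH = 2.0 * np.pi * T0 / LOG_RATIO
sd_deterministic = np.sqrt((dq_dT * sT)**2 + (dq_dH * sH)**2)

# Statistical (Monte Carlo) propagation for comparison.
rng = np.random.default_rng(3)
samples = flow(T0 + sT * rng.standard_normal(100_000),
               H0 + sH * rng.standard_normal(100_000))
sd_monte_carlo = samples.std()
```

For small input uncertainties the two estimates agree closely, while the derivative route needs only one evaluation of the sensitivities rather than many model runs.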
Performance of HSPA Vertical Sectorization System under Semi-Deterministic Propagation Model
DEFF Research Database (Denmark)
Nguyen, Huan Cong; Makinen, Jarmo; Stoermer, Wolfgang
2013-01-01
The performance of the Vertical Sectorization (VS) system has been evaluated previously using an empirical propagation model and a regular network layout. In this paper, our aim is to investigate the gain of the VS system under a more realistic scenario. A semi-deterministic path loss model run o...
Deterministic factor analysis: methods of integro-differentiation of non-integral order
Directory of Open Access Journals (Sweden)
Valentina V. Tarasova
2016-12-01
Full Text Available Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method for integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes
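The integro-differentiation of non-integral order that the abstract relies on can be illustrated numerically. The Grünwald-Letnikov scheme below is a standard textbook construction, not taken from the article; the test function and step size are assumptions.

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f
    at t, with lower terminal 0 (first-order accurate in h)."""
    n = int(t / h)
    coeff, total = 1.0, 0.0
    for j in range(n + 1):
        total += coeff * f(t - j * h)
        coeff *= -(alpha - j) / (j + 1)   # next signed binomial coefficient
    return total / h**alpha

# Half-order derivative of f(t) = t; the closed form is
# D^{1/2} t = t^{1/2} / Gamma(3/2).
approx = gl_fractional_derivative(lambda t: t, 0.5, 1.0)
exact = 1.0 / math.gamma(1.5)
```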
Baldwin, Eric; Johnson, Karin; Berthoud, Heidi; Dublin, Sascha
2015-01-01
To compare probabilistic and deterministic algorithms for linking mothers and infants within electronic health records (EHRs) to support pregnancy outcomes research. The study population was women enrolled in Group Health (Washington State, USA) delivering a liveborn infant from 2001 through 2008 (N = 33,093 deliveries) and infant members born in these years. We linked women to infants by surname, address, and dates of birth and delivery using deterministic and probabilistic algorithms. In a subset previously linked using "gold standard" identifiers (N = 14,449), we assessed each approach's sensitivity and positive predictive value (PPV). For deliveries with no "gold standard" linkage (N = 18,644), we compared the algorithms' linkage proportions. We repeated our analyses in an independent test set of deliveries from 2009 through 2013. We reviewed medical records to validate a sample of pairs apparently linked by one algorithm but not the other (N = 51 or 1.4% of discordant pairs). In the 2001-2008 "gold standard" population, the probabilistic algorithm's sensitivity was 84.1% (95% CI, 83.5-84.7) and PPV 99.3% (99.1-99.4), while the deterministic algorithm had sensitivity 74.5% (73.8-75.2) and PPV 95.7% (95.4-96.0). In the test set, the probabilistic algorithm again had higher sensitivity and PPV. For deliveries in 2001-2008 with no "gold standard" linkage, the probabilistic algorithm found matched infants for 58.3% and the deterministic algorithm, 52.8%. On medical record review, 100% of linked pairs appeared valid. A probabilistic algorithm improved linkage proportion and accuracy compared to a deterministic algorithm. Better linkage methods can increase the value of EHRs for pregnancy outcomes research. Copyright © 2014 John Wiley & Sons, Ltd.
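The contrast between the two linkage strategies can be sketched as follows. The records, agreement weights, and threshold are invented for illustration and are not the study's algorithms: the point is only that a Fellegi-Sunter-style score can link a near-miss (e.g. a surname typo) that exact matching rejects.

```python
# Toy comparison of deterministic vs probabilistic mother-infant linkage.
mothers = [
    {"id": "m1", "surname": "johnson", "dob": "2001-03-14", "address": "12 elm st"},
    {"id": "m2", "surname": "smith",   "dob": "2001-07-02", "address": "9 oak ave"},
]
infants = [
    {"id": "i1", "surname": "johnsen", "dob": "2001-03-14", "address": "12 elm st"},  # surname typo
    {"id": "i2", "surname": "smith",   "dob": "2001-07-02", "address": "9 oak ave"},
]

def deterministic_link(m, i):
    """Exact agreement required on every field."""
    return all(m[f] == i[f] for f in ("surname", "dob", "address"))

def probabilistic_score(m, i):
    """Fellegi-Sunter-style sum of agreement weights; disagreement on one
    field can be outweighed by agreement on the others."""
    weights = {"surname": 4.0, "dob": 5.0, "address": 3.0}
    return sum(w for f, w in weights.items() if m[f] == i[f])

THRESHOLD = 7.0   # illustrative cut-off

det_pairs = {(m["id"], i["id"]) for m in mothers for i in infants
             if deterministic_link(m, i)}
prob_pairs = {(m["id"], i["id"]) for m in mothers for i in infants
              if probabilistic_score(m, i) >= THRESHOLD}
```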
Multidimensional data encryption with virtual optics
Yu, Lingfeng
Information security is very important in many application areas in the field of information technology. Recently, a number of optical encryption methods have been proposed for the purpose of information hiding and data security, because optical information processing techniques have obvious advantages such as high degrees of freedom (e.g. amplitude, phase, polarization, wavelength) for encryption and decryption, and high-level data security. However, the limitations of current optical encryption methods relate to the complexity of their optical hardware, the requirements of the information type, lack of flexibility, and the lack of compact and low-cost optoelectronics devices and systems. These problems make it difficult to move optical encryption out of the research lab and into real world application areas. In this thesis, a novel parameterized multidimensional data encryption method based on the concept of "virtual optics" is proposed. A strong motivation for the research in this thesis is to overcome the abovementioned problems currently existing in optical encryption and to retain most of the favorable features of optical encryption. The phrase "virtual optics" means that both the optical encryption and decryption processes are implemented in an all-digital manner, adopting optical information processing technologies such as optical holography, optical diffraction or other relevant optical processes. In addition to utilizing some geometric and physical parameters derived from a configuration of digital optics, some information disarrangement actions have also been suggested as tools for designing multiple locks and keys for data encryption in hyperspace. The sensitivities of these supposed keys are quantitatively analyzed and the possible security level of the proposed cryptosystem is assessed. Security of the cryptosystem is also analyzed by examining some possible attacks on the cryptosystem from the viewpoint of a cryptanalysis. This thesis has shown the
Energy Technology Data Exchange (ETDEWEB)
Boustani, Ehsan [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.; Khakshournia, Samad [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.
2016-12-15
In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rods worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies, some efforts were made and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for the different regions of the core such that the results of the deterministic approach based on the optimized mesh points agree well with those obtained by the Monte Carlo approach.
International Nuclear Information System (INIS)
Boustani, Ehsan; Amirkabir University of Technology, Tehran; Khakshournia, Samad
2016-01-01
In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rods worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies, some efforts were made and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for the different regions of the core such that the results of the deterministic approach based on the optimized mesh points agree well with those obtained by the Monte Carlo approach.
Visual Analysis and Processing of Clusters Structures in Multidimensional Datasets
Bondarev, A. E.
2017-05-01
The article is devoted to problems of visual analysis of cluster structures in multidimensional datasets. For visual analysis, the approach of elastic map design [1,2] is applied; this approach is well suited to processing and visualizing multidimensional datasets. To analyze clusters in the original data volume, elastic maps are used as a method of mapping the original data points onto enclosed manifolds of lower dimensionality. By diminishing the elasticity parameters, one can design a map surface that approximates the multidimensional dataset in question much better. The points of the dataset are then projected onto the map. Unfolding the designed map to a flat plane gives an insight into the cluster structure of the multidimensional dataset. The elastic map approach does not require any a priori information about the data in question and does not depend on data nature, data origin, etc. Elastic maps are usually combined with the PCA approach. Presented in the space of the first three principal components, elastic maps provide quite good results. The article describes the results of applying the elastic map approach to visual analysis of clusters in several multidimensional datasets, including medical data.
A CONCEPTUAL TRAJECTORY MULTIDIMENSIONAL MODEL: AN APPLICATION TO PUBLIC TRANSPORTATION
Directory of Open Access Journals (Sweden)
FRANCISCO MORENO
2011-01-01
Full Text Available Nowadays, thanks to technologies such as global positioning systems and mobile devices equipped with sensors, a large amount of data about moving objects can be collected, e.g., data related to the trajectories these objects follow. On the other hand, data warehouses (DWs), usually modeled through a multidimensional view of data, are specialized databases that support decision making. Unfortunately, conventional DWs offer little support for trajectory management. Although there are some proposals dealing with trajectory DWs, none of them focuses on their multidimensional conceptual modeling. In this paper, a spatial multidimensional conceptual model is extended to incorporate trajectories as first-class concepts. To show the suitability of the proposal, an example related to public transportation is presented.
Graphical Representation of Proximity Measures for Multidimensional Data
Zand, Martin S.; Wang, Jiong; Hilchey, Shannon
2015-01-01
We describe the use of classical and metric multidimensional scaling methods for graphical representation of the proximity between collections of data consisting of cases characterized by multidimensional attributes. These methods can preserve metric differences between cases, while allowing for dimensional reduction and projection to two or three dimensions ideal for data exploration. We demonstrate these methods with three datasets for: (i) the immunological similarity of influenza proteins measured by a multidimensional assay; (ii) influenza protein sequence similarity; and (iii) reconstruction of airport-relative locations from paired proximity measurements. These examples highlight the use of proximity matrices, eigenvalues, eigenvectors, and linear and nonlinear mappings using numerical minimization methods. Some considerations and caveats for each method are also discussed, and compact Mathematica programs are provided. PMID:26692757
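Classical MDS as described above reduces to an eigendecomposition of the double-centered squared-distance matrix. A minimal sketch (not the authors' Mathematica programs; the random 2-D configuration is an assumption) is:

```python
import numpy as np

rng = np.random.default_rng(4)
points = rng.standard_normal((8, 2))             # true 2-D configuration

# Pairwise Euclidean distance matrix (the "proximity" input).
diff = points[:, None, :] - points[None, :, :]
D = np.sqrt((diff**2).sum(-1))

# Classical MDS: double-center the squared distances, then eigendecompose.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n              # centering matrix
B = -0.5 * J @ (D**2) @ J                        # Gram matrix of centered points
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
L = np.clip(eigvals[order[:2]], 0, None)
X = eigvecs[:, order[:2]] * np.sqrt(L)           # 2-D embedding

# Distances in the embedding reproduce the input proximities
# (exactly here, because the data are genuinely two-dimensional).
diff2 = X[:, None, :] - X[None, :, :]
D_recovered = np.sqrt((diff2**2).sum(-1))
```

For data of higher intrinsic dimension the top eigenvalues quantify how much of the metric structure the 2-D or 3-D projection preserves.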
Visual modeling in an analysis of multidimensional data
Zakharova, A. A.; Vekhter, E. V.; Shklyar, A. V.; Pak, A. J.
2018-01-01
The article proposes an approach to solving visualization problems and the subsequent analysis of multidimensional data. Requirements for the properties of visual models created to solve analysis problems are described. The active use of subjective perception factors and dynamic visualization is suggested as a promising direction for developing visual analysis tools for multidimensional and voluminous data. Practical results of solving the problem of multidimensional data analysis are shown using the example of a visual model of empirical data on the current state of research into producing silicon carbide by the electric arc method. Solving this problem yields several results: first, an idea of the possibilities for determining a development strategy for the domain; second, an assessment of the reliability of the published data on this subject and of how the areas of attention of researchers change over time.
Conservative Initial Mapping For Multidimensional Simulations of Stellar Explosions
International Nuclear Information System (INIS)
Chen, Ke-Jung; Heger, Alexander; Almgren, Ann
2012-01-01
Mapping one-dimensional stellar profiles onto multidimensional grids as initial conditions for hydrodynamics calculations can lead to numerical artifacts, one of the most severe of which is the violation of conservation laws for physical quantities such as energy and mass. Here we introduce a numerical scheme for mapping one-dimensional spherically-symmetric data onto multidimensional meshes so that these physical quantities are conserved. We validate our scheme by porting a realistic 1D Lagrangian stellar profile to the new multidimensional Eulerian hydro code CASTRO. Our results show that all important features in the profiles are reproduced on the new grid and that conservation laws are enforced at all resolutions after mapping.
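The conservation requirement can be illustrated with a radial rebinning that deposits each fine shell's mass (rather than interpolating density), so the total is conserved by construction. This is a simplified 1-D sketch of the idea, not the CASTRO mapping scheme; the profile and grids are illustrative.

```python
import numpy as np

# Fine 1-D radial profile: density on 1000 thin shells (illustrative data).
r_edges_fine = np.linspace(0.0, 1.0, 1001)
r_mid = 0.5 * (r_edges_fine[:-1] + r_edges_fine[1:])
density = np.exp(-5.0 * r_mid)                   # stand-in stellar profile

shell_vol = 4.0 / 3.0 * np.pi * (r_edges_fine[1:]**3 - r_edges_fine[:-1]**3)
mass_fine = density * shell_vol                  # mass per fine shell

# Conservative remap onto a coarse grid: split each fine shell's MASS
# among the coarse bins it overlaps, in proportion to overlapping volume,
# then divide by the coarse bin volume to recover a density.
r_edges_coarse = np.linspace(0.0, 1.0, 11)
mass_coarse = np.zeros(10)
for m, lo, hi in zip(mass_fine, r_edges_fine[:-1], r_edges_fine[1:]):
    for j in range(10):
        clo = max(lo, r_edges_coarse[j])
        chi = min(hi, r_edges_coarse[j + 1])
        if chi > clo:                            # shells overlap in radius
            vol_frac = (chi**3 - clo**3) / (hi**3 - lo**3)
            mass_coarse[j] += m * vol_frac

coarse_vol = 4.0 / 3.0 * np.pi * (r_edges_coarse[1:]**3 - r_edges_coarse[:-1]**3)
density_coarse = mass_coarse / coarse_vol
```

Pointwise interpolation of the density onto the coarse grid would not, in general, preserve the integral; depositing mass does.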
SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations
International Nuclear Information System (INIS)
Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir
2014-01-01
The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep-penetration Monte Carlo (MC) shielding problem of a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory methods (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (biased source distribution and importance map) in automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim in this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to focused adjoint source definition, which gave improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and manual (i.e. analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology represents the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role in computer memory management. We investigated the possibility of using the (memory-intense) deterministic module with the broad-group library v7-27n19g, as opposed to the fine-group library v7-200n47g used with the MC module, to fully capture low-energy particle transport and secondary gamma emission. Compared with
On the classification of multidimensionally consistent 3D maps
Petrera, Matteo; Suris, Yuri B.
2017-11-01
We classify multidimensionally consistent maps given by (formal or convergent) series of the following kind: $T_k x_{ij} = x_{ij} + \sum_{m=2}^{\infty} A_{ij;k}^{(m)}(x_{ij}, x_{ik}, x_{jk})$, where the $A_{ij;k}^{(m)}$ are homogeneous polynomials of degree $m$ of their respective arguments. The result of our classification is that the only non-trivial multidimensionally consistent map in this class is given by the well-known symmetric discrete Darboux system $T_k x_{ij} = \dfrac{x_{ij} + x_{ik} x_{jk}}{\sqrt{1 - x_{ik}^2}\,\sqrt{1 - x_{jk}^2}}$.
Theme section: Multi-dimensional modelling, analysis and visualization
DEFF Research Database (Denmark)
Guilbert, Éric; Coltekin, Arzu; Antón Castro, Francesc/François
2016-01-01
describing complex multidimensional phenomena. An example of the relevance of multidimensional modelling is seen with the development of urban modelling, where several dimensions have been added to the traditional 2D map representation (Sester et al., 2011). These include obviously the third spatial dimension...... in order to provide a meaningful representation and assist in data visualisation and mining, modelling and analysis; such as data structures allowing representation at different scales or in different contexts of thematic information. Such issues are of importance with regard to the mission of the ISPRS...
Findings Toward a Multidimensional Measure of Adolescent Health Literacy
Massey, Philip; Prelip, Michael; Calimlim, Brian; Afifi, Abdelmonem; Quiter, Elaine; Nessim, Sharon; Wongvipat-Kalev, Nancy; Glik, Deborah
2017-01-01
Objective: To explore a multidimensional measure of health literacy that incorporates skills necessary to manage one’s health environment. Methods: We designed a questionnaire to assess variation in an expanded understanding of health literacy among publicly insured adolescents in California (N = 1208) regarding their health care experiences and insurance. Results: Factor loading and item clustering patterns reflected in the exploratory principal components factor analysis suggest that the data are parsimoniously described by 6 domains. Conclusion: This multidimensional measure becomes relevant in an era of health care reform in which many will for the first time have health insurance requiring them to navigate a system that uses a managed care model. PMID:23985181
A scalable pairwise class interaction framework for multidimensional classification
DEFF Research Database (Denmark)
Arias, Jacinto; Gámez, Jose A.; Nielsen, Thomas Dyhre
2016-01-01
We present a general framework for multidimensional classification that captures the pairwise interactions between class variables. The pairwise class interactions are encoded using a collection of base classifiers (Phase 1), for which the class predictions are combined in a Markov random fie...... of the framework and we test the behavior of the different scalability strategies proposed. A comparison with other state-of-the-art multidimensional classifiers shows that the proposed framework either outperforms or is competitive with the tested straw-man methods.
Multidimensional universes, Kaluza-Klein, Einstein spaces and symmetry breaking
International Nuclear Information System (INIS)
Coquereaux, R.
1983-12-01
The aim of these lectures was to present a review of the ''multidimensional universes'' where the old Kaluza-Klein idea holds true. I first give a survey of the theory of fiber bundles. Then follows a discussion of invariant metrics on groups and homogeneous spaces, and a very short section on basic Riemannian geometry. The important results about the structure (topology and metric) of these multidimensional universes are given, and the physical ideas are also discussed. In section 6 we show how to obtain many homogeneous Einstein metrics on groups and homogeneous spaces and study how they can lead to ''spontaneous symmetry breaking''
Fuzzy multidimensional inequality measurement. Policies to reduce inequality in Tunisia
Directory of Open Access Journals (Sweden)
Lamia HASNAOUI
2015-11-01
Full Text Available This article presents a systematic treatment of the conceptual framework of multidimensional fuzzy measurement of inequality. Fuzzy logic is a type of multivalued logic derived from fuzzy set theory. The introduction of dimensions of human existence, such as health, energy and housing, provides considerable enrichment to our understanding of inequality and its causes. We propose a multidimensional fuzzy measurement based on membership functions of inequality, and then suggest some policies to reduce inequality. An application based on individual well-being data from Tunisian households in 2010 is presented to illustrate the use of the proposed index.
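A minimal sketch of a fuzzy multidimensional deprivation index in the Cerioli-Zani style follows. The achievement data, trapezoidal membership function, thresholds, and inverse-prevalence weighting are all illustrative assumptions, not the article's index.

```python
import numpy as np

# Rows: individuals; columns: dimensions (income, health, energy, housing).
# Achievement scores scaled to [0, 1] (illustrative data).
ach = np.array([
    [0.9, 0.8, 1.0, 0.7],
    [0.2, 0.4, 0.3, 0.1],
    [0.6, 0.5, 0.7, 0.9],
])

# Fuzzy membership in the set "deprived": 1 at zero achievement,
# declining linearly to 0 at the threshold z (a common linear choice).
z = 0.8
membership = np.clip((z - ach) / z, 0.0, 1.0)

# Dimension weights: inverse-prevalence style, normalized to sum to 1.
mean_dep = membership.mean(axis=0)
w = np.log(1.0 / np.maximum(mean_dep, 1e-9))
w = w / w.sum()

# Fuzzy multidimensional deprivation index per individual and overall.
individual_index = membership @ w
overall_index = individual_index.mean()
```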
Integral and Multidimensional Linear Distinguishers with Correlation Zero
DEFF Research Database (Denmark)
Bogdanov, Andrey; Leander, Gregor; Nyberg, Kaisa
2012-01-01
Zero-correlation cryptanalysis uses linear approximations holding with probability exactly 1/2. In this paper, we reveal fundamental links of zero-correlation distinguishers to integral distinguishers and multidimensional linear distinguishers. We show that an integral implies zero-correlation linear approximations and that a zero-correlation linear distinguisher is actually a special case of multidimensional linear distinguishers. These observations provide new insight into zero-correlation cryptanalysis, which is illustrated by attacking a Skipjack variant and round-reduced CAST-256 without...
Energy Technology Data Exchange (ETDEWEB)
Marchand, E
2007-12-15
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
Regularity of pointwise boundary control systems
DEFF Research Database (Denmark)
Pedersen, Michael
1992-01-01
We will in these notes address some problems arising in "real-life" control applications, namely problems concerning distributional control inputs on the boundary of the spatial domain. We extend the classical variational approach and give easily checkable sufficient conditions for the solutions...
Directory of Open Access Journals (Sweden)
MANFREDI, P.
2014-11-01
Full Text Available This paper extends recent literature results concerning the statistical simulation of circuits affected by random electrical parameters by means of the polynomial chaos framework. With respect to previous implementations, based on the generation and simulation of augmented and deterministic circuit equivalents, the modeling is extended to generic "black-box" multi-terminal nonlinear subcircuits describing complex devices, like those found in integrated circuits. Moreover, based on recently-published works in this field, a more effective approach to generate the deterministic circuit equivalents is implemented, thus yielding more compact and efficient models for nonlinear components. The approach is fully compatible with commercial (e.g., SPICE-type) circuit simulators and is thoroughly validated through the statistical analysis of a realistic interconnect structure with a 16-bit memory chip. The accuracy and the comparison against previous approaches are also carefully established.
How the growth rate of host cells affects cancer risk in a deterministic way
Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe
2017-09-01
It is well known that cancers are significantly more often encountered in some tissues than in others. In this paper, using a deterministic model describing the interactions between host, effector immune and tumor cells at the tissue level, we show that this can be explained by the dependency of tumor growth on parameter values characterizing the type as well as the state of the tissue considered, due to the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without invoking random mutations arising during DNA replication in nonmalignant cells or "bad luck". Strategies to limit cancer-induced mortality could therefore be based on improving the way of life, that is, on better preserving the tissue where mutant cells randomly arise.
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
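The abstract above contrasts derivative-based (deterministic) uncertainty analysis with sampling. A minimal sketch of the idea, with a made-up two-parameter model standing in for a large code and hand-written analytic derivatives standing in for what GRESS/ADGEN-style computer calculus would generate (all names and values here are illustrative assumptions, not from the paper):

```python
import math

# Toy model standing in for a large simulation code: y = p1 * exp(-p2).
def model(p1, p2):
    return p1 * math.exp(-p2)

# Analytic partial derivatives dy/dp1 and dy/dp2, the kind of output a
# computer-calculus system would produce from the model source.
def sensitivities(p1, p2):
    return math.exp(-p2), -p1 * math.exp(-p2)

# Deterministic Uncertainty Analysis, first order: propagate parameter
# standard deviations through the sensitivities instead of Monte Carlo
# sampling the model thousands of times.
p1, p2 = 2.0, 0.5
s1, s2 = 0.1, 0.05                  # assumed parameter standard deviations
d1, d2 = sensitivities(p1, p2)
var_y = (d1 * s1) ** 2 + (d2 * s2) ** 2
print(model(p1, p2), math.sqrt(var_y))
```

The derivative evaluation costs a single model run plus the adjoint, which is why this route scales so much better than sampling when the model is expensive.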
Exponential power spectra, deterministic chaos and Lorentzian pulses in plasma edge dynamics
International Nuclear Information System (INIS)
Maggs, J E; Morales, G J
2012-01-01
Exponential spectra have been observed in the edges of tokamaks, stellarators, helical devices and linear machines. The observation of exponential power spectra is significant because such a spectral character has been closely associated with the phenomenon of deterministic chaos by the nonlinear dynamics community. The proximate cause of exponential power spectra in both magnetized plasma edges and nonlinear dynamics models is the occurrence of Lorentzian pulses in the time signals of fluctuations. Lorentzian pulses are produced by chaotic behavior in the separatrix regions of plasma E × B flow fields or the limit cycle regions of nonlinear models. Chaotic advection, driven by the potential fields of drift waves in plasmas, results in transport. The observation of exponential power spectra and Lorentzian pulses suggests that fluctuations and transport at the edge of magnetized plasmas arise from deterministic, rather than stochastic, dynamics. (paper)
International Nuclear Information System (INIS)
Sheng Yubo; Deng Fuguo
2010-01-01
Entanglement purification is a very important element for long-distance quantum communication. Different from all the existing entanglement purification protocols (EPPs) in which two parties can only obtain some quantum systems in a mixed entangled state with a higher fidelity probabilistically by consuming quantum resources exponentially, here we present a deterministic EPP with hyperentanglement. Using this protocol, the two parties can, in principle, obtain deterministically maximally entangled pure states in polarization without destroying any less-entangled photon pair, which will improve the efficiency of long-distance quantum communication exponentially. Meanwhile, it will be shown that this EPP can be used to complete nonlocal Bell-state analysis perfectly. We also discuss this EPP in a practical transmission.
Lee, Sylvanus Y.; Amsden, Jason J.; Boriskina, Svetlana V.; Gopinath, Ashwin; Mitropolous, Alexander; Kaplan, David L.; Omenetto, Fiorenzo G.; Negro, Luca Dal
2010-01-01
Light scattering phenomena in periodic systems have been investigated for decades in optics and photonics. Their classical description relies on Bragg scattering, which gives rise to constructive interference at specific wavelengths along well defined propagation directions, depending on illumination conditions, structural periodicity, and the refractive index of the surrounding medium. In this paper, by engineering multifrequency colorimetric responses in deterministic aperiodic arrays of nanoparticles, we demonstrate significantly enhanced sensitivity to the presence of a single protein monolayer. These structures, which can be readily fabricated by conventional Electron Beam Lithography, sustain highly complex structural resonances that enable a unique optical sensing approach beyond the traditional Bragg scattering with periodic structures. By combining conventional dark-field scattering micro-spectroscopy and simple image correlation analysis, we experimentally demonstrate that deterministic aperiodic surfaces with engineered structural color are capable of detecting, in the visible spectral range, protein layers with thickness of a few tens of Angstroms. PMID:20566892
A Deterministic Safety Assessment of a Pyro-processed Waste Repository
International Nuclear Information System (INIS)
Lee, Youn Myoung; Jeong, Jong Tae; Choi, Jong Won
2012-01-01
A GoldSim template program for the safety assessment of a hybrid-type repository system, called 'A-KRS', in which two kinds of pyro-processed radioactive wastes, low-level metal wastes and ceramic high-level wastes arising from the pyro-processing of PWR spent nuclear fuel, are disposed of, has been developed. This program is ready for both deterministic and probabilistic total system performance assessment and is able to evaluate nuclide release from the repository and further transport into the geosphere and biosphere under various normal and disruptive natural and manmade events and scenarios. The A-KRS has been deterministically assessed with 5 normal and abnormal scenarios associated with nuclide release and transport in and around the repository. Dose exposure rates to the farming exposure group have been evaluated in accordance with all the scenarios and then compared with one another.
Seismic hazard in Romania associated to Vrancea subcrustal source Deterministic evaluation
Radulian, M; Moldoveanu, C L; Panza, G F; Vaccari, F
2002-01-01
Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes, showing how efficient numerical synthesis is in predicting realistic ground motion and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful for computing seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the world because of its striking peculiarities: the extreme concentration of seismicity with a remarkable invariance of the foci distribution, the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume, the predominance of a reverse faulting mechanism with the T-axis almost vertical and the P-axis almost horizontal, and the mo...
Deterministic and stochastic control of chimera states in delayed feedback oscillator
Energy Technology Data Exchange (ETDEWEB)
Semenov, V. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Zakharova, A.; Schöll, E. [Institut für Theoretische Physik, TU Berlin, Hardenbergstraße 36, 10623 Berlin (Germany); Maistrenko, Y. [Institute of Mathematics and Center for Medical and Biotechnical Research, NAS of Ukraine, Tereschenkivska Str. 3, 01601 Kyiv (Ukraine)
2016-06-08
Chimera states, characterized by the coexistence of regular and chaotic dynamics, are found in a nonlinear oscillator model with negative time-delayed feedback. The control of these chimera states by external periodic forcing is demonstrated by numerical simulations. Both deterministic and stochastic external periodic forcing are considered. It is shown that multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. The constructive role of noise in the formation of chimera states is shown.
Using reputation systems and non-deterministic routing to secure wireless sensor networks.
Moya, José M; Vallejo, Juan Carlos; Fraga, David; Araujo, Alvaro; Villanueva, Daniel; de Goyeneche, Juan-Mariano
2009-01-01
Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios.
Deterministic Seirs Epidemic Model for Modeling Vital Dynamics, Vaccinations, and Temporary Immunity
Marek B. Trawicki
2017-01-01
In this paper, the author proposes a new SEIRS model that generalizes several classical deterministic epidemic models (e.g., SIR, SIS, SEIR and SEIRS) involving the relationships between the susceptible S, exposed E, infected I, and recovered R individuals for understanding the proliferation of infectious diseases. As a way to incorporate the most important features of the previous models under the assumption of homogeneous mixing (mass-action principle) of the individuals in the popula...
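As a hedged illustration of the compartment bookkeeping such a model involves, here is a generic SEIRS sketch with vital dynamics (birth/death rate mu), a vaccination rate v, and temporary immunity (waning rate omega); the parameter values and exact terms are assumptions for illustration, not taken from the paper:

```python
# One forward-Euler step of a generic SEIRS model with vital dynamics,
# vaccination (v) and waning immunity (omega). Parameter values are
# illustrative only. Births balance deaths, so N = S + E + I + R is conserved.
def seirs_step(S, E, I, R, dt, beta=0.5, sigma=0.2, gamma=0.1,
               mu=0.01, v=0.05, omega=0.02):
    N = S + E + I + R
    dS = mu * N - beta * S * I / N - (mu + v) * S + omega * R
    dE = beta * S * I / N - (sigma + mu) * E
    dI = sigma * E - (gamma + mu) * I
    dR = gamma * I + v * S - (omega + mu) * R
    return S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR

state = (0.99, 0.0, 0.01, 0.0)     # fractions of the population
for _ in range(1000):
    state = seirs_step(*state, dt=0.1)
print(state)                        # long-run compartment fractions
```

Summing the four derivatives gives mu*N - mu*(S+E+I+R) = 0, so total population is conserved at every step, which is a quick sanity check for any implementation of this model class.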
Autogenic succession and deterministic recovery following disturbance in soil bacterial communities
DEFF Research Database (Denmark)
Jurburg, Stephanie D.; Nunes, Ines Marques; Stegen, James C.
2017-01-01
... to understand the successional trajectory of soil bacterial communities following disturbances and the mechanisms controlling these dynamics at a scale relevant for these organisms, we subjected soil microcosms to a heat disturbance and followed the community composition of active bacteria over 50 days ... slowed down, and a stability phase (after 29 days), during which the community tended towards its original composition. Phylogenetic turnover patterns indicated that the community experienced stronger deterministic selection during recovery. Thus, soil bacterial communities, despite their extreme...
Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Juan-Mariano de Goyeneche
2009-05-01
Full Text Available Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios.
Energy Technology Data Exchange (ETDEWEB)
Dillstroem, Peter; Bergman, Mats; Brickstad, Bjoern; Weilin Zang; Sattari-Far, Iradj; Andersson, Peder; Sund, Goeran; Dahlberg, Lars; Nilsson, Fred (Inspecta Technology AB, Stockholm (Sweden))
2008-07-01
SSM has supported research work for the further development of a previously developed procedure/handbook (SKI Report 99:49) for assessment of detected cracks and tolerance for defect analysis. During the operative use of the handbook, needs were identified to update the deterministic part of the procedure and to introduce a new probabilistic flaw evaluation procedure, as well as for a better description of the theoretical basis of the computer program. The principal aim of the project has been to update the deterministic part of the recently developed procedure and to introduce a new probabilistic flaw evaluation procedure. Other objectives of the project have been to validate the conservatism of the procedure, make the procedure well defined and easy to use, and make the handbook that documents the procedure as complete as possible. The procedure/handbook and computer program ProSACC, Probabilistic Safety Assessment of Components with Cracks, have been extensively revised within this project. The major differences compared to the last revision are within the following areas: it is now possible to deal with a combination of deterministic and probabilistic data; it is possible to include J-controlled stable crack growth; the appendices on material data to be used for nuclear applications and on residual stresses are revised; a new deterministic safety evaluation system is included; the conservatism in the method for evaluation of the secondary stresses for ductile materials is reduced; and a new geometry, a circular bar with a circumferential surface crack, has been introduced. The results of this project will be of use to SSM in safety assessments of components with cracks and in assessments of the intervals between inspections of components in nuclear power plants
A property of the value multifunction of the deterministic mean-field game
Averboukh, Yurii
2017-11-01
This note is concerned with the theory of many-player differential games examined within the framework of the mean field approach. The results presented in the note are as follows. First, we show that the solution to the deterministic mean field game can be nonunique. Second, we present a property of the value multifunction of the mean field game that describes it through its values at intermediate times.
Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão
2015-03-17
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.
Bondorf, Steffen; Nikolaus, Paul; Schmitt, Jens B.
2016-01-01
Networks are integral parts of modern safety-critical systems and certification demands the provision of guarantees for data transmissions. Deterministic Network Calculus (DNC) can compute a worst-case bound on a data flow's end-to-end delay. Accuracy of DNC results has been improved steadily, resulting in two DNC branches: the classical algebraic analysis and the more recent optimization-based analysis. The optimization-based branch provides a theoretical solution for tight bounds. Its compu...
Faster Deterministic Volume Estimation in the Oracle Model via Thin Lattice Coverings
D.N. Dadush (Daniel)
2015-01-01
We give a 2^O(n)(1+1/ε)^n time and poly(n)-space deterministic algorithm for computing a (1+ε)^n approximation to the volume of a general convex body K, which comes close to matching the (1+c/ε)^(n/2) lower bound for volume estimation in the oracle model by Bárány and Füredi (STOC 1986,
DEFF Research Database (Denmark)
Siampour, Hamidreza; Kumar, Shailesh; Bozhevolnyi, Sergey I.
We report on the fabrication of dielectric-loaded waveguides which are excited by single nitrogen-vacancy (NV) centers in nanodiamonds. The waveguides are deterministically written onto the pre-characterized nanodiamonds by using electron beam lithography of hydrogen silsesquioxane (HSQ) resist on a silver-coated silicon substrate. A change in lifetime for NV centers is observed after fabrication of the waveguides, and antibunching in correlation measurements confirms that the nanodiamonds contain single NV centers.
Liu, Xiaoying; Biswas, Sushmita; Jarrett, Jeremy W; Poutrina, Ekaterina; Urbas, Augustine; Knappenberger, Kenneth L; Vaia, Richard A; Nealey, Paul F
2015-12-02
Plasmonic heterostructures are deterministically constructed in organized arrays through chemical pattern directed assembly, a combination of top-down lithography and bottom-up assembly, and by the sequential immobilization of gold nanoparticles of three different sizes onto chemically patterned surfaces using tailored interaction potentials. These spatially addressable plasmonic chain nanostructures demonstrate localization of linear and nonlinear optical fields as well as nonlinear circular dichroism. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Nuclear Information System (INIS)
1991-03-01
This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart
Analysis of local dependence and multidimensionality in graphical loglinear Rasch models
DEFF Research Database (Denmark)
Kreiner, Svend; Christensen, Karl Bang
local independence; multidimensionality; differential item functioning; uniform local dependence and DIF; graphical Rasch models; loglinear Rasch models
Analysis of Local Dependence and Multidimensionality in Graphical Loglinear Rasch Models
DEFF Research Database (Denmark)
Kreiner, Svend; Christensen, Karl Bang
2004-01-01
Local independence; Multidimensionality; Differential item functioning; Uniform local dependence and DIF; Graphical Rasch models; Loglinear Rasch model
Energy Technology Data Exchange (ETDEWEB)
Morhac, M. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)]. E-mail: fyzimiro@savba.sk; Matousek, V. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia); Turzo, I. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia); Kliman, J. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)
2006-04-01
A multidimensional data acquisition, processing and visualization system to analyze experimental data in nuclear physics is described. It includes a large number of sophisticated algorithms for multidimensional spectra processing, including background elimination, deconvolution, and peak searching and fitting.
The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes
Bogdanova, E. V.; Kuznetsov, A. N.
2017-01-01
The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it requires long calculation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of the graduation project at the NRC "Kurchatov Institute" in cooperation with S. S. Gorodkov and M. A. Kalugin. We consider the 2-D layer of the BN-600 reactor core from the international benchmark test published in the report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macro cross-sections, diffusion coefficients, the effective multiplication factor and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.
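As a toy illustration of the coupling idea, feeding Monte-Carlo-homogenized group constants into a deterministic balance: the two-group infinite-medium multiplication formula below is standard, but the cross-section values are invented for the example, not BN-600 data:

```python
# Hypothetical two-group constants of the kind a Monte Carlo code would
# homogenize for a diffusion solver; the numbers are made up.
nu_sf   = [0.008, 0.135]   # nu * Sigma_f per group (1/cm)
sig_a   = [0.010, 0.080]   # absorption cross-section per group (1/cm)
sig_s12 = 0.020            # downscatter, group 1 -> group 2 (1/cm)

# Two-group infinite-medium balance: the fast-group flux feeds the thermal
# group through downscatter, so phi2/phi1 = sig_s12 / sig_a2, and
# k_inf = production / absorption.
phi_ratio = sig_s12 / sig_a[1]
k_inf = (nu_sf[0] + nu_sf[1] * phi_ratio) / (sig_a[0] + sig_s12)
print(round(k_inf, 4))
```

A real coupled calculation would do this in 15 groups over a 2-D mesh with diffusion coefficients and leakage, but the division of labor is the same: the stochastic code supplies the constants, the deterministic code does the fast repeated solves.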
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
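The final step of any exact approach of this kind, keeping only the non-dominated solutions from a finite candidate set, can be sketched as follows. This is the generic dominance filter for a discrete minimization problem, not the paper's ripple-spreading algorithm:

```python
def pareto_front(points):
    """Exact Pareto front (minimization) of a finite set of objective tuples.

    A point p is kept unless some other point q is <= p in every objective
    and differs from p (i.e. is strictly better in at least one).
    """
    front = []
    for p in points:
        dominated = any(
            q != p and all(q[i] <= p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Five candidate solutions with two objectives each; (4, 6) is dominated
# by (3, 3), so exactly four points survive.
print(pareto_front([(1, 5), (2, 4), (3, 3), (4, 6), (5, 2)]))
# → [(1, 5), (2, 4), (3, 3), (5, 2)]
```

The paper's contribution is in generating a candidate set guaranteed to contain the whole front (via k-best single-objective solutions); once that set exists, a filter like this extracts the exact front.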
Strategies for the preparation of large cluster states using non-deterministic gates
International Nuclear Information System (INIS)
Rohde, Peter P; Barrett, Sean D
2007-01-01
The cluster state model for quantum computation has paved the way for schemes that allow scalable quantum computing, even when using non-deterministic quantum gates. Here the initial step is to prepare a large entangled state using non-deterministic gates. A key question in this context is the relative efficiency of different 'strategies', i.e. in what order should the non-deterministic gates be applied in order to maximize the size of the resulting cluster states? In this paper we consider this issue in the context of 'large' cluster states. Specifically, we assume an unlimited resource of qubits and ask at what steady-state rate 'large' clusters can be prepared from this resource, given an entangling gate with particular characteristics. We measure this rate in terms of the number of entangling gate operations that are applied. Our approach works for a variety of different entangling gate types, with arbitrary failure probability. Our results indicate that strategies whereby one preferentially bonds together clusters of identical length are considerably more efficient than those in which one does not. Additionally, compared to earlier analytic results, our numerical study offers substantially improved resource scaling
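A toy simulation in the spirit of such strategy comparisons. The failure model here, a failed bond removing one qubit from each chain, and all numbers are assumptions for illustration, not the paper's actual gate model:

```python
import random

def longest_after(attempts, p, pair_equal=True, n=2000, seed=0):
    """Longest chain produced by a fixed budget of non-deterministic bond
    attempts on a pool of n single qubits, with success probability p.
    Assumed toy failure model: a failed bond shortens each chain by one."""
    rng = random.Random(seed)
    chains = [1] * n
    for _ in range(attempts):
        if len(chains) < 2:
            break                                  # pool exhausted
        if pair_equal:
            chains.sort()
            a, b = chains.pop(), chains.pop()      # bond two (near-)equal longest
        else:
            a = chains.pop(rng.randrange(len(chains)))
            b = chains.pop(rng.randrange(len(chains)))
        if rng.random() < p:
            chains.append(a + b)                   # success: chains merge
        else:
            chains.extend(c for c in (a - 1, b - 1) if c > 0)
    return max(chains) if chains else 0

# Same gate budget, same success probability: compare length-matched
# bonding against random pairing.
print(longest_after(1000, 0.5), longest_after(1000, 0.5, pair_equal=False))
```

With matched-length bonding the gate budget is concentrated on a few growing chains, which is the qualitative effect the paper quantifies; the exact numbers here depend on the assumed failure model and seed.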
Human Resources Readiness as TSO for Deterministic Safety Analysis on the First NPP in Indonesia
International Nuclear Information System (INIS)
Sony Tjahyani, D. T.
2010-01-01
Government regulation No. 43 of 2006 states that the preliminary safety analysis report and the final safety analysis report are among the requirements which must be satisfied in construction and operation licensing for commercial power reactors (NPPs). The purpose of a safety analysis report is to confirm the adequacy and efficiency of provisions within the defence in depth of a nuclear reactor. Deterministic analysis is used in the safety analysis report. One of the TSO tasks is to evaluate this report at the request of the operator or the regulatory body. This paper discusses human resources readiness as a TSO for deterministic safety analysis of the first NPP in Indonesia. The assessment is done by comparing the analysis steps in SS-23 and SS-30 with the current human resources status of BATAN. The assessment results showed that the human resources for deterministic safety analysis are ready as a TSO, especially to review the preliminary safety analysis report and to revise the final safety analysis report in licensing of the first NPP in Indonesia. However, preparing the safety analysis report itself still requires many more competent human resources. (author)
Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.
Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker
2009-02-21
Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.
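A minimal sketch of the piecewise-deterministic pattern the abstract describes: deterministic evolution of the soluble pool between randomly timed network events. The dynamics, rates and event types below are invented for illustration, and the interval rate is frozen at its starting value rather than thinned exactly:

```python
import math
import random

def simulate(t_end, pool=1.0, inflow=0.2, rate=2.0, cost=0.1, seed=1):
    """Piecewise-deterministic sketch: between events the soluble precursor
    pool relaxes deterministically toward an inflow level; event times form
    a point process whose intensity scales with the pool; each event
    consumes precursor for elongation or branching."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < t_end:
        # Next event time: exponential waiting time with pool-dependent rate
        # (approximation: the rate is held fixed over the interval).
        dt = rng.expovariate(rate * max(pool, 1e-9))
        t += dt
        if t >= t_end:
            break
        # Deterministic flow between events: exponential relaxation.
        pool = inflow + (pool - inflow) * math.exp(-dt)
        # Stochastic event: which network formation mechanism fires.
        kind = 'elongate' if rng.random() < 0.8 else 'branch'
        pool = max(pool - cost, 0.0)       # the event consumes precursor
        events.append((t, kind))
    return pool, events

pool, events = simulate(5.0)
print(len(events), round(pool, 3))
```

The balance between the two event kinds plays the role of the elongation/branching balance the abstract identifies as the main determinant of network architecture.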
Probabilistic approach in treatment of deterministic analyses results of severe accidents
International Nuclear Information System (INIS)
Krajnc, B.; Mavko, B.
1996-01-01
Severe accident sequences resulting in loss of the core geometric integrity have been found to have a small probability of occurrence. Because of their potential consequences to public health and safety, an evaluation of the core degradation progression and the resulting effects on the containment is necessary to determine the probability of a significant release of radioactive materials. This requires assessment of many interrelated phenomena including: steel and zircaloy oxidation, steam spikes, in-vessel debris cooling, potential vessel failure mechanisms, release of core material to the containment, containment pressurization from steam generation or from generation of non-condensable gases, hydrogen burn, and ultimately coolability of degraded core material. To assess the answers from the containment event trees, in the sense of whether a certain phenomenological event would happen or not, plant-specific deterministic analyses should be performed. Because there is a large uncertainty in the prediction of severe accident phenomena in Level 2 analyses (containment event trees), a combination of the probabilistic and deterministic approaches should be used. In fact, the results of the deterministic analyses of severe accidents are treated in a probabilistic manner due to the large uncertainty of the results, which is a consequence of a lack of detailed knowledge. This paper discusses the approach used in many IPEs, which ensures that the probability assigned to a certain question in the event tree represents the probability that the event will or will not happen, and that this probability also includes its uncertainty, which mainly results from lack of knowledge. (author)
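The probabilistic treatment of a deterministic result can be illustrated with a toy calculation: a deterministic criterion is applied to an uncertain prediction, and the exceedance fraction becomes the event-tree branch probability. The 10 vol% burn criterion, the normal uncertainty band, and all numbers below are purely illustrative assumptions, not values from any IPE.

```python
import random

random.seed(3)

# Hypothetical event-tree question: "does a hydrogen burn occur?"
# Suppose the deterministic code predicts a burn above 10 vol% H2, but
# the predicted concentration carries modelling uncertainty, represented
# here (purely for illustration) as a normal distribution.
def predicted_h2_concentration():
    return random.gauss(9.0, 1.5)   # best estimate 9 vol%, sd 1.5 (assumed)

n = 100_000
burns = sum(predicted_h2_concentration() > 10.0 for _ in range(n))
branch_probability = burns / n  # exceedance fraction -> branch probability
```

The branch probability is then neither 0 nor 1, as a purely deterministic best-estimate comparison would give, but a value that carries the uncertainty of the underlying analysis into the event tree.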
International Nuclear Information System (INIS)
Hu Bing; Ye Binbin; Yang Yang; Zhu Kangshun; Kang Zhuang; Kuang Sichi; Luo Lin; Shan Hong
2011-01-01
Purpose: Our aim was to study the quantitative fiber tractography variations and patterns in patients with relapsing-remitting multiple sclerosis (RRMS) and to assess the correlation between quantitative fiber tractography and the Expanded Disability Status Scale (EDSS). Material and methods: Twenty-eight patients with RRMS and 28 age-matched healthy volunteers underwent a diffusion tensor MR imaging study. Quantitative deterministic and probabilistic fiber tractography were generated in all subjects, and the mean numbers of tracked lines and the fiber density were counted. Paired-samples t tests were used to compare the tracked lines and fiber density in RRMS patients with those in controls. A bivariate linear regression model was used to determine the relationship between quantitative fiber tractography and EDSS in RRMS. Results: For both deterministic and probabilistic tractography, the tracked lines and fiber density in RRMS patients were lower than those in controls (P < .001), and both were negatively correlated with EDSS in RRMS (P < .001). The fiber tract disruptions and reductions in RRMS were directly visualized on fiber tractography. Conclusion: Changes of white matter tracts can be detected by quantitative diffusion tensor fiber tractography and correlate with clinical impairment in RRMS.
Chen, Xi; Diez, Matteo; Kandasamy, Manivannan; Zhang, Zhiguo; Campana, Emilio F.; Stern, Frederick
2015-04-01
Advances in high-fidelity shape optimization for industrial problems are presented, based on geometric variability assessment and design-space dimensionality reduction by Karhunen-Loève expansion, metamodels and deterministic particle swarm optimization (PSO). Hull-form optimization is performed for resistance reduction of the high-speed Delft catamaran, advancing in calm water at a given speed, and free to sink and trim. Two feasible sets (A and B) are assessed, using different geometric constraints. Dimensionality reduction for 95% confidence is applied to high-dimensional free-form deformation. Metamodels are trained by design of experiments with URANS; multiple deterministic PSOs achieve a resistance reduction of 9.63% for A and 6.89% for B. Deterministic PSO is found to be effective and efficient, as shown by comparison with stochastic PSO. The optimum for A has the best overall performance over a wide range of speed. Compared with earlier optimization, the present studies provide an additional resistance reduction of 6.6% at 1/10 of the computational cost.
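Deterministic PSO, as opposed to the standard stochastic variant, removes all random numbers so that every run of the optimizer is exactly repeatable. The sketch below is a generic illustration of that idea, not the optimizer used in the study: the lattice initialization and the replacement of the random coefficients r1, r2 by 1 are our assumptions, and chi, c1, c2 are Clerc's constriction coefficients.

```python
def deterministic_pso(f, bounds, n_particles=16, iters=200,
                      chi=0.7298, c1=1.49618, c2=1.49618):
    """Fully deterministic PSO sketch: no random numbers anywhere.
    Particles start on a fixed lattice (odd per-axis strides keep them
    from being collinear) and the usual random coefficients r1, r2 are
    replaced by 1; with Clerc's constriction coefficients the swarm
    contracts onto the best point found."""
    dim = len(bounds)
    xs = []
    for i in range(n_particles):
        point = []
        for d, (lo, hi) in enumerate(bounds):
            frac = ((2 * d + 1) * i % n_particles) / (n_particles - 1)
            point.append(lo + frac * (hi - lo))
        xs.append(point)
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    gi = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = chi * (vs[i][d]
                                  + c1 * (pbest[i][d] - xs[i][d])
                                  + c2 * (gbest[d] - xs[i][d]))
                lo, hi = bounds[d]
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)  # box constraints
            val = f(xs[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, xs[i][:]
                if val < gval:
                    gval, gbest = val, xs[i][:]
    return gbest, gval

# usage: shifted 2-D sphere function; repeated runs give identical output
best, val = deterministic_pso(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                              [(-5, 5), (-5, 5)])
```

Exact repeatability is what makes this variant attractive when each objective evaluation is an expensive URANS run: results can be reproduced and compared without averaging over repeated stochastic runs.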
Lin, Qiang; De Vrieze, Jo; Li, Chaonan; Li, Jiaying; Li, Jiabao; Yao, Minjie; Hedenec, Petr; Li, Huan; Li, Tongtong; Rui, Junpeng; Frouz, Jan; Li, Xiangzhen
2017-10-15
Temperature plays a crucial role in the microbial interactions that affect the stability and performance of anaerobic digestion. In this study, the microbial interactions and their succession in the anaerobic digestion process were investigated at three levels, represented by (1) present micro-organisms, (2) active micro-organisms, and (3) gene expression, under a temperature gradient from 25 to 55 °C. Network topological features indicated a global variation in microbial interactions at different temperatures. The variations of microbial interactions in terms of network modularity and deterministic processes, based on topological features, corresponded well with the variations in methane production, but not with temperature. A common successional pattern of microbial interactions was observed at different temperatures: both deterministic processes and network modularity increased over time during the digestion process. It was concluded that the increase of temperature-mediated network modularity and of deterministic processes shaping the microbial interactions improved the stability and efficiency of the anaerobic digestion process.
DETERMINISTIC EVALUATION OF DELAYED HYDRIDE CRACKING BEHAVIORS IN PHWR PRESSURE TUBES
Directory of Open Access Journals (Sweden)
YOUNG-JIN OH
2013-04-01
Pressure tubes made of Zr-2.5 wt% Nb alloy are important components constituting the reactor coolant pressure boundary of a pressurized heavy water reactor, in which unanticipated through-wall cracks and rupture may occur due to delayed hydride cracking (DHC). The Canadian Standards Association has provided deterministic and probabilistic structural integrity evaluation procedures to protect pressure tubes against DHC. However, intuitive understanding and subsequent assessment of flaw behaviors are still insufficient, due to the complex degradation mechanisms and diverse influential parameters of DHC compared with those of stress corrosion cracking and fatigue crack growth phenomena. In the present study, a deterministic flaw assessment program was developed and applied for systematic integrity assessment of the pressure tubes. Based on the examination results, covering the effects of flaw shapes, pressure tube dimensional changes, hydrogen concentrations of pressure tubes, and plant operation scenarios, a simple, approximate method for effective cooldown operation was proposed to minimize DHC risks. The developed deterministic assessment program for pressure tubes can be used to derive further technical bases for probabilistic damage frequency assessment.
International Nuclear Information System (INIS)
Liu, Shichang; Wang, Guanbo; Wu, Gaochen; Wang, Kan
2015-01-01
Highlights: • DRAGON and DONJON are applied and verified in calculations of research reactors. • Continuous-energy Monte Carlo calculations by RMC are chosen as the references. • The “ECCO” option of DRAGON is suitable for the calculations of research reactors. • Manual modifications of cross-sections are not necessary with DRAGON and DONJON. • DRAGON and DONJON agree well with RMC if appropriate treatments are applied. - Abstract: Simulation of the behavior of plate-type research reactors such as JRR-3M and CARR poses a challenge for traditional neutronics calculation tools and schemes developed for power reactors, due to the complex geometry, high heterogeneity, and large neutron leakage of research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON and DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic approach. The goal of this research is to examine the capability of the deterministic code system DRAGON and DONJON to reliably simulate research reactors. The results indicate that the DRAGON and DONJON code system agrees well with the continuous-energy Monte Carlo simulation on both keff and flux distributions if appropriate treatments (such as the ECCO option) are applied.
A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations
Energy Technology Data Exchange (ETDEWEB)
Haeck, Wim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Saller, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-12
Verification and validation of our solutions for calculating the neutron reactivity of nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers, and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. The issue arises from the discretisation of data used by the deterministic solutions: the multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Most Monte Carlo neutronics codes, in contrast, use continuous-energy descriptions of the neutron interaction data and are not subject to this effect. Resonances in cross sections can change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect whereby the average cross section in groups with strong resonances can be strongly affected, as neutrons within the material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
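The size of the self-shielding effect can be demonstrated with a small numerical experiment: average a resonant cross section over a group once with a flat flux (infinitely dilute) and once with the textbook narrow-resonance weighting flux proportional to 1/σ. The Lorentzian resonance below is hypothetical, invented for illustration, and is not real nuclear data.

```python
# Hypothetical single resonance (illustrative numbers, not real data):
# a 1000 b Lorentzian peak on a 10 b smooth background.
def sigma(E, E0=6.67, gamma=0.03, peak=1000.0, background=10.0):
    return background + peak / (1.0 + ((E - E0) / gamma) ** 2)

# Fine energy grid across one group (6.0-7.5 eV) containing the resonance
Es = [6.0 + i * 0.001 for i in range(1501)]

# Infinitely dilute (unshielded) group average: flat weighting flux
flat = sum(sigma(E) for E in Es) / len(Es)

# Self-shielded group average: narrow-resonance weighting flux ~ 1/sigma,
# which depresses the flux exactly where the cross section peaks.
weights = [1.0 / sigma(E) for E in Es]
shielded = sum(sigma(E) * w for E, w in zip(Es, weights)) / sum(weights)

ratio = flat / shielded  # the shielded average is several times smaller
```

The two averages differ by a large factor even for this single mild resonance, which is exactly why deterministic multigroup solvers need a self-shielding treatment while continuous-energy Monte Carlo codes do not.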
Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?
Kubota, Noriaki
2018-03-01
The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely; these two quantities are therefore observed as stochastic. For large samples (say 1000 mL or more), by contrast, the induction time and MSZW are observed as deterministic quantities. The reason for this experimental difference is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and the supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively; the number of crystals at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation, so t and ΔT are stochastic. For large samples, t and ΔT are defined as the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det, for isothermal and polythermal conditions respectively. These points are reached only after a large number of crystals have accumulated, so t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should therefore not be attributed to a change in nucleation mechanisms at the molecular level; it can be just a matter of how t and ΔT are experimentally defined.
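The small-sample versus large-sample contrast can be reproduced with a toy Monte Carlo in which nucleation is a Poisson process of rate J·V. The rate, volumes, and detector threshold below are arbitrary illustrative values, not the paper's parameters.

```python
import random
import statistics

random.seed(42)
J = 5.0  # nucleation rate per unit volume per unit time (arbitrary units)

def induction_time(volume, detect_density):
    """Time at which the crystal number density N/V first reaches
    detect_density, when nucleation events form a Poisson process of
    rate J*volume (inter-event waiting times are exponential)."""
    needed = max(1, round(detect_density * volume))
    return sum(random.expovariate(J * volume) for _ in range(needed))

# Small sample: detection means the first single crystal, so t is an
# exponential random variable and its coefficient of variation is ~1.
small = [induction_time(volume=0.01, detect_density=1.0) for _ in range(2000)]

# Large sample: the same detector density requires ~1000 crystals, and
# the sum of many waiting times concentrates (law of large numbers).
large = [induction_time(volume=1000.0, detect_density=1.0) for _ in range(500)]

def cv(xs):  # coefficient of variation: scatter relative to the mean
    return statistics.stdev(xs) / statistics.mean(xs)
```

The simulated coefficient of variation is near 1 for the small sample and near 1/√1000 for the large one, with the nucleation mechanism identical in both cases, which is the paper's point.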
Conservation laws for multidimensional systems and related linear algebra problems
Igonin, Sergei
2002-01-01
We consider multidimensional systems of PDEs of generalized evolution form with t-derivatives of arbitrary order on the left-hand side and with the right-hand side dependent on lower order t-derivatives and arbitrary space derivatives. For such systems we find an explicit necessary condition for the
Posterior Predictive Model Checking for Multidimensionality in Item Response Theory
Levy, Roy; Mislevy, Robert J.; Sinharay, Sandip
2009-01-01
If data exhibit multidimensionality, key conditional independence assumptions of unidimensional models do not hold. The current work pursues posterior predictive model checking, a flexible family of model-checking procedures, as a tool for criticizing models due to unaccounted for dimensions in the context of item response theory. Factors…
Income Tax Preparation Assistance Service Learning Program: A Multidimensional Assessment
Aldridge, Richard; Callahan, Richard A.; Chen, Yining; Wade, Stacy R.
2015-01-01
The authors present a multidimensional assessment of the outcomes and benefits of an income tax preparation assistance (ITPA) service learning program. They measure the perceived proximate benefits at the delivery of the service program, the actual learning outcome benefits prior to graduation, and the perceived long-term benefits from a…
Visual Analysis of Multi-Dimensional Categorical Data Sets
Broeksema, Bertjan; Telea, Alexandru C.; Baudel, Thomas
2013-01-01
We present a set of interactive techniques for the visual analysis of multi-dimensional categorical data. Our approach is based on multiple correspondence analysis (MCA), which allows one to analyse relationships, patterns, trends and outliers among dependent categorical variables. We use MCA as a
Generalizations of Paradoxical Results in Multidimensional Item Response Theory
Jordan, Pascal; Spiess, Martin
2012-01-01
Maximum likelihood and Bayesian ability estimation in multidimensional item response models can lead to paradoxical results as proven by Hooker, Finkelman, and Schwartzman ("Psychometrika" 74(3): 419-442, 2009): Changing a correct response on one item into an incorrect response may produce a higher ability estimate in one dimension.…
The multi-dimensional analysis of social exclusion
Levitas, Ruth; Pantazis, Christina; Fahmy, Eldin; Gordon, David; Lloyd, Eva; Patsios, Demy
2007-01-01
The purpose of this project was to review existing sources on multi-dimensional disadvantage or severe forms of social exclusion characterised as ‘deep exclusion’; to recommend possibilities for secondary analysis of existing data sets to explore the dynamics of ‘deep exclusion’; to identify any relevant gaps in the knowledge base; and to recommend research strategies for filling such gaps.
Multidimensional Scaling of High School Students' Perceptions of Academic Dishonesty
Schmelkin, Liora Pedhazur; Gilbert, Kimberly A.; Silva, Rebecca
2010-01-01
Although cheating on tests and other forms of academic dishonesty are considered rampant, no standard definition of academic dishonesty exists. The current study was conducted to investigate the perceptions of academic dishonesty in high school students, utilizing an innovative methodology, multidimensional scaling (MDS). Two methods were used to…
Turkish Validity Examination of the Multidimensional Students' Life Satisfaction Scale
Irmak, Sezgin; Kuruuzum, Ayse
2009-01-01
The validation studies of the Multidimensional Students' Life Satisfaction Scale (MSLSS) have been conducted with samples from different nations but mostly from western individualistic cultures. Life satisfaction and its constructs could differ depending on cultural characteristics and life satisfaction scales should be validated in different…
A Review of the Brief Multidimensional Students' Life Satisfaction Scale
Huebner, E. Scott; Seligson, Julie L.; Valois, Robert F.; Suldo, Shannon M.
2006-01-01
There are few psychometrically sound measures of life satisfaction suitable for children and adolescents. The purpose of this paper is to describe the rationale, development, and psychometric properties of a brief multidimensional life satisfaction scale appropriate for use with children of ages 8-18. The paper summarizes extant studies of its…
The multidimensional nature of HIV stigma: evidence from ...
African Journals Online (AJOL)
HIV stigma continues to be a major challenge to addressing HIV/AIDS in various countries in sub-Saharan Africa, including Mozambique. This paper explores the multidimensional nature of HIV stigma through the thematic analysis of five qualitative studies conducted in high HIV prevalence provinces in Mozambique ...
Loglinear multidimensional IRT models for polytomously scored items
Kelderman, Henk
1988-01-01
A loglinear item response theory (IRT) model is proposed that relates polytomously scored item responses to a multidimensional latent space. Each item may have a different response function where each item response may be explained by one or more latent traits. Item response functions may follow a
Multidimensional scaling technique for analysis of magnetic storms ...
Indian Academy of Sciences (India)
Multidimensional scaling is a powerful technique for analysis of data. The latitudinal dependence of geomagnetic field ... at best an approximation of the real situation, but still it may contain a surprising amount of useful ... (oscillations) is a function of latitude and local time. Close to the dip equator just south of Trivan- ...
A Template Model for Multidimensional Inter-Transactional Association Rules
Feng, L.; Yu, J.X.; Lu, H.J.; Han, J.W.
2002-01-01
Multidimensional inter-transactional association rules extend the traditional association rules to describe more general associations among items with multiple properties across transactions. “After McDonald and Burger King open branches, KFC will open a branch two months later and one mile away.”
Multidimensional Poverty in China: Findings Based on the CHNS
Yu, Jiantuo
2013-01-01
This paper estimates multidimensional poverty in China by applying the Alkire-Foster methodology to the China Health and Nutrition Survey 2000-2009 data. Five dimensions are included: income, living standard, education, health and social security. Results suggest that rapid economic growth has resulted not only in a reduction in income poverty but…
Multidimensional Model of Trauma and Correlated Antisocial Personality Disorder
Martens, Willem H. J.
2005-01-01
Many studies have revealed an important relationship between psychosocial trauma and antisocial personality disorder. A multidimensional model is presented which describes the psychopathological route from trauma to antisocial development. A case report is also included that can illustrate the etiological process from trauma to severe antisocial…
Multi-dimensional passive sampled Port-Hamiltonian systems
Franken, M.C.J.; Reilink, Rob; Misra, Sarthak; Stramigioli, Stefano
2010-01-01
Passivity of virtual environments running in discrete time is a sufficient condition for stability of the system. The framework for passive sampled Port-Hamiltonian systems allows multi-dimensional virtual environments exhibiting internal dynamic behavior to be computed on a discrete medium in a
Development and Validation of Multi-Dimensional Personality ...
African Journals Online (AJOL)
This study was carried out to establish the scientific processes for the development and validation of Multi-dimensional Personality Inventory (MPI). The process of development and validation occurred in three phases with five components of Agreeableness, Conscientiousness, Emotional stability, Extroversion, and ...
A Multidimensional Model for Peer Evaluation of Teaching Effectiveness.
Kumaravadivelu, B.
1995-01-01
Proposes a broader concept of college faculty peer evaluation, in which perspectives of teacher, learner, and observer are considered. Argues that three basic principles (intention/interpretation, advisement/appraisement, acceptability/accessibility) must necessarily and minimally guide peer evaluation. Presents a four-part, multidimensional peer…
Five Evils: Multidimensional Poverty and Race in America
Reeves, Richard; Rodrigue, Edward; Kneebone, Elizabeth
2016-01-01
Poverty is about a lack of money, but it's not only about that. As a lived experience, poverty is also characterized by ill health, insecurity, discomfort, isolation, and more. To put it another way: Poverty is multidimensional, and its dimensions often cluster together to intensify the negative effects of being poor. In this first of a two-part…
Adaptation of the multidimensional scale of perceived social support ...
African Journals Online (AJOL)
Background: The Multidimensional Scale of Perceived Social Support (MSPSS) was developed in the USA. The adequacy of its use in Uganda to guarantee its reliability and validity has not been ascertained. Aim: Thus the aim of the present study was to adapt the MSPSS scale by testing the validity and reliability of the ...
Development and Validation of the Multidimensional State Boredom Scale
Fahlman, Shelley A.; Mercer-Lynn, Kimberley B.; Flora, David B.; Eastwood, John D.
2013-01-01
This article describes the development and validation of the Multidimensional State Boredom Scale (MSBS)--the first and only full-scale measure of state boredom. It was developed based on a theoretically and empirically grounded definition of boredom. A five-factor structure of the scale (Disengagement, High Arousal, Low Arousal, Inattention, and…
Multidimensional profiling of cell surface proteins and nuclear markers
Energy Technology Data Exchange (ETDEWEB)
Han, Ju; Chang, Hang; Andarawewa, Kumari; Yaswen, Paul; Helen Barcellos-Hoff, Mary; Parvin, Bahram
2009-01-30
Cell membrane proteins play an important role in tissue architecture and cell-cell communication. We hypothesize that segmentation and multidimensional characterization of the distribution of cell membrane proteins, on a cell-by-cell basis, enable improved classification of treatment groups and identify important characteristics that can otherwise be hidden. We have developed a series of computational steps to (i) delineate cell membrane protein signals and associate them with a specific nucleus; (ii) compute a coupled representation of the multiplexed DNA content with membrane proteins; (iii) rank computed features associated with such a multidimensional representation; (iv) visualize selected features for comparative evaluation through heatmaps; and (v) discriminate between treatment groups in an optimal fashion. The novelty of our method is in the segmentation of the membrane signal and the multidimensional representation of phenotypic signature on a cell-by-cell basis. To test the utility of this method, the proposed computational steps were applied to images of cells that have been irradiated with different radiation qualities in the presence and absence of other small molecules. These samples are labeled for their DNA content and E-cadherin membrane proteins. We demonstrate that multidimensional representations of cell-by-cell phenotypes improve predictive and visualization capabilities among different treatment groups, and identify hidden variables.
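Step (iii) of the pipeline, ranking computed features by how well they separate treatment groups, might look as follows on synthetic data. The feature names, distributions, and effect sizes are invented for illustration and are not the study's measurements or its actual ranking method.

```python
import random
import statistics

random.seed(0)

# Synthetic per-cell signatures with three hypothetical features:
# [DNA content, mean E-cadherin membrane intensity, membrane/nucleus ratio]
def cell(treated):
    shifts = [0.0, -0.8, -0.5] if treated else [0.0, 0.0, 0.0]
    return [random.gauss(1.0 + s, 0.3) for s in shifts]

control = [cell(False) for _ in range(200)]
treated = [cell(True) for _ in range(200)]

def tstat(a, b):
    """Welch two-sample t statistic."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / (va / len(a) + vb / len(b)) ** 0.5

# Rank features by how strongly they separate the two treatment groups;
# the top-ranked features are the ones worth plotting in a heatmap.
scores = sorted(
    ((abs(tstat([c[f] for c in control], [c[f] for c in treated])), f)
     for f in range(3)),
    reverse=True)
```

In this synthetic setup the membrane-intensity feature (index 1), given the largest simulated shift, ends up ranked first, while the unshifted DNA-content feature ranks last.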
Stylistic Patterns in Language Teaching Research Articles: A Multidimensional Analysis
Kitjaroenpaiboon, Woravit; Getkham, Kanyarat
2016-01-01
This paper presents the results of a multidimensional analysis to investigate stylistic patterns and their communicative functions in language teaching research articles. The findings were that language teaching research articles contained six stylistic patterns and communicative functions. Pattern I consisted of seven salient positive features…
User walkthrough of multimodal access to multidimensional databases
Esch van-Bussemakers, M.P.; Cremers, A.H.M.
2004-01-01
This paper describes a user walkthrough that was conducted with an experimental multimodal dialogue system to access a multidimensional music database using a simulated mobile device (including a technically challenging four-PHANToM-setup). The main objectives of the user walkthrough were to assess
The Past, Present, and Future of Multidimensional Scaling
P.J.F. Groenen (Patrick); I. Borg (Ingwer)
2013-01-01
Multidimensional scaling (MDS) has established itself as a standard tool for statisticians and applied researchers. Its success is due to its simple and easily interpretable representation of potentially complex structural data. These data are typically embedded into a 2-dimensional map,
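Classical (Torgerson) MDS, the simplest member of the family, can be sketched from scratch: double-center the squared distance matrix and embed with the top eigenvectors. This is the generic textbook algorithm, not anything specific to the review above; the power-iteration eigensolver is used only to keep the sketch dependency-free.

```python
import math

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: B = -1/2 * J D^2 J, then embed with
    the top-k eigenpairs of B. Assumes D comes from Euclidean distances,
    so B is positive semidefinite and its top eigenvalues are >= 0."""
    n = len(D)
    D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in D2]
    tot = sum(row) / n
    # double centering
    B = [[-0.5 * (D2[i][j] - row[i] - row[j] + tot) for j in range(n)]
         for i in range(n)]

    def power_iter(M, iters=500):
        v = [1.0 + 0.01 * i for i in range(n)]  # fixed non-degenerate start
        lam = 0.0
        for _ in range(iters):
            w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
            lam = max(abs(x) for x in w) or 1.0
            v = [x / lam for x in w]
        norm = math.sqrt(sum(x * x for x in v))
        return lam, [x / norm for x in v]

    coords = [[0.0] * k for _ in range(n)]
    for d in range(k):
        lam, v = power_iter(B)
        for i in range(n):
            coords[i][d] = math.sqrt(max(lam, 0.0)) * v[i]
        # deflate to expose the next eigenpair
        B = [[B[i][j] - lam * v[i] * v[j] for j in range(n)] for i in range(n)]
    return coords

# usage: three collinear points with pairwise distances 1, 1, 2
D = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
X = classical_mds(D, k=1)
```

On this line-metric example the recovered 1-D coordinates reproduce the input distances exactly, which is the sanity check one expects from a distance-preserving embedding.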
multidimensional health locus of control scales: applicability among
African Journals Online (AJOL)
2002-03-03
Background: Primary preventive approaches are likely to be more effective if the motivational factors of health ... among the most important motivational factors, commonly assessed with the multidimensional health locus of control (MHLC) scale ... assessing one internal and one external dimension. The MHLC scale ...
Income and beyond: Multidimensional Poverty in Six Latin American Countries
Battiston, Diego; Cruces, Guillermo; Lopez-Calva, Luis Felipe; Lugo, Maria Ana; Santos, Maria Emma
2013-01-01
This paper studies multidimensional poverty for Argentina, Brazil, Chile, El Salvador, Mexico and Uruguay for the period 1992-2006. The approach overcomes the limitations of the two traditional methods of poverty analysis in Latin America (income-based and unmet basic needs) by combining income with five other dimensions: school attendance for…
Assessing Multidimensional Energy Literacy of Secondary Students Using Contextualized Assessment
Chen, Kuan-Li; Liu, Shiang-Yao; Chen, Po-Hsi
2015-01-01
Energy literacy is multidimensional, comprising broad content knowledge as well as affect and behavior. Our previous study has defined four core dimensions for the assessment framework, including energy concepts, reasoning on energy issues, low-carbon lifestyle, and civic responsibility for a sustainable society. The present study compiled a…
The Measurement of Multidimensional Gender Inequality: Continuing the Debate
Permanyer, Inaki
2010-01-01
The measurement of multidimensional gender inequality is an increasingly important topic that has very relevant policy applications and implications but which has not received much attention from the academic literature. In this paper I make a comprehensive and critical review of the indices proposed in recent years in order to systematise the…
Multidimensional linearizable system of n-wave-type equations
Zenchuk, A. I.
2017-01-01
We propose a linearizable version of a multidimensional system of n-wave-type nonlinear partial differential equations (PDEs). We derive this system using the spectral representation of its solution via a procedure similar to the dressing method for nonlinear PDEs integrable by the inverse scattering transform method. We show that the proposed system is completely integrable and construct a particular solution.
Multidimensional scaling technique for analysis of magnetic storms ...
Indian Academy of Sciences (India)
the amplitude of H decreases progressively with increasing latitudes at the Indian chain of observatories (Rastogi et al 1997). The aim of this study is to apply the multidimensional scaling technique to examine the accuracy of results in comparison with the conventional method of correlation coefficients in the ...
Use of multidimensional analysis for farm decision making following ...
African Journals Online (AJOL)
Use of multidimensional analysis for farm decision making following performance of five cowpea varieties on a farmer's field. ... The growth parameters were fitted to a multiple (step add-delete) regression model with cowpea yield as the response variable. The analysis showed that the growth parameter contributed in some ...
Identification of peaks in multidimensional coincidence γ-ray spectra
Energy Technology Data Exchange (ETDEWEB)
Morhac, Miroslav E-mail: fyzimiro@savba.sk; Kliman, Jan; Matousek, Vladislav; Veselsky, Martin; Turzo, Ivan
2000-03-21
In the paper a new algorithm to find peaks in two-, three- and multidimensional spectra, measured in large multidetector γ-ray arrays, is derived. Given the dimension m, the algorithm is selective to m-fold coincidence peaks. It is insensitive to intersections of lower-fold coincidences, hereinafter called ridges.
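The ridge-rejection idea, accepting a channel only if it is a maximum along every axis, can be illustrated for m = 2 with a toy search. This is a simplification for illustration (the paper's algorithm is more elaborate); the threshold and spectrum below are invented.

```python
def find_peaks_2d(spec, threshold=5.0):
    """Toy 2-fold coincidence peak search: a channel is accepted only
    if it is a strict local maximum along *every* axis, which rejects
    ridges (maxima along one axis only) produced by lower-fold
    coincidences."""
    peaks = []
    rows, cols = len(spec), len(spec[0])
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            c = spec[i][j]
            if c < threshold:
                continue
            if (c > spec[i - 1][j] and c > spec[i + 1][j] and
                    c > spec[i][j - 1] and c > spec[i][j + 1]):
                peaks.append((i, j))
    return peaks

# usage: one genuine 2-D peak at (2, 3) plus a ridge along row 5
spec = [[0.0] * 8 for _ in range(8)]
spec[2][3] = 10.0
for j in range(8):
    spec[5][j] = 7.0  # ridge: maximal across rows, flat along columns

peaks = find_peaks_2d(spec)  # peaks == [(2, 3)]: the ridge is rejected
```

The ridge cells fail the strict-maximum test along the flat direction, so only the genuine coincidence peak survives, which is the selectivity property the abstract describes.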
Heuristic Constraint Management Methods in Multidimensional Adaptive Testing
Born, Sebastian; Frey, Andreas
2017-01-01
Although multidimensional adaptive testing (MAT) has been proven to be highly advantageous with regard to measurement efficiency when several highly correlated dimensions are measured, there are few operational assessments that use MAT. This may be due to issues of constraint management, which is more complex in MAT than it is in unidimensional…
Image matrix processor for fast multi-dimensional computations
Roberson, George P.; Skeate, Michael F.
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
Characterizing implementable allocation rules in multi-dimensional environments
Berger, A.; Müller, R.J.; Naeemi, S.H.
2014-01-01
We study characterizations of implementable allocation rules when types are multi-dimensional, monetary transfers are allowed, and agents have quasi-linear preferences over outcomes and transfers. Every outcome is associated with a continuous valuation function that maps an agent's type to his value
Multidimensional poverty dynamics in Ethiopia: how do they differ ...
African Journals Online (AJOL)
Poverty can take many different forms, ranging widely over dimensions both monetary, such as consumption or income, and nonmonetary, such as health and education. One large class of nonmonetary measures of poverty is the multidimensional poverty index (MPI); recent studies document that people identified as poor ...
A multidimensional subdiffusion model: An arbitrage-free market
Li, Guo-Hua; Zhang, Hong; Luo, Mao-Kang
2012-12-01
To capture the subdiffusive characteristics of financial markets, the subordinated process, directed by the inverse α-stable subordinator Sα(t) for 0 &lt; α &lt; 1, is employed. Using a martingale approach, we prove that the multidimensional subdiffusion model is arbitrage-free, and also give an arbitrage-free pricing rule for contingent claims associated with the martingale measure.
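The trapping behavior that the inverse α-stable subordinator induces can be illustrated with a continuous-time random walk whose waiting times are heavy-tailed; for 0 &lt; α &lt; 1 its scaling limit is Brownian motion time-changed by such a subordinator. A minimal sketch, not the paper's construction; the function name and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_path(alpha: float, n_jumps: int):
    """Continuous-time random walk with heavy-tailed (Pareto) waiting times.

    For 0 < alpha < 1 the long waits produce the trapping periods
    characteristic of subdiffusion; the scaling limit is Brownian motion
    evaluated at the inverse alpha-stable subordinator S_alpha(t).
    """
    waits = rng.pareto(alpha, n_jumps) + 1.0   # heavy-tailed waiting times
    times = np.cumsum(waits)                   # event times (strictly increasing)
    steps = rng.standard_normal(n_jumps)       # Gaussian jumps
    position = np.cumsum(steps)
    return times, position

t, x = ctrw_path(alpha=0.7, n_jumps=10_000)
```

Occasional enormous waiting times dominate `t`, so the walker spends long stretches frozen in place, which is the subdiffusive signature the model is built to capture.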
A MULTIDIMENSIONAL AND MULTIPHYSICS APPROACH TO NUCLEAR FUEL BEHAVIOR SIMULATION
Energy Technology Data Exchange (ETDEWEB)
R. L. Williamson; J. D. Hales; S. R. Novascone; M. R. Tonks; D. R. Gaston; C. J. Permann; D. Andrs; R. C. Martineau
2012-04-01
Important aspects of fuel rod behavior, for example pellet-clad mechanical interaction (PCMI), fuel fracture, oxide formation, non-axisymmetric cooling, and response to fuel manufacturing defects, are inherently multidimensional in addition to being complicated multiphysics problems. Many current modeling tools are strictly 2D axisymmetric or even 1.5D. This paper outlines the capabilities of a new fuel modeling tool able to analyze either 2D axisymmetric or fully 3D models. These capabilities include temperature-dependent thermal conductivity of fuel; swelling and densification; fuel creep; pellet fracture; fission gas release; cladding creep; irradiation growth; and gap mechanics (contact and gap heat transfer). The need for multiphysics, multidimensional modeling is then demonstrated through a discussion of results for a set of example problems. The first, a 10-pellet rodlet, demonstrates the viability of the solution method employed. This example highlights the effect of our smeared cracking model and also shows the multidimensional nature of discrete fuel pellet modeling. The second example relies on our multidimensional, multiphysics approach to analyze a missing pellet surface problem. As a final example, we show a lower-length-scale simulation coupled to a continuum-scale simulation.
Multi-dimensional interventions of information and communication ...
African Journals Online (AJOL)
Multi-dimensional interventions of information and communication technologies for woman empowerment in Nigeria. Stella MN Anasi. Abstract. No Abstract. Lagos Journal of Library and Information Science Vol. 3(1) 2005: 56-66.
Confirmatory Factor Analysis of the Hewitt-Multidimensional Perfectionism Scale
Barut, Yasar
2015-01-01
Various studies on the conceptual framework of perfectionism construct use Hewitt Multi-dimensional Perfectionism Scale (HMPS), as a basic approach. The measure has a prominent role with respect to the theoretical considerations of perfectionism dimensions. This study aimed to evaluate the psychometric properties of the Turkish version of the…
A comparison of multidimensional scaling methods for perceptual mapping
Bijmolt, T.H.A.; Wedel, M.
Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare
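The dissimilarity-based perceptual mapping the abstract compares can be illustrated with classical (Torgerson) multidimensional scaling, the eigendecomposition baseline against which maximum-likelihood methods are usually contrasted. A self-contained sketch, not the authors' comparison procedure:

```python
import numpy as np

def classical_mds(D: np.ndarray, k: int = 2) -> np.ndarray:
    """Classical (Torgerson) multidimensional scaling.

    D is a symmetric matrix of pairwise dissimilarities; returns a
    k-dimensional configuration whose Euclidean distances approximate D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered squared dissimilarities
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]       # keep the k largest eigenvalues
    L = np.sqrt(np.clip(vals[order], 0.0, None))
    return vecs[:, order] * L

# Three points on a line are recovered exactly (up to reflection/shift).
pts = np.array([[0.0], [1.0], [3.0]])
D = np.abs(pts - pts.T)
X = classical_mds(D, k=1)
```

For truly Euclidean dissimilarities the recovered inter-point distances match D exactly; perceptual dissimilarity judgments are noisier, which is what motivates the probabilistic, maximum-likelihood formulations discussed in the article.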
Application of Andrew's Plots to Visualization of Multidimensional Data
Grinshpun, Vadim
2016-01-01
Importance: The article raises a point of visual representation of big data, recently considered to be demanded for many scientific and real-life applications, and analyzes particulars for visualization of multi-dimensional data, giving examples of the visual analytics-related problems. Objectives: The purpose of this paper is to study application…
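An Andrews plot maps each multi-dimensional observation x to a single curve f_x(t) = x1/√2 + x2·sin(t) + x3·cos(t) + x4·sin(2t) + …, so similar observations trace similar curves. A minimal sketch of the mapping (the function name is illustrative):

```python
import numpy as np

def andrews_curve(x: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Andrews curve f_x(t) for one multi-dimensional observation x.

    f_x(t) = x1/sqrt(2) + x2 sin(t) + x3 cos(t) + x4 sin(2t) + ...
    evaluated over t in [-pi, pi]; similar points yield similar curves.
    """
    result = np.full_like(t, x[0] / np.sqrt(2.0))
    for i, xi in enumerate(x[1:], start=1):
        k = (i + 1) // 2                     # harmonic index 1, 1, 2, 2, ...
        result += xi * (np.sin(k * t) if i % 2 == 1 else np.cos(k * t))
    return result

t = np.linspace(-np.pi, np.pi, 201)
curve = andrews_curve(np.array([1.0, 2.0, 3.0]), t)
```

Plotting one such curve per observation turns a table of high-dimensional points into an overlay of curves, the visual-analytics representation the article examines.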
Asymptotic time dependent neutron transport in multidimensional systems
International Nuclear Information System (INIS)
Nagy, M.E.; Sawan, M.E.; Wassef, W.A.; El-Gueraly, L.A.
1983-01-01
A model which predicts the asymptotic time behavior of the neutron distribution in multi-dimensional systems is presented. The model is based on the kernel factorization method used for stationary neutron transport in a rectangular parallelepiped. The accuracy of diffusion theory in predicting the asymptotic time dependence is assessed. The use of neutron pulse experiments for predicting the diffusion parameters is also investigated
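The asymptotic time behavior referred to here is, in diffusion theory, the exponential decay of the fundamental mode. A standard sketch of that relation (the symbols below are the usual diffusion-theory quantities, not taken from the abstract):

```latex
\phi(\mathbf{r},t)\;\xrightarrow{\,t\to\infty\,}\;\phi_0(\mathbf{r})\,e^{-\lambda_0 t},
\qquad
\lambda_0 = v\,\Sigma_a + v D\,B_g^2
```

Here $v\Sigma_a$ is the absorption rate, $vD$ the diffusion coefficient, and $B_g^2$ the geometric buckling of the system. Fitting the measured decay constant $\lambda_0$ against $B_g^2$ for assemblies of different size is how pulsed-neutron experiments extract the diffusion parameters, the use assessed in the abstract.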
Analysis of Multidimensional Poverty: Theory and Case Studies ...
International Development Research Centre (IDRC) Digital Library (Canada)
18 August 2009. This book presents a new method for measuring multidimensional poverty. The author critically analyzes various statistical approaches, and proposes a new way of applying a factorial technique, Multiple Correspondence Analysis, to poverty analysis. The core of this new approach rests on the identification ...
Energy Technology Data Exchange (ETDEWEB)
Charbonnier, D.
2004-12-15
The physical phenomena observed in turbomachines are generally three-dimensional and unsteady. A recent study revealed that a three-dimensional steady simulation can reproduce the time-averaged unsteady phenomena, since the steady flow field equations integrate deterministic stresses. The objective of this work is thus to develop an unsteady deterministic stresses model. The analogy with turbulence makes it possible to write transport equations for these stresses. The equations are implemented in a steady flow solver, and a model for the energy deterministic fluxes is also developed and implemented. Finally, this work shows that a three-dimensional steady simulation, by taking into account unsteady effects with transport equations of deterministic stresses, increases the computing time by only approximately 30%, which remains very attractive compared to an unsteady simulation. (author)
Multi-dimensional photonic states from a quantum dot
Lee, J. P.; Bennett, A. J.; Stevenson, R. M.; Ellis, D. J. P.; Farrer, I.; Ritchie, D. A.; Shields, A. J.
2018-04-01
Quantum states superposed across multiple particles or degrees of freedom offer an advantage in the development of quantum technologies. Creating these states deterministically and with high efficiency is an ongoing challenge. A promising approach is the repeated excitation of multi-level quantum emitters, which have been shown to naturally generate light with quantum statistics. Here we describe how to create one class of higher dimensional quantum state, a so called W-state, which is superposed across multiple time bins. We do this by repeated Raman scattering of photons from a charged quantum dot in a pillar microcavity. We show this method can be scaled to larger dimensions with no reduction in coherence or single-photon character. We explain how to extend this work to enable the deterministic creation of arbitrary time-bin encoded qudits.
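The time-bin W state described here is one photon equally superposed over d time bins; as a state vector it is simply the equal-amplitude superposition of the d single-occupation basis states. A minimal numerical sketch (the function name is illustrative, and the vector representation abstracts away the Raman-scattering physics):

```python
import numpy as np

def w_state(d: int) -> np.ndarray:
    """Time-bin W state of dimension d.

    Basis state k means 'the single photon occupies time bin k';
    the W state superposes all d of them with equal amplitude 1/sqrt(d).
    """
    return np.ones(d) / np.sqrt(d)

psi = w_state(4)
overlap = abs(np.vdot(w_state(4), np.eye(4)[0])) ** 2   # prob. of finding bin 0
```

Each repeated Raman-scattering round adds one more time bin, which is the sense in which the scheme scales to higher-dimensional qudits: the state stays normalized while the probability of any single bin falls as 1/d.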
Exploring and linking biomedical resources through multidimensional semantic spaces.
Berlanga, Rafael; Jiménez-Ruiz, Ernesto; Nebot, Victoria
2012-01-25
The semantic integration of biomedical resources is still a challenging issue which is required for effective information processing and data analysis. The availability of comprehensive knowledge resources such as biomedical ontologies and integrated thesauri greatly facilitates this integration effort by means of semantic annotation, which allows disparate data formats and contents to be expressed under a common semantic space. In this paper, we propose a multidimensional representation for such a semantic space, where dimensions regard the different perspectives in biomedical research (e.g., population, disease, anatomy and protein/genes). This paper presents a novel method for building multidimensional semantic spaces from semantically annotated biomedical data collections. This method consists of two main processes: knowledge and data normalization. The former one arranges the concepts provided by a reference knowledge resource (e.g., biomedical ontologies and thesauri) into a set of hierarchical dimensions for analysis purposes. The latter one reduces the annotation set associated to each collection item into a set of points of the multidimensional space. Additionally, we have developed a visual tool, called 3D-Browser, which implements OLAP-like operators over the generated multidimensional space. The method and the tool have been tested and evaluated in the context of the Health-e-Child (HeC) project. Automatic semantic annotation was applied to tag three collections of abstracts taken from PubMed, one for each target disease of the project, the Uniprot database, and the HeC patient record database. We adopted the UMLS Meta-thesaurus 2010AA as the reference knowledge resource. Current knowledge resources and semantic-aware technology make possible the integration of biomedical resources. Such an integration is performed through semantic annotation of the intended biomedical data resources. This paper shows how these annotations can be exploited for
Multidimensional Measurement of Poverty among Women in Sub-Saharan Africa
Batana, Yele Maweki
2013-01-01
Since the seminal work of Sen, poverty has been recognized as a multidimensional phenomenon. The recent availability of relevant databases renewed the interest in this approach. This paper estimates multidimensional poverty among women in fourteen Sub-Saharan African countries using the Alkire and Foster multidimensional poverty measures, whose…
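The Alkire-Foster adjusted headcount ratio used in the paper is M0 = H × A: the fraction of people identified as poor (those whose weighted deprivation score meets a cutoff k) times their average deprivation intensity. A self-contained sketch with made-up deprivation data, not the paper's dataset:

```python
import numpy as np

def alkire_foster_m0(deprivations: np.ndarray, weights: np.ndarray, k: float) -> float:
    """Adjusted headcount ratio M0 of the Alkire-Foster method.

    deprivations: 0/1 matrix (people x indicators), 1 = deprived.
    weights: indicator weights summing to 1.
    k: poverty cutoff on the weighted deprivation score.
    Returns M0 = H * A (incidence times intensity among the poor).
    """
    scores = deprivations @ weights          # weighted deprivation score per person
    poor = scores >= k                       # identification step
    if not poor.any():
        return 0.0
    H = poor.mean()                          # headcount ratio (incidence)
    A = scores[poor].mean()                  # average intensity among the poor
    return H * A

dep = np.array([[1, 1, 0],
                [0, 0, 0],
                [1, 1, 1],
                [0, 1, 0]])
w = np.array([1 / 3, 1 / 3, 1 / 3])
m0 = alkire_foster_m0(dep, w, k=2 / 3)
```

With k = 2/3, persons 0 and 2 are poor (H = 1/2) with mean score 5/6, giving M0 = 5/12; raising k tightens identification, which is how the measure separates incidence from intensity.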
Energy Technology Data Exchange (ETDEWEB)
Giffard, F.X
2000-05-19
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
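The importance-biasing idea behind the TRIPOLI-4/ERANOS coupling can be illustrated on a toy shielding problem: estimating the tiny probability that an exponentially distributed path length exceeds a thick shield. Biasing the sampling toward deep penetration and reweighting each tally keeps the estimate unbiased while slashing the variance. A minimal sketch, entirely unrelated to the actual codes; all names and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def analog_estimate(n: int, thickness: float = 10.0) -> float:
    """Analog Monte Carlo: fraction of exp(1) path lengths beyond the shield."""
    paths = rng.exponential(1.0, n)
    return float((paths > thickness).mean())

def biased_estimate(n: int, thickness: float = 10.0, bias: float = 0.2) -> float:
    """Importance-biased estimate: sample longer paths, reweight each tally.

    Sampling from exp(rate=bias) instead of exp(rate=1) pushes histories
    toward the rare deep-penetration events; the likelihood-ratio weight
    exp(-x) / (bias * exp(-bias * x)) restores an unbiased mean.
    """
    paths = rng.exponential(1.0 / bias, n)
    weights = np.exp(-paths) / (bias * np.exp(-bias * paths))
    return float((weights * (paths > thickness)).mean())

exact = np.exp(-10.0)            # true penetration probability, about 4.5e-5
est = biased_estimate(100_000)
```

At this sample size the analog estimator typically scores only a handful of hits, while the biased estimator is already within a few percent of the exact answer; deterministic importance maps play the role of choosing a good bias everywhere in phase space.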
Martinek, Pavel
2017-07-01
The class of multiset languages accepted by deterministic multiset finite automata with detection is strictly included in the class of multiset regular languages. Since multiset regular languages coincide with semilinear languages, the strict inclusion means that some restrictive conditions imposed on semilinear languages can narrow them appropriately. The paper provides a condition, expressed with the help of semilinear languages, which is necessary for the multiset languages accepted by deterministic multiset finite automata with detection.
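Semilinear languages are finite unions of linear sets of Parikh vectors, {c + Σᵢ kᵢ·pᵢ : kᵢ ≥ 0}; since a multiset is fully described by its Parikh vector, membership in such a set is what acceptance reduces to. A brute-force sketch of membership in a single linear set (the function name and the coefficient bound are illustrative, not from the paper):

```python
from itertools import product

def in_linear_set(v, c, periods, bound=50):
    """Brute-force membership test for a linear set of Parikh vectors.

    A linear set is {c + sum_i k_i * p_i : k_i >= 0}; a semilinear set
    (e.g. the Parikh image of a multiset regular language) is a finite
    union of such sets. Coefficients k_i are searched up to `bound`.
    """
    dim = len(v)
    for ks in product(range(bound + 1), repeat=len(periods)):
        cand = tuple(c[j] + sum(k * p[j] for k, p in zip(ks, periods))
                     for j in range(dim))
        if cand == tuple(v):
            return True
    return False

# Multisets over {a, b} with equally many a's and b's:
# base vector (0, 0), single period (1, 1).
ok = in_linear_set((3, 3), (0, 0), [(1, 1)])
bad = in_linear_set((2, 3), (0, 0), [(1, 1)])
```

Restrictive conditions of the kind the paper studies carve out which such semilinear sets can actually arise from deterministic multiset automata with detection.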