Sazhin, Sergei S.
2014-08-01
A new multi-dimensional quasi-discrete model is suggested and tested for the analysis of heating and evaporation of Diesel fuel droplets. As in the original quasi-discrete model suggested earlier, the components of Diesel fuel with close thermodynamic and transport properties are grouped together to form quasi-components. In contrast to the original quasi-discrete model, the new model takes into account the contribution of not only alkanes, but also various other groups of hydrocarbons in Diesel fuels; quasi-components are formed within individual groups. Also, in contrast to the original quasi-discrete model, the contributions of individual components are not approximated by the distribution function of carbon numbers. The formation of quasi-components is based on taking into account the contributions of individual components without any approximations. Groups contributing small molar fractions to the composition of Diesel fuel (less than about 1.5%) are replaced with characteristic components. The actual Diesel fuel is simplified to form six groups: alkanes, cycloalkanes, bicycloalkanes, alkylbenzenes, indanes & tetralines, and naphthalenes, and three components C19H34 (tricycloalkane), C13H12 (diaromatic), and C14H10 (phenanthrene). It is shown that the approximation of Diesel fuel by 15 quasi-components and components leads to errors in estimated temperatures and evaporation times in typical Diesel engine conditions not exceeding about 3.7% and 2.5% respectively, which is acceptable for most engineering applications. © 2014 Published by Elsevier Ltd. All rights reserved.
A Complete Video Coding Chain Based on Multi-Dimensional Discrete Cosine Transform
Directory of Open Access Journals (Sweden)
T. Fryza
2010-09-01
Full Text Available The paper deals with a video compression method based on the multi-dimensional discrete cosine transform. In the text, the encoder and decoder architectures, including the definitions of all mathematical operations such as the forward and inverse 3-D DCT, quantization and thresholding, are presented. According to the particular number of currently processed pictures, new quantization tables and entropy code dictionaries are proposed in the paper. The practical properties of the 3-D DCT coding chain compared with modern video compression methods (such as H.264 and WebM) and the computing complexity are presented as well. It is shown that the best compression properties are achieved by the more complex H.264 codec. On the other hand, the computing complexity, especially on the encoding side, is lower for the 3-D DCT method.
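As a hedged illustration only (not the paper's encoder, and using an arbitrary smooth synthetic block rather than real video frames), the core transform-and-threshold step of a 3-D DCT coding chain can be sketched with a separable orthonormal DCT-II in NumPy:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct3(block, inverse=False):
    """Apply a separable 3-D DCT (or its inverse) along each axis of a block."""
    out = np.asarray(block, dtype=float)
    for axis in range(3):
        c = dct_matrix(out.shape[axis])
        if inverse:
            c = c.T  # orthonormal, so the inverse is the transpose
        out = np.moveaxis(np.tensordot(c, np.moveaxis(out, axis, 0), axes=1), 0, axis)
    return out

# A smooth 8x8x8 "video" block: 8 consecutive 8x8 frames of a slowly varying scene.
x, y, t = np.meshgrid(*[np.linspace(0, 1, 8)] * 3, indexing="ij")
block = np.cos(np.pi * x) + 0.5 * np.cos(np.pi * y) + 0.25 * np.cos(np.pi * t)

coeffs = dct3(block)
thresh = np.quantile(np.abs(coeffs), 0.9)   # keep roughly the top 10% of coefficients
coeffs[np.abs(coeffs) < thresh] = 0.0
recon = dct3(coeffs, inverse=True)
err = np.max(np.abs(recon - block))
print(f"kept {np.count_nonzero(coeffs)} of {coeffs.size} coefficients, max error {err:.2e}")
```

Because the block is smooth, most of its energy concentrates in a few low-frequency coefficients, so discarding 90% of them changes the reconstruction only slightly; the quantization tables and entropy coding of the actual codec are omitted here.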
J. McKean; D. Tonina; C. Bohn; C. W. Wright
2014-01-01
New remote sensing technologies and improved computer performance now allow numerical flow modeling over large stream domains. However, there has been limited testing of whether channel topography can be remotely mapped with the accuracy necessary for such modeling. We assessed the ability of the Experimental Advanced Airborne Research Lidar to support a multi-dimensional...
Discretization vs. Rounding Error in Euler's Method
Borges, Carlos F.
2011-01-01
Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
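The trade-off described above is easy to reproduce numerically. The sketch below (an illustration, not taken from the article) integrates y' = y on [0, 1] with forward Euler entirely in single precision, so that the rounding-error floor becomes visible once the stepsize gets very small:

```python
import numpy as np

def euler_float32(n_steps):
    """Integrate y' = y, y(0) = 1 on [0, 1] with forward Euler in float32."""
    h = np.float32(1.0) / np.float32(n_steps)
    y = np.float32(1.0)
    step = np.float32(1.0) + h   # each Euler step is y <- y * (1 + h)
    for _ in range(n_steps):
        y = y * step
    return float(y)

exact = float(np.exp(1.0))
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} steps: error {abs(euler_float32(n) - exact):.3e}")
```

Reducing the stepsize first shrinks the discretization error, but at a million steps the rounding committed in forming and repeatedly multiplying by (1 + h) in float32 dominates, and the total error grows again.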
Delzanno, G. L.
2015-11-01
A spectral method for the numerical solution of the multi-dimensional Vlasov-Maxwell equations is presented. The plasma distribution function is expanded in Fourier (for the spatial part) and Hermite (for the velocity part) basis functions, leading to a truncated system of ordinary differential equations for the expansion coefficients (moments) that is discretized with an implicit, second-order accurate Crank-Nicolson time discretization. The discrete non-linear system is solved with a preconditioned Jacobian-Free Newton-Krylov method. It is shown analytically that the Fourier-Hermite method features exact conservation laws for total mass, momentum and energy in discrete form. Standard tests involving plasma waves and the whistler instability confirm the validity of the conservation laws numerically. The whistler instability test also shows that we can step over the fastest time scale in the system without incurring numerical instabilities. Some preconditioning strategies are presented, showing that the number of linear iterations of the Krylov solver can be drastically reduced and a significant gain in performance can be obtained.
Sazhin, Sergei S.; Al Qubeissi, M.; Nasiri, Rasoul; Gun'ko, Vladimir Moiseevich; Elwardani, Ahmed Elsaid; Lemoine, Fabrice; Grisch, Frédéric; Heikal, Morgan Raymond
2014-01-01
thermodynamic and transport properties are grouped together to form quasi-components. In contrast to the original quasi-discrete model, the new model takes into account the contribution of not only alkanes, but also various other groups of hydrocarbons in Diesel
Angular discretization errors in transport theory
International Nuclear Information System (INIS)
Nelson, P.; Yu, F.
1992-01-01
Elements of the information-based complexity theory are computed for several types of information and associated algorithms for angular approximations in the setting of a one-dimensional model problem. For point-evaluation information, the local and global radii of information are computed, a (trivial) optimal algorithm is determined, and the local and global error of a discrete ordinates algorithm are shown to be infinite. For average cone-integral information, the local and global radii of information are computed, and the local and global error tends to zero as the underlying partition is indefinitely refined. A central algorithm for such information and an optimal partition (of given cardinality) are described. It is further shown that the analytic first-collision source method has zero error (for the purely absorbing model problem). Implications of the restricted problem domains suitable for the various types of information are discussed.
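As a side illustration of angular discretization (not the information-based-complexity analysis of the paper), the sketch below shows how a Gauss-Legendre angular quadrature, of the kind used to build discrete ordinates sets, converges when integrating a smooth model angular flux over the direction cosine:

```python
import numpy as np

def scalar_flux(n):
    """Approximate the scalar flux ∫_{-1}^{1} ψ(μ) dμ with an n-point Gauss-Legendre set."""
    mu, w = np.polynomial.legendre.leggauss(n)   # ordinates and weights on [-1, 1]
    psi = np.exp(mu)                             # a smooth model angular flux
    return float(np.sum(w * psi))

exact = np.exp(1.0) - np.exp(-1.0)
for n in (2, 4, 8):
    print(f"S{2 * n:<3} ordinates: {n:>2} points, error {abs(scalar_flux(n) - exact):.2e}")
```

For smooth angular fluxes the error falls off rapidly with the number of ordinates; the infinite errors discussed in the abstract arise for much less regular problem classes.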
International Nuclear Information System (INIS)
Lopez, C.; Koski, J.A.; Razani, A.
2000-01-01
A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with dimensions similar to those of a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that was then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360 deg, 180 deg, and 90 deg sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360 deg, 180 deg, and 90 deg cases, respectively.
Discrete choice models with multiplicative error terms
DEFF Research Database (Denmark)
Fosgerau, Mogens; Bierlaire, Michel
2009-01-01
The conditional indirect utility of many random utility maximization (RUM) discrete choice models is specified as a sum of an index V depending on observables and an independent random term ε. In general, the universe of RUM consistent models is much larger, even fixing some specification of V due...
Javidi, Bahram; Andres, Pedro
2014-01-01
Provides a broad overview of advanced multidimensional imaging systems with contributions from leading researchers in the field. Multi-dimensional Imaging takes the reader from the introductory concepts through to the latest applications of these techniques. Split into three parts, the book covers 3D image capture, processing, visualization and display using (1) a multi-view approach and (2) a holographic approach, followed by a third part addressing other 3D systems approaches, applications and signal processing for advanced 3D imaging. This book describes recent developments, as well as the prospects and
Error estimates for discretized quantum stochastic differential inclusions
International Nuclear Information System (INIS)
Ayoola, E.O.
2001-09-01
This paper is concerned with the error estimates involved in the solution of a discrete approximation of a quantum stochastic differential inclusion (QSDI). Our main results rely on certain properties of the averaged modulus of continuity for multivalued sesquilinear forms associated with the QSDI. We obtain results concerning the estimates of the Hausdorff distance between the set of solutions of the QSDI and the set of solutions of its discrete approximation. This extends the results of Dontchev and Farkhi concerning classical differential inclusions to the present noncommutative quantum setting involving inclusions in certain locally convex spaces. (author)
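The Hausdorff distance between a solution set and its discrete approximation, central to the estimates above, can be illustrated for finite point sets; this is a simple numerical analogy only, since the paper's setting is a locally convex space rather than the plane:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two finite point sets in R^d."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# An "exact solution set" (a finely sampled curve) vs. a coarse discrete approximation.
t_fine = np.linspace(0, 1, 101)
t_coarse = np.linspace(0, 1, 11)
exact_set = np.column_stack([t_fine, np.sin(t_fine)])
approx_set = np.column_stack([t_coarse, np.sin(t_coarse)])
print(f"Hausdorff distance: {hausdorff(exact_set, approx_set):.4f}")
```

Refining the coarse sampling drives the Hausdorff distance to zero, which is the finite-dimensional shadow of the convergence statement in the abstract.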
Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann
2016-05-01
The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A
2007-01-01
The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
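A rough, hypothetical analogue of this workflow can be sketched with PCA in place of generative topographic mapping (PCA is a simpler linear stand-in, not the method used in the study, and the multi-analyte records below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "multi-analyte" panels: 500 normal 14-analyte records plus 5 anomalies.
normal = rng.normal(0.0, 1.0, size=(500, 14))
anomalies = rng.normal(6.0, 1.0, size=(5, 14))
data = np.vstack([normal, anomalies])

# Project the 14-dimensional records to a 2-D map via PCA (SVD of centered data).
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:2].T

# Records lying far from the bulk of the 2-D map are flagged for manual review.
dist = np.linalg.norm(proj - np.median(proj, axis=0), axis=1)
flagged = np.argsort(dist)[-5:]
print("flagged record indices:", sorted(flagged.tolist()))
```

The planted anomalous records (indices 500-504) land far from the cluster of normal records in the 2-D map, mirroring how a laboratory could screen multi-analyte data for potentially erroneous results before release.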
Multi-Dimensional Path Queries
DEFF Research Database (Denmark)
Bækgaard, Lars
1998-01-01
We present the path-relationship model that supports multi-dimensional data modeling and querying. A path-relationship database is composed of sets of paths and sets of relationships. A path is a sequence of related elements (atoms, paths, and sets of paths). A relationship is a binary path... to create nested path structures. We present an SQL-like query language that is based on path expressions and we show how to use it to express multi-dimensional path queries that are suited for advanced data analysis in decision support environments like data warehousing environments...
Observations on discretization errors in twisted-mass lattice QCD
International Nuclear Information System (INIS)
Sharpe, Stephen R.
2005-01-01
I make a number of observations concerning discretization errors in twisted-mass lattice QCD that can be deduced by applying chiral perturbation theory including lattice artifacts. (1) The line along which the partially conserved axial current quark mass vanishes in the untwisted-mass-twisted-mass plane makes an angle to the twisted-mass axis which is a direct measure of O(a) terms in the chiral Lagrangian, and is found numerically to be large; (2) Numerical results for pionic quantities in the mass plane show the qualitative properties predicted by chiral perturbation theory, in particular, an asymmetry in slopes between positive and negative untwisted quark masses; (3) By extending the description of the 'Aoki regime' (where m_q ~ a^2 Λ_QCD^3) to next-to-leading order in chiral perturbation theory I show how the phase-transition lines and lines of maximal twist (using different definitions) extend into this region, and give predictions for the functional form of pionic quantities; (4) I argue that the recent claim that lattice artifacts at maximal twist have apparent infrared singularities in the chiral limit results from expanding about the incorrect vacuum state. Shifting to the correct vacuum (as can be done using chiral perturbation theory) the apparent singularities are summed into nonsingular, and furthermore predicted, forms. I further argue that there is no breakdown in the Symanzik expansion in powers of lattice spacing, and no barrier to simulating at maximal twist in the Aoki regime.
The Effects of Discrete-Trial Training Commission Errors on Learner Outcomes: An Extension
Jenkins, Sarah R.; Hirst, Jason M.; DiGennaro Reed, Florence D.
2015-01-01
We conducted a parametric analysis of treatment integrity errors during discrete-trial training and investigated the effects of three integrity conditions (0, 50, or 100 % errors of commission) on performance in the presence and absence of programmed errors. The presence of commission errors impaired acquisition for three of four participants.…
Multi-dimensional Fuzzy Euler Approximation
Directory of Open Access Journals (Sweden)
Yangyang Hao
2017-05-01
Full Text Available Multi-dimensional fuzzy differential equations driven by a multi-dimensional Liu process have been intensively applied in many fields. However, we cannot obtain the analytic solution of every multi-dimensional fuzzy differential equation, so in most situations the numerical results must be discussed instead. This paper focuses on numerical methods for multi-dimensional fuzzy differential equations. The multi-dimensional fuzzy Taylor expansion is given; based on this expansion, a numerical method for solving multi-dimensional fuzzy differential equations via the multi-dimensional Euler method is presented, and its local convergence is discussed.
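The crisp (non-fuzzy) core of the multi-dimensional Euler method is straightforward to sketch. The example below is an illustration with an ordinary ODE system, not the fuzzy Liu-process setting of the paper:

```python
import numpy as np

def euler_system(f, x0, t0, t1, n_steps):
    """Forward Euler for a multi-dimensional ODE system x' = f(t, x)."""
    h = (t1 - t0) / n_steps
    x = np.asarray(x0, dtype=float)
    t = t0
    for _ in range(n_steps):
        x = x + h * f(t, x)   # one multi-dimensional Euler step
        t += h
    return x

# Two-dimensional test: harmonic oscillator x'' = -x written as a first-order system.
f = lambda t, x: np.array([x[1], -x[0]])
approx = euler_system(f, [1.0, 0.0], 0.0, np.pi / 2, 10_000)
exact = np.array([np.cos(np.pi / 2), -np.sin(np.pi / 2)])
print(f"max componentwise error: {np.abs(approx - exact).max():.2e}")
```

Halving the step size roughly halves the error, the first-order local convergence behaviour that the paper establishes in the fuzzy setting.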
A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction
Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole
2015-01-01
Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…
Koren, B.; Hackbusch, W.; Trottenberg, U.
1991-01-01
Two simple, multi-dimensional upwind discretizations for the steady Euler equations are derived, with the emphasis lying on both good accuracy and good solvability. The multi-dimensional upwinding consists of applying a one-dimensional Riemann solver with a locally rotated left and right state,
Residual-based Methods for Controlling Discretization Error in CFD
2015-08-24
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. For the first time, approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
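For orientation, the classical error-free special case underlying GPI, value iteration on a small, fully known MDP, can be sketched as follows (a textbook illustration, not the paper's algorithm, which handles approximation errors in the iterative value functions and control laws):

```python
import numpy as np

# A tiny 2-state, 2-action MDP: P[a][s, s'] transition probabilities, R[a][s] rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # action 0
              [[0.1, 0.9], [0.7, 0.3]]])     # action 1
R = np.array([[1.0, 0.0],                    # action 0
              [0.0, 2.0]])                   # action 1
gamma = 0.9

V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * P @ V          # Q[a, s]: one-step lookahead values
    V_new = Q.max(axis=0)          # greedy (policy improvement) step
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new

policy = Q.argmax(axis=0)
print("optimal value function:", V, "greedy policy:", policy)
```

In the paper's generalized setting, the max and the value update are only computed approximately, and the analysis quantifies how large those approximation errors may be while still guaranteeing admissibility and convergence to a neighborhood of the optimum.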
Directory of Open Access Journals (Sweden)
Baogui Xin
2015-04-01
Full Text Available A projective synchronization scheme for a kind of n-dimensional discrete dynamical system is proposed by means of a linear feedback control technique. The scheme consists of master and slave discrete dynamical systems coupled by linear state error variables. A novel kind of 3-D chaotic discrete system is constructed, to which the test for chaos is applied. By using the stability principles of an upper or lower triangular matrix, two controllers for achieving projective synchronization are designed and illustrated with the novel systems. Lastly, some numerical simulations are employed to validate the effectiveness of the proposed projective synchronization scheme.
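A minimal sketch of projective synchronization via linear error feedback is given below, using the Hénon map as a stand-in master system and a simpler controller than the triangular-matrix design of the paper; the slave is steered so that its state converges to a scaled copy of the master's state:

```python
import numpy as np

def henon(x):
    """Hénon map, a standard 2-D chaotic discrete system."""
    return np.array([1.0 - 1.4 * x[0] ** 2 + x[1], 0.3 * x[0]])

alpha = 2.0   # projective scaling factor
c = 0.5       # linear feedback gain; |c| < 1 makes the error system contracting

x = np.array([0.1, 0.1])   # master state
y = np.array([3.0, -2.0])  # slave state, deliberately far from alpha * x

for _ in range(60):
    x_next = henon(x)
    # Slave: replicate the scaled master dynamics plus linear state-error feedback,
    # which yields error dynamics e_{k+1} = c * e_k with e = y - alpha * x.
    y = alpha * x_next + c * (y - alpha * x)
    x = x_next

print(f"projective synchronization error: {np.abs(y - alpha * x).max():.2e}")
```

Since the error obeys e_{k+1} = c e_k, it decays geometrically regardless of the chaotic master trajectory, which is the essence of the linear-feedback synchronization idea.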
Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil †
Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao
2018-01-01
An innovative array of magnetic coils (the discrete Rogowski coil—RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors are calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC’s interference error reveals optimal values for length, winding density, and position arrangement of the solenoids. It is also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors. PMID:29534006
Czech Academy of Sciences Publication Activity Database
Papež, Jan; Liesen, J.; Strakoš, Z.
2014-01-01
Roč. 449, 15 May (2014), s. 89-114 ISSN 0024-3795 R&D Projects: GA AV ČR IAA100300802; GA ČR GA201/09/0917 Grant - others:GA MŠk(CZ) LL1202; GA UK(CZ) 695612 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * adaptivity * a posteriori error analysis * discretization error * algebraic error * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 0.939, year: 2014
Two multi-dimensional uncertainty relations
International Nuclear Information System (INIS)
Skala, L; Kapsa, V
2008-01-01
Two multi-dimensional uncertainty relations, one related to the probability density and the other one related to the probability density current, are derived and discussed. Both relations are stronger than the usual uncertainty relations for the coordinates and momentum.
A posteriori error estimator and AMR for discrete ordinates nodal transport methods
International Nuclear Information System (INIS)
Duo, Jose I.; Azmy, Yousry Y.; Zikatanov, Ludmil T.
2009-01-01
In the development of high fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing quality of the solution are key to the success of large-scale nuclear systems' simulation. In this regard, error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L2 error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested in two problems featuring strong heterogeneity and a highly transport streaming regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell error's spatial distribution pattern closely. The AMR strategy proves beneficial for optimizing resources, primarily by reducing the number of unknowns solved for to achieve prescribed solution accuracy in the global L2 error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns.
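The idea of driving refinement with local error indicators can be sketched in one dimension, using interpolation-error indicators on intervals; this is a toy analogue, not the AHOT-N transport estimator of the paper:

```python
import numpy as np

def refine(xs, f, n_rounds):
    """Greedy 1-D h-refinement driven by a local interpolation-error indicator."""
    for _ in range(n_rounds):
        mid = 0.5 * (xs[:-1] + xs[1:])
        # Indicator: deviation of f at each midpoint from the linear interpolant.
        eta = np.abs(f(mid) - 0.5 * (f(xs[:-1]) + f(xs[1:])))
        worst = np.argmax(eta)                  # refine the cell with largest indicator
        xs = np.insert(xs, worst + 1, mid[worst])
    return xs

f = lambda x: np.tanh(20 * (x - 0.5))           # a sharp interior layer at x = 0.5
xs = refine(np.linspace(0, 1, 5), f, n_rounds=20)
# Refinement should cluster nodes near the layer, like AMR resolving a flux gradient.
print(np.sum((xs > 0.4) & (xs < 0.6)), "of", xs.size, "nodes lie near the layer")
```

The indicator concentrates essentially all inserted nodes around the steep layer, which is the one-dimensional analogue of AMR outperforming uniform refinement on sharp flux gradients.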
International Nuclear Information System (INIS)
Barros, R.C. de; Larsen, E.W.
1991-01-01
A generalization of the one-group Spectral Green's Function (SGF) method is developed for multigroup, slab-geometry discrete ordinates (S_N) problems. The multigroup SGF method is free from spatial truncation errors; it generates numerical values for the cell-edge and cell-average angular fluxes that agree with the analytic solution of the multigroup S_N equations. Numerical results are given to illustrate the method's accuracy.
Error analysis for a monolithic discretization of coupled Darcy and Stokes problems
Girault, V.
2014-01-01
© de Gruyter 2014. The coupled Stokes and Darcy equations are approximated by a strongly conservative finite element method. The discrete spaces are the divergence-conforming velocity space with matching pressure space, such as the Raviart-Thomas spaces. This work proves optimal error estimates for the velocity in the L2 norm in the domain and on the interface. Lipschitz regularity of the interface is sufficient to obtain the results.
Multi-Dimensional Aggregation for Temporal Data
DEFF Research Database (Denmark)
Böhlen, M. H.; Gamper, J.; Jensen, Christian Søndergaard
2006-01-01
Business Intelligence solutions, encompassing technologies such as multi-dimensional data modeling and aggregate query processing, are being applied increasingly to non-traditional data. This paper extends multi-dimensional aggregation to apply to data with associated interval values that capture...... that the data holds for each point in the interval, as well as the case where the data holds only for the entire interval, but must be adjusted to apply to sub-intervals. The paper reports on an implementation of the new operator and on an empirical study that indicates that the operator scales to large data...
Multi-dimensional quasitoeplitz Markov chains
Directory of Open Access Journals (Sweden)
Alexander N. Dudin
1999-01-01
Full Text Available This paper deals with multi-dimensional quasitoeplitz Markov chains. We establish a sufficient equilibrium condition and derive a functional matrix equation for the corresponding vector-generating function, whose solution is given algorithmically. The results are demonstrated in the form of examples and applications in queues with BMAP input, which operate in a synchronous random environment.
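As a finite-state illustration of computing an equilibrium (stationary) distribution (the paper's quasitoeplitz chains have infinite multi-dimensional state spaces and require the generating-function machinery, so this is only an analogy):

```python
import numpy as np

# A small finite-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Power iteration: repeatedly push a distribution through the chain
# until it stops changing, i.e. until pi @ P == pi (the equilibrium condition).
pi = np.full(3, 1 / 3)
for _ in range(200):
    pi = pi @ P

print("stationary distribution:", pi)
```

For an irreducible, aperiodic finite chain this iteration always converges; the sufficient equilibrium condition in the paper plays the analogous role of guaranteeing that a stationary distribution exists in the infinite-state setting.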
The analytical evolution of NLS solitons due to the numerical discretization error
Hoseini, S. M.; Marchant, T. R.
2011-12-01
Soliton perturbation theory is used to obtain analytical solutions describing solitary wave tails or shelves, due to numerical discretization error, for soliton solutions of the nonlinear Schrödinger equation. Two important implicit numerical schemes for the nonlinear Schrödinger equation, with second-order temporal and spatial discretization errors, are considered. These are the Crank-Nicolson scheme and a scheme, due to Taha [1], based on the inverse scattering transform. The first-order correction for the solitary wave tail, or shelf, is in integral form and an explicit expression is found for large time. The shelf decays slowly, at a rate of t^{-1/2}, which is characteristic of the nonlinear Schrödinger equation. Singularity theory, usually used for combustion problems, is applied to the explicit large-time expression for the solitary wave tail. Analytical results are then obtained, such as the parameter regions in which qualitatively different types of solitary wave tails occur, the location of zeros and the location and amplitude of peaks. It is found that three different types of tail occur for the Crank-Nicolson and Taha schemes and that the Taha scheme exhibits some unusual symmetry properties, as the tails for left and right moving solitary waves are different. Optimal choices of the discretization parameters for the numerical schemes are also found, which minimize the amplitude of the solitary wave tail. The analytical solutions are compared with numerical simulations, and an excellent comparison is found.
Time-discrete higher order ALE formulations: a priori error analysis
Bonito, Andrea
2013-03-16
We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our estimates hold without any restrictions on the time steps for dG with exact integration or Reynolds' quadrature. They involve a mild restriction on the time steps for the practical Runge-Kutta-Radau methods of any order. The key ingredients are the stability results shown earlier in Bonito et al. (Time-discrete higher order ALE formulations: stability, 2013) along with a novel ALE projection. Numerical experiments illustrate and complement our theoretical results. © 2013 Springer-Verlag Berlin Heidelberg.
Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.
2017-04-01
In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is ≈×100 more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.
Carter, Jeffrey R.; Simon, Wayne E.
1990-08-01
Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by factors of one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE 1-4I PROBLEM The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the 1-4I problem. Both classes have equal probability of occurrence and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class while most samples away from the origin will be from the second class. Since the two classes completely overlap it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
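The "1-4I" problem described above (two overlapping Gaussian classes with covariances I and 4I) has a closed-form Bayes rule, which makes a good sanity check for any trained classifier. A minimal Monte Carlo sketch:

```python
import math, random

# Monte Carlo sketch of the "1-4I" problem: two equiprobable d-dimensional
# Gaussian classes with covariances I and 4I. The Bayes-optimal rule
# reduces the log-likelihood ratio to a threshold on the squared radius:
#   choose class 1  iff  ||x||^2 < (8/3) * d * ln 2,
# and because the classes fully overlap, its error is nonzero.
def bayes_classify(x):
    d = len(x)
    return 1 if sum(v * v for v in x) < (8.0 / 3.0) * d * math.log(2.0) else 2

random.seed(0)
d, n, errors = 2, 20000, 0
for _ in range(n):
    cls = random.choice((1, 2))
    sd = 1.0 if cls == 1 else 2.0          # covariance 4I -> std dev 2
    if bayes_classify([random.gauss(0.0, sd) for _ in range(d)]) != cls:
        errors += 1
print(errors / n)      # Monte Carlo estimate of the Bayes error (d = 2)
```

For d = 2 the threshold is (16/3)·ln 2 ≈ 3.70 on the squared radius; the estimated error rate is roughly the minimum any classifier, neural or otherwise, can achieve on this problem.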
International Nuclear Information System (INIS)
Gómez de León, F C; Meroño Pérez, P A
2010-01-01
The traditional method for measuring the velocity and the angular vibration in the shaft of rotating machines using incremental encoders is based on counting the pulses at given time intervals. This method is generically called the time interval measurement system (TIMS). A variant of this method that we have developed in this work consists of measuring the corresponding time of each pulse from the encoder and sampling the signal by means of an A/D converter as if it were an analog signal, that is to say, in discrete time. For this reason, we have denominated this method as the discrete time interval measurement system (DTIMS). This measurement system provides a substantial improvement in the precision and frequency resolution compared with the traditional method of counting pulses. In addition, this method permits modification of the width of some pulses in order to obtain a mark-phase on every lap. This paper explains the theoretical fundamentals of the DTIMS and its application for measuring the angular vibrations of rotating machines. It also displays the required relationship between the sampling rate of the signal, the number of pulses of the encoder and the rotating velocity in order to obtain the required resolution and to delimit the methodological errors in the measurement
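The contrast between counting pulses per window (TIMS) and timing every pulse (DTIMS) can be simulated directly. All parameters below (pulse count, mean speed, vibration amplitude and frequency, window length) are illustrative assumptions, not values from the paper:

```python
import math

# Toy comparison of pulse counting (TIMS) against per-pulse timing (DTIMS)
# for an incremental encoder on a shaft with a sinusoidal angular vibration.
N = 360                                   # encoder pulses per revolution
omega0, amp, f_vib = 100.0, 2.0, 7.0      # rad/s mean, rad/s ripple, Hz

def angle(t):
    # Shaft angle whose derivative is omega0 + amp*cos(2*pi*f_vib*t)
    return omega0 * t + amp / (2 * math.pi * f_vib) * math.sin(2 * math.pi * f_vib * t)

def pulse_time(k):
    # Invert angle(t) = k*2*pi/N by bisection (angle is monotone here)
    target, lo, hi = k * 2 * math.pi / N, 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if angle(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

times = [pulse_time(k) for k in range(2 * N)]        # two revolutions
# DTIMS: one velocity sample per pulse interval -> resolves the ripple
dtims = [(2 * math.pi / N) / (t1 - t0) for t0, t1 in zip(times, times[1:])]
# TIMS: pulses counted in fixed 10 ms windows -> coarse, quantized samples
window = 0.01
full = int(times[-1] / window)            # keep only complete windows
counts = [0] * full
for t in times:
    if int(t / window) < full:
        counts[int(t / window)] += 1
tims = [c * (2 * math.pi / N) / window for c in counts]
print(max(dtims) - min(dtims), max(tims) - min(tims))
```

The per-pulse estimate tracks the full ±2 rad/s ripple (about 4 rad/s peak to peak), while the counting estimate is quantized in steps of (2π/N)/window ≈ 1.75 rad/s and cannot resolve the vibration waveform, illustrating the resolution gain the abstract claims.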
Multi-dimensional Laplace transforms and applications
International Nuclear Information System (INIS)
Mughrabi, T.A.
1988-01-01
In this dissertation we establish new theorems for computing certain types of multidimensional Laplace transform pairs from known one-dimensional Laplace transforms. The theorems are applied to the most commonly used special functions, and so we obtain many two- and three-dimensional Laplace transform pairs. As applications, some boundary value problems involving linear partial differential equations are solved by the use of multi-dimensional Laplace transformation. We also establish some relations between the Laplace transformation and other integral transformations in two variables
Transport stochastic multi-dimensional media
International Nuclear Information System (INIS)
Haran, O.; Shvarts, D.
1996-01-01
Many physical phenomena evolve according to known deterministic rules, but in stochastic media whose composition changes in space and time. Examples of such phenomena are heat transfer in a turbulent atmosphere with non-uniform diffraction coefficients, neutron transfer in the boiling coolant of a nuclear reactor and radiation transfer through concrete shields. The results of measurements conducted on such media are stochastic by nature, and depend on the specific realization of the media. In the last decade there have been considerable efforts to describe linear particle transport in one-dimensional stochastic media composed of several immiscible materials. However, transport in two- or three-dimensional stochastic media has rarely been addressed. The important effect in multi-dimensional transport that does not appear in one dimension is the ability to bypass obstacles. The current work is an attempt to quantify this effect. (authors)
Transport stochastic multi-dimensional media
Energy Technology Data Exchange (ETDEWEB)
Haran, O; Shvarts, D [Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev; Thiberger, R [Ben-Gurion Univ. of the Negev, Beersheba (Israel)
1996-12-01
Many physical phenomena evolve according to known deterministic rules, but in stochastic media whose composition changes in space and time. Examples of such phenomena are heat transfer in a turbulent atmosphere with non-uniform diffraction coefficients, neutron transfer in the boiling coolant of a nuclear reactor and radiation transfer through concrete shields. The results of measurements conducted on such media are stochastic by nature, and depend on the specific realization of the media. In the last decade there have been considerable efforts to describe linear particle transport in one-dimensional stochastic media composed of several immiscible materials. However, transport in two- or three-dimensional stochastic media has rarely been addressed. The important effect in multi-dimensional transport that does not appear in one dimension is the ability to bypass obstacles. The current work is an attempt to quantify this effect. (authors).
Finite element method for radiation heat transfer in multi-dimensional graded index medium
International Nuclear Information System (INIS)
Liu, L.H.; Zhang, L.; Tan, H.P.
2006-01-01
In graded index medium, ray goes along a curved path determined by Fermat principle, and curved ray-tracing is very difficult and complex. To avoid the complicated and time-consuming computation of curved ray trajectories, a finite element method based on discrete ordinate equation is developed to solve the radiative transfer problem in a multi-dimensional semitransparent graded index medium. Two particular test problems of radiative transfer are taken as examples to verify this finite element method. The predicted dimensionless net radiative heat fluxes are determined by the proposed method and compared with the results obtained by finite volume method. The results show that the finite element method presented in this paper has a good accuracy in solving the multi-dimensional radiative transfer problem in semitransparent graded index medium
Suliman, Mohamed Abdalla Elhag
2016-12-19
This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix. This perturbation is introduced to enhance the singular-value (SV) structure of the matrix and hence to provide a better solution. The proposed approach is derived to select the regularization parameter in a way that minimizes the mean-squared error (MSE) of the estimator. Numerical results demonstrate that the proposed approach outperforms a set of benchmark methods in most cases when applied to different scenarios of discrete ill-posed problems. Jointly, the proposed approach enjoys the lowest run-time and offers the highest level of robustness amongst all the tested methods.
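The role of the regularization parameter in such discrete ill-posed problems is easy to demonstrate. The sketch below is not the paper's perturbation-based MSE estimator; it is an oracle sweep over a Hilbert-matrix test problem (a standard ill-posed example) that uses the known true solution only to show that an intermediate parameter beats both under- and over-regularization:

```python
import numpy as np

# Tikhonov parameter sweep on a classic discrete ill-posed problem
# (a 10x10 Hilbert matrix with small additive noise). Real selection
# rules, like the MSE-minimizing approach of the paper, must work
# without x_true; it is used here only for illustration.
rng = np.random.default_rng(1)
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

def tikhonov(lam):
    # x_lam = argmin ||A x - b||^2 + lam^2 ||x||^2
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

lams = np.logspace(-10, 0, 60)
errs = [float(np.linalg.norm(tikhonov(l) - x_true)) for l in lams]
best = lams[int(np.argmin(errs))]
print(best, min(errs))
```

Too small a parameter amplifies the noise through the tiny singular values; too large a one shrinks the solution toward zero. The proposed approach aims to land near the interior minimum of this error curve without access to the true solution.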
Masuyama, Hiroyuki
2014-01-01
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...
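A concrete instance of augmented truncation is easy to set up for a birth-death chain with geometric drift (block size 1, so block monotonicity holds trivially). The chain parameters and truncation sizes below are illustrative, and a much larger truncation stands in for the "exact" stationary distribution:

```python
import numpy as np

# Last-column-style augmented truncation of a discrete-time birth-death
# chain with up-probability p < down-probability q (geometric drift).
p, q = 0.3, 0.5

def truncated(n):
    # Off-diagonal jumps; the diagonal absorbs the remainder, which in the
    # last row also re-absorbs the cut upward mass p (the augmentation).
    P = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            P[i, i - 1] = q
        if i + 1 < n:
            P[i, i + 1] = p
        P[i, i] = 1.0 - P[i].sum()
    return P

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

exact = stationary(truncated(200))      # proxy for the infinite chain
approx = stationary(truncated(20))      # small augmented truncation
tv = 0.5 * (np.abs(exact[:20] - approx).sum() + exact[20:].sum())
print(tv)    # total variation error of the 20-state truncation
```

Because the stationary distribution decays geometrically (like (p/q)^i), the total variation error of the truncation is tiny even at 20 states, which is the kind of bound the paper quantifies in general.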
International Nuclear Information System (INIS)
Yamamoto, Akio; Tatsumi, Masahiro
2006-01-01
In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or the transport equation so that the spatial variation of the source term becomes small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used in evaluating the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method does not have any negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)
Error analysis for a monolithic discretization of coupled Darcy and Stokes problems
Girault, V.; Kanschat, G.; Rivière, B.
2014-01-01
© de Gruyter 2014. The coupled Stokes and Darcy equations are approximated by a strongly conservative finite element method. The discrete spaces are the divergence-conforming velocity space with matching pressure space such as the Raviart
Starns, Jeffrey J; Dubé, Chad; Frelinger, Matthew E
2018-05-01
In this report, we evaluate single-item and forced-choice recognition memory for the same items and use the resulting accuracy and reaction time data to test the predictions of discrete-state and continuous models. For the single-item trials, participants saw a word and indicated whether or not it was studied on a previous list. The forced-choice trials had one studied and one non-studied word that both appeared in the earlier single-item trials and both received the same response. Thus, forced-choice trials always had one word with a previous correct response and one with a previous error. Participants were asked to select the studied word regardless of whether they previously called both words "studied" or "not studied." The diffusion model predicts that forced-choice accuracy should be lower when the word with a previous error had a fast versus a slow single-item RT, because fast errors are associated with more compelling misleading memory retrieval. The two-high-threshold (2HT) model does not share this prediction because all errors are guesses, so error RT is not related to memory strength. A low-threshold version of the discrete state approach predicts an effect similar to the diffusion model, because errors are a mixture of responses based on misleading retrieval and guesses, and the guesses should tend to be slower. Results showed that faster single-trial errors were associated with lower forced-choice accuracy, as predicted by the diffusion and low-threshold models. Copyright © 2018 Elsevier Inc. All rights reserved.
Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R
2016-03-01
This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
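The core of the ME method, averaging weak single-event classifier outputs over a trial, can be illustrated with a toy score model. The Gaussian score distributions, the 65% single-event accuracy, and the eight events per trial below are all assumptions for the sketch, not values from the study:

```python
import random

# Sketch of the multiple-events (ME) idea: single-event ErrP detection is
# weak (about 65% here), but averaging the classifier scores of all events
# in a trial separates erroneous from correct MI trials much better.
random.seed(3)

def single_event_score(is_error):
    # Positive scores indicate "error"; a mean shift of 0.39 with unit
    # noise gives roughly 65% single-event accuracy at a zero threshold.
    return random.gauss(0.39 if is_error else -0.39, 1.0)

def classify_trial(event_labels):
    scores = [single_event_score(e) for e in event_labels]
    return sum(scores) / len(scores) > 0.0     # ME: average, then threshold

trials, events_per_trial = 2000, 8
correct = 0
for _ in range(trials):
    err_trial = random.random() < 0.5
    if classify_trial([err_trial] * events_per_trial) == err_trial:
        correct += 1
print(correct / trials)    # trial-level accuracy, well above 65%
```

Averaging k independent scores shrinks the noise by a factor of sqrt(k), which is why feasible trial-level accuracies emerge from low single-event ErrP detection rates, as the abstract reports.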
Portable laser synthesizer for high-speed multi-dimensional spectroscopy
Demos, Stavros G [Livermore, CA; Shverdin, Miroslav Y [Sunnyvale, CA; Shirk, Michael D [Brentwood, CA
2012-05-29
Portable, field-deployable laser synthesizer devices designed for multi-dimensional spectrometry and time-resolved and/or hyperspectral imaging include a coherent light source which simultaneously produces a very broad, energetic, discrete spectrum spanning through or within the ultraviolet, visible, and near infrared wavelengths. The light output is spectrally resolved and each wavelength is delayed with respect to each other. A probe enables light delivery to a target. For multidimensional spectroscopy applications, the probe can collect the resulting emission and deliver this radiation to a time gated spectrometer for temporal and spectral analysis.
Investigation of multi-dimensional computational models for calculating pollutant transport
International Nuclear Information System (INIS)
Pepper, D.W.; Cooper, R.E.; Baker, A.J.
1980-01-01
A performance study of five numerical solution algorithms for multi-dimensional advection-diffusion prediction on mesoscale grids was made. Test problems include transport of point and distributed sources, and a simulation of a continuous source. In all cases, analytical solutions are available to assess relative accuracy. The particle-in-cell and second-moment algorithms, both of which employ sub-grid resolution coupled with Lagrangian advection, exhibit superior accuracy in modeling a point source release. For modeling of a distributed source, algorithms based upon the pseudospectral and finite element interpolation concepts exhibit improved accuracy on practical discretizations
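A minimal 1-D analogue of this kind of accuracy assessment: a first-order upwind / central-diffusion scheme transports a Gaussian puff and is compared against the exact translated-and-spread Gaussian. All parameters are illustrative; the point is that the upwind scheme's numerical diffusion (~u·Δx/2) is comparable to the physical diffusivity here, the sort of error that motivates the sub-grid Lagrangian and spectral methods favoured in the study:

```python
import math

# 1-D advection-diffusion c_t + u c_x = K c_xx, solved with first-order
# upwind advection and central diffusion, checked against the analytical
# Gaussian-puff solution. Explicit and stable: CFL = 0.2, diff. no. = 0.1.
u, K = 1.0, 0.05
dx, dt, n, steps = 0.1, 0.02, 200, 100

def exact(xpos, t, t0=1.0):
    var = 2.0 * K * (t0 + t)          # puff released at x = 5 with age t0
    return (math.exp(-(xpos - 5.0 - u * t) ** 2 / (2 * var))
            / math.sqrt(2 * math.pi * var))

c = [exact(i * dx, 0.0) for i in range(n)]
for _ in range(steps):
    new = c[:]
    for i in range(1, n - 1):
        adv = -u * (c[i] - c[i - 1]) / dx                      # upwind
        dif = K * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2   # central
        new[i] = c[i] + dt * (adv + dif)
    c = new
err = max(abs(c[i] - exact(i * dx, steps * dt)) for i in range(n))
mass = sum(c) * dx
# Mass is conserved (the fluxes telescope), but the peak is visibly
# over-smeared by the upwind scheme's numerical diffusion ~ u*dx/2.
print(err, mass)
```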
Uncertainty quantification in a chemical system using error estimate-based mesh adaption
International Nuclear Information System (INIS)
Mathelin, Lionel; Le Maitre, Olivier P.
2012-01-01
This paper describes a rigorous a posteriori error analysis for the stochastic solution of non-linear uncertain chemical models. The dual-based a posteriori stochastic error analysis extends the methodology developed in the deterministic finite elements context to stochastic discretization frameworks. It requires the resolution of two additional (dual) problems to yield the local error estimate. The stochastic error estimate can then be used to adapt the stochastic discretization. Different anisotropic refinement strategies are proposed, leading to a cost-efficient tool suitable for multi-dimensional problems of moderate stochastic dimension. The adaptive strategies allow both for refinement and coarsening of the stochastic discretization, as needed to satisfy a prescribed error tolerance. The adaptive strategies were successfully tested on a model for the hydrogen oxidation in supercritical conditions having 8 random parameters. The proposed methodologies are however general enough to be also applicable for a wide class of models such as uncertain fluid flows. (authors)
International Nuclear Information System (INIS)
Uko, L.U.
1990-02-01
We study a scheme for the time-discretization of parabolic variational inequalities that is often easier to use than the classical method of Rothe. We show that if the data are compatible in a certain sense, then this scheme is of order ≥ 1/2. (author). 10 refs
International Nuclear Information System (INIS)
Fournier, D.; Le Tellier, R.; Suteau, C.
2011-01-01
We present an error estimator for the S_N neutron transport equation discretized with an arbitrary high-order discontinuous Galerkin method. As a starting point, the estimator is obtained for conforming Cartesian meshes with a uniform polynomial order for the trial space then adapted to deal with non-conforming meshes and a variable polynomial order. Some numerical tests illustrate the properties of the estimator and its limitations. Finally, a simple shielding benchmark is analyzed in order to show the relevance of the estimator in an adaptive process.
Discretization errors at free boundaries of the Grad-Schlueter-Shafranov equation
International Nuclear Information System (INIS)
Meyer-Spasche, R.; Fornberg, B.
1990-10-01
The numerical error of standard finite-difference schemes is analyzed at free boundaries of the Grad-Schlueter-Shafranov equation of plasma physics. A simple correction strategy is devised to eliminate (to leading order) the errors which arise as the free boundary crosses the rectangular grid at irregular locations. The resulting scheme can be solved by Gauss-Newton or Inverse iterations, or by multigrid iterations. Extrapolation (from 2nd to 3rd order of accuracy) is possible for the new scheme. (orig.)
Multi-Dimensional Bitmap Indices for Optimising Data Access within Object Oriented Databases at CERN
Stockinger, K
2001-01-01
Efficient query processing in high-dimensional search spaces is an important requirement for many analysis tools. In the literature on index data structures one can find a wide range of methods for optimising database access. In particular, bitmap indices have recently gained substantial popularity in data warehouse applications with large amounts of read-mostly data. Bitmap indices are implemented in various commercial database products and are used for querying typical business applications. However, scientific data that is mostly characterised by non-discrete attribute values cannot be queried efficiently by the techniques currently supported. In this thesis we propose a novel access method based on bitmap indices that efficiently handles multi-dimensional queries against typical scientific data. The algorithm is called GenericRangeEval and is an extension of a bitmap index for discrete attribute values. By means of a cost model we study the performance of queries with various selectivities against uniform...
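The basic idea of extending bitmap indices to continuous (non-discrete) attributes is to bin the values, keep one bitmap per bin, and re-check only the candidate rows from the bins touched by a range query. This sketch is a generic binned bitmap index, not the thesis's GenericRangeEval algorithm; bin counts and data are illustrative:

```python
import random

# Toy binned bitmap index over a continuous attribute. Each bin keeps a
# bitmap of the rows falling into it (a Python int used as a bit vector);
# a range query ORs the bitmaps of the touched bins, then filters the
# candidates against the actual values to remove edge-bin false positives.
random.seed(7)
values = [random.uniform(0.0, 100.0) for _ in range(1000)]
NBINS, WIDTH = 20, 100.0 / 20

bitmaps = [0] * NBINS
for rowid, v in enumerate(values):
    bitmaps[min(int(v / WIDTH), NBINS - 1)] |= 1 << rowid

def range_query(lo, hi):
    b_lo, b_hi = int(lo / WIDTH), min(int(hi / WIDTH), NBINS - 1)
    mask = 0
    for b in range(b_lo, b_hi + 1):
        mask |= bitmaps[b]               # candidate rows from touched bins
    return {r for r in range(len(values))
            if (mask >> r) & 1 and lo <= values[r] <= hi}

expected = {r for r, v in enumerate(values) if 25.0 <= v <= 60.0}
print(range_query(25.0, 60.0) == expected)   # prints True
```

For selective queries only the two edge bins need value re-checks; interior bins can be accepted wholesale, which is where the cost model's dependence on selectivity comes from.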
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Karátson, J.
2017-01-01
Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords: finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub
Multi-Dimensional Customer Data Analysis in Online Auctions
Institute of Scientific and Technical Information of China (English)
LAO Guoling; XIONG Kuan; QIN Zheng
2007-01-01
In this paper, we designed a customer-centered data warehouse system with five subjects: listing, bidding, transaction, accounts, and customer contact, based on the business process of online auction companies. For each subject, we analyzed its fact indexes and dimensions. Then, taking the transaction subject as an example, we analyzed the data warehouse model in detail and obtained the multi-dimensional analysis structure of the transaction subject. Finally, using data mining for customer segmentation, we divided customers into four types: impulse customers, prudent customers, potential customers, and ordinary customers. With the results of multi-dimensional customer data analysis, online auction companies can do more targeted marketing and increase customer loyalty.
Decay rate in a multi-dimensional fission problem
Energy Technology Data Exchange (ETDEWEB)
Brink, D M; Canto, L F
1986-06-01
The multi-dimensional diffusion approach of Zhang Jing Shang and Weidenmueller (1983 Phys. Rev. C28, 2190) is used to study a simplified model for induced fission. In this model it is shown that the coupling of the fission coordinate to the intrinsic degrees of freedom is equivalent to an extra friction and a mass correction in the corresponding one-dimensional problem.
Image matrix processor for fast multi-dimensional computations
Roberson, George P.; Skeate, Michael F.
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
Development and Validation of Multi-Dimensional Personality ...
African Journals Online (AJOL)
This study was carried out to establish the scientific processes for the development and validation of Multi-dimensional Personality Inventory (MPI). The process of development and validation occurred in three phases with five components of Agreeableness, Conscientiousness, Emotional stability, Extroversion, and ...
International Nuclear Information System (INIS)
Downar, T.
2009-01-01
The overall objective of the work here has been to eliminate the approximations used in current resonance treatments by developing continuous energy multi-dimensional transport calculations for problem dependent self-shielding calculations. The work builds on the existing resonance treatment capabilities in the ORNL SCALE code system. Specifically, the methods here utilize the existing continuous energy SCALE5 module, CENTRM, and the multi-dimensional discrete ordinates solver, NEWT, to develop a new code, CENTRM( ) NEWT. The work addresses specific theoretical limitations in the existing CENTRM resonance treatment, and investigates advanced numerical and parallel computing algorithms for CENTRM and NEWT in order to reduce the computational burden. The result of the work is a new computer code capable of performing problem dependent self-shielding analysis for both existing and proposed GENIV fuel designs. The objective of the work was to have an immediate impact on the safety analysis of existing reactors through improvements in the calculation of fuel temperature effects, as well as on the analysis of more sophisticated GENIV/NGNP systems through improvements in the depletion/transmutation of actinides for Advanced Fuel Cycle Initiatives.
Spectral analysis of multi-dimensional self-similar Markov processes
International Nuclear Information System (INIS)
Modarresi, N; Rezakhah, S
2010-01-01
In this paper we consider a discrete scale invariant (DSI) process {X(t), t ∈ R^+} with scale l > 1. We consider a fixed number of observations in every scale, say T, and acquire our samples at discrete points α^k, k ∈ W, where α is obtained by the equality l = α^T and W = {0, 1, ...}. We thus provide a discrete time scale invariant (DT-SI) process X(.) with the parameter space {α^k, k ∈ W}. We find the spectral representation of the covariance function of such a DT-SI process. By providing the harmonic-like representation of multi-dimensional self-similar processes, their spectral density functions are presented. We assume that the process {X(t), t ∈ R^+} is also Markov in the wide sense and provide a discrete time scale invariant Markov (DT-SIM) process with the above scheme of sampling. We present an example of the DT-SIM process, simple Brownian motion, by the above sampling scheme and verify our results. Finally, we find the spectral density matrix of such a DT-SIM process and show that its associated T-dimensional self-similar Markov process is fully specified by {R_j^H(1), R_j^H(0), j = 0, 1, ..., T - 1}, where R_j^H(τ) is the covariance function of the jth and (j + τ)th observations of the process.
Balanced sensitivity functions for tuning multi-dimensional Bayesian network classifiers
Bolt, J.H.; van der Gaag, L.C.
Multi-dimensional Bayesian network classifiers are Bayesian networks of restricted topological structure, which are tailored to classifying data instances into multiple dimensions. Like more traditional classifiers, multi-dimensional classifiers are typically learned from data and may include
Multi-dimensional Bin Packing Problems with Guillotine Constraints
DEFF Research Database (Denmark)
Amossen, Rasmus Resen; Pisinger, David
2010-01-01
The problem addressed in this paper is the decision problem of determining if a set of multi-dimensional rectangular boxes can be orthogonally packed into a rectangular bin while satisfying the requirement that the packing should be guillotine cuttable. That is, there should exist a series of face-parallel straight cuts that can recursively cut the bin into pieces so that each piece contains a box and no box has been intersected by a cut. The unrestricted problem is known to be NP-hard. In this paper we present a generalization of a constructive algorithm for the multi-dimensional bin packing problem, with and without the guillotine constraint, based on constraint programming.
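The guillotine requirement itself is easy to check for a given placement. This sketch is a plain recursive verifier in 2-D (not the authors' constraint-programming algorithm, and exponential in the worst case): look for a full horizontal or vertical cut line that intersects no box, split, and recurse.

```python
# Recursive guillotine-cuttability check for a given 2-D packing.
# Boxes are axis-aligned (x, y, w, h) tuples; a valid cut is a full
# line along which every box lies entirely on one side.
def guillotine_cuttable(boxes):
    if len(boxes) <= 1:
        return True
    for axis in (0, 1):                       # 0: vertical cuts, 1: horizontal
        edges = sorted({b[axis] for b in boxes}
                       | {b[axis] + b[axis + 2] for b in boxes})
        for c in edges[1:-1]:                 # candidate cut coordinates
            low = [b for b in boxes if b[axis] + b[axis + 2] <= c]
            high = [b for b in boxes if b[axis] >= c]
            if low and high and len(low) + len(high) == len(boxes):
                if guillotine_cuttable(low) and guillotine_cuttable(high):
                    return True
    return False

# Two boxes side by side under a full-width box: guillotine cuttable.
ok = guillotine_cuttable([(0, 0, 1, 1), (1, 0, 1, 1), (0, 1, 2, 1)])
# A pinwheel of four rectangles in a 3x3 bin admits no full first cut.
pin = guillotine_cuttable([(0, 0, 2, 1), (2, 0, 1, 2), (1, 2, 2, 1), (0, 1, 1, 2)])
print(ok, pin)    # prints: True False
```

The pinwheel example is the classic witness that feasible orthogonal packings need not be guillotine cuttable, which is exactly why the constraint adds difficulty to the decision problem.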
Multi-dimensional Code Development for Safety Analysis of LMR
International Nuclear Information System (INIS)
Ha, K. S.; Jeong, H. Y.; Kwon, Y. M.; Lee, Y. B.
2006-08-01
A liquid metal reactor loaded with metallic fuel has an inherent safety mechanism due to several negative reactivity feedbacks. Although this feature was demonstrated through experiments in the EBR-II, no computer program until now has analyzed it exactly because of the complexity of the reactivity feedback mechanism. A multi-dimensional detailed program was developed through the International Nuclear Energy Research Initiative (INERI) from 2003 to 2005. This report describes the numerical coupling of the multi-dimensional program with the SSC-K code, which is used for the safety analysis of liquid metal reactors in KAERI. The coupled code has been validated by comparing its analysis results with those of the SAS-SASSYS code of ANL for the UTOP, ULOF, and ULOHS transients applied to the safety analysis for KALIMER-150
A nodal collocation approximation for the multi-dimensional PL equations - 2D applications
International Nuclear Information System (INIS)
Capilla, M.; Talavera, C.F.; Ginestar, D.; Verdu, G.
2008-01-01
A classical approach to solve the neutron transport equation is to apply the spherical harmonics method, obtaining a finite approximation known as the P_L equations. In this work, the derivation of the P_L equations for multi-dimensional geometries is reviewed and a nodal collocation method is developed to discretize these equations on a rectangular mesh based on the expansion of the neutronic fluxes in terms of orthogonal Legendre polynomials. The performance of the method and the dominant transport Lambda Modes are obtained for a homogeneous 2D problem, a heterogeneous 2D anisotropic scattering problem, a heterogeneous 2D problem and a benchmark problem corresponding to a MOX fuel reactor core
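The building block of the nodal collocation step, expanding a nodal flux in orthogonal Legendre polynomials, can be sketched directly. The "flux" f = exp(x), the expansion order, and the midpoint-rule quadrature below are illustrative choices:

```python
import math

# Expand a smooth "flux" f on [-1, 1] in Legendre polynomials P_0..P_4
# using the orthogonality relation a_n = (2n+1)/2 * int f(x) P_n(x) dx,
# evaluated with a midpoint-rule quadrature, then check the truncated
# expansion reproduces f closely.
def legendre(n, x):
    # Bonnet recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    if n == 0:
        return 1.0
    p0, p1 = 1.0, x
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def coeff(n, f, m=2000):
    h = 2.0 / m
    s = sum(f(-1.0 + (i + 0.5) * h) * legendre(n, -1.0 + (i + 0.5) * h)
            for i in range(m))
    return (2 * n + 1) / 2.0 * s * h

order = 4
a = [coeff(n, math.exp) for n in range(order + 1)]
reconstruct = lambda x: sum(a[n] * legendre(n, x) for n in range(order + 1))
err = max(abs(reconstruct(x / 50.0) - math.exp(x / 50.0))
          for x in range(-50, 51))
print(err)    # truncation error of the 5-term Legendre expansion
```

For smooth fluxes the Legendre coefficients decay rapidly, so a low-order nodal expansion is already accurate, which is what makes the collocation discretization on a coarse rectangular mesh effective.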
Peer Pressure in Multi-Dimensional Work Tasks
Felix Ebeling; Gerlinde Fellner; Johannes Wahlig
2012-01-01
We study the influence of peer pressure in multi-dimensional work tasks theoretically and in a controlled laboratory experiment in which workers face peer pressure in only one work dimension. We find that effort provision increases in the dimension where peer pressure is introduced. However, not all of this increase translates into a productivity gain, since the effect is partly offset by a decrease of effort in the work dimension without peer pressure. Furthermore, this tradeoff is stronger...
Multi-dimensional virtual system introduced to enhance canonical sampling
Higo, Junichi; Kasahara, Kota; Nakamura, Haruki
2017-10-01
When an important process of a molecular system occurs via a combination of two or more rare events, which occur almost independently of one another, computational sampling of the important process is difficult. Here, to sample such a process effectively, we developed a new method, named the "multi-dimensional Virtual-system coupled Monte Carlo (multi-dimensional-VcMC)" method, where the system interacts with a virtual system expressed by two or more virtual coordinates. Each virtual coordinate controls sampling along a reaction coordinate. By setting multiple reaction coordinates to be related to the corresponding rare events, sampling of the important process can be enhanced. An advantage of multi-dimensional-VcMC is its simplicity: namely, the conformation moves widely in the multi-dimensional reaction coordinate space without knowledge of the canonical distribution functions of the system. To examine the effectiveness of the algorithm, we introduced a toy model where two molecules (a receptor and its ligand) bind to and unbind from each other. The receptor has a deep binding pocket, which the ligand enters for binding. Furthermore, a gate is set at the entrance of the pocket, and the gate is usually closed. Thus, molecular binding takes place via two events: ligand approach to the pocket and gate opening. In two-dimensional (2D)-VcMC, the two molecules exhibited repeated binding and unbinding, and an equilibrated distribution was obtained as expected. A conventional canonical simulation, which was 200 times longer than the 2D-VcMC run, failed to sample the binding/unbinding effectively. The current method is applicable to various biological systems.
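The virtual-system idea can be caricatured with a one-dimensional toy (our sketch, not the authors' multi-dimensional-VcMC implementation; the potential, the windows, and the move sizes are all invented for illustration): a discrete virtual state selects one of two overlapping windows on the reaction coordinate, and Metropolis moves in both the real and the virtual coordinate let the walker migrate between the wells of a double-well potential.

```python
import math
import random

def energy(x):
    return 2.0 * (x * x - 1.0) ** 2   # double well, minima at x = -1 and +1

# two overlapping virtual windows on the reaction coordinate x
windows = [(-2.0, 0.2), (-0.2, 2.0)]

def in_window(x, v):
    lo, hi = windows[v]
    return lo <= x <= hi

def vcmc(steps, beta=1.0, seed=0):
    """Toy virtual-system coupled Monte Carlo; returns the wells visited."""
    rng = random.Random(seed)
    x, v = -1.0, 0
    visited = set()
    for _ in range(steps):
        # trial move in x, accepted by Metropolis within the current window
        xt = x + rng.uniform(-0.3, 0.3)
        if in_window(xt, v):
            dE = energy(xt) - energy(x)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                x = xt
        # trial move in the virtual coordinate: allowed when x lies in the overlap
        vt = 1 - v
        if in_window(x, vt):
            v = vt
        visited.add(-1 if x < 0 else 1)
    return visited
```

Whenever the walker wanders into the overlap region the virtual state can flip, after which the other well becomes reachable; the real method does this with several virtual coordinates, one per rare event.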
Code Coupling for Multi-Dimensional Core Transient Analysis
International Nuclear Information System (INIS)
Park, Jin-Woo; Park, Guen-Tae; Park, Min-Ho; Ryu, Seok-Hee; Um, Kil-Sup; Lee, Jae-Il (KEPCO NF, Daejeon, Republic of Korea)
2015-01-01
After a CEA ejection, the nuclear power of the reactor increases dramatically in an exponential manner until the Doppler effect becomes important and turns the reactivity balance and power down to lower levels. Although this happens in a very short period of time, only a few seconds, the energy generated can be very significant and cause fuel failures. The current safety analysis methodology, which is based on overly conservative assumptions with the point kinetics model, results in quite adverse consequences. Thus, KEPCO Nuclear Fuel (KNF) is developing a multi-dimensional safety analysis methodology to mitigate the consequences of the single CEA ejection accident. For this purpose, the three-dimensional core neutron kinetics code ASTRA, the sub-channel analysis code THALES, and the fuel performance analysis code FROST, all of which have transient calculation capability, were coupled using the message passing interface (MPI). This paper presents the methodology used for code coupling and the preliminary simulation results with the coupled code system (CHASER). The multi-dimensional core transient analysis code system CHASER has been developed and applied to simulate a single CEA ejection accident. CHASER gave a good prediction of multi-dimensional core behavior during the transient. In the near future, a multi-dimensional CEA ejection analysis methodology using CHASER is planned to be developed. CHASER is expected to be a useful tool for gaining safety margin for reactivity-initiated accidents (RIAs), such as a single CEA ejection accident.
The 'thousand words' problem: Summarizing multi-dimensional data
International Nuclear Information System (INIS)
Scott, David M.
2011-01-01
Research highlights: → Sophisticated process sensors produce large multi-dimensional data sets. → Plant control systems cannot handle images or large amounts of data. → Various techniques reduce the dimensionality, extracting information from raw data. → Simple 1D and 2D methods can often be extended to 3D and 4D applications. - Abstract: An inherent difficulty in the application of multi-dimensional sensing to process monitoring and control is the extraction and interpretation of useful information. Ultimately the measured data must be collapsed into a relatively small number of values that capture the salient characteristics of the process. Although multiple dimensions are frequently necessary to isolate a particular physical attribute (such as the distribution of a particular chemical species in a reactor), plant control systems are not equipped to use such data directly. The production of a multi-dimensional data set (often displayed as an image) is not the final step of the measurement process, because information must still be extracted from the raw data. In the metaphor of one picture being equal to a thousand words, the problem becomes one of paraphrasing a lengthy description of the image with one or two well-chosen words. Various approaches to solving this problem are discussed using examples from the fields of particle characterization, image processing, and process tomography.
Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Hoa T. [Univ. of Utah, Salt Lake City, UT (United States); Stone, Daithi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-01-01
An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of the data, or projections of the data, or both. These approaches still have limitations, such as high computational cost or susceptibility to errors. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in the data, namely variation. We use two case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in the data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.
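The advantage of keeping a statistical measure rather than averages alone is already visible in one dimension: block means lose the within-block variation, while storing a per-block variance alongside recovers the total variance exactly via the law of total variance. A small sketch of this idea (ours, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(4096) * np.linspace(0.5, 2.0, 4096)  # varying spread
block = 64
blocks = data.reshape(-1, block)

means = blocks.mean(axis=1)        # reduced-resolution version by averaging
variances = blocks.var(axis=1)     # the extra statistic worth keeping

var_from_means = means.var()                    # underestimates total variation
var_recovered = variances.mean() + means.var()  # law of total variance (exact here)
true_var = data.var()
```

With equal-sized blocks and population variances the reconstruction is exact, whereas the variance of the block means alone systematically understates the variation signal.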
Schulz, Wolfgang; Hermanns, Torsten; Al Khawli, Toufik
2017-07-01
Decision making for competitive production in high-wage countries is a daily challenge in which rational and irrational methods are used. The design of decision-making processes is an intriguing, discipline-spanning science. However, there are gaps in understanding the impact of the known mathematical and procedural methods on the usage of rational choice theory. Following Benjamin Franklin's rule for decision making, formulated in London in 1772 and called "Prudential Algebra" in the sense of prudential reasons, one of the major ingredients of Meta-Modelling can be identified, finally leading to one algebraic value labelling the results (criteria settings) of alternative decisions (parameter settings). This work describes advances in Meta-Modelling techniques applied to multi-dimensional and multi-criterial optimization by identifying the persistence level of the corresponding Morse-Smale complex. Implementations for laser cutting and laser drilling are presented, including the generation of fast and frugal Meta-Models with controlled error based on mathematical model reduction. Reduced Models are derived to avoid any unnecessary complexity. Both model reduction and analysis of the multi-dimensional parameter space are used to enable interactive communication between Discovery Finders and Invention Makers. Emulators and visualizations of a metamodel are introduced as components of Virtual Production Intelligence, making the methods of Scientific Design Thinking applicable and getting the developer as well as the operator more skilled.
A multi-dimensional sampling method for locating small scatterers
International Nuclear Information System (INIS)
Song, Rencheng; Zhong, Yu; Chen, Xudong
2012-01-01
A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)
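The subspace machinery behind a MUSIC-type indicator can be sketched with an idealized 2-D free-space model (our illustration; the sensor layout, wavenumber, and Born-type response matrix are invented, and the paper's combinatorial sampling over node sets is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 2 * np.pi                         # wavenumber for wavelength 1
ang = np.linspace(0, 2 * np.pi, 24, endpoint=False)
sensors = 50.0 * np.stack([np.cos(ang), np.sin(ang)], axis=1)
scatterers = np.array([[1.0, -0.5], [-1.0, 1.0]])   # small point targets

def steering(p):
    d = np.linalg.norm(sensors - p, axis=1)
    return np.exp(1j * k * d) / d     # idealized free-space Green's function

# Born-approximation multi-static response matrix, co-located transmit/receive
K = sum(np.outer(steering(p), steering(p)) for p in scatterers)
K = K + 1e-9 * rng.standard_normal(K.shape)          # small measurement noise

U, s, Vh = np.linalg.svd(K)
noise = U[:, len(scatterers):]        # noise-subspace basis

def indicator(p):
    g = steering(p)
    g = g / np.linalg.norm(g)
    # large where the steering vector is orthogonal to the noise subspace
    return 1.0 / np.linalg.norm(noise.conj().T @ g)

xs = np.arange(-2.0, 2.01, 0.5)       # coarse sampling grid over the domain
grid = [(x, y) for x in xs for y in xs]
vals = np.array([indicator(np.array(p)) for p in grid])
best = grid[int(np.argmax(vals))]
```

On this grid the indicator peaks at a true scatterer position; the robustness the abstract claims comes from building the indicator on only the most stable part of the signal subspace, which this sketch does not attempt to reproduce.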
Multi-dimensional cubic interpolation for ICF hydrodynamics simulation
International Nuclear Information System (INIS)
Aoki, Takayuki; Yabe, Takashi.
1991-04-01
A new interpolation method is proposed to solve the multi-dimensional hyperbolic equations which appear in describing the hydrodynamics of inertial confinement fusion (ICF) implosion. The advection phase of the cubic-interpolated pseudo-particle (CIP) method is greatly improved by assuming continuity of the second and third spatial derivatives in addition to the physical value and the first derivative. These derivatives are derived from the given physical equation. To evaluate the new method, Zalesak's example is tested, and we successfully obtain good results. (author)
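The cubic interpolation underlying CIP can be illustrated with the classic one-dimensional advection phase (our sketch of the basic scheme, not the improved method proposed in the paper): both the value and its first derivative are advected with an upwind cubic Hermite polynomial, and at unit CFL number the update reduces to an exact shift.

```python
import numpy as np

def cip_step(f, g, u, dt, h):
    """One CIP advection step for constant u > 0 on a periodic grid.

    f: nodal values, g: nodal first derivatives df/dx."""
    fup, gup = np.roll(f, 1), np.roll(g, 1)   # upstream neighbour (i - 1)
    D = -h                                    # signed distance to upstream node
    xi = -u * dt                              # departure-point offset
    # cubic Hermite matching f, g at the node and at the upstream node
    a = (g + gup) / D**2 + 2.0 * (f - fup) / D**3
    b = 3.0 * (fup - f) / D**2 - (2.0 * g + gup) / D
    fn = a * xi**3 + b * xi**2 + g * xi + f
    gn = 3.0 * a * xi**2 + 2.0 * b * xi + g
    return fn, gn

N, h, u = 100, 0.1, 1.0
x = h * np.arange(N)
f0 = np.exp(-((x - 5.0) ** 2))       # Gaussian profile
g0 = -2.0 * (x - 5.0) * f0           # its analytic derivative
f, g = f0.copy(), g0.copy()
for _ in range(10):                  # CFL = u * dt / h = 1: exact shifting
    f, g = cip_step(f, g, u, dt=0.1, h=h)
```

For fractional CFL numbers the same polynomial is simply evaluated at the departure point, which is where the scheme's low numerical diffusion comes from.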
Multi-dimensional beam emittance and β-functions
International Nuclear Information System (INIS)
Buon, J.
1993-05-01
The concept of r.m.s. emittance is extended to the case of several degrees of freedom that are coupled. This multi-dimensional emittance is lower than the product of the emittances attached to each degree of freedom, but is conserved in a linear motion. An envelope hyperellipsoid is introduced to define the β-functions of the beam envelope. In contrast to one-degree-of-freedom motion, it is emphasized that these envelope functions differ from the amplitude functions of the normal modes of motion, as a result of the difference between the Liouville and Lagrange invariants. (author) 4 refs
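In second-moment terms, the multi-dimensional r.m.s. emittance is the square root of the determinant of the phase-space covariance matrix, and the inequality stated in the abstract then follows from Fischer's inequality for positive-semidefinite matrices. A numerical sketch in our own notation (the beam and its coupling are invented):

```python
import numpy as np

def rms_emittance(coords):
    """r.m.s. emittance = sqrt(det Sigma) of phase-space coordinates.

    coords: (N, 2n) array, e.g. columns (x, x', y, y') for n = 2."""
    sigma = np.cov(coords, rowvar=False)
    return np.sqrt(np.linalg.det(sigma))

rng = np.random.default_rng(0)
# a coupled 4-D Gaussian beam: mix the x-x' and y-y' planes linearly
base = rng.standard_normal((10000, 4))
coupling = np.array([[1.0, 0.0, 0.3, 0.0],
                     [0.0, 1.0, 0.0, 0.2],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])
beam = base @ coupling.T

eps4 = rms_emittance(beam)            # full 4-D emittance
eps_x = rms_emittance(beam[:, :2])    # (x, x') plane alone
eps_y = rms_emittance(beam[:, 2:])    # (y, y') plane alone
```

With any coupling present the 4-D emittance is strictly below the product of the per-plane emittances, exactly as the abstract states.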
Multi-dimensional technology-enabled social learning approach
DEFF Research Database (Denmark)
Petreski, Hristijan; Tsekeridou, Sofia; Prasad, Neeli R.
2013-01-01
... in learning while socializing within their learning communities. However, their "educational" usage is still limited to the facilitation of online learning communities and to the collaborative authoring of learning material complementary to existing formal (e-)learning services. If the educational system doesn't respond to these systemic and structural changes and/or challenges and retains its status quo, it is jeopardizing its own existence or the existence of education as we know it. This paper aims to proceed one step further by proposing a multi-dimensional approach for technology-enabled social...
Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.
2016-01-01
This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model
Multi-dimensional medical images compressed and filtered with wavelets
International Nuclear Information System (INIS)
Boyen, H.; Reeth, F. van; Flerackers, E.
2002-01-01
Full text: Using the standard wavelet decomposition methods, multi-dimensional medical images can be compressed and filtered by repeating the wavelet algorithm on 1D signals in an extra loop per extra dimension. In the non-standard decomposition for multi-dimensional images, the areas that must be zero-filled in the case of band- or notch-filters are more complex than geometric areas such as rectangles or cubes. Adding an additional dimension to this algorithm, up to 4D (e.g. a 3D beating heart), increases the geometric complexity of those areas even more. The aim of our study was to calculate the boundaries of the resulting complex geometric areas, so that we can use the faster non-standard decomposition to compress and filter multi-dimensional medical images. Because many 3D medical images taken by PET or SPECT cameras have only a few layers in the Z-dimension, and compressing images in a dimension with few voxels is usually not worthwhile, we provide a solution in which one can choose which dimensions will be compressed or filtered. With the proposal of non-standard decomposition on Daubechies' wavelets D2 to D20 by Steven Gollmer in 1992, 1D data can be compressed and filtered. Each additional level works only on the smoothed data, so the transformation time halves per extra level. Zero-filling a well-defined area after the wavelet transform and then performing the inverse transform accomplishes the filtering. To compress and filter up to 4D images with the faster non-standard wavelet decomposition method, we have investigated a new method for calculating the boundaries of the areas which must be zero-filled in the case of filtering. This is especially true for band- and notch-filtering. Contrary to the standard decomposition method, the areas are no longer rectangles in 2D or cubes in 3D or a row of cubes in 4D: they are rectangles expanded with a half-sized rectangle in the other direction for 2D, cubes expanded with half cubes in one and quarter cubes in the...
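The "extra loop per extra dimension" of the standard decomposition is easy to demonstrate with a single-level orthonormal Haar transform (our sketch; Haar stands in for the Daubechies D2-D20 filters discussed in the text):

```python
import numpy as np

def haar_axis(a, axis):
    """Single-level orthonormal Haar transform along one axis."""
    a = np.moveaxis(a, axis, 0)
    s = (a[0::2] + a[1::2]) / np.sqrt(2)   # smooth (low-pass) half
    d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail (high-pass) half
    return np.moveaxis(np.concatenate([s, d]), 0, axis)

def ihaar_axis(a, axis):
    a = np.moveaxis(a, axis, 0)
    n = a.shape[0] // 2
    s, d = a[:n], a[n:]
    out = np.empty_like(a)
    out[0::2] = (s + d) / np.sqrt(2)
    out[1::2] = (s - d) / np.sqrt(2)
    return np.moveaxis(out, 0, axis)

def haar_nd(a):
    # the "extra loop per extra dimension": repeat the 1-D transform per axis
    for ax in range(a.ndim):
        a = haar_axis(a, ax)
    return a

def ihaar_nd(a):
    for ax in reversed(range(a.ndim)):
        a = ihaar_axis(a, ax)
    return a

rng = np.random.default_rng(0)
vol = rng.standard_normal((4, 8, 16))   # stand-in for a small 3-D image stack
coeffs = haar_nd(vol)
```

Zeroing selected detail coefficients in `coeffs` before calling `ihaar_nd` implements the kind of band or notch filtering the abstract describes; the zero-fill regions are simple blocks here precisely because this is the standard, not the non-standard, decomposition.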
Analysis of UPTF downcomer tests with the Cathare multi-dimensional model
International Nuclear Information System (INIS)
Dor, I.
1993-01-01
This paper presents the analysis and the modelling, with the system code CATHARE, of UPTF downcomer refill tests simulating the refill phase of a large-break LOCA. The modelling approach in a system code is discussed. First, the reasons why available flooding correlations are difficult to use in a system code in this particular case are developed. Then the use of a 1-D modelling of the downcomer with specific closure relations for the annular geometry is examined. But UPTF 1:1 scale tests and CREARE reduced-scale tests point out some weaknesses of this modelling due to the particular multi-dimensional nature of the flow in the upper part of the downcomer. Thus a 2-D model is elaborated and implemented into the CATHARE version 1.3e code. The assessment of the model is based on UPTF 1:1 scale tests (saturated and subcooled conditions). Discretization and meshing influence are investigated. On the basis of the saturated tests, a new discretization is proposed for different terms of the momentum balance equations (interfacial friction, momentum transport terms), which results in a significant improvement. Sensitivity studies performed on the subcooled tests show that the water downflow predictions are improved by increasing the condensation in the downcomer. (author). 8 figs., 5 tabs., 9 refs., 2 appendices
International Nuclear Information System (INIS)
Dinh Nho Hao; Nguyen Trung Thanh; Sahli, Hichem
2008-01-01
In this paper we consider a multi-dimensional inverse heat conduction problem with time-dependent coefficients in a box, which is well known to be severely ill-posed, by a variational method. The gradient of the functional to be minimized is obtained with the aid of an adjoint problem, and the conjugate gradient method with a stopping rule is then applied to this ill-posed optimization problem. To enhance the stability and accuracy of the numerical solution, we apply this scheme to the discretized inverse problem rather than to the continuous one. The difficulties with the large dimensions of the discretized problems are overcome by a splitting method which only requires the solution of easy-to-solve one-dimensional problems. The numerical results provided by our method are very good, and the techniques seem to be very promising.
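The conjugate-gradient-with-stopping-rule strategy can be miniaturized on a generic ill-posed linear system. The sketch below is our illustration only (CG on the normal equations with a discrepancy-principle stop, on a Hilbert matrix, assuming the noise level is known), not the authors' heat-conduction solver:

```python
import numpy as np

def cgls(A, y, delta, tau=1.1, max_iter=100):
    """CG on the normal equations A^T A x = A^T y, stopped as soon as the
    residual drops to the noise level delta (discrepancy principle)."""
    x = np.zeros(A.shape[1])
    r = y - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tau * delta:
            break                    # stopping rule: do not fit the noise
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

n = 8
# Hilbert matrix: a classic severely ill-conditioned test problem
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
noise = 1e-4 * rng.standard_normal(n)
y = A @ x_true + noise

x_reg = cgls(A, y, delta=np.linalg.norm(noise))
x_naive = np.linalg.solve(A, y)      # noise amplified by the ill-conditioning
```

Stopping early acts as regularization: iterating CG to convergence would reproduce the wildly noise-amplified naive solution.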
Calculation of multi-dimensional dose distribution in medium due to proton beam incidence
International Nuclear Information System (INIS)
Kawachi, Kiyomitsu; Inada, Tetsuo
1978-01-01
A method for analyzing the multi-dimensional dose distribution in a medium due to proton beam incidence is presented, with the aim of obtaining a reliable and simplified method from a clinical viewpoint, especially for the medical treatment of cancer. The heavy-ion beam extracted from an accelerator has to be adjusted to fit the cancer location and size, using a modified range modulator, a ridge filter, a bolus and a special scanning apparatus. Precise calculation of the multi-dimensional dose distribution of the proton beam is needed to confine the treatment to a limited region. The analytical formulas consist of those for the fluence distribution in a medium, the divergence of the flying range, the energy distribution itself, the lateral dose distribution, and the two-dimensional dose distribution. Presented and evaluated as analytical results are: the fluence distribution in polystyrene for protons with incident energies of 40 and 60 MeV; the energy distribution of protons at the position of the Bragg peak for various incident energies; the depth dose distribution in polystyrene for protons with incident energies of 40 and 60 MeV and an average energy of 100 MeV; the proton fluence and dose distribution as functions of depth for an incident average energy of 250 MeV; the statistically estimated percentage errors in the proton fluence and dose distribution; and the estimated minimum detectable tumor thickness as a function of the number of incident protons for different incident spectra with an average energy of 250 MeV. (Nakai, Y.)
International Nuclear Information System (INIS)
Riaz, Nadeem; Wiersma, Rodney; Mao Weihua; Xing Lei; Shanker, Piyush; Gudmundsson, Olafur; Widrow, Bernard
2009-01-01
Intra-fraction tumor tracking methods can improve radiation delivery during radiotherapy sessions. Image acquisition for tumor tracking and subsequent adjustment of the treatment beam with gating or beam tracking introduces time latency and necessitates predicting the future position of the tumor. This study evaluates the use of multi-dimensional linear adaptive filters and support vector regression to predict the motion of lung tumors tracked at 30 Hz. We expand on the prior work of other groups who have looked at adaptive filters by using a general framework of a multiple-input single-output (MISO) adaptive system that uses multiple correlated signals to predict the motion of a tumor. We compare the performance of these two novel methods to conventional methods like linear regression and single-input single-output (SISO) adaptive filters. At 400 ms latency, the average root-mean-square errors (RMSEs) for the 14 treatment sessions studied using no prediction, linear regression, the SISO adaptive filter, MISO and support vector regression are 2.58, 1.60, 1.58, 1.71 and 1.26 mm, respectively. At 1 s, the RMSEs are 4.40, 2.61, 3.34, 2.66 and 1.93 mm, respectively. We find that support vector regression most accurately predicts the future tumor position of the methods studied and can provide an RMSE of less than 2 mm at 1 s latency. Also, a multi-dimensional adaptive filter framework provides improved performance over single-dimension adaptive filters. Work is underway to combine these two frameworks to improve performance.
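The MISO adaptive-filter framework can be sketched with a normalized LMS update (our toy; the clinical 30 Hz tumor traces are replaced by invented sinusoidal surrogates, and NLMS is one common adaptive filter, not necessarily the variant used in the study):

```python
import numpy as np

def miso_nlms_predict(inputs, target, taps=8, mu=0.5, eps=1e-6):
    """Predict target[t] from past samples of several correlated inputs.

    inputs: (M, T) array of M signals; target: (T,) future-shifted signal."""
    M, T = inputs.shape
    w = np.zeros(M * taps)
    pred = np.zeros(T)
    for t in range(taps, T):
        u = inputs[:, t - taps:t].ravel()      # stacked multi-input regressor
        pred[t] = w @ u
        e = target[t] - pred[t]
        w += mu * e * u / (eps + u @ u)        # normalized LMS update
    return pred

rng = np.random.default_rng(0)
t = np.arange(3000) / 30.0                     # 30 Hz sampling
motion = np.sin(2 * np.pi * 0.25 * t)          # breathing-like 4 s cycle
latency = 12                                   # 400 ms at 30 Hz
inputs = np.stack([motion + 0.01 * rng.standard_normal(t.size),
                   np.cos(2 * np.pi * 0.25 * t)])   # two correlated signals
target = np.roll(motion, -latency)             # future tumor-like position
pred = miso_nlms_predict(inputs, target)

warm = 600                                     # skip the adaptation transient
rmse_pred = np.sqrt(np.mean((pred[warm:] - target[warm:]) ** 2))
rmse_none = np.sqrt(np.mean((motion[warm:] - target[warm:]) ** 2))
```

The "no prediction" baseline simply reports the current position; after the filter adapts, the MISO prediction error falls well below it, mirroring the qualitative finding of the study.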
Optimal sensor configuration for flexible structures with multi-dimensional mode shapes
International Nuclear Information System (INIS)
Chang, Minwoo; Pakzad, Shamim N
2015-01-01
A framework for deciding the optimal sensor configuration is implemented for civil structures with multi-dimensional mode shapes, which enhances the applicability of structural health monitoring to existing structures. Optimal sensor placement (OSP) algorithms are used to determine the best sensor configuration for structures with a priori knowledge of modal information. The signal strength at each node is evaluated by the effective independence and modified variance methods. The Euclidean norm of the signal strength indices associated with each node is used to extend OSP applicability to flexible structures. The number of sensors for each method is determined using a threshold for the modal assurance criterion (MAC) between estimated (from a set of observations) and target mode shapes. Kriging is utilized to infer the modal estimates for unobserved locations as a weighted sum of known neighbors. A Kriging model can be expressed as the sum of a linear regression and a random error, which is assumed to be the realization of a stochastic process. This study presents the effects of the Kriging parameters on the accurate estimation of mode shapes and the minimum number of sensors. The feasible ranges that satisfy the MAC criteria are investigated and used to suggest adequate search bounds for the associated parameters. The finite element model of a tall building is used to demonstrate the application of optimal sensor configuration. The dynamic modes of the flexible structure at the centroid are appropriately interpreted into the outermost sensor locations when the OSP methods are implemented. Kriging is successfully used to interpolate the mode shapes from a set of sensors and to monitor structures with multi-dimensional mode shapes. (paper)
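The effective independence step mentioned above can be written compactly (our sketch with a random, hypothetical mode-shape matrix rather than the paper's building model): candidate locations are deleted one at a time according to their leverage on the modal Fisher information.

```python
import numpy as np

def effective_independence(Phi, n_sensors):
    """Select sensor rows of the mode-shape matrix Phi by the EfI method."""
    idx = list(range(Phi.shape[0]))
    P = Phi.copy()
    while len(idx) > n_sensors:
        # leverage of each candidate on the Fisher information P^T P:
        # the diagonal of the projection matrix P (P^T P)^-1 P^T
        E = np.einsum('ij,ji->i', P, np.linalg.solve(P.T @ P, P.T))
        drop = int(np.argmin(E))      # least informative candidate
        idx.pop(drop)
        P = np.delete(P, drop, axis=0)
    return idx

rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 3))    # 30 candidate DOFs, 3 target modes
sensors = effective_independence(Phi, 6)
```

Because the leverages sum to the number of modes, removing the smallest one discards the least possible modal information at each step; the retained rows keep the reduced mode shapes linearly independent.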
Device for multi-dimensional γ-γ-coincidence study
International Nuclear Information System (INIS)
Gruzinova, T.M.; Erokhina, K.I.; Kutuzov, V.I.; Lemberg, I.Kh.; Petrov, S.A.; Revenko, V.S.; Senin, A.T.; Chugunov, I.N.; Shishlinov, V.M.
1977-01-01
A device for studying multi-dimensional γ-γ coincidences is described which operates on-line with the BESM-4 computer. The device comprises Ge(Li) detectors, analog-to-digital converters, shaper discriminators and fast amplifiers. To control the operation of the device as a whole and to issue the necessary commands, an information distributor has been developed. The following specific features of the device are noted: it may operate both in the regime of recording spectra of direct γ radiation in the block memory of a multi-channel analyzer and in the regime of data transfer to the computer memory; it performs registration of coincidences; and it transfers information to the computer, which has a channel of direct access to memory. The procedure for processing the data, which are recorded on magnetic tape, is considered. The partial spectra obtained are in good agreement with data obtained elsewhere.
Benchmarking multi-dimensional large strain consolidation analyses
International Nuclear Information System (INIS)
Priestley, D.; Fredlund, M.D.; Van Zyl, D.
2010-01-01
Analyzing the consolidation of tailings slurries and dredged fills requires a more extensive formulation than is used for common (small strain) consolidation problems. Large strain consolidation theories have traditionally been limited to 1-D formulations. SoilVision Systems has developed the capacity to analyze large strain consolidation problems in 2 and 3-D. The benchmarking of such formulations is not a trivial task. This paper presents several examples of modeling large strain consolidation in the beta versions of the new software. These examples were taken from the literature and were used to benchmark the large strain formulation used by the new software. The benchmarks reported here are: a comparison to the consolidation software application CONDES0, Townsend's Scenario B and a multi-dimensional analysis of long-term column tests performed on oil sands tailings. All three of these benchmarks were attained using the SVOffice suite. (author)
A Multi-Dimensional Classification Model for Scientific Workflow Characteristics
Energy Technology Data Exchange (ETDEWEB)
Ramakrishnan, Lavanya; Plale, Beth
2010-04-05
Workflows have been used to model repeatable tasks or operations in manufacturing, business processes, and software. In recent years, workflows have increasingly been used for the orchestration of science discovery tasks that use distributed resources and web services environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains, including bioinformatics and biomedicine, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to a high-level understanding of scientific workflows.
Anonymous voting for multi-dimensional CV quantum system
International Nuclear Information System (INIS)
Shi Rong-Hua; Xiao Yi; Shi Jin-Jing; Guo Ying; Lee, Moon-Ho
2016-01-01
We investigate the design of anonymous voting protocols, a CV-based binary-valued ballot and a CV-based multi-valued ballot with continuous variables (CV), in a multi-dimensional quantum cryptosystem to ensure the security of the voting procedure and data privacy. Quantum entangled states are employed in the continuous-variable quantum system to carry the voting information and assist information transmission, which takes advantage of GHZ-like states in terms of improving the utilization of quantum states by decreasing the number of required quantum states. It provides a potential approach to achieving efficient quantum anonymous voting with high transmission security, especially in large-scale votes. (paper)
Fast multi-dimensional NMR by minimal sampling
Kupče, Ēriks; Freeman, Ray
2008-03-01
A new scheme is proposed for very fast acquisition of three-dimensional NMR spectra based on minimal sampling, instead of the customary step-wise exploration of all of evolution space. The method relies on prior experiments to determine accurate values for the evolving frequencies and intensities from the two-dimensional 'first planes' recorded by setting t1 = 0 or t2 = 0. With this prior knowledge, the entire three-dimensional spectrum can be reconstructed by an additional measurement of the response at a single location (t1∗,t2∗) where t1∗ and t2∗ are fixed values of the evolution times. A key feature is the ability to resolve problems of overlap in the acquisition dimension. Applied to a small protein, agitoxin, the three-dimensional HNCO spectrum is obtained 35 times faster than systematic Cartesian sampling of the evolution domain. The extension to multi-dimensional spectroscopy is outlined.
Advanced concepts in multi-dimensional radiation detection and imaging
International Nuclear Information System (INIS)
Vetter, Kai; Barnowski, Ross; Pavlovsky, Ryan; Haefner, Andy; Torii, Tatsuo; Shikaze, Yoshiaki; Sanada, Yukihisa
2016-01-01
Recent developments in detector fabrication, signal readout, and data processing enable new concepts in radiation detection that are relevant for applications ranging from fundamental physics to medicine as well as nuclear security and safety. We present recent progress in multi-dimensional radiation detection and imaging in the Berkeley Applied Nuclear Physics program. It is based on the ability to reconstruct scenes in three dimensions and fuse them with gamma-ray image information. We are using the High-Efficiency Multimode Imager HEMI in its Compton imaging mode and combining it with contextual sensors such as the Microsoft Kinect or visual cameras. This new concept of volumetric imaging, or scene data fusion, provides unprecedented capabilities in radiation detection and imaging relevant to the detection and mapping of radiological and nuclear materials. This concept brings us one step closer to seeing the world with gamma-ray eyes. (author)
MEASURING PERFORMANCE IN ORGANIZATIONS FROM MULTI-DIMENSIONAL PERSPECTIVE
Directory of Open Access Journals (Sweden)
ȘTEFĂNESCU CRISTIAN
2017-08-01
Full Text Available In the present turbulent financial and economic conditions, a major challenge for the general management of organizations, and in particular for strategic human resources management, is to establish a clear, coherent and consistent framework for measuring organizational performance and economic efficiency. This paper aims to conduct an exploratory survey of the literature concerning the measurement of organizational performance. Based on the results of this research, the paper proposes a multi-dimensional model for measuring organizational performance, providing a mechanism that allows quantification of performance based on selected criteria. The model attempts to eliminate the inconsistencies and incongruities of the organizational effectiveness models developed by specialists in organization theory, the performance measurement models developed by specialists in accounting management, and the models for measuring efficiency and effectiveness developed by specialists in strategic management and entrepreneurship.
Devaney chaos, Li-Yorke chaos, and multi-dimensional Li-Yorke chaos for topological dynamics
Dai, Xiongping; Tang, Xinjia
2017-11-01
Let π : T × X → X, written T↷π X, be a topological semiflow/flow on a uniform space X with T a multiplicative topological semigroup/group, not necessarily discrete. We then prove: if T↷π X is non-minimal topologically transitive with dense almost periodic points, then it is sensitive to initial conditions. As a result, Devaney chaos ⇒ sensitivity to initial conditions in this very general setting. Let R+↷π X be a C0-semiflow on a Polish space; then we show: if R+↷π X is topologically transitive with at least one periodic point p and there is a dense orbit with no nonempty interior, then it is multi-dimensional Li-Yorke chaotic; that is, there is an uncountable set Θ ⊆ X such that for any k ≥ 2 and any distinct points x1, …, xk ∈ Θ, one can find two time sequences sn → ∞, tn → ∞ with […]. Moreover, let X be a non-singleton Polish space; then we prove: any weakly-mixing C0-semiflow R+↷π X is densely multi-dimensional Li-Yorke chaotic; any minimal weakly-mixing topological flow T↷π X with T abelian is densely multi-dimensional Li-Yorke chaotic; any weakly-mixing topological flow T↷π X is densely Li-Yorke chaotic. In addition, we construct a completely Li-Yorke chaotic minimal SL(2, R)-acting flow on the compact metric space R ∪ {∞}. Our various chaotic dynamics are sensitive to the choice of the topology of the phase semigroup/group T.
Advanced multi-dimensional imaging of gamma-ray radiation
International Nuclear Information System (INIS)
Woodring, Mitchell; Beddingfield, David; Souza, David; Entine, Gerald; Squillante, Michael; Christian, James; Kogan, Alex
2003-01-01
The tracking of radiation contamination and distribution has become a high-priority US DOE task. To support DOE needs, Radiation Monitoring Devices Inc. has been actively carrying out research and development on a gamma-radiation imager, RadCam 2000™. The imager is based upon a position-sensitive PMT (PSPMT) coupled to a scintillator near a MURA coded aperture. The modulated gamma flux detected by the PSPMT is mathematically decoded to produce images that are computer displayed in near real time. Additionally, we have developed a data-manipulation scheme that allows a multi-dimensional data array, comprised of x position, y position, and energy, to be used in the imaging process. In the imager software, a gate can be set on a specific isotope energy to reveal where in the field of view the gated data lie or, conversely, a gate can be set on an area in the field of view to examine which isotopes are present in that area. This process is complicated by the FFT decoding used with the coded aperture; however, we have achieved excellent performance, and results are presented here.
Secondary Channel Bifurcation Geometry: A Multi-dimensional Problem
Gaeuman, D.; Stewart, R. L.
2017-12-01
The construction of secondary channels (or side channels) is a popular strategy for increasing aquatic habitat complexity in managed rivers. Such channels, however, frequently experience aggradation that prevents surface water from entering the side channels near their bifurcation points during periods of relatively low discharge. This failure to maintain an uninterrupted surface water connection with the main channel can reduce the habitat value of side channels for fish species that prefer lotic conditions. Various factors have been proposed as potential controls on the fate of side channels, including water surface slope differences between the main and secondary channels, the presence of main channel secondary circulation, transverse bed slopes, and bifurcation angle. A quantitative assessment of more than 50 natural and constructed secondary channels in the Trinity River of northern California indicates that bifurcations can assume a variety of configurations that are formed by different processes and whose longevity is governed by different sets of factors. Moreover, factors such as bifurcation angle and water surface slope vary with discharge level and are continuously distributed in space, such that they must be viewed as a multi-dimensional field rather than a single-valued attribute that can be assigned to a particular bifurcation.
MULTI-DIMENSIONAL PATTERN DISCOVERY OF TRAJECTORIES USING CONTEXTUAL INFORMATION
Directory of Open Access Journals (Sweden)
M. Sharif
2017-10-01
Full Text Available The movement of point objects is highly sensitive to the underlying situations and conditions during the movement, which are known as contexts. Analyzing movement patterns while accounting for contextual information helps to better understand how point objects behave in various contexts and how contexts affect their trajectories. One potential solution for discovering the patterns of moving objects is analyzing the similarities of their trajectories. This article, therefore, contextualizes the similarity measure of trajectories using not only their spatial footprints but also a notion of internal and external contexts. The dynamic time warping (DTW) method is employed to assess the multi-dimensional similarities of trajectories. The results of similarity searches are then utilized in discovering the relative movement patterns of the moving point objects. Several experiments are conducted on real datasets obtained from commercial airplanes and the weather information during the flights. The results demonstrated the robustness of the DTW method in quantifying the commonalities of trajectories and discovering movement patterns with 80% accuracy. Moreover, the results revealed the importance of exploiting contextual information, because it can both enhance and restrict movements.
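The DTW method at the heart of this trajectory-similarity work can be sketched in a few lines. The following is an illustrative Python implementation of the classic dynamic programming recurrence, not the authors' code; the pointwise distance is pluggable, so a multi-dimensional variant only needs a distance function that also weighs contextual attributes:

```python
def dtw_distance(a, b, dist=lambda p, q: abs(p - q)):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between the prefixes a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

For multi-dimensional trajectories one would pass sequences of tuples together with, for example, a Euclidean distance augmented by a term comparing contextual attributes such as weather.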
The development of a multi-dimensional gambling accessibility scale.
Hing, Nerilee; Haw, John
2009-12-01
The aim of the current study was to develop a scale of gambling accessibility that would have theoretical significance for exposure theory and also serve to highlight the accessibility risk factors for problem gambling. Scale items were generated from the Productivity Commission's (Australia's Gambling Industries: Report No. 10. AusInfo, Canberra, 1999) recommendations and tested on a group with high exposure to the gambling environment. In total, 533 gaming venue employees (aged 18-70 years; 67% women) completed a questionnaire that included six 13-item scales measuring accessibility across a range of gambling forms (gaming machines, keno, casino table games, lotteries, horse and dog racing, sports betting). Also included in the questionnaire was the Problem Gambling Severity Index (PGSI), along with measures of gambling frequency and expenditure. Principal components analysis indicated that a common three-factor structure existed across all forms of gambling; the factors were labelled social accessibility, physical accessibility and cognitive accessibility. However, convergent validity was not demonstrated, with inconsistent correlations between each subscale and measures of gambling behaviour. These results are discussed in light of exposure theory and the further development of a multi-dimensional measure of gambling accessibility.
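The principal components analysis used to uncover such a factor structure reduces, at its core, to an eigen-decomposition of the item covariance matrix. The sketch below is a generic illustration of that step, not the authors' analysis pipeline:

```python
import numpy as np

def principal_components(X, k):
    """Return the variances and loadings of the first k principal components
    of an (n_respondents x n_items) response matrix X."""
    Xc = X - X.mean(axis=0)            # centre each item
    cov = np.cov(Xc, rowvar=False)     # item covariance matrix
    w, v = np.linalg.eigh(cov)         # eigh returns ascending eigenvalues
    order = np.argsort(w)[::-1]        # sort components by explained variance
    return w[order][:k], v[:, order[:k]]
```

A scree inspection of the returned variances is what suggests how many factors (here, three per gambling form) to retain.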
The multi-dimensional roles of astrocytes in ALS.
Yamanaka, Koji; Komine, Okiru
2018-01-01
Despite significant progress in understanding the molecular and genetic aspects of amyotrophic lateral sclerosis (ALS), a fatal neurodegenerative disease characterized by the progressive loss of motor neurons, the precise and comprehensive pathomechanisms remain largely unknown. In addition to motor neuron involvement, recent studies using cellular and animal models of ALS indicate that there is a complex interplay between motor neurons and neighboring non-neuronal cells, such as astrocytes, in non-cell autonomous neurodegeneration. Astrocytes are key homeostatic cells that play numerous supportive roles in maintaining the brain environment. In neurodegenerative diseases such as ALS, astrocytes change their shape and molecular expression patterns and are referred to as reactive or activated astrocytes. Reactive astrocytes in ALS lose their beneficial functions and gain detrimental roles. In addition, interactions between motor neurons and astrocytes are impaired in ALS. In this review, we summarize growing evidence that astrocytes are critically involved in the survival and demise of motor neurons through several key molecules and cascades in astrocytes in both sporadic and inherited ALS. These observations strongly suggest that astrocytes have multi-dimensional roles in disease and are a viable therapeutic target for ALS. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Lau, Chun Sing
This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and of CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of the multi-asset basket-spread option. Since the final formula is in closed form, all the hedging parameters can also be derived in
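The starting point that the thesis generalizes, Kirk's (1995) two-asset spread-call approximation, can be sketched as follows. The function treats the second forward plus the strike as a single lognormal asset and prices the spread with a Black-style formula; the function name and interface are illustrative, not taken from the thesis:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, r, T):
    """Kirk (1995) approximation for a European call on S1 - S2 with strike K,
    given forwards F1, F2, vols sigma1, sigma2 and correlation rho."""
    w = F2 / (F2 + K)                       # weight of the second forward
    sig = sqrt(sigma1**2 - 2.0 * rho * sigma1 * sigma2 * w + (sigma2 * w)**2)
    d1 = (log(F1 / (F2 + K)) + 0.5 * sig**2 * T) / (sig * sqrt(T))
    d2 = d1 - sig * sqrt(T)
    return exp(-r * T) * (F1 * norm_cdf(d1) - (F2 + K) * norm_cdf(d2))
```

Because the formula is closed-form, Greeks follow by direct differentiation, which is the property the thesis exploits in the basket-spread generalization.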
International Nuclear Information System (INIS)
Lydia, Emilio J.; Barros, Ricardo C.
2011-01-01
In this paper we describe a response matrix method for one-speed slab-geometry discrete ordinates (SN) neutral particle transport problems that is completely free from spatial truncation errors. The unknowns in the method are the cell-edge angular fluxes of particles. The numerical results generated for these quantities are exactly those obtained from the analytic solution of the SN problem apart from finite arithmetic considerations. Our method is based on a spectral analysis that we perform in the SN equations with scattering inside a discretization cell of the spatial grid set up on the slab. As a result of this spectral analysis, we are able to obtain an expression for the local general solution of the SN equations. With this local general solution, we determine the response matrix and use the prescribed boundary conditions and continuity conditions to sweep across the discretization cells from left to right and from right to left across the slab, until a prescribed convergence criterion is satisfied. (author)
Directory of Open Access Journals (Sweden)
K. V. Dobrego
2015-01-01
Full Text Available The differential approximation is derived from the radiation transfer equation by averaging over the solid angle. It is one of the more effective methods for engineering calculations of radiative heat transfer in complex three-dimensional thermal power systems with selective and scattering media. A new method for improving the accuracy of the differential approximation, based on auto-adaptable boundary conditions, is introduced in the paper, and its efficiency is demonstrated on 2D test systems. Self-consistent auto-adaptable boundary conditions that take into consideration the non-orthogonal component of the radiation flux incident on the boundary are formulated. It is demonstrated that taking the non-orthogonal incident flux into consideration in multi-dimensional systems, such as furnaces, boilers and combustion chambers, improves the accuracy of radiant flux simulations, particularly in the zones adjacent to the edges of the chamber. Test simulations utilizing the differential approximation method with traditional boundary conditions, with the new self-consistent boundary conditions, and with the "precise" discrete ordinates method were performed. The mean square errors of the resulting radiative fluxes calculated along the boundary of rectangular and triangular test areas were decreased by a factor of 1.5-2 using the auto-adaptable boundary conditions. Radiation flux discontinuities at the corner points of non-symmetric systems are revealed by the auto-adaptable boundary conditions; these cannot be obtained with the conventional boundary conditions.
Masuyama, Hiroyuki
2015-01-01
This paper studies the last-column-block-augmented northwest-corner truncation (LC-block-augmented truncation, for short) of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is an upper bound for the total variation distance between the stationary probability vectors of a block-monotone Markov chain and its LC-block-augmented truncation. The main result is extended to Markov chains that themselves may not be block monotone.
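The truncation itself is easy to illustrate for an ordinary (scalar) chain: keep the northwest corner of the transition matrix and add each row's lost probability mass to the last column, then compare stationary vectors. This is a toy scalar sketch of the construction, not the paper's block-structured analysis:

```python
import numpy as np

def lc_truncation(P, n):
    """Last-column-augmented northwest-corner truncation of a stochastic matrix."""
    Q = P[:n, :n].copy()
    Q[:, -1] += 1.0 - Q.sum(axis=1)   # dump the truncated mass into the last column
    return Q

def stationary(Q):
    """Stationary probability vector via the left eigenproblem pi Q = pi."""
    w, v = np.linalg.eig(Q.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()
```

For a chain with geometric drift toward small states, the total variation distance between the full and truncated stationary vectors shrinks rapidly as the truncation level grows, which is the quantity the paper's bound controls.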
Craft, David
2010-10-01
A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. Copyright © 2009 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
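The quality measure for a sparse Pareto-surface representation can be illustrated in two objectives: the representative points are interpolated by convex combinations, and the error is the largest gap between that interpolation and the true front. This is a toy sketch with an analytic convex front, not the paper's clinical data or exact method:

```python
import numpy as np

def front(x):
    """A convex two-objective Pareto front: f2 = 1/f1 on [1, 2]."""
    return 1.0 / x

def representation_error(k, samples=1000):
    """Max gap between the true front and the piecewise-linear interpolation
    through k representative Pareto-optimal points."""
    xs = np.linspace(1.0, 2.0, k)             # k representative solutions
    dense = np.linspace(1.0, 2.0, samples)
    interp = np.interp(dense, xs, front(xs))  # convex combinations of neighbours
    return np.max(interp - front(dense))      # interpolant lies above a convex front
```

Doubling the number of representatives shrinks this gap quadratically for a smooth convex front, consistent with the paper's observation that few points suffice even in higher dimensions.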
Multi-dimensional discovery of biomarker and phenotype complexes
Directory of Open Access Journals (Sweden)
Huang Kun
2010-10-01
Full Text Available Abstract Background: Given the rapid growth of translational research and personalized healthcare paradigms, the ability to relate and reason upon networks of bio-molecular and phenotypic variables at various levels of granularity in order to diagnose, stage and plan treatments for disease states is highly desirable. Numerous techniques exist that can be used to develop networks of co-expressed or otherwise related genes and clinical features. Such techniques can also be used to create formalized knowledge collections based upon the information incumbent to ontologies and domain literature. However, reports of integrative approaches that bridge such networks to create systems-level models of disease or wellness are notably lacking in the contemporary literature. Results: In response to the preceding gap in knowledge and practice, we report upon a prototypical series of experiments that utilize multi-modal approaches to network induction. These experiments are intended to elicit meaningful and significant biomarker-phenotype complexes spanning multiple levels of granularity. This work has been performed in the experimental context of a large-scale clinical and basic science data repository maintained by the National Cancer Institute (NCI)-funded Chronic Lymphocytic Leukemia Research Consortium. Conclusions: Our results indicate that it is computationally tractable to link orthogonal networks of genes, clinical features, and conceptual knowledge to create multi-dimensional models of interrelated biomarkers and phenotypes. Further, our results indicate that such systems-level models contain interrelated bio-molecular and clinical markers capable of supporting hypothesis discovery and testing. Based on such findings, we propose a conceptual model intended to inform the cross-linkage of the results of such methods. This model has as its aim the identification of novel and knowledge-anchored biomarker-phenotype complexes.
Multi-dimensional conversion to the ion-hybrid mode
International Nuclear Information System (INIS)
Tracy, E.R.; Kaufman, A.N.; Brizard, A.J.; Morehead, J.J.
1996-01-01
We first demonstrate that the dispersion matrix for linear conversion of a magnetosonic wave to an ion-hybrid wave (as in a D-T plasma) can be congruently transformed to Friedland's normal form. As a result, this conversion can be represented as a two-step process of successive linear conversions in phase space. We then proceed to study the multi-dimensional case of tokamak geometry. After Fourier transforming the toroidal dependence, we deal with the two-dimensional poloidal xy-plane and the two-dimensional (kx, ky)-plane, forming a four-dimensional phase space. The dispersion manifolds for the magnetosonic wave [D_M(x, k) = 0] and the ion-hybrid wave [D_H(x, k) = 0] are each three-dimensional. (Their intersection, on which mode conversion occurs, is two-dimensional.) The incident magnetosonic wave (radiated by an antenna) is a two-dimensional set of rays (a Lagrangian manifold): k(x) = ∇θ(x), with θ(x) the phase of the magnetosonic wave. When these rays pierce the ion-hybrid dispersion manifold, they convert to a set of ion-hybrid rays. Then, when those rays intersect the magnetosonic dispersion manifold, they convert to a set of "reflected" magnetosonic rays. This set of rays is distinct from the set of incident rays that have been reflected by the inner surface of the tokamak plasma. As a result, the total destructive interference that can occur in the one-dimensional case may become only partial. We explore the implications of this startling phenomenon both analytically and geometrically.
Wind Farm Power Forecasting for Less Than an Hour Using Multi Dimensional Models
DEFF Research Database (Denmark)
Knudsen, Torben; Bak, Thomas; Jensen, Tom Nørgaard
2018-01-01
The paper focuses on prediction of wind farm power for horizons of 0-10 minutes and not more than one hour using statistical methods. These short term predictions are relevant for transmission system operators, wind farm operators and traders. Previous research indicates that for short time horizons the persistence method performs as well as more complex methods. However, these results are based on accumulated power for an entire wind farm. The contribution in this paper is to develop multi-dimensional linear methods based on measurements of power or wind speed from individual wind turbines, improving the prediction error variance estimate compared to the persistence method. We also present convincing examples showing that the predictions follow the wind farm power over a window of an hour.
A Generic multi-dimensional feature extraction method using multiobjective genetic programming.
Zhang, Yang; Rockett, Peter I
2009-01-01
In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.
Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.
1978-01-01
Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.
Multi-Dimensional Damage Detection for Surfaces and Structures
Williams, Martha; Lewis, Mark; Roberson, Luke; Medelius, Pedro; Gibson, Tracy; Parks, Steen; Snyder, Sarah
2013-01-01
Current designs for inflatable or semi-rigidized structures for habitats and space applications use a multiple-layer construction, alternating thin layers with thicker, stronger layers, which produces a layered composite structure that is much better at resisting damage. Even though such composite structures or layered systems are robust, they can still be susceptible to penetration damage. The ability to detect damage to surfaces of inflatable or semi-rigid habitat structures is of great interest to NASA. Damage caused by impacts of foreign objects such as micrometeorites can rupture the shell of these structures, causing loss of critical hardware and/or the life of the crew. While not all impacts will have a catastrophic result, it will be very important to identify and locate areas of the exterior shell that have been damaged by impacts so that repairs (or other provisions) can be made to reduce the probability of shell wall rupture. This disclosure describes a system that will provide real-time data regarding the health of the inflatable shell or rigidized structures, and information related to the location and depth of impact damage. The innovation described here is a method of determining the size, location, and direction of damage in a multilayered structure. In the multi-dimensional damage detection system, layers of two-dimensional thin film detection layers are used to form a layered composite, with non-detection layers separating the detection layers. The non-detection layers may be either thicker or thinner than the detection layers. The thin-film damage detection layers are thin films of materials with a conductive grid or striped pattern. The conductive pattern may be applied by several methods, including printing, plating, sputtering, photolithography, and etching, and can include as many detection layers that are necessary for the structure construction or to afford the detection detail level required. The damage is detected using a detector or
Malas, Tareq M.
2016-07-21
Understanding and optimizing the properties of solar cells is becoming a key issue in the search for alternatives to nuclear and fossil energy sources. A theoretical analysis via numerical simulations involves solving Maxwell's equations in discretized form and typically requires substantial computing effort. We start from a hybrid-parallel (MPI+OpenMP) production code that implements the Time Harmonic Inverse Iteration Method (THIIM) with Finite-Difference Frequency Domain (FDFD) discretization. Although this algorithm has the characteristics of a strongly bandwidth-bound stencil update scheme, it is significantly different from the popular stencil types that have been exhaustively studied in the high performance computing literature to date. We apply a recently developed stencil optimization technique, multicore wavefront diamond tiling with multi-dimensional cache block sharing, and describe in detail the peculiarities that need to be considered due to the special stencil structure. Concurrency in updating the components of the electric and magnetic fields provides an additional level of parallelism. The dependence of the cache size requirement of the optimized code on the blocking parameters is modeled accurately, and an auto-tuner searches for optimal configurations in the remaining parameter space. We were able to completely decouple the execution from the memory bandwidth bottleneck, accelerating the implementation by a factor of three to four compared to an optimal implementation with pure spatial blocking on an 18-core Intel Haswell CPU.
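Wavefront diamond tiling is involved, but the baseline it is compared against, pure spatial blocking of a bandwidth-bound stencil, is simple to illustrate: sweep the grid block by block so each working set fits in cache, producing bit-identical results to the naive sweep. This is a simplified illustrative sketch in Python/NumPy, not the THIIM/FDFD code:

```python
import numpy as np

def jacobi_naive(u):
    """One naive Jacobi sweep over the interior of a 2-D grid."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])
    return v

def jacobi_blocked(u, bx=16, by=16):
    """Same update, swept in bx-by-by spatial blocks for cache locality."""
    v = u.copy()
    n, m = u.shape
    for i0 in range(1, n - 1, bx):
        for j0 in range(1, m - 1, by):
            i1, j1 = min(i0 + bx, n - 1), min(j0 + by, m - 1)
            v[i0:i1, j0:j1] = 0.25 * (u[i0-1:i1-1, j0:j1] + u[i0+1:i1+1, j0:j1]
                                      + u[i0:i1, j0-1:j1-1] + u[i0:i1, j0+1:j1+1])
    return v
```

Diamond tiling extends this idea across the time dimension as well, which is what allows the paper's code to decouple from the memory bandwidth bottleneck.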
International Nuclear Information System (INIS)
Anusha, L. S.; Nagendra, K. N.
2011-01-01
In two previous papers, we solved the polarized radiative transfer (RT) equation in multi-dimensional (multi-D) geometries with partial frequency redistribution as the scattering mechanism. We assumed Rayleigh scattering as the only source of linear polarization (Q/I, U/I) in both these papers. In this paper, we extend these previous works to include the effect of weak oriented magnetic fields (Hanle effect) on line scattering. We generalize the technique of Stokes vector decomposition in terms of the irreducible spherical tensors T^K_Q, developed by Anusha and Nagendra, to the case of RT with Hanle effect. A fast iterative method of solution (based on the Stabilized Preconditioned Bi-Conjugate-Gradient technique), developed by Anusha et al., is now generalized to the case of RT in magnetized three-dimensional media. We use the efficient short-characteristics formal solution method for multi-D media, generalized appropriately to the present context. The main results of this paper are the following: (1) a comparison of emergent (I, Q/I, U/I) profiles formed in one-dimensional (1D) media, with the corresponding emergent, spatially averaged profiles formed in multi-D media, shows that in the spatially resolved structures, the assumption of 1D may lead to large errors in linear polarization, especially in the line wings. (2) The multi-D RT in semi-infinite non-magnetic media causes a strong spatial variation of the emergent (Q/I, U/I) profiles, which is more pronounced in the line wings. (3) The presence of a weak magnetic field modifies the spatial variation of the emergent (Q/I, U/I) profiles in the line core, by producing significant changes in their magnitudes.
Oceans 2.0: Interactive tools for the Visualization of Multi-dimensional Ocean Sensor Data
Biffard, B.; Valenzuela, M.; Conley, P.; MacArthur, M.; Tredger, S.; Guillemot, E.; Pirenne, B.
2016-12-01
Ocean Networks Canada (ONC) operates ocean observatories on all three of Canada's coasts. The instruments produce 280 gigabytes of data per day, with 1/2 petabyte archived so far. In 2015, 13 terabytes were downloaded by over 500 users from across the world. ONC's data management system is referred to as "Oceans 2.0" owing to its interactive, participative features. A key element of Oceans 2.0 is real time data acquisition and processing: custom device drivers implement the input-output protocol of each instrument. Automatic parsing and calibration takes place on the fly, followed by event detection and quality control. All raw data are stored in a file archive, while the processed data are copied to fast databases. Interactive access to processed data is provided through data download and visualization/quick look features that are adapted to diverse data types (scalar, acoustic, video, multi-dimensional, etc.). Data may be post- or re-processed to add features, analysis or correct errors, update calibrations, etc. A robust storage structure has been developed consisting of an extensive file system and a no-SQL database (Cassandra). Cassandra is a node-based open source distributed database management system. It is scalable and offers improved performance for big data. A key feature is data summarization. The system has also been integrated with web services and an ERDDAP OPeNDAP server, capable of serving scalar and multidimensional data from Cassandra for fixed or mobile devices. A complex data viewer has been developed making use of the big data capability to interactively display live or historic echo sounder and acoustic Doppler current profiler data, where users can scroll, apply processing filters and zoom through gigabytes of data with simple interactions. This new technology brings scientists one step closer to a comprehensive, web-based data analysis environment in which visual assessment, filtering, event detection and annotation can be integrated.
Towards Semantic Web Services on Large, Multi-Dimensional Coverages
Baumann, P.
2009-04-01
Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open GeoSpatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known as advantageous from databases: declarativeness (describe results rather than the algorithms), safety in evaluation (no request can keep a server busy infinitely), and optimizability (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This allows one to webify any program, but does not allow for semantic interoperability: a function is identified only by its function name and parameters, while the semantics is encoded in the (only human readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning à la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which has been adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages. It is abstract in that it
Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA
Messer, O. E. B.; Harris, J. A.; Hix, W. R.; Lentz, E. J.; Bruenn, S. W.; Mezzacappa, A.
2018-04-01
Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.
Towards Optimal Multi-Dimensional Query Processing with BitmapIndices
Energy Technology Data Exchange (ETDEWEB)
Rotem, Doron; Stockinger, Kurt; Wu, Kesheng
2005-09-30
Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. This paper studies strategies for minimizing the access costs for processing multi-dimensional queries using bitmap indices with binning. Innovative features of our algorithm include (a) optimally placing the bin boundaries and (b) dynamically reordering the evaluation of the query terms. In addition, we derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.
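The binning-and-filtering scheme the abstract describes can be sketched in a few lines of Python (a toy illustration in which plain sets stand in for compressed bitmaps; the function names and the four-bin layout are invented for the example):

```python
from bisect import bisect_right

def build_binned_bitmaps(values, boundaries):
    """One bitmap (here: a set of row ids) per bin; bin i holds values in
    [boundaries[i], boundaries[i+1])."""
    bitmaps = [set() for _ in range(len(boundaries) - 1)]
    for row, v in enumerate(values):
        i = bisect_right(boundaries, v) - 1
        bitmaps[i].add(row)
    return bitmaps

def range_query(values, bitmaps, boundaries, lo, hi):
    """Rows with lo <= value < hi. Fully covered bins are answered from the
    index alone; edge bins require a 'candidate check' against the base data,
    which is where the I/O cost studied in the paper arises."""
    hits, checks = set(), 0
    for i, bm in enumerate(bitmaps):
        b_lo, b_hi = boundaries[i], boundaries[i + 1]
        if b_hi <= lo or b_lo >= hi:
            continue                      # bin disjoint from the query range
        if lo <= b_lo and b_hi <= hi:
            hits |= bm                    # bin fully inside: no base-data access
        else:
            for row in bm:                # edge bin: filter out false positives
                checks += 1
                if lo <= values[row] < hi:
                    hits.add(row)
    return hits, checks
```

With 100 uniform values in four bins, a query over [10, 60) is answered from the index for the fully covered middle bin and falls back to candidate checks only for the two edge bins.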
Development of multi-dimensional body image scale for Malaysian female adolescents.
Chin, Yit Siew; Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin
2008-01-01
The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected among 328 female adolescents from a secondary school in Kuantan district, state of Pahang, Malaysia by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, the Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and the Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between the Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders and self-esteem were assessed for construct validity. A seven-factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. In addition, a multi-dimensional body image composite score was proposed to screen for negative body image risk in female adolescents. The results showed that body image was correlated with BMI, risk of eating disorders and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept of body image and provides new insight into its multi-dimensionality in Malaysian female adolescents, with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used in future intervention programs to identify female adolescents who are potentially at risk of developing body image disturbance.
Development and empirical validation of symmetric component measures of multi-dimensional constructs
DEFF Research Database (Denmark)
Sørensen, Hans Eibe; Slater, Stanley F.
2008-01-01
Atheoretical measure purification may lead to construct deficient measures. The purpose of this paper is to provide a theoretically driven procedure for the development and empirical validation of symmetric component measures of multi-dimensional constructs. We place particular emphasis on establ...
Developing a Multi-Dimensional Evaluation Framework for Faculty Teaching and Service Performance
Baker, Diane F.; Neely, Walter P.; Prenshaw, Penelope J.; Taylor, Patrick A.
2015-01-01
A task force was created in a small, AACSB-accredited business school to develop a more comprehensive set of standards for faculty performance. The task force relied heavily on faculty input to identify and describe key dimensions that capture effective teaching and service performance. The result is a multi-dimensional framework that will be used…
Liu, Gi-Zen; Liu, Zih-Hui; Hwang, Gwo-Jen
2011-01-01
Many English learning websites have been developed worldwide, but little research has been conducted concerning the development of comprehensive evaluation criteria. The main purpose of this study is thus to construct a multi-dimensional set of criteria to help learners and teachers evaluate the quality of English learning websites. These…
A Replication Study on the Multi-Dimensionality of Online Social Presence
Mykota, David B.
2015-01-01
The purpose of the present study is to conduct an external replication into the multi-dimensionality of social presence as measured by the Computer-Mediated Communication Questionnaire (Tu, 2005). Online social presence is one of the more important constructs for determining the level of interaction and effectiveness of learning in an online…
Multi-dimensional microanalysis of masklessly implanted atoms using focused heavy ion beam
International Nuclear Information System (INIS)
Mokuno, Yoshiaki; Horino, Yuji; Chayahara, Akiyoshi; Kiuchi, Masato; Fujii, Kanenaga; Satou, Mamoru
1992-01-01
Multi-dimensional structure fabricated by maskless MeV gold implantation in silicon wafer was analyzed by 3 MeV carbon ion microprobe using a microbeam line developed at GIRIO. The minimum line width of the implanted region was estimated to be about 5 μm. The advantages of heavy ions for microanalysis were demonstrated. (author)
Multi-dimensional database design and implementation of dam safety monitoring system
Directory of Open Access Journals (Sweden)
Zhao Erfeng
2008-09-01
Full Text Available To improve the effectiveness of dam safety monitoring database systems, the development process of a multi-dimensional conceptual data model was analyzed and a logic design was achieved in multi-dimensional database mode. The optimal data model was confirmed by identifying data objects, defining relations and reviewing entities. The conversion of relations among entities to foreign keys, and of entities and physical attributes to tables and fields, was interpreted completely. On this basis, a multi-dimensional database reflecting the management and analysis of dam safety monitoring data has been established, for which fact tables and dimension tables have been designed. Finally, based on service design and user interface design, the dam safety monitoring system has been developed with Delphi as the development tool. This development project shows that the multi-dimensional database can simplify the development process and minimize hidden dangers in the database structure design. It is superior to other dam safety monitoring system development models and can provide a new research direction for system developers.
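A star-schema layout of the kind described, with fact tables keyed against dimension tables, can be sketched with an in-memory database (the sensor names, table names and readings are hypothetical, not the actual system's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Dimension tables hold the descriptive attributes of each analysis axis;
# the fact table holds one row per measurement with foreign keys into them.
cur.executescript("""
CREATE TABLE dim_sensor (sensor_id INTEGER PRIMARY KEY, name TEXT, kind TEXT);
CREATE TABLE dim_time   (time_id   INTEGER PRIMARY KEY, day  TEXT);
CREATE TABLE fact_reading (
    sensor_id INTEGER REFERENCES dim_sensor(sensor_id),
    time_id   INTEGER REFERENCES dim_time(time_id),
    value     REAL
);
""")
cur.executemany("INSERT INTO dim_sensor VALUES (?,?,?)",
                [(1, "P-01", "piezometer"), (2, "D-07", "displacement")])
cur.executemany("INSERT INTO dim_time VALUES (?,?)",
                [(1, "2008-09-01"), (2, "2008-09-02")])
cur.executemany("INSERT INTO fact_reading VALUES (?,?,?)",
                [(1, 1, 12.3), (1, 2, 12.9), (2, 1, 0.41), (2, 2, 0.45)])
# A typical OLAP roll-up: aggregate the facts along the sensor dimension.
kind_means = cur.execute(
    """SELECT s.kind, AVG(f.value)
       FROM fact_reading f JOIN dim_sensor s USING (sensor_id)
       GROUP BY s.kind ORDER BY s.kind""").fetchall()
print(kind_means)
```

Keeping measurements in a narrow fact table and descriptive attributes in dimensions is what lets such a system roll up monitoring data along any axis without restructuring the database.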
Skip-webs: Efficient distributed data structures for multi-dimensional data sets
DEFF Research Database (Denmark)
Arge, Lars; Eppstein, David; Goodrich, Michael T.
2005-01-01
querying scenarios, which include linear (one-dimensional) data, such as sorted sets, as well as multi-dimensional data, such as d-dimensional octrees and digital tries of character strings defined over a fixed alphabet. We show how to perform a query over such a set of n items spread among n hosts using O...
Uncertainty Evaluation with Multi-Dimensional Model of LBLOCA in OPR1000 Plant
Energy Technology Data Exchange (ETDEWEB)
Kim, Jieun; Oh, Deog Yeon; Seul, Kwang-Won; Lee, Jin Ho [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2016-10-15
KINS has used KINS-REM (KINS-Realistic Evaluation Methodology), which was developed for Best-Estimate (BE) calculation and uncertainty quantification for regulatory audit. This methodology has been improved continuously through numerous studies of, for example, uncertainty parameters and uncertainty ranges. In this study, to evaluate the applicability of the improved KINS-REM to the OPR1000 plant, an uncertainty evaluation with a multi-dimensional model for confirming multi-dimensional phenomena was conducted with the MARS-KS code. The reactor vessel was modeled using the MULTID component of MARS-KS, and a total of 29 uncertainty parameters were considered in 124 sampled calculations. Through the 124 calculations, run with the Mosaique program and the MARS-KS code, the peak cladding temperature was calculated and the final PCT was determined by the 3rd-order Wilks' formula. The uncertainty parameters with the strongest influence were identified by Pearson coefficient analysis; they were mostly related to plant operation and fuel material properties. The results of the 124 calculations and the sensitivity analysis show that the improved KINS-REM can reasonably be applied to uncertainty evaluations with multi-dimensional model calculations of OPR1000 plants.
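The 124-run sample size quoted above follows from the one-sided Wilks tolerance-limit formula; a short sketch (the function name is ours) reproduces the standard first-, second- and third-order 95/95 values:

```python
from math import comb

def wilks_sample_size(order, coverage=0.95, confidence=0.95):
    """Smallest N such that the `order`-th largest of N random code outputs
    bounds the `coverage` quantile with probability >= `confidence`
    (one-sided Wilks tolerance limit)."""
    n = order
    while True:
        # P(fewer than `order` of N samples exceed the coverage quantile),
        # i.e. the chance the order statistic fails to bound the quantile.
        fail = sum(comb(n, k) * coverage**k * (1 - coverage)**(n - k)
                   for k in range(n - order + 1, n + 1))
        if 1.0 - fail >= confidence:
            return n
        n += 1
```

For 95% coverage at 95% confidence this gives 59, 93 and 124 runs for first, second and third order respectively, matching the 124 calculations used in the study.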
Multi-dimensional information diffusion and balancing market supply: an agent-based approach
Osinga, S.A.; Kramer, M.R.; Hofstede, G.J.; Beulens, A.J.M.
2013-01-01
This agent-based information management model is designed to explore how multi-dimensional information, spreading through a population of agents (for example farmers) affects market supply. Farmers make quality decisions that must be aligned with available markets. Markets distinguish themselves by
Exact asymptotic expansions for solutions of multi-dimensional renewal equations
International Nuclear Information System (INIS)
Sgibnev, M S
2006-01-01
We derive expansions with exact asymptotic expressions for the remainders for solutions of multi-dimensional renewal equations. The effect of the roots of the characteristic equation on the asymptotic representation of solutions is taken into account. The resulting formulae are used to investigate the asymptotic behaviour of the average number of particles in age-dependent branching processes having several types of particles
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific
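The simplest one-parameter, two-template case of moment morphing can be sketched as follows (a toy illustration of the general idea, not the authors' algorithm; the function name and the linear interpolation of mean and width are our assumptions):

```python
import statistics

def moment_morph(template0, template1, m0, m1, m):
    """Morph between two 1-D sample templates given at parameter values
    m0 and m1: linearly map each template so its mean and width match the
    moments interpolated at m, then mix the mapped templates with linear
    weights (the simplest two-template, one-parameter sketch)."""
    f = (m - m0) / (m1 - m0)                      # interpolation fraction
    mu0, mu1 = statistics.fmean(template0), statistics.fmean(template1)
    sd0, sd1 = statistics.stdev(template0), statistics.stdev(template1)
    mu = (1 - f) * mu0 + f * mu1                  # interpolated moments
    sd = (1 - f) * sd0 + f * sd1
    # Affine transform of each template onto the target moments.
    mapped0 = [(x - mu0) * sd / sd0 + mu for x in template0]
    mapped1 = [(x - mu1) * sd / sd1 + mu for x in template1]
    return mapped0, mapped1, (1 - f, f)           # mix with these weights
```

Because each mapped template carries exactly the interpolated mean and width, the weighted mixture also does, which is what makes moment morphing smoother than naively averaging histogram bins.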
Woods, Carl T; Raynor, Annette J; Bruce, Lyndell; McDonald, Zane; Robertson, Sam
2016-07-01
This study investigated whether a multi-dimensional assessment could assist with talent identification in junior Australian football (AF). Participants were recruited from an elite under 18 (U18) AF competition and classified into two groups; talent identified (State U18 Academy representatives; n = 42; 17.6 ± 0.4 y) and non-talent identified (non-State U18 Academy representatives; n = 42; 17.4 ± 0.5 y). Both groups completed a multi-dimensional assessment, which consisted of physical (standing height, dynamic vertical jump height and 20 m multistage fitness test), technical (kicking and handballing tests) and perceptual-cognitive (video decision-making task) performance outcome tests. A multivariate analysis of variance tested the main effect of status on the test criterions, whilst a receiver operating characteristic curve assessed the discrimination provided from the full assessment. The talent identified players outperformed their non-talent identified peers in each test, and the receiver operating characteristic curve discriminated between the talent identified and non-talent identified participants. When compared to single assessment approaches, this multi-dimensional assessment reflects a more comprehensive means of talent identification in AF. This study further highlights the importance of assessing multi-dimensional performance qualities when identifying talent in team sports.
International Nuclear Information System (INIS)
Lee, Seok Min; Lee, Un Chul; Bae, Sung Won; Chung, Bub Dong
2004-01-01
Multi-dimensional flow models in system codes have been developed over many years. RELAP5-3D, CATHARE and TRACE each have their own multi-dimensional flow models and have successfully applied them to system safety analysis. In KAERI, the MARS (Multi-dimensional Analysis of Reactor Safety) code was likewise developed by integrating the RELAP5/MOD3 and COBRA-TF codes. Even though the COBRA-TF module can analyze three-dimensional flow, it is limited in handling 3D shear-stress-dominant phenomena or cylindrical geometry. Therefore, multi-dimensional analysis models were newly developed by implementing three-dimensional momentum flux and diffusion terms. The multi-dimensional model has been assessed against multi-dimensional conceptual problems and CFD code results. Although the assessment results were reasonable, the multi-dimensional model had not been validated for two-phase flow against experimental data. In this paper, a multi-dimensional air-water two-phase flow experiment is simulated and analyzed
Andreev, Valentin I.
2014-01-01
The main aim of this research is to disclose the essence of students' multi-dimensional thinking, also to reveal the rating of factors which stimulate the raising of effectiveness of self-development of students' multi-dimensional thinking in terms of subject-oriented teaching. Subject-oriented learning is characterized as a type of learning where…
An Overview of Multi-Dimensional Models of the Sacramento–San Joaquin Delta
Directory of Open Access Journals (Sweden)
Michael L. MacWilliams
2016-12-01
Full Text Available doi: https://doi.org/10.15447/sfews.2016v14iss4art2Over the past 15 years, the development and application of multi-dimensional hydrodynamic models in San Francisco Bay and the Sacramento–San Joaquin Delta has transformed our ability to analyze and understand the underlying physics of the system. Initial applications of three-dimensional models focused primarily on salt intrusion, and provided a valuable resource for investigating how sea level rise and levee failures in the Delta could influence water quality in the Delta under future conditions. However, multi-dimensional models have also provided significant insights into some of the fundamental biological relationships that have shaped our thinking about the system by exploring the relationship among X2, flow, fish abundance, and the low salinity zone. Through the coupling of multi-dimensional models with wind wave and sediment transport models, it has been possible to move beyond salinity to understand how large-scale changes to the system are likely to affect sediment dynamics, and to assess the potential effects on species that rely on turbidity for habitat. Lastly, the coupling of multi-dimensional hydrodynamic models with particle tracking models has led to advances in our thinking about residence time, the retention of food organisms in the estuary, the effect of south Delta exports on larval entrainment, and the pathways and behaviors of salmonids that travel through the Delta. This paper provides an overview of these recent advances and how they have increased our understanding of the distribution and movement of fish and food organisms. The applications presented serve as a guide to the current state of the science of Delta modeling and provide examples of how we can use multi-dimensional models to predict how future Delta conditions will affect both fish and water supply.
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
Multi dimensional analysis of Design Basis Events using MARS-LMR
International Nuclear Information System (INIS)
Woo, Seung Min; Chang, Soon Heung
2012-01-01
Highlights: ► The one-dimensionally analyzed sodium hot pool is modified to a three-dimensional node system, because a one-dimensional analysis cannot represent the phenomena inside a large pool with many internal components. ► The results of the multi-dimensional analysis are compared with the one-dimensional analysis results in normal operation, TOP (Transient of Over Power), LOF (Loss of Flow), and LOHS (Loss of Heat Sink) conditions. ► Differences in the sodium flow pattern due to structure effects in the hot pool, and in the core mass flow rates, lead to different sodium temperatures and temperature histories under transient conditions. - Abstract: KALIMER-600 (Korea Advanced Liquid Metal Reactor), a pool-type SFR (Sodium-cooled Fast Reactor), was developed by KAERI (Korea Atomic Energy Research Institute). DBE (Design Basis Events) for KALIMER-600 have previously been analyzed in one dimension. In this study, the one-dimensionally analyzed sodium hot pool is modified to a three-dimensional node system, because a one-dimensional analysis cannot represent the phenomena inside a large pool with many components, such as the UIS (Upper Internal Structure), IHX (Intermediate Heat eXchanger), DHX (Decay Heat eXchanger), and pump. The results of the multi-dimensional analysis are compared with the one-dimensional analysis results in normal operation, TOP, LOF, and LOHS conditions. First, the results in the normal operation condition show good agreement between the one- and multi-dimensional analyses. However, according to the sodium temperatures at the core inlet and outlet, the fuel center line, the cladding and the PDRC (Passive Decay heat Removal Circuit), the temperatures in the one-dimensional analysis are generally higher than in the multi-dimensional analysis in all conditions except normal operation, and the PDRC operation time in the one-dimensional analysis is generally longer than
International Nuclear Information System (INIS)
Aydin, Alhun; Sisman, Altug
2016-01-01
By considering the quantum-mechanically minimum allowable energy interval, we exactly count the number of states (NOS) and introduce a discrete density of states (DOS) concept for a particle in a box in various dimensions. Expressions for bounded and unbounded continua are analytically recovered from the discrete ones. Even though substantial fluctuations prevail in the discrete DOS, they are almost completely flattened out after a summation or integration operation. It is seen that the relative errors of the analytical expressions for bounded/unbounded continua rapidly decrease for high NOS values (weak confinement or high energy conditions), while the proposed analytical expressions based on Weyl's conjecture always preserve their lower error characteristic. - Highlights: • Discrete density of states considering minimum energy difference is proposed. • Analytical DOS and NOS formulas based on Weyl conjecture are given. • Discrete DOS and NOS functions are examined for various dimensions. • Relative errors of analytical formulas are much better than the conventional ones.
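The exact NOS counting for a particle in a box can be illustrated with a short sketch (a toy assuming a cubic box, energies in units of h²/8mL², and positive quantum numbers; it reproduces only the simple discrete-versus-unbounded-continuum comparison, not the paper's Weyl-conjecture expressions):

```python
from math import isqrt, pi, sqrt

def nos_exact(eps, dim):
    """Exact number of states: integer tuples (n1..nd), ni >= 1, with
    n1^2 + ... + nd^2 <= eps (dimensionless energy)."""
    if dim == 1:
        return isqrt(int(eps))
    r = isqrt(int(eps))
    return sum(nos_exact(eps - n * n, dim - 1) for n in range(1, r + 1))

def nos_continuum(eps, dim):
    """Unbounded-continuum estimate: volume of the positive orthant of a
    sphere of radius sqrt(eps), for dim = 1, 2 or 3."""
    r = sqrt(eps)
    return {1: r, 2: pi * r**2 / 4, 3: pi * r**3 / 6}[dim]
```

Comparing `nos_exact` with `nos_continuum` at growing energies shows the relative error of the continuum expression shrinking, the weak-confinement/high-energy trend described in the abstract.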
Multi-dimensional analysis of the ECC behavior in the UPI plant Kori Unit 1
International Nuclear Information System (INIS)
Bae, Sungwon; Chung, Bub-Dong; Bang, Young Seok
2008-01-01
A multi-dimensional transient analysis of the LBLOCA of Kori Unit 1 has been performed using the MARS code. Based on the 1-D nodalization of Kori Unit 1, the reactor vessel nodalization has been replaced by a multi-dimensional component. The multi-dimensional component for the reactor vessel is designed with 5 radial, 8 peripheral, and 21 vertical grids. It is assumed that the fuel assemblies are homogeneously distributed in the inner 3 radial grids. The outer 1 radial grid region is modeled as the core bypass, and the outermost radial grid is used for the downcomer region. The corresponding heat structures and fuels are modified to fit the multi-dimensional reactor vessel model. The form drag coefficients for the upper plenum and the core are set to 0.6 and 9.39, respectively. The form drag coefficients for the radial and peripheral directions are assigned the same values, on the assumption of a homogeneous distribution of flow obstacles. After obtaining the 102% power steady operation condition, a cold leg LOCA simulation is performed over a 400 second period. The multi-dimensional steady-state results show no significant differences from the traditional 1-D nodalization results. After the ECC injection starts, a liquid pool is maintained in the upper plenum because the ECCS water cannot overcome the upward gas flow coming from the reactor core through the upper tie plate. The depth of the ECCS water pool is predicted to be about 20% of the total height between the upper tie plate and the center line of the hot leg pipe. The region near the active ECCS shows a deeper liquid pool. The accumulated water flow rate passing the upper tie plate is calculated from the transient results. Most of the downward water flow is obtained at the outermost region of the upper plenum; the downward-flow-dominant region is about 32.3% of the total upper tie plate area. The accumulated ECCS bypass ratio is predicted as 27.64% at 300 seconds. It is calculated
A multi-dimensional assessment of urban vulnerability to climate change in Sub-Saharan Africa
DEFF Research Database (Denmark)
Herslund, Lise Byskov; Jalyer, Fatameh; Jean-Baptiste, Nathalie
2016-01-01
In this paper, we develop and apply a multi-dimensional vulnerability assessment framework for understanding the impacts of climate change-induced hazards in Sub- Saharan African cities. The research was carried out within the European/African FP7 project CLimate change and Urban Vulnerability...... in Africa, which investigated climate change-induced risks, assessed vulnerability and proposed policy initiatives in five African cities. Dar es Salaam (Tanzania) was used as a main case with a particular focus on urban flooding. The multi-dimensional assessment covered the physical, institutional...... encroachment on green and flood-prone land). Scenario modeling suggests that vulnerability will continue to increase strongly due to the expected loss of agricultural land at the urban fringes and loss of green space within the city. However, weak institutional commitment and capacity limit the potential...
International Nuclear Information System (INIS)
Mamun, A.A.; Russel, S.M.; Mendoza-Briceno, C.A.; Alam, M.N.; Datta, T.K.; Das, A.K.
1999-05-01
A rigorous theoretical investigation has been made of multi-dimensional instability of obliquely propagating electrostatic solitary structures in a hot magnetized nonthermal dusty plasma which consists of a negatively charged hot dust fluid, Boltzmann distributed electrons, and nonthermally distributed ions. The Zakharov-Kuznetsov equation for the electrostatic solitary structures that exist in such a dusty plasma system is derived by the reductive perturbation method. The multi-dimensional instability of these solitary waves is also studied by the small-k (long wavelength plane wave) perturbation expansion method. The nature of these solitary structures, the instability criterion, and their growth rate depending on dust-temperature, external magnetic field, and obliqueness are discussed. The implications of these results to some space and astrophysical dusty plasma situations are briefly mentioned. (author)
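For reference, the Zakharov-Kuznetsov equation derived here has, in its standard form (the coefficients $A$ and $B$ depend on the plasma parameters; this generic normalization is our assumption, not necessarily the paper's):

```latex
\frac{\partial \phi}{\partial t}
  + A\,\phi\,\frac{\partial \phi}{\partial \zeta}
  + B\,\frac{\partial}{\partial \zeta}\,\nabla^{2}\phi = 0
```

where $\zeta$ is the coordinate along the direction of propagation relative to the magnetic field and $\nabla^{2}$ is the Laplacian; the small-$k$ perturbation expansion then tests the plane solitary-wave solutions of this equation for stability against long-wavelength oblique perturbations.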
A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube
Zou, Shuzhi; Zhao, Li; Hu, Kongfa
The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.
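The shell idea, materializing only low-dimensional cuboids and assembling higher-dimensional group-bys from them, can be sketched roughly as follows (a toy illustration in the spirit of the minimal cubing approach the paper extends, without its hierarchical part; all names are invented):

```python
from itertools import combinations

def shell_fragments(rows, dims, shell_size):
    """Precompute only the cuboids over at most `shell_size` dimensions,
    stored as inverted indices: dim subset -> {value tuple: tid list}.
    Higher-dimensional group-bys are answered later by intersecting tid
    lists instead of materializing the full cube."""
    frags = {}
    for k in range(1, shell_size + 1):
        for subset in combinations(dims, k):
            index = {}
            for tid, row in enumerate(rows):
                key = tuple(row[d] for d in subset)
                index.setdefault(key, []).append(tid)
            frags[subset] = index
    return frags

def query_count(frags, dims_vals):
    """COUNT for a point query over any dimensions, by intersecting the
    1-D fragments' tid lists."""
    tids = None
    for d, v in dims_vals.items():
        cur = set(frags[(d,)].get((v,), []))
        tids = cur if tids is None else tids & cur
    return len(tids)
```

Even a query over dimensions that were never materialized together (here A and C with a shell of size 2) is answerable from the fragments, which is the storage-versus-query trade-off shell cubing exploits.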
Anku, Sitsofe E.
1997-09-01
Using the reform documents of the National Council of Teachers of Mathematics (NCTM) (NCTM, 1989, 1991, 1995), a theory-based multi-dimensional assessment framework (the "SEA" framework) which should help expand the scope of assessment in mathematics is proposed. This framework uses a context based on mathematical reasoning and has components that comprise mathematical concepts, mathematical procedures, mathematical communication, mathematical problem solving, and mathematical disposition.
Ionizing Shocks in Argon. Part 2: Transient and Multi-Dimensional Effects (Preprint)
2010-09-09
We extend the computations of ionizing shocks in argon to unsteady and multi-dimensional cases, using a collisional-radiative
Functional consequences of trust in the construction supply chain: a multi-dimensional view
Manu, E; Ankrah, N; Chinyio, EA; Proverbs, D
2016-01-01
Trust is often linked to the emergence of cooperative behaviours that contribute to successful project outcomes. However, some have questioned the functional relevance of trust in contractual relations, arguing that control-induced cooperation can emerge from enforcement of contracts. These mixed views are further complicated by the multi-dimensional nature of trust, as different trust dimensions could have varying functional consequences. The aim of this study was to provide some clarity on ...
Development of MARS for multi-dimensional and multi-purpose thermal-hydraulic system analysis
Energy Technology Data Exchange (ETDEWEB)
Lee, Won Jae; Chung, Bub Dong; Kim, Kyung Doo; Hwang, Moon Kyu; Jeong, Jae Jun; Ha, Kwi Seok; Joo, Han Gyu [Korea Atomic Energy Research Institute, T/H Safety Research Team, Yusung, Daejeon (Korea)
2000-10-01
The MARS (Multi-dimensional Analysis of Reactor Safety) code is being developed by KAERI for the realistic thermal-hydraulic simulation of light water reactor system transients. MARS 1.4 has been developed as the final version of the basic code frame for the multi-dimensional analysis of system thermal-hydraulics. Since MARS 1.3, the code has been improved to enhance its capability and user friendliness through the unification of input/output features, code models and code functions, and through code modernization. Further improvements of the thermal-hydraulic models, numerical methods and user friendliness are being carried out for enhanced code accuracy. As a multi-purpose safety analysis code system, a coupled analysis system, MARS/MASTER/CONTEMPT, has been developed using the multiple DLL (Dynamic Link Library) techniques of the Windows system. This code system enables a coupled, that is, more realistic analysis of multi-dimensional thermal-hydraulics (MARS 2.0), three-dimensional core kinetics (MASTER) and containment thermal-hydraulics (CONTEMPT). This paper discusses the MARS development program and the developmental progress of MARS 1.4 and MARS/MASTER/CONTEMPT, focusing on major features of the codes and their verification. It also discusses thermal-hydraulic models and new code features under development. (author)
Study on the construction of multi-dimensional Remote Sensing feature space for hydrological drought
International Nuclear Information System (INIS)
Xiang, Daxiang; Tan, Debao; Wen, Xiongfei; Shen, Shaohong; Li, Zhe; Cui, Yuanlai
2014-01-01
Hydrological drought refers to an abnormal water shortage caused by precipitation and surface water deficits or a groundwater imbalance. Hydrological drought is reflected in a drop in surface water levels, decreased vegetation productivity, an increased temperature difference between day and night, and so on. Remote sensing permits the observation of surface water, vegetation, temperature and other information from a macro perspective. This paper analyzes the correlation and differentiation of both remote sensing and surface-measured indicators, after selecting and extracting a series of representative remote sensing characteristic parameters, such as vegetation index, surface temperature and surface water, from HJ-1A/B CCD/IRS data according to the spectral characteristics of surface features in remote sensing imagery. Finally, multi-dimensional remote sensing features for hydrological drought are built on an intelligent collaborative model. Further, for the Dongting Lake area, two drought events are analyzed to verify the multi-dimensional features, using remote sensing data from different phases together with field observation data. The experimental results show that multi-dimensional features are an effective approach for characterizing hydrological drought
Minimizing I/O Costs of Multi-Dimensional Queries with BitmapIndices
Energy Technology Data Exchange (ETDEWEB)
Rotem, Doron; Stockinger, Kurt; Wu, Kesheng
2006-03-30
Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. A common approach for reducing the size of a bitmap index for high-cardinality attributes is to group ranges of values of an attribute into bins and then build a bitmap for each bin rather than a bitmap for each value of the attribute. Binning reduces storage costs; however, results of queries based on bins often require additional filtering to discard false positives, i.e., records in the result that do not satisfy the query constraints. This additional filtering, also known as "candidate checking," requires access to the base data on disk and involves significant I/O costs. This paper studies strategies for minimizing the I/O costs of "candidate checking" for multi-dimensional queries. This is done by determining the number of bins allocated for each dimension and then placing bin boundaries in optimal locations. Our algorithms use knowledge of the data distribution and query workload. We derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real-life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.
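The bin-then-check strategy described above can be sketched in a few lines (a toy illustration with our own names and data, not the paper's algorithms or its optimal bin-boundary placement):

```python
import numpy as np

def build_binned_bitmaps(values, bin_edges):
    """One boolean bitmap per bin; bin i covers [bin_edges[i], bin_edges[i+1])."""
    bin_ids = np.digitize(values, bin_edges) - 1
    return [bin_ids == i for i in range(len(bin_edges) - 1)]

def range_query(values, bitmaps, bin_edges, lo, hi):
    """Answer lo <= v < hi. Fully covered bins are exact; partially covered edge
    bins may contain false positives and need candidate checks on the base data."""
    result = np.zeros(len(values), dtype=bool)
    candidate_checks = 0
    for i, bm in enumerate(bitmaps):
        b_lo, b_hi = bin_edges[i], bin_edges[i + 1]
        if lo <= b_lo and b_hi <= hi:          # bin fully inside the query range
            result |= bm
        elif b_hi > lo and b_lo < hi:          # edge bin: filter false positives
            idx = np.nonzero(bm)[0]
            candidate_checks += len(idx)       # each check costs base-data I/O
            result[idx] = (values[idx] >= lo) & (values[idx] < hi)
    return result, candidate_checks

vals = np.array([1, 5, 7, 12, 15, 22, 28, 33])
edges = np.array([0, 10, 20, 30, 40])
bitmaps = build_binned_bitmaps(vals, edges)
hits, checks = range_query(vals, bitmaps, edges, 5, 25)
```

Only the two edge bins trigger candidate checks here; the paper's contribution is choosing bin counts and boundaries so that such checks are minimized for a given query workload.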
Liu, Bing-Chun; Binaykia, Arihant; Chang, Pei-Chann; Tiwari, Manoj Kumar; Tsao, Cheng-Chin
2017-01-01
Today, China faces a very serious air pollution problem, with dreadful impacts on human health and the environment. The urban cities in China are the most affected due to their rapid industrial and economic growth. It is therefore extremely important to develop new, better and more reliable forecasting models to accurately predict air quality. This paper selected Beijing, Tianjin and Shijiazhuang, three cities from the Jingjinji Region, for a study proposing a new collaborative forecasting model using Support Vector Regression (SVR) for urban Air Quality Index (AQI) prediction in China. The study aims to improve forecasting results by minimizing the prediction error of current machine learning algorithms, taking multiple-city, multi-dimensional air quality information and weather conditions as input. The results show a decrease in MAPE for multiple-city multi-dimensional regression when there is strong interaction and correlation of the air quality characteristic attributes with AQI. Geographical location is also found to play a significant role in Beijing, Tianjin and Shijiazhuang AQI prediction.
Carroll, Regina A.; Kodak, Tiffany; Adolf, Kari J.
2016-01-01
We used an adapted alternating treatments design to compare skill acquisition during discrete-trial instruction using immediate reinforcement, delayed reinforcement with immediate praise, and delayed reinforcement for 2 children with autism spectrum disorder. Participants acquired the skills taught with immediate reinforcement; however, delayed…
Directory of Open Access Journals (Sweden)
Chantal Olckers
2010-11-01
Full Text Available Orientation: Empathy is a core competency in aiding individuals to address the challenges of social living. An indicator of emotional intelligence, it is useful in a globalising and cosmopolitan world. Moreover, managing staff, stakeholders and conflict in many social settings relies on communicative skills, of which empathy forms a large part. Empathy plays a pivotal role in negotiating, persuading and influencing behaviour. The skill of being able to empathise thus enables the possessor to attune to the needs of clients and employees and provides opportunities to become responsive to these needs. Research purpose: This study attempted to determine the construct validity of the Multi-dimensional Emotional Empathy Scale within the South African context. Motivation for the study: In South Africa, a large number of psychometrical instruments have been adopted directly from abroad. Studies determining the construct validity of several of these imported instruments, however, have shown that these instruments are not suited for use in the South African context. Research design, approach and method: The study was based on a quantitative research method with a survey design. A convenience sample of 212 respondents completed the Multi-dimensional Emotional Empathy Scale. The constructs explored were Suffering, Positive Sharing, Responsive Crying, Emotional Attention, a Feel for Others and Emotional Contagion. The statistical procedure used was a confirmatory factor analysis. Main findings: The study showed that, from a South African perspective, the Multi-dimensional Emotional Empathy Scale lacks sufficient construct validity. Practical/managerial implications: Further refinement of the model would provide valuable information that would aid people to be more appreciative of individual contributions, to meet client needs and to understand the motivations of others. Contribution/value-add: From a South African perspective, the findings of this study are
Structural diversity: a multi-dimensional approach to assess recreational services in urban parks.
Voigt, Annette; Kabisch, Nadja; Wurster, Daniel; Haase, Dagmar; Breuste, Jürgen
2014-05-01
Urban green spaces provide important recreational services for urban residents. In general, when park visitors enjoy "the green," they are in actuality appreciating a mix of biotic, abiotic, and man-made park infrastructure elements and qualities. We argue that these three dimensions of structural diversity have an influence on how people use and value urban parks. We present a straightforward approach for assessing urban parks that combines multi-dimensional landscape mapping and questionnaire surveys. We discuss the method as well the results from its application to differently sized parks in Berlin and Salzburg.
MINIMUM ENTROPY DECONVOLUTION OF ONE- AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES
Institute of Scientific and Technical Information of China (English)
程乾生
1990-01-01
The minimum entropy deconvolution is considered as one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is first investigated and the corresponding theory is given. In addition, the relation between the minimum entropy deconvolution and parameter method is discussed.
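The classical Wiggins-style iteration for minimum entropy deconvolution, which maximizes a varimax-type peakedness of the filter output, can be sketched as follows (a one-dimensional toy version under our own parameter choices; the paper's convergence theory and multi-dimensional extension are not reproduced here):

```python
import numpy as np

def varimax(y):
    """Peakedness of a sequence: normalized fourth moment (larger = spikier)."""
    return np.sum(y ** 4) / np.sum(y ** 2) ** 2

def med_filter(x, flen=15, n_iter=25, ridge=1e-6):
    """Wiggins-style MED: iterate a filter f to maximize varimax of y = f * x."""
    n = len(x)
    r = np.correlate(x, x, mode="full")                # input autocorrelation
    mid = n - 1
    R = np.array([[r[mid + i - j] for j in range(flen)] for i in range(flen)])
    R += ridge * (np.trace(R) / flen) * np.eye(flen)   # mild regularization
    f = np.zeros(flen)
    f[flen // 2] = 1.0                                 # start from a delayed spike
    for _ in range(n_iter):
        y = np.convolve(x, f)[:n]
        # stationarity condition of the varimax objective:
        # R f proportional to crosscorr(x, y**3)
        g = np.array([np.dot(y[k:] ** 3, x[:n - k]) for k in range(flen)])
        f = np.linalg.solve(R, g)
        f /= np.linalg.norm(f)                         # remove the scale ambiguity
    return f

# Sparse spikes smeared by an exponential (minimum-phase) wavelet:
rng = np.random.default_rng(3)
spikes = np.zeros(300)
idx = rng.choice(300, size=8, replace=False)
spikes[idx] = rng.choice([-1.0, 1.0], size=8) * (1.0 + rng.random(8))
x = np.convolve(spikes, np.exp(-np.arange(12) / 3.0))[:300]
f = med_filter(x)
y = np.convolve(x, f)[:300]                            # deconvolved, spikier output
```

The peakedness of the output sequence increases over the iterations, which is exactly the convergence behaviour the paper analyzes theoretically.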
On the use of multi-dimensional scaling and electromagnetic tracking in high dose rate brachytherapy
Götz, Th I.; Ermer, M.; Salas-González, D.; Kellermeier, M.; Strnad, V.; Bert, Ch; Hensel, B.; Tomé, A. M.; Lang, E. W.
2017-10-01
High dose rate brachytherapy requires frequent verification of the precise dwell positions of the radiation source. The current investigation proposes a multi-dimensional scaling transformation of both data sets to estimate dwell positions without any external reference. Furthermore, the related distributions of dwell positions are characterized by uni- or bi-modal heavy-tailed distributions, which are well represented by α-stable distributions. The newly proposed data analysis provides dwell position deviations with high accuracy and offers a convenient visualization of the actual shapes of the catheters which guide the radiation source during the treatment.
International Nuclear Information System (INIS)
Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G.R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V.E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng
2010-01-01
Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies (such as efficient data management) supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Energy Technology Data Exchange (ETDEWEB)
Baak, M., E-mail: max.baak@cern.ch [CERN, CH-1211 Geneva 23 (Switzerland); Gadatsch, S., E-mail: stefan.gadatsch@nikhef.nl [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands); Harrington, R. [School of Physics and Astronomy, University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JZ, Scotland (United Kingdom); Verkerke, W. [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands)
2015-01-21
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
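A one-dimensional sketch of the moment-morphing idea, interpolating the mean and width of two templates linearly in the model parameter and combining the transformed templates with linear weights (our own minimal construction; the paper treats the general multi-dimensional, multi-parameter case):

```python
import numpy as np

def morph_pdf(grid, pdf0, pdf1, mu, sig, alpha):
    """Morph between pdf0 (moments mu[0], sig[0]) and pdf1 (mu[1], sig[1])."""
    mu_a = (1 - alpha) * mu[0] + alpha * mu[1]      # interpolated mean
    sig_a = (1 - alpha) * sig[0] + alpha * sig[1]   # interpolated width
    out = np.zeros_like(grid)
    for w, p, m, s in ((1 - alpha, pdf0, mu[0], sig[0]),
                       (alpha, pdf1, mu[1], sig[1])):
        # evaluate each template at the pre-image of the moment transform;
        # the Jacobian factor s / sig_a keeps each density normalized
        x = (grid - mu_a) * (s / sig_a) + m
        out += w * np.interp(x, grid, p, left=0.0, right=0.0) * (s / sig_a)
    return out

grid = np.linspace(-10.0, 15.0, 2501)

def gauss(g, m, s):
    return np.exp(-(g - m) ** 2 / (2 * s * s)) / (s * np.sqrt(2 * np.pi))

# morphing halfway between N(0, 1) and N(4, 2) gives exactly N(2, 1.5)
out = morph_pdf(grid, gauss(grid, 0.0, 1.0), gauss(grid, 4.0, 2.0),
                (0.0, 4.0), (1.0, 2.0), 0.5)
dx = grid[1] - grid[0]
norm = out.sum() * dx
mean = (grid * out).sum() * dx / norm
std = np.sqrt((grid ** 2 * out).sum() * dx / norm - mean ** 2)
```

For Gaussian templates the morph is exact at every alpha; for general shapes it interpolates the leading moments while preserving the template shapes, which is what makes the technique scale well with the number of input templates.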
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
International Nuclear Information System (INIS)
Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, Max; Harrington, Robert; Verkerke, Wouter
2014-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, Max; Harrington, Robert; Verkerke, Wouter
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
MXA: a customizable HDF5-based data format for multi-dimensional data sets
International Nuclear Information System (INIS)
Jackson, M; Simmons, J P; De Graef, M
2010-01-01
A new digital file format is proposed for the long-term archival storage of experimental data sets generated by serial sectioning instruments. The format is known as the multi-dimensional eXtensible Archive (MXA) format and is based on the public domain Hierarchical Data Format (HDF5). The MXA data model and its description by means of an eXtensible Markup Language (XML) file with an associated Document Type Definition (DTD) are described in detail. The public domain MXA package is available through a dedicated web site (mxa.web.cmu.edu), along with implementation details and example data files.
Energy Technology Data Exchange (ETDEWEB)
Yang, Jin-Hwa [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Korea Atomic Energy Research Institute, 989-111, Daedeok-daero, Yuseong-gu, Daejeon 305-600 (Korea, Republic of); Choi, Chi-Jin [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Cho, Hyoung-Kyu, E-mail: chohk@snu.ac.kr [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Euh, Dong-Jin [Korea Atomic Energy Research Institute, 989-111, Daedeok-daero, Yuseong-gu, Daejeon 305-600 (Korea, Republic of); Park, Goon-Cherl [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of)
2017-02-15
Recently, high-precision and high-accuracy analysis of multi-dimensional thermal-hydraulic phenomena in nuclear power plants has become a state-of-the-art issue. The system analysis code MARS has also adopted a multi-dimensional module to simulate such phenomena more accurately. Although the module is applied to represent multi-dimensional phenomena, the models and correlations implemented in it are one-dimensional empirical ones based on one-dimensional pipe experiments. Prior to the application of multi-dimensional simulation tools, however, the constitutive models for two-phase flow, such as the wall friction model, need to be carefully validated. In particular, in a Direct Vessel Injection (DVI) system, the emergency core coolant (ECC) injected into the upper part of the downcomer interacts with the lateral steam flow during the reflood phase of a Large-Break Loss-Of-Coolant-Accident (LBLOCA). The interaction between the falling film and the lateral steam flow induces a multi-dimensional two-phase flow. The prediction of ECC flow behavior plays a key role in determining the amount of coolant available for core cooling. Therefore, the wall friction model used to simulate these multi-dimensional phenomena should be assessed against multi-dimensional experimental results. This paper introduces air-water cross film flow experiments simulating the multi-dimensional phenomenon in the upper part of the downcomer as a conceptual problem. The two-dimensional local liquid film velocity and thickness data were used as benchmark data for code assessment, and the wall friction model of MARS-MultiD in the annular flow regime was then modified. As a result, the modified MARS-MultiD produced improved calculation results compared with the previous version.
Improvement of multi-dimensional realistic thermal-hydraulic system analysis code, MARS 1.3
Energy Technology Data Exchange (ETDEWEB)
Lee, Won Jae; Chung, Bub Dong; Jeong, Jae Jun; Ha, Kwi Seok
1998-09-01
The MARS (Multi-dimensional Analysis of Reactor Safety) code is a multi-dimensional, best-estimate thermal-hydraulic system analysis code. This report describes the new features that have been improved in the MARS 1.3 code since the release of MARS 1.3 in July 1998. The new features include: - implementation of point kinetics model into the 3D module - unification of the heat structure model - extension of the control function to the 3D module variables - improvement of the 3D module input check function. Each of the items has been implemented in the developmental version of the MARS 1.3.1 code and, then, independently verified and assessed. The effectiveness of the new features is well verified and it is shown that these improvements greatly extend the code capability and enhance the user friendliness. Relevant input data changes are also described. In addition to the improvements, this report briefly summarizes the future code developmental activities that are being carried out or planned, such as coupling of MARS 1.3 with the containment code CONTEMPT and the three-dimensional reactor kinetics code MASTER 2.0. (author). 8 refs.
Directory of Open Access Journals (Sweden)
Siti Asmaul Mustaniroh
2016-11-01
Full Text Available Potato chips are one of the main products of Batu city. According to data from the Batu government, in 2002 there were only 2 selling units. By 2008 the number of potato chip producers and selling units had grown, so research on the positioning of potato chips in Batu city is important. The purposes of this research are to understand which attributes influence customers' decisions to buy and consume potato chips, and to analyze the positioning formed among four potato chip brands (Cita Mandiri, Gizi Food, Leo, Rimbaku) based on customer perception in Batu city, using the Multi Dimensional Scaling method. The attributes that influence customers to buy and consume potato chips are product (taste and crunchiness), price (price relative to quality, and affordability), distribution (local availability of the products and how strategic the selling location is), and promotion (the use of advertising or promotional media such as internet, radio, or brochures). Based on the Multi Dimensional Scaling method, the positioning is as follows: Gizi Food as market leader, Leo as market challenger, and Rimbaku and Cita Mandiri as market followers.
Multi-dimensional analysis of high resolution γ-ray data
International Nuclear Information System (INIS)
Flibotte, S.; Huttmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.; Bednarczyk, P.
1992-01-01
High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig
Estimate of pulse-sequence data acquisition system for multi-dimensional measurement
International Nuclear Information System (INIS)
Kitamura, Yasunori; Sakae, Takeji; Nohtomi, Akihiro; Matoba, Masaru; Matsumoto, Yuzuru.
1996-01-01
A pulse-sequence data acquisition system has been newly designed and evaluated for the measurement of one- or multi-dimensional pulse trains coming from radiation detectors. In this system, in order to realize pulse-sequence data acquisition, the arrival time of each pulse is recorded in the memory of a personal computer (PC). For multi-dimensional data acquisition with several input channels, each arrival-time datum is tagged with a 'flag' which indicates the input channel of the arriving pulse. Counting losses due to the processing time of the PC are expected to be reduced by using a First-In-First-Out (FIFO) memory unit. In order to verify this system, a computer simulation was performed. Various sets of random pulse trains with different mean pulse rates (1-600 kcps) were generated using a Monte Carlo simulation technique. These pulse trains were then processed by another code which simulates the newly designed data acquisition system, including a FIFO memory unit; the memory size was assumed to be 0-100 words. The pulse trains recorded on the PC with the various FIFO memory sizes were then examined. The simulation results indicate that the system with a 3-word FIFO memory unit works successfully up to a pulse rate of 10 kcps without any severe counting losses. (author)
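The described Monte Carlo estimate of counting losses can be loosely re-created as follows (a toy queueing model with our own assumptions: Poisson arrivals, a fixed per-pulse PC processing time of 50 µs, and a pulse occupying the FIFO until the PC finishes reading it; the paper's actual simulation parameters are not reproduced):

```python
import random

def simulate(rate_hz, proc_time_s, fifo_words, n_pulses, seed=1):
    """Return the fraction of pulses recorded (not lost to a full FIFO)."""
    rng = random.Random(seed)
    t = 0.0
    pc_free_at = 0.0
    in_fifo = []                 # completion times of pulses still in the FIFO
    recorded = 0
    for _ in range(n_pulses):
        t += rng.expovariate(rate_hz)                 # next Poisson arrival
        in_fifo = [d for d in in_fifo if d > t]       # PC read out finished pulses
        if len(in_fifo) < fifo_words:                 # room in the FIFO?
            start = max(t, pc_free_at)
            pc_free_at = start + proc_time_s          # PC busy for one proc time
            in_fifo.append(pc_free_at)
            recorded += 1
    return recorded / n_pulses

# a 3-word FIFO copes with 10 kcps but saturates far below 600 kcps
frac_low = simulate(rate_hz=10_000, proc_time_s=5e-5, fifo_words=3, n_pulses=20_000)
frac_high = simulate(rate_hz=600_000, proc_time_s=5e-5, fifo_words=3, n_pulses=20_000)
```

The qualitative behaviour matches the abstract's conclusion: with a small FIFO, counting losses stay negligible at moderate rates and grow rapidly once arrivals outpace the PC's processing capacity.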
Improvement of multi-dimensional realistic thermal-hydraulic system analysis code, MARS 1.3
International Nuclear Information System (INIS)
Lee, Won Jae; Chung, Bub Dong; Jeong, Jae Jun; Ha, Kwi Seok
1998-09-01
The MARS (Multi-dimensional Analysis of Reactor Safety) code is a multi-dimensional, best-estimate thermal-hydraulic system analysis code. This report describes the new features that have been improved in the MARS 1.3 code since the release of MARS 1.3 in July 1998. The new features include: - implementation of point kinetics model into the 3D module - unification of the heat structure model - extension of the control function to the 3D module variables - improvement of the 3D module input check function. Each of the items has been implemented in the developmental version of the MARS 1.3.1 code and, then, independently verified and assessed. The effectiveness of the new features is well verified and it is shown that these improvements greatly extend the code capability and enhance the user friendliness. Relevant input data changes are also described. In addition to the improvements, this report briefly summarizes the future code developmental activities that are being carried out or planned, such as coupling of MARS 1.3 with the containment code CONTEMPT and the three-dimensional reactor kinetics code MASTER 2.0. (author). 8 refs
Directory of Open Access Journals (Sweden)
Sanchez-Vazquez Manuel J
2012-08-01
Full Text Available Abstract Background: Abattoir-detected pathologies are of crucial importance to both pig production and food safety. Usually, more than one pathology coexists in a pig herd, although it often remains unknown how these different pathologies interrelate. Identification of the associations between different pathologies may facilitate an improved understanding of their underlying biological linkage, and support veterinarians in encouraging control strategies aimed at reducing the prevalence of not just one, but two or more conditions simultaneously. Results: Multi-dimensional machine learning methodology was used to identify associations between ten typical pathologies in 6485 batches of slaughtered finishing pigs, assisting the comprehension of their biological association. Pathologies potentially associated with septicaemia (e.g. pericarditis, peritonitis) appear interrelated, suggesting on-going bacterial challenges by pathogens such as Haemophilus parasuis and Streptococcus suis. Furthermore, hepatic scarring appears interrelated with both milk spot livers (Ascaris suum) and bacteria-related pathologies, suggesting a potential multi-pathogen nature for this pathology. Conclusions: The application of novel multi-dimensional machine learning methodology provided new insights into how typical pig pathologies are potentially interrelated at batch level. The methodology presented is a powerful exploratory tool to generate hypotheses, applicable to a wide range of studies in veterinary research.
Sanchez-Vazquez, Manuel J; Nielen, Mirjam; Edwards, Sandra A; Gunn, George J; Lewis, Fraser I
2012-08-31
Abattoir-detected pathologies are of crucial importance to both pig production and food safety. Usually, more than one pathology coexists in a pig herd, although it often remains unknown how these different pathologies interrelate. Identification of the associations between different pathologies may facilitate an improved understanding of their underlying biological linkage, and support veterinarians in encouraging control strategies aimed at reducing the prevalence of not just one, but two or more conditions simultaneously. Multi-dimensional machine learning methodology was used to identify associations between ten typical pathologies in 6485 batches of slaughtered finishing pigs, assisting the comprehension of their biological association. Pathologies potentially associated with septicaemia (e.g. pericarditis, peritonitis) appear interrelated, suggesting on-going bacterial challenges by pathogens such as Haemophilus parasuis and Streptococcus suis. Furthermore, hepatic scarring appears interrelated with both milk spot livers (Ascaris suum) and bacteria-related pathologies, suggesting a potential multi-pathogen nature for this pathology. The application of novel multi-dimensional machine learning methodology provided new insights into how typical pig pathologies are potentially interrelated at batch level. The methodology presented is a powerful exploratory tool to generate hypotheses, applicable to a wide range of studies in veterinary research.
High-frequency stock linkage and multi-dimensional stationary processes
Wang, Xi; Bao, Si; Chen, Jingchao
2017-02-01
In recent years, China's stock market has experienced dramatic fluctuations; in particular, in the second half of 2014 and 2015 the market rose sharply and then fell quickly. Many classical financial phenomena, such as stock plate linkage, appeared repeatedly during this period. In general, these phenomena have usually been studied using daily-level or minute-level data. Our paper focuses on the linkage phenomenon in 5-second-level data on Chinese stocks during this extremely volatile period. Both the method used to select the linkage points and the arbitrage strategy are based on multi-dimensional stationary processes. A new computational method for testing the multi-dimensional stationary process is proposed in our paper, and the detailed program is presented in the paper's appendix. Because of the existence of the stationary process, the strategy's logarithmic cumulative average return converges by the strong ergodic theorem, which ensures the effectiveness of the stocks' linkage points and a more stable statistical arbitrage strategy.
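The role stationarity plays here can be illustrated with a crude segment-moment check on a multi-dimensional sample path (entirely our own toy construction for intuition; it is not the paper's proposed test): for a stationary process, segment-wise means should agree with the global mean, while a drifting series violates this.

```python
import numpy as np

def segment_moment_check(x, n_seg=4, tol=0.5):
    """Crude check: segment means of a stationary path stay near the global mean."""
    segs = np.array_split(x, n_seg)
    means = np.array([s.mean(axis=0) for s in segs])    # (n_seg, d) segment means
    scale = x.std(axis=0)                               # per-dimension spread
    return bool(np.all(np.abs(means - x.mean(axis=0)) < tol * scale))

rng = np.random.default_rng(0)
T = 20000
stationary = rng.normal(size=(T, 2))                      # i.i.d. noise: stationary
trend = stationary + np.linspace(0.0, 10.0, T)[:, None]   # linear drift: not stationary
```

The stationary path passes and the drifting path fails; a real test like the paper's would replace this heuristic with a properly calibrated statistic.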
Estimate of pulse-sequence data acquisition system for multi-dimensional measurement
Energy Technology Data Exchange (ETDEWEB)
Kitamura, Yasunori; Sakae, Takeji; Nohtomi, Akihiro; Matoba, Masaru [Kyushu Univ., Fukuoka (Japan). Faculty of Engineering; Matsumoto, Yuzuru
1996-07-01
A pulse-sequence data acquisition system has been newly designed and evaluated for the measurement of one- or multi-dimensional pulse trains coming from radiation detectors. In this system, in order to realize pulse-sequence data acquisition, the arrival time of each pulse is recorded in the memory of a personal computer (PC). For multi-dimensional data acquisition with several input channels, each arrival-time datum is tagged with a 'flag' which indicates the input channel of the arriving pulse. Counting losses due to the processing time of the PC are expected to be reduced by using a First-In-First-Out (FIFO) memory unit. In order to verify this system, a computer simulation was performed. Various sets of random pulse trains with different mean pulse rates (1-600 kcps) were generated using a Monte Carlo simulation technique. These pulse trains were then processed by another code which simulates the newly designed data acquisition system, including a FIFO memory unit; the memory size was assumed to be 0-100 words. The pulse trains recorded on the PC with the various FIFO memory sizes were then examined. The simulation results indicate that the system with a 3-word FIFO memory unit works successfully up to a pulse rate of 10 kcps without any severe counting losses. (author)
Multi-dimensional analysis of high resolution γ-ray data
Energy Technology Data Exchange (ETDEWEB)
Flibotte, S; Huttmeier, U J; France, G de; Haas, B; Romain, P; Theisen, Ch; Vivien, J P; Zen, J [Centre National de la Recherche Scientifique (CNRS), 67 - Strasbourg (France); Bednarczyk, P [Institute of Nuclear Physics, Cracow (Poland)
1992-08-01
High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig.
Directory of Open Access Journals (Sweden)
Ming-wei Ma
2013-01-01
The question of how to choose a copula model that best fits a given dataset is a predominant limitation of the copula approach, and the present study aims to investigate techniques of goodness-of-fit testing for multi-dimensional copulas. A goodness-of-fit test based on Rosenblatt's transformation was mathematically expanded from two dimensions to three dimensions, and procedures for a bootstrap version of the test are provided. Through stochastic copula simulation, an empirical application to historical drought data at the Lintong Gauge Station shows that the goodness-of-fit tests perform well, revealing that both trivariate Gaussian and Student t copulas are acceptable for modeling the dependence structures of the observed drought duration, severity, and peak. The goodness-of-fit tests for multi-dimensional copulas can provide further support for the potential application of a wider range of copulas to describe the associations of correlated hydrological variables. However, for applications of copulas in more than three dimensions, more complicated computational effort, as well as exploration and parameterization of the corresponding copulas, is required.
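For the bivariate Gaussian-copula case, Rosenblatt's transformation has a closed form; the sketch below (an illustration with simulated pseudo-observations, not the study's code or data) transforms the data and checks the resulting coordinate for uniformity with a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

def rosenblatt_gaussian(u, rho):
    """Rosenblatt transform of pseudo-observations u (N, 2) under a
    bivariate Gaussian copula with correlation `rho`.

    If the copula is correct, the returned coordinates are independent
    Uniform(0, 1) variables.
    """
    z1 = stats.norm.ppf(u[:, 0])
    z2 = stats.norm.ppf(u[:, 1])
    # conditional distribution of z2 given z1 under the Gaussian copula
    e2 = stats.norm.cdf((z2 - rho * z1) / np.sqrt(1.0 - rho ** 2))
    return u[:, 0], e2

# quick check: data really drawn from the rho = 0.6 Gaussian copula
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000)
u = stats.norm.cdf(z)
_, e2 = rosenblatt_gaussian(u, 0.6)
pvalue = stats.kstest(e2, "uniform").pvalue  # should not reject uniformity
```

The three-dimensional extension discussed in the paper chains a second conditional step (u3 given u1, u2); the bootstrap version re-estimates the copula parameters on each resample before applying the transform.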
Adherence is a multi-dimensional construct in the POUNDS LOST trial
Williamson, Donald A.; Anton, Stephen D.; Han, Hongmei; Champagne, Catherine M.; Allen, Ray; LeBlanc, Eric; Ryan, Donna H.; McManus, Katherine; Laranjo, Nancy; Carey, Vincent J.; Loria, Catherine M.; Bray, George A.; Sacks, Frank M.
2011-01-01
Research on the conceptualization of adherence to treatment has not addressed a key question: Is adherence best defined as a uni-dimensional or a multi-dimensional behavioral construct? The primary aim of this study was to test which of these conceptual models best described adherence to a weight management program. This ancillary study was conducted as part of the POUNDS LOST trial, which tested the efficacy of four dietary macronutrient compositions for promoting weight loss. A sample of 811 overweight/obese adults was recruited across two clinical sites, and each participant was randomly assigned to one of four macronutrient prescriptions: (1) Low fat (20% of energy), average protein (15% of energy); (2) High fat (40%), average protein (15%); (3) Low fat (20%), high protein (25%); (4) High fat (40%), high protein (25%). Throughout the first 6 months of the study, a computer tracking system collected data on eight indicators of adherence. Computer tracking data from the initial 6 months of the intervention were analyzed using exploratory and confirmatory analyses. Two factors (accounting for 66% of the variance) were identified and confirmed: (1) behavioral adherence and (2) dietary adherence. Behavioral adherence did not differ across the four interventions, but prescription of a high fat diet (vs. a low fat diet) was found to be associated with higher levels of dietary adherence. The findings of this study indicated that adherence to a weight management program was best conceptualized as multi-dimensional, with two dimensions: behavioral and dietary adherence. PMID:19856202
Multi-dimensional self-esteem and magnitude of change in the treatment of anorexia nervosa.
Collin, Paula; Karatzias, Thanos; Power, Kevin; Howard, Ruth; Grierson, David; Yellowlees, Alex
2016-03-30
Self-esteem improvement is one of the main targets of inpatient eating disorder programmes. The present study sought to examine multi-dimensional self-esteem and magnitude of change in eating psychopathology among adults participating in a specialist inpatient treatment programme for anorexia nervosa. A standardised assessment battery, including multi-dimensional measures of eating psychopathology and self-esteem, was completed pre- and post-treatment for 60 participants (all white Scottish female, mean age=25.63 years). Statistical analyses indicated that self-esteem improved with eating psychopathology and weight over the course of treatment, but that improvements were domain-specific and small in size. Global self-esteem was not predictive of treatment outcome. Dimensions of self-esteem at baseline (Lovability and Moral Self-approval), however, were predictive of magnitude of change in dimensions of eating psychopathology (Shape and Weight Concern). Magnitude of change in Self-Control and Lovability dimensions were predictive of magnitude of change in eating psychopathology (Global, Dietary Restraint, and Shape Concern). The results of this study demonstrate that the relationship between self-esteem and eating disorder is far from straightforward, and suggest that future research and interventions should focus less exclusively on self-esteem as a uni-dimensional psychological construct. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Multi-dimensional design window search system using neural networks in reactor core design
International Nuclear Information System (INIS)
Kugo, Teruhiko; Nakagawa, Masayuki
2000-02-01
In reactor core design, many parametric survey calculations must be carried out to decide an optimal set of basic design parameter values; they consume a large amount of computation time and labor in the conventional way. To directly support design work, we investigate a procedure to efficiently search for a design window, defined as the feasible design parameter ranges satisfying design criteria and requirements, in a multi-dimensional space composed of several basic design parameters. We apply the present method to the neutronics and thermal hydraulics fields and develop a multi-dimensional design window search system using it. The principle of the present method is to construct a multilayer neural network that quickly simulates the response of an analysis code through a training process, and to reduce computation time by using the neural network in place of parametric studies with analysis codes. The system works on an engineering workstation (EWS) with an efficient man-machine interface for pre- and post-processing. This report describes the principle of the present method, the structure of the system, guidance on the usage of the system, guidelines for the efficient training of neural networks, instructions for the input data for analysis calculations, and so on. (author)
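The surrogate idea can be sketched as follows (a toy illustration: `analysis_code` and the 540-unit criterion are invented stand-ins for a real analysis code and design limit, and scikit-learn replaces the report's own network implementation): train a small neural network on sampled survey calculations, then sweep a dense parameter grid with the cheap surrogate to map out the design window.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# stand-in for an expensive analysis code: a smooth response (e.g. a
# peak-temperature correlation) over two normalized design parameters
def analysis_code(p1, p2):
    return 500 + 80 * p1 ** 2 + 60 * (p2 - 0.5) ** 2 - 40 * p1 * p2

rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(400, 2))          # sampled survey calculations
y = analysis_code(X[:, 0], X[:, 1])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, y)

# sweep a dense grid with the cheap surrogate instead of the code
g1, g2 = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
grid = np.column_stack([g1.ravel(), g2.ravel()])
pred = surrogate.predict(grid)
window = grid[pred < 540.0]                   # design criterion: response < 540
```

The payoff is that the 10,201 grid evaluations cost milliseconds, whereas running the real analysis code that many times would be prohibitive; candidate points on the window boundary can then be verified with the actual code.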
Zhou, Xiaolu; Li, Dongying
2018-05-09
Advancements in location-aware technologies and information and communication technology in the past decades have furthered our knowledge of the interaction between human activities and the built environment. An increasing number of studies have collected data regarding individual activities to better understand how the environment shapes human behavior. Despite this growing interest, some challenges exist in collecting and processing individuals' activity data, e.g., capturing people's precise environmental contexts and analyzing data at multiple spatial scales. In this study, we propose and implement an innovative system that integrates smartphone-based step tracking with an app and sequential tile scan techniques to collect and process activity data. We apply the OpenStreetMap tile system to aggregate positioning points at various scales. We also propose duration, step and probability surfaces to quantify the multi-dimensional attributes of activities. Results show that, by running the app in the background, smartphones can measure multi-dimensional attributes of human activities, including space, duration, step, and location uncertainty at various spatial scales. By coordinating the Global Positioning System (GPS) sensor with the accelerometer, the app saves battery power that would otherwise be drained quickly by the GPS sensor. Based on a test dataset, we were able to detect the recreational center and sports center as the spaces where the user was most active, among other places visited. The methods provide techniques to address key issues in analyzing human activity data. The system can support future studies on the behavioral and health consequences of individuals' environmental exposure.
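The OpenStreetMap tile system the study builds on uses the standard "slippy map" indexing; the sketch below (an illustration, not the authors' app code) converts a GPS fix to tile coordinates at a given zoom level. Multi-scale aggregation falls out naturally: the parent tile at zoom z-1 is simply (x//2, y//2).

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert WGS84 coordinates to OpenStreetMap (slippy map) tile
    indices (x, y) at the given zoom level."""
    lat = math.radians(lat_deg)
    n = 2 ** zoom                               # tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y
```

Binning every positioning point into tiles at several zoom levels, then summing dwell time or step counts per tile, yields exactly the kind of duration and step surfaces the abstract describes.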
International Nuclear Information System (INIS)
Kweon, T. S.; Yun, B. J.; Ah, D. J.; Ju, I. C.; Song, C. H.; Park, J. K.
2001-01-01
Multi-dimensional thermal-hydraulic behavior, such as ECC (Emergency Core Cooling) bypass, ECC penetration, steam-water condensation and accumulated water level, in the annular downcomer of a PWR (Pressurized Water Reactor) reactor vessel with a DVI (Direct Vessel Injection) injection mode is presented based on experimental observations in the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) steam-water facility. From steady-state tests simulating the late reflood phase of a LBLOCA (Large Break Loss-of-Coolant Accident), the major thermal-hydraulic phenomena in the downcomer are quantified under a wide range of test conditions. In particular, isothermal lines clearly show the multi-dimensional phenomena of phase interaction between steam and water in the annular downcomer. Overall, the test results show that multi-dimensional thermal-hydraulic behaviors occur in the downcomer annulus region as expected. The MIDAS test facility is a steam-water separate effect test facility, a 1/4.93 linearly scaled-down model of a 1400 MWe PWR, focused on understanding the multi-dimensional thermal-hydraulic phenomena in the annular downcomer with various safety injection locations during the refill or reflood phase of a LBLOCA in a PWR.
Risk-based design of process systems using discrete-time Bayesian networks
International Nuclear Information System (INIS)
Khakzad, Nima; Khan, Faisal; Amyotte, Paul
2013-01-01
Temporal Bayesian networks have gained popularity as a robust technique to model dynamic systems in which the components' sequential dependency, as well as their functional dependency, cannot be ignored. In this regard, discrete-time Bayesian networks have been proposed as a viable alternative to solve dynamic fault trees without resorting to Markov chains. This approach overcomes the drawbacks of Markov chains, such as the state-space explosion and the error-prone conversion procedure from a dynamic fault tree. It also benefits from the inherent advantages of Bayesian networks, such as probability updating. However, effective mapping of the dynamic gates of dynamic fault trees into Bayesian networks while avoiding the consequent huge multi-dimensional probability tables has always been a matter of concern. In this paper, a new general formalism has been developed to model two important elements of dynamic fault trees, i.e., the cold spare gate and the sequential enforcing gate, with arbitrary probability distribution functions. Also, an innovative Neutral Dependency algorithm has been introduced to model dynamic gates such as the priority-AND gate, thus reducing the dimension of conditional probability tables by an order of magnitude. The second part of the paper is devoted to the application of discrete-time Bayesian networks in the risk assessment and safety analysis of complex process systems. It is shown how dynamic techniques can effectively be applied for the optimal allocation of safety systems to obtain maximum risk reduction.
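The discrete-time treatment of a cold spare gate can be illustrated with a simple numerical sketch (an approximation for intuition, not the paper's formalism): divide the mission time into equal intervals, tabulate each component's failure probability per interval, and obtain the system failure-time distribution by discrete convolution, since the cold spare only starts ageing once the primary has failed. Exponential lifetimes are assumed here for concreteness, though the discrete approach accepts any distribution.

```python
import numpy as np

def interval_pmf(rate, dt, n):
    """P(failure in interval i) for an exponential lifetime, i = 0..n-1,
    with a final element holding P(survive the whole mission)."""
    edges = np.arange(n + 1) * dt
    cdf = 1.0 - np.exp(-rate * edges)
    pmf = np.diff(cdf)
    return np.append(pmf, 1.0 - cdf[-1])    # last cell = survival prob.

def cold_spare_pmf(rate_primary, rate_spare, dt, n):
    """Discrete failure-time distribution of a cold-spare pair: the
    spare only ages after the primary fails, so the system failure
    time is the sum (discrete convolution) of the two lifetimes."""
    p = interval_pmf(rate_primary, dt, n)[:-1]
    s = interval_pmf(rate_spare, dt, n)[:-1]
    conv = np.convolve(p, s)[:n]            # truncate to the mission time
    return np.append(conv, 1.0 - conv.sum())
```

For two identical exponential components the result approaches the Erlang(2) distribution as the interval size shrinks, which gives a convenient sanity check on the discretization.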
Energy Technology Data Exchange (ETDEWEB)
Chung, B. D.; Bae, S. W.; Jeong, J. J.; Lee, S. M
2005-04-15
A new multi-dimensional component has been developed to allow for more flexible 3D capabilities in the system code MARS. The component can be applied in Cartesian and cylindrical coordinates. For the development of this model, the 3D convection and diffusion terms were implemented in the momentum and energy equations, and a simple Prandtl mixing length model was applied for the turbulent viscosity. The developed multi-dimensional component was assessed against five conceptual problems with analytic solutions, and several separate effect tests (SETs) were calculated and compared with experimental data. With this newly developed multi-dimensional flow module, the MARS code can realistically calculate the flow fields in pools such as those occurring in the core, steam generators and IRWST.
International Nuclear Information System (INIS)
Chung, B. D.; Bae, S. W.; Jeong, J. J.; Lee, S. M.
2005-04-01
A new multi-dimensional component has been developed to allow for more flexible 3D capabilities in the system code MARS. The component can be applied in Cartesian and cylindrical coordinates. For the development of this model, the 3D convection and diffusion terms were implemented in the momentum and energy equations, and a simple Prandtl mixing length model was applied for the turbulent viscosity. The developed multi-dimensional component was assessed against five conceptual problems with analytic solutions, and several separate effect tests (SETs) were calculated and compared with experimental data. With this newly developed multi-dimensional flow module, the MARS code can realistically calculate the flow fields in pools such as those occurring in the core, steam generators and IRWST.
Best-estimated multi-dimensional calculation during LB LOCA for APR1400
International Nuclear Information System (INIS)
Oh, D. Y.; Bang, Y. S.; Cheong, A. J.; Woong, S.; Korea, W.
2010-01-01
Best-estimate (BE) calculation with uncertainty quantification for the emergency core cooling system (ECCS) performance analysis during a Loss of Coolant Accident (LOCA) is increasingly used in nuclear industries and regulation. In Korea, demand for regulatory audit calculations is continuously increasing to support the safety review for life extension, power uprating and advanced nuclear reactor designs. The thermal-hydraulic system code MARS (Multi-dimensional Analysis of Reactor Safety), with multi-dimensional capability, is used for audit calculations. It describes the complicated phenomena in the reactor coolant system by effectively consolidating the one-dimensional RELAP5/MOD3 code with the multi-dimensional COBRA-TF code. The advanced power reactor (APR1400) to be evaluated has four separate hydraulic trains of the high pressure injection system (HPSI) with direct vessel injection (DVI), which differs from existing commercial PWRs. The thermal-hydraulic behavior of a DVI plant would also be considerably different from that of cold-leg safety injection, since the low pressure safety injection system is eliminated and the high pressure safety injection flow is injected at a specific elevation of the reactor vessel downcomer. The ECCS bypass induced by downcomer boiling due to hot-wall heating of the reactor vessel during the reflooding phase is one of the important phenomena which should be considered in DVI plants. Therefore, in this study, BE calculations with one-dimensional (1-D) and multi-dimensional (multi-D) MARS models during a LBLOCA are performed for the APR1400 plant. In the multi-D evaluation, the reactor vessel is modeled by multi-D components, and specific treatment of the flow paths inside the reactor vessel, e.g., the upper guide structure, is essential. The concept of a hot zone is adopted to simulate the limiting thermal-hydraulic conditions surrounding the hot rod, which is similar to a hot channel in 1-D. Also, alternative treatment of the hot rods in multi-D is
Singh, Brajesh K; Srivastava, Vineet K
2015-04-01
The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.
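For the integer-order special case (α = 1) the FRDTM recursion for D_t^α u = u_xx reduces to the classical reduced differential transform; the sympy sketch below implements the commonly stated recursion (an illustration, not the authors' code) and, for u(x,0) = sin x, reproduces the exact solution sin(x)·e^(−t) term by term.

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

def frdtm_heat(u0, terms=6, a=1):
    """Truncated FRDTM series for  D_t^a u = u_xx,  u(x, 0) = u0.
    Recursion (as commonly stated for Caputo derivatives):
        U_{k+1} = Gamma(k*a + 1) / Gamma((k+1)*a + 1) * d^2 U_k / dx^2
    with the k-th term entering the series as U_k * t**(k*a).
    """
    U = u0
    series = U                                   # k = 0 term
    for k in range(terms - 1):
        U = sp.gamma(k * a + 1) / sp.gamma((k + 1) * a + 1) * sp.diff(U, x, 2)
        series += U * t ** ((k + 1) * a)
    return sp.simplify(series)

# integer-order check (a = 1): partial sums of sin(x) * exp(-t)
approx = frdtm_heat(sp.sin(x), terms=8, a=1)
```

For fractional α the same loop produces a Mittag-Leffler-type series in powers of t^α, which is the structure of the closed-form solutions reported for such heat-like equations.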
Rarefaction and shock waves for multi-dimensional hyperbolic conservation laws
International Nuclear Information System (INIS)
Dening, Li
1991-01-01
In this paper, the author shows the local existence of a solution consisting of a combination of shock and rarefaction waves for a multi-dimensional hyperbolic system of conservation laws. The typical example he has in mind is the Euler equations for a compressible fluid. More generally, he studies the hyperbolic system of conservation laws ∂_t F_0(u) + Σ_{j=1}^{n} ∂_{x_j} F_j(u) = 0, where u = (u_1, ..., u_m) and F_j(u), j = 0, ..., n, are m-dimensional vector-valued functions. In the following he imposes some conditions on the system (1.2); all of these conditions are satisfied by the Euler equations
A new analytical method to solve the heat equation for a multi-dimensional composite slab
International Nuclear Information System (INIS)
Lu, X; Tervola, P; Viljanen, M
2005-01-01
A novel analytical approach has been developed for heat conduction in a multi-dimensional composite slab subject to time-dependent boundary changes of the first kind. Boundary temperatures are represented as Fourier series. Taking advantage of the periodic properties of the boundary changes, the analytical solution is obtained and expressed explicitly. Nearly all published works necessitate searching for the associated eigenvalues in solving such a problem, even for a one-dimensional composite slab. In this paper, the proposed method involves no iterative computation, such as numerically searching for eigenvalues, and no residue evaluation. The adopted method is simple and represents an extension of the novel analytical approach derived for the one-dimensional composite slab. Moreover, the method of 'separation of variables' employed in this paper is new. The mathematical formula for the solutions is concise and straightforward, and the physical parameters appear clearly in the formula. Further comparison with numerical calculations is presented
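The first step of such an approach, representing a time-dependent boundary temperature by a truncated Fourier series, can be sketched numerically (an illustration of the representation only, not the authors' solution method):

```python
import numpy as np

def fourier_coefficients(f, period, n_harmonics, n_samples=4096):
    """Estimate coefficients of the truncated Fourier series
        T(t) = a0/2 + sum_m [ a_m cos(m w t) + b_m sin(m w t) ],
    with w = 2*pi/period, by averaging over one sampled period."""
    t = np.linspace(0.0, period, n_samples, endpoint=False)
    y = f(t)
    w = 2.0 * np.pi / period
    a0 = 2.0 * y.mean()
    a = np.array([2.0 * (y * np.cos(m * w * t)).mean()
                  for m in range(1, n_harmonics + 1)])
    b = np.array([2.0 * (y * np.sin(m * w * t)).mean()
                  for m in range(1, n_harmonics + 1)])
    return a0, a, b
```

Once the boundary temperature is in this harmonic form, each harmonic can be propagated through the composite slab analytically, which is what allows the method to sidestep eigenvalue searches.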
Gattol, Valentin; Sääksjärvi, Maria; Carbon, Claus-Christian
2011-01-05
The authors present a procedural extension of the popular Implicit Association Test (IAT) that allows for indirect measurement of attitudes on multiple dimensions (e.g., safe-unsafe; young-old; innovative-conventional, etc.) rather than on a single evaluative dimension only (e.g., good-bad). In two within-subjects studies, attitudes toward three automobile brands were measured on six attribute dimensions. Emphasis was placed on evaluating the methodological appropriateness of the new procedure, providing strong evidence for its reliability, validity, and sensitivity. This new procedure yields detailed information on the multifaceted nature of brand associations that can add up to a more abstract overall attitude. Just as the IAT, its multi-dimensional extension/application (dubbed md-IAT) is suited for reliably measuring attitudes consumers may not be consciously aware of, able to express, or willing to share with the researcher.
Analysis of multi-dimensional and countercurrent effects in a BWR loss-of-coolant accident
International Nuclear Information System (INIS)
Shiralkar, B.S.; Dix, G.E.; Alamgir, M.
1989-01-01
The presence of parallel enclosed channels in a BWR provides opportunities for multiple flow regimes in co-current and countercurrent flow under Loss-of-Coolant Accident (LOCA) conditions. To address and understand these phenomena, an integrated experimental and analytical study has been conducted. The primary experimental facility was the Steam Sector Test Facility (SSTF), which simulated a full-scale 30° sector of a BWR/6 reactor vessel. Both steady-state separate effects tests and integral transients with vessel blowdown and refill were performed. The presence of multi-dimensional and parallel channel effects was found to be very beneficial to BWR LOCA performance. The best-estimate TRAC-BWR computer code was extended as part of this study by incorporation of a phenomenological upper plenum mixing model. TRAC-BWR was applied to the analysis of these full-scale experiments. Excellent predictions of phenomena and experimental trends were achieved. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Muramatsu, Toshiharu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center
1998-08-01
This report explains the numerical methods and the set-up of input data for DINUS-3 (Direct Numerical Simulation using a 3rd-order upwind scheme), a single-phase multi-dimensional thermohydraulics direct numerical simulation code. The code was developed at the Power Reactor and Nuclear Fuel Development Corporation (PNC) to simulate non-stationary temperature fluctuation phenomena related to thermal striping. The DINUS-3 code is characterized by the use of a third-order upwind scheme for the convection terms in the instantaneous Navier-Stokes and energy equations, and an adaptive control system based on fuzzy theory to control time step sizes. The author expects this report to be useful for applying the DINUS-3 code to the evaluation of various non-stationary thermohydraulic phenomena in reactor applications. (author)
Application of neural network to multi-dimensional design window search
International Nuclear Information System (INIS)
Kugo, T.; Nakagawa, M.
1996-01-01
In reactor core design, many parametric survey calculations must be carried out to decide an optimal set of basic design parameter values; they consume a large amount of computation time and labor in the conventional way. To directly support such work, we investigate a procedure to efficiently search for a design window, defined as the feasible design parameter ranges satisfying design criteria and requirements, in a multi-dimensional space composed of several basic design parameters. The principle of the present method is to construct a multilayer neural network that quickly simulates the response of an analysis code through a training process, and to reduce computation time by using the neural network as a substitute for the analysis code. We apply the present method to a fuel pin design of high conversion light water reactors for the neutronics and thermal hydraulics fields to demonstrate the performance of the method. (author)
Multi-dimensional analysis of high resolution γ-ray data
Energy Technology Data Exchange (ETDEWEB)
Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J. [Strasbourg-1 Univ., 67 (France). Centre de Recherches Nucleaires
1992-12-31
A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performance of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases was tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs.
Catley, Christina; McGregor, Carolyn; Percival, Jennifer; Curry, Joanne; James, Andrew
2008-01-01
This paper presents a multi-dimensional approach to knowledge translation, enabling results obtained from a survey evaluating the uptake of Information Technology within Neonatal Intensive Care Units to be translated into knowledge in the form of health informatics capacity audits. The survey data, which spans multiple roles, patient care scenarios, levels, and hospitals, is translated into patient journey models using a structured data modeling approach. The data model is defined such that users can develop queries to generate patient journey models based on a pre-defined Patient Journey Model architecture (PaJMa). PaJMa models are then analyzed to build capacity audits. Capacity audits offer a sophisticated view of health informatics usage, providing not only details of what IT solutions a hospital utilizes, but also answering the questions of when, how and why: when the IT solutions are integrated into the patient journey, how they support the patient information flow, and why they improve the patient journey.
Multi-dimensional analysis of high resolution γ-ray data
International Nuclear Information System (INIS)
Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.
1992-01-01
A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performance of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases was tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs
Han, Xianlin; Yang, Kui; Gross, Richard W.
2011-01-01
Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems. PMID:21755525
Challenges in Constructing a Multi-dimensional European Job Quality Index
DEFF Research Database (Denmark)
Leschke, Janine; Watt, Andrew
2014-01-01
There are few attempts to benchmark job quality in a multi-dimensional perspective across Europe. Against this background, we have created a synthetic job quality index (JQI) for the EU27 countries in an attempt to shed light on the question of how European countries compare with each other and how they are developing over time in terms of job quality. Taking account of the multi-faceted nature of job quality, the JQI is compiled on the basis of six sub-indices which cover the most important dimensions of job quality as identified in the literature. The paper addresses the methods used to construct the JQI … quality performances and the outcomes in six sub-dimensions of job quality and compare them with each other, across gender and over time. At the same time, the limitations of such a composite index need to be borne in mind. The most important challenges are the availability (over time), timeliness …
Energy method for multi-dimensional balance laws with non-local dissipation
Duan, Renjun
2010-06-01
In this paper, we are concerned with a class of multi-dimensional balance laws with a non-local dissipative source which arise as simplified models for the hydrodynamics of radiating gases. We first introduce the energy method in the setting of smooth perturbations and study the stability of constant states. Precisely, we use Fourier space analysis to quantify the energy dissipation rate and recover the optimal time-decay estimates for perturbed solutions via an interpolation inequality in Fourier space. As an application, the developed energy method is used to prove stability of smooth planar waves in all dimensions n ≥ 2, and also to show existence and stability of time-periodic solutions in the presence of a time-periodic source. Optimal rates of convergence of solutions towards the planar waves or time-periodic states are also shown for initial perturbations in L1. © 2009 Elsevier Masson SAS.
Multi-dimensional fiber-optic radiation sensor for ocular proton therapy dosimetry
International Nuclear Information System (INIS)
Jang, K.W.; Yoo, W.J.; Moon, J.; Han, K.T.; Park, B.G.; Shin, D.; Park, S-Y.; Lee, B.
2012-01-01
In this study, we fabricated a multi-dimensional fiber-optic radiation sensor, consisting of organic scintillators, plastic optical fibers and a water phantom with a polymethyl methacrylate structure, for ocular proton therapy dosimetry. To characterize the sensor, we measured the spread-out Bragg peak of a 120 MeV proton beam using a one-dimensional sensor array comprising 30 fiber-optic radiation sensors at a 1.5 mm interval. A uniform region of the spread-out Bragg peak was obtained from 20 to 25 mm depth in the phantom using the one-dimensional fiber-optic radiation sensor. In addition, the Bragg peak of a 109 MeV proton beam was measured at a depth of 11.5 mm in the phantom using a two-dimensional sensor array, a 10×3 array with a 0.5 mm interval.
International Nuclear Information System (INIS)
Mueller, Bernhard
2009-01-01
In this thesis, we have presented the first multi-dimensional models of core-collapse supernovae that combine a detailed, up-to-date treatment of neutrino transport, the equation of state, and, in particular, general relativistic gravity. Building on the well-tested neutrino transport code VERTEX and the GR hydrodynamics code CoCoNuT, we developed and implemented a relativistic generalization of a ray-by-ray-plus method for energy-dependent neutrino transport. The result of these efforts, the VERTEX-CoCoNuT code, also incorporates a number of improved numerical techniques that have not been used in the code components VERTEX and CoCoNuT before. In order to validate the VERTEX-CoCoNuT code, we conducted several test simulations in spherical symmetry, most notably a comparison with the one-dimensional relativistic supernova code AGILE-BOLTZTRAN and the Newtonian PROMETHEUS-VERTEX code. (orig.)
Multi-dimensional single-spin nano-optomechanics with a levitated nanodiamond
Neukirch, Levi P.; von Haartman, Eva; Rosenholm, Jessica M.; Nick Vamivakas, A.
2015-10-01
Considerable advances made in the development of nanomechanical and nano-optomechanical devices have enabled the observation of quantum effects, improved sensitivity to minute forces, and provided avenues to probe fundamental physics at the nanoscale. Concurrently, solid-state quantum emitters with optically accessible spin degrees of freedom have been pursued in applications ranging from quantum information science to nanoscale sensing. Here, we demonstrate a hybrid nano-optomechanical system composed of a nanodiamond (containing a single nitrogen-vacancy centre) that is levitated in an optical dipole trap. The mechanical state of the diamond is controlled by modulation of the optical trapping potential. We demonstrate the ability to imprint the multi-dimensional mechanical motion of the cavity-free mechanical oscillator into the nitrogen-vacancy centre fluorescence and manipulate the mechanical system's intrinsic spin. This result represents the first step towards a hybrid quantum system based on levitating nanoparticles that simultaneously engages optical, phononic and spin degrees of freedom.
Energy Technology Data Exchange (ETDEWEB)
Choi, Jong Ho; Ohn, M. Y.; Cho, C. H. [KOPEC, Taejon (Korea)
2002-03-01
The trip coverage analysis model requires the geometry network of the primary and secondary circuits, as well as the plant control system, to simulate all possible plant operating conditions throughout the plant life. The model was validated against power maneuvering and the Wolsong 4 commissioning test. Trip coverage maps were produced for the large break loss of coolant accident and the complete loss of class IV power event. Reliable multi-dimensional hydrogen analysis requires a high capability for thermal-hydraulic modelling. To acquire such a basic capability and verify the applicability of the GOTHIC code, an assessment of the heat transfer model and the hydrogen mixing and combustion models was performed. An assessment methodology for flame acceleration and deflagration-to-detonation transition is also established. 22 refs., 120 figs., 31 tabs. (Author)
Multi-dimensional diagnostics of high power ion beams by Arrayed Pinhole Camera System
International Nuclear Information System (INIS)
Yasuike, K.; Miyamoto, S.; Shirai, N.; Akiba, T.; Nakai, S.; Imasaki, K.; Yamanaka, C.
1993-01-01
The authors developed a multi-dimensional beam diagnostics system with spatial and time resolution. They used the newly developed Arrayed Pinhole Camera (APC) for this diagnosis. The APC can obtain the spatial distribution of divergence and flux density. Two types of particle detectors are used in this study: CR-39, which records time-integrated images, and a gated Micro-Channel-Plate (MCP) with a CCD camera, which enables time-resolved diagnostics. The diagnostics systems have a divergence resolution better than 10 mrad and a spatial resolution of 0.5 mm on the objects, respectively. The time-resolving system has 10 ns time resolution. The experiments were performed on the Reiden-IV and Reiden-SHVS induction linacs. The authors obtained time-integrated divergence distributions of the Reiden-IV proton beam. They also obtained time-resolved images on Reiden-SHVS
Directory of Open Access Journals (Sweden)
Naser Azad
2013-08-01
Full Text Available This paper presents an empirical investigation to identify important factors influencing multi-dimensional organizational culture. The proposed study designed a Likert-scale questionnaire consisting of 21 questions, distributed it among 300 people working in different business units, and collected 283 completed responses. Cronbach's alpha is calculated as 0.799. In addition, the Kaiser-Meyer-Olkin measure of sampling adequacy and the approximate Chi-Square are 0.821 and 1395.74, respectively. The study implemented principal component analysis, and the results indicated four factors influencing organizational culture: diversity in culture, connection-based culture, integrated culture and structure of culture. In terms of diversity in culture, sensitivity to quality data and cultural flexibility are the most influential sub-factors, while connection-based marketing and relational satisfaction are two important sub-factors associated with diversity in culture. The study discusses other issues.
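The internal-consistency figure reported above (Cronbach's alpha = 0.799) is a standard computation that can be sketched in a few lines. The respondent and item counts below mirror the study, but the Likert-style responses are synthetic, generated only to illustrate the formula:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic Likert-style responses (values 1..5), illustration only:
# 283 respondents x 21 items sharing one latent trait plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(3, 1, size=(283, 1))
scores = np.clip(np.round(latent + rng.normal(0, 0.7, size=(283, 21))), 1, 5)
print(round(cronbach_alpha(scores), 3))
```

Alpha rises with the number of items and with the average inter-item correlation, which is why a 21-item scale can reach an acceptable value even with modest correlations between individual items.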
Racial-ethnic self-schemas: Multi-dimensional identity-based motivation
Oyserman, Daphna
2008-01-01
Prior self-schema research focuses on benefits of being schematic vs. aschematic in stereotyped domains. The current studies build on this work, examining racial-ethnic self-schemas as multi-dimensional, containing multiple, conflicting, and non-integrated images. A multidimensional perspective captures complexity; examining net effects of dimensions predicts within-group differences in academic engagement and well-being. When racial-ethnicity self-schemas focus attention on membership in both in-group and broader society, engagement with school should increase since school is not seen as out-group defining. When racial-ethnicity self-schemas focus attention on inclusion (not obstacles to inclusion) in broader society, risk of depressive symptoms should decrease. Support for these hypotheses was found in two separate samples (8th graders, n = 213, 9th graders followed to 12th grade n = 141). PMID:19122837
Development of Multi-Dimensional RELAP5 with Conservative Momentum Flux
Energy Technology Data Exchange (ETDEWEB)
Jang, Hyung Wook; Lee, Sang Yong [KINGS, Ulsan (Korea, Republic of)
2016-10-15
The non-conservative form of the momentum equations is used in many codes. This suggests that using the non-conservative form for non-porous or open-body problems may not be appropriate. In this paper, two aspects concerning multi-dimensional codes are discussed. One is the properness of the type of the momentum equations; the other is the implementation of the conservative momentum flux term in RELAP5. Once the validity of the modified code is confirmed, it is applied to the analysis of the large break LOCA for APR-1400. From the present and former studies, it is shown that RELAP5 Multi-D with conservative convective terms is applicable to LOCA analysis, and the implementation of the conservative convective terms in RELAP5 appears to be successful. Further efforts have to be made to make it more robust.
Fuzzy Regression Prediction and Application Based on Multi-Dimensional Factors of Freight Volume
Xiao, Mengting; Li, Cheng
2018-01-01
Motivated by the development of air cargo, the multi-dimensional fuzzy regression method is used to determine the influencing factors; the three most important are GDP, total fixed-asset investment and regular flight-route mileage. Combining a systems viewpoint with analogy methods, fuzzy numbers and multiple regression are used to predict civil aviation cargo volume. Comparison with the 13th Five-Year Plan for China's Civil Aviation Development (2016-2020) shows that this method can effectively improve forecasting accuracy and reduce forecasting risk, and that the model is feasible for predicting civil aviation freight volume, with high practical significance and operability.
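As a crisp (non-fuzzy) sketch of the three-factor forecast described above, ordinary least squares over the same three drivers can be set up as follows. The indices and freight volumes are invented placeholders, not the study's data, and the fuzzy-number machinery is deliberately omitted:

```python
import numpy as np

# Synthetic yearly records (illustration only): GDP index, fixed-asset
# investment index, flight-route mileage index -> freight volume.
X = np.array([[1.0, 1.0, 1.0],
              [1.1, 1.2, 1.1],
              [1.3, 1.4, 1.2],
              [1.4, 1.7, 1.4],
              [1.6, 1.9, 1.5]])
y = np.array([5.0, 5.6, 6.3, 7.1, 7.8])

A = np.column_stack([np.ones(len(X)), X])   # intercept + 3 factors
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Forecast for a hypothetical future year; row is [1, GDP, invest, mileage].
forecast = np.array([1.0, 1.8, 2.2, 1.7]) @ beta
print(beta.round(3), round(float(forecast), 2))   # forecast is 8.7 here
```

A fuzzy regression would replace the point coefficients `beta` with fuzzy numbers (centre plus spread), so the forecast comes out as an interval rather than a single value.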
Multi Dimensional Honey Bee Foraging Algorithm Based on Optimal Energy Consumption
Saritha, R.; Vinod Chandra, S. S.
2017-10-01
In this paper, a new nature-inspired algorithm is proposed based on the natural foraging behavior of multi-dimensional honey bee colonies. This method handles issues that arise when food is shared from multiple sources by multiple swarms at multiple destinations. The self-organizing nature of natural honey bee swarms in multiple colonies is based on the principle of energy consumption. Swarms of multiple colonies select a food source that optimally fulfills the requirements of their colonies, based on the energy required for transporting food between a source and a destination. Minimum use of energy leads to maximized profit in each colony. The mathematical model proposed here is based on this principle. It has been successfully evaluated by applying it to a multi-objective transportation problem optimizing cost and time. The algorithm optimizes the needs at each destination in linear time.
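The source-to-destination energy accounting described above can be illustrated by a deliberately simplified deterministic baseline: a greedy allocation that always ships over the cheapest remaining link. This is not the bee-colony algorithm itself, and all supplies, demands and energy costs below are hypothetical:

```python
def greedy_transport(supply, demand, cost):
    """Greedy minimum-cost allocation: repeatedly ship on the cheapest
    remaining source->destination link until all demand is met."""
    supply, demand = supply[:], demand[:]          # work on copies
    alloc = [[0] * len(demand) for _ in supply]
    links = sorted((cost[i][j], i, j) for i in range(len(supply))
                   for j in range(len(demand)))
    for c, i, j in links:
        q = min(supply[i], demand[j])
        if q:
            alloc[i][j] = q
            supply[i] -= q
            demand[j] -= q
    return alloc

supply = [30, 40]        # food available at each source
demand = [20, 25, 25]    # requirement at each destination
cost   = [[4, 6, 8],     # energy per unit transported
          [5, 3, 7]]
plan = greedy_transport(supply, demand, cost)
total = sum(plan[i][j] * cost[i][j] for i in range(2) for j in range(3))
print(plan, total)
```

A real implementation would let multiple swarms explore allocations stochastically and retain the lowest-energy ones; the greedy pass here only shows the objective being minimized.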
Energy method for multi-dimensional balance laws with non-local dissipation
Duan, Renjun; Fellner, Klemens; Zhu, Changjiang
2010-01-01
In this paper, we are concerned with a class of multi-dimensional balance laws with a non-local dissipative source which arise as simplified models for the hydrodynamics of radiating gases. At first we introduce the energy method in the setting of smooth perturbations and study the stability of constant states. Precisely, we use Fourier space analysis to quantify the energy dissipation rate and recover the optimal time-decay estimates for perturbed solutions via an interpolation inequality in Fourier space. As an application, the developed energy method is used to prove stability of smooth planar waves in all dimensions n ≥ 2, and also to show existence and stability of time-periodic solutions in the presence of a time-periodic source. Optimal rates of convergence of solutions towards the planar waves or time-periodic states are also shown provided the initial perturbations are in L1. © 2009 Elsevier Masson SAS.
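The optimal time-decay estimates mentioned above are of heat-kernel type. As a benchmark, the linear heat equation on $\mathbb{R}^n$ satisfies

```latex
\| e^{t\Delta} u_0 \|_{L^2(\mathbb{R}^n)}
  \;\le\; C\, t^{-n/4}\, \| u_0 \|_{L^1(\mathbb{R}^n)}, \qquad t > 0,
```

and it is decay of this type, for $L^1$ initial perturbations, that the Fourier-space energy method recovers for the dissipative balance laws above.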
Energy Technology Data Exchange (ETDEWEB)
Mueller, Bernhard
2009-05-07
In this thesis, we have presented the first multi-dimensional models of core-collapse supernovae that combine a detailed, up-to-date treatment of neutrino transport, the equation of state, and, in particular, general relativistic gravity. Building on the well-tested neutrino transport code VERTEX and the GR hydrodynamics code CoCoNuT, we developed and implemented a relativistic generalization of a ray-by-ray-plus method for energy-dependent neutrino transport. The result of these efforts, the VERTEX-CoCoNuT code, also incorporates a number of improved numerical techniques that have not been used in the code components VERTEX and CoCoNuT before. In order to validate the VERTEX-CoCoNuT code, we conducted several test simulations in spherical symmetry, most notably a comparison with the one-dimensional relativistic supernova code AGILE-BOLTZTRAN and the Newtonian PROMETHEUS-VERTEX code. (orig.)
Analytical modeling for fractional multi-dimensional diffusion equations by using Laplace transform
Directory of Open Access Journals (Sweden)
Devendra Kumar
2015-01-01
Full Text Available In this paper, we propose a simple numerical algorithm, based on the homotopy analysis transform method, for solving multi-dimensional diffusion equations of fractional order, which describe density dynamics in a material undergoing diffusion. The fractional derivative is described in the Caputo sense. The homotopy analysis transform method is an innovative adjustment of the Laplace transform method and makes the calculation much simpler. The technique is not limited to a small parameter, as in the classical perturbation method. The scheme gives an analytical solution in the form of a convergent series with easily computable components, requiring no linearization or small perturbation. The numerical solutions obtained by the proposed method indicate that the approach is easy to implement and computationally very attractive.
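The Caputo derivative referred to above has the standard definition: for $n-1 < \alpha \le n$ with $n \in \mathbb{N}$,

```latex
{}^{C}\!D^{\alpha}_{t} f(t)
  \;=\; \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t}
        \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau ,
```

which, unlike the Riemann-Liouville form, lets fractional diffusion problems carry ordinary (integer-order) initial conditions.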
A multi-dimensional framework to assist in the design of successful shared services centres
Directory of Open Access Journals (Sweden)
Mark Borman
2012-04-01
Full Text Available Organisations are increasingly looking to realise the benefits of shared services, yet there is little guidance available as to the best way to proceed. A multi-dimensional framework is presented that considers the service provided, the design of the shared services centre and the organisational context it sits within. Case studies are then used to determine which specific attributes from each dimension are associated with success and how they should be aligned. It is concluded that there appears to be a single, broadly standard pattern of attributes for successful Shared Services Centres (SSCs) across the proposed dimensions of Activity, Environment, History, Resources, Strategy, Structure, Management, Technology and Individual Skills. It should also be noted, though, that some deviation from the identified standard along some dimensions is possible without adverse effect, i.e. the alignment identified appears to be relatively soft.
TWO-DIMENSIONAL CORE-COLLAPSE SUPERNOVA MODELS WITH MULTI-DIMENSIONAL TRANSPORT
International Nuclear Information System (INIS)
Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun
2015-01-01
We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant O(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate O(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying ''ray-by-ray'' approach employed by all other groups may be compromising their results. We show that ''ray-by-ray'' calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion
Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data
Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.
2017-12-01
With growing attention on the ocean and the rapid development of marine detection, there are increasing demands for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing and a rapid grid-oriented strategy, a series of efficient, high-quality visualization methods that can deal with large-scale, multi-dimensional marine data in different environmental circumstances is proposed in this paper. Firstly, a high-quality seawater simulation is realized using an FFT algorithm, bump mapping and texture animation. Secondly, large-scale multi-dimensional marine hydrological environmental data are visualized by 3D interactive technologies and volume rendering techniques. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, surface reconstruction, a dynamic LOD algorithm and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a satisfying marine environment simulation but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is established with the OSG 3D rendering engine. It is integrated with the marine visualization methods mentioned above and shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. With such an application, valuable reference and decision-making information can be provided for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning and emergency response.
van Overveld, Mark; de Jong, Peter J.; Peters, Madelon L.
The Multi-Dimensional Blood Phobia Inventory (MBPI: Wenzel & Holt, 2003) is the only instrument available that assesses both disgust and anxiety for blood-phobic stimuli. As inflated levels of disgust propensity (i.e., tendency to experience disgust more readily) are often observed in blood phobia,
Thunissen, M.; Arensbergen, P. van
2015-01-01
- Purpose – The purpose of this paper is to contribute to the development of a broader, multi-dimensional approach to talent that helps scholars and practitioners to fully understand the nuances and complexity of talent in the organizational context. - Design/methodology/approach – The data were
DEFF Research Database (Denmark)
Sørensen, John Aasted
2011-01-01
The objectives of Discrete Mathematics (IDISM2) are: the introduction of the mathematics needed for analysis, design and verification of discrete systems, including the application within programming languages for computer systems. Having passed the IDISM2 course, the student will be able to accomplish the following: understand and apply formal representations in discrete mathematics; understand and apply formal representations in problems within discrete mathematics; understand methods for solving problems in discrete mathematics; apply methods for solving problems in discrete mathematics; … construct a finite state machine for a given application; apply these concepts to new problems. The teaching in Discrete Mathematics is a combination of sessions with lectures and students solving problems, either manually or by using Matlab. Furthermore, a selection of projects must be solved and handed…
International Nuclear Information System (INIS)
Bae, B. U.; Park, Y. S.; Kim, J. R.; Kang, K. H.; Choi, K. Y.; Sung, H. J.; Hwang, M. J.; Kang, D. H.; Lim, S. G.; Jun, S. S.
2015-01-01
Participants of DSP-03 were divided into three groups, each focusing on a specific subject related to the enhancement of the code analysis. Group A investigated the scaling capability of ATLAS test data by comparison with a code analysis for a prototype, and Group C investigated the effect of various models in the one-dimensional codes. This paper briefly summarizes the code analysis results from the Group B participants in DSP-03 of the ATLAS test facility. The code analysis by Group B focuses on investigating the multi-dimensional thermal hydraulic phenomena in the ATLAS facility during the SLB transient. Even though a one-dimensional system analysis code cannot simulate the whole ATLAS facility with a nodalization at CFD (Computational Fluid Dynamics) scale, the reactor pressure vessel can be modeled with multi-dimensional components to reflect the thermal mixing phenomena inside the downcomer and the core. Also, CFD can give useful information for understanding complex phenomena in specific components such as the reactor pressure vessel. In this analysis activity of ATLAS DSP-03, Group B adopted a multi-dimensional approach to the code analysis of the SLB transient in the ATLAS test facility. The main purpose of the analysis was to investigate the prediction capability of multi-dimensional analysis tools for the SLB experiment result. In particular, the asymmetric cooling and thermal mixing phenomena in the reactor pressure vessel were the main focus of the multi-dimensional component modeling
International Nuclear Information System (INIS)
Zou, Zhengping; Liu, Jingyuan; Zhang, Weihao; Wang, Peng
2016-01-01
Multi-dimensional coupling simulation is an effective approach for evaluating the flow and aero-thermal performance of shrouded turbines, which can balance the simulation accuracy and computing cost effectively. In this paper, 1D leakage models are proposed based on classical jet theories and dynamics equations, which can be used to evaluate most of the main features of shroud leakage flow, including the mass flow rate, radial and circumferential momentum, temperature and the jet width. Then, the 1D models are expanded to 2D distributions on the interface by using a multi-dimensional scaling method. Based on the models and multi-dimensional scaling, a multi-dimensional coupling simulation method for shrouded turbines is developed, in which, some boundary source and sink are set on the interface between the shroud and the main flow passage. To verify the precision, some simulations on the design point and off design points of a 1.5 stage turbine are conducted. It is indicated that the models and methods can give predictions with sufficient accuracy for most of the flow field features and will contribute to pursue deeper understanding and better design methods of shrouded axial turbines, which are the important devices in energy engineering. - Highlights: • Free and wall attached jet theories are used to model the leakage flow in shrouds. • Leakage flow rate is modeled by virtual labyrinth number and residual-energy factor. • A scaling method is applied to 1D model to obtain 2D distributions on interfaces. • A multi-dimensional coupling CFD method for shrouded turbines is proposed. • The proposed coupling method can give accurate predictions with low computing cost.
Analysis of Phenix End-of-Life asymmetry test with multi-dimensional pool modeling of MARS-LMR code
International Nuclear Information System (INIS)
Jeong, H.-Y.; Ha, K.-S.; Choi, C.-W.; Park, M.-G.
2015-01-01
Highlights: • Pool behaviors under asymmetrical conditions in an SFR were evaluated with MARS-LMR. • The Phenix asymmetry test was analyzed one-dimensionally and multi-dimensionally. • One-dimensional modeling has limitations in predicting the cold pool temperature. • Multi-dimensional modeling shows improved prediction of stratification and mixing. - Abstract: The understanding of complicated pool behaviors and their modeling is essential for the design and safety analysis of a pool-type Sodium-cooled Fast Reactor. One of the remarkable recent efforts in the study of pool thermal-hydraulic behaviors is the asymmetry test performed as part of the Phenix End-of-Life tests by the CEA. To evaluate the performance of the MARS-LMR code, a key system analysis tool for the design of an SFR in Korea, in predicting thermal hydraulic behaviors under an asymmetrical condition, the Phenix asymmetry test is analyzed with MARS-LMR in the present study. Pool regions are modeled with two different approaches, one-dimensional and multi-dimensional, and the prediction results are analyzed to identify the appropriateness of each modeling method. The prediction with one-dimensional pool modeling shows a large deviation from the measured data at the early stage of the test, which points to its limitations in describing the complicated thermal-hydraulic phenomena. When the pool regions are modeled multi-dimensionally, the prediction improves considerably. This improvement is explained by the enhanced modeling of pool mixing in the multi-dimensional approach. On the basis of these results, it is concluded that accurate modeling of pool thermal-hydraulics is a prerequisite for the evaluation of design performance and safety margin quantification in future SFR developments
Samejima, Fumiko
In latent trait theory the latent space, or space of the hypothetical construct, is usually represented by some unidimensional or multi-dimensional continuum of real numbers. Like the latent space, the item response can either be treated as a discrete variable or as a continuous variable. Latent trait theory relates the item response to the latent…
Integral and discrete inequalities and their applications
Qin, Yuming
2016-01-01
This book focuses on one- and multi-dimensional linear integral and discrete Gronwall-Bellman type inequalities. It provides a useful collection and systematic presentation of known and new results, as well as many applications to differential (ODE and PDE), difference, and integral equations. With this work the author fills a gap in the literature on inequalities, offering an ideal source for researchers in these topics. The present volume is part 1 of the author’s two-volume work on inequalities. Integral and discrete inequalities are a very important tool in classical analysis and play a crucial role in establishing the well-posedness of the related equations, i.e., differential, difference and integral equations.
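The prototype of the Gronwall-Bellman inequalities collected in the book is the classical integral form: if $u$ is continuous and non-negative, $a \ge 0$ is a constant, $b \ge 0$ is continuous, and $u(t) \le a + \int_0^t b(s)\,u(s)\,ds$ for $t \ge 0$, then

```latex
u(t) \;\le\; a \exp\!\left( \int_{0}^{t} b(s)\, ds \right), \qquad t \ge 0 .
```

This is the basic tool behind the well-posedness arguments mentioned above; the discrete analogue replaces the integral by a sum over grid points.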
Zhao, Yongli; Ji, Yuefeng; Zhang, Jie; Li, Hui; Xiong, Qianjin; Qiu, Shaofeng
2014-08-01
Ultrahigh throughput capacity requirements are challenging current optical switching nodes amid the fast development of data center networks. Pbit/s-level all-optical switching networks will need to be deployed soon, which will increase the complexity of node architecture. How to control the future network and node equipment together becomes a new problem. An enhanced Software Defined Networking (eSDN) control architecture is proposed in this paper, consisting of a Provider NOX (P-NOX) and a Node NOX (N-NOX). With the cooperation of P-NOX and N-NOX, flexible control of the entire network can be achieved. An all-optical switching network testbed has been experimentally demonstrated with efficient eSDN control. Pbit/s-level all-optical switching nodes in the testbed are implemented based on a multi-dimensional switching architecture, i.e. multi-level and multi-planar. Due to space and cost limitations, each optical switching node is equipped with only four input line boxes and four output line boxes. Experimental results are given to verify the performance of the proposed control and switching architecture.
Amado, Diana; Del Villar, Fernando; Leo, Francisco Miguel; Sánchez-Oliva, David; Sánchez-Miguel, Pedro Antonio; García-Calvo, Tomás
2014-01-01
This research study purports to verify the effect produced on the motivation of physical education students of a multi-dimensional programme in dance teaching sessions. This programme incorporates the application of teaching skills directed towards supporting the needs of autonomy, competence and relatedness. A quasi-experimental design was carried out with two natural groups of 4th year Secondary Education students - control and experimental -, delivering 12 dance teaching sessions. A prior training programme was carried out with the teacher in the experimental group to support these needs. An initial and final measurement was taken in both groups and the results revealed that the students from the experimental group showed an increase of the perception of autonomy and, in general, of the level of self-determination towards the curricular content of corporal expression focused on dance in physical education. To this end, we highlight the programme's usefulness in increasing the students' motivation towards this content, which is so complicated for teachers of this area to develop. PMID:24454831
Stochastic volatility and multi-dimensional modeling in the European energy market
Energy Technology Data Exchange (ETDEWEB)
Vos, Linda
2012-07-01
In energy prices there is evidence for stochastic volatility. Stochastic volatility affects the price of path-dependent options and therefore has to be modeled properly. We introduce a multi-dimensional non-Gaussian stochastic volatility model with leverage which can be used in energy pricing. It captures special features of energy prices such as price spikes, mean reversion, stochastic volatility and inverse leverage. Moreover, it allows modeling of dependencies between different commodities. The forward price dynamics derived from this multi-variate spot price model provides a very flexible structure, accommodating contango, backwardation and hump-shaped forward curves. Alternatively, energy prices can be modeled by a 2-factor model consisting of a non-Gaussian stable CARMA process and a non-stationary trend modeled by a Lévy process. This model is also able to capture special features like price spikes, mean reversion and the low-frequency dynamics in the market. A robust L1-filter is introduced to filter out the states of the CARMA process. When applied to German electricity EEX exchange data, an overall negative risk premium is found; however, close to delivery a positive risk premium is observed. (Author)
A Simple Free Surface Tracking Model for Multi-dimensional Two-Fluid Approaches
International Nuclear Information System (INIS)
Lee, Seungjun; Yoon, Han Young
2014-01-01
Developments in two-phase experiments devoted to finding unknown phenomenological relationships have turned conventional flow pattern maps into sophisticated ones and even extended them to multi-dimensional use. However, for a system with a large void fraction gradient, such as a pool with a free surface, the flow patterns vary spatially across a small number of cells, sometimes resulting in unstable and unrealistic flow predictions at the cells with large void fraction gradients. The numerical stability problem arising from the free surface is thus of major interest in analyses of passive cooling pools that convect decay heat naturally, which have recently become a design issue for increasing the safety level of nuclear reactors. In this research, a new and simple free surface tracking method combined with a simplified topology map is presented. The method modifies the interfacial drag coefficient only for the cells identified as the free surface. Its performance is shown by comparing natural convection analyses of a small-scale pool under single- and two-phase conditions.
Application of neural network to multi-dimensional design window search in reactor core design
International Nuclear Information System (INIS)
Kugo, Teruhiko; Nakagawa, Masayuki
1999-01-01
In reactor core design, many parametric survey calculations must be carried out to determine an optimal set of basic design parameter values; conventionally, these consume a large amount of computation time and labor. To support design work, we investigate a procedure to efficiently search for a design window, defined as the feasible design parameter ranges satisfying design criteria and requirements, in a multi-dimensional space composed of several basic design parameters. The present method is applied to the neutronics and thermal hydraulics fields. The principle of the method is to construct a multilayer neural network that quickly simulates the response of an analysis code through a training process, and then to reduce computation time by using the neural network in place of parametric studies with the analysis codes. To verify the applicability of the method to neutronics and thermal hydraulics design, we have applied it to high conversion water reactors and examined the effects of the neural network structure and the number of teaching patterns on the accuracy of the design window estimated by the network. From the results of these applications, a guideline for applying the present method is proposed; by following it, the method can predict an appropriate design window in a reasonable computation time. (author)
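The workflow above (train a fast surrogate of an expensive analysis code, then sweep the parameter space cheaply to map the design window) can be sketched as follows. For brevity, a quadratic least-squares fit stands in for the paper's multilayer neural network, and the "analysis code" is a two-parameter toy function; both are assumptions of this sketch:

```python
import numpy as np

# Toy "analysis code": expensive in reality, a cheap stand-in here
# (e.g. peak temperature as a function of two design parameters).
def analysis_code(x, y):
    return 600 + 80 * (x - 0.5) ** 2 + 120 * (y - 0.4) ** 2

# 1) Sample a few training points (the costly step in practice).
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(50, 2))
z = analysis_code(pts[:, 0], pts[:, 1])

# 2) Fit a cheap surrogate (quadratic least squares instead of an MLP).
def features(p):
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

coef, *_ = np.linalg.lstsq(features(pts), z, rcond=None)

# 3) Sweep a fine grid with the surrogate only, keeping the points that
#    satisfy the design criterion -- these form the design window.
gx, gy = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pred = features(grid) @ coef
window = grid[pred < 610.0]        # criterion: predicted value < 610
print(len(window), "grid points inside the design window")
```

The expensive code is called only 50 times; the 10,201-point sweep runs entirely on the surrogate, which is the source of the speed-up the abstract describes.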
Sidani, Souraya; Epstein, Dana R; Fox, Mary
2017-10-01
Treatment satisfaction is recognized as an essential aspect in the evaluation of an intervention's effectiveness, but there is no measure that provides for its comprehensive assessment with regard to behavioral interventions. Informed by a conceptualization generated from a literature review, we developed a measure that covers several domains of satisfaction with behavioral interventions. In this paper, we briefly review its conceptualization and describe the Multi-Dimensional Treatment Satisfaction Measure (MDTSM) subscales. Satisfaction refers to the appraisal of the treatment's process and outcome attributes. The MDTSM has 11 subscales assessing treatment process and outcome attributes: treatment components' suitability and utility, attitude toward treatment, desire for continued treatment use, therapist competence and interpersonal style, format and dose, perceived benefits of the health problem and everyday functioning, discomfort, and attribution of outcomes to treatment. The MDTSM was completed by persons (N = 213) in the intervention group in a large trial of a multi-component behavioral intervention for insomnia within 1 week following treatment completion. The MDTSM's subscales demonstrated internal consistency reliability (α: .65 - .93) and validity (correlated with self-reported adherence and perceived insomnia severity at post-test). The MDTSM subscales can be used to assess satisfaction with behavioral interventions and point to aspects of treatments that are viewed favorably or unfavorably. © 2017 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Buzwell Simone
2011-10-01
Full Text Available Background: The concept of resilience has captured the imagination of researchers and policy makers over the past two decades. However, despite the ever-growing body of resilience research, there is a paucity of relevant, comprehensive measurement tools. In this article, the development of a theoretically based, comprehensive multi-dimensional measure of resilience in adolescents is described. Methods: Extensive literature review and focus groups with young people living with chronic illness informed the conceptual development of scales and items. Two sequential rounds of factor and scale analyses were undertaken to revise the conceptually developed scales using data collected from young people living with a chronic illness and a general population sample. Results: The revised Adolescent Resilience Questionnaire comprises 93 items and 12 scales measuring resilience factors in the domains of self, family, peer, school and community. All scales have acceptable alpha coefficients. Revised scales closely reflect conceptually developed scales. Conclusions: It is proposed that, with further psychometric testing, this new measure of resilience will provide researchers and clinicians with a comprehensive and developmentally appropriate instrument to measure a young person's capacity to achieve positive outcomes despite life stressors.
Directory of Open Access Journals (Sweden)
Tzu-Chien Hsiao
2013-11-01
Full Text Available Excitation-emission matrix (EEM) fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA) for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD) was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes.
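As a rough illustration of the conventional PCA step described above, the sketch below builds synthetic stand-in EEM data (two made-up "fluorophore" patterns, not real spectra), flattens each matrix to a vector, and extracts principal spectral factors with an SVD-based PCA; all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EEM data: 50 measurements, each an 8x10
# excitation-emission matrix built from two latent "fluorophores".
n_meas, n_ex, n_em = 50, 8, 10
comp1 = np.outer(np.hanning(n_ex), np.hanning(n_em))                 # pattern 1
comp2 = np.outer(np.linspace(0, 1, n_ex), np.linspace(1, 0, n_em))   # pattern 2
weights = rng.random((n_meas, 2))
eems = (weights[:, :1, None] * comp1 + weights[:, 1:, None] * comp2
        + 0.01 * rng.standard_normal((n_meas, n_ex, n_em)))

# Flatten each EEM to a vector and run PCA via SVD on centered data.
X = eems.reshape(n_meas, -1)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = S**2 / np.sum(S**2)
print(f"variance explained by first two PCs: {explained[:2].sum():.3f}")
```

Because the synthetic data have rank two plus weak noise, the first two principal components capture nearly all the variance; overlapping real fluorophores are exactly where this linear decomposition struggles and MEEMD is argued to help.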
Amado, Diana; Del Villar, Fernando; Leo, Francisco Miguel; Sánchez-Oliva, David; Sánchez-Miguel, Pedro Antonio; García-Calvo, Tomás
2014-01-01
This research study aims to verify the effect on the motivation of physical education students of a multi-dimensional programme in dance teaching sessions. The programme incorporates the application of teaching skills directed towards supporting the needs of autonomy, competence and relatedness. A quasi-experimental design was carried out with two natural groups of 4th-year Secondary Education students (control and experimental), delivering 12 dance teaching sessions. A prior training programme was carried out with the teacher of the experimental group to support these needs. Initial and final measurements were taken in both groups, and the results revealed that the students in the experimental group showed an increase in the perception of autonomy and, in general, in the level of self-determination towards the curricular content of corporal expression focused on dance in physical education. We therefore highlight the programme's usefulness in increasing students' motivation towards this content, which is so complicated for teachers of this area to develop.
Multi-dimensional SAR tomography for monitoring the deformation of newly built concrete buildings
Ma, Peifeng; Lin, Hui; Lan, Hengxing; Chen, Fulong
2015-08-01
Deformation often occurs in buildings at early ages, and constant inspection of deformation is of significant importance for discovering possible cracking and avoiding wall failure. This paper exploits the multi-dimensional SAR tomography technique to monitor the deformation behavior of two newly built buildings (B1 and B2), with a special focus on the effects of concrete creep and shrinkage. To separate the nonlinear thermal expansion from the total deformation, the extended 4-D SAR technique is exploited. The thermal map estimated from 44 TerraSAR-X images demonstrates that the derived thermal amplitude is highly related to the building height, due to the upward accumulative effect of thermal expansion. The linear deformation velocity map reveals that B1 was subject to settlement during the construction period. In addition, the creep and shrinkage of B1 lead to wall shortening, a height-dependent movement in the downward direction, while the asymmetrical creep of B2 triggers wall deflection, a height-dependent movement in the deflection direction. It is also validated that the extended 4-D SAR can rectify the bias in the wall shortening and wall deflection estimated by 4-D SAR.
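The separation of a linear deformation trend from thermal expansion can be conveyed with a simplified scalar analogue of the extended 4-D SAR idea: fit one scatterer's displacement time series jointly to a linear-velocity term and a temperature-proportional term. The numbers below (44 acquisitions, the velocity, and the thermal amplitude) are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified scalar analogue: displacement = linear trend (settlement /
# creep) + thermal expansion proportional to temperature + noise.
t = np.linspace(0, 3.0, 44)                     # years (44 acquisitions)
temp = 12 + 10 * np.sin(2 * np.pi * t)          # seasonal temperature, deg C
v_true, k_true = -4.0, 0.35                     # mm/yr, mm/degC (assumed)
d = v_true * t + k_true * (temp - temp.mean()) + 0.2 * rng.standard_normal(t.size)

# Joint least-squares estimate of deformation velocity and thermal amplitude.
A = np.column_stack([t, temp - temp.mean(), np.ones_like(t)])
v_hat, k_hat, _ = np.linalg.lstsq(A, d, rcond=None)[0]
print(f"v = {v_hat:.2f} mm/yr, thermal amplitude = {k_hat:.3f} mm/degC")
```

Estimating both terms jointly is what prevents the seasonal thermal signal from biasing the linear velocity, which is the bias-rectification role the abstract attributes to the extended 4-D SAR.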
Measurement of multi-dimensional flow structure for flow boiling in a tube
International Nuclear Information System (INIS)
Adachi, Yu; Ito, Daisuke; Saito, Yasushi
2014-01-01
Aiming to measure the multi-dimensional flow structure of in-tube boiling two-phase flow, the authors built their own wire-mesh measurement system based on electrical conductivity measurement, and examined the relationship between the electrical conductivity obtained by the wire-mesh sensor and the void fraction. In addition, the authors measured the void fraction using neutron radiography and compared the result with the value measured by the wire-mesh sensor. The comparison with neutron radiography showed that, similarly to the conventional result, the new method underestimated the void fraction in flows with void fractions of about 0.2-0.5. In addition, since the wire-mesh sensor cannot measure dispersed droplets, it tends to overestimate the void fraction in the high-void-fraction region, such as churn flow accompanied by droplet generation. In the electrical-conductivity wire-mesh sensor method, it is therefore necessary to correctly take into account the effect of liquid films or droplets. The authors also built a measurement system based on the capacitance wire-mesh sensor method, which uses the difference in dielectric constant, confirmed its transmission and reception signals using deionized water as a medium, and showed the validity of the system. The capacitance method has the potential to measure the dispersed droplets as well. (A.O.)
Operationalising the Sustainable Knowledge Society Concept through a Multi-dimensional Scorecard
Dragomirescu, Horatiu; Sharma, Ravi S.
Since the early 21st century, building a Knowledge Society has been an aspiration not only for developed countries but also for developing ones. There is increasing concern worldwide for rendering this process manageable towards a sustainable, equitable and ethically sound societal system. As proper management, including at the societal level, requires both wisdom and measurement, the operationalisation of the Knowledge Society concept encompasses a qualitative side, related to vision-building, and a quantitative one, pertaining to designing and using dedicated metrics. The endeavour of enabling policy-makers to map, steer and monitor the sustainable development of the Knowledge Society at the national level, in a world increasingly based on creativity, learning and open communication, has led researchers to devise a wide range of composite indexes. However, as such indexes are generated through weighting and aggregation, their usefulness is limited to retrospectively assessing and comparing levels and states already attained; to better serve policy-making purposes, composite indexes should therefore be complemented by other instruments. Complexification, inspired by the systemic paradigm, allows one to obtain "rich pictures" of the Knowledge Society; to this end, a multi-dimensional scorecard of Knowledge Society development is suggested here, which seeks a more contextual orientation towards sustainability. It is assumed that, in the case of the Knowledge Society, the sustainability condition goes well beyond the "greening" desideratum and should be of a higher order, relying upon the conversion of natural and productive life-cycles into virtuous circles of self-sustainability.
Dynameomics: a multi-dimensional analysis-optimized database for dynamic protein data.
Kehl, Catherine; Simms, Andrew M; Toofanny, Rudesh D; Daggett, Valerie
2008-06-01
The Dynameomics project is our effort to characterize the native-state dynamics and folding/unfolding pathways of representatives of all known protein folds by way of molecular dynamics simulations, as described by Beck et al. (in Protein Eng. Des. Select., the first paper in this series). The data produced by these simulations are highly multidimensional in structure and multiple terabytes in size. Both of these features present significant challenges for storage, retrieval and analysis. For optimal data modeling and flexibility, we needed a platform that supported both multidimensional indices and hierarchical relationships between related types of data, and that could be integrated within our data warehouse, as described in the accompanying paper directly preceding this one. For these reasons, we have chosen On-line Analytical Processing (OLAP), a multi-dimensional analysis-optimized database, as an analytical platform for these data. OLAP is a mature technology in the financial sector, but it has not been used extensively for scientific analysis. Our project is, furthermore, unusual in its focus on the multidimensional and analytical capabilities of OLAP rather than its aggregation capacities. The dimensional data model and hierarchies are very flexible. The query language is concise for complex analysis and rapid data retrieval. OLAP shows great promise for dynamic protein analysis in bioengineering and biomedical applications. In addition, OLAP may have similar potential for other scientific and engineering applications involving large and complex datasets.
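A toy roll-up in plain Python can convey the kind of multidimensional aggregation an OLAP cube answers; the fact table and dimension names below are invented for illustration and are not the Dynameomics schema.

```python
from collections import defaultdict

# Toy fact table of simulated per-residue measurements, with dimensions
# (protein, simulation, residue) and one measure (RMSD). Values are made up.
facts = [
    # (protein, simulation, residue, rmsd)
    ("1abc", "native", 1, 0.8), ("1abc", "native", 2, 1.1),
    ("1abc", "unfold", 1, 3.2), ("1abc", "unfold", 2, 2.9),
    ("2xyz", "native", 1, 0.6), ("2xyz", "unfold", 1, 4.1),
]

# Roll up along the residue dimension: mean RMSD per (protein, simulation),
# the kind of slice an OLAP cube answers without scanning raw trajectories.
acc = defaultdict(list)
for protein, sim, _residue, rmsd in facts:
    acc[(protein, sim)].append(rmsd)
rollup = {dims: sum(v) / len(v) for dims, v in acc.items()}
print(rollup[("1abc", "native")])
```

In a real OLAP deployment such aggregates are pre-materialized along dimension hierarchies and addressed through a cube query language; the dictionary here only mimics the shape of a query result.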
Moving toward multi-dimensional radiotherapy and the role of radiobiology
International Nuclear Information System (INIS)
Oita, Masataka; Uto, Yoshihiro; Aoyama, Hideki
2014-01-01
Recent radiotherapy for cancer treatment enables high-precision irradiation of the target under computed image guidance. Developments in such radiotherapy have played a large role in improved cancer treatment strategies. In addition, molecular mechanistic studies of cancer cell proliferation contribute to the multidisciplinary field of clinical radiotherapy. The combination of image guidance and molecular targeting of cancer cells therefore makes individualized cancer treatment possible. In particular, particle-beam therapy and boron neutron capture therapy (BNCT) have been in the spotlight, and installations of such devices are widely planned. With the progress of, and collaboration between, radiation biology and engineering physics, a new style of radiotherapy is becoming available in the post-genome era. In the 2010s, high-tech machines providing spatiotemporal control of radiotherapy came into practice. Although room for improvement remains, e.g., more precise prediction of the radiosensitivity or growth of individual tumors and of adverse outcomes after treatment, multi-dimensional optimization of individualized irradiation based on molecular radiation biology and medical physics is important for the further development of radiotherapy. (author)
Multi-dimensional self-esteem and substance use among Chinese adolescents.
Wu, Cynthia S T; Wong, Ho Ting; Shek, Carmen H M; Loke, Alice Yuen
2014-10-01
Substance use among adolescents has caused worldwide public health concern in recent years. Overseas studies have demonstrated an association between adolescent self-esteem and substance use, but studies within a Chinese context are limited. A study was therefore initiated to: (1) explore the 30-day prevalence of substance use (smoking, drinking, and drugs) among male and female adolescents in Hong Kong; (2) identify the significant associations between multi-dimensional self-esteem and gender; and (3) examine the relationship between multi-dimensional self-esteem and substance use. A self-esteem scale and the Chinese version of the global school-based student health survey were adopted. A total of 1,223 students were recruited from two mixed-gender schools and one boys' school. Females had a lower 30-day prevalence of cigarette, alcohol, and drug use. They also had significantly higher peer and family self-esteem but lower sport-related self-esteem. Body-image self-esteem was a predictor of alcohol use among females, while peer and school self-esteem were predictors of drug use among males. In summary, the findings demonstrated the influence of self-esteem on the overall well-being of adolescents. Schools could play a role in promoting physical fitness and positive relationships between adolescents and their peers, family, and schools to fulfill their physical and psychological self-esteem needs.
The knock study of methanol fuel based on multi-dimensional simulation analysis
International Nuclear Information System (INIS)
Zhen, Xudong; Liu, Daming; Wang, Yang
2017-01-01
Methanol is an alternative fuel and is considered to be one of the most favorable fuels for engines. In this study, knocking combustion in a developed ORCEM (optical rapid compression and expansion machine) is studied based on multi-dimensional simulation analysis. An LES (large-eddy simulation) model coupled with methanol chemical reaction kinetics (21 species and 84 elementary reactions) is adopted to study knocking combustion. The results showed that end-gas auto-ignition first occurred near the chamber wall because of the higher temperature and pressure there. The H_2O_2 species could serve as a good flame-front indicator. OH radicals played the major role, while HCO radicals were generated in such small amounts that their concentration during knocking combustion could be neglected. The mean reaction intensities of CH_2O, OH, H_2O_2 and CO were higher than those of other species during knocking combustion. Finally, this paper puts forward some new suggestions on the weaknesses in knocking-combustion research on methanol fuel. - Highlights: • Knocking combustion of methanol was studied in a developed ORCEM. • The LES coupled with detailed chemical kinetics was adopted for the simulation study. • The end-gas auto-ignition first occurred near the chamber wall. • The OH radical was the predominant species during knocking combustion. • The H_2O_2 species could be a good flame-front indicator.
Multi-dimensional modelling of spray, in-cylinder air motion and fuel ...
Indian Academy of Sciences (India)
Simulations over a range of speed and load indicate the need ... in this mode however, do experience the thermodynamic advantages of intake charge cooling ... approach where discrete droplets are tracked using a Lagrangian approach.
DEFF Research Database (Denmark)
Sørensen, John Aasted
2011-01-01
… construct a finite state machine for a given application, and apply these concepts to new problems. The teaching in Discrete Mathematics is a combination of sessions with lectures and students solving problems, either manually or by using Matlab; furthermore, a selection of projects must be solved and handed in … to accomplish the following: understand and apply formal representations in discrete mathematics; understand and apply formal representations in problems within discrete mathematics; understand methods for solving problems in discrete mathematics; apply methods for solving problems in discrete mathematics to new problems. Relations and functions: define a product set; define and apply equivalence relations; construct and apply functions; apply these concepts to new problems. Natural numbers and induction: define the natural numbers; apply the principle of induction to verify a selection of properties …
DEFF Research Database (Denmark)
Busch, Peter Andre; Zinner Henriksen, Helle
2018-01-01
This study reviews 44 peer-reviewed articles on digital discretion published in the period from 1998 to January 2017. Street-level bureaucrats have traditionally had a wide ability to exercise discretion, stirring debate since they can add their personal footprint on public policies. Digital discretion is suggested to reduce this footprint by influencing or replacing their discretionary practices using ICT. What is less researched is whether digital discretion can cause changes in public policy outcomes, and under what conditions such changes can occur. Using the concept of public service values, we suggest that digital discretion can strengthen ethical and democratic values but weaken professional and relational values. Furthermore, we conclude that contextual factors such as considerations made by policy makers on the macro-level and the degree of professionalization of street-level work …
Directory of Open Access Journals (Sweden)
LI Jiyuan
2014-06-01
Full Text Available SOLAP (Spatial On-Line Analytical Processing) has recently been applied to multi-dimensional analysis of remote sensing data. However, its computation performance faces a considerable challenge from large-scale datasets. A geo-raster cube model extended by Map-Reduce is proposed, which refers to the application of Map-Reduce (a data-intensive computing paradigm) in the OLAP field. In this model, the existing methods are modified to adapt to a distributed environment based on multi-level raster tiles. Multi-dimensional map algebra is then introduced to decompose the SOLAP computation into multiple distributed parallel map-algebra functions on tiles under the support of Map-Reduce. Drought monitoring by remote sensing data is employed as a case study to illustrate the model construction and application. A prototype is also implemented, and performance testing shows the efficiency and scalability of this model.
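The tile-wise decomposition can be sketched as a minimal map/reduce over raster tiles: each map task emits a partial aggregate for its tile, and the reduce step combines them into a scene-wide statistic. The synthetic raster and tile size below are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scene: a vegetation-index raster split into 64x64 tiles,
# mimicking the tile-based geo-raster cube (sizes are illustrative).
raster = rng.random((256, 256))
tiles = [raster[i:i + 64, j:j + 64]
         for i in range(0, 256, 64) for j in range(0, 256, 64)]

# Map: each tile independently emits a partial aggregate (sum, count).
def map_tile(tile):
    return tile.sum(), tile.size

# Reduce: combine partial aggregates into the scene-wide mean.
partials = [map_tile(t) for t in tiles]
total, count = map(sum, zip(*partials))
mean_mapreduce = total / count
print(f"scene mean = {mean_mapreduce:.4f}")
```

Because sums and counts combine associatively, the map tasks can run on distributed workers holding only their own tiles, which is the property the Map-Reduce extension of the cube model relies on.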
Directory of Open Access Journals (Sweden)
Anne Pauly
Full Text Available In psychiatry, hospital stays and transitions to the ambulatory sector are susceptible to major changes in drug therapy that lead to complex medication regimens and common non-adherence among psychiatric patients. A multi-dimensional and inter-sectoral intervention is hypothesized to improve the adherence of psychiatric patients to their pharmacotherapy. 269 patients from a German university hospital were included in a prospective, open, clinical trial with consecutive control and intervention groups. Control patients (09/2012-03/2013) received usual care, whereas intervention patients (05/2013-12/2013) underwent a program to enhance adherence during their stay and up to three months after discharge. The program consisted of therapy simplification and individualized patient education (multi-dimensional component) during the stay and at discharge, as well as subsequent phone calls after discharge (inter-sectoral component). Adherence was measured by the "Medication Adherence Report Scale" (MARS) and the "Drug Attitude Inventory" (DAI). The improvement in the MARS score between admission and three months after discharge was 1.33 points (95% CI: 0.73-1.93) higher in the intervention group compared to controls. In addition, the DAI score improved 1.93 points (95% CI: 1.15-2.72) more for intervention patients. These two findings indicate significantly higher medication adherence following the investigated multi-dimensional and inter-sectoral program. German Clinical Trials Register DRKS00006358.
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung Jun; Park, Ik Kyu; Yoon, Han Young [Thermal-Hydraulic Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jae, Byoung [School of Mechanical Engineering, Chungnam National University, Daejeon (Korea, Republic of)
2017-01-15
Two-fluid equations are widely used to obtain the averaged behavior of two-phase flows. This study addresses a problem that may arise when the two-fluid equations are used for multi-dimensional bubbly flows. If steady drag is the only force accounted for in the interfacial momentum transfer, the disperse-phase velocity should equal the continuous-phase velocity when the flow is fully developed without gravity. However, existing momentum equations may show unphysical results in estimating the relative velocity of the disperse phase against the continuous phase. First, we examine two types of existing momentum equations. One is the standard two-fluid momentum equation, in which the disperse phase is treated as a continuum. The other is the averaged momentum equation derived from solid/fluid particle motion. We show that the existing equations are not proper for multi-dimensional bubbly flows. To resolve the problem mentioned above, we modify the form of the Reynolds stress terms in the averaged momentum equation based on the solid/fluid particle motion. The proposed equation shows physically correct results for both multi-dimensional laminar and turbulent flows.
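The fully developed limit invoked above can be checked with a minimal sketch: with steady drag as the only interfacial force and no gravity, the disperse-phase velocity relaxes exponentially to the continuous-phase velocity, so the relative velocity should vanish. The drag relaxation time below is an assumed illustrative value, not one from the paper.

```python
# Minimal single-bubble illustration of the fully developed limit:
# dv_b/dt = (v_l - v_b) / tau, i.e. steady drag only, no gravity.
tau = 0.01          # s, assumed drag relaxation time
v_liquid = 1.0      # m/s, continuous-phase velocity
v_bubble = 0.0      # m/s, initial disperse-phase velocity

dt, t_end = 1e-4, 0.1
for _ in range(int(t_end / dt)):
    v_bubble += dt * (v_liquid - v_bubble) / tau   # explicit Euler step

print(f"relative velocity after {t_end} s: {abs(v_liquid - v_bubble):.2e} m/s")
```

After ten relaxation times the relative velocity is negligible; a momentum equation that predicts a persistent nonzero relative velocity in this configuration is exhibiting exactly the unphysical behavior the study sets out to correct.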
Multi-dimensional two-phase flow measurements in a large-diameter pipe using wire-mesh sensor
International Nuclear Information System (INIS)
Kanai, Taizo; Furuya, Masahiro; Arai, Takahiro; Shirakawa, Kenetsu; Nishi, Yoshihisa; Ueda, Nobuyuki
2011-01-01
The authors developed a measurement method to determine the multi-dimensionality of two-phase flow. A wire-mesh sensor (WMS) can acquire a void fraction distribution at high temporal and spatial resolution and can also estimate the velocity of a vertically rising flow by investigating the signal time delay of the upstream WMS relative to the downstream one. Previously, one-dimensional velocity was estimated by using the same point of each WMS at a temporal resolution of 1.0-5.0 s. The authors propose to extend this time-series analysis to estimate the multi-dimensional velocity profile via cross-correlation analysis between a point of the upstream WMS and multiple points downstream. Bubbles behave in various ways according to size, which is used to classify them into groups via wavelet analysis before the cross-correlation analysis. This method was verified with air-water straight and swirl flows within a large-diameter vertical pipe. A high-speed camera was used to set the parameters of the cross-correlation analysis. The results revealed that, for rising straight and swirl flows, large bubbles tend to move to the center, while small bubbles are pushed to the outside or sucked into the space where the large bubbles existed. Moreover, it was found that this method can estimate the rotational velocity component of the swirl flow as well as measure the multi-dimensional velocity vector at a high temporal resolution of 0.2 s. (author)
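The time-delay idea behind the two-WMS velocity estimate can be sketched with a cross-correlation between a synthetic upstream signal and its delayed, noisy downstream copy; the sampling rate, sensor spacing, and delay below are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of the two-sensor time-delay estimate: the downstream void signal
# is a delayed copy of the upstream one, and the lag of the cross-correlation
# peak gives the rise velocity.
fs = 1000.0            # sampling rate, Hz (assumed)
spacing = 0.04         # axial distance between the two sensors, m (assumed)
true_delay = 25        # samples, i.e. 0.025 s

upstream = rng.standard_normal(4096)
downstream = np.roll(upstream, true_delay) + 0.1 * rng.standard_normal(4096)

corr = np.correlate(downstream, upstream, mode="full")
lag = corr.argmax() - (len(upstream) - 1)      # delay in samples
velocity = spacing / (lag / fs)
print(f"estimated delay {lag} samples, velocity {velocity:.2f} m/s")
```

Grouping bubbles by size first (as the paper does with wavelet analysis) simply applies this estimate per bubble class, since large and small bubbles travel at different velocities and would otherwise smear the correlation peak.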
Multi-dimensional Analysis Method of Hydrogen Combustion in the Containment of a Nuclear Power Plant
Energy Technology Data Exchange (ETDEWEB)
Kim, Jongtae; Hong, Seongwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Gun Hong [Kyungwon E and C Co., Seongnam (Korea, Republic of)
2014-05-15
The most severe case is the occurrence of detonation, which induces a few-fold greater pressure load on the containment wall than a deflagration flame. The occurrence of a containment-wide global detonation is prohibited by national regulation. The compartments located in the flow path, such as the steam generator compartment, annular compartment, and dome region, are likely to contain highly concentrated hydrogen. If the hydrogen concentration in a compartment is found to be far below a detonation criterion during an accident progression, the occurrence of a detonative explosion in that compartment can be excluded. If it is not, however, it is necessary to evaluate the characteristics of flame acceleration in the containment. The possibility of a flame transition from deflagration to detonation (DDT) can be evaluated from a calculated hydrogen distribution in a compartment by using sigma-lambda criteria. However, this method can give a very conservative result because the geometric characteristics of a real compartment are not well considered. In order to evaluate the containment integrity against the threat of a hydrogen explosion, it is necessary to establish an integrated evaluation system that includes lumped-parameter and detailed analysis methods. In this study, a method for the multi-dimensional analysis of hydrogen combustion is proposed to mechanistically evaluate the flame acceleration characteristics with geometric effects. The geometry of the containment is modeled 3-dimensionally using a CAD tool. To resolve the propagating flame front, an adaptive mesh refinement method is coupled with the combustion analysis solver.
Consumer preference of fertilizer in West Java using multi-dimensional scaling approach
Utami, Hesty Nurul; Sadeli, Agriani Hermita; Perdana, Tomy; Renaldy, Eddy; Mahra Arari, H.; Ajeng Sesy N., P.; Fernianda Rahayu, H.; Ginanjar, Tetep; Sanjaya, Sonny
2018-02-01
There are various fertilizer products in the market for farmers to use in farming activities. Fertilizers supplement soil nutrients and build up soil fertility in order to support plant nutrition and increase plant productivity. Fertilizers consist of nitrogen, phosphorus, potassium, micronutrients and other complex nutrients that are commonly used in agricultural activities to improve the quantity and quality of the harvest. Recently, market demand for fertilizer has increased dramatically; consequently, fertilizer companies are required to develop strategies informed by consumer preferences. Consumer preference reflects the needs of the individual consumer, is measured by the utility of the options the market offers, and informs the final purchase decision. West Java is a province that is one of the main producers of agricultural products and is therefore one of the potential consumers of fertilizer for farming activities. This research is a case study in nine districts in West Java province, i.e., Bandung, West Bandung, Bogor, Depok, Garut, Indramayu, Majalengka, Cirebon and Cianjur. The purpose of this research is to describe the attributes of consumer preference for fertilizers. The multi-dimensional scaling method is used as a quantitative method to help visualize the level of similarity of individual cases in a dataset and to describe and map this information. The attributes in this research are availability, nutrient content, price, form of fertilizer, decomposition speed, ease of use, label, packaging type, color, design and size of packaging, hardening process and promotion. Two fertilizer brands tend to be similar in the availability of products, price, speed of decomposition and hardening process.
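A minimal sketch of classical (Torgerson) multi-dimensional scaling shows how a dissimilarity matrix is turned into a 2-D perceptual map of the kind used in such preference studies; the four "brands" and their dissimilarities below are invented for illustration.

```python
import numpy as np

# Classical (Torgerson) MDS: place items in 2-D so that map distances
# approximate a given dissimilarity matrix. Data are made up.
labels = ["brand A", "brand B", "brand C", "brand D"]
D = np.array([[0.0, 1.0, 4.0, 4.2],
              [1.0, 0.0, 4.1, 4.0],
              [4.0, 4.1, 0.0, 1.1],
              [4.2, 4.0, 1.1, 0.0]])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (D**2) @ J                    # double-centered Gram matrix
eigval, eigvec = np.linalg.eigh(B)
idx = eigval.argsort()[::-1][:2]             # two largest eigenvalues
coords = eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0))

# Similar brands (A, B) should land closer together than dissimilar ones.
d_ab = np.linalg.norm(coords[0] - coords[1])
d_ac = np.linalg.norm(coords[0] - coords[2])
print(f"map distance A-B {d_ab:.2f} < A-C {d_ac:.2f}")
```

The resulting map groups the two brands rated as similar, which is exactly the kind of brand-similarity finding the abstract reports for availability, price, decomposition speed and hardening process.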
Design of a Multi Dimensional Database for the Archimed DataWarehouse.
Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine
2005-01-01
The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relative to the conception of the database of the data warehouse: 1) the granularity of the database, which refers to the level of detail or summarization of data; 2) the database model and architecture, describing how data are presented to end users and how new data are integrated; 3) the life cycle of the database, in order to ensure long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multi-dimensional model have proved to be powerful design tools for integrating data coming from the multiple heterogeneous database systems that are part of the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long-term scalability to the system and resilience to further changes that may occur in the source systems feeding the data warehouse.
Magnetic quantum tunneling: key insights from multi-dimensional high-field EPR.
Lawrence, J; Yang, E-C; Hendrickson, D N; Hill, S
2009-08-21
Multi-dimensional high-field/frequency electron paramagnetic resonance (HFEPR) spectroscopy is performed on single crystals of the high-symmetry spin S = 4 tetranuclear single-molecule magnet (SMM) [Ni(hmp)(dmb)Cl](4), where hmp(-) is the anion of 2-hydroxymethylpyridine and dmb is 3,3-dimethyl-1-butanol. Measurements performed as a function of the applied magnetic field strength and its orientation within the hard plane reveal the four-fold behavior associated with the fourth-order transverse zero-field splitting (ZFS) interaction, (1/2)B44(S+^4 + S-^4), within the framework of a rigid spin approximation (with S = 4). This ZFS interaction mixes the m(s) = +/-4 ground states in second order of perturbation, generating a sizeable (12 MHz) tunnel splitting, which explains the fast magnetic quantum tunneling in this SMM. Meanwhile, multi-frequency measurements performed with the field parallel to the easy axis reveal HFEPR transitions associated with excited spin multiplets (S < 4), providing a measure of the exchange coupling between the spin s = 1 Ni(II) ions within the cluster, as well as a characterization of the ZFS within excited states. The combined experimental studies support recent work indicating that the fourth-order anisotropy associated with the S = 4 state originates from second-order ZFS interactions associated with the individual Ni(II) centers, but only as a result of higher-order processes that occur via S-mixing between the ground state and higher-lying (S < 4) spin multiplets. We argue that this S-mixing plays an important role in the low-temperature quantum dynamics associated with many other well known SMMs.
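The mechanism described above can be illustrated with a toy giant-spin calculation: diagonalizing an S = 4 Hamiltonian with axial anisotropy plus a fourth-order transverse term shows how the m = +/-4 ground doublet acquires a tunnel splitting. The D and B44 values below are assumed for illustration, not the fitted parameters of this SMM.

```python
import numpy as np

# Toy giant-spin model for S = 4: H = -D*Sz^2 + (1/2)*B44*(S+^4 + S-^4).
S = 4
m = np.arange(S, -S - 1, -1)                      # 9 basis states |S, m>
Sz = np.diag(m.astype(float))
# Ladder operator: S+|S,m> = sqrt(S(S+1) - m(m+1)) |S,m+1>
cp = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))
Sp = np.zeros((9, 9))
Sp[np.arange(8), np.arange(1, 9)] = cp
Sm = Sp.T

D, B44 = 0.6, 4e-4                                # cm^-1 (assumed values)
H = -D * Sz @ Sz + 0.5 * B44 * (np.linalg.matrix_power(Sp, 4)
                                + np.linalg.matrix_power(Sm, 4))
E = np.linalg.eigvalsh(H)
tunnel_splitting = E[1] - E[0]                    # splitting of m = +/-4 doublet
print(f"ground-doublet tunnel splitting: {tunnel_splitting:.2e} cm^-1")
```

The transverse term connects m = -4 to m = +4 only through the m = 0 intermediate state, i.e. in second order of perturbation, so with these illustrative parameters the splitting comes out of order 1e-4 cm^-1 (a few MHz), the same order of magnitude as the 12 MHz value quoted above.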
Installation of aerosol behavior model into multi-dimensional thermal hydraulic analysis code AQUA
International Nuclear Information System (INIS)
Kisohara, Naoyuki; Yamaguchi, Akira
1997-12-01
The safety analysis of an FBR plant system for sodium leak phenomena needs to evaluate the deposition of aerosol particles on the components in the plant, the chemical reaction of the aerosol with humidity in the air, and the transfer of combustion heat through the aerosol to the structural components. For this purpose, the ABC-INTG (Aerosol Behavior in Containment-INTeGrated Version) code has been developed and used until now. This code calculates aerosol behavior in a gas region of uniform temperature and pressure with a 1-cell model. More detailed calculation of aerosol behavior, however, required the installation of the aerosol model into the multi-cell thermal hydraulic analysis code AQUA. AQUA can calculate the carrier gas flow, the temperature, and the distribution of the aerosol spatial concentration. On the other hand, ABC-INTG can calculate the generation, agglomeration, and deposition to the wall and floor of aerosol particles, and compute the distribution of the aerosol particle size. Thus, the combination of these two codes makes it possible to treat an aerosol model coupling the distribution of the aerosol spatial concentration with that of the aerosol particle size. This report describes the aerosol behavior model, how it was installed into AQUA, and the new subroutines added to the code. Furthermore, test calculations with a simple structural model were executed with this code, and appropriate results were obtained. This code thus has the prospect of predicting aerosol behavior through coupled analysis with multi-dimensional gas thermo-dynamics for sodium combustion evaluation. (J.P.N.)
A revised Thai Multi-Dimensional Scale of Perceived Social Support.
Wongpakaran, Nahathai; Wongpakaran, Tinakon
2012-11-01
In order to ensure the construct validity of the three-factor model of the Multi-dimensional Scale of Perceived Social Support (MSPSS), and based on the assumption that it helps users differentiate between sources of social support, in this study a revised version was created and tested. The aim was to compare the level of model fit of the original version of the MSPSS against the revised version, which contains a minor change from the original. The study was conducted on 486 medical students who completed the original and revised versions of the MSPSS, as well as the Rosenberg Self-Esteem Scale (Rosenberg, 1965) and the Beck Depression Inventory II (Beck, Steer, & Brown, 1996). Confirmatory factor analysis was performed to compare the results. The revised version of the MSPSS demonstrated good internal consistency, with a Cronbach's alpha of .92 for the whole questionnaire, and a significant correlation with the other scales, as predicted. The revised version provided better internal consistency, increasing the Cronbach's alpha for the Significant Others sub-scale from 0.86 to 0.92. Confirmatory factor analysis revealed an acceptable model fit: chi2 128.11, df 51, p < .001; TLI 0.94; CFI 0.95; GFI 0.90; PNFI 0.71; AGFI 0.85; RMSEA 0.093 (0.073-0.113) and SRMR 0.042, which is better than the original version. The new version tended to display a better level of fit with a larger sample size. The limitations of the study are discussed, as well as recommendations for further study.
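Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from item scores; the sketch below uses simulated data (4 items, 200 respondents) rather than the study's sample.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated item scores: 4 items driven by one latent trait plus item noise,
# so the items are positively intercorrelated (as subscale items should be).
latent = rng.standard_normal(200)
items = latent[:, None] + 0.6 * rng.standard_normal((200, 4))

# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / total variance).
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Alpha rises when items share variance through the latent trait; weakening the trait loading (e.g. increasing the 0.6 noise scale) drives it down, which is why a wording revision that sharpens a subscale can lift alpha as reported for the Significant Others sub-scale.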
Directory of Open Access Journals (Sweden)
Duangporn Prasertsubpakij
2012-07-01
Full Text Available Metro systems act as fast and efficient transport systems for many modern metropolises; however, enhancing usage of such systems often conflicts with providing suitable accessibility options. The traditional approach of metro accessibility studies seems to be an ineffective measure for gauging sustainable access in which the equal rights of all users are taken into account. Bangkok Metropolitan Region (BMR) transportation has increasingly relied on two mass rapid transit systems publicly called the “BTS Skytrain” and the “MRT Subway”, owing to the limited availability of land and massive road congestion; however, access to such transit arguably treats some vulnerable groups, especially women, the elderly and disabled people, unfairly. This study constructs a multi-dimensional assessment of accessibility considerations to scrutinize how user groups access metro services, based on the BMR empirical case. Six hundred individual passengers at various stations were asked to rate a questionnaire that simultaneously considers the spatial, feeder-connectivity, temporal, comfort/safety, psychosocial and other dimensions of accessibility. Interestingly, the user-disaggregated accessibility model found that the lower the accessibility perceptions, related to uncomfortable and unsafe environmental conditions, the greater the equitable access to services, as illustrated by the MRT Hua Lumphong and MRT Petchaburi stations. The study suggests that, to balance the access priorities of groups on services, policy actions should emphasize acceptably safe access for individuals, cost-efficient feeder services connecting the metro lines, socioeconomic influences and time allocation. Insightful discussions on an integrated approach balancing different dimensions of accessibility, together with the recommendations, would contribute to accessibility-based knowledge and the propensity to use public transit, towards transport sustainability.
Studying Operation Rules of Cascade Reservoirs Based on Multi-Dimensional Dynamic Programming
Directory of Open Access Journals (Sweden)
Zhiqiang Jiang
2017-12-01
Full Text Available Although many optimization models and methods are applied to the optimization of reservoir operation at present, the optimal operation decision that is made through these models and methods is just a retrospective review. Due to the limitation of hydrological prediction accuracy, it is practical and feasible to obtain a suboptimal or satisfactory solution through established operation rules in actual reservoir operation, especially for mid- and long-term operation. In order to obtain optimized sample data with global optimality, and to make the extracted operation rules more reasonable and reliable, this paper presents the multi-dimensional dynamic programming model of the optimal joint operation of cascade reservoirs and provides the corresponding recursive equation and the specific solving steps. Taking the Li Xianjiang cascade reservoirs as a case study, seven uncertain problems in the whole operation period of the cascade reservoirs are summarized after a detailed analysis of the obtained optimal sample data, and two sub-models are put forward to solve these uncertain problems. Finally, by dividing the whole operation period into four characteristic sections, this paper extracts the operation rules of each reservoir for each section respectively. When the simulation results of the extracted operation rules are compared with those of the conventional joint operation method, the results indicate that the power generation of the obtained rules shows a certain degree of improvement both in inspection years and in typical years (i.e., wet year, normal year and dry year). So, the rationality and effectiveness of the extracted operation rules are verified by the comparative analysis.
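The dynamic programming recursion underlying such a joint-operation model can be illustrated on a single reservoir. The inflows, storage grid, and benefit function below are invented for illustration; the multi-dimensional model in the paper applies the same backward Bellman recursion to the joint storage state of all cascade reservoirs:

```python
import numpy as np

inflow = [2, 3, 1]                      # illustrative inflows per period
levels = np.arange(0, 7)                # discretized storage states

def benefit(release):                   # concave stand-in for power output
    return np.sqrt(release)

T = len(inflow)
value = np.zeros((T + 1, len(levels)))  # terminal value function = 0
policy = np.zeros((T, len(levels)), dtype=int)

for t in range(T - 1, -1, -1):          # backward Bellman recursion
    for i, s in enumerate(levels):
        best = -np.inf
        for j, s_next in enumerate(levels):
            release = s + inflow[t] - s_next    # water balance
            if release < 0:
                continue                        # infeasible transition
            v = benefit(release) + value[t + 1, j]
            if v > best:
                best = v
                policy[t, i] = j
        value[t, i] = best
```

The `policy` table is the raw material from which stage-wise operation rules could then be extracted; with several reservoirs the state `s` becomes a vector and the state space grows multiplicatively, which is the "multi-dimensional" aspect.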
The use of multi-dimensional flow and morphodynamic models for restoration design analysis
McDonald, R.; Nelson, J. M.
2013-12-01
River restoration projects with the goal of restoring a wide range of morphologic and ecologic channel processes and functions have become common. The complex interactions between flow and sediment-transport make it challenging to design river channels that are both self-sustaining and improve ecosystem function. The relative immaturity of the field of river restoration and shortcomings in existing methodologies for evaluating channel designs contribute to this problem, often leading to project failures. The call for increased monitoring of constructed channels to evaluate which restoration techniques do and do not work is ubiquitous and may lead to improved channel restoration projects. However, an alternative approach is to detect project flaws before the channels are built by using numerical models to simulate hydraulic and sediment-transport processes and habitat in the proposed channel (Restoration Design Analysis). Multi-dimensional models provide spatially distributed quantities throughout the project domain that may be used to quantitatively evaluate restoration designs for such important metrics as (1) the change in water-surface elevation which can affect the extent and duration of floodplain reconnection, (2) sediment-transport and morphologic change which can affect the channel stability and long-term maintenance of the design; and (3) habitat changes. These models also provide an efficient way to evaluate such quantities over a range of appropriate discharges including low-probability events which often prove the greatest risk to the long-term stability of restored channels. Currently there are many free and open-source modeling frameworks available for such analysis including iRIC, Delft3D, and TELEMAC. In this presentation we give examples of Restoration Design Analysis for each of the metrics above from projects on the Russian River, CA and the Kootenai River, ID. These examples demonstrate how detailed Restoration Design Analysis can be used to
DEFF Research Database (Denmark)
Sørensen, John Aasted
2010-01-01
The introduction of the mathematics needed for analysis, design and verification of discrete systems, including applications within programming languages for computer systems. Course sessions and project work. Semester: Spring 2010. Extent: 5 ECTS. Class size: 18.
DEFF Research Database (Denmark)
Sørensen, John Aasted
2010-01-01
The introduction of the mathematics needed for analysis, design and verification of discrete systems, including applications within programming languages for computer systems. Course sessions and project work. Semester: Autumn 2010. Extent: 5 ECTS. Class size: 15.
Ogle, K.; Fell, M.; Barber, J. J.
2016-12-01
Empirical, field studies of plant functional traits have revealed important trade-offs among pairs or triplets of traits, such as the leaf (LES) and wood (WES) economics spectra. Trade-offs include correlations between leaf longevity (LL) vs specific leaf area (SLA), LL vs mass-specific leaf respiration rate (RmL), SLA vs RmL, and resistance to breakage vs wood density. Ordination analyses (e.g., PCA) show groupings of traits that tend to align with different life-history strategies or taxonomic groups. It is unclear, however, what underlies such trade-offs and emergent spectra. Do they arise from inherent physiological constraints on growth, or are they more reflective of environmental filtering? The relative importance of these mechanisms has implications for predicting biogeochemical cycling, which is influenced by trait distributions of the plant community. We address this question using an individual-based model of tree growth (ACGCA) to quantify the theoretical trait space of trees that emerges from physiological constraints. ACGCA's inputs include 32 physiological, anatomical, and allometric traits, many of which are related to the LES and WES. We fit ACGCA to 1.6 million USFS FIA observations of tree diameters and heights to obtain vectors of trait values that produce realistic growth, and we explored the structure of this trait space. No notable correlations emerged among the 496 trait pairs, but stepwise regressions revealed complicated multi-variate structure: e.g., relationships between pairs of traits (e.g., RmL and SLA) are governed by other traits (e.g., LL, radiation-use efficiency [RUE]). We also simulated growth under various canopy gap scenarios that impose varying degrees of environmental filtering to explore the multi-dimensional trait space (hypervolume) of trees that died vs survived. The centroid and volume of the hypervolumes differed among dead and live trees, especially under gap conditions leading to low mortality. Traits most predictive
Gong, Wuming; Koyano-Nakagawa, Naoko; Li, Tongbin; Garry, Daniel J
2015-03-07
-CM transitions. We report a novel method to systematically integrate multi-dimensional -omics data and reconstruct the gene regulatory networks. This method will allow one to rapidly determine the cis-modules that regulate key genes during cardiac differentiation.
Caltagirone, Jean-Paul
2014-01-01
This book presents the fundamental principles of mechanics to re-establish the equations of Discrete Mechanics. It introduces the physics and thermodynamics associated with physical modeling. The development and the complementarity of the sciences lead us today to revisit the old concepts that were the basis for the development of continuum mechanics. Differential geometry is used to review the conservation laws of mechanics. For instance, this formalism requires a different location of vector and scalar quantities in space. The equations of Discrete Mechanics form a system of equations where the H
International Nuclear Information System (INIS)
Lee, T.D.
1985-01-01
This paper reviews the role of time throughout all phases of mechanics: classical mechanics, non-relativistic quantum mechanics, and relativistic quantum theory. As an example of relativistic quantum field theory, the case of a massless scalar field interacting with an arbitrary external current is discussed. The comparison between the new discrete theory and the usual continuum formalism is presented. An example is given of a two-dimensional random lattice and its dual. The author notes that there is no evidence that the discrete mechanics is more appropriate than the usual continuum mechanics.
Vincent Casseau; Daniel E. R. Espinoza; Thomas J. Scanlon; Richard E. Brown
2016-01-01
hy2Foam is a newly-coded open-source two-temperature computational fluid dynamics (CFD) solver that has previously been validated for zero-dimensional test cases. It aims at (1) giving open-source access to a state-of-the-art hypersonic CFD solver to students and researchers; and (2) providing a foundation for a future hybrid CFD-DSMC (direct simulation Monte Carlo) code within the OpenFOAM framework. This paper focuses on the multi-dimensional verification of hy2Foam and firstly describes th...
Czech Academy of Sciences Publication Activity Database
Král, Radomil; Náprstek, Jiří
2017-01-01
Roč. 113, November (2017), s. 54-75 ISSN 0965-9978 R&D Projects: GA ČR(CZ) GP14-34467P; GA ČR(CZ) GA15-01035S Institutional support: RVO:68378297 Keywords: Fokker-Planck equation * finite element method * simplex element * multi-dimensional problem * non-symmetric operator Subject RIV: JM - Building Engineering OBOR OECD: Mechanical engineering Impact factor: 3.000, year: 2016 https://www.sciencedirect.com/science/article/pii/S0965997817301904
International Nuclear Information System (INIS)
Bathke, C.
1978-03-01
A description is presented of a general algorithm for locating the extremum of a multi-dimensional constrained function. The algorithm employs a series of techniques dominated by random shrinkage, steepest descent, and adaptive creeping. A discussion follows of the algorithm's application to a 'real world' problem, namely the optimization of the price of electricity, P/sub eh/, from a hybrid fusion-fission reactor. On the basis of comparisons with other optimization schemes of a survey nature, the algorithm is concluded to yield a good approximation to the location of a function's optimum.
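The "random shrinkage" phase of such an algorithm can be sketched minimally: sample uniformly inside a box and periodically shrink the box around the incumbent best point. The steepest-descent and adaptive-creeping phases of the actual algorithm are omitted here, and the objective is a toy function rather than the electricity-price model:

```python
import random

def random_shrink_minimize(f, bounds, iters=2000, seed=1):
    """Toy box-constrained minimizer: uniform random sampling with
    periodic shrinkage of the search box around the best point found."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x = [(a + b) / 2.0 for a, b in zip(lo, hi)]
    best_f = f(best_x)
    for k in range(1, iters + 1):
        x = [rng.uniform(a, b) for a, b in zip(lo, hi)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
        if k % 200 == 0:            # shrink the box around the incumbent
            for d in range(len(bounds)):
                w = 0.25 * (hi[d] - lo[d])
                lo[d] = max(bounds[d][0], best_x[d] - w)
                hi[d] = min(bounds[d][1], best_x[d] + w)
    return best_x, best_f

best_x, best_f = random_shrink_minimize(lambda x: x[0] ** 2 + x[1] ** 2,
                                        [(-5.0, 5.0), (-5.0, 5.0)])
```

Each shrinkage halves the box width, so the later samples concentrate near the best point found so far; a production algorithm would interleave local descent steps, as the abstract describes.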
DEFF Research Database (Denmark)
Yankov, Metodi Plamenov; Forchhammer, Søren; Larsen, Knud J.
2014-01-01
In this work we study the properties of the optimal Probability Mass Function (PMF) of a discrete input to a general Multiple Input Multiple Output (MIMO) channel. We prove that when the input constellation is constructed as a Cartesian product of 1-dimensional constellations, the optimal PMF factorizes into the product of the marginal 1D PMFs. This confirms the conjecture made in [1], which allows for optimizing the input PMF efficiently when the rank of the MIMO channel grows. The proof is built upon the iterative Blahut-Arimoto algorithm. We show that if the initial PMF is factorized, the PMF
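The Blahut-Arimoto iteration on which the proof is built can be sketched for a scalar discrete memoryless channel, a simplification of the MIMO setting in the paper:

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Capacity (in nats) and capacity-achieving input PMF of a discrete
    memoryless channel with transition matrix W[x, y] = P(y | x)."""
    m = W.shape[0]
    p = np.full(m, 1.0 / m)                        # start from the uniform PMF
    for _ in range(iters):
        q = p @ W                                  # induced output distribution
        logs = np.log(np.where(W > 0, W / q, 1.0))  # safe log-ratio (0 where W=0)
        d = np.sum(W * logs, axis=1)               # D(W(.|x) || q) per input x
        p = p * np.exp(d)                          # multiplicative update
        p = p / p.sum()
    q = p @ W
    logs = np.log(np.where(W > 0, W / q, 1.0))
    cap = float(p @ np.sum(W * logs, axis=1))      # mutual information at p
    return p, cap

# Binary symmetric channel, crossover 0.1: capacity = ln 2 - H(0.1) nats
W = np.array([[0.9, 0.1], [0.1, 0.9]])
p_opt, cap = blahut_arimoto(W)
```

For this symmetric channel the uniform PMF is already the fixed point, so the iteration leaves it unchanged; the factorization result in the paper concerns how this update behaves on product constellations.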
The multi-dimensional module of CATHARE 2 description and application
Energy Technology Data Exchange (ETDEWEB)
Barre, F.; Dor, I.; Sun, C. [French Atomic Energy Commission (C.E.A.), Grenoble (France)
1995-09-01
In this paper, the three-dimensional module of CATHARE 2 is presented. It is based on a two-phase-flow six-equation model. A predictor/corrector multistep method, with an implicit behavior, is used to discretize the equations. Blowdown and boil-off analytical tests are used for an initial validation of the module. UPTF downcomer refill tests simulating the refill phase of a large-break loss-of-coolant accident are calculated. Additional models, including molecular and turbulent diffusion, are added in order to perform containment calculations.
The multi-dimensional module of CATHARE 2 description and application
International Nuclear Information System (INIS)
Barre, F.; Dor, I.; Sun, C.
1995-01-01
In this paper, the three-dimensional module of CATHARE 2 is presented. It is based on a two-phase-flow six-equation model. A predictor/corrector multistep method, with an implicit behavior, is used to discretize the equations. Blowdown and boil-off analytical tests are used for an initial validation of the module. UPTF downcomer refill tests simulating the refill phase of a large-break loss-of-coolant accident are calculated. Additional models, including molecular and turbulent diffusion, are added in order to perform containment calculations.
Energy Technology Data Exchange (ETDEWEB)
Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten [RWTH Aachen University, Chair for Nonlinear Dynamics, Steinbachstr. 15, 52047 Aachen (Germany); Gebhardt, Sascha [RWTH Aachen University, Virtual Reality Group, IT Center, Seffenter Weg 23, 52074 Aachen (Germany); Kuhlen, Torsten [Forschungszentrum Jülich GmbH, Institute for Advanced Simulation (IAS), Jülich Supercomputing Centre (JSC), Wilhelm-Johnen-Straße, 52425 Jülich (Germany); Schulz, Wolfgang [Fraunhofer, ILT Laser Technology, Steinbachstr. 15, 52047 Aachen (Germany)
2016-06-08
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help identify the most influential parameters, quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model with an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide two global sensitivity measures: (i) the Elementary Effect for screening the parameters, and (ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial case study whose goal is the optimization of a drilling process using a Gaussian laser beam.
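Of the two global sensitivity measures mentioned, Elementary Effect screening is simple to sketch. The toy model below stands in for the metamodel, and all names and values are illustrative (a simplified one-at-a-time variant rather than the full Morris trajectory design):

```python
import numpy as np

def elementary_effects(f, n_params, delta=0.1, reps=30, seed=0):
    """Morris-style screening on the unit hypercube: the mean absolute
    elementary effect ranks parameters by influence on the output."""
    rng = np.random.default_rng(seed)
    ee = [[] for _ in range(n_params)]
    for _ in range(reps):
        x = rng.uniform(0.0, 1.0 - delta, n_params)  # random base point
        fx = f(x)
        for i in range(n_params):
            x2 = x.copy()
            x2[i] += delta                           # one-at-a-time step
            ee[i].append((f(x2) - fx) / delta)
    return np.array([np.mean(np.abs(e)) for e in ee])

# Toy stand-in for a metamodel: x0 is 10x as influential as x1, x2 is inert
mu_star = elementary_effects(lambda x: 10.0 * x[0] + x[1], 3)
```

For this linear toy model the screening recovers the coefficients exactly; for a real metamodel the mean absolute effect gives a cheap ranking before the more expensive Sobol variance decomposition is applied.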
Directory of Open Access Journals (Sweden)
Lockwood William W
2010-05-01
Full Text Available Abstract Background Genomics has substantially changed our approach to cancer research. Gene expression profiling, for example, has been utilized to delineate subtypes of cancer, and has facilitated the derivation of predictive and prognostic signatures. The emergence of technologies for the high-resolution, genome-wide description of genetic and epigenetic features has enabled the identification of a multitude of causal DNA events in tumors. This has afforded the potential for large-scale integration of genome and transcriptome data generated from a variety of technology platforms to acquire a better understanding of cancer. Results Here we show how multi-dimensional genomics data analysis enables the deciphering of mechanisms that disrupt regulatory/signaling cascades and their downstream effects. Since not all gene expression changes observed in a tumor are causal to cancer development, we demonstrate an approach based on multiple concerted disruption (MCD) analysis of genes that facilitates the rational deduction of aberrant genes and pathways, which would otherwise be overlooked in single-genomic-dimension investigations. Conclusions Notably, this is the first comprehensive study of breast cancer cells by parallel integrative genome-wide analyses of DNA copy number, LOH, and DNA methylation status to interpret changes in gene expression pattern. Our findings demonstrate the power of a multi-dimensional approach to elucidate events which would escape conventional single-dimensional analysis and, as such, reduce the cohort sample size for cancer gene discovery.
Assessment of multi-dimensional analysis capacity of the MARS using the OECD-SETH PANDA tests
International Nuclear Information System (INIS)
Bae, S. W.; Jung, J. J.; Jung, B. D.
2004-01-01
The objectives of the OECD/NEA-PANDA tests are to validate and assess computer codes that analyze non-condensable gas concentrations and mixing phenomena in a reactor containment building. The main issue, in particular, is the multi-dimensional analysis capability involved in the mixing of non-condensable gases, i.e., hydrogen. The main tests consist of a superheated steam flow injected into a large vessel initially filled with air or air/helium mixtures; the temperature and concentration of the non-condensable gases are then measured. A pre-test calculation of the PANDA tests has been performed with MARS, even though MARS is not a containment analysis code. Three of the 25 PANDA tests were selected and modeled to simulate the jet plumes and air mixing in a large vessel. The large vessel is 4 m in diameter and 8 m in height. For the calculation, the cylindrical vessel was simplified to a rectangular geometry. It is revealed that the MARS code has the capability to distinguish the multi-dimensional distribution of the velocity and temperature fields.
Urban agriculture: multi-dimensional tools for social development in poor neighbourhoods
Directory of Open Access Journals (Sweden)
E. Duchemin
2009-01-01
Full Text Available For over 30 years, different urban agriculture (UA) experiments have been undertaken in Montreal (Quebec, Canada). The Community Gardening Program, managed by the City, and 6 collective gardens, managed by community organizations, are discussed in this article. These experiments have different objectives, including food security, socialization and education. Although these have changed over time, they have also differed depending on geographic location (neighbourhood). The UA initiatives in Montreal have resulted in the development of a centre with significant vegetable production and a socialization and education environment that fosters individual and collective social development in districts with a significant economically disadvantaged population. The various approaches attain the established objectives, and these are multi-dimensional tools used for the social development of disadvantaged populations. For more than 30 years, various UA experiments have been attempted in Montreal (Quebec, Canada). The community garden program, managed by the City, and 6 collective gardens, managed by community organizations, are examined in this article. These experiments pursue different objectives: increasing food security, socializing, educating, and so on. The objectives evolve over time but also vary from neighbourhood to neighbourhood. Our study reveals that the UA initiatives in Montreal are a site of significant vegetable production, a space for socializing and a place of education fostering the individual and collective social development of neighbourhoods with a strong presence of economically disadvantaged people. The different approaches attain the identified objectives and permit the development of multi-faceted tools fostering the social development of disadvantaged populations. For more than 30 years, various experiments related to the
International Nuclear Information System (INIS)
Radice, David; Abdikamalov, Ernazar; Rezzolla, Luciano; Ott, Christian D.
2013-01-01
Recent work by McClarren and Hauck (2010) [31] suggests that the filtered spherical harmonics method represents an efficient, robust, and accurate method for radiation transport, at least in the two-dimensional (2D) case. We extend their work to the three-dimensional (3D) case and find that all of the advantages of the filtering approach identified in 2D are present also in the 3D case. We reformulate the filter operation in a way that is independent of the timestep and of the spatial discretization. We also explore different second- and fourth-order filters and find that the second-order ones yield significantly better results. Overall, our findings suggest that the filtered spherical harmonics approach represents a very promising method for 3D radiation transport calculations
Energy Technology Data Exchange (ETDEWEB)
Radice, David, E-mail: david.radice@aei.mpg.de [Max Planck Institute für Gravitationsphysik, Albert Einstein Institute, Potsdam (Germany); Abdikamalov, Ernazar [TAPIR, California Institute of Technology, Pasadena, CA (United States); Rezzolla, Luciano [Max Planck Institute für Gravitationsphysik, Albert Einstein Institute, Potsdam (Germany); Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA (United States); Ott, Christian D. [TAPIR, California Institute of Technology, Pasadena, CA (United States)
2013-06-01
Recent work by McClarren and Hauck (2010) [31] suggests that the filtered spherical harmonics method represents an efficient, robust, and accurate method for radiation transport, at least in the two-dimensional (2D) case. We extend their work to the three-dimensional (3D) case and find that all of the advantages of the filtering approach identified in 2D are present also in the 3D case. We reformulate the filter operation in a way that is independent of the timestep and of the spatial discretization. We also explore different second- and fourth-order filters and find that the second-order ones yield significantly better results. Overall, our findings suggest that the filtered spherical harmonics approach represents a very promising method for 3D radiation transport calculations.
Parker, R Gary
1988-01-01
This book treats the fundamental issues and algorithmic strategies emerging as the core of the discipline of discrete optimization in a comprehensive and rigorous fashion. Following an introductory chapter on computational complexity, the basic algorithmic results for the two major models of polynomial algorithms are introduced--models using matroids and linear programming. Further chapters treat the major non-polynomial algorithms: branch-and-bound and cutting planes. The text concludes with a chapter on heuristic algorithms. Several appendixes are included which review the fundamental ideas o
Chen, D. M.; Clapp, R. G.; Biondi, B.
2006-12-01
Ricksep is a freely-available interactive viewer for multi-dimensional data sets. The viewer is very useful for simultaneous display of multiple data sets from different viewing angles, animation of movement along a path through the data space, and selection of local regions for data processing and information extraction. Several new viewing features are added to enhance the program's functionality in the following three aspects. First, two new data synthesis algorithms are created to adaptively combine information from a data set with mostly high-frequency content, such as seismic data, and another data set with mainly low-frequency content, such as velocity data. Using the algorithms, these two data sets can be synthesized into a single data set which resembles the high-frequency data set on a local scale and at the same time resembles the low- frequency data set on a larger scale. As a result, the originally separated high and low-frequency details can now be more accurately and conveniently studied together. Second, a projection algorithm is developed to display paths through the data space. Paths are geophysically important because they represent wells into the ground. Two difficulties often associated with tracking paths are that they normally cannot be seen clearly inside multi-dimensional spaces and depth information is lost along the direction of projection when ordinary projection techniques are used. The new algorithm projects samples along the path in three orthogonal directions and effectively restores important depth information by using variable projection parameters which are functions of the distance away from the path. Multiple paths in the data space can be generated using different character symbols as positional markers, and users can easily create, modify, and view paths in real time. Third, a viewing history list is implemented which enables Ricksep's users to create, edit and save a recipe for the sequence of viewing states. Then, the recipe
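The high/low-frequency synthesis idea described above can be sketched with a simple moving-average split; the actual Ricksep algorithms are adaptive and multi-dimensional, so this one-dimensional version (with invented signals) is only illustrative:

```python
import numpy as np

def synthesize(high, low, window=15):
    """Merge two records: keep the local (high-frequency) detail of `high`
    and the large-scale (low-frequency) trend of `low` by swapping their
    moving-average components."""
    kernel = np.ones(window) / window
    smooth = lambda s: np.convolve(s, kernel, mode="same")
    return (high - smooth(high)) + smooth(low)

high = np.tile([1.0, -1.0], 50)    # fine-scale oscillation around zero
low = np.full(100, 5.0)            # large-scale level (e.g. a velocity trend)
merged = synthesize(high, low)
```

Locally `merged` oscillates like `high`, while its running mean follows `low`, which is the qualitative behavior the viewer's synthesis feature aims for when combining, say, seismic detail with a velocity model.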
International Nuclear Information System (INIS)
Bogolubov, Nikolai N. Jr.; Prykarpatsky, Anatoliy K.
2006-12-01
The differential-geometric aspects of generalized de Rham-Hodge complexes naturally related to integrable multi-dimensional differential systems of M. Gromov type, as well as the geometric structure of Chern characteristic classes, are studied. Special differential invariants of the Chern type are constructed, and their importance for the integrability of multi-dimensional nonlinear differential systems on Riemannian manifolds is discussed. As an example, the three-dimensional Davey-Stewartson type nonlinear strongly integrable differential system is considered, and its Cartan type connection mapping and related Chern type differential invariants are analyzed. (author)
Discrete gradients in discrete classical mechanics
International Nuclear Information System (INIS)
Renna, L.
1987-01-01
A simple model of discrete classical mechanics is given where, starting from the continuous Hamilton equations, discrete equations of motion are established together with a proper discrete gradient definition. The conservation laws of the total discrete momentum, angular momentum, and energy are demonstrated
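A toy instance of such an energy-conserving discrete scheme (the paper's model and discrete gradient definition are more general): for a quadratic Hamiltonian the mean-value discrete gradient reduces to the implicit midpoint rule, whose discrete energy is conserved exactly.

```python
import numpy as np

def step(q, p, h):
    """One step for the harmonic oscillator H = (p**2 + q**2) / 2 via the
    mean-value discrete gradient, i.e. the implicit linear system
        (q1 - q)/h =  (p1 + p)/2
        (p1 - p)/h = -(q1 + q)/2
    """
    A = np.array([[1.0, -h / 2.0], [h / 2.0, 1.0]])
    b = np.array([q + h * p / 2.0, p - h * q / 2.0])
    q1, p1 = np.linalg.solve(A, b)
    return q1, p1

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = step(q, p, 0.1)
energy = (q * q + p * p) / 2.0     # conserved up to roundoff
```

The update matrix is a Cayley transform of a skew-symmetric matrix, hence orthogonal, so q² + p² is preserved to machine precision over arbitrarily many steps; this mirrors the discrete conservation laws demonstrated in the paper.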
International Nuclear Information System (INIS)
Lu, X; Tervola, P; Viljanen, M
2005-01-01
This paper provides an efficient analytical tool for solving the heat conduction equation in a multi-dimensional composite slab subject to generally time-dependent boundary conditions. A temporal Laplace transformation and novel separation of variables are applied to the heat equation. The time-dependent boundary conditions are approximated with Fourier series. Taking advantage of the periodic properties of Fourier series, the corresponding analytical solution is obtained and expressed explicitly through employing variable transformations. For such conduction problems, nearly all the published works necessitate numerical work such as computing residues or searching for eigenvalues even for a one-dimensional composite slab. In this paper, the proposed method involves no numerical iteration. The final closed form solution is straightforward; hence, the physical parameters are clearly shown in the formula. The accuracy of the developed analytical method is demonstrated by comparison with numerical calculations
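The Fourier-series approximation of time-dependent boundary conditions mentioned above can be sketched with an FFT-based truncation. This is illustrative only; the paper pairs such an approximation with a Laplace transform and separation of variables to solve the slab problem in closed form:

```python
import numpy as np

def fourier_approx(samples, n_terms):
    """Truncated Fourier-series approximation of a periodic record of
    boundary temperatures (uniformly sampled over one period)."""
    c = np.fft.rfft(samples)
    c[n_terms + 1:] = 0.0          # keep the mean and first n_terms harmonics
    return np.fft.irfft(c, len(samples))

t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
record = 2.0 + np.cos(t)           # a boundary temperature with one harmonic
approx = fourier_approx(record, 1)
```

A record containing only one harmonic is reproduced exactly by a one-term truncation; a real boundary history would need more harmonics, each of which enters the analytical solution as a separate periodic forcing term.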
Energy Technology Data Exchange (ETDEWEB)
Lee, Won Jae; Chung, Bub Dong; Jeong, Jae Jun; Ha, Kwi Seok [Korea Atomic Energy Research Institute, Taejon (Korea)
1998-06-01
A multi-dimensional realistic thermal-hydraulic system analysis code, MARS version 1.3, has been developed. The main purpose of MARS 1.3 development is to provide a realistic analysis capability for the transient two-phase thermal-hydraulics of Pressurized Water Reactors (PWRs), especially during Large Break Loss of Coolant Accidents (LBLOCAs), where multi-dimensional phenomena dominate the transients. The MARS code is a unified version of the USNRC-developed COBRA-TF, a three-dimensional (3D) reactor vessel analysis code, and RELAP5/MOD3.2.1.2, a one-dimensional (1D) reactor system analysis code. The developmental requirements for MARS were chosen not only to best utilize the existing capability of the codes but also to enhance code maintenance, user accessibility, user friendliness, code portability, code readability, and code flexibility. To maintain the existing capability of the codes and to enhance code maintenance, user accessibility and user friendliness, MARS has been unified into a single code consisting of a 1D module (RELAP5) and a 3D module (COBRA-TF). This is realized by implicitly integrating the system pressure matrix equations of the hydrodynamic models and solving them simultaneously, by modifying the 1D/3D calculation sequence to be operable under a single Central Processor Unit (CPU), and by unifying the input structure and the light water property routines of both modules. In addition, the code structure of the 1D module has been completely restructured using the modular data structure of standard FORTRAN 90, which greatly improves the code maintenance capability, readability and portability. For code flexibility, a dynamic memory management scheme is applied in both modules. MARS 1.3 now runs on PC/Windows and HP/UNIX platforms having a single CPU, and users have the option to select the 3D module to model the 3D thermal-hydraulics in the reactor vessel or other
Assessment of the RELAP5 multi-dimensional component model using data from LOFT test L2-5
International Nuclear Information System (INIS)
Davis, C.B.
1998-01-01
The capability of the RELAP5-3D computer code to perform multi-dimensional analysis of a pressurized water reactor (PWR) was assessed using data from the LOFT L2-5 experiment. The LOFT facility was a 50 MW PWR that was designed to simulate the response of a commercial PWR during a loss-of-coolant accident. Test L2-5 simulated a 200% double-ended cold leg break with an immediate primary coolant pump trip. A three-dimensional model of the LOFT reactor vessel was developed. Calculations of the LOFT L2-5 experiment were performed using the RELAP5-3D Version BF02 computer code. The calculated thermal-hydraulic responses of the LOFT primary and secondary coolant systems were generally in reasonable agreement with the test. The calculated results were also generally as good as or better than those obtained previously with RELAP/MOD3
International Nuclear Information System (INIS)
Kwiatkowski, Witek; Riek, Roland
2003-01-01
The paper presents an alternative technique for chemical shift monitoring in a multi-dimensional NMR experiment. The monitored chemical shift is coded in the line-shape of a cross-peak through an apparent residual scalar coupling active during an established evolution period or acquisition. The size of the apparent scalar coupling is manipulated with an off-resonance radio-frequency pulse in order to correlate the size of the coupling with the position of the additional chemical shift. The strength of this concept is that chemical shift information is added without an additional evolution period and accompanying polarization transfer periods. This concept was incorporated into the three-dimensional triple-resonance experiment HNCA, adding the information of 1Hα chemical shifts. The experiment is called HNCA coded HA, since the chemical shift of 1Hα is coded in the line-shape of the cross-peak along the 13Cα dimension
Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.
2017-08-01
We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ˜50 Myr to ˜4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.
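In a one-dimensional stellar evolution code, extra-mixing of this kind enters as a depth-dependent diffusion coefficient acting on the lithium profile. The sketch below is a generic illustration of that coupling, not the authors' calibrated coefficient: a conservative explicit diffusion step whose coefficient decays exponentially below an assumed convective boundary, with all parameter values invented.

```python
import numpy as np

def extra_mixing_step(x, dz, dt, d0, z_cb, h):
    """One conservative explicit diffusion step for a 1-D lithium
    profile x (index 0 = surface, z increasing inward).  The mixing
    coefficient is d0 inside the convective envelope (z <= z_cb) and
    decays exponentially with scale h below it; d0, z_cb and h are
    illustrative stand-ins, not values calibrated from simulations."""
    z = np.arange(x.size) * dz
    d = d0 * np.exp(-np.maximum(z - z_cb, 0.0) / h)
    d_face = 0.5 * (d[1:] + d[:-1])        # coefficient at interior cell faces
    flux = -d_face * np.diff(x) / dz       # Fickian flux at the faces
    div = np.zeros_like(x)                 # flux divergence, no-flux end cells
    div[:-1] -= flux
    div[1:] += flux
    return x + dt / dz * div
```

Because the update is written in flux form, the total lithium content is conserved to round-off; only its distribution with depth changes, which is the property a stellar evolution code needs from such a scheme.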
International Nuclear Information System (INIS)
Zhen, Xudong; Wang, Yang; Liu, Daming
2016-01-01
Highlights: • A new optimized chemical kinetic mechanism for PRF is developed. • Mechanism optimization is performed based on CHEMKIN simulations. • More reactions of C0–C1 oxidation are added in the present mechanism. • Good performance of the mechanism is achieved by validation against various reactors and operating conditions. - Abstract: In the present study, for multi-dimensional CFD (computational fluid dynamics) combustion simulations of internal combustion engines, a new optimized chemical kinetic reaction mechanism for the oxidation of PRF (primary reference fuel), used in place of gasoline, has been developed. In order to carry out in-depth research on the combustion phenomena of internal combustion engines, an optimized reduced PRF mechanism including more intermediate species and radicals was developed. The developed mechanism uses iso-octane (C8H18) and n-heptane (C7H16) surrogates, and contains 51 species and 193 reactions. Compared with many other PRF mechanisms, more reactions of C0–C1 oxidation (100 reactions) are included in the present mechanism. In order to improve the performance of the model, the development focused on improving the prediction of the ignition delay time. The developed mechanism has been validated against various experimental and simulation data including shock tube data, laminar flame speed data and HCCI (homogeneous charge compression ignition) engine data. The results showed that the developed PRF mechanism was in agreement with the experimental data and other established reduced mechanisms, and that it can be applied to multi-dimensional CFD simulations of internal combustion engines.
Transport synthetic acceleration scheme for multi-dimensional neutron transport problems
Energy Technology Data Exchange (ETDEWEB)
Modak, R S; Kumar, Vinod; Menon, S V.G. [Theoretical Physics Div., Bhabha Atomic Research Centre, Mumbai (India); Gupta, Anurag [Reactor Physics Design Div., Bhabha Atomic Research Centre, Mumbai (India)
2005-09-15
The numerical solution of the linear multi-energy-group neutron transport equation is required in several analyses in nuclear reactor physics and allied areas. Computer codes based on the discrete ordinates (Sn) method are commonly used for this purpose. These codes solve the external source problem and the K-eigenvalue problem. The overall solution technique involves the solution of a source problem in each energy group as an intermediate procedure. Such a single-group source problem is solved by the so-called Source Iteration (SI) method. As is well known, the SI method converges very slowly for optically thick and highly scattering regions, leading to large CPU times. Over the last three decades, many schemes have been tried to accelerate the SI, the most prominent being the Diffusion Synthetic Acceleration (DSA) scheme. The DSA scheme, however, often fails and is also rather difficult to implement. In view of this, in 1997, Ramone and others developed a new acceleration scheme called Transport Synthetic Acceleration (TSA), which is much more robust and easy to implement. This scheme has recently been incorporated in 2-D and 3-D in-house codes at BARC. This report presents studies on the utility of the TSA scheme for fairly general test problems involving many energy groups and anisotropic scattering. The scheme is found to be useful for problems in Cartesian as well as cylindrical geometry. (author)
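The slow convergence of Source Iteration that motivates acceleration schemes such as TSA can be seen in a zero-dimensional caricature, assuming a single group and an infinite homogeneous medium: each sweep multiplies the remaining error by the scattering ratio c, so the iteration count blows up as c approaches 1. This toy is only the convergence argument, not a transport solver.

```python
def source_iteration(c, q, tol=1e-8):
    """Fixed-point iteration phi <- c*phi + q, the infinite-medium
    caricature of transport Source Iteration.  The exact solution is
    q/(1-c); the error shrinks by a factor c per sweep, so scattering
    ratios near 1 (optically thick, highly scattering media) converge
    very slowly."""
    phi, n = 0.0, 0
    while True:
        phi_new = c * phi + q
        n += 1
        if abs(phi_new - phi) < tol:
            return phi_new, n
        phi = phi_new
```

For example, raising c from 0.5 to 0.99 multiplies the number of sweeps needed by roughly two orders of magnitude, which is exactly the regime where synthetic acceleration pays off.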
Firth, Jean M
1992-01-01
The analysis of signals and systems using transform methods is a very important aspect of the examination of processes and problems in an increasingly wide range of applications. Whereas the initial impetus in the development of methods appropriate for handling discrete sets of data occurred mainly in an electrical engineering context (for example in the design of digital filters), the same techniques are in use in such disciplines as cardiology, optics, speech analysis and management, as well as in other branches of science and engineering. This text is aimed at a readership whose mathematical background includes some acquaintance with complex numbers, linear differential equations, matrix algebra, and series. Specifically, a familiarity with Fourier series (in trigonometric and exponential forms) is assumed, and an exposure to the concept of a continuous integral transform is desirable. Such a background can be expected, for example, on completion of the first year of a science or engineering degree cour...
Milledge, David; Bellugi, Dino; McKean, Jim; Dietrich, William E.
2013-04-01
Current practice in regional-scale shallow landslide hazard assessment is to adopt a one-dimensional slope stability representation. Such a representation cannot produce discrete landslides and thus cannot make predictions on landslide size. Furthermore, one-dimensional approaches cannot include lateral effects, which are known to be important in defining instability. Here we derive an alternative model that accounts for lateral resistance by representing the forces acting on each margin of an unstable block of soil. We model boundary frictional resistances using 'at rest' earth pressure on the lateral sides, and 'active' and 'passive' pressure, using the log-spiral method, on the upslope and downslope margins. We represent root reinforcement on each margin assuming that root cohesion declines exponentially with soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are relatively well constrained and find that our model predicts failure at the observed location and predicts that larger or smaller failures conformal to the observed shape are indeed more stable. We use a sensitivity analysis of the model to show that lateral reinforcement sets a minimum landslide size, and that the additional strength at the downslope boundary results in optimal shapes that are longer in the downslope direction. However, reinforcement effects alone cannot fully explain the size or shape distributions of observed landslides, highlighting the importance of the spatial pattern of key parameters (e.g. pore water pressure and soil depth) at the watershed scale. The application of the model at this scale requires an efficient method to find unstable shapes among an exponential number of candidates. In this context, the model allows a more extensive examination of the controls on landslide size, shape and location.
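The size effect described above (lateral reinforcement scales with the block perimeter while the driving force scales with its area, so small blocks are disproportionately reinforced) can be made concrete in a toy factor-of-safety calculation for a square block. All soil parameters below are invented, and the expressions are far simpler than the paper's log-spiral and earth-pressure treatment.

```python
import math

def factor_of_safety(side, depth=1.0, slope_deg=40.0, gamma=18e3,
                     c_basal=2e3, c_lat=4e3, phi_deg=33.0):
    """Toy factor of safety for a square soil block of the given side
    length (m).  Basal resistance and the driving force both scale with
    the area (side**2), while lateral resistance scales with the
    perimeter (4*side), so FS decreases toward the basal-only limit as
    the block grows.  Parameter values are illustrative only."""
    theta = math.radians(slope_deg)
    weight = gamma * depth * side * side               # block weight (N)
    driving = weight * math.sin(theta)                 # downslope component
    basal = (c_basal + gamma * depth * math.cos(theta)
             * math.tan(math.radians(phi_deg))) * side * side
    lateral = c_lat * depth * 4 * side                 # four side margins
    return (basal + lateral) / driving
```

With these numbers a 1 m block is comfortably stable while a 100 m block is not, reproducing the qualitative conclusion that lateral reinforcement sets a minimum landslide size.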
Šiaudinytė, Lauryna; Molnar, Gabor; Köning, Rainer; Flügge, Jens
2018-05-01
The versatility of interferometric encoders in industrial applications increases the need to measure several degrees of freedom. A novel grating interferometer containing a commercially available, miniaturized Michelson interferometer and three fibre-fed measurement heads is presented in this paper. Moreover, the arrangement is designed for simultaneous displacement measurements in two perpendicular planes. In the proposed setup, the beam splitters are located in the fibre heads; therefore the grating is separated from the light source and the photodetector, which would otherwise influence the measurement results through generated heat. The operating principle of the proposed system as well as the error sources influencing measurement results are discussed in this paper. Further, the benefits and shortcomings of the setup are presented. A simple Littrow-configuration-based design leads to a compact-size interferometric encoder suitable for multi-dimensional measurements.
Discrete frequency identification using the HP 5451B Fourier analyser
International Nuclear Information System (INIS)
Holland, L.; Barry, P.
1977-01-01
The frequency analysis performed by the HP5451B discrete frequency Fourier analyser is studied. The advantages of cross-correlation analysis for identifying discrete frequencies in background noise are discussed in conjunction with the elimination of aliasing and wraparound error. Discrete frequency identification is illustrated by a series of graphs giving the results of analysing 'electrical' and 'acoustical' white noise and sinusoidal signals
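The principle behind this kind of correlation analysis can be illustrated with a toy numerical experiment (the analyser itself is hardware; the tone frequency and noise level below are invented). By the Wiener-Khinchin theorem the power spectrum is the Fourier transform of the autocorrelation, so a discrete frequency buried in white noise stands out as a sharp spectral peak while the noise spreads over all bins.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n, f0 = 1024.0, 1024, 50.0          # sample rate, record length, true tone (Hz)
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, 0.5, n)  # tone in white noise

# Power spectrum (equivalently, the transform of the autocorrelation):
# the coherent tone concentrates in one bin, the noise does not.
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(n, 1.0 / fs)
peak = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
```

With a 1 Hz bin spacing the tone lands exactly on a bin, so the peak search recovers the discrete frequency despite a noise standard deviation of half the tone amplitude.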
TESTBED IMPLEMENTATION OF MULTI DIMENSIONAL SPECTRUM SENSING SCHEMES FOR COGNITIVE RADIO
Directory of Open Access Journals (Sweden)
Deepa N Reddy
2016-06-01
Full Text Available Cognitive Radio (CR) is a promising technology to exploit the underutilized spectrum. Spectrum sensing is one of the most important components for the establishment of a cognitive radio system. Spectrum sensing allows the secondary users (SUs) to detect the presence of the primary users (PUs). The aim of this work is to create a CR environment to study spectrum sensing methods using Universal Software Radio Peripheral (USRP) boards. In this paper a novel method of estimating spectrum opportunities in multiple dimensions, especially the space and angle dimensions, is carried out on USRP boards. This paper provides the experimental results carried out in an indoor wireless environment. To enhance the sensing performance, the space dimension is first studied using spatial diversity of the cooperative SUs. Secondly, receiver diversity is analyzed using multiple antennas to enhance the error performance of the wireless system. The spectrum usage is also determined in the angle dimension by investigating the direction of the dominant signals using the MUSIC algorithm.
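As an illustration of the angle-dimension sensing step, a minimal MUSIC direction-of-arrival sketch for a half-wavelength uniform linear array is given below. The array size, source angle and noise level are invented and bear no relation to the USRP testbed; the point is only the subspace idea: steering vectors of true arrivals are orthogonal to the noise subspace of the sample covariance, so the pseudospectrum peaks at the dominant directions.

```python
import numpy as np

def music_doa(snapshots, n_sources, n_grid=361):
    """MUSIC pseudospectrum for a half-wavelength uniform linear array.
    snapshots: (elements, samples) complex array.  Returns the angle
    grid in degrees and the pseudospectrum over it."""
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, vecs = np.linalg.eigh(r)                              # ascending eigenvalues
    noise = vecs[:, : r.shape[0] - n_sources]                # noise subspace
    angles = np.linspace(-90.0, 90.0, n_grid)
    m = np.arange(r.shape[0])
    spectrum = []
    for a in angles:
        steer = np.exp(1j * np.pi * m * np.sin(np.radians(a)))
        spectrum.append(1.0 / np.linalg.norm(noise.conj().T @ steer) ** 2)
    return angles, np.asarray(spectrum)

# Synthetic scenario: one source at 20 degrees, 6 elements, 400 snapshots.
rng = np.random.default_rng(1)
true_doa, elements, samples = 20.0, 6, 400
m = np.arange(elements)
steer = np.exp(1j * np.pi * m * np.sin(np.radians(true_doa)))
x = np.outer(steer, rng.normal(size=samples))
x = x + 0.1 * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))
angles, p = music_doa(x, 1)
est = angles[np.argmax(p)]
```

The eigendecomposition splits signal and noise subspaces; scanning steering vectors against the noise subspace is what gives MUSIC its resolution beyond a plain beamformer.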
A Generic Framework for Using Multi-Dimensional Earth Observation Data in GIS
Directory of Open Access Journals (Sweden)
Yunfeng Jiang
2016-05-01
Full Text Available Earth Observation (EO) data are critical for many Geographic Information System (GIS)-based decision support systems to provide factual information. However, it is challenging for GIS to understand traditional EO data formats (e.g., Hierarchical Data Format (HDF)) given the different contents and formats in the two domains. To address this gap between EO data and GIS, the barriers to and strategies for integrating various types of EO data with GIS are explored, especially with the popular Geospatial Data Abstraction Library (GDAL) that is used by many GISs to access EO data. The research investigates three key technical aspects: (i) designing a generic plug-in framework for consuming different types of EO data; (ii) implementing the framework to fix the errors in GIS when using GDAL to understand EO data; and (iii) developing extensions for commercial and open source GIS (i.e., ArcGIS and QGIS) to demonstrate the usability of the proposed framework and its implementation in GDAL. A series of EO data products collected from NASA's Atmospheric Scientific Data Center (ASDC) are used in the tests and the results prove the proposed framework is efficient in solving different problems in interpreting EO data without compromising their original content.
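The "generic plug-in framework" concept can be sketched language-agnostically as a reader registry: each format contributes a predicate that claims a product plus a reader function, and GIS-facing code asks the registry rather than hard-coding formats. The predicates, reader names and return values below are invented for illustration and do not reflect the paper's actual GDAL implementation.

```python
# Registry mapping plug-in name -> (claim predicate, reader function).
_READERS = {}

def register(predicate):
    """Decorator: add a reader to the registry with the predicate that
    decides whether it claims a given product name."""
    def wrap(fn):
        _READERS[fn.__name__] = (predicate, fn)
        return fn
    return wrap

@register(lambda name: name.endswith(".hdf"))
def read_hdf(name):
    # Stand-in for an HDF-specific reader; a real plug-in would parse
    # subdatasets, georeferencing, etc.
    return {"driver": "hdf-reader", "product": name}

@register(lambda name: name.endswith(".nc"))
def read_netcdf(name):
    return {"driver": "netcdf-reader", "product": name}

def open_product(name):
    """Dispatch to whichever registered plug-in claims the product."""
    for predicate, fn in _READERS.values():
        if predicate(name):
            return fn(name)
    raise ValueError(f"no plug-in claims {name!r}")
```

New formats are then supported by registering another reader, without touching the dispatch code, which is the extensibility property the framework aims for.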
Discrete Curvatures and Discrete Minimal Surfaces
Sun, Xiang
2012-01-01
This thesis presents an overview of some approaches to compute Gaussian and mean curvature on discrete surfaces and discusses discrete minimal surfaces. The variety of applications of differential geometry in visualization and shape design leads
Carkin, Susan
The broad goal of this study is to represent the linguistic variation of textbooks and lectures, the primary input for student learning---and sometimes the sole input in the large introductory classes which characterize General Education at many state universities. Computer techniques are used to analyze a corpus of textbooks and lectures from first-year university classes in macroeconomics and biology. These spoken and written variants are compared to each other as well as to benchmark texts from other multi-dimensional studies in order to examine their patterns, relations, and functions. A corpus consisting of 147,000 words was created from macroeconomics and biology lectures at a medium-large state university and from a set of nationally "best-selling" textbooks used in these same introductory survey courses. The corpus was analyzed using multi-dimensional methodology (Biber, 1988). The analysis consists of both empirical and qualitative phases. Quantitative analyses are undertaken on the linguistic features, their patterns of co-occurrence, and on the contextual elements of classrooms and textbooks. The contextual analysis is used to functionally interpret the statistical patterns of co-occurrence along five dimensions of textual variation, demonstrating patterns of difference and similarity with reference to text excerpts. Results of the analysis suggest that academic discourse is far from monolithic. Pedagogic discourse in introductory classes varies by modality and discipline, but not always in the directions expected. In the present study the most abstract texts were biology lectures---more abstract than written genres of academic prose and more abstract than introductory textbooks. Academic lectures in both disciplines, monologues which carry a heavy informational load, were extremely interactive, more like conversation than academic prose. A third finding suggests that introductory survey textbooks differ from those used in upper division classes by being
International Nuclear Information System (INIS)
Suzuki, K.; Sato, H.
1975-01-01
The power and cross-power spectrum analysis, by which the vibration characteristics of structures, such as natural frequency, mode of vibration and damping ratio, can be identified, is effective for confirming these characteristics after construction is completed, using the response to small earthquakes or the micro-tremor under operating conditions. This method of analysis, previously utilized only from the viewpoint of systems with a single input, is here extensively applied to the analysis of a medium-scale model of a piping system subjected to two seismic inputs. The piping system, attached to a three-storied concrete structure model constructed on a shaking table, was excited by earthquake motions. The inputs to the piping system were recorded at the second floor and the ceiling of the third floor where the system was attached. The output, the response of the piping system, was measured at a middle point on the system. As a result, the multi-dimensional power spectrum analysis proves effective for a more reliable identification of the vibration characteristics of the multi-input structure system
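The identification principle (the cross-power spectrum divided by the input auto-power spectrum estimates the frequency response, whose peak locates a natural frequency) can be sketched for a single-input toy: white noise driving a one-mode resonant filter. The pole radius and the normalized natural frequency of 0.1 are assumptions of the sketch, not values from the experiment.

```python
import numpy as np

rng = np.random.default_rng(2)
r, f_n = 0.97, 0.1                      # pole radius, normalized natural frequency
a1, a2 = 2 * r * np.cos(2 * np.pi * f_n), -r * r

x = rng.normal(size=1 << 15)            # broadband "seismic" input
y = np.empty_like(x)                    # response of a one-mode resonant structure
y1 = y2 = 0.0
for i, xi in enumerate(x):
    y[i] = xi + a1 * y1 + a2 * y2
    y1, y2 = y[i], y1

# Segment-averaged cross- and auto-power spectra; |Pxy/Pxx| estimates the
# frequency response magnitude, whose maximum sits near the natural frequency.
seg = 1024
xs = x[: (x.size // seg) * seg].reshape(-1, seg)
ys = y[: (y.size // seg) * seg].reshape(-1, seg)
X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
h = np.abs((Y * X.conj()).mean(0) / (X * X.conj()).mean(0).real)
freqs = np.fft.rfftfreq(seg)
f_est = freqs[np.argmax(h)]
```

Averaging over segments suppresses the noise in each spectral estimate, which is the same reason the multi-input analysis in the study relies on averaged power and cross-power spectra rather than single records.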
Sibley, Chris G; Houkamau, Carla A
2013-01-01
We argue that there is a need for culture-specific measures of identity that delineate the factors that most make sense for specific cultural groups. One such measure, recently developed specifically for Māori peoples, is the Multi-Dimensional Model of Māori Identity and Cultural Engagement (MMM-ICE). Māori are the indigenous peoples of New Zealand. The MMM-ICE is a 6-factor measure that assesses the following aspects of identity and cultural engagement as Māori: (a) group membership evaluation, (b) socio-political consciousness, (c) cultural efficacy and active identity engagement, (d) spirituality, (e) interdependent self-concept, and (f) authenticity beliefs. This article examines the scale properties of the MMM-ICE using item response theory (IRT) analysis in a sample of 492 Māori. The MMM-ICE subscales showed reasonably even levels of measurement precision across the latent trait range. Analysis of age (cohort) effects further indicated that most aspects of Māori identification tended to be higher among older Māori, and these cohort effects were similar for both men and women. This study provides novel support for the reliability and measurement precision of the MMM-ICE. The study also provides a first step in exploring change and stability in Māori identity across the life span. A copy of the scale, along with recommendations for scale scoring, is included.
Parallel Implementation of the Multi-Dimensional Spectral Code SPECT3D on large 3D grids.
Golovkin, Igor E.; Macfarlane, Joseph J.; Woodruff, Pamela R.; Pereyra, Nicolas A.
2006-10-01
The multi-dimensional collisional-radiative, spectral analysis code SPECT3D can be used to study radiation from complex plasmas. SPECT3D can generate instantaneous and time-gated images and spectra, space-resolved and streaked spectra, which makes it a valuable tool for post-processing hydrodynamics calculations and direct comparison between simulations and experimental data. On large three dimensional grids, transporting radiation along lines of sight (LOS) requires substantial memory and CPU resources. Currently, the parallel option in SPECT3D is based on parallelization over photon frequencies and allows for a nearly linear speed-up for a variety of problems. In addition, we are introducing a new parallel mechanism that will greatly reduce memory requirements. In the new implementation, spatial domain decomposition will be utilized allowing transport along a LOS to be performed only on the mesh cells the LOS crosses. The ability to operate on a fraction of the grid is crucial for post-processing the results of large-scale three-dimensional hydrodynamics simulations. We will present a parallel implementation of the code and provide a scalability study performed on a Linux cluster.
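The proposed spatial domain decomposition rests on the observation that a line of sight crosses only a thin set of mesh cells. A minimal 2-D sketch of collecting that set is given below; it is only the geometric idea, not the SPECT3D implementation, and the dense-sampling approach is an approximation of an exact grid traversal.

```python
def cells_on_los(x0, y0, x1, y1, nx, ny):
    """Return the cells of an nx-by-ny grid of unit cells crossed by the
    line of sight from (x0, y0) to (x1, y1).  Transport along a LOS only
    needs this small subset of the mesh, which is what makes spatial
    domain decomposition attractive on large grids.  (Dense sampling may
    miss cells clipped exactly at a corner; a production code would use
    an exact traversal such as Amanatides-Woo.)"""
    steps = 4 * (nx + ny)               # sample spacing well below one cell
    cells = []
    for k in range(steps + 1):
        t = k / steps
        i = min(int(x0 + t * (x1 - x0)), nx - 1)
        j = min(int(y0 + t * (y1 - y0)), ny - 1)
        if (i, j) not in cells:
            cells.append((i, j))
    return cells
```

On an N-by-N grid a single LOS touches O(N) of the N**2 cells, so per-LOS memory and work shrink accordingly, which is the scaling argument behind the new parallel mechanism.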
Energy Technology Data Exchange (ETDEWEB)
Wu, Juhao, E-mail: jhwu@SLAC.Stanford.EDU [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Hu, Newman [Valley Christian High School, 100 Skyway Drive, San Jose, CA 95111 (United States); Setiawan, Hananiel [The Facility for Rare Isotope Beams, Michigan State University, East Lansing, MI 48824 (United States); Huang, Xiaobiao; Raubenheimer, Tor O. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Jiao, Yi [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Yu, George [Columbia University, New York, NY 10027 (United States); Mandlekar, Ajay [California Institute of Technology, Pasadena, CA 91125 (United States); Spampinati, Simone [Sincrotrone Trieste S.C.p.A. di interesse nazionale, Strada Statale 14-km 163,5 in AREA Science Park, 34149 Basovizza, Trieste (Italy); Fang, Kun [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Chu, Chungming [The Facility for Rare Isotope Beams, Michigan State University, East Lansing, MI 48824 (United States); Qiang, Ji [Lawrence Berkeley National Laboratory, University of California, Berkeley, CA 94720 (United States)
2017-02-21
There is great interest in generating high-power hard X-ray Free Electron Laser (FEL) pulses at the terawatt (TW) level that can enable coherent diffraction imaging of complex molecules like proteins and probe fundamental high-field physics. A feasibility study of producing such X-ray pulses was carried out employing a configuration beginning with a Self-Amplified Spontaneous Emission FEL, followed by a “self-seeding” crystal monochromator generating a fully coherent seed, and finishing with a long tapered undulator where the coherent seed recombines with the electron bunch and is amplified to high power. The undulator tapering profile, the phase advance in the undulator break sections, the quadrupole focusing strength, etc. are parameters to be optimized. A Genetic Algorithm (GA) is adopted for this multi-dimensional optimization. Concrete examples are given for Linac Coherent Light Source (LCLS) and LCLS-II-type systems. An analytical estimate is also developed to cross-check the simulation and optimization results as a quick and complementary tool.
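A genetic algorithm of the kind used for such multi-dimensional searches can be sketched in a few lines. The selection, crossover and mutation scheme below is a bare-bones caricature, and the quadratic fitness function is a stand-in with a known optimum, not an FEL power model.

```python
import random

def ga_maximize(fitness, n_params, pop=40, gens=60, sigma=0.2, seed=3):
    """Bare-bones real-coded genetic algorithm: truncation selection,
    uniform crossover and Gaussian mutation with a decaying step size.
    Illustrative only; production GA optimizers add niching, adaptive
    operators, constraint handling, etc."""
    rng = random.Random(seed)
    popn = [[rng.uniform(-1.0, 1.0) for _ in range(n_params)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        parents = popn[: pop // 4]              # keep the best quarter (elitism)
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            children.append([rng.choice(pair) + rng.gauss(0.0, sigma)
                             for pair in zip(a, b)])
        popn = parents + children
        sigma *= 0.95                           # anneal the mutation size
    return max(popn, key=fitness)

# Sanity check: recover a known optimum (all four parameters equal to 0.5).
best = ga_maximize(lambda v: -sum((g - 0.5) ** 2 for g in v), 4)
```

The appeal for taper optimization is that the GA needs only fitness evaluations (here trivially cheap, in practice full FEL simulations), not gradients of the objective.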
Su, Yunshan; Fang, Kewei; Mao, Chongwen; Xiang, Shutian; Wang, Jin; Li, Yingwen
2018-01-01
The present study aimed to explore the application of 640-slice dynamic volume computed tomography (DVCT) to excretory cystography and urethrography. A total of 70 healthy subjects were included in the study. Excretory cystography and urethrography using 640-slice DVCT was conducted to continuously record the motions of the bladder and the proximal female and male urethra. The patients' voiding process was divided into early, early to middle, middle, middle to late, and late voiding phases. The subjects were analyzed using DVCT and conventional CT. The cross-sectional areas of various sections of the male and female urethra were evaluated, and the average urine flow rate was calculated. The 640-slice DVCT technique was used to dynamically observe the urine flow rate and changes in bladder volume at all voiding phases. The urine volume detected by 640-slice DVCT exhibited no significant difference compared with the actual volume, and no significant difference compared with that determined using conventional CT. Furthermore, no significant difference in the volume of the bladder at each phase of the voiding process was detected between 640-slice DVCT and conventional CT. The results indicate that 640-slice DVCT can accurately evaluate the status of the male posterior urethra and female urethra. In conclusion, 640-slice DVCT is able to multi-dimensionally and dynamically present changes in bladder volume and urine flow rate, and could obtain similar results to conventional CT in detecting urine volume, as well as the status of the male posterior urethra and female urethra. PMID:29467853
Directory of Open Access Journals (Sweden)
Vincent Casseau
2016-12-01
Full Text Available hy2Foam is a newly-coded open-source two-temperature computational fluid dynamics (CFD) solver that has previously been validated for zero-dimensional test cases. It aims at (1) giving open-source access to a state-of-the-art hypersonic CFD solver to students and researchers; and (2) providing a foundation for a future hybrid CFD-DSMC (direct simulation Monte Carlo) code within the OpenFOAM framework. This paper focuses on the multi-dimensional verification of hy2Foam and firstly describes the different models implemented. In conjunction with employing the coupled vibration-dissociation-vibration (CVDV) chemistry–vibration model, novel use is made of the quantum-kinetic (QK) rates in a CFD solver. hy2Foam has been shown to produce results in good agreement with previously published data for a Mach 11 nitrogen flow over a blunted cone and with the dsmcFoam code for a Mach 20 cylinder flow for a binary reacting mixture. This latter case scenario provides a useful basis for other codes to compare against.
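Two-temperature solvers of this kind carry separate translational and vibrational energy pools coupled by a relaxation source term. The sketch below integrates a generic Landau-Teller-type relaxation law; the constant relaxation time and equilibrium energy are illustrative assumptions, not hy2Foam's actual model, which would update both every step from local flow quantities.

```python
import math

def relax_vibrational(ev, ev_eq, tau, dt, steps):
    """Explicit integration of a Landau-Teller-type relaxation law,
    dEv/dt = (Ev_eq - Ev)/tau, the generic form of the source term
    coupling the vibrational energy pool to the translational one in
    two-temperature solvers.  tau and Ev_eq are held constant here
    purely for illustration."""
    for _ in range(steps):
        ev += dt * (ev_eq - ev) / tau
    return ev

# Relax from a cold vibrational state toward equilibrium over 5 relaxation times.
ev_end = relax_vibrational(ev=0.0, ev_eq=1.0, tau=1.0, dt=0.01, steps=500)
```

With dt much smaller than tau, the numerical solution tracks the analytic exponential approach 1 - exp(-t/tau), which is the behavior a zero-dimensional validation case checks before any multi-dimensional verification.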
International Nuclear Information System (INIS)
Jeong, Hae Yong; Ha, Kwi Seok; Chang, Won Pyo; Lee, Kwi Lim
2012-01-01
Phenix is one of the important prototype sodium-cooled fast reactors (SFRs) in nuclear reactor development history. It was operated successfully for 35 years by the French Commissariat a l'energie atomique (CEA) and Electricite de France (EdF), achieving its original objectives of demonstrating fast breeder reactor technology and of serving as an irradiation facility for innovative fuels and materials. After its final shutdown in 2009, CEA launched the Phenix End-of-life (EOL) test program. It provided a unique opportunity to generate reliable test data, which are indispensable for the validation and verification of an SFR system analysis code. KAERI joined this international collaboration program of the IAEA CRP and performed the pre-test and post-test analyses utilizing one-dimensional modeling with the MARS-LMR code, which had been developed by KAERI for the transient analysis of SFR systems. Through the previous studies, it was identified that there are some limitations in modeling the complicated thermal-hydraulic behaviors in the large pool volumes with one-dimensional modeling. Recently, KAERI performed the analysis of the Phenix EOL natural circulation test with multi-dimensional pool modeling, which is detailed below
Park, Ji-Won; Jeong, Hyobin; Kang, Byeongsoo; Kim, Su Jin; Park, Sang Yoon; Kang, Sokbom; Kim, Hark Kyun; Choi, Joon Sig; Hwang, Daehee; Lee, Tae Geol
2015-06-05
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) emerges as a promising tool to identify the ions (small molecules) indicative of disease states from the surface of patient tissues. In TOF-SIMS analysis, an enhanced ionization of surface molecules is critical to increase the number of detected ions. Several methods have been developed to enhance ionization capability. However, how these methods improve identification of disease-related ions has not been systematically explored. Here, we present a multi-dimensional SIMS (MD-SIMS) that combines conventional TOF-SIMS and metal-assisted SIMS (MetA-SIMS). Using this approach, we analyzed cancer and adjacent normal tissues first by TOF-SIMS and subsequently by MetA-SIMS. In total, TOF- and MetA-SIMS detected 632 and 959 ions, respectively. Among them, 426 were commonly detected by both methods, while 206 and 533 were detected uniquely by TOF- and MetA-SIMS, respectively. Of the 426 commonly detected ions, 250 increased in their intensities by MetA-SIMS, whereas 176 decreased. The integrated analysis of the ions detected by the two methods resulted in an increased number of discriminatory ions leading to an enhanced separation between cancer and normal tissues. Therefore, the results show that MD-SIMS can be a useful approach to provide a comprehensive list of discriminatory ions indicative of disease states.
International Nuclear Information System (INIS)
Kammer, Frank von der; Ottofuelling, Stephanie; Hofmann, Thilo
2010-01-01
Assessment of the behavior and fate of engineered nanoparticles (ENPs) in natural aquatic media is crucial for the identification of environmentally critical properties of the ENPs. Here we present a methodology for testing the dispersion stability, ζ-potential and particle size of engineered nanoparticles as a function of pH and water composition. The results obtained from already widely used titanium dioxide nanoparticles (Evonik P25 and Hombikat UV-100) serve as a proof-of-concept for the proposed testing scheme. In most cases the behavior of the particles in the tested settings follows the expectations derived from classical DLVO theory for metal oxide particles with variable charge and an isoelectric point at around pH 5, but deviations also occur. Regardless of a 5-fold difference in BET specific surface area, particles composed of the same core material behave in an overall comparable manner. The presented methodology can act as a basis for the development of standardised methods for comparing the behavior of different nanoparticles within aquatic systems. - The behavior of engineered nanoparticles in the aquatic environment can be elucidated using a multi-dimensional parameter set acquired by a semi-automated experimental set-up.
Directory of Open Access Journals (Sweden)
Yongli Wang
2017-11-01
Full Text Available The Multi-dimensional Scale of Perceived Social Support (MSPSS) is one of the most extensively used instruments to assess social support. The purpose of this research was to test the reliability, factorial validity, concurrent validity and measurement invariance across gender groups of the MSPSS in Chinese parents of children with cerebral palsy. A total of 487 participants aged 21–55 years were recruited to complete the Chinese MSPSS and the Parenting Stress Index-Short Form (PSI-SF). Composite reliability was calculated as the internal consistency of the Chinese MSPSS, and a (multi-group) confirmatory factor analysis (CFA) was conducted to test the factorial validity and measurement invariance across gender. Pearson correlations were calculated to test the relationships between the MSPSS and the PSI-SF. The Chinese MSPSS had satisfactory internal reliability, with composite reliability values above 0.7. The CFA indicated that the original three-factor model was replicated in this specific population. Importantly, the results of the multi-group CFA demonstrated that configural, metric, and scalar invariance across gender groups was supported. In addition, all three subscales of the MSPSS were significantly related to the PSI-SF. These findings suggest that the Chinese MSPSS is a reliable and valid tool for assessing social support and can generally be utilized across gender in parents of children with cerebral palsy.
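The composite reliability reported in the abstract has a simple closed form given standardized factor loadings (assuming uncorrelated measurement errors). A minimal sketch; the loadings below are hypothetical and not taken from the study:

```python
def composite_reliability(loadings):
    """Composite (construct) reliability from standardized factor loadings,
    assuming uncorrelated errors:
        rho_c = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))
    Values above 0.7 are conventionally taken as satisfactory.
    """
    s = sum(loadings)                     # sum of loadings
    err = sum(1 - l**2 for l in loadings)  # summed error variances
    return s**2 / (s**2 + err)

# Hypothetical loadings for a four-item subscale
rho = composite_reliability([0.78, 0.81, 0.74, 0.69])
```

With these illustrative loadings, `rho` is about 0.84, comfortably above the 0.7 threshold mentioned in the abstract.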
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hae Yong; Ha, Kwi Seok; Chang, Won Pyo; Lee, Kwi Lim [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2012-05-15
Phenix is one of the important prototype sodium-cooled fast reactors (SFRs) in nuclear reactor development history. It was operated successfully for 35 years by the French Commissariat a l'energie atomique (CEA) and Electricite de France (EdF), achieving its original objectives of demonstrating fast breeder reactor technology and of serving as an irradiation facility for innovative fuels and materials. After its final shutdown in 2009, CEA launched the Phenix End-of-Life (EOL) test program. It provided a unique opportunity to generate reliable test data, which are indispensable for the validation and verification of an SFR system analysis code. KAERI joined this international collaboration program of the IAEA CRP and has performed pre-test and post-test analyses utilizing the one-dimensional modeling of the MARS-LMR code, which was developed by KAERI for the transient analysis of SFR systems. The previous studies identified limitations of the one-dimensional modeling in representing the complicated thermal-hydraulic behaviors in the large pool volumes. Recently, KAERI performed the analysis of the Phenix EOL natural circulation test with multi-dimensional pool modeling, which is detailed below.
Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed
2016-07-01
Condition monitoring of electric drives is of paramount importance since it contributes to enhanced system reliability and availability. Moreover, knowledge about fault mode behavior is extremely important for improving system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high-resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed, and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach with air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
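The subspace idea behind MUSIC-type detectors can be sketched in a few lines. This is a generic one-dimensional MUSIC pseudospectrum, not the authors' multi-dimensional (MD MUSIC) implementation, and the two-tone test signal and all parameters (window length, frequencies, noise level) are illustrative:

```python
import numpy as np

def music_pseudospectrum(x, n_sources, freqs, m=32):
    """1-D MUSIC: sharp pseudospectrum peaks at sinusoid frequencies.

    x         : complex samples
    n_sources : assumed number of sinusoidal components
    freqs     : normalized frequencies (cycles/sample) to evaluate
    m         : covariance / subspace dimension
    """
    # Snapshot matrix of m-length sliding windows and sample covariance
    N = len(x) - m + 1
    X = np.column_stack([x[i:i + m] for i in range(N)])
    R = X @ X.conj().T / N
    # Noise subspace: eigenvectors of the m - n_sources smallest eigenvalues
    w, V = np.linalg.eigh(R)          # eigh returns ascending eigenvalues
    En = V[:, : m - n_sources]
    # Pseudospectrum 1 / ||En^H a(f)||^2 with steering vector a(f)
    k = np.arange(m)
    P = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * k)
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.asarray(P)

# Two tones buried in noise; MUSIC should peak near f1 and f2
rng = np.random.default_rng(0)
n = np.arange(1024)
f1, f2 = 0.12, 0.31
x = (np.exp(2j * np.pi * f1 * n) + 0.8 * np.exp(2j * np.pi * f2 * n)
     + 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024)))
grid = np.linspace(0.01, 0.49, 481)
P = music_pseudospectrum(x, n_sources=2, freqs=grid, m=32)
```

In a bearing-fault setting the evaluation grid would be restricted to the fault characteristic frequencies rather than swept over the whole band.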
International Nuclear Information System (INIS)
Moeller, Peter; Sierk, Arnold J.; Bengtsson, Ragnar; Iwamoto, Akira
2003-01-01
We present fission-barrier-height calculations for nuclei throughout the periodic system based on a realistic theoretical model of the multi-dimensional potential-energy surface of a fissioning nucleus. This surface guides the nuclear shape evolution from the ground state, over inner and outer saddle points, to the final configurations of separated fission fragments. We have previously shown that our macroscopic-microscopic nuclear potential-energy model yields calculated 'outer' fission-barrier heights (E_B) for even-even nuclei throughout the periodic system that agree with experimental data to within about 1.0 MeV. We present final results of this work. Just recently we have enhanced our macroscopic-microscopic nuclear potential-energy model to also allow the consideration of axially asymmetric shapes. This shape degree of freedom has a substantial effect on the calculated height (E_A) of the inner peak of some actinide fission barriers. We present examples of fission-barrier calculations by use of this model with its redetermined constants. Finally we discuss what the model now tells us about fission barriers at the end of the r-process nucleosynthesis path. (author)
Discrete Curvatures and Discrete Minimal Surfaces
Sun, Xiang
2012-06-01
This thesis presents an overview of some approaches to computing Gaussian and mean curvature on discrete surfaces and discusses discrete minimal surfaces. The variety of applications of differential geometry in visualization and shape design leads to great interest in studying discrete surfaces. With the rich smooth surface theory in hand, one would hope that this elegant theory can still be applied to the discrete counterpart. Such a generalization, however, is not always successful. While discrete surfaces have the advantage of being finite dimensional, and thus easier to treat, their geometric properties such as curvatures are not well defined in the classical sense. Furthermore, the powerful tools of calculus can hardly be applied. The methods in this thesis, including the angular defect formula, the cotangent formula, parallel meshes, relative geometry, etc., are approaches based on offset meshes or generalized offset meshes. As an important application, we discuss discrete minimal surfaces and discrete Koenigs meshes.
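The angular defect formula mentioned above admits a very short implementation: the integrated Gaussian curvature at an interior vertex is 2π minus the sum of the incident triangle angles. A sketch for a single vertex; the flat fan and cube-corner examples are illustrative:

```python
import numpy as np

def angle_defect(center, neighbors):
    """Discrete (integrated) Gaussian curvature at a vertex:
    2*pi minus the sum of the angles of incident triangles.

    neighbors: ordered ring of vertices adjacent to `center`;
    consecutive pairs (with wrap-around) span one triangle each.
    """
    c = np.asarray(center, float)
    ring = [np.asarray(p, float) for p in neighbors]
    total = 0.0
    for a, b in zip(ring, ring[1:] + ring[:1]):
        u, v = a - c, b - c
        cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        total += np.arccos(np.clip(cos_t, -1.0, 1.0))
    return 2 * np.pi - total

# Flat fan of four right angles: defect 0 (zero curvature).
flat = angle_defect([0, 0, 0], [[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]])
# Cube corner, three right angles: defect pi/2, as Gauss-Bonnet predicts
# (eight corners times pi/2 equals 4*pi, the total curvature of a sphere).
corner = angle_defect([0, 0, 0], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```

Dividing the defect by a vertex area (e.g. one third of the incident triangle areas) gives a pointwise curvature estimate, which is one of the approaches the thesis surveys.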
Hoede, C.; Li, Z.
2001-01-01
In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
International Nuclear Information System (INIS)
Bor-Jing Chang; Yen-Wan H. Liu
1992-01-01
The HYBRID, or mixed group and point, method was developed to solve the neutron transport equation deterministically using detailed treatment at cross section minima for deep penetration calculations. Its application so far is limited to one-dimensional calculations due to the enormous computing time involved in multi-dimensional calculations. In this article, a collapsing method is developed for the mixed group and point cross section sets to provide a more direct and practical way of using the HYBRID method in multi-dimensional calculations. A test problem is run. The method is then applied to the calculation of a deep penetration benchmark experiment. It is observed that half of the window effect is smeared in the collapsing treatment, but the collapsed set still provides a better cross section set than the VITAMIN-C cross sections for deep penetration calculations.
Schillinger, Dominik
2013-07-01
The method of separation can be used as a non-parametric estimation technique, especially suitable for evolutionary spectral density functions of uniformly modulated and strongly narrow-band stochastic processes. The paper at hand provides a consistent derivation of method of separation based spectrum estimation for the general multi-variate and multi-dimensional case. The validity of the method is demonstrated by benchmark tests with uniformly modulated spectra, for which convergence to the analytical solution is demonstrated. The key advantage of the method of separation is the minimization of spectral dispersion due to optimum time- or space-frequency localization. This is illustrated by the calibration of multi-dimensional and multi-variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method of separation based estimates for the stochastic buckling analysis of the example structures is briefly discussed. © 2013 Elsevier Ltd.
Bae, Jin-Hyuk; Lee, Sin-Doo; Choi, Jong Sun; Park, Jaehoon
2012-05-01
We report on the multi-dimensional alignment of pentacene molecules on a poly(methyl methacrylate)-based photosensitive polymer (PMMA-polymer) and its effect on the electrical performance of the pentacene-based field-effect transistor (FET). Pentacene molecules are shown to be preferentially aligned on the linearly polarized ultraviolet (LPUV)-exposed PMMA-polymer layer, which is in contrast to the isotropic alignment on the bare PMMA-polymer layer. Multi-dimensional alignment of pentacene molecules in the film could be achieved by adjusting the direction of the LPUV exposure of the PMMA-polymer. The control of pentacene molecular alignment is found to be promising for field-effect mobility enhancement in the pentacene FET.
Energy Technology Data Exchange (ETDEWEB)
Bailey, T S; Adams, M L [Texas A M Univ., Dept. of Nuclear Engineering, College Station, TX (United States); Yang, B; Zika, M R [Lawrence Livermore National Lab., Livermore, CA (United States)
2005-07-01
We develop a piecewise linear (PWL) Galerkin finite element spatial discretization for the multi-dimensional radiation diffusion equation. It uses piecewise linear weight and basis functions in the finite element approximation, and it can be applied on arbitrary polygonal (2-dimensional) or polyhedral (3-dimensional) grids. We show that this new PWL method gives solutions comparable to those from Palmer's finite-volume method. However, since the PWL method produces a symmetric positive definite coefficient matrix, it should be substantially more computationally efficient than Palmer's method, which produces an asymmetric matrix. We conclude that the Galerkin PWL method is an attractive option for solving diffusion equations on unstructured grids. (authors)
Approximating the constellation constrained capacity of the MIMO channel with discrete input
DEFF Research Database (Denmark)
Yankov, Metodi Plamenov; Forchhammer, Søren; Larsen, Knud J.
2015-01-01
In this paper the capacity of a Multiple Input Multiple Output (MIMO) channel is considered, subject to average power constraint, for multi-dimensional discrete input, in the case when no channel state information is available at the transmitter. We prove that when the constellation size grows, t...... for the equivalent orthogonal channel, obtained by the singular value decomposition. Furthermore, lower bounds on the constrained capacity are derived for the cases of square and tall MIMO matrix, by optimizing the constellation for the equivalent channel, obtained by QR decomposition....
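The equivalent orthogonal channel obtained by singular value decomposition can be illustrated numerically. The sketch below uses the unconstrained Gaussian-input capacity (not the discrete-constellation constrained capacity that is the paper's subject) to show that the sum over the SVD subchannels reproduces the direct determinant formula; the channel realization and SNR are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr = 4, 4
# Random complex MIMO channel matrix, unknown at the transmitter
H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))
snr = 10.0  # total transmit power over noise power

# The SVD turns y = Hx + n into parallel scalar subchannels with
# gains equal to the singular values of H.
s = np.linalg.svd(H, compute_uv=False)

# Gaussian-input capacity with equal power split over nt antennas ...
c_parallel = sum(np.log2(1 + (snr / nt) * si**2) for si in s)
# ... equals the direct formula log2 det(I + (snr/nt) H H^H)
M = np.eye(nr) + (snr / nt) * H @ H.conj().T
c_det = np.log2(np.real(np.linalg.det(M)))
```

The identity holds because det(I + aHH^H) factors into the product of (1 + a s_i^2) over the singular values; the constellation-constrained capacity studied in the paper replaces each Gaussian subchannel term with a discrete-input mutual information.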
Energy Technology Data Exchange (ETDEWEB)
Jang, Hyung-wook; Lee, Sang-yong; Oh, Seung-jong; Kim, Woong-bae [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)
2016-10-15
The phenomena of LOCA have been investigated for a long time. The most extensive research project for LOCA was the 2D/3D program of experiments. The results of the 2D/3D experiments show that flow conditions in the downcomer during end-of-blowdown were highly multi-dimensional at full scale. In this paper, the authors modified the nodalization of the MARS code LBLOCA input deck and performed an LBLOCA analysis with the new input deck. An LBLOCA analysis for APR1400 with the new downcomer input deck was conducted using KREM with the MARS-KS 1.4 version code. The analysis addressed an LBLOCA with a 100% break size of the cold leg. The authors developed input decks with the new downcomer nodalization and a multi-dimensional downcomer model, then implemented the LOCA analysis with the new input decks and compared the results with the existing analysis. The PCT from the new and multi-dimensional input decks shows a trend similar to that of the original input deck. The PCT dropped more rapidly with the new and multi-dimensional input decks than with the original input deck, and the PCTs from both satisfy the PCT design limit. It can be concluded that no acceptance criteria issue arises even when the new and multi-dimensional input decks are applied to the LBLOCA analysis. In a future study, a comparative analysis with experimental results will be implemented.
International Nuclear Information System (INIS)
Kwon, T.S.; Yun, B.J.; Euh, D.J.; Chu, I.C.; Song, C.H.
2002-01-01
Multi-dimensional thermal-hydraulic behavior in the downcomer annulus of a pressurized water reactor vessel with a Direct Vessel Injection (DVI) mode is presented based on experimental observations in the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) steam-water test facility. From steady-state test results simulating the late reflood phase of a Large Break Loss-of-Coolant Accident (LBLOCA), isothermal lines show the multi-dimensional phenomena of phasic interaction between steam and water in the downcomer annulus very well. MIDAS is a steam-water separate effect test facility, a 1/4.93 linearly scaled-down model of a 1400 MWe PWR-type nuclear reactor, focused on understanding multi-dimensional thermal-hydraulic phenomena in the downcomer annulus with various types of safety injection during the refill or reflood phase of an LBLOCA. The initial and boundary conditions are scaled from the pre-test analysis based on a preliminary calculation using the TRAC code. Superheated steam with a superheating degree of 80 K at a given downcomer pressure of 180 kPa is injected equally through three intact cold legs into the downcomer. (authors)
Thomas, Hannah J; Scott, James G; Coates, Jason M; Connor, Jason P
2018-05-03
Intervention on adolescent bullying is reliant on valid and reliable measurement of victimization and perpetration experiences across different behavioural expressions. This study developed and validated a survey tool that integrates measurement of both traditional and cyber bullying to test a theoretically driven multi-dimensional model. Adolescents from 10 mainstream secondary schools completed a baseline and follow-up survey (N = 1,217; M age = 14 years; 66.2% male). The Bullying and cyberbullying Scale for Adolescents (BCS-A) developed for this study comprised parallel victimization and perpetration subscales, each with 20 items. Additional measures of bullying (Olweus Global Bullying and the Forms of Bullying Scale [FBS]), as well as measures of internalizing and externalizing problems, school connectedness, social support, and personality, were used to further assess validity. Factor structure was determined, and then, the suitability of items was assessed according to the following criteria: (1) factor interpretability, (2) item correlations, (3) model parsimony, and (4) measurement equivalence across victimization and perpetration experiences. The final models comprised four factors: physical, verbal, relational, and cyber. The final scale was revised to two 13-item subscales. The BCS-A demonstrated acceptable concurrent and convergent validity (internalizing and externalizing problems, school connectedness, social support, and personality), as well as predictive validity over 6 months. The BCS-A has sound psychometric properties. This tool establishes measurement equivalence across types of involvement and behavioural forms common among adolescents. An improved measurement method could add greater rigour to the evaluation of intervention programmes and also enable interventions to be tailored to subscale profiles. © 2018 The British Psychological Society.
Energy Technology Data Exchange (ETDEWEB)
Desai, V; Labby, Z; Culberson, W [University of Wisc Madison, Madison, WI (United States)
2016-06-15
Purpose: To determine whether body site-specific treatment plans form unique “plan class” clusters in a multi-dimensional analysis of plan complexity metrics such that a single beam quality correction determined for a representative plan could be universally applied within the “plan class”, thereby increasing the dosimetric accuracy of a detector’s response within a subset of similarly modulated nonstandard deliveries. Methods: We collected 95 clinical volumetric modulated arc therapy (VMAT) plans from four body sites (brain, lung, prostate, and spine). The lung data was further subdivided into SBRT and non-SBRT data for a total of five plan classes. For each control point in each plan, a variety of aperture-based complexity metrics were calculated and stored as unique characteristics of each patient plan. A multiple comparison of means analysis was performed such that every plan class was compared to every other plan class for every complexity metric in order to determine which groups could be considered different from one another. Statistical significance was assessed after correcting for multiple hypothesis testing. Results: Six out of a possible 10 pairwise plan class comparisons were uniquely distinguished based on at least nine out of 14 of the proposed metrics (Brain/Lung, Brain/SBRT lung, Lung/Prostate, Lung/SBRT Lung, Lung/Spine, Prostate/SBRT Lung). Eight out of 14 of the complexity metrics could distinguish at least six out of the possible 10 pairwise plan class comparisons. Conclusion: Aperture-based complexity metrics could prove to be useful tools to quantitatively describe a distinct class of treatment plans. Certain plan-averaged complexity metrics could be considered unique characteristics of a particular plan. A new approach to generating plan-class specific reference (pcsr) fields could be established through a targeted preservation of select complexity metrics or a clustering algorithm that identifies plans exhibiting similar
Yu, Xiaoyan; Liu, Lu; Sun, Li; Qian, Ying; Qian, Qiujin; Wu, Zhaomin; Cao, Qingjiu; Wang, Yufeng
2015-10-20
To explore the characteristics of emotional regulation in children with attention-deficit/hyperactivity disorder (ADHD), two hundred and eighty-two children diagnosed with ADHD according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) were recruited from the child psychiatric clinic of Peking University Sixth Hospital/Institute of Mental Health from August 2012 to April 2014, and 260 normal children from local primary schools were selected as the healthy control group. The emotional factors or items of the Conners' Parent Rating Scale, the Behavior Rating Inventory of Executive Function (BRIEF), Achenbach's Child Behavior Checklist (CBCL) and the Rutter Children Behavior Questionnaire were used to assess the characteristics of emotional regulation multi-dimensionally. After controlling for the effects of age, sex and intelligence quotient (IQ), the emotional lability (EL) scores of the ADHD group on the Conners scale were significantly higher than those of the healthy control group [(4.3±2.6) vs (1.4±1.5)]; the emotional control (ECTRL) scores of the ADHD group were significantly higher than those of the control group [(16.1±4.4) vs (12.0±2.5)]; the deficient emotional self-regulation (DESR) scores of the ADHD group were significantly higher than those of the control group [(26.8±11.0) vs (6.6±6.8)]; and the emotional symptoms (ES) scores of the ADHD group were significantly higher than those of the control group [(2.7±2.0) vs (1.7±1.5)]. These findings indicate multi-dimensional deficits of emotional regulation in children with ADHD.
Capasso, Roberto; Zurlo, Maria Clelia; Smith, Andrew P
2018-02-01
This study integrates different aspects of ethnicity and work-related stress dimensions (based on the Demands-Resources-Individual-Effects model, DRIVE [Mark, G. M., and A. P. Smith. 2008. "Stress Models: A Review and Suggested New Direction." In Occupational Health Psychology, edited by J. Houdmont and S. Leka, 111-144. Nottingham: Nottingham University Press]) and aims to test a multi-dimensional model that combines individual differences, ethnicity dimensions, work characteristics, and perceived job satisfaction/stress as independent variables in the prediction of subjective reports of health by workers differing in ethnicity. A questionnaire consisting of the following sections was submitted to 900 workers in Southern Italy: for individual and cultural characteristics, coping strategies, personality behaviours, and acculturation strategies; for work characteristics, perceived job demands and job resources/rewards; for appraisals, perceived job stress/satisfaction and racial discrimination; for subjective reports of health, psychological disorders and general health. Analyses were conducted to test the reliability and construct validity of the extracted factors for all dimensions involved in the proposed model, and logistic regression analyses were used to evaluate the main effects of the independent variables on the health outcomes. Principal component analysis (PCA) yielded seven factors for individual and cultural characteristics (emotional/relational coping, objective coping, Type A behaviour, negative affectivity, social inhibition, affirmation/maintenance of culture, and search for identity/adoption of the host culture); three factors for work characteristics (work demands, intrinsic/extrinsic rewards, and work resources); three factors for appraisals (perceived job satisfaction, perceived job stress, and perceived racial discrimination); and three factors for subjective reports of health (interpersonal disorders, anxious-depressive disorders, and general health). Logistic
International Nuclear Information System (INIS)
Nishimura, M.; Kamide, H.; Miyake, Y.
1997-04-01
Temperature distributions in fuel subassemblies of fast reactors interactively affect heat transfer from the center to the outer region of the core (inter-subassembly heat transfer) and the cooling capability of the inter-wrapper flow, as well as the maximum cladding temperature. The prediction of the temperature distribution in a subassembly is, therefore, one of the important issues for reactor safety assessment. Mixing factors were applied to the multi-dimensional thermal-hydraulic code AQUA to enhance its capability of predicting the maximum cladding temperature in fuel subassemblies. In previous studies, this analytical method had been validated through calculations of sodium experiments using the driver subassembly test rig PLANDTL-DHX with a 37-pin bundle and the blanket subassembly test rig CCTL-CFR with a 61-pin bundle. The errors of the analyses were comparable to the instrumentation error, so the modeling was capable of predicting the thermal-hydraulic field in middle-scale subassemblies. Before application to large-scale real subassemblies with more than 217 pins, however, the accuracy of the analytical method had to be verified through calculations of sodium tests in a large-scale pin bundle. Therefore, computations were performed on sodium experiments in a relatively large 169-pin subassembly with heater pins distributed sparsely within the bundle. The analysis succeeded in predicting the experimental temperature distributions. The errors in the temperature rise from the inlet to the maximum values were halved by using mixing factors, compared to those of analyses without mixing factors. Thus the modeling is capable of predicting large-scale real subassemblies. (author)
Mohamed, Mamdouh S.
2016-02-11
A conservative discretization of incompressible Navier–Stokes equations is developed based on discrete exterior calculus (DEC). A distinguishing feature of our method is the use of an algebraic discretization of the interior product operator and a combinatorial discretization of the wedge product. The governing equations are first rewritten using the exterior calculus notation, replacing vector calculus differential operators by the exterior derivative, Hodge star and wedge product operators. The discretization is then carried out by substituting with the corresponding discrete operators based on the DEC framework. Numerical experiments for flows over surfaces reveal a second order accuracy for the developed scheme when using structured-triangular meshes, and first order accuracy for otherwise unstructured meshes. By construction, the method is conservative in that both mass and vorticity are conserved up to machine precision. The relative error in kinetic energy for inviscid flow test cases converges in a second order fashion with both the mesh size and the time step.
Mohamed, Mamdouh S.; Hirani, Anil N.; Samtaney, Ravi
2016-05-01
A conservative discretization of incompressible Navier-Stokes equations is developed based on discrete exterior calculus (DEC). A distinguishing feature of our method is the use of an algebraic discretization of the interior product operator and a combinatorial discretization of the wedge product. The governing equations are first rewritten using the exterior calculus notation, replacing vector calculus differential operators by the exterior derivative, Hodge star and wedge product operators. The discretization is then carried out by substituting with the corresponding discrete operators based on the DEC framework. Numerical experiments for flows over surfaces reveal a second order accuracy for the developed scheme when using structured-triangular meshes, and first order accuracy for otherwise unstructured meshes. By construction, the method is conservative in that both mass and vorticity are conserved up to machine precision. The relative error in kinetic energy for inviscid flow test cases converges in a second order fashion with both the mesh size and the time step.
International Nuclear Information System (INIS)
Knuefer; Lindauer
1980-01-01
At spectacular events, a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurized water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)
Mimetic discretization methods
Castillo, Jose E
2013-01-01
To help solve physical and engineering problems, mimetic or compatible algebraic discretization methods employ discrete constructs to mimic the continuous identities and theorems found in vector calculus. Mimetic Discretization Methods focuses on the recent mimetic discretization method co-developed by the first author. Based on the Castillo-Grone operators, this simple mimetic discretization method is invariably valid for spatial dimensions no greater than three. The book also presents a numerical method for obtaining corresponding discrete operators that mimic the continuum differential and
Time Discretization Techniques
Gottlieb, S.; Ketcheson, David I.
2016-01-01
The time discretization of hyperbolic partial differential equations is typically the evolution of a system of ordinary differential equations obtained by spatial discretization of the original problem. Methods for this time evolution include
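The pattern described here (spatial discretization of a PDE yielding an ODE system, then a time integrator for its evolution) is the method of lines. A minimal sketch for the 1-D heat equation with a central-difference Laplacian and a classic fourth-order Runge-Kutta step; the grid size, time step, and test problem are illustrative:

```python
import numpy as np

# Method of lines: discretize u_t = u_xx in space, then evolve the
# resulting ODE system du/dt = L u with a Runge-Kutta time integrator.
nx = 64
x = np.linspace(0.0, 1.0, nx + 1)[:-1]   # periodic grid on [0, 1)
dx = x[1] - x[0]
u = np.sin(2 * np.pi * x)                # initial condition

def rhs(u):
    """Second-order central difference for u_xx with periodic BCs."""
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

dt = 0.2 * dx**2                         # inside the explicit stability limit
steps = int(round(0.01 / dt))
for _ in range(steps):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6   # classic RK4 step

# The exact solution damps the sine mode by exp(-4*pi^2*t)
exact = np.exp(-4 * np.pi**2 * steps * dt) * np.sin(2 * np.pi * x)
```

For hyperbolic problems the same structure applies, but the time integrators of interest are typically strong-stability-preserving (SSP) Runge-Kutta methods rather than the classic RK4 shown here.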
Brabets, Timothy P.; Conaway, Jeffrey S.
2009-01-01
The Copper River Basin, the sixth largest watershed in Alaska, drains an area of 24,200 square miles. This large, glacier-fed river flows across a wide alluvial fan before it enters the Gulf of Alaska. Bridges along the Copper River Highway, which traverses the alluvial fan, have been impacted by channel migration. Due to a major channel change in 2001, Bridge 339 at Mile 36 of the highway has undergone excessive scour, resulting in damage to its abutments and approaches. During the snow- and ice-melt runoff season, which typically extends from mid-May to September, the design discharge for the bridge often is exceeded. The approach channel shifts continuously, and during our study it has shifted back and forth from the left bank to a course along the right bank nearly parallel to the road. Maintenance at Bridge 339 has been costly and will continue to be so if no action is taken. Possible solutions to the scour and erosion problem include (1) constructing a guide bank to redirect flow, (2) dredging approximately 1,000 feet of channel above the bridge to align flow perpendicular to the bridge, and (3) extending the bridge. The USGS Multi-Dimensional Surface Water Modeling System (MD_SWMS) was used to assess these possible solutions. The major limitation of modeling these scenarios was the inability to predict ongoing channel migration. We used a hybrid dataset of surveyed and synthetic bathymetry in the approach channel, which provided the best approximation of this dynamic system. Under existing conditions and at the highest measured discharge and stage of 32,500 ft³/s and 51.08 ft, respectively, the velocities and shear stresses simulated by MD_SWMS indicate scour and erosion will continue. Construction of a 250-foot-long guide bank would not improve conditions because it is not long enough. Dredging a channel upstream of Bridge 339 would help align the flow perpendicular to Bridge 339, but because of the mobility of the channel bed, the dredged channel would
Directory of Open Access Journals (Sweden)
Yanhong Wang
2017-06-01
Full Text Available Lichong Shengsui Yin (LCSSY) is an effective and classic compound prescription of Traditional Chinese Medicines (TCMs) used for the treatment of ovarian cancer. To investigate its pharmacodynamic basis for treating ovarian cancer, the multi-dimensional spectrum-effect relationship was determined. Four compositions (I to IV) were obtained by extracting LCSSY successively with supercritical CO2 fluid extraction, 75% ethanol reflux extraction, and the water extraction-ethanol precipitation method. Nine samples for pharmacological evaluation and fingerprint analysis were prepared by changing the content of the four compositions. The specific proportions of the four compositions were designed according to a four-factor, three-level L9(3⁴) orthogonal test. The pharmacological evaluation included in vitro tumor inhibition experiments and the survival extension rate in tumor-bearing nude mice. The fingerprint analyzed by chromatographic condition I (high-performance liquid chromatography-photodiode array detector, HPLC-PDA) identified 19 common peaks. High-performance liquid chromatography-photodiode array detector-Evaporative Light-scattering Detector (HPLC-PDA-ELSD) hyphenated techniques were used to compensate for the use of a single detector, and the fingerprint analyzed by chromatographic condition II identified 28 common peaks in PDA and 23 common peaks in ELSD. Furthermore, multiple statistical analyses were utilized to calculate the relationships between the peaks and the pharmacological results. The union of the regression and the correlation analysis results were the peaks of X5, X9, X11, X12, X16, X18, Y5, Y8, Y12, Y14, Y20, Z4, Z5, Z6, and Z8. The intersection of the regression and the correlation analysis results were the peaks of X11, X12, X16, X18, Y5, Y12, and Z5. The correlated peaks were assigned by comparing the fingerprints with the negative control samples and reference standard samples, and identifying the structure using high
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
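The one-step prediction errors (innovations) that both the maximum-likelihood and least-squares criteria are built from can be sketched for a scalar linear discrete-time state-space model. This is a generic illustration with made-up parameters, not the paper's setup:

```python
import random

# Scalar state-space model: x[k+1] = a*x[k] + w,  y[k] = c*x[k] + v
# Illustrative parameters (assumptions, not from the paper).
a, c = 0.9, 1.0
q, r = 0.1, 0.2          # process / measurement noise variances

def kalman_innovations(ys, a=a, c=c, q=q, r=r):
    """One-step-ahead prediction errors via the Kalman predictor."""
    x_pred, p_pred = 0.0, 1.0      # prior mean and variance of x[0]
    errs = []
    for y in ys:
        e = y - c * x_pred                 # innovation = prediction error
        s = c * p_pred * c + r             # innovation variance
        k = a * p_pred * c / s             # predictor gain
        x_pred = a * x_pred + k * e        # combined time/measurement update
        p_pred = a * p_pred * a + q - k * s * k   # Riccati recursion
        errs.append(e)
    return errs
```

Summing the squared innovations gives the least-squares criterion; weighting each by its variance `s` (plus the `log s` term) gives the likelihood-based one.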
International Nuclear Information System (INIS)
Winterflood, A.H.
1980-01-01
In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error, a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)
Directory of Open Access Journals (Sweden)
Michelle Fernandes
Full Text Available BACKGROUND: The International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st) Project is a population-based, longitudinal study describing early growth and development in an optimally healthy cohort of 4607 mothers and newborns. At 24 months, children are assessed for neurodevelopmental outcomes with the INTERGROWTH-21st Neurodevelopment Package. This paper describes neurodevelopment tools for preschoolers and the systematic approach leading to the development of the Package. METHODS: An advisory panel shortlisted project-specific criteria (such as multi-dimensional assessments and suitability for international populations) to be fulfilled by a neurodevelopment instrument. A literature review of well-established tools for preschoolers revealed 47 candidates, none of which fulfilled all the project's criteria. A multi-dimensional assessment was, therefore, compiled using a package-based approach by: (i) categorizing desired outcomes into domains, (ii) devising domain-specific criteria for tool selection, and (iii) selecting the most appropriate measure for each domain. RESULTS: The Package measures vision (Cardiff tests); cortical auditory processing (auditory evoked potentials to a novelty oddball paradigm); and cognition, language skills, behavior, motor skills and attention (the INTERGROWTH-21st Neurodevelopment Assessment) in 35-45 minutes. Sleep-wake patterns (actigraphy) are also assessed. Tablet-based applications with integrated quality checks and automated, wireless electroencephalography make the Package easy to administer in the field by non-specialist staff. The Package is in use in Brazil, India, Italy, Kenya and the United Kingdom. CONCLUSIONS: The INTERGROWTH-21st Neurodevelopment Package is a multi-dimensional instrument measuring early child development (ECD). Its developmental approach may be useful to those involved in large-scale ECD research and surveillance efforts.
Effective Hamiltonian for travelling discrete breathers
MacKay, Robert S.; Sepulchre, Jacques-Alexandre
2002-05-01
Hamiltonian chains of oscillators in general probably do not sustain exact travelling discrete breathers. However, solutions that look like moving discrete breathers for some time are not difficult to observe in numerical experiments. In this paper we propose an abstract framework for the description of approximate travelling discrete breathers in Hamiltonian chains of oscillators. The method is based on the construction of an effective Hamiltonian enabling one to describe the dynamics of the translation degree of freedom of moving breathers. Error estimates for the approximate dynamics are also studied. The concept of the Peierls-Nabarro barrier can be made clear in this framework. We illustrate the method with two simple examples, namely the Salerno model, which interpolates between the Ablowitz-Ladik lattice and the discrete nonlinear Schrödinger system, and the Fermi-Pasta-Ulam chain.
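One end of the Salerno interpolation, the discrete nonlinear Schrödinger (DNLS) chain, can be integrated directly to observe breather-like pulses. The sketch below assumes the common DNLS normalisation i dψₙ/dt = -(ψₙ₊₁ + ψₙ₋₁) - |ψₙ|²ψₙ, which may differ from the paper's conventions, and checks conservation of the norm Σ|ψₙ|²:

```python
def dnls_rhs(psi):
    """Right-hand side of the DNLS chain on a periodic lattice."""
    n = len(psi)
    return [1j * (psi[(k + 1) % n] + psi[(k - 1) % n] + abs(psi[k]) ** 2 * psi[k])
            for k in range(n)]

def rk4_step(psi, dt):
    """One classical Runge-Kutta step for the complex amplitudes."""
    def add(a, b, s):
        return [x + s * y for x, y in zip(a, b)]
    k1 = dnls_rhs(psi)
    k2 = dnls_rhs(add(psi, k1, dt / 2))
    k3 = dnls_rhs(add(psi, k2, dt / 2))
    k4 = dnls_rhs(add(psi, k3, dt))
    return [p + dt / 6 * (a + 2 * b + 2 * c + d)
            for p, a, b, c, d in zip(psi, k1, k2, k3, k4)]

def norm(psi):
    """Conserved quantity of the DNLS flow: sum of |psi_n|^2."""
    return sum(abs(p) ** 2 for p in psi)
```

Starting from a single excited site produces a pinned breather; travelling ones require an additional phase gradient, which is where the Peierls-Nabarro barrier discussed above enters.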
Adjoint Based A Posteriori Analysis of Multiscale Mortar Discretizations with Multinumerics
Tavener, Simon
2013-01-01
In this paper we derive a posteriori error estimates for linear functionals of the solution to an elliptic problem discretized using a multiscale nonoverlapping domain decomposition method. The error estimates are based on the solution of an appropriately defined adjoint problem. We present a general framework that allows us to consider both primal and mixed formulations of the forward and adjoint problems within each subdomain. The primal subdomains are discretized using either an interior penalty discontinuous Galerkin method or a continuous Galerkin method with weakly imposed Dirichlet conditions. The mixed subdomains are discretized using Raviart-Thomas mixed finite elements. The a posteriori error estimate also accounts for the errors due to adjoint-inconsistent subdomain discretizations. The coupling between the subdomain discretizations is achieved via a mortar space. We show that the numerical discretization error can be broken down into subdomain and mortar components which may be used to drive adaptive refinement. Copyright © by SIAM.
Energy Technology Data Exchange (ETDEWEB)
Won-Jae, Lee; Kwi-Seok, Ha; Chul-Hwa, Song [Korea Atomic Energy Research Inst., Daejeon (Korea, Republic of)
2001-07-01
The MARS code has been assessed for the downcomer multi-dimensional thermal hydraulics during a large break loss-of-coolant accident (LBLOCA) reflood of the Korean Next Generation Reactor (KNGR) that adopted an upper direct vessel injection (DVI) design. Direct DVI bypass and downcomer level sweep-out tests carried out at a 1/50-scale air-water DVI test facility are simulated to examine the capability of MARS. Test conditions are selected such that they represent typical reflood conditions of KNGR, that is, DVI injection velocities of 1.0-1.6 m/sec and air injection velocities of 18.0-35.0 m/sec, for single and double DVI configurations. The MARS calculation is first adjusted to the experimental DVI film distribution that largely affects air-water interaction in a scaled-down downcomer; then, the code is assessed for the selected test matrix. With some improvements of the MARS thermal-hydraulic (T/H) models, it has been demonstrated that the MARS code is capable of simulating the direct DVI bypass and downcomer level sweep-out as well as the multi-dimensional thermal hydraulics in the downcomer, where the condensation effect is excluded. (authors)
International Nuclear Information System (INIS)
Tanaka, Masa-aki; Kamide, Hideki
2001-02-01
This investigation deals with the porous blockage in a wire spacer type fuel subassembly in Fast Breeder Reactors (FBRs). A multi-dimensional analysis method for a porous blockage in a fuel subassembly is developed using the standard k-ε turbulence model with the typical correlations in handbooks. The purpose of this analysis method is to evaluate the position and the magnitude of the maximum temperature, and to investigate the thermo-hydraulic phenomena in the porous blockage. Verification of this analysis method was conducted based on the results of a 4-subchannel geometry water test. It was revealed that the evaluation of the porosity distribution and the particle diameter in a porous blockage was important to predict the temperature distribution. This analysis method could simulate the spatial characteristics of the velocity and temperature distributions in the blockage and evaluate the pin surface temperature inside the porous blockage. Through the verification of this analysis method, it is shown that this multi-dimensional analysis method is useful to predict the thermo-hydraulic field and the highest temperature in a porous blockage. (author)
Adjoint Based A Posteriori Analysis of Multiscale Mortar Discretizations with Multinumerics
Tavener, Simon; Wildey, Tim
2013-01-01
In this paper we derive a posteriori error estimates for linear functionals of the solution to an elliptic problem discretized using a multiscale nonoverlapping domain decomposition method. The error estimates are based on the solution
Speeding Up Network Simulations Using Discrete Time
Lucas, Aaron; Armbruster, Benjamin
2013-01-01
We develop a way of simulating disease spread in networks faster at the cost of some accuracy. Instead of a discrete event simulation (DES) we use a discrete time simulation. This aggregates events into time periods. We prove a bound on the accuracy attained. We also discuss the choice of step size and do an analytical comparison of the computational costs. Our error bound concept comes from the theory of numerical methods for SDEs and the basic proof structure comes from the theory of numeri...
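The event-aggregation idea can be sketched as a discrete-time SIR step on a contact network. This is a generic sketch under assumed transmission and recovery rates, not the authors' exact model: each step of length dt, every infected node transmits along each edge with probability 1 - exp(-β·dt) and recovers with probability 1 - exp(-γ·dt), so all events within a step are lumped together:

```python
import math
import random

def discrete_time_sir(adj, infected, beta, gamma, dt, steps, rng):
    """Discrete-time network SIR: aggregates transmission/recovery events
    into steps of length dt, trading accuracy for speed versus a DES."""
    state = {v: 'S' for v in adj}
    for v in infected:
        state[v] = 'I'
    p_inf = 1.0 - math.exp(-beta * dt)    # per-edge, per-step infection prob.
    p_rec = 1.0 - math.exp(-gamma * dt)   # per-node, per-step recovery prob.
    for _ in range(steps):
        new = dict(state)                  # synchronous update
        for v, s in state.items():
            if s == 'I':
                for u in adj[v]:
                    if state[u] == 'S' and rng.random() < p_inf:
                        new[u] = 'I'
                if rng.random() < p_rec:
                    new[v] = 'R'
        state = new
    return state
```

Shrinking dt recovers the discrete-event behaviour at the cost of more steps, which is exactly the accuracy/cost trade-off the bound in the abstract quantifies.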
Watson, Andrew William
2017-08-01
pocket above the liquid region, respectively. One of the lingering challenges in this experiment, however, is the determination of an event's position along the other two spatial dimensions, that is, its transverse or "xy" position. Some liquid noble element TPCs have achieved remarkably accurate event position reconstructions, typically using the relative amounts of S2 light collected by Photo-Multiplier Tubes ("PMTs") as the input data to their reconstruction algorithms. This approach has been particularly challenging in DarkSide-50, partly due to unexpected asymmetries in the detector, and partly due to the design of the detector itself. A variety of xy-Reconstruction methods ("xy methods" for short) have come and gone in DS-50, with only a few of them providing useful results. The xy method described in this dissertation is a two-step Principal Component Analysis / Multi-Dimensional Fit (PCAMDF) reconstruction. In a nutshell, this method develops a functional mapping from the 19-dimensional space of the signal received by the PMTs at the "top" (or the "anode" end) of the DarkSide-50 TPC to each of the transverse coordinates, x and y. PCAMDF is a low-level "machine learning" algorithm, and as such, needs to be "trained" with a sample of representative events; in this case, these are provided by the DarkSide geant4-based Monte Carlo, g4ds. In this work, a thorough description of the PCAMDF xy-Reconstruction method is provided along with an analysis of its performance on MC events and data. The method is applied to several classes of data events, including coincident decays, external gamma rays from calibration sources, and both atmospheric argon "AAr" and underground argon "UAr". Discrepancies between the MC and data are explored, and fiducial volume cuts are calculated.
Finally, a novel method is proposed for finding the accuracy of the PCAMDF reconstruction on data by using the asymmetry of the S2 light collected on the anode and cathode PMT arrays as a function
Baecklund transformations for discrete Painleve equations: Discrete PII-PV
International Nuclear Information System (INIS)
Sakka, A.; Mugan, U.
2006-01-01
Transformation properties of discrete Painleve equations are investigated by using an algorithmic method. This method yields explicit transformations which relate the solutions of discrete Painleve equations, discrete PII-PV, with different values of the parameters. The particular solutions of discrete Painleve equations which are expressible in terms of the discrete analogue of the classical special functions can also be obtained from these transformations.
Discrete Gabor transform and discrete Zak transform
Bastiaans, M.J.; Namazi, N.M.; Matthews, K.
1996-01-01
Gabor's expansion of a discrete-time signal into a set of shifted and modulated versions of an elementary signal or synthesis window is introduced, along with the inverse operation, i.e. the Gabor transform, which uses an analysis window that is related to the synthesis window and with the help of
Discrete modeling considerations in multiphase fluid dynamics
International Nuclear Information System (INIS)
Ransom, V.H.; Ramshaw, J.D.
1988-01-01
The modeling of multiphase flows plays a fundamental role in light water reactor safety. The main ingredients in our discrete modeling Weltanschauung are the following considerations: (1) Any physical model must be cast into discrete form for a digital computer. (2) The usual approach of formulating models in differential form and then discretizing them is potentially hazardous. It may be preferable to formulate the model in discrete terms from the outset. (3) Computer time and storage constraints limit the resolution that can be employed in practical calculations. These limits effectively define the physical phenomena, length scales, and time scales which cannot be directly represented in the calculation and therefore must be modeled. This information should be injected into the model formulation process at an early stage. (4) Practical resolution limits are generally so coarse that traditional convergence and truncation-error analyses become irrelevant. (5) A discrete model constitutes a reduced description of a physical system, from which fine-scale details are eliminated. This elimination creates a statistical closure problem. Methods from statistical physics may therefore be useful in the formulation of discrete models. In the present paper we elaborate on these themes and illustrate them with simple examples. 48 refs
Discrete Mathematics Re "Tooled."
Grassl, Richard M.; Mingus, Tabitha T. Y.
1999-01-01
Indicates the importance of teaching discrete mathematics. Describes how the use of technology can enhance the teaching and learning of discrete mathematics. Explorations using Excel, Derive, and the TI-92 showed how preservice and inservice teachers experienced a new dimension in problem solving and discovery. (ASK)
DEFF Research Database (Denmark)
Frier, Christian; Sørensen, John Dalsgaard
2005-01-01
For many reinforced concrete structures corrosion of the reinforcement is an important problem since it can result in expensive maintenance and repair actions. Further, a significant reduction of the load-bearing capacity can occur. One mode of corrosion initiation occurs when the chloride content...... is modeled by a 2-dimensional diffusion process by FEM (Finite Element Method) and the diffusion coefficient, surface chloride concentration and reinforcement cover depth are modeled by multidimensional stochastic fields, which are discretized using the EOLE (Expansion Optimum Linear Estimation) approach....... As an example a bridge pier in a marine environment is considered and the results are given in terms of the distribution of the time for initialization of corrosion...
Homogenization of discrete media
International Nuclear Information System (INIS)
Pradel, F.; Sab, K.
1998-01-01
Materials such as granular media and beam assemblies are naturally seen as discrete media: they look like geometrical points linked together through energetic expressions. Our purpose is to extend discrete kinematics to that of an equivalent continuous material. First we explain how the localisation tool is built for periodic materials according to the assumed type of continuum (classical Cauchy or Cosserat media). Once the bridge between discrete and continuum media is built, we apply it to two two-dimensional beam-assembly structures: the honeycomb and a structurally reinforced variant. The new behaviour is then applied to the simple plane shear problem in a Cosserat continuum and compared with the exact discrete solution. By means of this example, we establish the agreement of our new model with real structures. The method has a longer range than mechanics and can be applied to any discrete problem, such as electromagnetism, in which the relationship between geometrical points can be summed up by an energetic function. (orig.)
Okuyama, Yoshifumi
2014-01-01
Discrete Control Systems establishes a basis for the analysis and design of discretized/quantized control systems for continuous physical systems. Beginning with the necessary mathematical foundations and system-model descriptions, the text moves on to derive a robust stability condition. To keep a practical perspective on the uncertain physical systems considered, most of the methods treated are carried out in the frequency domain. As part of the design procedure, modified Nyquist–Hall and Nichols diagrams are presented and discretized proportional–integral–derivative control schemes are reconsidered. Schemes for model-reference feedback and discrete-type observers are proposed. Although single-loop feedback systems form the core of the text, some consideration is given to multiple loops and nonlinearities. The robust control performance and stability of interval systems (with multiple uncertainties) are outlined. Finally, the monograph describes the relationship between feedback-control and discrete ev...
Discrete repulsive oscillator wavefunctions
International Nuclear Information System (INIS)
Munoz, Carlos A; Rueda-Paz, Juvenal; Wolf, Kurt Bernardo
2009-01-01
For the study of infinite discrete systems on phase space, the three-dimensional Lorentz algebra and group, so(2,1) and SO(2,1), provide a discrete model of the repulsive oscillator. Its eigenfunctions are found in the principal irreducible representation series, where the compact generator, which we identify with the position operator, has the infinite discrete spectrum of the integers Z, while the spectrum of energies is a double continuum. The right- and left-moving wavefunctions are given by hypergeometric functions that form a Dirac basis for ℓ²(Z). Under contraction, the discrete system limits to the well-known quantum repulsive oscillator. Numerical computations of finite approximations raise further questions on the use of Dirac bases for infinite discrete systems.
Energy Technology Data Exchange (ETDEWEB)
Morris, J; Johnson, S
2007-12-03
The Distinct Element Method (also frequently referred to as the Discrete Element Method) (DEM) is a Lagrangian numerical technique where the computational domain consists of discrete solid elements which interact via compliant contacts. This can be contrasted with Finite Element Methods where the computational domain is assumed to represent a continuum (although many modern implementations of the FEM can accommodate some Distinct Element capabilities). Often the terms Discrete Element Method and Distinct Element Method are used interchangeably in the literature, although Cundall and Hart (1992) suggested that Discrete Element Methods should be a more inclusive term covering Distinct Element Methods, Displacement Discontinuity Analysis and Modal Methods. In this work, DEM specifically refers to the Distinct Element Method, where the discrete elements interact via compliant contacts, in contrast with Displacement Discontinuity Analysis where the contacts are rigid and all compliance is taken up by the adjacent intact material.
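The defining feature named above, discrete elements interacting via compliant contacts, can be sketched in one dimension with two discs and a linear contact spring. All names and parameters here are illustrative assumptions; real DEM codes add damping, friction and rotation:

```python
def dem_two_particles(x1, x2, v1, v2, r, k, m, dt, steps):
    """1-D Distinct Element sketch: two discs of radius r and mass m interact
    through a compliant linear spring of stiffness k when they overlap.
    Semi-implicit Euler (velocity then position) keeps the bounce stable."""
    for _ in range(steps):
        overlap = 2.0 * r - abs(x2 - x1)
        f = k * overlap if overlap > 0.0 else 0.0   # repulsive contact force
        n_dir = 1.0 if x2 >= x1 else -1.0           # contact normal from 1 to 2
        v1 += -n_dir * f / m * dt
        v2 += n_dir * f / m * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return x1, x2, v1, v2
```

Setting k large recovers the near-rigid contacts of Displacement Discontinuity Analysis, which is exactly the distinction the abstract draws.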
Commutative discrete filtering on unstructured grids based on least-squares techniques
International Nuclear Information System (INIS)
Haselbacher, Andreas; Vasilyev, Oleg V.
2003-01-01
The present work is concerned with the development of commutative discrete filters for unstructured grids and contains two main contributions. First, building on the work of Marsden et al. [J. Comp. Phys. 175 (2002) 584], a new commutative discrete filter based on least-squares techniques is constructed. Second, a new analysis of the discrete commutation error is carried out. The analysis indicates that the discrete commutation error is not only dependent on the number of vanishing moments of the filter weights, but also on the order of accuracy of the discrete gradient operator. The results of the analysis are confirmed by grid-refinement studies
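The interplay of filter moments and least squares can be illustrated on a 1-D scattered point set (a stand-in for an unstructured-grid stencil). The construction below is a sketch of the general idea, not the paper's exact filter: minimise the weight norm subject to moment constraints, so that the filter reproduces constants and has a vanishing first moment about the filter centre:

```python
import numpy as np

def filter_weights(xs, xc):
    """Discrete filter weights on scattered points xs about centre xc.
    Constraints: sum(w) = 1 and first moment about xc vanishes.
    Solves min ||w||^2 s.t. C w = d via the KKT system."""
    xs = np.asarray(xs, dtype=float)
    n = len(xs)
    C = np.vstack([np.ones_like(xs), xs - xc])   # moment constraint matrix
    d = np.array([1.0, 0.0])                     # target moments
    kkt = np.block([[np.eye(n), C.T],
                    [C, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]                               # discard Lagrange multipliers
```

With these two constraints the filter leaves linear functions unchanged at the centre; adding higher moment constraints raises the order of the commutation error, mirroring the dependence on vanishing moments noted in the abstract.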
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Energy Technology Data Exchange (ETDEWEB)
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
Kleiman, Evan M; Chiara, Alexandra M; Liu, Richard T; Jager-Hyman, Shari G; Choi, Jimmy Y; Alloy, Lauren B
2017-02-01
Optimism has been conceptualised variously as positive expectations (PE) for the future, optimistic attributions, illusion of control, and self-enhancing biases. Relatively little research has examined these multiple dimensions of optimism in relation to psychological and physical health. The current study assessed the multi-dimensional nature of optimism within a prospective vulnerability-stress framework. Initial principal component analyses revealed the following dimensions: PEs, Inferential Style (IS), Sense of Invulnerability (SI), and Overconfidence (O). Prospective follow-up analyses demonstrated that PE was associated with fewer depressive episodes and moderated the effect of stressful life events on depressive symptoms. SI also moderated the effect of life stress on anxiety symptoms. Generally, our findings indicated that optimism is a multifaceted construct and not all forms of optimism have the same effects on well-being. Specifically, our findings indicated that PE may be the most relevant to depression, whereas SI may be the most relevant to anxiety.
DEFF Research Database (Denmark)
Saeed Madani, Seyed; Swierczynski, Maciej Jozef; Kær, Søren Knudsen
2017-01-01
This paper gives insight into the discharge behavior of lithium-ion batteries based on the investigations done by researchers [1-19]. In this article, the battery's discharge behaviour at various discharge rates is studied and surface monitor, discharge curve, volume monitor...... to analyse the discharge behaviour of lithium-ion batteries. The results show that the surface monitor plot of the discharge curve at 1 C has a decreasing trend and the volume monitor plot of maximum temperature in the domain has a slightly increasing pattern over the simulation time. For the curves of discharge...... plot of maximum temperature in the domain and maximum temperature in the area are illustrated. Additionally, an external and internal short-circuit treatment for three cases has been studied. The Dual-Potential Multi-Scale Multi-Dimensional (MSMD) Battery Model (BM) was used by ANSYS FLUENT software...
Directory of Open Access Journals (Sweden)
Yuri Luchko
2017-12-01
Full Text Available In this paper, some new properties of the fundamental solution to the multi-dimensional space- and time-fractional diffusion-wave equation are deduced. We start with the Mellin-Barnes representation of the fundamental solution that was derived in the previous publications of the author. The Mellin-Barnes integral is used to obtain two new representations of the fundamental solution in the form of the Mellin convolution of the special functions of the Wright type. Moreover, some new closed-form formulas for particular cases of the fundamental solution are derived. In particular, we solve the open problem of the representation of the fundamental solution to the two-dimensional neutral-fractional diffusion-wave equation in terms of the known special functions.
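For orientation, the equation in question can be written in a common notation (an assumption on my part: the Caputo time-fractional derivative of order β together with the fractional Laplacian of order α; the paper's exact normalisation may differ):

```latex
D_t^{\beta} u(\mathbf{x},t) = -(-\Delta)^{\alpha/2}\, u(\mathbf{x},t),
\qquad \mathbf{x}\in\mathbb{R}^{n},\ t>0,\quad 0<\alpha\le 2,\ 0<\beta\le 2,
```

with the neutral-fractional case mentioned in the abstract corresponding to α = β.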
Directory of Open Access Journals (Sweden)
Johanna Stahl
2015-03-01
Full Text Available Providing gifted students with personalized talent development programs is a challenge for teachers and educators alike. The multi-dimensional talent development tool (mBET) guides teachers on their way to individualized gifted programs. Within a holistic and systemic concept of giftedness, the mBET brings together the perspectives of teachers, parents and the individual student in assessing talents as well as relevant personality characteristics and environment factors. By facilitating support-oriented round-table talks, the mBET helps teachers, parents and students to develop individually tailored talent development programs, taking into consideration both talents and other factors relevant for successful gifted education (i.e. non-cognitive personality characteristics and environmental factors).
Izadi, F A; Bagirov, G
2009-01-01
With its origins stretching back several centuries, discrete calculus is now an increasingly central methodology for many problems related to discrete systems and algorithms. The topics covered here usually arise in many branches of science and technology, especially in discrete mathematics, numerical analysis, statistics and probability theory as well as in electrical engineering, but our viewpoint here is that these topics belong to a much more general realm of mathematics; namely calculus and differential equations because of the remarkable analogy of the subject to this branch of mathemati
Brodie, Matthew A; Okubo, Yoshiro; Annegarn, Janneke; Wieching, Rainer; Lord, Stephen R; Delbaere, Kim
2017-01-01
Falls and physical deconditioning are two major health problems for older people. Recent advances in remote physiological monitoring provide new opportunities to investigate why walking exercise, with its many health benefits, can both increase and decrease fall rates in older people. In this paper we combine remote wearable device monitoring of daily gait with non-linear multi-dimensional pattern recognition analysis to disentangle the complex associations between walking, health and fall rates. One week of activities of daily living (ADL) were recorded with a wearable device in 96 independent living older people prior to completing 6 months of exergaming interventions. Using the wearable device data, the quantity, intensity, variability and distribution of daily walking patterns were assessed. At baseline, clinical assessments of health, falls, sensorimotor and physiological fall risks were completed. At 6 months, fall rates, sensorimotor and physiological fall risks were re-assessed. A non-linear multi-dimensional analysis was conducted to identify risk-groups according to their daily walking patterns. Four distinct risk-groups were identified: the Impaired (93% fallers), Restrained (8% fallers), Active (50% fallers) and Athletic (4% fallers). Walking was strongly associated with multiple health benefits and protective of falls for the top performing Athletic risk-group. However, in the middle of the spectrum, the Active risk-group, who were more active, younger and healthier, were 6.25 times more likely to be fallers than their Restrained counterparts. Remote monitoring of daily walking patterns may provide a new way to distinguish Impaired people at risk of falling because of frailty from Active people at risk of falling from greater exposure to situations where falls could occur, but further validation is required. Wearable device risk-profiling could help in developing more personalised interventions for older people seeking the health benefits of walking.
Yu, Han
2013-09-01
The Talbot-Ogden hydrology model provides a fast mass conservative method to compute infiltration in unsaturated soils. As a replacement for a model based on the Richards equation, it separates the groundwater movement into infiltration and redistribution for every time step. The typical feature making this method fast is the discretization of the moisture content domain rather than the spatial one. The Talbot-Ogden model rapidly determines only how well ground water and aquifers are recharged. Hence, it differs from models based on advanced reservoir modeling, which are uniformly far more expensive computationally since they determine where the water moves in space instead, a completely different and more complex problem. According to the pore-size distribution curve for many soils, this paper extends the one-dimensional moisture content domain into a two-dimensional one by keeping the vertical spatial axis. The proposed extension can describe any pore-size or porosity distribution as an important soil feature. Based on this extension, infiltration and redistribution are restudied. The unconditional conservation of mass in the Talbot-Ogden model is inherited in this extended model. A numerical example is given for the extended model.
The Suppression of Energy Discretization Errors in Multigroup Transport Calculations
International Nuclear Information System (INIS)
Larsen, Edward
2013-01-01
The objective of this project is to develop, implement, and test new deterministic methods to solve, as efficiently as possible, multigroup neutron transport problems having an extremely large number of groups. Our approach was to (i) use the standard CMFD method to 'coarsen' the space-angle grid, yielding a multigroup diffusion equation, and (ii) use a new multigrid-in-space-and-energy technique to efficiently solve the multigroup diffusion problem. The overall strategy of (i) how to coarsen the spatial and energy grids, and (ii) how to navigate through the various grids, has the goal of minimizing the overall computational effort. This approach yields not only the fine-grid solution, but also coarse-group flux-weighted cross sections that can be used for other related problems.
Galerkin v. discrete-optimal projection in nonlinear model reduction
Energy Technology Data Exchange (ETDEWEB)
Carlberg, Kevin Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Barone, Matthew Franklin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Antil, Harbir [George Mason Univ., Fairfax, VA (United States)
2015-04-01
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
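The distinction between the two projections can be sketched on a linear test problem with implicit Euler (a minimal illustration, not the GNAT implementation; the operator and reduced basis below are random stand-ins): Galerkin projects the ODE and then discretizes, while the discrete-optimal (least-squares) approach minimizes the fully discrete residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, dt = 50, 5, 1e-2
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))  # stable-ish stand-in operator
V, _ = np.linalg.qr(rng.standard_normal((n, k)))     # orthonormal reduced basis

def galerkin_step(xh):
    """Project the ODE dx/dt = A x onto V, then apply implicit Euler:
    (I - dt * V^T A V) xh_new = xh."""
    Ar = V.T @ A @ V
    return np.linalg.solve(np.eye(k) - dt * Ar, xh)

def lspg_step(xh):
    """Discretize first, then minimize the fully discrete residual norm:
    xh_new = argmin_y || (I - dt*A) V y - V xh ||_2 (discrete-optimal)."""
    J = (np.eye(n) - dt * A) @ V
    y, *_ = np.linalg.lstsq(J, V @ xh, rcond=None)
    return y
```

For small time steps the two steps agree to O(dt^2), which is why the comparison in the paper hinges on the interplay between the time step and the reduced basis.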
Finite Discrete Gabor Analysis
DEFF Research Database (Denmark)
Søndergaard, Peter Lempel
2007-01-01
...frequency bands at certain times. Gabor theory can be formulated for both functions on the real line and for discrete signals of finite length. The two theories are largely the same because many aspects come from the same underlying theory of locally compact Abelian groups. The two types of Gabor systems can also be related by sampling and periodization. This thesis extends this theory by showing new results for window construction. It also provides a discussion of the problems associated with discrete Gabor bases. The sampling and periodization connection is handy because it allows Gabor systems on the real line to be well approximated by finite and discrete Gabor frames. This method of approximation is especially attractive because efficient numerical methods exist for doing computations with finite, discrete Gabor systems. This thesis presents new algorithms for the efficient computation of finite...
Adaptive Discrete Hypergraph Matching.
Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao
2018-02-01
This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad-hoc post-binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. This solver can be trapped in an m-circle sequence under moderate conditions, where m is the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out of this degenerate case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.
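The iterative discrete-assignment idea can be sketched in the second-order (pairwise) case rather than the paper's higher-order tensors; the affinity matrix below is a toy with a planted matching, not a benchmark instance:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iterative_discrete_matching(K, n, iters=50):
    """Iterate: contract the pairwise affinity matrix K (rows/columns
    indexed by candidate matches (i, a)) with the current assignment x,
    reshape the gradient K @ x into an n-by-n score matrix, and solve a
    linear assignment problem; stop when the permutation repeats
    (a discrete fixed point)."""
    x = np.eye(n).reshape(-1)          # start from the identity matching
    c = np.arange(n)
    for _ in range(iters):
        score = (K @ x).reshape(n, n)
        _, c_new = linear_sum_assignment(-score)   # maximize total score
        if np.array_equal(c_new, c):
            break
        c = c_new
        x = np.zeros((n, n))
        x[np.arange(n), c] = 1.0
        x = x.reshape(-1)
    return c
```

Each pass solves an exact linear assignment, so the iterate stays binary throughout; no annealing or post-binarization is needed, which mirrors the paper's motivation.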
Goodrich, Christopher
2015-01-01
This text provides the first comprehensive treatment of the discrete fractional calculus. Experienced researchers will find the text useful as a reference for discrete fractional calculus and topics of current interest. Students who are interested in learning about discrete fractional calculus will find this text to provide a useful starting point. Several exercises are offered at the end of each chapter and select answers have been provided at the end of the book. The presentation of the content is designed to give ample flexibility for potential use in a myriad of courses and for independent study. The novel approach taken by the authors includes a simultaneous treatment of the fractional- and integer-order difference calculus (on a variety of time scales, including both the usual forward and backwards difference operators). The reader will acquire a solid foundation in the classical topics of the discrete calculus while being introduced to exciting recent developments, bringing them to the frontiers of the...
International Nuclear Information System (INIS)
Williams, Ruth M
2006-01-01
A review is given of a number of approaches to discrete quantum gravity, with a restriction to those likely to be relevant in four dimensions. This paper is dedicated to Rafael Sorkin on the occasion of his sixtieth birthday
Empirical study of the GARCH model with rational errors
International Nuclear Information System (INIS)
Chen, Ting Ting; Takaishi, Tetsuya
2013-01-01
We use the GARCH model with a fat-tailed error distribution described by a rational function and apply it to stock price data on the Tokyo Stock Exchange. To determine the model parameters we perform Bayesian inference on the model. Bayesian inference is implemented by the Metropolis-Hastings algorithm with an adaptive multi-dimensional Student's t-proposal density. In order to compare our model with the GARCH model with standard normal errors, we calculate the information criteria AIC and DIC, and find that both criteria favor the GARCH model with a rational error distribution. We also calculate the accuracy of the volatility by using the realized volatility and find that a good accuracy is obtained for the GARCH model with a rational error distribution. Thus we conclude that the GARCH model with a rational error distribution is superior to the GARCH model with normal errors and can be used as an alternative GARCH model to those with other fat-tailed distributions
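The estimation procedure can be sketched as follows, assuming a plain GARCH(1,1) with Gaussian errors and a random-walk Metropolis proposal as simplified stand-ins for the paper's rational error distribution and adaptive Student's t-proposal:

```python
import numpy as np

def garch_loglik(params, y):
    """Gaussian log-likelihood of a GARCH(1,1):
    sigma2_t = w + a*y_{t-1}^2 + b*sigma2_{t-1}, with flat priors
    restricted to the stationarity region a + b < 1."""
    w, a, b = params
    if w <= 0 or a < 0 or b < 0 or a + b >= 1:
        return -np.inf
    s2 = np.var(y)                       # initialize at the sample variance
    ll = 0.0
    for t in range(len(y)):
        ll += -0.5 * (np.log(2 * np.pi * s2) + y[t] ** 2 / s2)
        s2 = w + a * y[t] ** 2 + b * s2
    return ll

def metropolis(y, n_steps=2000, step=0.02, seed=1):
    """Random-walk Metropolis over (w, a, b); accept with probability
    min(1, exp(loglik_prop - loglik_cur))."""
    rng = np.random.default_rng(seed)
    theta = np.array([0.1, 0.1, 0.5])
    ll = garch_loglik(theta, y)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(3)
        llp = garch_loglik(prop, y)
        if np.log(rng.random()) < llp - ll:
            theta, ll = prop, llp
        chain.append(theta)
    return np.array(chain)
```

Swapping the Gaussian density in `garch_loglik` for a rational (or Student's t) density is the only change needed to reproduce the fat-tailed variants compared in the abstract.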
Discrete computational structures
Korfhage, Robert R
1974-01-01
Discrete Computational Structures describes discrete mathematical concepts that are important to computing, covering necessary mathematical fundamentals, computer representation of sets, graph theory, storage minimization, and bandwidth. The book also explains the conceptual framework (Gorn trees, searching, subroutines) and directed graphs (flowcharts, critical paths, information networks). The text discusses algebra, particularly semigroups, groups, lattices, and the propositional calculus, including a new tabular method of Boolean function minimization. The text emphasizes...
Vinay BC; Nikhitha MK; Patel Sunil B
2015-01-01
This review article explains the definition of medication error, the scope of the medication error problem, types of medication errors, their common causes, monitoring, consequences, prevention, and management, presented clearly with tables that are easy to understand.
Adaptive discrete-ordinates algorithms and strategies
International Nuclear Information System (INIS)
Stone, J.C.; Adams, M.L.
2005-01-01
We present our latest algorithms and strategies for adaptively refined discrete-ordinates quadrature sets. In our basic strategy, which we apply here in two-dimensional Cartesian geometry, the spatial domain is divided into regions. Each region has its own quadrature set, which is adapted to the region's angular flux. Our algorithms add a 'test' direction to the quadrature set if the angular flux calculated at that direction differs by more than a user-specified tolerance from the angular flux interpolated from other directions. Different algorithms have different prescriptions for the method of interpolation and/or choice of test directions and/or prescriptions for quadrature weights. We discuss three different algorithms of different interpolation orders. We demonstrate through numerical results that each algorithm is capable of generating solutions with negligible angular discretization error. This includes elimination of ray effects. We demonstrate that all of our algorithms achieve a given level of error with far fewer unknowns than does a standard quadrature set applied to an entire problem. To address a potential issue with other algorithms, we present one algorithm that retains exact integration of high-order spherical-harmonics functions, no matter how much local refinement takes place. To address another potential issue, we demonstrate that all of our methods conserve partial currents across interfaces where quadrature sets change. We conclude that our approach is extremely promising for solving the long-standing problem of angular discretization error in multidimensional transport problems. (authors)
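The refinement criterion, reduced to one angular dimension, can be sketched as follows (a hypothetical 1D illustration of the tolerance test, not the authors' 2D region-wise algorithm): a test direction is kept only where the directly evaluated angular flux differs from interpolation by more than the tolerance.

```python
import math

def adapt_directions(psi, mus, tol, max_pass=10):
    """Insert 'test' directions midway between existing ones whenever the
    directly evaluated angular flux psi differs from linear interpolation
    by more than tol; repeat until all intervals pass the test."""
    mus = sorted(mus)
    for _ in range(max_pass):
        new = []
        for a, b in zip(mus, mus[1:]):
            m = 0.5 * (a + b)
            interp = 0.5 * (psi(a) + psi(b))   # linear interpolation
            if abs(psi(m) - interp) > tol:
                new.append(m)                  # flux poorly resolved here
        if not new:
            break
        mus = sorted(mus + new)
    return mus

# forward-peaked toy flux: refinement clusters directions near mu = 0
psi = lambda mu: math.exp(-5 * abs(mu))
directions = adapt_directions(psi, [-1.0, 0.0, 1.0], tol=1e-2)
```

The resulting set is dense only where the flux varies rapidly, which is the mechanism by which the adaptive quadrature needs far fewer unknowns than a uniform set.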
Energy Technology Data Exchange (ETDEWEB)
Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-04
We calculate opacity from k(hν) = −ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. The error propagates as Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional errors as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U−E)/(V−E) = B/B₀, where B is the transmitted backlighter (BL) signal and B₀ is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB₀/B₀, and consequently Δk/k = (1/ln(T))(ΔB/B + ΔB₀/B₀) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
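The propagation formulas translate directly into code (a sketch with made-up input numbers; B, B0 and the areal density ρL are the measured quantities):

```python
import numpy as np

def opacity_and_error(B, dB, B0, dB0, rhoL, drhoL):
    """k = -ln(T)/(rho*L) with T = B/B0; the fractional error combines
    the backlighter signal errors (scaled by 1/|ln T|) with the rho*L
    measurement error."""
    T = B / B0
    k = -np.log(T) / rhoL
    d_lnT = dB / B + dB0 / B0            # = dT/T
    dk_over_k = d_lnT / abs(np.log(T)) + drhoL / rhoL
    return k, dk_over_k

# made-up illustrative numbers: 2% and 1% signal errors, 3% rho*L error
k, frac = opacity_and_error(B=20.0, dB=0.4, B0=100.0, dB0=1.0,
                            rhoL=0.05, drhoL=0.0015)
```

Note the 1/|ln T| factor: at high transmission (T near 1) the signal errors are strongly amplified, which is why the measurable range of T quoted in the abstract is bounded.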
Image Retrieval Algorithm Based on Discrete Fractional Transforms
Jindal, Neeru; Singh, Kulbir
2013-06-01
Discrete fractional transforms are signal processing tools which suggest computational algorithms and solutions for various sophisticated applications. In this paper, a new technique to retrieve an encrypted and scrambled image based on discrete fractional transforms has been proposed. A two-dimensional image was encrypted using discrete fractional transforms with three fractional orders and two random phase masks placed in the two intermediate planes. The significant feature of discrete fractional transforms is the extra degree of freedom provided by their fractional orders. Security strength was enhanced (1024!)⁴ times by scrambling the encrypted image. In the decryption process, image retrieval is sensitive to both the correct fractional order keys and the scrambling algorithm. The proposed approach makes a brute force attack infeasible. Mean square error and relative error are the performance parameters used to verify the validity of the proposed method.
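The scrambling stage alone can be sketched as a keyed permutation (a simplified stand-in that omits the fractional-transform and random-phase-mask steps; key values are arbitrary): correct key gives zero mean square error, a wrong key does not.

```python
import numpy as np

def scramble(img, key):
    """Flatten the image and apply a permutation derived from the key."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(img.size)
    return img.reshape(-1)[perm].reshape(img.shape), perm

def unscramble(scrambled, perm):
    """Invert the permutation: entry i of the scrambled image holds the
    original pixel at index perm[i]."""
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[perm] = scrambled.reshape(-1)
    return flat.reshape(scrambled.shape)

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))
```

For an N-pixel image the permutation keyspace is N!, which is the combinatorial source of the (1024!)-type factors quoted in the abstract.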
Discretisation errors in Landau gauge on the lattice
International Nuclear Information System (INIS)
Bonnet, Frederic D.R.; Bowman, Patrick O.; Leinweber, Derek B.; Williams, Anthony G.; Richards, David G.
1999-01-01
Lattice discretization errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition improves comparison with the continuum Landau gauge in two ways: (1) through the elimination of O(a²) errors and (2) through a secondary effect of reducing the size of higher-order errors. These results emphasize the importance of implementing an improved gauge fixing condition
Homogenization of discrete media
Energy Technology Data Exchange (ETDEWEB)
Pradel, F.; Sab, K. [CERAM-ENPC, Marne-la-Vallee (France)
1998-11-01
Materials such as granular media and beam assemblies are easily seen as discrete media. They look like geometrical points linked together through energetic expressions. Our purpose is to extend discrete kinematics to that of an equivalent continuous material. First we explain how we build the localisation tool for periodic materials according to the estimated continuum medium type (classical Cauchy and Cosserat media). Once the bridge is built between discrete and continuum media, we exhibit its application to two bidimensional beam assembly structures: the honeycomb and a structurally reinforced variation. The new behaviour is then applied to the simple plane shear problem in a Cosserat continuum and compared with the real discrete solution. By means of this example, we establish the agreement of our new model with real structures. The exposed method has a longer range than mechanics and can be applied to any discrete problem, like electromagnetism, in which the relationships between geometrical points can be summed up by an energetic function. (orig.) 7 refs.
DISCRETE MATHEMATICS/NUMBER THEORY
Mrs. Manju Devi*
2017-01-01
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics such as integers, graphs, and statements do not vary smoothly in this way, but have distinct, separated values. Discrete mathematics therefore excludes topics in "continuous mathematics" such as calculus and analysis. Discrete objects can often be enumerated by ...
Directory of Open Access Journals (Sweden)
Prateek Sharma
2015-04-01
Full Text Available Abstract Simulation can be regarded as the emulation of the behavior of a real-world system over an interval of time. The process of simulation relies upon the generation of the history of a system and then analyzing that history to predict the outcome and improve the working of real systems. Simulations come in many kinds, but the one of interest here is Discrete-Event Simulation, which models the system as a discrete sequence of events in time. This paper therefore introduces Discrete-Event Simulation and analyzes how it benefits real-world systems.
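The core of any discrete-event simulator is a time-ordered event queue; a minimal engine might look like this (an illustrative sketch, not tied to any particular simulation package):

```python
import heapq

class EventQueue:
    """Minimal discrete-event engine: (time, seq, callback) tuples are
    popped in chronological order; seq breaks ties deterministically."""
    def __init__(self):
        self.now, self._seq, self._heap = 0.0, 0, []

    def schedule(self, delay, callback):
        self._seq += 1
        heapq.heappush(self._heap, (self.now + delay, self._seq, callback))

    def run(self):
        while self._heap:
            # advance the clock to the next event, then fire it
            self.now, _, callback = heapq.heappop(self._heap)
            callback()

log = []
sim = EventQueue()
sim.schedule(2.0, lambda: log.append(("B", sim.now)))
sim.schedule(1.0, lambda: log.append(("A", sim.now)))
sim.schedule(3.0, lambda: log.append(("C", sim.now)))
sim.run()
# log -> [("A", 1.0), ("B", 2.0), ("C", 3.0)]
```

Because the clock jumps from event to event rather than ticking uniformly, long idle periods cost nothing, which is the efficiency argument the abstract alludes to.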
Discrete systems and integrability
Hietarinta, J; Nijhoff, F W
2016-01-01
This first introductory text to discrete integrable systems introduces key notions of integrability from the vantage point of discrete systems, also making connections with the continuous theory where relevant. While treating the material at an elementary level, the book also highlights many recent developments. Topics include: Darboux and Bäcklund transformations; difference equations and special functions; multidimensional consistency of integrable lattice equations; associated linear problems (Lax pairs); connections with Padé approximants and convergence algorithms; singularities and geometry; Hirota's bilinear formalism for lattices; intriguing properties of discrete Painlevé equations; and the novel theory of Lagrangian multiforms. The book builds the material in an organic way, emphasizing interconnections between the various approaches, while the exposition is mostly done through explicit computations on key examples. Written by respected experts in the field, the numerous exercises and the thorough...
Exarchakis, Georgios; Lücke, Jörg
2017-11-01
Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.
International Nuclear Information System (INIS)
Kančev, Duško; Čepin, Marko; Gjorgiev, Blaže
2014-01-01
The benefits of utilizing probabilistic safety assessment towards improvement of nuclear power plant safety are presented in this paper. Namely, a nuclear power plant risk reduction can be achieved by risk-informed optimization of the deterministically-determined surveillance requirements. A living probabilistic safety assessment tool for time-dependent risk analysis on the component, system and plant level is developed. The study herein focuses on the application of this living probabilistic safety assessment tool as a computer platform for multi-objective, multi-dimensional optimization of the surveillance requirements of selected safety equipment, seen from the aspect of risk-informed reasoning. The living probabilistic safety assessment tool is based on a newly developed model for calculating the time-dependent unavailability of ageing safety equipment within nuclear power plants. By coupling the time-dependent unavailability model with commercial software used for probabilistic safety assessment modelling on the plant level, the frames of the new platform, i.e. the living probabilistic safety assessment tool, are established. In this way, the time-dependent core damage frequency is obtained and further utilized as the first objective function within a multi-objective, multi-dimensional optimization case study presented within this paper. The test and maintenance costs are designated as the second objective function, and the dose incurred from performing the test and maintenance activities as the third. The obtained results underline, in general, the usefulness and importance of a living probabilistic safety assessment, seen as a dynamic probabilistic safety assessment tool opposing the conventional, time-averaged-unavailability-based probabilistic safety assessment. The results of the optimization, in particular, indicate that the test intervals derived as optimal differ from the deterministically-determined ones defined within the existing technical specifications
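The risk-versus-test-interval trade-off can be illustrated with the classic approximation for a periodically tested standby component (a hypothetical single-objective sketch with assumed parameter values, far simpler than the paper's multi-objective PSA model):

```python
import numpy as np

def mean_unavailability(T, lam, t_test):
    """Classic approximation for a periodically tested standby component:
    lam*T/2 from failures undetected between tests, plus t_test/T for
    downtime during the test itself."""
    return lam * T / 2.0 + t_test / T

# assumed illustrative values: per-hour standby failure rate, test duration
lam, t_test = 1e-4, 2.0
Ts = np.linspace(50.0, 2000.0, 400)
T_best = Ts[np.argmin(mean_unavailability(Ts, lam, t_test))]
# analytic optimum is T* = sqrt(2 * t_test / lam) = 200 h
```

Testing too often incurs downtime, testing too rarely leaves failures undetected; the optimization in the paper does the analogous trade-off with core damage frequency, costs, and dose as separate objectives.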
International Nuclear Information System (INIS)
Lai, Feili; Huang, Yunpeng; Miao, Yue-E; Liu, Tianxi
2015-01-01
Graphical Abstract: Multi-dimensional hybrid materials of nickel-cobalt layered double hydroxide nanorods/nanosheets grown on electrospun carbon nanofiber membranes were prepared via electrospinning combined with solution co-deposition for high-performance supercapacitor electrodes. - Highlights: • Ni-Co LDH@CNF hybrids were prepared by electrospinning and solution co-deposition. • Ni-Co LDH@CNF hybrids show high electrochemical performance for supercapacitors. • This method can be extended to other bimetallic@CNF hybrids for electrode materials. - Abstract: Hybrid nanomaterials with hierarchical structures are considered among the most promising electrode materials for high-performance supercapacitors with high capacity and long cycle lifetime. In this work, multi-dimensional hybrid materials of nickel-cobalt layered double hydroxide (Ni-Co LDH) nanorods/nanosheets on carbon nanofibers (CNFs) were prepared by the electrospinning technique combined with a one-step solution co-deposition method. Carbon nanofiber membranes were obtained by electrospinning of polyacrylonitrile (PAN) followed by pre-oxidation and carbonization. The successful growth of Ni-Co LDH with different morphologies on the CNF membrane using two kinds of auxiliary agents reveals the simplicity and universality of this method. The uniform and extensive growth of Ni-Co LDH on CNFs significantly improves its dispersion and distribution. Meanwhile, the hierarchical structure of the carbon nanofiber@nickel-cobalt layered double hydroxide nanorod/nanosheet (CNF@Ni-Co LDH NR/NS) hybrid membranes provides not only more active sites for electrochemical reactions but also more efficient pathways for electron transport. Galvanostatic charge-discharge measurements reveal high specific capacitances of 1378.2 F g⁻¹ and 1195.4 F g⁻¹ (based on Ni-Co LDH mass) at 1 A g⁻¹ for the CNF@Ni-Co LDH NR and CNF@Ni-Co LDH NS hybrid membranes, respectively. Moreover, cycling stabilities for both hybrid membranes are
Introductory discrete mathematics
Balakrishnan, V K
2010-01-01
This concise text offers an introduction to discrete mathematics for undergraduate students in computer science and mathematics. Mathematics educators consider it vital that their students be exposed to a course in discrete methods that introduces them to combinatorial mathematics and to algebraic and logical structures focusing on the interplay between computer science and mathematics. The present volume emphasizes combinatorics, graph theory with applications to some stand network optimization problems, and algorithms to solve these problems.Chapters 0-3 cover fundamental operations involv
Discrete linear canonical transform computation by adaptive method.
Zhang, Feng; Tao, Ran; Wang, Yue
2013-07-29
The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
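The LMS building block itself can be sketched in isolation (a generic adaptive system-identification example, not the discrete-LCT computation structure of the paper; the 3-tap target filter is made up):

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """Least-mean-square adaptation: for each sample, form the tap-input
    window, compute the error against the desired signal d, and update
    w <- w + mu * e * window."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        e = d[n] - w @ window
        w += mu * e * window
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2])            # made-up "unknown" system
d = np.convolve(x, h)[:len(x)]            # desired output to match
w = lms_identify(x, d, n_taps=3, mu=0.01)
```

The per-sample update involves only a short inner product and a scaled vector addition, which is the regular dataflow that makes LMS-style structures attractive for the parallel VLSI implementations mentioned in the abstract.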
Energy Technology Data Exchange (ETDEWEB)
Bailey, Teresa S. [Texas A and M University, Department of Nuclear Engineering, College Station, TX 77843-3133 (United States)], E-mail: baileyte@tamu.edu; Adams, Marvin L. [Texas A and M University, Department of Nuclear Engineering, College Station, TX 77843-3133 (United States)], E-mail: mladams@tamu.edu; Yang, Brian [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Zika, Michael R. [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States)], E-mail: zika@llnl.gov
2008-04-01
We develop a piecewise linear (PWL) Galerkin finite element spatial discretization for the multi-dimensional radiation diffusion equation. It uses recently introduced piecewise linear weight and basis functions in the finite element approximation and it can be applied on arbitrary polygonal (2D) or polyhedral (3D) grids. We first demonstrate some analytical properties of the PWL method and perform a simple mode analysis to compare the PWL method with Palmer's vertex-centered finite-volume method and with a bilinear continuous finite element method. We then show that this new PWL method gives solutions comparable to those from Palmer's. However, since the PWL method produces a symmetric positive-definite coefficient matrix, it should be substantially more computationally efficient than Palmer's method, which produces an asymmetric matrix. We conclude that the Galerkin PWL method is an attractive option for solving diffusion equations on unstructured grids.
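The key computational claim, that the Galerkin finite-element discretization yields a symmetric positive-definite matrix, can be checked on a 1D analogue (a sketch with linear hat functions and assumed coefficients, not the authors' 2D/3D PWL code):

```python
import numpy as np

def assemble_diffusion(nodes, D=1.0, sigma_a=0.1):
    """Assemble the 1D linear-FEM matrix for -D u'' + sigma_a u using
    hat functions: per-element stiffness D/h * [[1,-1],[-1,1]] plus
    mass sigma_a*h/6 * [[2,1],[1,2]]."""
    n = len(nodes)
    A = np.zeros((n, n))
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        Ke = (D / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        Me = (sigma_a * h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        A[e:e + 2, e:e + 2] += Ke + Me
    return A
```

Symmetry and positive-definiteness allow conjugate-gradient-type solvers, which is the efficiency advantage over the asymmetric matrix of the finite-volume alternative discussed in the abstract.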
Energy Technology Data Exchange (ETDEWEB)
Bailey, T.S.; Adams, M.L. [Texas A M Univ., Dept. of Nuclear Engineering, College Station, TX (United States); Yang, B.; Zika, M.R. [Lawrence Livermore National Lab., Livermore, CA (United States)
2005-07-01
We develop a piecewise linear (PWL) Galerkin finite element spatial discretization for the multi-dimensional radiation diffusion equation. It uses piecewise linear weight and basis functions in the finite element approximation, and it can be applied on arbitrary polygonal (2-dimensional) or polyhedral (3-dimensional) grids. We show that this new PWL method gives solutions comparable to those from Palmer's finite-volume method. However, since the PWL method produces a symmetric positive definite coefficient matrix, it should be substantially more computationally efficient than Palmer's method, which produces an asymmetric matrix. We conclude that the Galerkin PWL method is an attractive option for solving diffusion equations on unstructured grids. (authors)
International Nuclear Information System (INIS)
Bailey, Teresa S.; Adams, Marvin L.; Yang, Brian; Zika, Michael R.
2008-01-01
We develop a piecewise linear (PWL) Galerkin finite element spatial discretization for the multi-dimensional radiation diffusion equation. It uses recently introduced piecewise linear weight and basis functions in the finite element approximation and it can be applied on arbitrary polygonal (2D) or polyhedral (3D) grids. We first demonstrate some analytical properties of the PWL method and perform a simple mode analysis to compare the PWL method with Palmer's vertex-centered finite-volume method and with a bilinear continuous finite element method. We then show that this new PWL method gives solutions comparable to those from Palmer's. However, since the PWL method produces a symmetric positive-definite coefficient matrix, it should be substantially more computationally efficient than Palmer's method, which produces an asymmetric matrix. We conclude that the Galerkin PWL method is an attractive option for solving diffusion equations on unstructured grids
Are strategies in physics discrete? A remote controlled investigation
Heck, Robert; Sherson, Jacob F.; www.scienceathome.org Team; players Team
2017-04-01
In science, strategies are formulated based on observations, calculations, or physical insight. For any given physical process, several distinct strategies are often identified. Are these truly distinct, or simply low-dimensional representations of a high-dimensional continuum of solutions? Our online citizen science platform www.scienceathome.org, used by more than 150,000 people, recently enabled finding solutions to fast, 1D single-atom transport [Nature2016]. Surprisingly, player trajectories bunched into discrete solution strategies (clans) yielding clear, distinct physical insight. Introducing the multi-dimensional vector in the direction of other local maxima, we locate narrow, high-yield "bridges" connecting the clans. This demonstrates for this problem that a continuum of solutions with no clear physical interpretation does in fact exist. Next, four distinct strategies for creating Bose-Einstein condensates were investigated experimentally: hybrid and crossed dipole trap configurations in combination with either large-volume or dimple loading from a magnetic trap. We find that although each conventional strategy appears locally optimal, "bridges" can be identified. In a novel approach, the problem was gamified, allowing 750 citizen scientists to contribute to the experimental optimization and yielding nearly a factor-of-two improvement in atom number.
Indian Academy of Sciences (India)
We also describe discrete-time systems in terms of difference equations. A more modern alternative, especially for larger systems, is to convert ... In other words, ... State-variable equations are also called state-space equations because the ...
Discrete Lorentzian quantum gravity
Loll, R.
2000-01-01
Just as for non-abelian gauge theories at strong coupling, discrete lattice methods are a natural tool in the study of non-perturbative quantum gravity. They have to reflect the fact that the geometric degrees of freedom are dynamical, and that therefore also the lattice theory must be formulated
Sharp, Karen Tobey
This paper cites information received from a number of sources, e.g., mathematics teachers in two-year colleges, publishers, and convention speakers, about the nature of discrete mathematics and about what topics a course in this subject should contain. Note is taken of the book edited by Ralston and Young which discusses the future of college…
Lin, Tzung-Jin; Tan, Aik Ling; Tsai, Chin-Chung
2013-05-01
Due to the scarcity of cross-cultural comparative studies in exploring students' self-efficacy in science learning, this study attempted to develop a multi-dimensional science learning self-efficacy (SLSE) instrument to measure 316 Singaporean and 303 Taiwanese eighth graders' SLSE and further to examine the differences between the two student groups. Moreover, within-culture comparisons were made in terms of gender. The results showed that, first, the SLSE instrument was valid and reliable for measuring the Singaporean and Taiwanese students' SLSE. Second, through a two-way multivariate analysis of variance analysis (nationality by gender), the main result indicated that the SLSE held by the Singaporean eighth graders was significantly higher than that of their Taiwanese counterparts in all dimensions, including 'conceptual understanding and higher-order cognitive skills', 'practical work (PW)', 'everyday application', and 'science communication'. In addition, the within-culture gender comparisons indicated that the male Singaporean students tended to possess higher SLSE than the female students did in all SLSE dimensions except for the 'PW' dimension. However, no gender differences were found in the Taiwanese sample. The findings unraveled in this study were interpreted from a socio-cultural perspective in terms of the curriculum differences, societal expectations of science education, and educational policies in Singapore and Taiwan.
Yang, Hyun-Jin; Ratnapriya, Rinki; Cogliati, Tiziana; Kim, Jung-Woong; Swaroop, Anand
2015-05-01
Genomics and genetics have invaded all aspects of biology and medicine, opening uncharted territory for scientific exploration. The definition of "gene" itself has become ambiguous, and the central dogma is continuously being revised and expanded. Computational biology and computational medicine are no longer intellectual domains of the chosen few. Next generation sequencing (NGS) technology, together with novel methods of pattern recognition and network analyses, has revolutionized the way we think about fundamental biological mechanisms and cellular pathways. In this review, we discuss NGS-based genome-wide approaches that can provide deeper insights into retinal development, aging and disease pathogenesis. We first focus on gene regulatory networks (GRNs) that govern the differentiation of retinal photoreceptors and modulate adaptive response during aging. Then, we discuss NGS technology in the context of retinal disease and develop a vision for therapies based on network biology. We should emphasize that basic strategies for network construction and analyses can be transported to any tissue or cell type. We believe that specific and uniform guidelines are required for generation of genome, transcriptome and epigenome data to facilitate comparative analysis and integration of multi-dimensional data sets, and for constructing networks underlying complex biological processes. As cellular homeostasis and organismal survival are dependent on gene-gene and gene-environment interactions, we believe that network-based biology will provide the foundation for deciphering disease mechanisms and discovering novel drug targets for retinal neurodegenerative diseases. Published by Elsevier Ltd.
Perna, Simone; Francis, Matthew D'Arcy; Bologna, Chiara; Moncaglieri, Francesca; Riva, Antonella; Morazzoni, Paolo; Allegrini, Pietro; Isu, Antonio; Vigo, Beatrice; Guerriero, Fabio; Rondanelli, Mariangela
2017-01-04
The aim of this study was to evaluate the performance of the Edmonton Frail Scale (EFS) in frailty assessment, in association with multi-dimensional conditions assessed with specific screening tools, and to explore the prevalence of frailty by gender. We enrolled 366 hospitalised patients (women/men: 251/115), mean age 81.5 years. The EFS was administered to the patients to evaluate their frailty. We then collected data concerning cognitive status through the Mini-Mental State Examination (MMSE), health status (evaluated by the number of diseases), functional independence (Barthel Index and Activities of Daily Living; BI, ADL, IADL), use of drugs (count of drugs taken every day), Mini Nutritional Assessment (MNA), Geriatric Depression Scale (GDS), Skeletal Muscle Index of sarcopenia (SMI), osteoporosis, and functionality (handgrip strength). According to the EFS, 19.7% of subjects were classified as non-frail, 66.4% as apparently vulnerable, and 13.9% as severely frail. The EFS scores were associated with cognition (MMSE: β = 0.980), nutrition (MNA: β = -0.413), and performance (handgrip: β = -0.114). The EFS proved to be a useful tool for stratifying the state of frailty in a group of institutionalized elderly patients. In fact, the EFS has been shown to be associated with several geriatric conditions, such as independence, drug intake, mood, and mental, functional, and nutritional status.
International Nuclear Information System (INIS)
Guenther, Uwe; Zhuk, Alexander; Bezerra, Valdir B; Romero, Carlos
2005-01-01
We study multi-dimensional gravitational models with scalar curvature nonlinearities of types R⁻¹ and R⁴. It is assumed that the corresponding higher dimensional spacetime manifolds undergo a spontaneous compactification to manifolds with a warped product structure. Special attention has been paid to the stability of the extra-dimensional factor spaces. It is shown that for certain parameter regions the systems allow for a freezing stabilization of these spaces. In particular, we find for the R⁻¹ model that configurations with stabilized extra dimensions do not provide a late-time acceleration (they are AdS), whereas the solution branch which allows for accelerated expansion (the dS branch) is incompatible with stabilized factor spaces. In the case of the R⁴ model, we obtain that the stability region in parameter space depends on the total dimension D = dim(M) of the higher dimensional spacetime M. For D > 8 the stability region consists of a single (absolutely stable) sector which is shielded from a conformal singularity (and an antigravity sector beyond it) by a potential barrier of infinite height and width. This sector is smoothly connected with the stability region of a curvature-linear model. For D < 8 an additional (metastable) sector exists which is separated from the conformal singularity by a potential barrier of finite height and width, so that systems in this sector are prone to collapse into the conformal singularity. This second sector is not smoothly connected with the first (absolutely stable) one. Several limiting cases and the possibility of inflation are discussed for the R⁴ model.
Merritt, Elizabeth; Doss, Forrest; Loomis, Eric; Flippo, Kirk; Devolder, Barbara; Welser-Sherrill, Leslie; Fincke, James; Kline, John
2014-10-01
The counter-propagating shear campaign is examining instability growth and its transition to turbulence relevant to mix in ICF capsules. Experimental platforms on both OMEGA and NIF use anti-symmetric flows about a shear interface to examine isolated Kelvin-Helmholtz instability growth. Measurements of the dynamics of the interface (an Al or Ti tracer layer) are used to benchmark the LANL RAGE hydrocode with the BHR turbulence model. The tracer layer does not expand uniformly, but breaks up into multi-dimensional structures that are initially quasi-2D due to the target geometry. We are developing techniques to analyze the multi-D structure growth along the tracer surface, with a focus on characterizing the time-dependent spectrum of structure scales in order to appraise a transition to turbulence in the system and potentially provide tighter constraints on initialization schemes for the BHR model. To this end, we use a wavelet-based analysis to diagnose single-time radiographs of the tracer layer surface (with low and amplified roughness for random-noise seeding) with observed spatially non-repetitive features, in order to identify spatial and temporal trends in radiographs taken at different times across several experimental shots. This work was conducted under the auspices of the U.S. Department of Energy by LANL under Contract DE-AC52-06NA25396.
Discrete Exterior Calculus Discretization of Incompressible Navier-Stokes Equations
Mohamed, Mamdouh S.; Hirani, Anil N.; Samtaney, Ravi
2017-01-01
A conservative discretization of incompressible Navier-Stokes equations over surface simplicial meshes is developed using discrete exterior calculus (DEC). Numerical experiments for flows over surfaces reveal a second order accuracy
Discrete mKdV and discrete sine-Gordon flows on discrete space curves
International Nuclear Information System (INIS)
Inoguchi, Jun-ichi; Kajiwara, Kenji; Matsuura, Nozomu; Ohta, Yasuhiro
2014-01-01
In this paper, we consider the discrete deformation of the discrete space curves with constant torsion described by the discrete mKdV or the discrete sine-Gordon equations, and show that it is formulated as the torsion-preserving equidistant deformation on the osculating plane which satisfies the isoperimetric condition. The curve is reconstructed from the deformation data by using the Sym–Tafel formula. The isoperimetric equidistant deformation of the space curves does not preserve the torsion in general. However, it is possible to construct the torsion-preserving deformation by tuning the deformation parameters. Further, it is also possible to make an arbitrary choice of the deformation described by the discrete mKdV equation or by the discrete sine-Gordon equation at each step. We finally show that the discrete deformation of discrete space curves yields the discrete K-surfaces. (paper)
Discrete mathematics with applications
Koshy, Thomas
2003-01-01
This approachable text studies discrete objects and the relationships that bind them. It helps students understand and apply the power of discrete math to digital computer systems and other modern applications. It provides excellent preparation for courses in linear algebra, number theory, and modern/abstract algebra and for computer science courses in data structures, algorithms, programming languages, compilers, databases, and computation. * Covers all recommended topics in a self-contained, comprehensive, and understandable format for students and new professionals * Emphasizes problem-solving techniques, pattern recognition, conjecturing, induction, applications of varying nature, proof techniques, algorithm development and correctness, and numeric computations * Weaves numerous applications into the text * Helps students learn by doing with a wealth of examples and exercises: - 560 examples worked out in detail - More than 3,700 exercises - More than 150 computer assignments - More than 600 writing projects*...
Discrete and computational geometry
Devadoss, Satyan L
2011-01-01
Discrete geometry is a relatively new development in pure mathematics, while computational geometry is an emerging area in applications-driven computer science. Their intermingling has yielded exciting advances in recent years, yet what has been lacking until now is an undergraduate textbook that bridges the gap between the two. Discrete and Computational Geometry offers a comprehensive yet accessible introduction to this cutting-edge frontier of mathematics and computer science. This book covers traditional topics such as convex hulls, triangulations, and Voronoi diagrams, as well as more recent subjects like pseudotriangulations, curve reconstruction, and locked chains. It also touches on more advanced material, including Dehn invariants, associahedra, quasigeodesics, Morse theory, and the recent resolution of the Poincaré conjecture. Connections to real-world applications are made throughout, and algorithms are presented independently of any programming language. This richly illustrated textbook also fe...
2002-01-01
Discrete geometry investigates combinatorial properties of configurations of geometric objects. To a working mathematician or computer scientist, it offers sophisticated results and techniques of great diversity and it is a foundation for fields such as computational geometry or combinatorial optimization. This book is primarily a textbook introduction to various areas of discrete geometry. In each area, it explains several key results and methods, in an accessible and concrete manner. It also contains more advanced material in separate sections and thus it can serve as a collection of surveys in several narrower subfields. The main topics include: basics on convex sets, convex polytopes, and hyperplane arrangements; combinatorial complexity of geometric configurations; intersection patterns and transversals of convex sets; geometric Ramsey-type results; polyhedral combinatorics and high-dimensional convexity; and lastly, embeddings of finite metric spaces into normed spaces. Jiri Matousek is Professor of Com...
Time Discretization Techniques
Gottlieb, S.
2016-10-12
The time discretization of hyperbolic partial differential equations is typically the evolution of a system of ordinary differential equations obtained by spatial discretization of the original problem. Methods for this time evolution include multistep, multistage, or multiderivative methods, as well as a combination of these approaches. The time step constraint is mainly a result of the absolute stability requirement, as well as additional conditions that mimic physical properties of the solution, such as positivity or total variation stability. These conditions may be required for stability when the solution develops shocks or sharp gradients. This chapter contains a review of some of the methods historically used for the evolution of hyperbolic PDEs, as well as cutting edge methods that are now commonly used.
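The chapter's point about stability constraints that "mimic physical properties of the solution, such as positivity or total variation stability" can be made concrete with a small sketch. The method and test problem below are our own illustrative choices (not taken from the chapter): a first-order upwind space discretization of linear advection, advanced in time with a two-stage strong-stability-preserving Runge-Kutta method written as a convex combination of forward Euler steps.

```python
import numpy as np

def upwind_rhs(u, dx):
    # First-order upwind semi-discretization of u_t + u_x = 0 (periodic grid)
    return -(u - np.roll(u, 1)) / dx

def ssp_rk2_step(u, dt, dx):
    # Two-stage strong-stability-preserving Runge-Kutta (Shu-Osher form):
    # a convex combination of forward Euler steps, hence TVD under the
    # forward Euler CFL restriction
    u1 = u + dt * upwind_rhs(u, dx)
    return 0.5 * u + 0.5 * (u1 + dt * upwind_rhs(u1, dx))

def total_variation(u):
    return np.abs(np.diff(np.append(u, u[0]))).sum()

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # step profile: sharp gradients
dx = 1.0 / n
dt = 0.5 * dx                                    # CFL number 0.5 <= 1

tv0 = total_variation(u)
for _ in range(200):
    u = ssp_rk2_step(u, dt, dx)
assert total_variation(u) <= tv0 + 1e-12         # total variation never grows
```

Because each stage is a convex combination of forward Euler updates, any property that forward Euler preserves under the CFL restriction (here, the total-variation bound) carries over to the higher-order method.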
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^−(dn−1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
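The core mechanism, that coherent errors add amplitudes while a Pauli model adds probabilities, can be illustrated with a toy single-qubit calculation. This is our own simplification, not the paper's repetition-code analysis: repeated over-rotation by a small angle composes into one large rotation (failure probability grows quadratically in the cycle count), whereas independent stochastic flips give only linear growth.

```python
import math

eps, n = 0.01, 100   # per-cycle rotation angle and number of cycles

# Coherent accumulation: n rotations compose into one rotation by n*eps,
# so the flip probability is sin^2(n*eps/2) ~ (n*eps/2)^2
p_coherent = math.sin(n * eps / 2) ** 2

# Pauli (stochastic) approximation: each cycle flips independently with
# p = sin^2(eps/2); failure = odd number of flips, growing ~ n*p
p_flip = math.sin(eps / 2) ** 2
p_pauli = 0.5 * (1 - (1 - 2 * p_flip) ** n)

assert p_coherent > 10 * p_pauli   # coherent errors accumulate much faster
```

With these numbers the coherent failure probability exceeds the Pauli prediction by roughly two orders of magnitude, which is the qualitative effect the paper quantifies at the logical level.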
Czech Academy of Sciences Publication Activity Database
Mesiar, Radko; Li, J.; Pap, E.
2013-01-01
Vol. 54, No. 3 (2013), pp. 357-364. ISSN 0888-613X. R&D Projects: GA ČR GAP402/11/0378. Institutional support: RVO:67985556. Keywords: concave integral; pseudo-addition; pseudo-multiplication. Subject RIV: BA - General Mathematics. Impact factor: 1.977, year: 2013. http://library.utia.cas.cz/separaty/2013/E/mesiar-discrete pseudo-integrals.pdf
Discrete variational Hamiltonian mechanics
International Nuclear Information System (INIS)
Lall, S; West, M
2006-01-01
The main contribution of this paper is to present a canonical choice of a Hamiltonian theory corresponding to the theory of discrete Lagrangian mechanics. We make use of Lagrange duality and follow a path parallel to that used for construction of the Pontryagin principle in optimal control theory. We use duality results regarding sensitivity and separability to show the relationship between generating functions and symplectic integrators. We also discuss connections to optimal control theory and numerical algorithms
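The relationship between discrete mechanics and symplectic integrators that the abstract refers to can be illustrated with a standard textbook example (our choice, not taken from the paper): for the harmonic oscillator with H = (q² + p²)/2, the symplectic Euler method, which arises from a simple discrete Lagrangian, keeps the energy error bounded over long times, while the non-symplectic explicit Euler method drifts without bound.

```python
def explicit_euler(q, p, h):
    # Non-symplectic: both updates use the old state
    return q + h * p, p - h * q

def symplectic_euler(q, p, h):
    # Symplectic: update momentum first, then position with the new momentum
    p_new = p - h * q
    return q + h * p_new, p_new

def energy(q, p):
    return 0.5 * (q * q + p * p)

h, steps = 0.01, 100_000
qe, pe = 1.0, 0.0   # explicit Euler trajectory
qs, ps = 1.0, 0.0   # symplectic Euler trajectory
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe, h)
    qs, ps = symplectic_euler(qs, ps, h)

drift_explicit = abs(energy(qe, pe) - 0.5)
drift_symplectic = abs(energy(qs, ps) - 0.5)
assert drift_symplectic < 0.01   # bounded O(h) oscillation around 0.5
assert drift_explicit > 1.0      # energy grows without bound
```

For this linear system the symplectic Euler map exactly conserves a modified quadratic energy, which is why the true energy merely oscillates within an O(h) band instead of drifting.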
International Nuclear Information System (INIS)
Jalnapurkar, Sameer M; Leok, Melvin; Marsden, Jerrold E; West, Matthew
2006-01-01
This paper develops the theory of Abelian Routh reduction for discrete mechanical systems and applies it to the variational integration of mechanical systems with Abelian symmetry. The reduction of variational Runge-Kutta discretizations is considered, as well as the extent to which symmetry reduction and discretization commute. These reduced methods allow the direct simulation of dynamical features such as relative equilibria and relative periodic orbits that can be obscured or difficult to identify in the unreduced dynamics. The methods are demonstrated for the dynamics of an Earth orbiting satellite with a non-spherical J 2 correction, as well as the double spherical pendulum. The J 2 problem is interesting because in the unreduced picture, geometric phases inherent in the model and those due to numerical discretization can be hard to distinguish, but this issue does not appear in the reduced algorithm, where one can directly observe interesting dynamical structures in the reduced phase space (the cotangent bundle of shape space), in which the geometric phases have been removed. The main feature of the double spherical pendulum example is that it has a non-trivial magnetic term in its reduced symplectic form. Our method is still efficient as it can directly handle the essential non-canonical nature of the symplectic structure. In contrast, a traditional symplectic method for canonical systems could require repeated coordinate changes if one is evoking Darboux' theorem to transform the symplectic structure into canonical form, thereby incurring additional computational cost. Our method allows one to design reduced symplectic integrators in a natural way, despite the non-canonical nature of the symplectic structure
Discrete port-Hamiltonian systems
Talasila, V.; Clemente-Gallardo, J.; Schaft, A.J. van der
2006-01-01
Either from a control theoretic viewpoint or from an analysis viewpoint it is necessary to convert smooth systems to discrete systems, which can then be implemented on computers for numerical simulations. Discrete models can be obtained either by discretizing a smooth model, or by directly modeling
A paradigm for discrete physics
International Nuclear Information System (INIS)
Noyes, H.P.; McGoveran, D.; Etter, T.; Manthey, M.J.; Gefwert, C.
1987-01-01
An example is outlined for constructing a discrete physics using as a starting point the insight from quantum physics that events are discrete, indivisible and non-local. Initial postulates are finiteness, discreteness, finite computability, absolute nonuniqueness (i.e., homogeneity in the absence of specific cause) and additivity
Digital Resonant Controller based on Modified Tustin Discretization Method
Directory of Open Access Journals (Sweden)
STOJIC, D.
2016-11-01
Full Text Available Resonant controllers are used in power converter voltage and current control due to their simplicity and accuracy. However, digital implementation of resonant controllers introduces problems related to zero and pole mapping from the continuous to the discrete time domain. Namely, some discretization methods introduce significant errors in the digital controller resonant frequency, resulting in the loss of asymptotic AC reference tracking, especially at high resonant frequencies. The delay compensation typical for resonant controllers can also be compromised. Based on the existing analysis, it can be concluded that the Tustin discretization with frequency prewarping represents a preferable choice from the point of view of resonant frequency accuracy. However, this discretization method has a shortcoming in applications that require real-time frequency adaptation, since a complex trigonometric evaluation is required for each frequency change. In order to overcome this problem, in this paper a modified Tustin discretization method is proposed, based on a Taylor series approximation of the frequency prewarping function. By comparing the novel discretization method with commonly used two-integrator-based proportional-resonant (PR) digital controllers, it is shown that the resulting digital controller resonant frequency and time delay compensation errors are significantly reduced for the novel controller.
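The idea of replacing the prewarping function by a Taylor series can be sketched numerically. The truncation order below is our own illustrative assumption (the paper's exact approximation is not reproduced here): Tustin discretization maps s → k·(z−1)/(z+1), where plain Tustin uses k = 2/Ts and prewarping at the resonant frequency ω₀ uses k = ω₀/tan(ω₀Ts/2), which needs a tangent evaluation at every frequency update.

```python
import math

def prewarp_exact(w0, Ts):
    # Prewarped Tustin gain: the discrete response is exact at w0
    return w0 / math.tan(w0 * Ts / 2)

def prewarp_taylor(w0, Ts):
    # Tangent-free approximation via x/tan(x) = 1 - x^2/3 - x^4/45 - ...
    x = w0 * Ts / 2
    return (2 / Ts) * (1 - x ** 2 / 3 - x ** 4 / 45)

Ts = 1e-4  # 10 kHz sampling period (illustrative)
for f0 in (50, 250, 1000, 2000):  # candidate resonant frequencies in Hz
    w0 = 2 * math.pi * f0
    k_exact = prewarp_exact(w0, Ts)
    k_approx = prewarp_taylor(w0, Ts)
    # The truncated series stays within 1% of the exact gain over this range
    assert abs(k_approx - k_exact) / k_exact < 1e-2
```

Because the series uses only multiplications, the gain can be recomputed cheaply on every frequency change, which is the motivation stated in the abstract.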
International Nuclear Information System (INIS)
Iwahara, Junji; Clore, G. Marius
2006-01-01
Due to practical limitations in available ¹⁵N rf field strength, imperfections in ¹⁵N 180° pulses arising from off-resonance effects can result in significant sensitivity loss, even if the chemical shift offset is relatively small. Indeed, in multi-dimensional NMR experiments optimized for protein backbone amide groups, cross-peaks arising from the Arg guanidino ¹⁵Nε (∼85 ppm) are highly attenuated by the presence of multiple INEPT transfer steps. To improve the sensitivity for correlations involving Arg Nε-Hε groups, we have incorporated ¹⁵N broadband 180° pulses into 3D ¹⁵N-separated NOE-HSQC and HNCACB experiments. Two ¹⁵N-WURST pulses incorporated at the INEPT transfer steps of the 3D ¹⁵N-separated NOE-HSQC pulse sequence resulted in a ∼1.5-fold increase in sensitivity for the Arg Nε-Hε signals at 800 MHz. For the 3D HNCACB experiment, five ¹⁵N Abramovich-Vega pulses were incorporated for broadband inversion and refocusing, and the sensitivity of Arg ¹Hε-¹⁵Nε-¹³Cγ/¹³Cδ correlation peaks was enhanced by a factor of ∼1.7 at 500 MHz. These experiments eliminate the necessity for additional experiments to assign Arg ¹Hε and ¹⁵Nε resonances. In addition, the increased sensitivity afforded for the detection of NOE cross-peaks involving correlations with the ¹⁵Nε/¹Hε of Arg in 3D ¹⁵N-separated NOE experiments should prove to be very useful for structural analysis of interactions involving Arg side-chains
International Nuclear Information System (INIS)
Kachelriess, Marc; Watzke, Oliver; Kalender, Willi A.
2001-01-01
In modern computed tomography (CT) there is a strong desire to reduce patient dose and/or to improve image quality by increasing spatial resolution and decreasing image noise. These are conflicting demands since increasing resolution at a constant noise level or decreasing noise at a constant resolution level implies a higher demand on x-ray power and an increase of patient dose. X-ray tube power is limited due to technical reasons. We therefore developed a generalized multi-dimensional adaptive filtering approach that applies nonlinear filters in up to three dimensions in the raw data domain. This new method differs from approaches in the literature since our nonlinear filters are applied not only in the detector row direction but also in the view and in the z-direction. This true three-dimensional filtering improves the quantum statistics of a measured projection value proportional to the third power of the filter size. Resolution tradeoffs are shared among these three dimensions and thus are considerably smaller as compared to one-dimensional smoothing approaches. Patient data of spiral and sequential single- and multi-slice CT scans as well as simulated spiral cone-beam data were processed to evaluate these new approaches. Image quality was assessed by evaluation of difference images, by measuring the image noise and the noise reduction, and by calculating the image resolution using point spread functions. The use of generalized adaptive filters helps to reduce image noise or, alternatively, patient dose. Image noise structures, typically along the direction of the highest attenuation, are effectively reduced. Noise reduction values of typically 30%-60% can be achieved in noncylindrical body regions like the shoulder. The loss in image resolution remains below 5% for all cases. In addition, the new method has a great potential to reduce metal artifacts, e.g., in the hip region
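A hedged one-dimensional sketch of the raw-data-domain idea follows. The paper's filters act adaptively in up to three dimensions; the threshold, kernel width, and synthetic data below are our own illustrative assumptions. The point is only the principle: smooth projection values where attenuation is high (poor quantum statistics) and leave well-measured values untouched, limiting the resolution penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic detector row: low attenuation (good statistics) on the left,
# high attenuation (noisy) on the right
true = np.concatenate([np.full(100, 1.0), np.full(100, 8.0)])
noisy = true + rng.normal(0.0, 0.2, true.size)

def adaptive_filter(p, threshold=4.0, width=5):
    """Smooth only heavily attenuated projection values; leave the rest alone."""
    kernel = np.ones(width) / width
    smoothed = np.convolve(p, kernel, mode="same")
    out = p.copy()
    mask = p > threshold        # only high-attenuation rays are filtered
    out[mask] = smoothed[mask]
    return out

filtered = adaptive_filter(noisy)

# Noise is reduced in the high-attenuation region (away from the boundary)...
noise_hi_before = np.std(noisy[110:190] - true[110:190])
noise_hi_after = np.std(filtered[110:190] - true[110:190])
assert noise_hi_after < noise_hi_before
# ...while the well-measured low-attenuation data are untouched
assert np.allclose(filtered[:100], noisy[:100])
```

The paper's gain from filtering in three dimensions (detector row, view, and z) rather than one is that the statistical improvement scales with the filter volume while the resolution loss is shared among the directions.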
Energy Technology Data Exchange (ETDEWEB)
Guenther, Uwe [Gravitationsprojekt, Mathematische Physik I, Institut fuer Mathematik, Universitaet Potsdam, Am Neuen Palais 10, PF 601553, D-14415 Potsdam (Germany); Zhuk, Alexander [Department of Physics, University of Odessa, 2 Dvoryanskaya St, Odessa 65100 (Ukraine); Bezerra, Valdir B [Departamento de Fisica, Universidade Federal de ParaIba C Postal 5008, Joao Pessoa, PB, 58059-970 (Brazil); Romero, Carlos [Departamento de Fisica, Universidade Federal de ParaIba C Postal 5008, Joao Pessoa, PB, 58059-970 (Brazil)
2005-08-21
We study multi-dimensional gravitational models with scalar curvature nonlinearities of types R⁻¹ and R⁴. It is assumed that the corresponding higher dimensional spacetime manifolds undergo a spontaneous compactification to manifolds with a warped product structure. Special attention has been paid to the stability of the extra-dimensional factor spaces. It is shown that for certain parameter regions the systems allow for a freezing stabilization of these spaces. In particular, we find for the R⁻¹ model that configurations with stabilized extra dimensions do not provide a late-time acceleration (they are AdS), whereas the solution branch which allows for accelerated expansion (the dS branch) is incompatible with stabilized factor spaces. In the case of the R⁴ model, we obtain that the stability region in parameter space depends on the total dimension D = dim(M) of the higher dimensional spacetime M. For D > 8 the stability region consists of a single (absolutely stable) sector which is shielded from a conformal singularity (and an antigravity sector beyond it) by a potential barrier of infinite height and width. This sector is smoothly connected with the stability region of a curvature-linear model. For D < 8 an additional (metastable) sector exists which is separated from the conformal singularity by a potential barrier of finite height and width so that systems in this sector are prone to collapse into the conformal singularity. This second sector is not smoothly connected with the first (absolutely stable) one. Several limiting cases and the possibility of inflation are discussed for the R⁴ model.
Energy Technology Data Exchange (ETDEWEB)
Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas, E-mail: bjmuellr@mpa-garching.mpg.de, E-mail: thj@mpa-garching.mpg.de [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany)
2012-09-01
We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M☉ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on no minor level than hydrodynamical differences between different dimensions.
Energy Technology Data Exchange (ETDEWEB)
Kang, Hyung Seok; Kim, Jongtae; Kim, Sang-Baik; Hong, Seong-Wan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-10-15
The COM3D code analyzes the overpressure buildup resulting from the propagation of a hydrogen flame along the structures and walls of the containment, using the hydrogen distribution calculated by GASFLOW. MAAP evaluates the hydrogen source during a severe accident and transfers it to GASFLOW. We performed a hydrogen combustion analysis using the multi-dimensional hydrogen analysis system for a station blackout (SBO) accident under the assumption of 100% metal-water reaction in the reactor vessel. The COM3D results showed that the pressure buildup was about 250 kPa, because the flame speed did not increase above 300 m/s and the pressure wave passed through the open spaces in the large containment. To increase the reliability of the COM3D calculation, it is necessary to perform the hydrogen combustion analysis for another accident, such as a small-break loss-of-coolant accident (SBLOCA). KAERI performed a hydrogen combustion analysis for an SBLOCA accident using the multi-dimensional hydrogen analysis system under the assumption of 100% metal-water reaction in the reactor vessel. From the COM3D results, we found that the pressure buildup was approximately 310 kPa, because the flame speed did not increase above 100 m/s owing to the high steam concentration and low oxygen concentration in the hydrogen-distributed region of the containment. The predicted maximum overpressure in the SBLOCA accident is similar to that of the COM3D results for the SBO accident. Thus, we found that the maximum overpressure due to hydrogen combustion in the containment may depend on the amount of hydrogen released from the reactor vessel.
Two new discrete integrable systems
International Nuclear Information System (INIS)
Chen Xiao-Hong; Zhang Hong-Qing
2013-01-01
In this paper, we focus on the construction of new (1+1)-dimensional discrete integrable systems according to a subalgebra of the loop algebra Ã₁. By designing two new (1+1)-dimensional discrete spectral problems, two new discrete integrable systems are obtained, namely, a 2-field lattice hierarchy and a 3-field lattice hierarchy. When deriving the two new discrete integrable systems, we find the generalized relativistic Toda lattice hierarchy and the generalized modified Toda lattice hierarchy. Moreover, we also obtain the Hamiltonian structures of the two lattice hierarchies by means of the discrete trace identity
Improved fat suppression of the breast using discretized frequency shimming
van der Velden, Tijl A.; Luijten, Peter R.; Klomp, DWJ
2018-01-01
Purpose: Robust fat suppression is essential in bilateral breast MRI at 7 Tesla. The lack of good fat suppression can result in errors when calculating the enhancement curve from dynamic contrast-enhanced acquisitions. In this work we propose discretized frequency shimming to improve the quality of
Temperature-dependent errors in nuclear lattice simulations
International Nuclear Information System (INIS)
Lee, Dean; Thomson, Richard
2007-01-01
We study the temperature dependence of discretization errors in nuclear lattice simulations. We find that for systems with strong attractive interactions the predominant error arises from the breaking of Galilean invariance. We propose a local 'well-tempered' lattice action which eliminates much of this error. The well-tempered action can be readily implemented in lattice simulations for nuclear systems as well as cold atomic Fermi systems
DEFF Research Database (Denmark)
Bergstrøm-Nielsen, Carl
2006-01-01
First part of this work examines the concept of musical parameter theory and discusses its methodical use. Second part is an annotated catalogue of 33 different students' compositions, presented in their totality with English translations, created between 1985 and 2006 as part of the subject...... Intuitive Music at Music Therapy, AAU. 20 of these have sound files as well. The work thus serves as an anthology of this form of composition. All the compositions are systematically presented according to parameters: pitch, duration, dynamics, timbre, density, pulse-no pulse, tempo, stylistic...
The Full—Discrete Mixed Finite Element Methods for Nonlinear Hyperbolic Equations
Institute of Scientific and Technical Information of China (English)
Yanping CHEN; Yunqing HUANG
1998-01-01
This article treats mixed finite element methods for second order nonlinear hyperbolic equations. A fully discrete scheme is presented and improved L2-error estimates are established. The convergence of both the function value and the flux is demonstrated.
Asynchronous discrete event schemes for PDEs
Stone, D.; Geiger, S.; Lord, G. J.
2017-08-01
A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection diffusion equation and advection diffusion reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate a first order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
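A much-simplified 1D version of such a scheme can be sketched as follows. This is our own construction under stated assumptions (the paper treats advection-diffusion-reaction systems on 3D grids): quanta of mass cross the faces of a finite-volume grid in an event-driven, asynchronous fashion, with each face's next event time set by the time the current diffusive flux needs to carry one quantum across it. Events live in a heap, and stale entries are invalidated lazily; the quantum should be small relative to the cell masses of interest for the sketch to behave well.

```python
import heapq

def async_diffusion(mass, D, dx, t_end, quantum):
    """Event-driven 1D diffusion: quanta of mass cross faces one at a time."""
    n = len(mass)
    version = [0] * (n - 1)   # per-face counters invalidate stale heap entries
    heap = []

    def flux(j):              # diffusive flux across face j (cell j -> j+1)
        return D * (mass[j] - mass[j + 1]) / dx

    def schedule(j, now):     # next event: one quantum / current flux
        f = flux(j)
        if abs(f) > 1e-14:
            heapq.heappush(heap, (now + quantum / abs(f), j, version[j]))

    t = 0.0
    for j in range(n - 1):
        schedule(j, t)
    while heap:
        t_ev, j, ver = heapq.heappop(heap)
        if ver != version[j] or t_ev > t_end:
            continue          # stale entry, or beyond the time horizon
        t = t_ev
        d = quantum if flux(j) > 0 else -quantum
        mass[j] -= d          # one quantum moves down the gradient
        mass[j + 1] += d
        for k in (j - 1, j, j + 1):   # only neighbouring faces are affected
            if 0 <= k < n - 1:
                version[k] += 1
                schedule(k, t)
    return mass

m = async_diffusion([0.0] * 10 + [1.0] + [0.0] * 9,
                    D=1.0, dx=1.0, t_end=5.0, quantum=0.01)
assert abs(sum(m) - 1.0) < 1e-9   # mass is conserved (up to rounding)
assert m[10] < 1.0 and m[9] > 0.0 and m[11] > 0.0   # mass has spread outward
```

The self-adaptivity the abstract mentions appears naturally: faces with large fluxes fire events frequently, while quiescent regions of the grid cost nothing until their fluxes grow.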
Hirsch, M; Peinado, E; Valle, J W F
2010-01-01
We propose a new motivation for the stability of dark matter (DM). We suggest that the same non-abelian discrete flavor symmetry which accounts for the observed pattern of neutrino oscillations spontaneously breaks to a Z2 subgroup which renders DM stable. The simplest scheme leads to a scalar doublet DM candidate potentially detectable in nuclear recoil experiments and an inverse neutrino mass hierarchy, hence a neutrinoless double beta decay rate accessible to upcoming searches, while a vanishing reactor angle implies no CP violation in neutrino oscillations.
Wuensche, Andrew
DDLab is interactive graphics software for creating, visualizing, and analyzing many aspects of Cellular Automata, Random Boolean Networks, and Discrete Dynamical Networks in general and studying their behavior, both from the time-series perspective — space-time patterns, and from the state-space perspective — attractor basins. DDLab is relevant to research, applications, and education in the fields of complexity, self-organization, emergent phenomena, chaos, collision-based computing, neural networks, content addressable memory, genetic regulatory networks, dynamical encryption, generative art and music, and the study of the abstract mathematical/physical/dynamical phenomena in their own right.
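The two perspectives DDLab offers, space-time patterns and state-space attractors, can be illustrated minimally with toy code of our own (unrelated to DDLab's actual implementation): a synchronous update rule for elementary cellular automata in Wolfram's numbering, and a search for the attractor cycle reached from a random initial state.

```python
import random

def step_eca(state, rule):
    """One synchronous update of an elementary CA on a ring (Wolfram coding)."""
    n = len(state)
    return tuple(
        (rule >> (4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n])) & 1
        for i in range(n)
    )

def attractor_length(state, rule):
    """Iterate until the trajectory revisits a state; return the cycle length."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step_eca(state, rule)
        t += 1
    return t - seen[state]

random.seed(3)
init = tuple(random.randint(0, 1) for _ in range(12))
period = attractor_length(init, rule=110)
assert period >= 1   # a finite deterministic network must fall onto a cycle
```

Because the state space of a 12-cell binary automaton has only 4096 states, every trajectory is a transient leading into a cycle; basins of attraction, as visualized by DDLab, are the sets of states draining into each such cycle.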
Learning from prescribing errors
Dean, B
2002-01-01
The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...
Error reduction techniques for Monte Carlo neutron transport calculations
International Nuclear Information System (INIS)
Ju, J.H.W.
1981-01-01
Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based, such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with the GWAN estimator are nearly identical to parameters selected by the multistage method. Modified path length estimation (based on the path length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that techniques used for neutron transport problems may be transferred easily to other application areas which are based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas.
A Portfolio Approach to Risk Reduction in Discretely Rebalanced Option Hedges
Antonio S. Mello; Henrik J. Neuhaus
1998-01-01
This paper analyses the accumulated hedging errors generated by discretely rebalanced option hedges. We show that simple generalizations of the prior research can underestimate the variance of the accumulated hedging errors and that even with daily rebalancing, these accumulated hedging errors can introduce substantial risk in arbitrage strategies suggested by the Black-Scholes option pricing model. We also show that the correlation between the accumulated hedging errors for different options...
International Nuclear Information System (INIS)
Souza, Manoelito M. de
1997-01-01
We discuss the physical meaning and the geometric interpretation of implementation in classical field theories. The origin of infinities and other inconsistencies in field theories is traced to fields defined with support on the light cone; a finite and consistent field theory requires a light-cone generator as the field support. Then, we introduce a classical field theory with support on the light cone generators. It results in a description of discrete (point-like) interactions in terms of localized particle-like fields. We find the propagators of these particle-like fields and discuss their physical meaning, properties and consequences. They are conformally invariant, singularity-free, and describe a manifestly covariant (1 + 1)-dimensional dynamics in a (3 + 1) spacetime. Remarkably, this conformal symmetry remains even for the propagation of a massive field in four spacetime dimensions. We apply this formalism to classical electrodynamics and to the theory of General Relativity. The standard formalism with its distributed fields is retrieved in terms of spacetime averages of the discrete fields. Singularities are the by-products of the averaging process. This new formalism enlightens the meaning and the problems of field theory, and may allow a softer transition to a quantum theory. (author)
Sun, HongGuang; Liu, Xiaoting; Zhang, Yong; Pang, Guofei; Garrard, Rhiannon
2017-09-01
Fractional-order diffusion equations (FDEs) extend classical diffusion equations by quantifying anomalous diffusion frequently observed in heterogeneous media. Real-world diffusion can be multi-dimensional, requiring efficient numerical solvers that can handle long-term memory embedded in mass transport. To address this challenge, a semi-discrete Kansa method is developed to approximate the two-dimensional spatiotemporal FDE, where the Kansa approach first discretizes the FDE, then the Gauss-Jacobi quadrature rule solves the corresponding matrix, and finally the Mittag-Leffler function provides an analytical solution for the resultant time-fractional ordinary differential equation. Numerical experiments are then conducted to check how the accuracy and convergence rate of the numerical solution are affected by the distribution mode and number of spatial discretization nodes. Applications further show that the numerical method can efficiently solve two-dimensional spatiotemporal FDE models with either a continuous or discrete mixing measure. Hence this study provides an efficient and fast computational method for modeling super-diffusive, sub-diffusive, and mixed diffusive processes in large, two-dimensional domains with irregular shapes.
Meshes optimized for discrete exterior calculus (DEC).
Energy Technology Data Exchange (ETDEWEB)
Mousley, Sarah C. [Univ. of Illinois, Urbana-Champaign, IL (United States); Deakin, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knupp, Patrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-12-01
We study the optimization of an energy function used by the meshing community to measure and improve mesh quality. This energy is non-traditional because it is dependent on both the primal triangulation and its dual Voronoi (power) diagram. The energy is a measure of the mesh's quality for usage in Discrete Exterior Calculus (DEC), a method for numerically solving PDEs. In DEC, the PDE domain is triangulated and this mesh is used to obtain discrete approximations of the continuous operators in the PDE. The energy of a mesh gives an upper bound on the error of the discrete diagonal approximation of the Hodge star operator. In practice, one begins with an initial mesh and then makes adjustments to produce a mesh of lower energy. However, we have discovered several shortcomings in directly optimizing this energy, e.g. its non-convexity, and we show that the search for an optimized mesh may lead to mesh inversion (malformed triangles). We propose a new energy function to address some of these issues.
Discrete Exterior Calculus Discretization of Incompressible Navier-Stokes Equations
Mohamed, Mamdouh S.
2017-05-23
A conservative discretization of incompressible Navier-Stokes equations over surface simplicial meshes is developed using discrete exterior calculus (DEC). Numerical experiments for flows over surfaces reveal a second order accuracy for the developed scheme when using structured-triangular meshes, and first order accuracy otherwise. The mimetic character of many of the DEC operators provides exact conservation of both mass and vorticity, in addition to superior kinetic energy conservation. The employment of barycentric Hodge star allows the discretization to admit arbitrary simplicial meshes. The discretization scheme is presented along with various numerical test cases demonstrating its main characteristics.
Advances in discrete differential geometry
2016-01-01
This is one of the first books on a newly emerging field of discrete differential geometry and an excellent way to access this exciting area. It surveys the fascinating connections between discrete models in differential geometry and complex analysis, integrable systems and applications in computer graphics. The authors take a closer look at discrete models in differential geometry and dynamical systems. Their curves are polygonal, surfaces are made from triangles and quadrilaterals, and time is discrete. Nevertheless, the difference between the corresponding smooth curves, surfaces and classical dynamical systems with continuous time can hardly be seen. This is the paradigm of structure-preserving discretizations. Current advances in this field are stimulated to a large extent by its relevance for computer graphics and mathematical physics. This book is written by specialists working together on a common research project. It is about differential geometry and dynamical systems, smooth and discrete theories, ...
Poisson hierarchy of discrete strings
International Nuclear Information System (INIS)
Ioannidou, Theodora; Niemi, Antti J.
2016-01-01
The Poisson geometry of a discrete string in three dimensional Euclidean space is investigated. For this the Frenet frames are converted into a spinorial representation, the discrete spinor Frenet equation is interpreted in terms of a transfer matrix formalism, and Poisson brackets are introduced in terms of the spinor components. The construction is then generalised, in a self-similar manner, into an infinite hierarchy of Poisson algebras. As an example, the classical Virasoro (Witt) algebra that determines reparametrisation diffeomorphism along a continuous string, is identified as a particular sub-algebra, in the hierarchy of the discrete string Poisson algebra. - Highlights: • Witt (classical Virasoro) algebra is derived in the case of discrete string. • Infinite dimensional hierarchy of Poisson bracket algebras is constructed for discrete strings. • Spinor representation of discrete Frenet equations is developed.
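The discrete Frenet data the construction above builds on can be sketched for a polygonal string: unit tangents live on the segments, and the bond angle between successive tangents plays the role of a discrete curvature (a minimal illustration of discrete Frenet frames, not the paper's spinor or transfer-matrix formalism; torsion angles are omitted for brevity).

```python
import math

def discrete_frenet(points):
    """Unit tangents and bond angles (discrete curvatures) of a
    polygonal chain given as a list of 3D points."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def unit(a):
        n = math.sqrt(dot(a, a))
        return tuple(x / n for x in a)
    # one unit tangent per segment of the polygon
    t = [unit(sub(points[i + 1], points[i])) for i in range(len(points) - 1)]
    # bond angle between successive tangents, clamped for acos safety
    kappa = [math.acos(max(-1.0, min(1.0, dot(t[i], t[i + 1]))))
             for i in range(len(t) - 1)]
    return t, kappa
```

A straight chain yields zero bond angles, while a right-angle bend yields a bond angle of pi/2, matching the intuition that the angles encode the shape of the discrete string.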
MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.
2007-05-01
SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. SPECT3D is a user-friendly software package that runs
Meertens, C. M.; Murray, D.; McWhirter, J.
2004-12-01
Over the last five years, UNIDATA has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences including atmospheric, ocean, and most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. UNIDATA provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
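The probability ellipse discussed above can be computed directly from a 2D error covariance matrix: its semi-axes come from the eigenvalues and its orientation from the off-diagonal term. The sketch below uses a hard-coded chi-square quantile for a 95% ellipse; the variances and covariance passed in are hypothetical.

```python
import math

def error_ellipse(var_x, var_y, cov_xy):
    """Semi-axes (a, b) and orientation theta of the 95% probability
    ellipse of a 2D Gaussian error distribution."""
    # eigenvalues of the 2x2 covariance matrix via trace/determinant
    tr = var_x + var_y
    det = var_x * var_y - cov_xy ** 2
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    k = 5.991  # chi-square 0.95 quantile with 2 degrees of freedom
    a, b = math.sqrt(k * lam1), math.sqrt(k * lam2)
    theta = 0.5 * math.atan2(2.0 * cov_xy, var_x - var_y)  # major-axis angle
    return a, b, theta
```

For equal variances and zero covariance the ellipse degenerates to the probability circle also treated in the chapter.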
International Nuclear Information System (INIS)
Picard, R.R.
1989-01-01
Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
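The standard first-order propagation formula underlying these topics is easy to state in code. The sketch below assumes independent measured values; the materials-balance example in the usage note uses hypothetical sigmas, not data from the chapter.

```python
import math

def propagate_error(partials, sigmas):
    """First-order (Gaussian) error propagation for a function of
    independent measured values:
    sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2."""
    return math.sqrt(sum((p * s) ** 2 for p, s in zip(partials, sigmas)))
```

For a materials balance MB = input - output - inventory change, the partial derivatives are +1, -1, -1, so the balance uncertainty is the root sum of squares of the three measurement uncertainties, e.g. `propagate_error([1, -1, -1], [0.1, 0.2, 0.2])` gives 0.3.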
Martínez-Legaz, Juan Enrique; Soubeyran, Antoine
2003-01-01
We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that if the agent keeps a memory of his errors, then under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for faster learning.
Principles of discrete time mechanics
Jaroszkiewicz, George
2014-01-01
Could time be discrete on some unimaginably small scale? Exploring the idea in depth, this unique introduction to discrete time mechanics systematically builds the theory up from scratch, beginning with the historical, physical and mathematical background to the chronon hypothesis. Covering classical and quantum discrete time mechanics, this book presents all the tools needed to formulate and develop applications of discrete time mechanics in a number of areas, including spreadsheet mechanics, classical and quantum register mechanics, and classical and quantum mechanics and field theories. A consistent emphasis on contextuality and the observer-system relationship is maintained throughout.
Generalized Gaussian Error Calculus
Grabe, Michael
2010-01-01
For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
Dark discrete gauge symmetries
International Nuclear Information System (INIS)
Batell, Brian
2011-01-01
We investigate scenarios in which dark matter is stabilized by an Abelian Z_N discrete gauge symmetry. Models are surveyed according to symmetries and matter content. Multicomponent dark matter arises when N is not prime and Z_N contains one or more subgroups. The dark sector interacts with the visible sector through the renormalizable kinetic mixing and Higgs portal operators, and we highlight the basic phenomenology in these scenarios. In particular, multiple species of dark matter can lead to an unconventional nuclear recoil spectrum in direct detection experiments, while the presence of new light states in the dark sector can dramatically affect the decays of the Higgs at the Tevatron and LHC, thus providing a window into the gauge origin of the stability of dark matter.
International Nuclear Information System (INIS)
Noyes, H.P.; Starson, S.
1991-03-01
Discrete physics, because it replaces time evolution generated by the energy operator with a global bit-string generator (program universe) and replaces ''fields'' with the relativistic Wheeler-Feynman ''action at a distance,'' allows the consistent formulation of the concept of signed gravitational charge for massive particles. The resulting prediction made by this version of the theory is that free anti-particles near the surface of the earth will ''fall'' up with the same acceleration that the corresponding particles fall down. So far as we can see, no current experimental information is in conflict with this prediction of our theory. The experimentum crucis will be one of the anti-proton or anti-hydrogen experiments at CERN. Our prediction should be much easier to test than the small effects which those experiments are currently designed to detect or bound. 23 refs
A new discrete dipole kernel for quantitative susceptibility mapping.
Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian
2018-09-01
Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation both with synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed less over-oscillation and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel has a straightforward implementation to existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
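The continuous-versus-discrete distinction can be illustrated in one k-space point. The continuous dipole kernel is D(k) = 1/3 - kz^2/|k|^2; a common discrete variant replaces each k_i^2 by the eigenvalue 4 sin^2(k_i/2) of the discrete Laplacian (this is one standard discretization for illustration, not necessarily the exact kernel proposed in the paper; z is taken as the main-field direction, unit voxels).

```python
import math

def dipole_kernel(kx, ky, kz, discrete=False):
    """Dipole kernel value at one k-space point (radians per voxel).
    discrete=True swaps k_i^2 for the discrete-Laplacian eigenvalue
    4*sin^2(k_i/2); both agree in the low-frequency limit."""
    if discrete:
        fx = 4.0 * math.sin(kx / 2.0) ** 2
        fy = 4.0 * math.sin(ky / 2.0) ** 2
        fz = 4.0 * math.sin(kz / 2.0) ** 2
        k2, z2 = fx + fy + fz, fz
    else:
        k2, z2 = kx * kx + ky * ky + kz * kz, kz * kz
    # the k = 0 value is conventionally set to 1/3 (or 0) in QSM codes
    return 1.0 / 3.0 if k2 == 0 else 1.0 / 3.0 - z2 / k2
```

At small |k| the two forms coincide, while near the Nyquist frequency the sin^2 terms damp the high-frequency response, which is where the continuous kernel suffers aliasing.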
Medication errors: prescribing faults and prescription errors.
Velo, Giampaolo P; Minuz, Pietro
2009-06-01
1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.
International Nuclear Information System (INIS)
Tang, K.
2012-01-01
When numerically investigating multiphase phenomena during severe accidents in a reactor system, characteristic lengths of the multi-fluid zone (non-reactive and reactive) are found to be much smaller than the volume of the reactor containment, which makes the direct modeling of the configuration hardly achievable. Alternatively, we propose to consider the physical multiphase mixture zone as an infinitely thin interface. Then, the reactive Riemann solver is inserted into the Reactive Discrete Equations Method (RDEM) to compute high speed combustion waves represented by discontinuous interfaces. An anti-diffusive approach is also coupled with RDEM to accurately simulate reactive interfaces. Increased robustness and efficiency when computing both multiphase interfaces and reacting flows are achieved thanks to an original upwind downwind-controlled splitting method (UDCS). UDCS is capable of accurately solving interfaces on multi-dimensional unstructured meshes, including reacting fronts for both deflagration and detonation configurations. (author)
Control of Discrete Event Systems
Smedinga, Rein
1989-01-01
Discrete-event systems play a role in many fields. This thesis focuses on the order of events, leaving timing aspects out of consideration. In that case, discrete-event systems can be modelled well by making use of
Discrete Mathematics and Its Applications
Oxley, Alan
2010-01-01
The article gives ideas that lecturers of undergraduate Discrete Mathematics courses can use in order to make the subject more interesting for students and encourage them to undertake further studies in the subject. It is possible to teach Discrete Mathematics with little or no reference to computing. However, students are more likely to be…
Discrete Mathematics and Curriculum Reform.
Kenney, Margaret J.
1996-01-01
Defines discrete mathematics as the mathematics necessary to effect reasoned decision making in finite situations and explains how its use supports the current view of mathematics education. Discrete mathematics can be used by curriculum developers to improve the curriculum for students of all ages and abilities. (SLD)
Connections on discrete fibre bundles
International Nuclear Information System (INIS)
Manton, N.S.; Cambridge Univ.
1987-01-01
A new approach to gauge fields on a discrete space-time is proposed, in which the fundamental object is a discrete version of a principal fibre bundle. If the bundle is twisted, the gauge fields are topologically non-trivial automatically. (orig.)
Discrete dynamics versus analytic dynamics
DEFF Research Database (Denmark)
Toxværd, Søren
2014-01-01
For discrete classical molecular dynamics obtained by the "Verlet" algorithm (VA) with time increment h, there exists a shadow Hamiltonian H˜ with energy E˜(h), for which the discrete particle positions lie on the analytic trajectories for H˜. Here, we prove that, independent of such an analytic analogy, there exists an exact hidden energy invariance E* for VA dynamics. The fact that the discrete VA dynamics has the same invariances as Newtonian dynamics raises the question of which of the formulations is correct, or alternatively, which is the most appropriate formulation of classical dynamics. In this context, the relation between the discrete VA dynamics and the (general) discrete dynamics investigated by Lee [Phys. Lett. B122, 217 (1983)] is presented and discussed.
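The bounded energy behavior the abstract refers to is easy to observe numerically. Below is a minimal velocity-Verlet integrator (which generates the same positions as the position "Verlet" algorithm) applied to a harmonic oscillator; the discrete trajectory conserves an h-dependent shadow energy almost exactly, so the true energy fluctuates within a narrow band and shows no drift. This is an illustrative sketch, not code from the paper.

```python
def verlet(x0, v0, force, mass, h, steps):
    """Velocity-Verlet integration of a 1D particle; returns the
    list of (position, velocity) pairs along the discrete trajectory."""
    x, v = x0, v0
    a = force(x) / mass
    traj = [(x, v)]
    for _ in range(steps):
        x += v * h + 0.5 * a * h * h          # position update
        a_new = force(x) / mass               # force at the new position
        v += 0.5 * (a + a_new) * h            # velocity update
        a = a_new
        traj.append((x, v))
    return traj
```

For the oscillator with force(x) = -x and mass 1, the energy 0.5 v^2 + 0.5 x^2 along the discrete trajectory stays within an O(h^2) band of its initial value over arbitrarily many steps, the practical signature of the hidden invariance.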
Modern approaches to discrete curvature
Romon, Pascal
2017-01-01
This book provides a valuable glimpse into discrete curvature, a rich new field of research which blends discrete mathematics, differential geometry, probability and computer graphics. It includes a vast collection of ideas and tools which will offer something new to all interested readers. Discrete geometry has arisen as much as a theoretical development as in response to unforeseen challenges coming from applications. Discrete and continuous geometries have turned out to be intimately connected. Discrete curvature is the key concept connecting them through many bridges in numerous fields: metric spaces, Riemannian and Euclidean geometries, geometric measure theory, topology, partial differential equations, calculus of variations, gradient flows, asymptotic analysis, probability, harmonic analysis, graph theory, etc. In spite of its crucial importance both in theoretical mathematics and in applications, up to now, almost no books have provided a coherent outlook on this emerging field.
Energy Technology Data Exchange (ETDEWEB)
Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))
1990-01-01
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Discretion and Disproportionality
Directory of Open Access Journals (Sweden)
Jason A. Grissom
2015-12-01
Full Text Available Students of color are underrepresented in gifted programs relative to White students, but the reasons for this underrepresentation are poorly understood. We investigate the predictors of gifted assignment using nationally representative, longitudinal data on elementary students. We document that even among students with high standardized test scores, Black students are less likely to be assigned to gifted services in both math and reading, a pattern that persists when controlling for other background factors, such as health and socioeconomic status, and characteristics of classrooms and schools. We then investigate the role of teacher discretion, leveraging research from political science suggesting that clients of government services from traditionally underrepresented groups benefit from diversity in the providers of those services, including teachers. Even after conditioning on test scores and other factors, Black students indeed are referred to gifted programs, particularly in reading, at significantly lower rates when taught by non-Black teachers, a concerning result given the relatively low incidence of assignment to own-race teachers among Black students.
International Nuclear Information System (INIS)
Vlad, Valentin I.; Ionescu-Pallas, Nicholas
2000-10-01
The Planck radiation spectrum of ideal cubic and spherical cavities, in the region of small adiabatic invariance, γ = TV^(1/3), is shown to be discrete and strongly dependent on the cavity geometry and temperature. This behavior is the consequence of the random distribution of the state weights in the cubic cavity and of the random overlapping of the successive multiplet components, for the spherical cavity. The total energy (obtained by summing up the exact contributions of the eigenvalues and their weights, for low values of the adiabatic invariance) no longer obeys the Stefan-Boltzmann law. The new law includes a corrective factor depending on γ and imposes a faster decrease of the total energy to zero, for γ → 0. We have defined the double quantized regime both for cubic and spherical cavities by the superior and inferior limits put on the principal quantum numbers or the adiabatic invariance. The total energy of the double quantized cavities shows large differences from the classical calculations over unexpectedly large intervals, which are measurable and reveal important macroscopic quantum effects. (author)
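The mode-sum idea behind this result can be sketched in dimensionless form: sum the Planck occupation over the discrete modes (nx, ny, nz) of a cubic cavity instead of integrating over a continuum. The sketch below uses unit constants and two polarizations per mode, with `a` playing the role of the dimensionless mode spacing (roughly 1/γ); it illustrates the deviation from the continuum Stefan-Boltzmann value, not the paper's exact weights.

```python
import math

def cavity_mode_sum(a, n_max):
    """Dimensionless blackbody energy of a cubic cavity as a discrete
    sum over standing-wave modes; s = a*|n| is the dimensionless mode
    energy and 1/(exp(s)-1) the Planck occupation."""
    total = 0.0
    for nx in range(1, n_max + 1):
        for ny in range(1, n_max + 1):
            for nz in range(1, n_max + 1):
                s = a * math.sqrt(nx * nx + ny * ny + nz * nz)
                total += 2.0 * s / (math.exp(s) - 1.0)  # 2 polarizations
    return total
```

In the continuum limit a → 0 the sum approaches π^5/(15 a^3), which is the Stefan-Boltzmann T^4 law in these units; at moderate a the discrete sum falls measurably below the continuum value, the kind of geometry- and temperature-dependent deviation the abstract describes.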
Tedetti, Marc; Cuet, Pascale; Guigue, Catherine; Goutx, Madeleine
2011-05-01
highly impacted by sewage effluents, numerous in this coastal area of La Réunion Island. We conclude that multi-dimensional fluorescence spectroscopy (EEM) coupled to the determination of HIX and BIX is a good tool for assessing the origin and distribution of DOM in the coral reef ecosystems submitted to anthropogenic impacts. Copyright © 2011 Elsevier B.V. All rights reserved.
Prescription Errors in Psychiatry
African Journals Online (AJOL)
Arun Kumar Agnihotri
clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.
Towards automatic global error control: Computable weak error expansion for the tau-leap method
Karlsson, Peer Jesper; Tempone, Raul
2011-01-01
This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
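The two simulation strategies compared in this record can be illustrated with a minimal sketch for a pure-decay reaction X → ∅ (all function names and parameter values here are illustrative, not taken from the paper): the exact SSA fires one event per exponential waiting time, while tau-leaping fires a Poisson number of events per fixed step of length tau.

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson variate by Knuth's method (fine for small rates)."""
    L, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p <= L:
            return n
        n += 1

def gillespie_decay(x0, k, t_end, rng):
    """Exact (SSA) simulation of decay X -> 0 with propensity k*X."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.expovariate(k * x)  # exponential waiting time to next event
        if t > t_end:
            break
        x -= 1                       # one decay event fires
    return x

def tau_leap_decay(x0, k, t_end, tau, rng):
    """Tau-leap simulation: propensity frozen over each step of length tau."""
    t, x = 0.0, x0
    while t < t_end and x > 0:
        x -= min(x, poisson(k * x * tau, rng))  # Poisson number of decays
        t += tau
    return x
```

Averaged over many runs, both estimators approach the exact mean x0·exp(-k·t_end); the tau-leap result carries an additional bias of order tau, which is the kind of weak error the paper's expansions quantify.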
Perfect discretization of path integrals
International Nuclear Information System (INIS)
Steinhaus, Sebastian
2012-01-01
In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discussed. Furthermore we show that a reparametrization invariant path integral implies discretization independence and acts as a projector onto physical states.
Perfect discretization of path integrals
Steinhaus, Sebastian
2012-05-01
In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discussed. Furthermore we show that a reparametrization invariant path integral implies discretization independence and acts as a projector onto physical states.
The origin of discrete particles
Bastin, T
2009-01-01
This book is a unique summary of the results of a long research project undertaken by the authors on discreteness in modern physics. In contrast with the usual expectation that discreteness is the result of mathematical tools for insertion into a continuous theory, this more basic treatment builds up the world from the discrimination of discrete entities. This gives an algebraic structure in which certain fixed numbers arise. One of these agrees with the measured value of the fine-structure constant to one part in 10,000,000 (10^7). Sample Chapter(s). Foreword (56 KB). Chapter 1: Introduction
Kartush, J M
1996-11-01
Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.
Synchronization Techniques in Parallel Discrete Event Simulation
Lindén, Jonatan
2018-01-01
Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...
3-D Discrete Analytical Ridgelet Transform
Helbert, David; Carré, Philippe; Andrès, Éric
2006-01-01
In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform with the discrete analytical geometry theory by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines:...
Liao, Bolin; Zhang, Yunong; Jin, Long
2016-02-01
In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN), and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h^3), O(h^2), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
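The different residual-error orders quoted above come from the order of the underlying difference formula. A minimal sketch (generic difference formulas, not the paper's exact ZNN discretization) shows the behavior: halving the step h roughly halves the error of an Euler-type O(h) formula but roughly quarters the error of a three-point Taylor-type O(h^2) formula.

```python
import math

def euler_diff(f, t, h):
    """Euler-type forward difference: truncation error O(h)."""
    return (f(t + h) - f(t)) / h

def taylor_diff(f, t, h):
    """Three-point backward Taylor-type formula: truncation error O(h^2)."""
    return (3.0 * f(t) - 4.0 * f(t - h) + f(t - 2.0 * h)) / (2.0 * h)

def error(rule, h):
    # true derivative of sin is cos; measure |approx - exact| at t = 1
    return abs(rule(math.sin, 1.0, h) - math.cos(1.0))
```

With h halved from 0.1 to 0.05, the Euler error ratio comes out near 2 and the Taylor-type ratio near 4, matching the O(h) and O(h^2) patterns.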
An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems
Karlsson, Peer Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul
2015-01-01
This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system
Error analysis in Fourier methods for option pricing for exponential Lévy processes
Crocce, Fabian; Häppölä, Juho; Keissling, Jonas; Tempone, Raul
2015-01-01
We derive an error bound for utilising the discrete Fourier transform method for solving Partial Integro-Differential Equations (PIDE) that describe european option prices for exponential Lévy driven asset prices. We give sufficient conditions
Discrete Discriminant analysis based on tree-structured graphical models
DEFF Research Database (Denmark)
Perez de la Cruz, Gonzalo; Eslava, Guillermina
The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance using estimated error rates for real and simulated data. The results show that discriminant analysis based on tree-structured graphical models is a simple nonlinear method competitive with, and sometimes superior to, other well-known linear methods like those assuming mutual independence between variables and linear logistic regression.
Simplified discrete ordinates method in spherical geometry
International Nuclear Information System (INIS)
Elsawi, M.A.; Abdurrahman, N.M.; Yavuz, M.
1999-01-01
The authors extend the method of simplified discrete ordinates (SS_N) to spherical geometry. The motivation for such an extension is that the appearance of the angular derivative (redistribution) term in the spherical geometry transport equation makes it difficult to decide which differencing scheme best approximates this term. In the present method, the angular derivative term is treated implicitly and thus avoids the need for the approximation of such a term. This method can be considered to be analytic in nature with the advantage of being free from spatial truncation errors from which most of the existing transport codes suffer. In addition, it treats the angular redistribution term implicitly with the advantage of avoiding approximations to that term. The method also can handle scattering in a very general manner with the advantage of spending almost the same computational effort for all scattering modes. Moreover, the methods can easily be applied to higher-order S_N calculations
Exact analysis of discrete data
Hirji, Karim F
2005-01-01
Researchers in fields ranging from biology and medicine to the social sciences, law, and economics regularly encounter variables that are discrete or categorical in nature. While there is no dearth of books on the analysis and interpretation of such data, these generally focus on large sample methods. When sample sizes are not large or the data are otherwise sparse, exact methods--methods not based on asymptotic theory--are more accurate and therefore preferable. This book introduces the statistical theory, analysis methods, and computation techniques for exact analysis of discrete data. After reviewing the relevant discrete distributions, the author develops the exact methods from the ground up in a conceptually integrated manner. The topics covered range from univariate discrete data analysis, a single and several 2 x 2 tables, a single and several 2 x K tables, incidence density and inverse sampling designs, unmatched and matched case-control studies, paired binary and trinomial response models, and Markov...
Discrete geometric structures for architecture
Pottmann, Helmut
2010-01-01
. The talk will provide an overview of recent progress in this field, with a particular focus on discrete geometric structures. Most of these result from practical requirements on segmenting a freeform shape into planar panels and on the physical realization
Causal Dynamics of Discrete Surfaces
Directory of Open Access Journals (Sweden)
Pablo Arrighi
2014-03-01
Full Text Available We formalize the intuitive idea of a labelled discrete surface which evolves in time, subject to two natural constraints: the evolution does not propagate information too fast; and it acts everywhere the same.
Error Concealment for 3-D DWT Based Video Codec Using Iterative Thresholding
DEFF Research Database (Denmark)
Belyaev, Evgeny; Forchhammer, Søren; Codreanu, Marian
2017-01-01
Error concealment for video coding based on a 3-D discrete wavelet transform (DWT) is considered. We assume that the video sequence has a sparse representation in a known basis different from the DWT, e.g., in a 2-D discrete cosine transform basis. Then, we formulate the concealment problem as l1...
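The l1 formulation mentioned above is typically solved by iterative soft-thresholding. A generic ISTA sketch for a tiny l1-regularized least-squares problem (this is not the paper's concealment codec; the matrix, data, and parameters are illustrative) looks like:

```python
def soft(v, thresh):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    if v > thresh:
        return v - thresh
    if v < -thresh:
        return v + thresh
    return 0.0

def ista(A, b, lam, step, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b, gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft-thresholding
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# sparse ground truth [1, 0]; ISTA recovers the support exactly
A = [[1.0, 0.5], [0.5, 1.0]]
b = [1.0, 0.5]
x = ista(A, b, lam=0.05, step=0.4)
```

The thresholding step is what produces exact zeros in the solution, which is why l1 minimization recovers sparse representations of missing video data.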
Perfect discretization of path integrals
Steinhaus, Sebastian
2011-01-01
In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discu...
Alfa, Attahiru S
2016-01-01
This book introduces the theoretical fundamentals for modeling queues in discrete-time, and the basic procedures for developing queuing models in discrete-time. There is a focus on applications in modern telecommunication systems. It presents how most queueing models in discrete-time can be set up as discrete-time Markov chains. Techniques such as matrix-analytic methods (MAM) that can used to analyze the resulting Markov chains are included. This book covers single node systems, tandem system and queueing networks. It shows how queues with time-varying parameters can be analyzed, and illustrates numerical issues associated with computations for the discrete-time queueing systems. Optimal control of queues is also covered. Applied Discrete-Time Queues targets researchers, advanced-level students and analysts in the field of telecommunication networks. It is suitable as a reference book and can also be used as a secondary text book in computer engineering and computer science. Examples and exercises are includ...
Estimating and localizing the algebraic and total numerical errors using flux reconstructions
Czech Academy of Sciences Publication Activity Database
Papež, Jan; Strakoš, Z.; Vohralík, M.
2018-01-01
Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016
Error minimizing algorithms for nearest neighbor classifiers
Energy Technology Data Exchange (ETDEWEB)
Porter, Reid B [Los Alamos National Laboratory]; Hush, Don [Los Alamos National Laboratory]; Zimmer, G. Beate [TEXAS A&M
2011-01-03
Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.
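The cost-sensitive idea (demanding stronger evidence before declaring a detection, so that false alarms become rarer) can be sketched with an ordinary k-nearest-neighbor vote. This illustrates only the loss-function idea, not the OHM classifier itself; the data and the `fa_cost` parameter are made up for the example.

```python
def knn_predict(train, x, k=3, fa_cost=1.0):
    """k-NN vote with a cost-sensitive decision threshold.

    train: list of (feature, label) pairs with labels in {0, 1}.
    Declares class 1 only if the estimated P(class 1) among the k nearest
    neighbours exceeds fa_cost / (fa_cost + 1); raising fa_cost penalises
    false alarms by requiring a larger majority for a positive call.
    """
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    p1 = sum(label for _, label in neighbours) / k
    return 1 if p1 > fa_cost / (fa_cost + 1.0) else 0
```

With `fa_cost=1.0` the rule is the usual majority vote; with a larger false-alarm cost the same borderline point flips to the negative class.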
Discrete Variational Approach for Modeling Laser-Plasma Interactions
Reyes, J. Paxon; Shadwick, B. A.
2014-10-01
The traditional approach for fluid models of laser-plasma interactions begins by approximating fields and derivatives on a grid in space and time, leading to difference equations that are manipulated to create a time-advance algorithm. In contrast, by introducing the spatial discretization at the level of the action, the resulting Euler-Lagrange equations have particular differencing approximations that will exactly satisfy discrete versions of the relevant conservation laws. For example, applying a spatial discretization in the Lagrangian density leads to continuous-time, discrete-space equations and exact energy conservation regardless of the spatial grid resolution. We compare the results of two discrete variational methods using the variational principles from Chen and Sudan and Brizard. Since the fluid system conserves energy and momentum, the relative errors in these conserved quantities are well-motivated physically as figures of merit for a particular method. This work was supported by the U. S. Department of Energy under Contract No. DE-SC0008382 and by the National Science Foundation under Contract No. PHY-1104683.
Coes, Alissa L; Paretti, Nicholas V; Foreman, William T; Iverson, Jana L; Alvarez, David A
2014-03-01
A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19-23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method. Published by Elsevier B.V.
Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.
2014-01-01
A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.
Zero-Error Capacity of a Class of Timing Channels
DEFF Research Database (Denmark)
Kovacevic, M.; Popovski, Petar
2014-01-01
We analyze the problem of zero-error communication through timing channels that can be interpreted as discrete-time queues with bounded waiting times. The channel model includes the following assumptions: 1) time is slotted; 2) at most N particles are sent in each time slot; 3) every particle is ...
Conditional Standard Errors of Measurement for Scale Scores.
Kolen, Michael J.; And Others
1992-01-01
A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)
The error in total error reduction.
Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R
2014-02-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
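The difference between total and local error reduction is easiest to see in a classic blocking design. Below is a minimal delta-rule sketch (the learning rate, trial counts, and function names are illustrative, not the authors' simulations): under TER the compound prediction already matches the outcome after pretraining, so the added cue learns nothing, whereas under LER each cue reduces its own error independently.

```python
def train_ter(trials, alpha=0.1, n_cues=2):
    """Total error reduction (Rescorla-Wagner style): one shared error term."""
    w = [0.0] * n_cues
    for cues, outcome in trials:
        pred = sum(w[i] for i in cues)   # compound prediction from all cues
        delta = outcome - pred           # single total error for the trial
        for i in cues:
            w[i] += alpha * delta
    return w

def train_ler(trials, alpha=0.1, n_cues=2):
    """Local error reduction: each cue corrects its own prediction error."""
    w = [0.0] * n_cues
    for cues, outcome in trials:
        for i in cues:
            w[i] += alpha * (outcome - w[i])
    return w

# blocking design: cue 0 pretrained alone, then the 0+1 compound
trials = [([0], 1.0)] * 100 + [([0, 1], 1.0)] * 100
```

TER leaves the added cue's weight near zero (blocking), while LER drives it toward the outcome value; which pattern better matches behavioral data is exactly what the paper's model comparison addresses.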
Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano
2013-01-01
Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...
National Research Council Canada - National Science Library
Byrne, Michael D
2006-01-01
.... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...
International Nuclear Information System (INIS)
Wahlstroem, B.
1993-01-01
Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research in human error and it concludes with suggestions for further work. (orig.)
Discrete Curvature Theories and Applications
Sun, Xiang
2016-08-25
Discrete Differential Geometry (DDG) concerns discrete counterparts of notions and methods in differential geometry. This thesis deals with a core subject in DDG, discrete curvature theories on various types of polyhedral surfaces that are practically important for free-form architecture, sunlight-redirecting shading systems, and face recognition. Modeled as polyhedral surfaces, the shapes of free-form structures may have to satisfy different geometric or physical constraints. We study a combination of geometry and physics: the discrete surfaces that can stand on their own, as well as having proper shapes for the manufacture. These proper shapes, known as circular and conical meshes, are closely related to discrete principal curvatures. We study curvature theories that make such surfaces possible. Shading systems of freeform building skins are new types of energy-saving structures that can re-direct the sunlight. From these systems, discrete line congruences across polyhedral surfaces can be abstracted. We develop a new curvature theory for polyhedral surfaces equipped with normal congruences, a particular type of congruence defined by linear interpolation of vertex normals. The main results are a discussion of various definitions of normality, a detailed study of the geometry of such congruences, and a concept of curvatures and shape operators associated with the faces of a triangle mesh. These curvatures are compatible with both normal congruences and the Steiner formula. In addition to architecture, we consider the role of discrete curvatures in face recognition. We use geometric measure theory to introduce the notion of asymptotic cones associated with a singular subspace of a Riemannian manifold, which is an extension of the classical notion of asymptotic directions. We get a simple expression of these cones for polyhedral surfaces, as well as convergence and approximation theorems. We use the asymptotic cones as facial descriptors and demonstrate the
Analysis of Discrete Mittag-Leffler Functions
Directory of Open Access Journals (Sweden)
N. Shobanadevi
2015-03-01
Full Text Available Discrete Mittag-Leffler functions play a major role in the development of the theory of discrete fractional calculus. In the present article, we analyze qualitative properties of discrete Mittag-Leffler functions and establish sufficient conditions for convergence, oscillation and summability of the infinite series associated with discrete Mittag-Leffler functions.
Foundations of a discrete physics
International Nuclear Information System (INIS)
McGoveran, D.; Noyes, P.
1988-01-01
Starting from the principles of finiteness, discreteness, finite computability and absolute nonuniqueness, we develop the ordering operator calculus, a strictly constructive mathematical system having the empirical properties required by quantum mechanical and special relativistic phenomena. We show how to construct discrete distance functions, and both rectangular and spherical coordinate systems (with a discrete version of π). The richest discrete space constructible without a preferred axis and preserving translational and rotational invariance is shown to be a discrete 3-space with the usual symmetries. We introduce a local ordering parameter with local (proper) time-like properties and universal ordering parameters with global (cosmological) time-like properties. Constructed "attribute velocities" connect ensembles with attributes that are invariant as the appropriate time-like parameter increases. For each such attribute, we show how to construct attribute velocities which must satisfy the "relativistic Doppler shift" and the "relativistic velocity composition law," as well as the Lorentz transformations. By construction, these velocities have finite maximum and minimum values. In the space of all attributes, the minimum of these maximum velocities will predominate in all multiple attribute computations, and hence can be identified as a fundamental limiting velocity. General commutation relations are constructed which under the physical interpretation are shown to reduce to the usual quantum mechanical commutation relations. 50 refs., 18 figs
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
International Nuclear Information System (INIS)
Jakeman, J.D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation
Bryant, C. M.; Prudhomme, S.; Wildey, T.
2015-01-01
In this work, we investigate adaptive approaches to control errors in response surface approximations computed from numerical approximations of differential equations with uncertain or random data and coefficients. The adaptivity of the response surface approximation is based on a posteriori error estimation, and the approach relies on the ability to decompose the a posteriori error estimate into contributions from the physical discretization and the approximation in parameter space. Errors are evaluated in terms of linear quantities of interest using adjoint-based methodologies. We demonstrate that a significant reduction in the computational cost required to reach a given error tolerance can be achieved by refining the dominant error contributions rather than uniformly refining both the physical and stochastic discretization. Error decomposition is demonstrated for a two-dimensional flow problem, and adaptive procedures are tested on a convection-diffusion problem with discontinuous parameter dependence and a diffusion problem, where the diffusion coefficient is characterized by a 10-dimensional parameter space.
Metcalfe, Janet
2017-01-01
Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…
Discretizing LTI Descriptor (Regular) Differential Input Systems with Consistent Initial Conditions
Directory of Open Access Journals (Sweden)
Athanasios D. Karageorgos
2010-01-01
Full Text Available A technique for efficiently discretizing the solution of a linear descriptor (regular) differential input system with consistent initial conditions and time-invariant coefficients (LTI) is introduced and fully discussed. Additionally, an upper bound for the error ‖x̄(kT) − x̄_k‖ that derives from the discretization procedure is also provided. Practically speaking, we are interested in such systems because they are inherent in many physical, economic and engineering phenomena.
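The flavor of the discretization error ‖x̄(kT) − x̄_k‖ can be sketched for the simplest scalar case (an illustration only; the paper treats general descriptor systems, not this forward-Euler toy):

```python
# Minimal sketch: for the scalar LTI system x'(t) = a*x(t) with consistent
# initial condition x(0) = x0, the exact sample is x(kT) = exp(a*k*T)*x0,
# the forward-Euler discretization gives x_k = (1 + a*T)**k * x0, and the
# error |x(kT) - x_k| shrinks as the sampling period T is refined.
import math

def discretization_error(a, x0, t_final, T):
    k = round(t_final / T)
    exact = math.exp(a * k * T) * x0          # exact solution at t = kT
    euler = (1.0 + a * T) ** k * x0           # forward-Euler iterate x_k
    return abs(exact - euler)

err_coarse = discretization_error(a=-1.0, x0=1.0, t_final=1.0, T=0.1)
err_fine = discretization_error(a=-1.0, x0=1.0, t_final=1.0, T=0.01)
# Refining T by 10x reduces the error by roughly 10x (first-order method).
```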
Discrete differential geometry. Consistency as integrability
Bobenko, Alexander I.; Suris, Yuri B.
2005-01-01
A new field of discrete differential geometry is presently emerging on the border between differential and discrete geometry. Whereas classical differential geometry investigates smooth geometric shapes (such as surfaces), and discrete geometry studies geometric shapes with a finite number of elements (such as polyhedra), discrete differential geometry aims at the development of discrete equivalents of the notions and methods of smooth surface theory. Current interest in this field derives not ...
Integrable structure in discrete shell membrane theory.
Schief, W K
2014-05-08
We present natural discrete analogues of two integrable classes of shell membranes. By construction, these discrete shell membranes are in equilibrium with respect to suitably chosen internal stresses and external forces. The integrability of the underlying equilibrium equations is proved by relating the geometry of the discrete shell membranes to discrete O surface theory. We establish connections with generalized barycentric coordinates and nine-point centres and identify a discrete version of the classical Gauss equation of surface theory.
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Degree distribution in discrete case
International Nuclear Information System (INIS)
Wang, Li-Na; Chen, Bin; Yan, Zai-Zai
2011-01-01
Vertex degree in many network models and real-life networks is limited to non-negative integers. By means of measure and integral, the relation between the degree distribution and the cumulative degree distribution in the discrete case is analyzed. The degree distribution, obtained by differentiating its cumulative, is only suitable for the continuous case or the discrete case with constant degree change. When the degree change is not constant but proportional to the degree itself, the power-law degree distribution and its cumulative have the same exponent, and the mean value is finite for a power-law exponent greater than 1. -- Highlights: → Degree change is the crux for using the cumulative degree distribution method. → It suits the discrete case with constant degree change. → If degree change is proportional to degree, the power-law degree distribution and its cumulative have the same exponent. → In addition, the mean value is finite for a power-law exponent greater than 1.
Institute of Scientific and Technical Information of China (English)
付云骁; 贾利民; 季常煦; 姚德臣; 李文球
2014-01-01
Extracting the time-domain or frequency-domain features of vibration signals is a conventional method for rolling bearing fault diagnosis, but its diagnostic accuracy needs improvement. In this paper, taking the multi-dimensional vibration characteristic parameters in the time and frequency domains as indexes, and the historical diagnostic accuracy as the parameter weights, features are extracted and faults identified for fault-free rolling bearings and for bearings with the frequently occurring ball, inner-race and outer-race faults. The multi-dimensional time-frequency-domain vibration characteristic is the assemblage of single-dimensional features weighted by their diagnostic accuracy. A BP neural network is used for intelligent fault classification of signals according to the time-domain features (TDF), the IMF energy moment (IEM), the wavelet packet energy moment (WPEM), and the multi-dimensional features, respectively, and the diagnoses are compared with one another. The experimental results verify that fault diagnosis using the multi-dimensional time-frequency-domain feature parameters is more accurate and more efficient than diagnosis with single-dimensional features, so the method can be applied in the field of rolling bearing fault diagnosis.
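The accuracy-weighting idea can be sketched as follows (a hypothetical fusion step, not the authors' BP-network pipeline; all class scores and accuracy figures below are made up):

```python
# Hedged sketch of accuracy-weighted fusion: each single feature set
# (e.g. TDF, IEM, WPEM) yields its own per-class scores, and the fused
# decision weights each feature set by its historical diagnostic accuracy.
def fuse_diagnoses(scores_by_feature, historical_accuracy):
    # scores_by_feature: {feature_name: {fault_class: score}}
    # historical_accuracy: {feature_name: accuracy in [0, 1]}
    total = sum(historical_accuracy.values())
    fused = {}
    for feat, scores in scores_by_feature.items():
        w = historical_accuracy[feat] / total   # normalized weight
        for cls, s in scores.items():
            fused[cls] = fused.get(cls, 0.0) + w * s
    return max(fused, key=fused.get), fused

scores = {
    "TDF":  {"no_fault": 0.2, "ball": 0.5, "inner_race": 0.3},
    "IEM":  {"no_fault": 0.1, "ball": 0.6, "inner_race": 0.3},
    "WPEM": {"no_fault": 0.4, "ball": 0.3, "inner_race": 0.3},
}
accuracy = {"TDF": 0.80, "IEM": 0.90, "WPEM": 0.60}
label, fused = fuse_diagnoses(scores, accuracy)
```

Because each per-feature score vector sums to one and the weights are normalized, the fused scores remain a proper distribution over fault classes.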
International Nuclear Information System (INIS)
Bas, Esra
2011-01-01
In this paper, a general framework for child injury prevention and a multi-objective, multi-dimensional mixed 0-1 knapsack model were developed to determine the optimal time to introduce preventive measures against child injuries. Furthermore, the model maximises the prevention of injuries with the highest risks for each age period by combining preventive measures and supervision as well as satisfying budget limits and supervision time constraints. The risk factors for each injury, variable, and time period were based on risk priority numbers (RPNs) obtained from failure mode and effects analysis (FMEA) methodology, and these risk factors were incorporated into the model as objective function parameters. A numerical experiment based on several different situations was conducted, revealing that the model provided optimal timing of preventive measures for child injuries based on variables considered.
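A stripped-down sketch of the budgeted selection step (a plain single-objective 0-1 knapsack over RPN values; the paper's model is multi-objective, mixed, and also handles supervision time; all measure names and numbers below are hypothetical):

```python
# Illustrative 0-1 knapsack: choose preventive measures to maximise the
# total risk priority number (RPN) covered, subject to an integer budget.
def best_measures(measures, budget):
    # measures: list of (name, cost, rpn_covered) with integer costs.
    best = {0: (0, ())}                        # spent -> (rpn, chosen names)
    for name, cost, rpn in measures:
        # Iterate over a snapshot so each measure is used at most once.
        for spent, (tot, chosen) in sorted(best.items()):
            s = spent + cost
            if s <= budget and (s not in best or best[s][0] < tot + rpn):
                best[s] = (tot + rpn, chosen + (name,))
    return max(best.values())

rpn, chosen = best_measures(
    [("stair_gate", 40, 120), ("cabinet_locks", 10, 60),
     ("outlet_covers", 5, 30), ("pool_fence", 90, 200)],
    budget=100)
```

With these numbers the optimum is the pool fence plus cabinet locks, which exactly exhausts the budget.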
Yu, Jinpeng; Shi, Peng; Yu, Haisheng; Chen, Bing; Lin, Chong
2015-07-01
This paper considers the problem of discrete-time adaptive position tracking control for an interior permanent magnet synchronous motor (IPMSM) based on fuzzy approximation. Fuzzy logic systems are used to approximate the nonlinearities of the discrete-time IPMSM drive system, which is derived by direct discretization using the Euler method, and a discrete-time fuzzy position tracking controller is designed via the backstepping approach. In contrast to existing results, the advantage of the scheme is that the number of adjustable parameters is reduced to only two and the problem of coupling nonlinearity can be overcome. It is shown that the proposed discrete-time fuzzy controller guarantees that the tracking error converges to a small neighborhood of the origin and that all the signals are bounded. Simulation results illustrate the effectiveness and the potential of the theoretical results obtained.
On the discrete Gabor transform and the discrete Zak transform
Bastiaans, M.J.; Geilen, M.C.W.
1996-01-01
Gabor's expansion of a discrete-time signal into a set of shifted and modulated versions of an elementary signal (or synthesis window) and the inverse operation -- the Gabor transform -- with which Gabor's expansion coefficients can be determined, are introduced. It is shown how, in the case of a
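In the critically sampled case with a rectangular window, the discrete Gabor transform reduces to a per-frame DFT, and the Gabor expansion inverts it exactly (a minimal sketch; general windows require the dual/analysis-window machinery the paper develops):

```python
# Minimal critically sampled discrete Gabor transform: non-overlapping
# frames of length N with a rectangular window.  Analysis is a DFT per
# frame; the expansion (synthesis) is the inverse DFT per frame.
import cmath

def gabor_coefficients(x, N):
    # x: signal whose length is a multiple of the frame length N.
    frames = [x[m:m + N] for m in range(0, len(x), N)]
    return [[sum(f[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                 for n in range(N))
             for k in range(N)] for f in frames]

def gabor_expansion(coeffs, N):
    x = []
    for c in coeffs:
        x += [sum(c[k] * cmath.exp(2j * cmath.pi * k * n / N)
                  for k in range(N)) / N
              for n in range(N)]
    return x

signal = [0.0, 1.0, 0.0, -1.0, 0.5, 0.5, -0.5, -0.5]
rebuilt = gabor_expansion(gabor_coefficients(signal, N=4), N=4)
# rebuilt reproduces the original samples up to rounding error.
```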
Discrete Choice and Rational Inattention
DEFF Research Database (Denmark)
Fosgerau, Mogens; Melo, Emerson; de Palma, André
2017-01-01
This paper establishes a general equivalence between discrete choice and rational inattention models. Matejka and McKay (2015, AER) showed that when information costs are modelled using the Shannon entropy, the resulting choice probabilities in the rational inattention model take the multinomial logit form. We show that when information costs are modelled using a class of generalized entropies, then the choice probabilities in any rational inattention model are observationally equivalent to some additive random utility discrete choice model and vice versa. This equivalence arises from convex...
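The multinomial logit form referenced above can be sketched directly (a minimal illustration, with `lam` playing the role of the unit Shannon information cost and a uniform prior assumed):

```python
# Sketch of the logit choice probabilities: P(i) = exp(u_i/lam) / sum_j
# exp(u_j/lam).  Larger lam (costlier information) flattens the choice
# probabilities toward uniform.
import math

def logit_choice_probabilities(utilities, lam):
    # Subtracting the max utility keeps exp() numerically stable.
    u_max = max(utilities)
    weights = [math.exp((u - u_max) / lam) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]

p = logit_choice_probabilities([1.0, 2.0, 0.5], lam=1.0)
# Probabilities sum to one; the highest-utility alternative is most likely.
```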
Aliasing errors in measurements of beam position and ellipticity
International Nuclear Information System (INIS)
Ekdahl, Carl
2005-01-01
Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all
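The aliasing mechanism can be illustrated numerically (a sketch under simplified assumptions, not the paper's DARHT-II simulations): the wall signal of an off-axis filament at radius r, azimuth phi inside a pipe of radius R can be written as a multipole series, and sampling it with N equally spaced detectors folds harmonics n = N−1, N+1, … into the dipole (position) estimate.

```python
# Illustration of detector-count aliasing in a BPM position estimate.
# Wall signal model: s(theta) = 1 + 2*sum_n (r/R)^n cos(n*(theta - phi)).
# The first-harmonic moment of N samples estimates x = r*cos(phi), but
# harmonics with n = +/-1 (mod N) alias into it.
import math

def position_error(n_detectors, r_over_R, phi, R=1.0, harmonics=60):
    thetas = [2 * math.pi * i / n_detectors for i in range(n_detectors)]
    s = [1.0 + 2.0 * sum(r_over_R ** n * math.cos(n * (t - phi))
                         for n in range(1, harmonics + 1))
         for t in thetas]
    x_est = R * sum(si * math.cos(t) for si, t in zip(s, thetas)) / sum(s)
    return abs(x_est - R * r_over_R * math.cos(phi))

err4 = position_error(4, 0.4, 0.3)   # 4 detectors: n = 3, 5, ... alias in
err8 = position_error(8, 0.4, 0.3)   # 8 detectors: first alias is n = 7
# err8 is far smaller, since the leading alias scales as (r/R)^(N-1).
```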
Nonclassical measurement errors in nonlinear models
DEFF Research Database (Denmark)
Madsen, Edith; Mulalic, Ismir
Discrete choice models and in particular logit type models play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals...... estimates of the income effect it is of interest to investigate the magnitude of the estimation bias and if possible use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...... that contains very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (correct measure). We find that the classical...
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Preventing Errors in Laterality
Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie
2014-01-01
An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...
International Nuclear Information System (INIS)
Reason, J.
1988-01-01
This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation and seeks to identify the various system pathways along which errors and violations may be propagated.