WorldWideScience

Sample records for time truncation error

  1. Angular truncation errors in integrating nephelometry

    International Nuclear Information System (INIS)

    Moosmueller, Hans; Arnott, W. Patrick

    2003-01-01

Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7 deg). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large-particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10 deg) typical of modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are larger, by nearly a factor of 2, for absorbing particles than for nonabsorbing particles, because for large absorbing particles most of the scattered light is due to diffraction, as transmission is suppressed. Nephelometer calibration procedures are also discussed, as they influence the effective truncation error.
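The diffraction-only shortcut described in this record can be sketched numerically. The following is an illustration, not the authors' code: it estimates the fraction of forward-diffracted power lost below a truncation angle via the Airy-pattern encircled energy E(θ) = 1 − J0(x)² − J1(x)², with x = (2πa/λ)·sin θ; the particle radii and wavelength are arbitrary assumed values.

```python
import math

def bessel_j(n, x, terms=60):
    """Series evaluation of the Bessel function J_n(x); adequate for |x| < ~15."""
    total = 0.0
    for m in range(terms):
        total += (-1) ** m * (x / 2.0) ** (2 * m + n) / (
            math.factorial(m) * math.factorial(m + n))
    return total

def forward_truncation_fraction(radius_um, wavelength_um, theta_deg):
    """Fraction of diffracted power inside the truncation cone (and hence lost),
    using the Airy encircled-energy formula E = 1 - J0(x)^2 - J1(x)^2."""
    x = (2.0 * math.pi * radius_um / wavelength_um) * math.sin(math.radians(theta_deg))
    return 1.0 - bessel_j(0, x) ** 2 - bessel_j(1, x) ** 2

# Larger particles push more diffracted light into the truncated near-forward cone:
small = forward_truncation_fraction(0.5, 0.55, 7.0)   # 0.5 um radius (assumed)
large = forward_truncation_fraction(5.0, 0.55, 7.0)   # 5 um radius (assumed)
```

Consistent with the abstract, the lost fraction grows strongly with particle size at a fixed 7 deg truncation angle.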

  2. Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2014-01-01

    In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...
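A toy illustration of augmented truncation (a sketch under assumed dynamics, not the paper's construction): a geometrically ergodic birth-death chain with up-probability p < down-probability q has exact stationary distribution geometric with ratio ρ = p/q. Truncating the transition matrix and redirecting the lost mass into the last state yields a finite chain whose stationary vector approaches the exact one as the truncation level grows.

```python
def truncated_chain(N, p=0.3, q=0.5):
    """Northwest-corner truncation of a birth-death chain on {0,1,2,...};
    the up-transition leaving state N-1 is redirected into the last state
    (a simple last-state augmentation)."""
    P = [[0.0] * N for _ in range(N)]
    for i in range(N):
        down = q if i > 0 else 0.0
        if i + 1 < N:
            P[i][i + 1] = p
        else:
            P[i][i] += p            # augmentation: fold truncated mass back in
        if i > 0:
            P[i][i - 1] = down
        P[i][i] += 1.0 - p - down   # holding probability
    return P

def stationary(P, iters=3000):
    """Power iteration for the stationary distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    s = sum(pi)
    return [x / s for x in pi]

def tv_error(N, p=0.3, q=0.5):
    """Total variation distance to the exact geometric stationary law,
    counting the exact tail mass beyond the truncation as error."""
    rho = p / q
    pi = stationary(truncated_chain(N, p, q))
    exact = [(1 - rho) * rho ** i for i in range(N)]
    return 0.5 * (sum(abs(a - b) for a, b in zip(pi, exact)) + rho ** N)
```

For this reversible example the error decays like ρ^N, mirroring the geometric-drift flavor of the bounds in the record.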

  3. Local and accumulated truncation errors in a class of perturbative numerical methods

    International Nuclear Information System (INIS)

    Adam, G.; Adam, S.; Corciovei, A.

    1980-01-01

The approach to the solution of the radial Schroedinger equation using piecewise perturbation theory with a step-function reference potential leads to a class of powerful numerical methods, conveniently abridged as SF-PNM(K), where K denotes the order at which the perturbation series is truncated. In the present paper, rigorous results are given for the local truncation errors, and bounds are derived for the accumulated truncation errors associated with SF-PNM(K), K = 0, 1, 2. They allow us to establish the smoothness conditions that have to be fulfilled by the potential in order to ensure safe use of SF-PNM(K), and to understand the experimentally observed behaviour of the numerical results with the step size h. (author)
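The "observed behaviour with the step size h" mentioned above can be checked numerically for any one-step method by measuring how the accumulated error scales as h is halved. A generic illustration using forward Euler on y' = y (not the SF-PNM schemes themselves): the observed order should come out near 1.

```python
import math

def euler(f, y0, t_end, h):
    """Forward Euler: local truncation error O(h^2) accumulates to O(h) globally."""
    y = y0
    n = round(t_end / h)
    for k in range(n):
        y += h * f(k * h, y)
    return y

exact = math.e  # y' = y, y(0) = 1, evaluated at t = 1
e1 = abs(euler(lambda t, y: y, 1.0, 1.0, 0.01) - exact)
e2 = abs(euler(lambda t, y: y, 1.0, 1.0, 0.005) - exact)
observed_order = math.log(e1 / e2, 2)  # ~1 for a first-order method
```

The same halving experiment, applied to a higher-order scheme, is the standard way to confirm the theoretical truncation-error bounds in practice.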

  4. Error bounds for augmented truncations of discrete-time block-monotone Markov chains under subgeometric drift conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2015-01-01

    This paper studies the last-column-block-augmented northwest-corner truncation (LC-block-augmented truncation, for short) of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is to present an upper bound for the total variation distance between the stationary probability vectors of a block-monotone Markov chain and its LC-block-augmented truncation. The main result is extended to Markov chains that themselves may not be block monoton...

  5. Reduction of Truncation Errors in Planar Near-Field Aperture Antenna Measurements Using the Gerchberg-Papoulis Algorithm

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2008-01-01

    A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna is...

  6. Adaptation of the delta-m and δ-fit truncation methods to vector radiative transfer: Effect of truncation on radiative transfer accuracy

    International Nuclear Information System (INIS)

    Sanghavi, Suniti; Stephens, Graeme

    2015-01-01

In the presence of aerosol and/or clouds, the use of appropriate truncation methods becomes indispensable for accurate but cost-efficient radiative transfer computations. Truncation methods allow the reduction of the large number (usually several hundreds) of Fourier components associated with particulate scattering functions to a more manageable number, thereby making it possible to carry out radiative transfer computations with a modest number of streams. While several truncation methods have been discussed for scalar radiative transfer, few rigorous studies have been made of truncation methods for the vector case. Here, we formally derive the vector form of Wiscombe's delta-m truncation method. Two main sources of error associated with delta-m truncation are identified as the delta-separation error (DSE) and the phase-truncation error (PTE). The view angles most affected by truncation error occur in the vicinity of the direction of exact backscatter. This view geometry occurs commonly in satellite-based remote sensing applications, and is hence of considerable importance. In order to deal with these errors, we adapt the δ-fit approach of Hu et al. (2000) [17] to vector radiative transfer. The resulting δBGE-fit is compared with the vectorized delta-m method. For truncation at l=25 of an original phase matrix consisting of over 300 Fourier components, the use of the δBGE-fit minimizes the error due to truncation at these view angles, while practically eliminating error at other angles. We also show how truncation errors have a distorting effect on hyperspectral absorption line shapes. The choice of the δBGE-fit method over delta-m truncation minimizes errors in absorption line depths, thus affording greater accuracy for sensitive retrievals such as those of XCO2 from OCO-2 or GOSAT measurements. - Highlights: • Derives vector form for delta-m truncation method. • Adapts δ-fit truncation approach to vector RTE as δBGE-fit. • Compares truncation
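The scalar delta-m scaling underlying the vector derivation is simple to state in code. In this sketch (an illustration using assumed Henyey-Greenstein moments χ_l = g^l, not the paper's phase matrices), the forward peak is represented by a delta function of weight f = χ_M and the retained moments are rescaled:

```python
def delta_m_truncate(chi, M):
    """Delta-m truncation (scalar form): the forward-scattering peak is modeled
    as a delta function of weight f = chi[M]; the first M normalized Legendre
    moments are rescaled so that chi'[0] remains 1."""
    f = chi[M]
    return f, [(c - f) / (1.0 - f) for c in chi[:M]]

g = 0.85                                   # assumed asymmetry parameter
chi = [g ** l for l in range(300)]         # Henyey-Greenstein Legendre moments
f, chi_trunc = delta_m_truncate(chi, 25)   # truncate at l = 25, as in the text
```

The rescaled series is what a solver with only 25 streams would actually use; the δ-fit family replaces the simple rescaling by a least-squares fit of the truncated phase function.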

  7. Reduction of truncation errors in planar near-field aperture antenna measurements using the method of alternating orthogonal projections

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2006-01-01

    A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies a reliability of the reconstructed plane wave spectrum of the field radiated...

  8. Truncated predictor feedback for time-delay systems

    CERN Document Server

    Zhou, Bin

    2014-01-01

This book provides a systematic approach to the design of predictor-based controllers for (time-varying) linear systems with either (time-varying) input or state delays. Unlike traditional predictor-based controllers, which are infinite-dimensional static feedback laws and may cause difficulties in their practical implementation, this book develops a truncated predictor feedback (TPF) approach that involves only finite-dimensional static state feedback. Features and topics: A novel approach, referred to as truncated predictor feedback, for the stabilization of (time-varying) time-delay systems in both the continuous-time setting and the discrete-time setting is built systematically; semi-global and global stabilization problems of linear time-delay systems subject to either magnitude saturation or energy constraints are solved in a systematic manner; both stabilization of a single system and consensus of a group of systems (multi-agent systems) are treated in a unified manner by applying the truncated pre...

  9. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can arise from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of the control chart and obtained values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
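The ZTPD quantities entering such power and ARL calculations are straightforward to compute. A minimal sketch (the chart limits below are arbitrary placeholders, not the paper's):

```python
import math

def ztpd_pmf(x, lam):
    """Zero-truncated Poisson pmf: P(X=x) = e^-lam lam^x / (x! (1 - e^-lam)), x >= 1."""
    if x < 1:
        return 0.0
    return math.exp(-lam) * lam ** x / (math.factorial(x) * (1.0 - math.exp(-lam)))

def ztpd_mean(lam):
    """Mean of the zero-truncated Poisson distribution."""
    return lam / (1.0 - math.exp(-lam))

def arl(lam, lcl, ucl):
    """In-control average run length: 1 / P(signal), where a signal is a
    count falling outside the open interval (lcl, ucl)."""
    p_in = sum(ztpd_pmf(x, lam) for x in range(1, 100) if lcl < x < ucl)
    return 1.0 / (1.0 - p_in)
```

With measurement error added to the counts, the effective in-control distribution shifts and the ARL computed this way degrades, which is the effect the paper quantifies.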

  10. A numerical method for multigroup slab-geometry discrete ordinates problems with no spatial truncation error

    International Nuclear Information System (INIS)

    Barros, R.C. de; Larsen, E.W.

    1991-01-01

A generalization of the one-group Spectral Green's Function (SGF) method is developed for multigroup, slab-geometry discrete ordinates (S_N) problems. The multigroup SGF method is free from spatial truncation errors; it generates numerical values for the cell-edge and cell-average angular fluxes that agree with the analytic solution of the multigroup S_N equations. Numerical results are given to illustrate the method's accuracy.

  11. Accurate evolutions of inspiralling neutron-star binaries: assessment of the truncation error

    International Nuclear Information System (INIS)

    Baiotti, Luca; Giacomazzo, Bruno; Rezzolla, Luciano

    2009-01-01

We have recently presented an investigation in full general relativity of the dynamics and gravitational-wave emission from binary neutron stars which inspiral and merge, producing a black hole surrounded by a torus (Baiotti et al 2008 Phys. Rev. D 78 084033). We discuss here in more detail the convergence properties of the results presented in Baiotti et al (2008 Phys. Rev. D 78 084033) and, in particular, the deterioration of the convergence rate at the merger and during the survival of the merged object, when strong shocks are formed and turbulence develops. We also show that physically reasonable and numerically convergent results obtained at low resolution suffer, however, from large truncation errors and hence are of little physical use. We summarize our findings in an 'error budget', which includes the different sources of possible inaccuracies we have investigated and provides a first quantitative assessment of the precision in the modelling of compact fluid binaries.

  12. Analysis of truncation limit in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Cepin, Marko

    2005-01-01

A truncation limit defines the boundaries of what is considered in the probabilistic safety assessment and what is neglected. The truncation limit that is the focus here is the limit on the size of the minimal cut set contribution at which to cut off. A new method was developed, which defines the truncation limit in probabilistic safety assessment. The method specifies truncation limits more stringently than existing documents dealing with truncation criteria in probabilistic safety assessment do. The results of this paper indicate that the truncation limits for more complex probabilistic safety assessments, which consist of a larger number of basic events, should be more severe than presently recommended in existing documents if more accuracy is desired. The truncation limits defined by the new method reduce the relative errors of importance measures and produce more accurate results for probabilistic safety assessment applications. The reduced relative errors of importance measures can prevent situations where the acceptability of a change to the equipment under investigation according to RG 1.174 would be shifted from the region where changes can be accepted to the region where changes cannot be accepted, if the results were calculated with a smaller truncation limit.
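The effect of a cut-set truncation limit is easy to demonstrate with the usual rare-event approximation (summing minimal-cut-set probabilities). The cut-set probabilities below are invented for illustration, not taken from the paper:

```python
def top_event_probability(cut_set_probs, truncation_limit=0.0):
    """Rare-event approximation of the top event probability: sum the
    probabilities of the minimal cut sets that survive the truncation limit."""
    return sum(p for p in cut_set_probs if p >= truncation_limit)

# Invented spectrum: one dominant cut set plus many small contributors.
cut_sets = [1e-3] + [1e-5] * 50 + [1e-7] * 5000

exact = top_event_probability(cut_sets)            # no truncation
approx = top_event_probability(cut_sets, 1e-6)     # truncate small contributors
relative_error = (exact - approx) / exact
```

Here the many individually negligible cut sets carry a quarter of the total probability, illustrating why large models need stricter truncation limits than small ones for the same accuracy.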

  13. Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks

    Directory of Open Access Journals (Sweden)

    Hesham Mostafa

    2017-09-01

Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1Gb DDR2 DRAM, that shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard back-propagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks.

  14. Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks.

    Science.gov (United States)

    Mostafa, Hesham; Pedroni, Bruno; Sheik, Sadique; Cauwenberghs, Gert

    2017-01-01

    Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1Gb DDR2 DRAM, that shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard back-propagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks.
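The multiplication-free update at the heart of this scheme can be sketched in a few lines. This is a simplified single-layer illustration under assumed threshold and learning-rate values, not the FPGA implementation:

```python
def ternary_error(err, theta):
    """Truncate a backpropagating error signal to {-1, 0, +1}."""
    return 1 if err > theta else (-1 if err < -theta else 0)

def update_weights(weights, binary_acts, err, theta=0.05, lr=0.01):
    """One layer's weight update: with binary activations (0/1) and a ternary
    error, each update reduces to a conditional add/subtract, so the backward
    pass needs no multiplications in hardware."""
    e = ternary_error(err, theta)
    return [w + lr * e * a for w, a in zip(weights, binary_acts)]

w = update_weights([0.2, -0.1, 0.0], [1, 0, 1], err=0.3)
```

Errors below the threshold produce no update at all, which is also the source of the sparsity the abstract credits with reducing addition operations.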

  15. Repair for scattering expansion truncation errors in transport calculations

    International Nuclear Information System (INIS)

    Emmett, M.B.; Childs, R.L.; Rhoades, W.A.

    1980-01-01

Legendre expansion of angular scattering distributions is usually limited to P_3 in practical transport calculations. This truncation often results in non-trivial errors, especially alternating negative and positive lateral scattering peaks. The effect is especially prominent in forward-peaked situations such as the within-group component of the Compton scattering of gammas. Increasing the expansion to P_7 often makes the peaks larger and narrower. Ward demonstrated an accurate repair, but his method requires special cross section sets and codes. The DOT IV code provides fully-compatible, but heuristic, repair of the erroneous scattering. An analytical Klein-Nishina estimator, newly available in the MORSE code, allows a test of this method. In the MORSE calculation, particle scattering histories are calculated in the usual way, with scoring by an estimator routine at each collision site. Results for both the conventional P_3 estimator and the analytical estimator were obtained. In the DOT calculation, the source moments are expanded into the directional representation at each iteration. Optionally a sorting procedure removes all negatives, and removes enough small positive values to restore particle conservation. The effect of this is to replace the alternating positive and negative values with positive values of plausible magnitude. The accuracy of those values is examined herein
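A repair in the spirit of the DOT IV procedure can be illustrated as follows. This sketch rescales the surviving positive values to restore particle conservation (a simplification: DOT IV instead removes small positive values), and the directional source values are invented:

```python
def repair_scattering_source(values):
    """Zero out the negative directional source values produced by a truncated
    Legendre expansion, then rescale the positives so the particle balance
    (the sum over directions) is conserved."""
    total = sum(values)                       # particle balance to preserve
    positives = [max(v, 0.0) for v in values]
    pos_sum = sum(positives)
    if pos_sum == 0.0:
        return positives
    scale = total / pos_sum
    return [v * scale for v in positives]

src = [0.40, -0.05, 0.30, -0.02, 0.17]   # invented oscillating P_3 artifact
fixed = repair_scattering_source(src)
```

The repaired source is everywhere nonnegative yet carries the same total, which is the plausibility property the record describes.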

  16. Error characterization for asynchronous computations: Proxy equation approach

    Science.gov (United States)

    Sallai, Gabriella; Mittal, Ankita; Girimaji, Sharath

    2017-11-01

Numerical techniques for asynchronous fluid flow simulations are currently under development to enable efficient utilization of massively parallel computers. These numerical approaches attempt to accurately solve the time evolution of transport equations using spatial information at different time levels. The truncation error of asynchronous methods can be divided into two parts: delay-dependent (EA), or asynchronous, error and delay-independent (ES), or synchronous, error. The focus of this study is a specific asynchronous-error mitigation technique called the proxy-equation approach. The aim of this study is to examine these errors as a function of the characteristic wavelength of the solution. Mitigation of asynchronous effects requires that the asynchronous error be smaller than the synchronous truncation error. For a simple convection-diffusion equation, proxy-equation error analysis identifies a critical initial wave number, λc. At smaller wave numbers, synchronous errors are larger than asynchronous errors. We examine various approaches to increase the value of λc in order to improve the range of applicability of the proxy-equation approach.

  17. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  18. Frequency interval balanced truncation of discrete-time bilinear systems

    DEFF Research Database (Denmark)

    Jazlan, Ahmad; Sreeram, Victor; Shaker, Hamid Reza

    2016-01-01

This paper presents the development of a new model reduction method for discrete-time bilinear systems based on the balanced truncation framework. In many model reduction applications, it is advantageous to analyze the characteristics of the system with emphasis on particular frequency intervals. The new generalized frequency interval controllability and observability gramians introduced here are the solution to a pair of new generalized Lyapunov equations. The conditions for solvability of these new generalized Lyapunov equations are derived, and a numerical solution method for solving them is presented. Numerical examples which illustrate the usage of the new generalized frequency interval controllability and observability gramians as part of the balanced truncation framework are provided to demonstrate the performance of the proposed method.
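For background, standard balanced truncation of a discrete-time linear system (the non-bilinear, full-frequency-range special case; the matrices below are invented) can be sketched with the square-root algorithm, where the gramians solve the ordinary discrete Lyapunov equations:

```python
import numpy as np

def discrete_gramians(A, B, C, iters=200):
    """Fixed-point iteration for P = A P A^T + B B^T (controllability) and
    Q = A^T Q A + C^T C (observability); valid when A is Schur stable."""
    P = np.zeros_like(A)
    Q = np.zeros_like(A)
    for _ in range(iters):
        P = A @ P @ A.T + B @ B.T
        Q = A.T @ Q @ A + C.T @ C
    return P, Q

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation to order r."""
    P, Q = discrete_gramians(A, B, C)
    R = np.linalg.cholesky(P)                 # P = R R^T
    U, s, _ = np.linalg.svd(R.T @ Q @ R)      # s = (Hankel singular values)^2
    T = R @ U / s ** 0.25                     # balancing transformation
    Tinv = np.linalg.inv(T)
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], np.sqrt(s)

A = np.array([[0.5, 0.2], [0.0, 0.3]])        # invented stable example
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.4]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=1)
```

The frequency-interval variant described in the record replaces these gramians by ones solving the paper's generalized Lyapunov equations; the truncation step itself is unchanged.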

  19. New results to BDD truncation method for efficient top event probability calculation

    International Nuclear Information System (INIS)

    Mo, Yuchang; Zhong, Farong; Zhao, Xiangfu; Yang, Quansheng; Cui, Gang

    2012-01-01

A Binary Decision Diagram (BDD) is a graph-based data structure that calculates an exact top event probability (TEP). It has been a very difficult task to develop an efficient BDD algorithm that can solve a large problem, since its memory consumption is very high. Recently, in order to solve a large reliability problem within limited computational resources, Jung presented an efficient method to maintain a small BDD size by truncation during the BDD calculation. In this paper, it is first identified that Jung's BDD truncation algorithm can be improved for more practical use. Then, a more efficient truncation algorithm is proposed, which can generate a truncated BDD with smaller size and an approximate TEP with smaller truncation error. Empirical results showed that this new algorithm uses slightly less running time and slightly more storage than Jung's algorithm. It was also found that designing a truncation algorithm with ideal features for every possible fault tree is very difficult, if not impossible. The ideal features referred to here would be that, as the truncation limit decreases, the size of the truncated BDD converges to the size of the exact BDD but never exceeds it.

  20. How Truncating Are 'Truncating Languages'? Evidence from Russian and German.

    Science.gov (United States)

    Rathcke, Tamara V

Russian and German have previously been described as 'truncating', or cutting off target frequencies of the phrase-final pitch trajectories when the time available for voicing is compromised. However, supporting evidence is rare and limited to only a few pitch categories. This paper reports a production study conducted to document pitch adjustments to linguistic materials in which the amount of voicing available for the realization of a pitch pattern varies from relatively long to extremely short. Productions of nuclear H+L*, H* and L*+H pitch accents followed by a low boundary tone were investigated in the two languages. The results of the study show that speakers of both 'truncating languages' do not utilize truncation exclusively when accommodating to different segmental environments. On the contrary, they employ several strategies - among them truncation, but also compression and temporal re-alignment - to produce the target pitch categories under increasing time pressure. Given that speakers can systematically apply all three adjustment strategies to produce some pitch patterns (H* L% in German and Russian) while not using truncation in others (H+L* L%, particularly in Russian), we question the effectiveness of the typological classification of these two languages as 'truncating'. Moreover, the phonetic detail of truncation varies considerably, both across and within the two languages, indicating that truncation cannot be easily modeled as a unified phenomenon. The results further suggest that the phrase-final pitch adjustments are sensitive to the phonological composition of the tonal string and the status of a particular tonal event (associated vs. boundary tone), and do not apply to falling vs. rising pitch contours across the board, as previously put forward for German. Implications for intonational phonology and prosodic typology are addressed in the discussion. © 2017 S. Karger AG, Basel.

  1. Numerical method for multigroup one-dimensional SN eigenvalue problems with no spatial truncation error

    International Nuclear Information System (INIS)

    Abreu, M.P.; Filho, H.A.; Barros, R.C.

    1993-01-01

The authors describe a new nodal method for multigroup slab-geometry discrete ordinates S_N eigenvalue problems that is completely free from all spatial truncation errors. The unknowns in the method are the node-edge angular fluxes, the node-average angular fluxes, and the effective multiplication factor k_eff. The numerical values obtained for these quantities are exactly those of the dominant analytic solution of the S_N eigenvalue problem, apart from finite-arithmetic considerations. This method is based on the use of the standard balance equation and two nonstandard auxiliary equations. In the nonmultiplying regions, e.g., the reflector, we use the multigroup spectral Green's function (SGF) auxiliary equations. In the fuel regions, we use the multigroup spectral diamond (SD) auxiliary equations. The SD auxiliary equation is an extension of the conventional auxiliary equation used in the diamond difference (DD) method. This hybrid characteristic of the SD-SGF method improves both the numerical stability and the convergence rate.

  2. Applications of Fast Truncated Multiplication in Cryptography

    Directory of Open Access Journals (Sweden)

    Laszlo Hars

    2006-12-01

Truncated multiplications compute truncated products, contiguous subsequences of the digits of integer products. For an n-digit multiplication algorithm of time complexity O(n^α), with 1 < α ≤ 2, there is a truncated multiplication algorithm which is a constant factor faster when computing a short enough truncated product. Applying these fast truncated multiplications, several cryptographic long-integer arithmetic algorithms are improved, including integer reciprocals, divisions, Barrett and Montgomery multiplications, and 2n-digit modular multiplication on hardware for n-digit half products. For example, Montgomery multiplication is performed in 2.6 Karatsuba multiplication time.
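A low truncated product (the least-significant digits of a·b, as used in Montgomery-style reductions) can be computed with roughly half the digit operations of a full schoolbook multiplication, simply by skipping digit pairs whose index sum passes the cut-off. This is a schoolbook sketch, not the sub-quadratic O(n^α) algorithms the paper analyzes:

```python
def low_truncated_product(a, b, n, base=10):
    """Least-significant n digits of a*b (digit lists, least-significant first),
    computing only the digit products with i + j < n."""
    acc = [0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[: n - i]):
            acc[i + j] += ai * bj
    carry = 0
    for k in range(n):
        carry, acc[k] = divmod(acc[k] + carry, base)  # propagate carries mod base^n
    return acc

def to_digits(x, n):
    """First n base-10 digits of x, least-significant first."""
    return [(x // 10 ** k) % 10 for k in range(n)]

a, b, n = 987654, 123789, 4
low = low_truncated_product(to_digits(a, 6), to_digits(b, 6), n)
```

Because a digit of the product depends only on columns at or below it, the skipped digit pairs cannot affect the retained digits, so the result equals the full product mod base^n.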

  3. Truncation scheme of time-dependent density-matrix approach II

    Energy Technology Data Exchange (ETDEWEB)

    Tohyama, Mitsuru [Kyorin University School of Medicine, Mitaka, Tokyo (Japan); Schuck, Peter [Institut de Physique Nucleaire, IN2P3-CNRS, Universite Paris-Sud, Orsay (France); Laboratoire de Physique et de Modelisation des Milieux Condenses, CNRS et Universite Joseph Fourier, Grenoble (France)

    2017-09-15

    A truncation scheme of the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy for reduced density matrices, where a three-body density matrix is approximated by two-body density matrices, is improved to take into account a normalization effect. The truncation scheme is tested for the Lipkin model. It is shown that the obtained results are in good agreement with the exact solutions. (orig.)

  4. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing the leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which in turn leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging of moving MLC apertures with a digital imager or by analysis of an MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as Random-Leaf-Positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx_0 (Δx_0 ~ TOL). We quantify fluence errors for two cases: (i) Δx_0 >> σ (unrestricted normal distribution) and (ii) Δx_0 ≲ σ (Δx_0-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx_0/ALPO, respectively, where
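The truncated-normal RLP model is easy to sample by rejection; below is a standard-library sketch with invented σ and Δx_0 values (not the paper's clinical parameters):

```python
import random

def rlp_error(sigma, dx0):
    """Sample one Random-Leaf-Positional error: normal(0, sigma) truncated to
    [-dx0, dx0] by rejection; dx0 plays the role of the leaf tolerance TOL."""
    while True:
        e = random.gauss(0.0, sigma)
        if abs(e) <= dx0:
            return e

random.seed(1)
samples = [rlp_error(sigma=1.0, dx0=0.5) for _ in range(2000)]
mean_abs = sum(abs(s) for s in samples) / len(samples)
```

Averaging |error| over many samples, as done here, is the kind of quantity the record relates to the average fluence error of a field.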

  5. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.

  6. Truncation of CPC solar collectors and its effect on energy collection

    Science.gov (United States)

    Carvalho, M. J.; Collares-Pereira, M.; Gordon, J. M.; Rabl, A.

    1985-01-01

    Analytic expressions are derived for the angular acceptance function of two-dimensional compound parabolic concentrator solar collectors (CPC's) of arbitrary degree of truncation. Taking into account the effect of truncation on both optical and thermal losses in real collectors, the increase in monthly and yearly collectible energy is also evaluated. Prior analyses that have ignored the correct behavior of the angular acceptance function at large angles for truncated collectors are shown to be in error by 0-2 percent in calculations of yearly collectible energy for stationary collectors.

  7. Theoretical analysis of balanced truncation for linear switched systems

    DEFF Research Database (Denmark)

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2012-01-01

    In this paper we present a theoretical analysis of model reduction of linear switched systems based on balanced truncation, presented in [1,2]. More precisely, (1) we provide a bound on the estimation error using the L2 gain, and (2) we provide a system-theoretic interpretation of gramians and their singular values. … The main tool for showing this independence is realization theory of linear switched systems. [1] H. R. Shaker and R. Wisniewski, "Generalized gramian framework for model/controller order reduction of switched systems", International Journal of Systems Science, Vol. 42, Issue 8, 2011, 1277-1291. [2] H. R. Shaker and R. Wisniewski, "Switched Systems Reduction Framework Based on Convex Combination of Generalized Gramians", Journal of Control Science and Engineering, 2009.

  8. Clustered survival data with left-truncation

    DEFF Research Database (Denmark)

    Eriksson, Frank; Martinussen, Torben; Scheike, Thomas H.

    2015-01-01

    Left-truncation occurs frequently in survival studies, and it is well known how to deal with this for univariate survival times. However, there are few results on how to estimate dependence parameters and regression effects in semiparametric models for clustered survival data with delayed entry. Surprisingly, existing methods only deal with special cases. In this paper, we clarify different kinds of left-truncation and suggest estimators for semiparametric survival models under specific truncation schemes. The large-sample properties of the estimators are established. Small-sample properties …

  9. A model for the statistical description of analytical errors occurring in clinical chemical laboratories with time.

    Science.gov (United States)

    Hyvärinen, A

    1985-01-01

    … This indicates that a substantial part of the variation comes from intralaboratory variation with time rather than from constant interlaboratory differences. Normality and consistency of statistical distributions were best achieved in the long-term intralaboratory sets of the data, under which conditions the statistical estimates of error variability were also most characteristic of the individual laboratories rather than necessarily being similar to one another. Mixing of data from different laboratories may give heterogeneous and nonparametric distributions and hence is not advisable. (ABSTRACT TRUNCATED AT 400 WORDS)

  10. Left truncation results in substantial bias of the relation between time-dependent exposures and adverse events

    NARCIS (Netherlands)

    Hazelbag, Christijan M; Klungel, Olaf H; van Staa, Tjeerd P; de Boer, Anthonius; Groenwold, Rolf H H

    2015-01-01

    PURPOSE: To assess the impact of random left truncation of data on the estimation of time-dependent exposure effects. METHODS: A simulation study was conducted in which the relation between exposure and outcome was based on an immediate exposure effect, a first-time exposure effect, or a cumulative

  12. Truncation of the many body hierarchy and relaxation times in the McKean model

    International Nuclear Information System (INIS)

    Schmitt, K.J.

    1987-01-01

    In the McKean model the BBGKY-hierarchy is equivalent to a simple hierarchy of coupled equations for the p-particle correlation functions. Truncation effects and the convergence of the one-particle distribution towards its exact shape have been studied. In the long time limit the equations can be solved in a closed form. It turns out that the p-particle correlation decays p-times faster than the non-equilibrium one-particle distribution

  14. A Post-Truncation Parameterization of Truncated Normal Technical Inefficiency

    OpenAIRE

    Christine Amsler; Peter Schmidt; Wen-Jen Tsay

    2013-01-01

    In this paper we consider a stochastic frontier model in which the distribution of technical inefficiency is truncated normal. In standard notation, technical inefficiency u is distributed as N^+ (μ,σ^2). This distribution is affected by some environmental variables z that may or may not affect the level of the frontier but that do affect the shortfall of output from the frontier. We will distinguish the pre-truncation mean (μ) and variance (σ^2) from the post-truncation mean μ_*=E(u) and var...
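
    The pre- versus post-truncation distinction can be made concrete with the closed-form mean of N^+(μ, σ²). Below is a small sketch of the standard formula E(u) = μ + σφ(μ/σ)/Φ(μ/σ), checked by rejection sampling; the parameter values are purely illustrative.

```python
import math
import random

def post_truncation_mean(mu, sigma):
    """E(u) for u ~ N+(mu, sigma^2): a normal N(mu, sigma^2)
    truncated below at zero, so E(u) > mu."""
    z = mu / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # phi(z)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # Phi(z)
    return mu + sigma * pdf / cdf

# Monte Carlo check by rejection sampling (keep only positive draws)
random.seed(1)
mu, sigma = 0.5, 1.0
draws = []
while len(draws) < 200_000:
    x = random.gauss(mu, sigma)
    if x > 0.0:
        draws.append(x)
mc_mean = sum(draws) / len(draws)
```

    The gap between `mu` (pre-truncation) and `post_truncation_mean(mu, sigma)` is exactly the distinction the paper parameterizes.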

  15. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    Science.gov (United States)

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
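
    The stated equivalence is easy to verify numerically for zero-truncated Poisson mixtures: re-weighting the mixing proportions by v_j ∝ w_j(1 − e^{−λ_j}) turns the mixture of truncated densities into the truncated mixture. A small sketch (the weights and rates are invented):

```python
import math

def pois_pmf(x, lam):
    return math.exp(-lam) * lam ** x / math.factorial(x)

def truncated_mixture(x, w, lams):
    """Mix the Poisson components first, then condition on x >= 1."""
    num = sum(wj * pois_pmf(x, lj) for wj, lj in zip(w, lams))
    return num / (1.0 - sum(wj * pois_pmf(0, lj) for wj, lj in zip(w, lams)))

def mixture_of_truncated(x, w, lams):
    """Condition each component on x >= 1 first, then mix with
    re-weighted proportions v_j proportional to w_j * (1 - e^{-lam_j})."""
    v = [wj * (1.0 - pois_pmf(0, lj)) for wj, lj in zip(w, lams)]
    z = sum(v)
    return sum((vj / z) * pois_pmf(x, lj) / (1.0 - pois_pmf(0, lj))
               for vj, lj in zip(v, lams))

weights, lams = [0.3, 0.7], [0.8, 2.5]   # invented mixing weights and rates
vals = [(truncated_mixture(x, weights, lams),
         mixture_of_truncated(x, weights, lams)) for x in range(1, 10)]
```

    The two pmfs agree pointwise, which is the likelihood-surface equivalence the article exploits.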

  16. The effect of truncation on very small cardiac SPECT camera systems

    International Nuclear Information System (INIS)

    Rohmer, Damien; Eisner, Robert L.; Gullberg, Grant T.

    2006-01-01

    Background: The limited transaxial field-of-view (FOV) of a very small cardiac SPECT camera system causes view-dependent truncation of the projection of structures exterior to, but near, the heart. Basic tomographic principles suggest that the reconstruction of non-attenuated truncated data gives a distortion-free image in the interior of the truncated region, but the DC term of the Fourier spectrum of the reconstructed image is incorrect, meaning that the intensity scale of the reconstruction is inaccurate. The purpose of this study was to characterize the reconstructed image artifacts from truncated data, and to quantify their effects on the measurement of tracer uptake in the myocardium. Particular attention was given to instances where the heart wall is close to hot structures (structures of high activity uptake). Methods: The MCAT phantom was used to simulate a 2D slice of the heart region. Truncated and non-truncated projections were formed both with and without attenuation. The reconstructions were analyzed for artifacts in the myocardium caused by truncation, and for the effect that attenuation has relative to increasing those artifacts. Results: The inaccuracy due to truncation is primarily caused by an incorrect DC component. For visualizing the left ventricular wall, this error is not worse than the effect of attenuation. The addition of a small hot bowel-like structure near the left ventricle causes few changes in counts on the wall. Larger artifacts due to the truncation are located at the boundary of the truncation and can be eliminated by sinogram interpolation. Finally, algebraic reconstruction methods are shown to give better reconstruction results than an analytical filtered back-projection reconstruction algorithm. Conclusion: Small inaccuracies in reconstructed images from small FOV camera systems should have little effect on clinical interpretation. However, changes in the degree of inaccuracy in counts from slice to slice are due to changes in …

  17. Determination of αS from scaling violations of truncated moments of structure functions

    International Nuclear Information System (INIS)

    Forte, Stefano; Latorre, J.I.; Magnea, Lorenzo; Piccione, Andrea

    2002-01-01

    We determine the strong coupling αS(MZ) from scaling violations of truncated moments of the nonsinglet deep inelastic structure function F2. Truncated moments are determined from BCDMS and NMC data using a neural network parametrization which retains the full experimental information on errors and correlations. Our method minimizes all sources of theoretical uncertainty and bias which characterize extractions of αS from scaling violations. We obtain αS(MZ) = 0.124 +0.004/−0.007 (exp.) +0.003/−0.004 (th.).

  18. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    Science.gov (United States)

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula of approximation errors of hyperbolic cross truncations for bivariate stochastic Fourier cosine series. Moreover we propose a kind of Fourier cosine expansions with polynomials factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842

  19. R Programs for Truncated Distributions

    Directory of Open Access Journals (Sweden)

    Saralees Nadarajah

    2006-08-01

    Truncated distributions arise naturally in many practical situations. In this note, we provide programs for computing six quantities of interest (probability density function, mean, variance, cumulative distribution function, quantile function and random numbers) for any truncated distribution, whether it is left truncated, right truncated or doubly truncated. The programs are written in R, a freely downloadable statistical software.
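
    The record's programs are written in R; three of the six quantities (density, distribution function, quantile function) can be sketched analogously in Python for a doubly truncated standard normal, using only the base cdf. This is an illustration of the construction, not the package's code.

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def trunc_pdf(x, a, b):
    """Density of a standard normal doubly truncated to [a, b]:
    the base density renormalized by the mass F(b) - F(a)."""
    if x < a or x > b:
        return 0.0
    return norm_pdf(x) / (norm_cdf(b) - norm_cdf(a))

def trunc_cdf(x, a, b):
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (norm_cdf(x) - norm_cdf(a)) / (norm_cdf(b) - norm_cdf(a))

def trunc_quantile(p, a, b, tol=1e-10):
    """Quantile function by bisection on the truncated cdf."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if trunc_cdf(mid, a, b) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q = trunc_quantile(0.5, -1.0, 2.0)  # median of N(0,1) truncated to [-1, 2]
```

    Random numbers then follow by inversion: feed uniform draws through `trunc_quantile`.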

  20. On the effect of numerical errors in large eddy simulations of turbulent flows

    International Nuclear Information System (INIS)

    Kravchenko, A.G.; Moin, P.

    1997-01-01

    Aliased and dealiased numerical simulations of a turbulent channel flow are performed using spectral and finite difference methods. Analytical and numerical studies show that aliasing errors are more destructive for spectral and high-order finite-difference calculations than for low-order finite-difference simulations. Numerical errors have different effects for different forms of the nonlinear terms in the Navier-Stokes equations. For divergence and convective forms, spectral methods are energy-conserving only if dealiasing is performed. For skew-symmetric and rotational forms, both spectral and finite-difference methods are energy-conserving even in the presence of aliasing errors. It is shown that discrepancies between the results of dealiased spectral and standard nondialiased finite-difference methods are due to both aliasing and truncation errors with the latter being the leading source of differences. The relative importance of aliasing and truncation errors as compared to subgrid scale model terms in large eddy simulations is analyzed and discussed. For low-order finite-difference simulations, truncation errors can exceed the magnitude of the subgrid scale term. 25 refs., 17 figs., 1 tab

  1. A response matrix method for one-speed discrete ordinates fixed source problems in slab geometry with no spatial truncation error

    International Nuclear Information System (INIS)

    Lydia, Emilio J.; Barros, Ricardo C.

    2011-01-01

    In this paper we describe a response matrix method for one-speed slab-geometry discrete ordinates (SN) neutral particle transport problems that is completely free from spatial truncation errors. The unknowns in the method are the cell-edge angular fluxes of particles. The numerical results generated for these quantities are exactly those obtained from the analytic solution of the SN problem apart from finite arithmetic considerations. Our method is based on a spectral analysis that we perform in the SN equations with scattering inside a discretization cell of the spatial grid set up on the slab. As a result of this spectral analysis, we are able to obtain an expression for the local general solution of the SN equations. With this local general solution, we determine the response matrix and use the prescribed boundary conditions and continuity conditions to sweep across the discretization cells from left to right and from right to left across the slab, until a prescribed convergence criterion is satisfied. (author)

  2. Time-Weighted Balanced Stochastic Model Reduction

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

    A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...

  3. Analysis of the upper-truncated Weibull distribution for wind speed

    International Nuclear Information System (INIS)

    Kantar, Yeliz Mert; Usta, Ilhan

    2015-01-01

    Highlights: • Upper-truncated Weibull distribution is proposed to model wind speed. • Upper-truncated Weibull distribution nests the Weibull distribution as a special case. • Maximum likelihood is the best estimation method for the upper-truncated Weibull distribution. • Fitting accuracy of the upper-truncated Weibull is analyzed on wind speed data. - Abstract: Accurately modeling wind speed is critical in estimating the wind energy potential of a certain region. In order to model wind speed data well, several statistical distributions have been studied. A truncated distribution is the conditional distribution that results from restricting the domain of a statistical distribution, and it nests the base distribution as a special case. This paper proposes, for the first time, the use of the upper-truncated Weibull distribution in modeling wind speed data and in estimating wind power density. In addition, a comparison is made between the upper-truncated Weibull distribution and the well-known Weibull distribution using wind speed data measured in various regions of Turkey. The obtained results indicate that the upper-truncated Weibull distribution shows better performance than the Weibull distribution in estimating wind speed distribution and wind power. Therefore, the upper-truncated Weibull distribution can be an alternative for use in the assessment of wind energy potential.
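
    The upper-truncated Weibull density is simply the Weibull density renormalized on (0, T]. A sketch with invented parameters, checking that it integrates to one and recovers the ordinary Weibull as T grows:

```python
import math

def weibull_cdf(x, k, c):
    return 1.0 - math.exp(-((x / c) ** k))

def weibull_pdf(x, k, c):
    return (k / c) * (x / c) ** (k - 1) * math.exp(-((x / c) ** k))

def upper_trunc_weibull_pdf(x, k, c, T):
    """Weibull(k, c) restricted to (0, T]: density divided by the
    base-distribution mass below the truncation point T."""
    if not 0.0 < x <= T:
        return 0.0
    return weibull_pdf(x, k, c) / weibull_cdf(T, k, c)

# Numerical check: the truncated density integrates to 1 on (0, T].
# Shape k, scale c (m/s) and cutoff T are illustrative values only.
k, c, T = 2.0, 7.0, 20.0
n = 100_000
dx = T / n
total = sum(upper_trunc_weibull_pdf((i + 0.5) * dx, k, c, T) * dx
            for i in range(n))
```

    Because the normalizing factor tends to 1 as T grows, the model nests the plain Weibull, which is why the paper treats the Weibull as a special case.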

  4. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    Directory of Open Access Journals (Sweden)

    Na Wei

    2016-05-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6–7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources.

  5. Zero-truncated negative binomial - Erlang distribution

    Science.gov (United States)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

    The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by using maximum likelihood estimation. Finally, the proposed distribution is applied to real data, methamphetamine counts in Bangkok, Thailand. Based on the results, it shows that the zero-truncated negative binomial-Erlang distribution provided a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for these data.

  6. Structural damage detection robust against time synchronization errors

    International Nuclear Information System (INIS)

    Yan, Guirong; Dyke, Shirley J

    2010-01-01

    Structural damage detection based on wireless sensor networks can be affected significantly by time synchronization errors among sensors. Precise time synchronization of sensor nodes has been viewed as crucial for addressing this issue. However, precise time synchronization over a long period of time is often impractical in large wireless sensor networks due to two inherent challenges. First, time synchronization needs to be performed periodically, requiring frequent wireless communication among sensors at significant energy cost. Second, significant time synchronization errors may result from node failures which are likely to occur during long-term deployment over civil infrastructures. In this paper, a damage detection approach is proposed that is robust against time synchronization errors in wireless sensor networks. The paper first examines the ways in which time synchronization errors distort identified mode shapes, and then proposes a strategy for reducing distortion in the identified mode shapes. Modified values for these identified mode shapes are then used in conjunction with flexibility-based damage detection methods to localize damage. This alternative approach relaxes the need for frequent sensor synchronization and can tolerate significant time synchronization errors caused by node failures. The proposed approach is successfully demonstrated through numerical simulations and experimental tests in a lab

  7. Effects of system net charge and electrostatic truncation on all-atom constant pH molecular dynamics.

    Science.gov (United States)

    Chen, Wei; Shen, Jana K

    2014-10-15

    Constant pH molecular dynamics offers a means to rigorously study the effects of solution pH on dynamical processes. Here, we address two critical questions arising from the most recent developments of the all-atom continuous constant pH molecular dynamics (CpHMD) method: (1) What is the effect of spatial electrostatic truncation on the sampling of protonation states? (2) Is the enforcement of electrical neutrality necessary for constant pH simulations? We first examined how the generalized reaction field and force-shifting schemes modify the electrostatic forces on the titration coordinates. Free energy simulations of model compounds were then carried out to delineate the errors in the deprotonation free energy and salt-bridge stability due to electrostatic truncation and system net charge. Finally, CpHMD titration of a mini-protein HP36 was used to understand the manifestation of the two types of errors in the calculated pK(a) values. The major finding is that enforcing charge neutrality under all pH conditions and at all time via cotitrating ions significantly improves the accuracy of protonation-state sampling. We suggest that such finding is also relevant for simulations with particle mesh Ewald, considering the known artifacts due to charge-compensating background plasma. Copyright © 2014 Wiley Periodicals, Inc.

  8. Minimum decoding trellis length and truncation depth of wrap-around Viterbi algorithm for TBCC in mobile WiMAX

    Directory of Open Access Journals (Sweden)

    Liu Yu-Sun

    2011-01-01

    The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received code words can achieve almost the same performance as the optimal maximum likelihood decoding algorithm in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.

  9. Modelling noise in second generation sequencing forensic genetics STR data using a one-inflated (zero-truncated) negative binomial model

    DEFF Research Database (Denmark)

    Vilsen, Søren B.; Tvedebrink, Torben; Mogensen, Helle Smidt

    2015-01-01

    We present a model fitting the distribution of non-systematic errors in STR second generation sequencing (SGS) analysis. The model fits the distribution of non-systematic errors, i.e. the noise, using a one-inflated, zero-truncated negative binomial model. The model is a two-component model...

  10. Time-dependent phase error correction using digital waveform synthesis

    Science.gov (United States)

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified, and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.

  11. Application of a truncated normal failure distribution in reliability testing

    Science.gov (United States)

    Groves, C., Jr.

    1968-01-01

    Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.

  13. Truncation Depth Rule-of-Thumb for Convolutional Codes

    Science.gov (United States)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
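
    The new rule is a one-liner; the following sketch compares it with the old five-times-memory rule. The memory length used is just an example value, and the outputs are the formulas evaluated directly.

```python
def truncation_depth(m, r):
    """New rule of thumb from the record: 2.5 * m / (1 - r)
    for a rate-r convolutional code with memory length m."""
    return 2.5 * m / (1.0 - r)

def old_rule_depth(m):
    """Commonly used rule: five times the memory length."""
    return 5.0 * m

m = 6                                   # e.g. a memory-6 code
d_half = truncation_depth(m, 1.0 / 2)   # 30.0, same as old_rule_depth(6)
d_three_q = truncation_depth(m, 3.0 / 4)  # 60.0: higher rates need more depth
```

    At rate 1/2 the two rules coincide, which is exactly why the old rule looked universally valid; at rate 3/4 the required depth doubles.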

  14. Exact error estimation for solutions of nuclide chain equations

    International Nuclear Information System (INIS)

    Tachihara, Hidekazu; Sekimoto, Hiroshi

    1999-01-01

    The exact solution of nuclide chain equations to an arbitrary number of significant figures is obtained for a linear chain by employing the Bateman method in multiple-precision arithmetic. An exact error estimation of the major calculation methods for nuclide chain equations is made by using this exact solution as a standard. The Bateman, finite difference, Runge-Kutta and matrix exponential methods are investigated. The present study confirms the following. The original Bateman method has very low accuracy in some cases because of large-scale cancellations. The revised Bateman method by Siewers reduces the occurrence of cancellations and thereby shows high accuracy. In the time-difference methods, i.e. the finite difference and Runge-Kutta methods, the solutions are mainly affected by truncation errors in the early decay time, and afterward by round-off errors. Even though a variable time mesh is employed to suppress the accumulation of round-off errors, it appears to be impractical. Judging from these estimations, the matrix exponential method is the best among all the methods except the Bateman method, whose calculation process for a linear chain is not identical with that for a general one. (author)
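
    The Bateman method referred to here has a closed form for a linear chain with distinct decay constants. Below is a compact double-precision sketch of that formula, not the paper's multiple-precision implementation; the decay constants are invented. It is exactly this sum of exponentials with alternating-sign denominators that suffers the large-scale cancellations mentioned in the abstract.

```python
import math

def bateman(n, t, lam, n1_0=1.0):
    """N_n(t) for a linear decay chain 1 -> 2 -> ... -> n with distinct
    decay constants lam[0..n-1] and all initial inventory in nuclide 1."""
    coeff = n1_0
    for i in range(n - 1):          # product of lambda_1 .. lambda_{n-1}
        coeff *= lam[i]
    total = 0.0
    for i in range(n):
        denom = 1.0                 # prod_{j != i} (lambda_j - lambda_i)
        for j in range(n):
            if j != i:
                denom *= lam[j] - lam[i]
        total += math.exp(-lam[i] * t) / denom
    return coeff * total

lam = [0.1, 0.05, 0.02]             # hypothetical decay constants (1/h)
n2 = bateman(2, 3.0, lam)           # daughter inventory after 3 h
```

    For n = 2 this reduces to the familiar λ1 N1(0) (e^{−λ1 t} − e^{−λ2 t})/(λ2 − λ1), which serves as an easy correctness check.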

  15. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.

  16. Truncated Groebner fans and lattice ideals

    OpenAIRE

    Lauritzen, Niels

    2005-01-01

    We outline a generalization of the Groebner fan of a homogeneous ideal with maximal cells parametrizing truncated Groebner bases. This "truncated" Groebner fan is usually much smaller than the full Groebner fan and offers the natural framework for conversion between truncated Groebner bases. The generic Groebner walk generalizes naturally to this setting by using the Buchberger algorithm with truncation on facets. We specialize to the setting of lattice ideals. Here facets along the generic w...

  17. Diagnostic efficiency of truncated area under the curve from 0 to 2 h (AUC₀₋₂) of mycophenolic acid in kidney transplant recipients receiving mycophenolate mofetil and concomitant tacrolimus.

    Science.gov (United States)

    Lampón, Natalia; Tutor-Crespo, María J; Romero, Rafael; Tutor, José C

    2011-07-01

    Recently, the use of the truncated area under the curve from 0 to 2 h (AUC(0-2)) of mycophenolic acid (MPA) has been proposed for therapeutic monitoring in liver transplant recipients. The aim of our study was the evaluation of the clinical usefulness of truncated AUC(0-2) in kidney transplant patients. Plasma MPA was measured in samples taken before the morning dose of mycophenolate mofetil, and one-half and 2 h post-dose, completing 63 MPA concentration-time profiles from 40 adult kidney transplant recipients. The AUC from 0 to 12 h (AUC(0-12)) was calculated using the validated algorithm of Pawinski et al. The truncated AUC(0-2) was calculated using the linear trapezoidal rule, and extrapolated to 0-12 h (trapezoidal extrapolated AUC(0-12)) as previously described. Algorithm calculated and trapezoidal extrapolated AUC(0-12) values showed high correlation (r=0.995) and acceptable dispersion (ma68=0.71 μg·h/mL), median prediction error (6.6%) and median absolute prediction error (12.6%). The truncated AUC(0-2) had acceptable diagnostic efficiency (87%) in the classification of subtherapeutic, therapeutic or supratherapeutic values with respect to AUC(0-12). However, due to the high inter-individual variation of the drug absorption-rate, the dispersion between both pharmacokinetic variables (ma68=6.9 μg·h/mL) was unacceptable. The substantial dispersion between truncated AUC(0-2) and AUC(0-12) values may be a serious objection for the routine use of MPA AUC(0-2) in clinical practice.
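
    The truncated AUC(0-2) from the three sampling times is a two-segment linear trapezoidal rule. A sketch with hypothetical MPA concentrations (the values are invented, not from the study):

```python
def trapezoid_auc(times, concs):
    """Truncated AUC by the linear trapezoidal rule
    (hours and ug/mL in, ug*h/mL out)."""
    auc = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times, concs),
                                  zip(times[1:], concs[1:])):
        auc += 0.5 * (c0 + c1) * (t1 - t0)
    return auc

# Hypothetical MPA profile: pre-dose, 0.5 h and 2 h post-dose (ug/mL)
times = [0.0, 0.5, 2.0]
concs = [1.5, 8.0, 4.5]
auc_0_2 = trapezoid_auc(times, concs)  # 0.5*(1.5+8.0)*0.5 + 0.5*(8.0+4.5)*1.5
```

    The study's concern is precisely that such a two-segment sum is sensitive to the absorption-rate variability between 0.5 and 2 h, which drives the dispersion against the full AUC(0-12).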

  18. Statistical estimation for truncated exponential families

    CERN Document Server

    Akahira, Masafumi

    2017-01-01

    This book presents new findings on nonregular statistical estimation. Unlike other books on this topic, its major emphasis is on helping readers understand the meaning and implications of both regularity and irregularity through a certain family of distributions. In particular, it focuses on a truncated exponential family of distributions with a natural parameter and truncation parameter as a typical nonregular family. This focus includes the (truncated) Pareto distribution, which is widely used in various fields such as finance, physics, hydrology, geology, astronomy, and other disciplines. The family is essential in that it links both regular and nonregular distributions, as it becomes a regular exponential family if the truncation parameter is known. The emphasis is on presenting new results on the maximum likelihood estimation of a natural parameter or truncation parameter if one of them is a nuisance parameter. In order to obtain more information on the truncation, the Bayesian approach is also considere...

  19. NLO renormalization in the Hamiltonian truncation

    Science.gov (United States)

    Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.

    2017-09-01

Hamiltonian truncation (also known as the "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate variant of Hamiltonian truncation to date, which implements renormalization at cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as the result of exactly integrating out a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.

  20. Accurate characterization of 3D diffraction gratings using time domain discontinuous Galerkin method with exact absorbing boundary conditions

    KAUST Repository

    Sirenko, Kostyantyn

    2013-07-01

Exact absorbing and periodic boundary conditions make it possible to truncate grating problems' infinite physical domains without introducing any errors. This work presents exact absorbing boundary conditions for 3D diffraction gratings and describes their discretization within a high-order time-domain discontinuous Galerkin finite element method (TD-DG-FEM). The error introduced by the boundary condition discretization matches that of the TD-DG-FEM; this results in an optimal solver in terms of accuracy and computation time. Numerical results demonstrate the superiority of this solver over TD-DG-FEM with perfectly matched layers (PML)-based domain truncation. © 2013 IEEE.

  1. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  2. Stability of Slopes Reinforced with Truncated Piles

    Directory of Open Access Journals (Sweden)

    Shu-Wei Sun

    2016-01-01

Piles are extensively used as a means of slope stabilization. A novel engineering technique of truncated piles, unlike traditional piles, is introduced in this paper. A simplified numerical method based on the shear strength reduction method is proposed to analyze the stability of slopes stabilized with truncated piles. The influential factors, which include pile diameter, pile spacing, depth of truncation, and existence of a weak layer, are systematically investigated from a practical point of view. The results show that an optimum ratio exists between the depth of truncation and the pile length above a slip surface, below which truncating behavior has no influence on the piled slope stability. This optimum ratio is larger for slopes stabilized with more flexible piles and piles with larger spacing. Besides, truncated piles are more suitable for slopes with a thin weak layer than for homogeneous slopes. In practical engineering, the piles can be truncated reasonably while ensuring the reinforcement effect. The truncated part of piles can be filled with the surrounding soil and compacted to reduce costs by using fewer materials.

  3. Anticipating cognitive effort: roles of perceived error-likelihood and time demands.

    Science.gov (United States)

    Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F

    2017-11-13

Why are some actions evaluated as effortful? In the present set of experiments we address this question by examining individuals' perception of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate engaging in a small amount of hard work (i.e., low time requirement, but high error-likelihood) vs. a large amount of easy work (i.e., high time requirement, but low error-likelihood) as being more effortful. In between-subject designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of the two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated an overall pattern of judgments similar to that of Experiments 1 through 3. However, both judgments of error-likelihood and time demand similarly predicted effort judgments. Results are discussed within the context of extant accounts of cognitive control, with considerations of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.

  4. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    Science.gov (United States)

    Chen, Chi-Jim; Pai, Tun-Wen; Cheng, Mox

    2015-01-01

A sweeping fingerprint sensor converts fingerprints on a row-by-row basis through image reconstruction techniques. However, a built fingerprint image might appear truncated and distorted when the finger was swept across the fingerprint sensor at a non-linear speed. If the truncated fingerprint images were enrolled as reference targets and collected by any automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would be decreased significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in real time. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo truncated fingerprints containing characteristics similar to truly truncated ones. Experimental results showed that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to construction of fingerprint templates. PMID:25835186

  5. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    Directory of Open Access Journals (Sweden)

    Chi-Jim Chen

    2015-03-01

A sweeping fingerprint sensor converts fingerprints on a row-by-row basis through image reconstruction techniques. However, a built fingerprint image might appear truncated and distorted when the finger was swept across the fingerprint sensor at a non-linear speed. If the truncated fingerprint images were enrolled as reference targets and collected by any automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would be decreased significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in real time. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo truncated fingerprints containing characteristics similar to truly truncated ones. Experimental results showed that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to construction of fingerprint templates.

  6. Computing correct truncated excited state wavefunctions

    Science.gov (United States)

    Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.

    2016-12-01

    We demonstrate that, if a wave function's truncated expansion is small, then the standard excited states computational method, of optimizing one "root" of a secular equation, may lead to an incorrect wave function - despite the correct energy according to the theorem of Hylleraas, Undheim and McDonald - whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower lying approximants) leads to correct reliable small truncated wave functions. The demonstration is done in He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.

  7. Time Error Analysis of SOE System Using Network Time Protocol

    International Nuclear Information System (INIS)

    Keum, Jong Yong; Park, Geun Ok; Park, Heui Youn

    2005-01-01

To find the accuracy of time in the fully digitalized SOE (Sequence of Events) system, we used a formal specification of the Network Time Protocol (NTP) Version 3, which is used to synchronize timekeeping among a set of distributed computers. By constructing a simple experimental environment and performing internet time synchronization experiments, we analyzed the time errors of the local clocks of the SOE system synchronized with a time server via computer networks.

  8. Mixtures of truncated basis functions

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2012-01-01

    In this paper we propose a framework, called mixtures of truncated basis functions (MoTBFs), for representing general hybrid Bayesian networks. The proposed framework generalizes both the mixture of truncated exponentials (MTEs) framework and the mixture of polynomials (MoPs) framework. Similar t...

  9. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
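The qualitative contrast between classical and Berkson error can be illustrated with a small simulation. The sketch below uses ordinary least squares on a simple additive linear model rather than the study's Poisson GLM of emergency department visits on log-scale multiplicative error, purely to show the textbook effect: classical error attenuates the slope toward the null, while Berkson error leaves it unbiased. All numbers are illustrative:

```python
import random

random.seed(1)

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

n, beta = 20000, 0.5

# Classical error: measured = true + noise; slope attenuated by 1/(1 + var_u)
true_x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [beta * x + random.gauss(0.0, 0.1) for x in true_x]
measured = [x + random.gauss(0.0, 1.0) for x in true_x]
b_classical = ols_slope(measured, y)   # ~ beta / 2 = 0.25

# Berkson error: true = assigned + noise; slope stays unbiased
assigned = [random.gauss(0.0, 1.0) for _ in range(n)]
true_b = [a + random.gauss(0.0, 1.0) for a in assigned]
y_b = [beta * x + random.gauss(0.0, 0.1) for x in true_b]
b_berkson = ols_slope(assigned, y_b)   # ~ beta = 0.5
```

The away-from-null Berkson bias reported in the abstract arises in the multiplicative (log-scale) setting, which this additive sketch deliberately does not model.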

  10. Properties of truncated multiplicity distributions

    International Nuclear Information System (INIS)

    Lupia, S.

    1995-01-01

    Truncation effects on multiplicity distributions are discussed. Observables sensitive to the tail, like factorial moments, factorial cumulants and their ratio, are shown to be strongly affected by truncation. A possible way to overcome this problem by looking at the head of the distribution is suggested. (author)
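The sensitivity of factorial moments to the tail is easy to check numerically. The sketch below compares the normalized second factorial moment F2 = ⟨n(n−1)⟩/⟨n⟩² of a Poisson multiplicity distribution with and without a tail cut; the Poisson choice is illustrative, not taken from the paper:

```python
from math import exp, factorial

def poisson_pmf(n, mean):
    return exp(-mean) * mean ** n / factorial(n)

def f2_ratio(mean, n_max):
    """Normalized second factorial moment F2 = <n(n-1)> / <n>^2
    of a Poisson multiplicity distribution truncated at n_max."""
    probs = [poisson_pmf(n, mean) for n in range(n_max + 1)]
    norm = sum(probs)
    m1 = sum(n * p for n, p in enumerate(probs)) / norm
    f2 = sum(n * (n - 1) * p for n, p in enumerate(probs)) / norm
    return f2 / m1 ** 2

full = f2_ratio(10.0, 90)    # effectively untruncated: F2 = 1 for a Poisson
trunc = f2_ratio(10.0, 12)   # cutting the tail suppresses F2 below 1
```

Even a cut only slightly above the mean shifts F2 noticeably, which is the distortion the abstract warns about.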

  11. Properties of truncated multiplicity distributions

    Energy Technology Data Exchange (ETDEWEB)

    Lupia, S. [Turin Univ. (Italy). Dipt. di Fisica

    1995-12-31

    Truncation effects on multiplicity distributions are discussed. Observables sensitive to the tail, like factorial moments, factorial cumulants and their ratio, are shown to be strongly affected by truncation. A possible way to overcome this problem by looking at the head of the distribution is suggested. (author)

  12. Truncated Calogero-Sutherland models

    Science.gov (United States)

    Pittman, S. M.; Beau, M.; Olshanii, M.; del Campo, A.

    2017-05-01

    A one-dimensional quantum many-body system consisting of particles confined in a harmonic potential and subject to finite-range two-body and three-body inverse-square interactions is introduced. The range of the interactions is set by truncation beyond a number of neighbors and can be tuned to interpolate between the Calogero-Sutherland model and a system with nearest and next-nearest neighbors interactions discussed by Jain and Khare. The model also includes the Tonks-Girardeau gas describing impenetrable bosons as well as an extension with truncated interactions. While the ground state wave function takes a truncated Bijl-Jastrow form, collective modes of the system are found in terms of multivariable symmetric polynomials. We numerically compute the density profile, one-body reduced density matrix, and momentum distribution of the ground state as a function of the range r and the interaction strength.

  13. Lamp with a truncated reflector cup

    Science.gov (United States)

    Li, Ming; Allen, Steven C.; Bazydola, Sarah; Ghiu, Camil-Daniel

    2013-10-15

    A lamp assembly, and method for making same. The lamp assembly includes first and second truncated reflector cups. The lamp assembly also includes at least one base plate disposed between the first and second truncated reflector cups, and a light engine disposed on a top surface of the at least one base plate. The light engine is configured to emit light to be reflected by one of the first and second truncated reflector cups.

  14. The timing of spontaneous detection and repair of naming errors in aphasia.

    Science.gov (United States)

    Schuchard, Julia; Middleton, Erica L; Schwartz, Myrna F

    2017-08-01

    This study examined the timing of spontaneous self-monitoring in the naming responses of people with aphasia. Twelve people with aphasia completed a 615-item naming test twice, in separate sessions. Naming attempts were scored for accuracy and error type, and verbalizations indicating detection were coded as negation (e.g., "no, not that") or repair attempts (i.e., a changed naming attempt). Focusing on phonological and semantic errors, we measured the timing of the errors and of the utterances that provided evidence of detection. The effects of error type and detection response type on error-to-detection latencies were analyzed using mixed-effects regression modeling. We first asked whether phonological errors and semantic errors differed in the timing of the detection process or repair planning. Results suggested that the two error types primarily differed with respect to repair planning. Specifically, repair attempts for phonological errors were initiated more quickly than repair attempts for semantic errors. We next asked whether this difference between the error types could be attributed to the tendency for phonological errors to have a high degree of phonological similarity with the subsequent repair attempts, thereby speeding the programming of the repairs. Results showed that greater phonological similarity between the error and the repair was associated with faster repair times for both error types, providing evidence of error-to-repair priming in spontaneous self-monitoring. When controlling for phonological overlap, significant effects of error type and repair accuracy on repair times were also found. These effects indicated that correct repairs of phonological errors were initiated particularly quickly, whereas repairs of semantic errors were initiated relatively slowly, regardless of their accuracy. We discuss the implications of these findings for theoretical accounts of self-monitoring and the role of speech error repair in learning. Copyright

  15. Error Recovery in the Time-Triggered Paradigm with FTT-CAN.

    Science.gov (United States)

    Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís

    2018-01-11

    Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.

  16. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  17. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... and constraint evaluation is designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...... of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice....

  18. Formal truncations of connected kernel equations

    International Nuclear Information System (INIS)

    Dixon, R.M.

    1977-01-01

    The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKE's. The related wave function formalism of Sandhas, of L'Huillier, Redish and Tandy and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that the two-cluster connected truncations should be a useful starting point for nuclear systems

  19. On the determinants of measurement error in time-driven costing

    NARCIS (Netherlands)

    Cardinaels, E.; Labro, E.

    2008-01-01

    Although time estimates are used extensively for costing purposes, they are prone to measurement error. In an experimental setting, we research how measurement error in time estimates varies with: (1) the level of aggregation in the definition of costing system activities (aggregated or

  20. Perspective on rainbow-ladder truncation

    International Nuclear Information System (INIS)

Eichmann, G.; Alkofer, R.; Krassnigg, A.; Cloët, I. C.; Roberts, C. D.

    2008-01-01

    Prima facie the systematic implementation of corrections to the rainbow-ladder truncation of QCD's Dyson-Schwinger equations will uniformly reduce in magnitude those calculated mass-dimensioned results for pseudoscalar and vector meson properties that are not tightly constrained by symmetries. The aim and interpretation of studies employing rainbow-ladder truncation are reconsidered in this light

  1. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    Science.gov (United States)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration
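The dependence of global truncation error on the numerical order of the scheme can be illustrated on a toy advection problem. The sketch below compares a first-order Euler step with classical fourth-order Runge-Kutta on circular motion, a hypothetical stand-in for the trajectory equation rather than the MPTRAC implementation:

```python
from math import pi, hypot

def f(x, y):
    # Circular advection field: dx/dt = -y, dy/dt = x (period 2*pi)
    return -y, x

def step_euler(x, y, h):
    fx, fy = f(x, y)
    return x + h * fx, y + h * fy

def step_rk4(x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def global_error(step, n_steps, t_end=2 * pi):
    """Distance from the analytic solution after one full revolution."""
    h = t_end / n_steps
    x, y = 1.0, 0.0
    for _ in range(n_steps):
        x, y = step(x, y, h)
    return hypot(x - 1.0, y)  # exact trajectory returns to (1, 0)

e_euler = global_error(step_euler, 1000)  # first order: error ~ h
e_rk4 = global_error(step_rk4, 1000)      # fourth order: error ~ h**4
```

At equal step size the higher-order scheme is accurate to many more digits, which is why the paper groups schemes mostly by their numerical order.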

  2. Tracking errors in a prototype real-time tumour tracking system

    International Nuclear Information System (INIS)

    Sharp, Gregory C; Jiang, Steve B; Shimizu, Shinichi; Shirato, Hiroki

    2004-01-01

    In motion-compensated radiation therapy, radio-opaque markers can be implanted in or near a tumour and tracked in real-time using fluoroscopic imaging. Tracking these implanted markers gives highly accurate position information, except when tracking fails due to poor or ambiguous imaging conditions. This study investigates methods for automatic detection of tracking errors, and assesses the frequency and impact of tracking errors on treatments using the prototype real-time tumour tracking system. We investigated four indicators for automatic detection of tracking errors, and found that the distance between corresponding rays was most effective. We also found that tracking errors cause a loss of gating efficiency of between 7.6 and 10.2%. The incidence of treatment beam delivery during tracking errors was estimated at between 0.8% and 1.25%
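The "distance between corresponding rays" indicator has a simple geometric form: each fluoroscope defines a ray from its X-ray source through the detected marker position, and when tracking is correct the two rays nearly intersect. A minimal sketch, assuming non-parallel rays and with hypothetical coordinates and threshold:

```python
def ray_distance(p1, d1, p2, d2):
    """Minimum distance between two 3D lines p_i + t * d_i.
    Parallel rays are not handled in this sketch."""
    n = (d1[1] * d2[2] - d1[2] * d2[1],
         d1[2] * d2[0] - d1[0] * d2[2],
         d1[0] * d2[1] - d1[1] * d2[0])  # n = d1 x d2, normal to both lines
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    if norm == 0.0:
        raise ValueError("parallel rays not handled in this sketch")
    w = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
    # Distance is the projection of the connecting vector onto the common normal
    return abs(w[0] * n[0] + w[1] * n[1] + w[2] * n[2]) / norm

# Two camera rays that miss each other by 1 unit (hypothetical geometry)
d = ray_distance((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                 (0.0, 0.0, 1.0), (0.0, 1.0, 0.0))
TRACKING_ERROR_THRESHOLD = 2.0  # hypothetical threshold in the same units
is_tracking_error = d > TRACKING_ERROR_THRESHOLD
```

Flagging frames where this distance exceeds a threshold is one way such an indicator could be used for automatic error detection.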

  3. A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains

    Directory of Open Access Journals (Sweden)

    Kemin Wang

    2014-01-01

Model checking of infinite-state continuous-time Markov chains (CTMCs) inevitably encounters the state explosion problem when constructing the CTMC model. Our approach is to obtain a truncated model of the infinite one that is sufficient for model checking of Continuous Stochastic Logic (CSL) based system properties. We propose a multistep extending advanced truncation method for model construction of CTMCs and implement it in the INFAMY model checker. The experimental results show that our method is effective.
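The idea of truncating an infinite-state chain at a level that is sufficient for the property of interest can be illustrated on a birth-death CTMC, whose stationary distribution has a closed form. This is a toy stand-in for the INFAMY construction, not its algorithm:

```python
def truncated_mm1_stationary(rho, N):
    """Stationary distribution of a birth-death CTMC (M/M/1 queue,
    utilization rho < 1) truncated at level N; pi_n is proportional to rho**n."""
    weights = [rho ** n for n in range(N + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def total_variation(p, q):
    m = max(len(p), len(q))
    p = p + [0.0] * (m - len(p))
    q = q + [0.0] * (m - len(q))
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

rho = 0.5
# Stationary law of the infinite chain: geometric (1 - rho) * rho**n
exact = [(1 - rho) * rho ** n for n in range(200)]
err10 = total_variation(truncated_mm1_stationary(rho, 10), exact)
err20 = total_variation(truncated_mm1_stationary(rho, 20), exact)
# Extending the truncation level shrinks the error geometrically
```

A multistep method can keep raising N until such an error bound falls below what the CSL property requires.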

  4. Linear and Quadratic Interpolators Using Truncated-Matrix Multipliers and Squarers

    Directory of Open Access Journals (Sweden)

    E. George Walters III

    2015-11-01

This paper presents a technique for designing linear and quadratic interpolators for function approximation using truncated multipliers and squarers. Initial coefficient values are found using a Chebyshev-series approximation and then adjusted through exhaustive simulation to minimize the maximum absolute error of the interpolator output. This technique is suitable for any function and any precision up to 24 bits (IEEE single precision). Designs for linear and quadratic interpolators that implement the 1/x, 1/√x, log2(1+2^x), log2(x) and 2^x functions are presented and analyzed as examples. Results show that a proposed 24-bit interpolator computing 1/x with a design specification of ±1 unit in the last place (ulp) error in the product uses 16.4% less area and 15.3% less power than a comparable standard interpolator with the same error specification. Sixteen-bit linear interpolators for other functions are shown to use up to 17.3% less area and 12.1% less power, and 16-bit quadratic interpolators are shown to use up to 25.8% less area and 24.7% less power.
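A software model of such a table-based linear interpolator is easy to sketch. The version below uses simple endpoint-fit segment coefficients in floating point, a stand-in for the paper's Chebyshev-series initial values, exhaustive coefficient adjustment, and truncated fixed-point hardware:

```python
def build_linear_interp(f, a, b, segments):
    """Per-segment (x0, offset, slope) coefficients via endpoint fit."""
    h = (b - a) / segments
    table = []
    for i in range(segments):
        x0 = a + i * h
        c0 = f(x0)
        c1 = (f(x0 + h) - c0) / h  # chord slope over the segment
        table.append((x0, c0, c1))
    return table, h

def interp(table, h, a, x):
    """Select the segment from the high bits of x, then evaluate c0 + c1*dx."""
    i = min(int((x - a) / h), len(table) - 1)
    x0, c0, c1 = table[i]
    return c0 + c1 * (x - x0)

recip = lambda x: 1.0 / x
table, h = build_linear_interp(recip, 1.0, 2.0, 64)
max_err = max(abs(interp(table, h, 1.0, x) - recip(x))
              for x in (1.0 + k / 4096.0 for k in range(4096)))
# With 64 segments the worst-case chord error is about h**2 * |f''| / 8 ~ 6e-5
```

Minimax-adjusted coefficients, as in the paper, would roughly halve this chord error before any quantization of the coefficients and arithmetic.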

  5. Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements

    Science.gov (United States)

    Deeg, H. J.

    2015-06-01

Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived, σP = σT (12/(N³ − N))^(1/2), where σP is the period error, σT the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, where epoch errors are quoted for the first time measurement, are prone to an overestimation of the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is completed, such central epochs should be the preferred way of quoting linear ephemerides. While this work was motivated by the analysis of eclipse timing measures in space-based light curves, it should be applicable to any other problem with an uninterrupted sequence of discrete timings for which the determination of a zero point, of a constant period, and of the associated errors is needed.
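The period-error formula can be checked by Monte Carlo: simulate N noisy timings of a strictly periodic signal, fit the period by linear least squares, and compare the scatter of the fitted periods with σP = σT (12/(N³ − N))^(1/2). All numerical values below are illustrative:

```python
import random

random.seed(7)

def fit_period(timings):
    """Least-squares slope of timing vs. epoch number = estimated period."""
    n = len(timings)
    xs = range(n)
    mx = (n - 1) / 2.0
    my = sum(timings) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (t - my) for x, t in zip(xs, timings))
    return sxy / sxx

N, P, sigma_T = 100, 1.2345, 0.01           # illustrative values (e.g. days)
pred = sigma_T * (12.0 / (N ** 3 - N)) ** 0.5  # formula from the paper

periods = []
for _ in range(2000):
    ts = [k * P + random.gauss(0.0, sigma_T) for k in range(N)]
    periods.append(fit_period(ts))

mean_p = sum(periods) / len(periods)
emp = (sum((p - mean_p) ** 2 for p in periods) / len(periods)) ** 0.5
# emp agrees with pred to within Monte Carlo noise
```

The formula follows because the slope-estimator variance is σT²/Σ(k − k̄)² and Σ(k − k̄)² = (N³ − N)/12 for equally spaced epoch numbers.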

  6. Design Margin Elimination Through Robust Timing Error Detection at Ultra-Low Voltage

    OpenAIRE

    Reyserhove, Hans; Dehaene, Wim

    2017-01-01

    This paper discusses a timing error masking-aware ARM Cortex M0 microcontroller system. Through in-path timing error detection, operation at the point-of-first-failure is possible without corrupting the pipeline state, effectively eliminating traditional timing margins. Error events are flagged and gathered to allow dynamic voltage scaling. The error-aware microcontroller was implemented in a 40 nm CMOS process and realizes ultra-low voltage operation down to 0.29V at 5MHz consuming 12.90p...

  7. Truncation correction for oblique filtering lines

    International Nuclear Information System (INIS)

    Hoppe, Stefan; Hornegger, Joachim; Lauritsch, Guenter; Dennerlein, Frank; Noo, Frederic

    2008-01-01

    State-of-the-art filtered backprojection (FBP) algorithms often define the filtering operation to be performed along oblique filtering lines in the detector. A limited scan field of view leads to the truncation of those filtering lines, which causes artifacts in the final reconstructed volume. In contrast to the case where filtering is performed solely along the detector rows, no methods are available for the case of oblique filtering lines. In this work, the authors present two novel truncation correction methods which effectively handle data truncation in this case. Method 1 (basic approach) handles data truncation in two successive preprocessing steps by applying a hybrid data extrapolation method, which is a combination of a water cylinder extrapolation and a Gaussian extrapolation. It is independent of any specific reconstruction algorithm. Method 2 (kink approach) uses similar concepts for data extrapolation as the basic approach but needs to be integrated into the reconstruction algorithm. Experiments are presented from simulated data of the FORBILD head phantom, acquired along a partial-circle-plus-arc trajectory. The theoretically exact M-line algorithm is used for reconstruction. Although the discussion is focused on theoretically exact algorithms, the proposed truncation correction methods can be applied to any FBP algorithm that exposes oblique filtering lines.

  8. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    Science.gov (United States)

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure of discrete-time iterative adaptive dynamic programming algorithms, by which most of the discrete-time reinforcement learning algorithms can be described using the GPI structure. For the first time, approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
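
    The GPI structure the abstract refers to, i.e. truncated policy evaluation interleaved with greedy policy improvement, can be illustrated on a toy problem. A minimal sketch (the two-state MDP and every number in it are invented for illustration; the paper itself concerns nonlinear systems with value-function approximation):

```python
import numpy as np

# Toy 2-state, 2-action MDP: P[a, s, s'] transition probabilities,
# R[s, a] rewards, discount factor gamma (all values invented).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.1, 0.9], [0.8, 0.2]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])
gamma = 0.9

V = np.zeros(2)
policy = np.zeros(2, dtype=int)
for _ in range(50):
    # Truncated policy evaluation: only a few Bellman backups per sweep
    # (the "generalized" part -- between value and policy iteration).
    for _ in range(3):
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V = Q[np.arange(2), policy]
    # Greedy policy improvement
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    policy = np.argmax(Q, axis=1)

# At convergence, V approximately satisfies the Bellman optimality equation
bellman_residual = np.max(np.abs(
    V - np.max(R + gamma * np.einsum('ast,t->sa', P, V), axis=1)))
```

    With exact backups the residual contracts by gamma per backup; the paper's contribution is bounding what happens when each backup carries an approximation error.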

  9. The truncated Wigner method for Bose-condensed gases: limits of validity and applications

    International Nuclear Information System (INIS)

    Sinatra, Alice; Lobo, Carlos; Castin, Yvan

    2002-01-01

    We study the truncated Wigner method applied to a weakly interacting spinless Bose-condensed gas which is perturbed away from thermal equilibrium by a time-dependent external potential. The principle of the method is to generate an ensemble of classical fields ψ(r) which samples the Wigner quasi-distribution function of the initial thermal equilibrium density operator of the gas, and then to evolve each classical field with the Gross-Pitaevskii equation. In the first part of the paper we improve the sampling technique over our previous work (Sinatra et al 2000 J. Mod. Opt. 47 2629-44) and we test its accuracy against the exactly solvable model of the ideal Bose gas. In the second part of the paper we investigate the conditions of validity of the truncated Wigner method. For short evolution times it is known that the time-dependent Bogoliubov approximation is valid for almost pure condensates. The requirement that the truncated Wigner method reproduce the Bogoliubov prediction leads to the constraint that the number of field modes in the Wigner simulation must be smaller than the number of particles in the gas. For longer evolution times the nonlinear dynamics of the noncondensed modes of the field plays an important role. To demonstrate this we analyse the case of a three-dimensional spatially homogeneous Bose-condensed gas and we test the ability of the truncated Wigner method to correctly reproduce the Beliaev-Landau damping of an excitation of the condensate. We have identified the mechanism which limits the validity of the truncated Wigner method: the initial ensemble of classical fields, driven by the time-dependent Gross-Pitaevskii equation, thermalizes to a classical field distribution at a temperature T_class which is larger than the initial temperature T of the quantum gas. When T_class significantly exceeds T, a spurious damping is observed in the Wigner simulation. This leads to the second validity condition for the truncated Wigner method, T_class − T
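
    The principle described above, sampling classical fields from the initial Wigner distribution and evolving each with the Gross-Pitaevskii equation, can be sketched for a single 1D realization. This is not the authors' code: all parameters are illustrative, units are chosen with ħ = m = kB = 1, and only one trajectory is propagated (a real calculation averages over an ensemble).

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, g, dt, steps = 64, 10.0, 0.05, 1e-3, 200   # modes, box size, coupling
k = 2 * np.pi * np.fft.fftfreq(M, d=L / M)
eps = 0.5 * k**2                  # free-particle dispersion
T, N0 = 0.5, 1000.0               # temperature, condensate atom number

# Wigner sampling of the thermal initial state: each mode receives its
# mean thermal occupation plus half a quantum of Gaussian noise.
nbar = np.zeros(M)
nz = eps > 0
nbar[nz] = 1.0 / np.expm1(np.minimum(eps[nz] / T, 50.0))
c = np.sqrt(nbar + 0.5) * (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
c[0] = np.sqrt(N0)                # put the condensate in the k = 0 mode
psi = np.fft.ifft(c) * M / np.sqrt(L)     # classical field psi(x)
norm0 = np.sum(np.abs(psi)**2) * (L / M)

# Evolve with the Gross-Pitaevskii equation (split-step Fourier method)
for _ in range(steps):
    psi *= np.exp(-1j * dt * g * np.abs(psi)**2)
    psi = np.fft.ifft(np.exp(-1j * dt * eps) * np.fft.fft(psi))
norm1 = np.sum(np.abs(psi)**2) * (L / M)
```

    The split-step evolution is norm-preserving, which gives a basic sanity check on the propagation; the physics of interest (damping, thermalization to T_class) emerges only from ensemble averages over many such fields.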

  10. Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.

    Science.gov (United States)

    Limongi, Roberto; Silva, Angélica M

    2016-11-01

    The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.

  11. Decrease in medical command errors with use of a "standing orders" protocol system.

    Science.gov (United States)

    Holliman, C J; Wuerz, R C; Meador, S A

    1994-05-01

    The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine if the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start time of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols as judged by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. Two thousand one ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate was decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system. (ABSTRACT TRUNCATED AT 250 WORDS)

  12. FEM for time-fractional diffusion equations, novel optimal error analyses

    OpenAIRE

    Mustapha, Kassem

    2016-01-01

    A semidiscrete Galerkin finite element method applied to time-fractional diffusion equations with time-space dependent diffusivity on bounded convex spatial domains will be studied. The main focus is on achieving optimal error results with respect to both the convergence order of the approximate solution and the regularity of the initial data. By using novel energy arguments, for each fixed time $t$, optimal error bounds in the spatial $L^2$- and $H^1$-norms are derived for both cases: smooth...

  13. Error and symmetry analysis of Misner's algorithm for spherical harmonic decomposition on a cubic grid

    International Nuclear Information System (INIS)

    Fiske, David R

    2006-01-01

    Computing spherical harmonic decompositions is a ubiquitous technique that arises in a wide variety of disciplines and a large number of scientific codes. Because spherical harmonics are defined by integrals over spheres, however, one must perform some sort of interpolation in order to compute them when data are stored on a cubic lattice. Misner (2004 Class. Quantum Grav. 21 S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid, which has been found in real applications to be both efficient and robust to the presence of mesh refinement boundaries. At the same time, however, practical applications of the algorithm require knowledge of how the truncation errors of the algorithm depend on the various parameters in the algorithm. Based on analytic arguments and experience using the algorithm in real numerical simulations, I explore these dependences and provide a rule of thumb for choosing the parameters based on the truncation errors of the underlying data. I also demonstrate that symmetries in the spherical harmonics themselves allow for an even more efficient implementation of the algorithm than was suggested by Misner in his original paper

  14. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    Science.gov (United States)

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  15. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    Directory of Open Access Journals (Sweden)

    Waddah Waheeb

    Full Text Available Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF), that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  16. Distance error correction for time-of-flight cameras

    Science.gov (United States)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows a large number of distance measurements to be acquired for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.

  17. Zero-Error Capacity of a Class of Timing Channels

    DEFF Research Database (Denmark)

    Kovacevic, M.; Popovski, Petar

    2014-01-01

    We analyze the problem of zero-error communication through timing channels that can be interpreted as discrete-time queues with bounded waiting times. The channel model includes the following assumptions: 1) time is slotted; 2) at most N particles are sent in each time slot; 3) every particle is ...

  18. Quark-gluon vertex dressing and meson masses beyond ladder-rainbow truncation

    International Nuclear Information System (INIS)

    Matevosyan, Hrayr H.; Thomas, Anthony W.; Tandy, Peter C.

    2007-01-01

    We include a generalized infinite class of quark-gluon vertex dressing diagrams in a study of how dynamics beyond the ladder-rainbow truncation influences the Bethe-Salpeter description of light-quark pseudoscalar and vector mesons. The diagrammatic specification of the vertex is mapped into a corresponding specification of the Bethe-Salpeter kernel, which preserves chiral symmetry. This study adopts the algebraic format afforded by the simple interaction kernel used in previous work on this topic. The new feature of the present work is that in every diagram summed for the vertex and the corresponding Bethe-Salpeter kernel, each quark-gluon vertex is required to be the self-consistent vertex solution. We also adopt from previous work the effective accounting for the role of the explicitly non-Abelian three-gluon coupling in a global manner through one parameter determined from recent lattice-QCD data for the vertex. Within the current model, the more consistent dressed vertex limits the ladder-rainbow truncation error for vector mesons to be never more than 10% as the current quark mass is varied from the u/d region to the b region

  19. A Residual Approach for Balanced Truncation Model Reduction (BTMR) of Compartmental Systems

    Directory of Open Access Journals (Sweden)

    William La Cruz

    2014-05-01

    Full Text Available This paper presents a residual approach to the square-root balanced truncation algorithm for model order reduction of continuous, linear and time-invariant compartmental systems. Specifically, the new approach uses a residual method to approximate the controllability and observability gramians, whose solution is an essential and computationally expensive step of the square-root balanced truncation algorithm. Numerical experiments are included to highlight the efficacy of the proposed approach.

  20. Error-measure for anisotropic grid-adaptation in turbulence-resolving simulations

    Science.gov (United States)

    Toosi, Siavash; Larsson, Johan

    2015-11-01

    Grid-adaptation requires an error-measure that identifies where the grid should be refined. In the case of turbulence-resolving simulations (DES, LES, DNS), a simple error-measure is the small-scale resolved energy, which scales with both the modeled subgrid-stresses and the numerical truncation errors in many situations. Since this is a scalar measure, it does not carry any information on the anisotropy of the optimal grid-refinement. The purpose of this work is to introduce a new error-measure for turbulence-resolving simulations that is capable of predicting nearly-optimal anisotropic grids. Turbulent channel flow at Reτ ~ 300 is used to assess the performance of the proposed error-measure. The formulation is geometrically general, applicable to any type of unstructured grid.

  1. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR) filter structure. The correction method for the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating spurious spurs and improving the dynamic performance of the system.
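
    The compensation idea can be sketched numerically. The sketch below is a simplification of the paper's filter: it uses a 4-tap cubic Lagrange fractional-delay FIR (not a true cubic spline) to undo a deliberately injected channel-2 timing skew in a two-channel interleaved system; all values are illustrative.

```python
import numpy as np

Ts = 1.0                       # aggregate sample period (normalized)
n = np.arange(512)
skew = 0.1 * Ts                # channel-2 sample-time error (illustrative)
f0 = 0.023                     # test-tone frequency, cycles per sample
sig = lambda t: np.sin(2 * np.pi * f0 * t)

# Interleaved acquisition: even samples from channel 1 (ideal timing),
# odd samples from channel 2, which samples late by `skew`.
t_ideal = n * Ts
t_actual = t_ideal + np.where(n % 2 == 1, skew, 0.0)
samples = sig(t_actual)

# The channel-2 stream alone is uniform with period 2*Ts; undo the skew
# by interpolating it at fractional index m - delta.
x2 = samples[1::2]
delta = skew / (2 * Ts)
mu = 1.0 - delta               # fractional position between x2[m-1] and x2[m]
w = np.array([-mu * (mu - 1) * (mu - 2) / 6,       # 4-tap cubic Lagrange
              (mu + 1) * (mu - 1) * (mu - 2) / 2,  # fractional-delay FIR
              -(mu + 1) * mu * (mu - 2) / 2,
              (mu + 1) * mu * (mu - 1) / 6])
m = np.arange(2, x2.size - 1)  # skip the filter edges
x2_corr = w[0]*x2[m-2] + w[1]*x2[m-1] + w[2]*x2[m] + w[3]*x2[m+1]

truth = sig((2 * m + 1) * Ts)  # what channel 2 should have sampled
err_before = np.max(np.abs(x2[m] - truth))
err_after = np.max(np.abs(x2_corr - truth))
```

    For this oversampled tone the residual error after correction is orders of magnitude below the skew-induced error, which is what suppresses the interleaving spurs.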

  2. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    Science.gov (United States)

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  3. Adjoint-Based a Posteriori Error Estimation for Coupled Time-Dependent Systems

    KAUST Repository

    Asner, Liya; Tavener, Simon; Kay, David

    2012-01-01

    We consider time-dependent parabolic problems coupled across a common interface which we formulate using a Lagrange multiplier construction and solve by applying a monolithic solution technique. We derive an adjoint-based a posteriori error representation for a quantity of interest given by a linear functional of the solution. We establish the accuracy of our error representation formula through numerical experimentation and investigate the effect of error in the adjoint solution. Crucially, the error representation affords a distinction between temporal and spatial errors and can be used as a basis for a blockwise time-space refinement strategy. Numerical tests illustrate the efficacy of the refinement strategy by capturing the distinctive behavior of a localized traveling wave solution. The saddle point systems considered here are equivalent to those arising in the mortar finite element technique for parabolic problems. © 2012 Society for Industrial and Applied Mathematics.

  4. Experimental Evaluation of a Mixed Controller That Amplifies Spatial Errors and Reduces Timing Errors

    Directory of Open Access Journals (Sweden)

    Laura Marchal-Crespo

    2017-06-01

    Full Text Available Research on motor learning suggests that training with haptic guidance enhances learning of the timing components of motor tasks, whereas error amplification is better for learning the spatial components. We present a novel mixed guidance controller that combines haptic guidance and error amplification to simultaneously promote learning of the timing and spatial components of complex motor tasks. The controller is realized using a force field around the desired position. This force field has a stable manifold tangential to the trajectory that guides subjects in velocity-related aspects. The force field has an unstable manifold perpendicular to the trajectory, which amplifies the perpendicular (spatial) error. We also designed a controller that applies randomly varying, unpredictable disturbing forces to enhance the subjects’ active participation by pushing them away from their “comfort zone.” We conducted an experiment with thirty-two healthy subjects to evaluate the impact of four different training strategies on motor skill learning and self-reported motivation: (i) no haptics, (ii) mixed guidance, (iii) perpendicular error amplification and tangential haptic guidance provided in sequential order, and (iv) randomly varying disturbing forces. Subjects trained two motor tasks using ARMin IV, a robotic exoskeleton for upper limb rehabilitation: follow circles with an ellipsoidal speed profile, and move along a 3D line following a complex speed profile. Mixed guidance showed no detectable learning advantages over the other groups. Results suggest that the effectiveness of the training strategies depends on the subjects’ initial skill level. Mixed guidance seemed to benefit subjects who performed the circle task with smaller errors during baseline (i.e., initially more skilled subjects), while training with no haptics was more beneficial for subjects who created larger errors (i.e., less skilled subjects). Therefore, perhaps the high functional

  5. On the effect of systematic errors in near real time accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.

    1987-01-01

    Systematic measurement errors have a decisive impact on nuclear materials accountancy. This has been demonstrated on various occasions for a fixed number of inventory periods, i.e. for situations where the overall probability of detection is taken as the measure of effectiveness. In the framework of Near Real Time Accountancy (NRTA), however, such analyses have not yet been performed. In this paper sequential test procedures are considered which are based on the so-called MUF residuals. It is shown that, if the decision maker does not know the systematic error variance, the average run lengths tend towards infinity if this variance is equal to or larger than that of the random error. Furthermore, if the decision maker knows this variance, the average run length for constant loss or diversion is not shorter than that without loss or diversion. These results cast some doubt on the present practice of data evaluation, where systematic errors are tacitly assumed to persist for an infinite time. In fact, information about the time dependence of the variances of these errors has to be gathered in order that the efficiency of NRTA evaluation methods can be estimated realistically.

  6. A high-order time-accurate interrogation method for time-resolved PIV

    International Nuclear Information System (INIS)

    Lynch, Kyle; Scarano, Fulvio

    2013-01-01

    A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation) illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. In
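
    The error-reduction argument above, that least-squares polynomial fits along a trajectory suppress both random and truncation errors, can be sketched in one dimension with a synthetic track (all numbers illustrative): a straight-line fit over a long multi-frame interval is biased by trajectory curvature, while a cubic fit recovers the central velocity.

```python
import numpy as np

dt = 0.1                                     # frame interval (arbitrary units)
t = np.arange(-3, 4) * dt                    # 7-frame track centred on t = 0
v_true = 2.0
x = v_true * t + 1.5 * t**2 + 0.4 * t**3     # curved trajectory (synthetic)
rng = np.random.default_rng(2)
x_meas = x + rng.normal(0.0, 0.001, t.size)  # small position noise

# Linear fit over the whole track: the odd t^3 term biases the slope
# (truncation error of the low-order model).
v_linear = np.polyfit(t, x_meas, 1)[0]

# Cubic fit: the velocity at the track centre is the derivative of the
# polynomial at t = 0, i.e. simply its linear coefficient.
v_cubic = np.polyfit(t, x_meas, 3)[2]
```

    Note the even t² term does not bias the linear slope on a symmetric time window; it is the odd t³ term (nonlinear velocity) that the higher-order fit removes.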

  7. SOERP, Statistics and 2. Order Error Propagation for Function of Random Variables

    International Nuclear Information System (INIS)

    Cox, N. D.; Miller, C. F.

    1985-01-01

    1 - Description of problem or function: SOERP computes second-order error propagation equations for the first four moments of a function of independently distributed random variables. SOERP was written for a rigorous second-order error propagation of any function which may be expanded in a multivariable Taylor series, the input variables being independently distributed. The required input consists of numbers directly related to the partial derivatives of the function, evaluated at the nominal values of the input variables and the central moments of the input variables from the second through the eighth. 2 - Method of solution: The development of equations for computing the propagation of errors begins by expressing the function of random variables in a multivariable Taylor series expansion. The Taylor series expansion is then truncated, and statistical operations are applied to the series in order to obtain equations for the moments (about the origin) of the distribution of the computed value. If the Taylor series is truncated after powers of two, the procedure produces second-order error propagation equations. 3 - Restrictions on the complexity of the problem: The maximum number of component variables allowed is 30. The IBM version will only process one set of input data per run
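
    The method of solution can be sketched for a two-variable function with independent Gaussian inputs. This is a minimal sketch of second-order Taylor-series moment propagation, not SOERP itself (which handles general central moments up to the eighth and the first four output moments); the function and all numbers are illustrative.

```python
import numpy as np

# f(x, y) = x * exp(y), with independent Gaussian inputs (illustrative)
mx, sx = 2.0, 0.1
my, sy = 0.5, 0.05

# Partial derivatives at the nominal point (fxx = 0 for this f)
fx, fy = np.exp(my), mx * np.exp(my)
fyy, fxy = mx * np.exp(my), np.exp(my)

# Second-order Taylor propagation. For Gaussian inputs the odd central
# moments vanish and E[d^4] = 3*sigma^4, which gives:
mean2 = mx * np.exp(my) + 0.5 * fyy * sy**2
var2 = (fx**2 * sx**2 + fy**2 * sy**2
        + 0.5 * fyy**2 * sy**4 + fxy**2 * sx**2 * sy**2)

# Monte Carlo check of the propagated moments
rng = np.random.default_rng(5)
xs = rng.normal(mx, sx, 200_000)
ys = rng.normal(my, sy, 200_000)
fs = xs * np.exp(ys)
```

    Truncating the Taylor series after first-order terms would drop the 0.5·fyy·sy² mean shift and the higher-order variance terms, which is exactly the difference between first- and second-order error propagation.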

  8. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, which depends on its MSE loss encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with some other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
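
    The AMBTC baseline that the adaptive bit-plane step builds on fits in a few lines. A generic sketch (not the authors' implementation): each block is reduced to a 1-bit plane plus a low and a high reconstruction level.

```python
import numpy as np

def ambtc_encode(block):
    """AMBTC: one bit per pixel plus a low and a high reconstruction level."""
    mean = block.mean()
    bitmap = block >= mean
    hi = block[bitmap].mean()
    # Guard the degenerate all-equal block, where the low group is empty
    lo = block[~bitmap].mean() if (~bitmap).any() else hi
    return bitmap, lo, hi

def ambtc_decode(bitmap, lo, hi):
    return np.where(bitmap, hi, lo)

rng = np.random.default_rng(3)
block = rng.integers(0, 256, (4, 4)).astype(float)
bitmap, lo, hi = ambtc_encode(block)
recon = ambtc_decode(bitmap, lo, hi)
# A 4x4 block is stored as 16 bits plus two levels instead of 16 bytes,
# and AMBTC preserves the block mean exactly.
```

    The paper's quadtree variant splits texture-rich blocks further and spends extra bit planes only where this per-block MSE loss is large.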

  9. CLIM : A cross-level workload-aware timing error prediction model for functional units

    NARCIS (Netherlands)

    Jiao, Xun; Rahimi, Abbas; Jiang, Yu; Wang, Jianguo; Fatemi, Hamed; De Gyvez, Jose Pineda; Gupta, Rajesh K.

    2018-01-01

    Timing errors that are caused by the timing violations of sensitized circuit paths, have emerged as an important threat to the reliability of synchronous digital circuits. To protect circuits from these timing errors, designers typically use a conservative timing margin, which leads to operational

  10. New Schemes for Positive Real Truncation

    Directory of Open Access Journals (Sweden)

    Kari Unneland

    2007-07-01

    Full Text Available Model reduction, based on balanced truncation, of stable and of positive real systems is considered. An overview of some existing techniques is given: Lyapunov balancing and stochastic balancing, the latter of which includes Riccati balancing. A novel scheme for positive real balanced truncation is then proposed, which combines the existing Lyapunov and Riccati balancing. Using Riccati balancing, the solutions of two Riccati equations are needed to obtain positive real reduced-order systems. For the suggested method, only one Lyapunov equation and one Riccati equation are solved in order to obtain positive real reduced-order systems, which is less computationally demanding. Further, it is shown that in order to get positive real reduced-order systems, only one Riccati equation needs to be solved. Finally, this is used to obtain positive real frequency-weighted balanced truncation.
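    The Lyapunov-balancing building block referred to above can be sketched with dense linear algebra. The code below is a generic square-root balanced truncation for a small stable system dx/dt = Ax + Bu, y = Cx; it is not the positive real scheme of the paper, and the Kronecker-based Lyapunov solver is only suitable for small dense examples:

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 via a Kronecker-product linear system
    (dense; only suitable for small examples)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    X = np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)
    return 0.5 * (X + X.T)                  # symmetrize against roundoff

def balanced_truncation(A, B, C, r):
    """Square-root Lyapunov balanced truncation to order r."""
    P = lyap(A, B @ B.T)                    # controllability Gramian
    Q = lyap(A.T, C.T @ C)                  # observability Gramian
    Lc, Lo = np.linalg.cholesky(P), np.linalg.cholesky(Q)
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)     # s = Hankel singular values
    T = Lc @ Vt.T @ np.diag(s**-0.5)        # balancing transformation
    Ti = np.diag(s**-0.5) @ U.T @ Lo.T
    return (Ti @ A @ T)[:r, :r], (Ti @ B)[:r], (C @ T)[:, :r], s

# Small stable example system
A = -np.diag([1.0, 2.0, 5.0, 10.0])
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

    The balanced-truncation error bound ||G - G_r||_inf <= 2 * sum of the discarded Hankel singular values then bounds, for instance, the DC-gain mismatch of the reduced model.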

  11. Prediction of the moments in advection-diffusion lattice Boltzmann method. I. Truncation dispersion, skewness, and kurtosis

    Science.gov (United States)

    Ginzburg, Irina

    2017-01-01

    established that the truncation errors in the three transport coefficients kT, Sk, and Ku decay with second-order accuracy. While the physical values of the three transport coefficients are set by the Péclet number, their truncation corrections additionally depend on the two adjustable relaxation rates and the two adjustable equilibrium weight families, which independently determine the convective and diffusive discretization stencils. We identify flow- and dimension-independent optimal strategies for the adjustable parameters and confront them with stability requirements. Through specific choices of the two relaxation rates and weights, we expect our results to be directly applicable to forward-time central-difference and leap-frog central-convective Du Fort-Frankel-diffusion schemes. In a straight channel, a quasi-exact validation of the truncation predictions through the numerical moments becomes possible thanks to the specular-forward no-flux boundary rule. In the staircase description of a cylindrical capillary, we account for the spurious boundary-layer diffusion and dispersion caused by the tangential constraint of the bounce-back no-flux boundary rule.

  12. Measuring a Truncated Disk in Aquila X-1

    Science.gov (United States)

    King, Ashley L.; Tomsick, John A.; Miller, Jon M.; Chenevez, Jerome; Barret, Didier; Boggs, Steven E.; Chakrabarty, Deepto; Christensen, Finn E.; Craig, William W.; Feurst, Felix

    2016-01-01

    We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner radius of 15 ± 3 R_G. The disk is likely truncated by either the boundary layer and/or a magnetic field. Associating the truncated inner disk with pressure from a magnetic field gives an upper limit of B < 5 ± 2 × 10^8 G. Although the radius is truncated far from the stellar surface, material is still reaching the neutron star surface, as evidenced by the X-ray burst present in the NuSTAR observation.

  13. Evolution of truncated moments of singlet parton distributions

    International Nuclear Information System (INIS)

    Forte, S.; Magnea, L.; Piccione, A.; Ridolfi, G.

    2001-01-01

    We define truncated Mellin moments of parton distributions by restricting the integration range over the Bjorken variable to the experimentally accessible subset x_0 ≤ x ≤ 1 of the allowed kinematic range 0 ≤ x ≤ 1. We derive the evolution equations satisfied by truncated moments in the general (singlet) case in terms of an infinite triangular matrix of anomalous dimensions which couple each truncated moment to all higher moments with orders differing by integers. We show that the evolution of any moment can be determined to arbitrarily good accuracy by truncating the system of coupled moments to a sufficiently large but finite size, and show how the equations can be solved in a way suitable for numerical applications. We discuss in detail the accuracy of the method in view of applications to precision phenomenology.

  14. Varying coefficient subdistribution regression for left-truncated semi-competing risks data.

    Science.gov (United States)

    Li, Ruosha; Peng, Limin

    2014-10-01

    Semi-competing risks data frequently arise in biomedical studies when time to a disease landmark event is subject to dependent censoring by death, the observation of which however is not precluded by the occurrence of the landmark event. In observational studies, the analysis of such data can be further complicated by left truncation. In this work, we study a varying coefficient subdistribution regression model for left-truncated semi-competing risks data. Our method appropriately accounts for the specific truncation and censoring features of the data and, moreover, has the flexibility to accommodate potentially varying covariate effects. The proposed method can be easily implemented, and the resulting estimators are shown to have nice asymptotic properties. We also present inference procedures for the covariate effects, such as Kolmogorov-Smirnov-type and Cramér-von Mises-type hypothesis tests. Simulation studies and an application to the Denmark diabetes registry demonstrate good finite-sample performance and the practical utility of the proposed method.

  15. The Dynamics of Truncated Black Hole Accretion Disks. I. Viscous Hydrodynamic Case

    Energy Technology Data Exchange (ETDEWEB)

    Hogg, J. Drew; Reynolds, Christopher S. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States)

    2017-07-10

    Truncated accretion disks are commonly invoked to explain the spectro-temporal variability in accreting black holes in both small systems, i.e., state transitions in galactic black hole binaries (GBHBs), and large systems, i.e., low-luminosity active galactic nuclei (LLAGNs). In the canonical truncated disk model of moderately low accretion rate systems, gas in the inner region of the accretion disk occupies a hot, radiatively inefficient phase, which leads to a geometrically thick disk, while the gas in the outer region occupies a cooler, radiatively efficient phase that resides in the standard geometrically thin disk. Observationally, there is strong empirical evidence to support this phenomenological model, but a detailed understanding of the dynamics of truncated disks is lacking. We present a well-resolved viscous, hydrodynamic simulation that uses an ad hoc cooling prescription to drive a thermal instability and, hence, produce the first sustained truncated accretion disk. With this simulation, we perform a study of the dynamics, angular momentum transport, and energetics of a truncated disk. We find that the time variability introduced by the quasi-periodic transition of gas from efficient cooling to inefficient cooling impacts the evolution of the simulated disk. A consequence of the thermal instability is that an outflow is launched from the hot/cold gas interface, which drives large, sub-Keplerian convective cells into the disk atmosphere. The convective cells introduce a viscous θ − ϕ stress that is less than the generic r − ϕ viscous stress component, but greatly influences the evolution of the disk. In the truncated disk, we find that the bulk of the accreted gas is in the hot phase.

  16. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  17. Phase retrieval via incremental truncated amplitude flow algorithm

    Science.gov (United States)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF and TAF algorithms, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, and improves performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vectors and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verify the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it obtains a higher success rate and faster convergence than other algorithms. In particular, for noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from a number of magnitude measurements about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length). It usually converges to the optimal solution within 20 iterations, far fewer than state-of-the-art algorithms require.
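    The flavor of the truncated amplitude flow family is easy to convey in code. The loop below is a plain TAF-style iteration for real Gaussian measurements, with a simple spectral initialization standing in for the orthogonality-promoting one; the step size and truncation threshold are illustrative, and the incremental (ITAF) refinements of the paper are not included:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 200
x = rng.standard_normal(n)                 # unknown signal
A = rng.standard_normal((m, n))            # Gaussian sensing vectors
y = np.abs(A @ x)                          # magnitude-only measurements

# Spectral initialization: leading eigenvector of (1/m) sum y_i^2 a_i a_i^T,
# rescaled so that ||z|| matches the measurement energy.
Y = (A * (y**2)[:, None]).T @ A / m
_, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(np.mean(y**2))

def rel_err(z):
    return min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)

err0 = rel_err(z)
gamma, mu = 0.7, 0.8                       # truncation threshold, step size
for _ in range(2000):
    Az = A @ z
    keep = np.abs(Az) >= y / (1 + gamma)   # truncate unreliable gradient terms
    grad = A[keep].T @ (Az[keep] - y[keep] * np.sign(Az[keep])) / m
    z -= mu * grad
```

    The global sign ambiguity is intrinsic to magnitude-only measurements, so recovery is judged up to a sign flip.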

  18. Data and performance profiles applying an adaptive truncation criterion, within linesearch-based truncated Newton methods, in large scale nonconvex optimization

    Directory of Open Access Journals (Sweden)

    Andrea Caliciotti

    2018-04-01

    Full Text Available In this paper, we report data and experiments related to the research article entitled “An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization” by Caliciotti et al. [1]. In particular, in Caliciotti et al. [1], large scale unconstrained optimization problems are considered by applying linesearch-based truncated Newton methods. In this framework, a key point is the reduction of the number of inner iterations needed, at each outer iteration, to approximately solve the Newton equation. A novel adaptive truncation criterion is introduced in Caliciotti et al. [1] to this aim. Here, we report the details concerning numerical experiences over a commonly used test set, namely CUTEst (Gould et al., 2015 [2]). Moreover, comparisons are reported in terms of performance profiles (Dolan and Moré, 2002 [3]), adopting different parameter settings. Finally, our linesearch-based scheme is compared with a renowned trust region method, namely TRON (Lin and Moré, 1999 [4]).
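    For readers unfamiliar with the setting, a linesearch-based truncated Newton iteration can be sketched as follows. The inner conjugate-gradient solve is truncated by the classical residual-based forcing term (used here in place of the adaptive criterion of Caliciotti et al.), with a negative-curvature exit for the nonconvex case; the test problem and parameters are illustrative:

```python
import numpy as np

def f(x):                                   # Rosenbrock test function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    a, b = x
    return np.array([-2*(1 - a) - 400*a*(b - a**2), 200*(b - a**2)])

def hess(x):
    a, b = x
    return np.array([[2 - 400*(b - 3*a**2), -400*a],
                     [-400*a, 200.0]])

def truncated_cg(H, g, eta):
    """CG on H d = -g, stopped when ||r|| <= eta * ||g|| (the truncation
    rule) or when negative curvature is met."""
    d, r = np.zeros_like(g), -g.copy()
    p = r.copy()
    for _ in range(10 * len(g)):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 1e-12 * (p @ p):         # nonconvexity: bail out
            return d if d.any() else -g
        alpha = (r @ r) / curv
        d = d + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= eta * np.linalg.norm(g):
            return d
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

x = np.array([-1.2, 1.0])
for _ in range(200):                        # outer (Newton) iterations
    g = grad(x)
    gn = np.linalg.norm(g)
    if gn < 1e-8:
        break
    eta = min(0.5, np.sqrt(gn))             # forcing term: looser solves far away
    d = truncated_cg(hess(x), g, eta)
    if g @ d >= 0:                          # safeguard: keep a descent direction
        d = -g
    t = 1.0                                 # Armijo backtracking linesearch
    for _ in range(60):
        if f(x + t * d) <= f(x) + 1e-4 * t * (g @ d):
            break
        t *= 0.5
    x = x + t * d
```

    Far from a solution, the loose inner tolerance keeps the number of CG iterations (and Hessian-vector products) low; near the solution, eta shrinks with the gradient norm, restoring fast local convergence. The adaptive criterion in the article plays the same role with a more refined rule.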

  19. Conditional truncated plurigaussian simulation; Simulacao plurigaussiana truncada com condicionamento

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Vitor Hugo

    1997-12-01

    The goal of this work was the development of an algorithm for Truncated Plurigaussian Stochastic Simulation and its validation on a complex geologic model. The reservoir data come from the Aux Vases Zone at Rural Hill Field in Illinois, USA, and from the 2D geological interpretation described by WEIMER et al. (1982); three sets of samples with different grid densities were taken. These sets were used to condition the simulation and to refine the estimates of the non-stationary matrix of facies proportions used to truncate the Gaussian random functions (RF). The Truncated Plurigaussian Model is an extension of the Truncated Gaussian Model (TG). In this new model it is possible to use several facies with different spatial structures, combined with the simplicity of TG. The geological interpretation used as a validation model was chosen because it shows a set of NW/SE elongated tidal channels cutting the NE/SW shoreline deposits, interleaved by impermeable facies. These characteristics of the spatial structures of sedimentary facies served to evaluate the simulation model. Two independent Gaussian RFs were used, with an 'erosive model' as the truncation strategy. Non-conditional simulations were also performed, using linearly combined Gaussian RFs with varying correlation coefficients. The influence of parameters such as the number of Gaussian RFs, the correlation coefficient, and the truncation strategy on the outcome of the simulation was analyzed, as well as the physical meaning of these parameters from a geological point of view. The theoretical model was presented step by step with an example, showing how to construct an algorithm to simulate with the Truncated Plurigaussian Model. The conclusion of this work is that, even with a simple algorithm for the Conditional Truncated Plurigaussian and a complex geological model, it is possible to obtain a useful product. (author)
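    The essence of the (unconditional) truncated plurigaussian idea can be shown in a few lines: simulate two independent Gaussian random fields and map them to facies through a truncation rule. Everything below is illustrative (moving-average smoothing instead of a proper covariance model, made-up thresholds); the "erosive" rule mimics channels cutting earlier deposits:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_field(shape, width=5):
    """White noise smoothed by a moving average, then standardized.
    (A crude stand-in for simulation from a geostatistical covariance.)"""
    z = rng.standard_normal(shape)
    k = np.ones(width) / width
    for axis in (0, 1):
        z = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"),
                                axis, z)
    return (z - z.mean()) / z.std()

z1 = gaussian_field((64, 64))   # controls shoreline vs. background deposits
z2 = gaussian_field((64, 64))   # controls the eroding channel facies

# Erosive truncation rule with illustrative thresholds: facies 2 (channel)
# overrides whatever z1 dictates, mimicking erosion of earlier deposits.
t1, t2 = -0.3, 1.0
facies = np.where(z2 > t2, 2, np.where(z1 < t1, 0, 1))
```

    Conditioning on well data, and estimating the non-stationary proportion matrix that sets locally varying thresholds, are the additional ingredients the thesis develops on top of this primitive.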

  20. Solving Schwinger-Dyson equations by truncation in zero-dimensional scalar quantum field theory

    International Nuclear Information System (INIS)

    Okopinska, A.

    1991-01-01

    Three sets of Schwinger-Dyson equations, for all Green's functions, for connected Green's functions, and for proper vertices, are considered in scalar quantum field theory. A truncation scheme applied to the three sets gives three different approximation series for Green's functions. For the theory in zero-dimensional space-time the results for respective two-point Green's functions are compared with the exact value calculated numerically. The best convergence of the truncation scheme is obtained for the case of proper vertices
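    The appeal of the zero-dimensional theory is that the exact answer is an ordinary integral, so any truncation scheme can be benchmarked directly. The sketch below computes the exact two-point function for the action S(x) = x²/2 + g x⁴/4! by quadrature and compares it with the lowest-order perturbative truncation ⟨x²⟩ ≈ 1 − g/2 (the coupling value is illustrative; this reproduces the benchmarking idea, not the Schwinger-Dyson truncation scheme itself):

```python
import numpy as np

def exact_G2(g, L=10.0, N=20001):
    """Exact <x^2> for the zero-dimensional theory with action
    S(x) = x^2/2 + g*x^4/4!, by simple numerical quadrature."""
    x = np.linspace(-L, L, N)
    w = np.exp(-x**2 / 2 - g * x**4 / 24)   # Boltzmann weight e^{-S}
    return (x**2 * w).sum() / w.sum()       # grid spacing cancels in the ratio

g = 0.4
G2 = exact_G2(g)
G2_pert = 1 - g / 2                          # first-order truncation
```

    At g = 0.4 the first-order truncation visibly undershoots the exact value; comparisons of this kind against the numerically exact two-point function are what single out the proper-vertex truncation as the best-converging scheme in the paper.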

  1. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid

  2. Truncation in diffraction pattern analysis. Pt. 1

    International Nuclear Information System (INIS)

    Delhez, R.; Keijser, T.H. de; Mittemeijer, E.J.; Langford, J.I.

    1986-01-01

    An evaluation of the concept of a line profile is provoked by truncation of the range of intensity measurement in practice. The measured truncated line profile can be considered either as part of the total intensity distribution which peaks at or near the reciprocal-lattice points (approach 1), or as part of a component line profile which is confined to a single reciprocal-lattice point (approach 2). Some false conceptions in line-profile analysis can then be avoided and recipes can be developed for the extrapolation of the tails of the truncated line profile. Fourier analysis of line profiles, according to the first approach, implies a Fourier series development of the total intensity distribution defined within [l - 1/2, l + 1/2] (l indicates the node considered in reciprocal space); the second approach implies a Fourier transformation of the component line profile defined within (-∞, +∞). Exact descriptions of size broadening are provided by both approaches, whereas combined size and strain broadening can only be evaluated adequately within the first approach. Straightforward methods are given for obtaining truncation-corrected values for the average crystallite size. (orig.)

  3. First photoelectron timing error evaluation of a new scintillation detector model

    International Nuclear Information System (INIS)

    Petrick, N.; Clinthorne, N.H.; Rogers, W.L.; Hero, A.O. III

    1991-01-01

    In this paper, a general timing system model developed for a scintillation detector is experimentally evaluated. The detector consists of a scintillator and a photodetector, such as a photomultiplier tube or an avalanche photodiode. The model uses a Poisson point process to characterize the light output from the scintillator. This timing model was used to simulate a BGO scintillator with a Burle 8575 PMT using first-photoelectron timing detection. Evaluation of the model consisted of comparing the RMS error from the simulations with the error from the actual detector system. The authors find that the general model compares well with the actual error results for the BGO/8575 PMT detector. In addition, the optimal threshold is found to depend on the energy of the scintillation. In the low-energy part of the spectrum, the authors find a low threshold is optimal, while for higher-energy pulses the optimal threshold increases

  4. First photoelectron timing error evaluation of a new scintillation detector model

    International Nuclear Information System (INIS)

    Petrick, N.; Clinthorne, N.H.; Rogers, W.L.; Hero, A.O. III

    1990-01-01

    In this paper, a general timing system model for a scintillation detector that was developed is experimentally evaluated. The detector consists of a scintillator and a photodetector, such as a photomultiplier tube or an avalanche photodiode. The model uses a Poisson point process to characterize the light output from the scintillator. This timing model was used to simulate a BGO scintillator with a Burle 8575 PMT using first-photoelectron timing detection. Evaluation of the model consisted of comparing the RMS error from the simulations with the error from the actual detector system. We find that the general model compares well with the actual error results for the BGO/8575 PMT detector. In addition, the optimal threshold is found to depend on the energy of the scintillation. In the low-energy part of the spectrum, we find a low threshold is optimal, while for higher-energy pulses the optimal threshold increases

  5. Investigation of propagation dynamics of truncated vector vortex beams.

    Science.gov (United States)

    Srinivas, P; Perumangatt, C; Lal, Nijil; Singh, R P; Srinivasan, B

    2018-06-01

    In this Letter, we experimentally investigate the propagation dynamics of truncated vector vortex beams generated using a Sagnac interferometer. Upon focusing, the truncated vector vortex beam is found to regain its original intensity structure within the Rayleigh range. In order to explain such behavior, the propagation dynamics of a truncated vector vortex beam is simulated by decomposing it into the sum of integral charge beams with associated complex weights. We also show that the polarization of the truncated composite vector vortex beam is preserved all along the propagation axis. The experimental observations are consistent with theoretical predictions based on previous literature and are in good agreement with our simulation results. The results hold importance as vector vortex modes are eigenmodes of the optical fiber.

  6. On an adaptive time stepping strategy for solving nonlinear diffusion equations

    International Nuclear Information System (INIS)

    Chen, K.; Baines, M.J.; Sweby, P.K.

    1993-01-01

    A new time step selection procedure is proposed for solving nonlinear diffusion equations. It has been implemented in the ASWR finite element code of Lorenz and Svoboda [10] for 2D semiconductor process modelling diffusion equations. The strategy is based on equidistributing the local truncation errors of the numerical scheme. The use of B-splines for interpolation (as well as for the trial space) results in a banded and diagonally dominant matrix. The approximate inverse of such a matrix can be provided to a high degree of accuracy by another banded matrix, which in turn can be used to work out the approximate finite difference scheme corresponding to the ASWR finite element method, and further to calculate estimates of the local truncation errors of the numerical scheme. Numerical experiments on six full simulation problems arising in semiconductor process modelling have been carried out. Results show that the proposed strategy is more efficient and better conserves total mass. 18 refs., 6 figs., 2 tabs
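    The equidistribution principle can be demonstrated without any of the B-spline machinery: estimate the local truncation error of a step by step doubling, then accept or resize the time step so the estimate stays near a tolerance. The sketch below does this for explicit Euler on a 1D nonlinear diffusion equation with zero-flux boundaries (the diffusivity, tolerance and step-control constants are illustrative and unrelated to the ASWR code):

```python
import numpy as np

def step(u, dt, D=lambda u: 0.1 + u**2):
    """One explicit Euler step of u_t = (D(u) u_x)_x with zero-flux ends."""
    dx = 1.0 / (len(u) - 1)
    Dm = 0.5 * (D(u[1:]) + D(u[:-1]))       # diffusivity at cell faces
    flux = Dm * np.diff(u) / dx
    du = np.zeros_like(u)
    du[1:-1] = np.diff(flux) / dx
    du[0], du[-1] = flux[0] / dx, -flux[-1] / dx
    return u + dt * du

u = np.exp(-50 * (np.linspace(0, 1, 101) - 0.5)**2)
mass0 = u.sum()
t, t_end, dt, tol = 0.0, 1e-3, 1e-6, 1e-6
while t < t_end:
    big = step(u, dt)                        # one step of size dt
    small = step(step(u, dt / 2), dt / 2)    # two half steps
    err = np.max(np.abs(big - small))        # local truncation error estimate
    if err <= tol:
        u, t = small, t + dt                 # accept the more accurate result
    # Resize dt toward an equidistributed error (Euler: local error ~ dt^2),
    # with a growth cap and a cap at the remaining time.
    dt = min(0.9 * dt * np.sqrt(tol / max(err, 1e-30)), 2 * dt,
             t_end - t + 1e-12)
```

    The flux form makes each step conserve the discrete mass exactly, so mass conservation is a convenient check on the implementation, mirroring the mass-conservation comparison reported in the paper.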

  7. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    Science.gov (United States)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several-fold, but also allows us to set modelling parameters such as the time step length, grid interval and P-wave speed flexibly. It is demonstrated that the proposed method can almost eliminate temporal dispersion errors during long-term simulations regardless of the heterogeneity of the media and the time step length. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical use for imaging algorithms or inverse problems.
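    The lowest-order member of this staggered family is the classical leapfrog velocity-pressure scheme, which the paper generalizes to higher orders of accuracy in time. A minimal 1D sketch with unit density and wave speed (grid size, CFL number and source pulse are illustrative):

```python
import numpy as np

# First-order acoustic system p_t = -v_x, v_t = -p_x (rho = K = c = 1),
# with p and v staggered by half a cell in space and half a step in time.
nx = 400
dx = 1.0 / (nx - 1)
c = 1.0
dt = 0.5 * dx / c                      # CFL number 0.5 (stable for <= 1)
xg = np.linspace(0.0, 1.0, nx)
p = np.exp(-2000 * (xg - 0.3)**2)      # initial pressure pulse
v = np.zeros(nx + 1)                   # face velocities; rigid walls at ends

for _ in range(200):
    v[1:-1] -= dt / dx * (p[1:] - p[:-1])   # advance v to t + dt/2
    p -= dt / dx * (v[1:] - v[:-1])         # advance p to t + dt
```

    The staggering cancels the leading odd-order truncation terms and gives second-order accuracy; the integrator in the paper appends higher-order correction terms to this pattern to suppress the temporal dispersion that accumulates in long simulations.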

  8. Wigner distribution function of circularly truncated light beams

    NARCIS (Netherlands)

    Bastiaans, M.J.; Nijhawan, O.P.; Gupta, A.K.; Musla, A.K.; Singh, Kehar

    1998-01-01

    Truncating a light beam is expressed as a convolution of its Wigner distribution function and the WDF of the truncating aperture. The WDF of a circular aperture is derived and an approximate expression - which is exact in the space and the spatial-frequency origin and whose integral over the spatial

  9. Reducing Approximation Error in the Fourier Flexible Functional Form

    Directory of Open Access Journals (Sweden)

    Tristan D. Skolrud

    2017-12-01

    Full Text Available The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.

  10. Indirect inference with time series observed with error

    DEFF Research Database (Denmark)

    Rossi, Eduardo; Santucci de Magistris, Paolo

    We analyze the properties of the indirect inference estimator when the observed series are contaminated by measurement error. We show that the indirect inference estimates are asymptotically biased when the nuisance parameters of the measurement error distribution are neglected in the indirect estimation. We propose to solve this inconsistency by jointly estimating the nuisance and the structural parameters. Under standard assumptions, this estimator is consistent and asymptotically normal. A condition for the identification of ARMA plus noise is obtained. The proposed methodology is used to estimate the parameters of continuous-time stochastic volatility models with auxiliary specifications based on realized volatility measures. Monte Carlo simulations show the bias reduction of the indirect estimates obtained when the microstructure noise is explicitly modeled. Finally, an empirical...

  11. A SUZAKU OBSERVATION OF NGC 4593: ILLUMINATING THE TRUNCATED DISK

    International Nuclear Information System (INIS)

    Markowitz, A. G.; Reeves, J. N.

    2009-01-01

    We report results from a 2007 Suzaku observation of the Seyfert 1 AGN NGC 4593. The narrow Fe Kα emission line has a FWHM width of ~4000 km s^-1, indicating emission from ≳5000 R_g. There is no evidence for a relativistically broadened Fe K line, consistent with the presence of a radiatively efficient outer disk which is truncated or transitions to an interior radiatively inefficient flow. The Suzaku observation caught the source in a low-flux state; comparison to a 2002 XMM-Newton observation indicates that the hard X-ray flux decreased by a factor of 3.6, while the Fe Kα line intensity and width σ each roughly halved. Two model-dependent explanations for the changes in the Fe Kα line profile are explored. In one, the Fe Kα line width has decreased from ~10,000 to ~4000 km s^-1 from 2002 to 2007, suggesting that the thin-disk truncation/transition radius has increased from 1000-2000 to ≳5000 R_g. However, there are indications from other compact accreting systems that such truncation radii tend to be associated only with accretion rates relative to Eddington much lower than that of NGC 4593. In the second model, the line profile in the XMM-Newton observation consists of a time-invariant narrow component plus a broad component originating from the inner part of the truncated disk (~300 R_g) which has responded to the drop in continuum flux. The Compton reflection component strength R is ~1.1, consistent with the measured Fe Kα line total equivalent width with an Fe abundance 1.7 times the solar value. The modest soft excess, modeled well by either thermal bremsstrahlung emission or by Comptonization of soft seed photons in an optically thin plasma, has fallen by a factor of ~20 from 2002 to 2007, ruling out emission from a region 5 lt-yr in size.

  12. Error Analysis Of Clock Time (T), Declination (δ) And Latitude ...

    African Journals Online (AJOL)

    ), latitude (Φ), longitude (λ) and azimuth (A); which are aimed at establishing fixed positions and orientations of survey points and lines on the earth surface. The paper attempts the analysis of the individual and combined effects of error in time ...

  13. Neural Network Based Real-time Correction of Transducer Dynamic Errors

    Science.gov (United States)

    Roj, J.

    2013-12-01

    In order to carry out real-time dynamic error correction of transducers described by a linear differential equation, a novel recurrent neural network was developed. The network structure is based on solving this equation with respect to the input quantity using the state variables. It is shown that such real-time correction can be carried out using simple linear perceptrons. Owing to the use of a neural technique, knowledge of the dynamic parameters of the transducer is not necessary. Theoretical considerations are illustrated by the results of simulation studies performed for a modeled second-order transducer. The most important properties of the neural dynamic error correction are discussed, with emphasis on the fundamental advantages and disadvantages.

  14. Flexible scheme to truncate the hierarchy of pure states.

    Science.gov (United States)

    Zhang, P-P; Bentley, C D B; Eisfeld, A

    2018-04-07

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.

  15. Flexible scheme to truncate the hierarchy of pure states

    Science.gov (United States)

    Zhang, P.-P.; Bentley, C. D. B.; Eisfeld, A.

    2018-04-01

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.

  16. MO-FG-202-07: Real-Time EPID-Based Detection Metric For VMAT Delivery Errors

    International Nuclear Information System (INIS)

    Passarge, M; Fix, M K; Manser, P; Stampanoni, M F M; Siebers, J V

    2016-01-01

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high-dose-gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect the introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real time and indicate the error

  17. MO-FG-202-07: Real-Time EPID-Based Detection Metric For VMAT Delivery Errors

    Energy Technology Data Exchange (ETDEWEB)

    Passarge, M; Fix, M K; Manser, P [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern (Switzerland); Stampanoni, M F M [Institute for Biomedical Engineering, ETH Zurich, and PSI, Villigen (Switzerland); Siebers, J V [Department of Radiation Oncology, University of Virginia, Charlottesville, VA (United States)

    2016-06-15

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source.
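The standard gamma evaluation (3%, 3 mm) used as one of the detection methods in the record above can be illustrated with a simplified 1D sketch. This is a hypothetical illustration with invented profiles and the helper name `gamma_1d`, not the authors' EPID implementation, which operates on 2D EPID frames per 2° gantry interval:

```python
import numpy as np

def gamma_1d(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1D global gamma index (3%, 3 mm) sketch.

    ref, meas: dose profiles sampled at the same positions x (mm).
    The dose criterion is taken relative to the reference maximum (global gamma).
    """
    dose_norm = dose_tol * ref.max()
    gamma = np.empty(len(meas), dtype=float)
    for i, (xi, mi) in enumerate(zip(x, meas)):
        # generalized distance to every reference point; gamma is the minimum
        dist2 = ((x - xi) / dist_tol) ** 2
        dose2 = ((ref - mi) / dose_norm) ** 2
        gamma[i] = np.sqrt((dist2 + dose2).min())
    return gamma

# toy example: identical profiles pass everywhere (gamma == 0)
x = np.linspace(0, 100, 101)          # positions in mm
ref = np.exp(-((x - 50) / 20) ** 2)   # invented reference profile
gamma = gamma_1d(ref, ref, x)
print(gamma.max() <= 1.0)  # → True (100% pass rate)
```

A point passes when its gamma value is at most 1; small spatial shifts within the 3 mm criterion keep gamma below 1 even where the dose difference alone would fail.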

  18. Clinical relevance of and risk factors associated with medication administration time errors

    NARCIS (Netherlands)

    Teunissen, R.; Bos, J.; Pot, H.; Pluim, M.; Kramers, C.

    2013-01-01

    PURPOSE: The clinical relevance of and risk factors associated with errors related to medication administration time were studied. METHODS: In this explorative study, 66 medication administration rounds were studied on two wards (surgery and neurology) of a hospital. Data on medication errors were

  19. A Time--Independent Born--Oppenheimer Approximation with Exponentially Accurate Error Estimates

    CERN Document Server

    Hagedorn, G A

    2004-01-01

    We consider a simple molecular-type quantum system in which the nuclei have one degree of freedom and the electrons have two levels. The Hamiltonian has the form \[ H(\epsilon)\ =\ -\,\frac{\epsilon^4}2\, \frac{\partial^2\phantom{i}}{\partial y^2}\ +\ h(y), \] where $h(y)$ is a $2\times 2$ real symmetric matrix. Near a local minimum of an electron level ${\cal E}(y)$ that is not at a level crossing, we construct quasimodes that are exponentially accurate in the square of the Born-Oppenheimer parameter $\epsilon$ by optimal truncation of the Rayleigh-Schrödinger series. That is, we construct $E_\epsilon$ and $\Psi_\epsilon$ such that $\|\Psi_\epsilon\|\,=\,O(1)$ and \[ \|\,(H(\epsilon)\,-\,E_\epsilon)\,\Psi_\epsilon\,\|\ \le\ C\,e^{-\Gamma/\epsilon^2} \] for some $C$, $\Gamma > 0$.

  20. Lethal mutants and truncated selection together solve a paradox of the origin of life.

    Directory of Open Access Journals (Sweden)

    David B Saakian

    BACKGROUND: Many attempts have been made to describe the origin of life, one of which is Eigen's cycle of autocatalytic reactions [Eigen M (1971) Naturwissenschaften 58, 465-523], in which primordial life molecules are replicated with limited accuracy through autocatalytic reactions. For successful evolution, the information carrier (either RNA or DNA or their precursor) must be transmitted to the next generation with a minimal number of misprints. In Eigen's theory, the maximum chain length that could be maintained is restricted to 100-1000 nucleotides, while for the most primitive genome the length is around 7000-20,000. This is the famous error catastrophe paradox. How to solve this puzzle is an interesting and important problem in the theory of the origin of life. METHODOLOGY/PRINCIPAL FINDINGS: We use methods of statistical physics to solve this paradox by carefully analyzing the implications of neutral and lethal mutants, and of truncated selection (i.e., fitness is zero beyond a certain Hamming distance from the master sequence), for the critical chain length. While neutral mutants play an important role in evolution, they do not provide a solution to the paradox. We have found that lethal mutants and truncated selection together can solve the error catastrophe paradox. There is a principal difference between the prebiotic molecule self-replication and proto-cell self-replication stages in the origin of life. CONCLUSIONS/SIGNIFICANCE: We have applied methods of statistical physics to make an important breakthrough in the molecular theory of the origin of life. Our results will inspire further studies on the molecular theory of the origin of life and biological evolution.

  1. Effects of errors on the dynamic aperture of the Advanced Photon Source storage ring

    International Nuclear Information System (INIS)

    Bizek, H.; Crosbie, E.; Lessner, E.; Teng, L.; Wirsbinski, J.

    1991-01-01

    The individual tolerance limits for alignment errors and magnet fabrication errors in the 7-GeV Advanced Photon Source storage ring are determined by computer-simulated tracking. Limits are established for dipole strength and roll errors, quadrupole strength and alignment errors, sextupole strength and alignment errors, as well as higher order multipole strengths in dipole and quadrupole magnets. The effects of girder misalignments on the dynamic aperture are also studied. Computer simulations are obtained with the tracking program RACETRACK, with errors introduced from a user-defined Gaussian distribution, truncated at ±5 standard deviation units. For each error, the average and rms spread of the stable amplitudes are determined for ten distinct machines, defined as ten different seeds to the random distribution, and for five distinct initial directions of the tracking particle. 4 refs., 4 figs., 1 tab
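Drawing each error from a Gaussian distribution truncated at a fixed number of standard deviations, as the tracking study above does, can be sketched with simple rejection sampling. The helper `truncated_gauss` and the numeric values are illustrative assumptions, not RACETRACK code:

```python
import random

def truncated_gauss(mu, sigma, cut=5.0, rng=random):
    """Draw from N(mu, sigma^2) truncated at +/- cut standard deviations.

    Rejection sampling; trivial here, since |z| > 5 is astronomically rare,
    so almost every proposal is accepted.
    """
    while True:
        z = rng.gauss(0.0, 1.0)
        if abs(z) <= cut:
            return mu + sigma * z

# e.g. quadrupole alignment errors with an (assumed) 0.1 mm rms, cut at 5 sigma
random.seed(1)
samples = [truncated_gauss(0.0, 1e-4, cut=5.0) for _ in range(10000)]
print(max(abs(s) for s in samples) <= 5e-4)  # → True: all within +/- 5 sigma
```

The truncation guarantees that no single simulated machine receives an unphysically large error, while the bulk of the distribution is unchanged.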

  2. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

    International Nuclear Information System (INIS)

    Lychak, Oleh V; Holyns’kiy, Ivan S

    2016-01-01

    The use of the Williams' series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of random errors in the Williams' series parameters obtained from measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams' series, such that the parameters are derived with minimal errors, is also proposed. The method was used to evaluate the Williams' parameters obtained from data measured by the digital image correlation technique on a three-point bending specimen.

  3. Digestion proteomics: tracking lactoferrin truncation and peptide release during simulated gastric digestion.

    Science.gov (United States)

    Grosvenor, Anita J; Haigh, Brendan J; Dyer, Jolon M

    2014-11-01

    The extent to which nutritional and functional benefit is derived from proteins in food is related to their breakdown and digestion in the body after consumption. Further, detailed information about food protein truncation during digestion is critical to understanding and optimising the availability of bioactives, to controlling and limiting allergen release, and to minimising or monitoring the effects of processing and food preparation. However, tracking the complex array of products formed during the digestion of proteins is not easily accomplished using classical proteomics. Here we present a novel proteomic approach that uses isobaric labelling to map and track protein truncation and peptide release during simulated gastric digestion, using bovine lactoferrin as a model food protein. The relative abundance of related peptides was tracked throughout a digestion time course, and the effect of pasteurisation on peptide release assessed. The new approach to food digestion proteomics developed here therefore appears to be highly suitable not only for tracking the truncation and relative abundance of released peptides during gastric digestion, but also for determining the effects of protein modification on digestibility and potential bioavailability.

  4. Identifying afterloading PDR and HDR brachytherapy errors using real-time fiber-coupled Al2O3:C dosimetry and a novel statistical error decision criterion

    International Nuclear Information System (INIS)

    Kertzscher, Gustavo; Andersen, Claus E.; Siebert, Frank-Andre; Nielsen, Soren Kynde; Lindegaard, Jacob C.; Tanderup, Kari

    2011-01-01

    Background and purpose: The feasibility of a real-time in vivo dosimeter to detect errors has previously been demonstrated. The purpose of this study was to: (1) quantify the sensitivity of the dosimeter to detect imposed treatment errors under well controlled and clinically relevant experimental conditions, and (2) test a new statistical error decision concept based on full uncertainty analysis. Materials and methods: Phantom studies of two gynecological cancer PDR and one prostate cancer HDR patient treatment plans were performed using tandem ring applicators or interstitial needles. Imposed treatment errors, including interchanged pairs of afterloader guide tubes and 2-20 mm source displacements, were monitored using a real-time fiber-coupled carbon-doped aluminum oxide (Al2O3:C) crystal dosimeter that was positioned in the reconstructed tumor region. The error detection capacity was evaluated at three dose levels: dwell position, source channel, and fraction. The error criterion incorporated the correlated source position uncertainties and other sources of uncertainty, and it was applied both for the specific phantom patient plans and for a general case (source-detector distance 5-90 mm and position uncertainty 1-4 mm). Results: Out of 20 interchanged guide tube errors, time-resolved analysis identified 17 while fraction level analysis identified two. Channel and fraction level comparisons could leave 10 mm dosimeter displacement errors unidentified. Dwell position dose rate comparisons correctly identified displacements ≥5 mm. Conclusion: This phantom study demonstrates that Al2O3:C real-time dosimetry can identify applicator displacements ≥5 mm and interchanged guide tube errors during PDR and HDR brachytherapy. The study demonstrates the shortcoming of a constant error criterion and the advantage of a statistical error criterion.

  5. Maximum nondiffracting propagation distance of aperture-truncated Airy beams

    Science.gov (United States)

    Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu

    2018-05-01

    Airy beams have attracted the attention of many researchers due to their non-diffracting, self-healing and transverse accelerating properties. A key issue in the study of Airy beams and their applications is how to evaluate their nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed under the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed based on their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distances of an aperture-truncated ideal Airy beam, an aperture-truncated exponentially decaying Airy beam and an exponentially decaying Airy beam. Results show that the formula can accurately evaluate the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. It can therefore guide the selection of appropriate parameters to generate Airy beams with long nondiffracting propagation distance, which have potential applications in laser weapons and optical communications.

  6. An iterative reconstruction from truncated projection data

    International Nuclear Information System (INIS)

    Anon.

    1985-01-01

    Various methods have been proposed for tomographic reconstruction from truncated projection data. In this paper, a reconstructive method is discussed which consists of iterations of filtered back-projection, reprojection and nonlinear processing. First, the method is constructed so that it converges to a fixed point. Then, to examine its effectiveness, comparisons are made by computer experiments with two existing reconstructive methods for truncated projection data, that is, the method of extrapolation based on the smoothness assumption followed by filtered back-projection, and modified additive ART.

  7. Real-time detection and elimination of nonorthogonality error in interference fringe processing

    International Nuclear Information System (INIS)

    Hu Haijiang; Zhang Fengdeng

    2011-01-01

    In the measurement system of interference fringes, the nonorthogonality error is a main error source that influences the precision and accuracy of the measurement system. The detection and elimination of this error has been an important goal. A novel method that uses only cross-zero detection and counting is proposed to detect and eliminate the nonorthogonality error in real time. Because it invokes no trigonometric or inverse trigonometric functions, the method can be realized simply in digital logic, and it can be widely used in bidirectional subdivision systems for Moiré fringes and other optical instruments.

  8. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(−m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ−1) ∂_x^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.

  9. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    Science.gov (United States)

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2018-04-15

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
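The SIMEX idea the abstract above extends — simulate additional error at increasing variance multipliers λ, then extrapolate the estimates back to λ = −1 — can be sketched in a toy covariate-error setting. This is illustrative only: the paper's method targets error in the event time of a Cox model, whereas `simex_slope` below is an invented helper for a simple linear slope:

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, lambdas=(0.0, 0.5, 1.0, 1.5, 2.0),
                B=200, seed=0):
    """Toy SIMEX sketch: correct an OLS slope attenuated by classical
    measurement error in x (known error SD sigma_u)."""
    rng = np.random.default_rng(seed)
    betas = []
    for lam in lambdas:
        if lam == 0.0:
            betas.append(np.polyfit(x_obs, y, 1)[0])
            continue
        # add extra error with variance lam * sigma_u^2, average over B sims
        b = [np.polyfit(x_obs + rng.normal(0, sigma_u * np.sqrt(lam),
                                           size=len(x_obs)), y, 1)[0]
             for _ in range(B)]
        betas.append(np.mean(b))
    # quadratic extrapolation of beta(lambda) back to lambda = -1
    coeffs = np.polyfit(lambdas, betas, 2)
    return np.polyval(coeffs, -1.0)

rng = np.random.default_rng(1)
x_true = rng.normal(0, 1, 2000)
y = 2.0 * x_true + rng.normal(0, 0.1, 2000)
x_obs = x_true + rng.normal(0, 0.5, 2000)   # classical error, sigma_u = 0.5

naive = np.polyfit(x_obs, y, 1)[0]          # attenuated toward 0 (~1.6)
corrected = simex_slope(x_obs, y, sigma_u=0.5)
```

The corrected slope lands much closer to the true value of 2 than the naive fit; the residual gap reflects the approximation error of the quadratic extrapolant.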

  10. Truncated Wigner dynamics and conservation laws

    Science.gov (United States)

    Drummond, Peter D.; Opanchuk, Bogdan

    2017-10-01

    Ultracold Bose gases can be used to experimentally test many-body theory predictions. Here we point out that both exact conservation laws and dynamical invariants exist in the topical case of the one-dimensional Bose gas, and these provide an important validation of methods. We show that the first four quantum conservation laws are exactly conserved in the approximate truncated Wigner approach to many-body quantum dynamics. Center-of-mass position variance is also exactly calculable. This is nearly exact in the truncated Wigner approximation, apart from small terms that vanish as N^(−3/2) as N → ∞ with fixed momentum cutoff. Examples of this are calculated in experimentally relevant, mesoscopic cases.

  11. The impact of a closed-loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before-and-after study.

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-08-01

    To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ² test). A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.

  12. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
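Regression calibration of the kind the abstract above extends can be sketched in a minimal linear toy model: replace the error-prone measurement w by an estimate of E[x | w] before fitting. This is an illustrative, assumption-laden sketch (known error variance, linear outcome model), not the authors' partial-likelihood versions for the Cox setting:

```python
import numpy as np

# Toy regression-calibration sketch with a known error SD sigma_u.
rng = np.random.default_rng(0)
n, sigma_u = 5000, 0.6
x = rng.normal(0, 1, n)                 # "true" mediator (unobserved)
w = x + rng.normal(0, sigma_u, n)       # observed, with classical error
y = 1.5 * x + rng.normal(0, 0.2, n)     # outcome (linear toy model)

# E[x | w] = mu_w + lam * (w - mu_w), with lam = var_x / (var_x + sigma_u^2),
# estimated here as (var_w - sigma_u^2) / var_w.
lam = (np.var(w) - sigma_u**2) / np.var(w)
x_hat = w.mean() + lam * (w - w.mean())

naive = np.polyfit(w, y, 1)[0]          # attenuated: ~1.5 * lam
calibrated = np.polyfit(x_hat, y, 1)[0] # close to the true 1.5
```

The naive fit is biased toward zero by the factor lam (~0.74 here), while regressing on the calibrated value recovers the true coefficient up to sampling noise.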

  13. Identifying and Correcting Timing Errors at Seismic Stations in and around Iran

    International Nuclear Information System (INIS)

    Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; Begnaud, Michael Lee

    2017-01-01

    A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms and the arrival times based on them may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.

  14. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    Science.gov (United States)

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  15. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea; Kyza, Irene; Nochetto, Ricardo H.

    2013-01-01

    We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our

  16. Supervised learning based model for predicting variability-induced timing errors

    NARCIS (Netherlands)

    Jiao, X.; Rahimi, A.; Narayanaswamy, B.; Fatemi, H.; Pineda de Gyvez, J.; Gupta, R.K.

    2015-01-01

    Circuit designers typically combat variations in hardware and workload by increasing conservative guardbanding that leads to operational inefficiency. Reducing this excessive guardband is highly desirable, but causes timing errors in synchronous circuits. We propose a methodology for supervised

  17. Inference for shared-frailty survival models with left-truncated data

    NARCIS (Netherlands)

    van den Berg, G.J.; Drepper, B.

    2016-01-01

    Shared-frailty survival models specify that systematic unobserved determinants of duration outcomes are identical within groups of individuals. We consider random-effects likelihood-based statistical inference if the duration data are subject to left-truncation. Such inference with left-truncated

  18. Truncated power control for improving TCP/IP performance over CDMA wireless links

    DEFF Research Database (Denmark)

    Cianca, Ernestina; Prasad, Ramjee; De Sanctis, Mauro

    2005-01-01

    The issue of the performance degradation of transmission control protocol/Internet Protocol (TCP/IP) over wireless links due to the presence of noncongestion-related packet losses has been addressed with a physical layer approach. The effectiveness of automatic repeat request techniques in enhancing TCP/IP performance depends on the tradeoff between frame transmission delay and residual errors after retransmissions. The paper shows how a truncated power control can be effectively applied to improve that tradeoff so that a higher transmission reliability is provided without increasing the frame transmission delay through the radio link layer and without increasing the energy consumption. An analytical framework has been developed to show the feasibility and effectiveness of the proposed power control. The analytical results, which are carried out assuming a constant multiuser...

  19. Comparing Response Times and Error Rates in a Simultaneous Masking Paradigm

    Directory of Open Access Journals (Sweden)

    F Hermens

    2014-08-01

    In simultaneous masking, performance on a foveally presented target is impaired by one or more flanking elements. Previous studies have demonstrated strong effects of the grouping of the target and the flankers on the strength of masking (e.g., Malania, Herzog & Westheimer, 2007). These studies have predominantly measured offset discrimination thresholds as the index of performance, and it is therefore unclear whether other measures of performance provide similar outcomes. A recent study, which examined the role of grouping on error rates and response times in a speeded vernier offset discrimination task similar to that used by Malania et al. (2007), suggested a possible dissociation between the two measures, with error rates mimicking threshold performance but response times showing differential results (Panis & Hermens, 2014). We here report the outcomes of three experiments examining this possible dissociation, and demonstrate an overall similar pattern of results for error rates and response times across a broad range of mask layouts. Moreover, the pattern of results in our experiments strongly correlates with threshold performance reported earlier (Malania et al., 2007). Our results suggest that outcomes in a simultaneous masking paradigm do not critically depend on the outcome measure used, and therefore provide evidence for a common underlying mechanism.

  20. Immature truncated O-glycophenotype of cancer directly induces oncogenic features

    DEFF Research Database (Denmark)

    Radhakrishnan, Prakash; Dabelsteen, Sally; Madsen, Frey Brus

    2014-01-01

    Aberrant expression of immature truncated O-glycans is a characteristic feature observed on virtually all epithelial cancer cells, and a very high frequency is observed in early epithelial premalignant lesions that precede the development of adenocarcinomas. Expression of the truncated O-glycans ...

  1. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...
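The left- and right-truncated normal distributions the book tabulates can be sketched with a rejection sampler and the renormalized density. A minimal illustration; `rtrunc_normal` and `trunc_normal_pdf` are invented names, not the book's notation:

```python
import math
import random

def rtrunc_normal(mu, sigma, lo, hi, rng=random):
    """Sample N(mu, sigma^2) restricted to [lo, hi] by rejection.

    Fine whenever the interval retains a reasonable share of the mass.
    """
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

def trunc_normal_pdf(x, mu, sigma, lo, hi):
    """Density of the [lo, hi]-truncated normal: phi(z) renormalized by the
    probability mass Phi(z_hi) - Phi(z_lo) kept inside the interval."""
    if not lo <= x <= hi:
        return 0.0
    z = lambda t: (t - mu) / sigma
    phi = math.exp(-0.5 * z(x) ** 2) / (sigma * math.sqrt(2 * math.pi))
    Phi = lambda t: 0.5 * (1 + math.erf(z(t) / math.sqrt(2)))
    return phi / (Phi(hi) - Phi(lo))

# asymmetric truncation of a standard normal shifts the mean off zero
random.seed(0)
draws = [rtrunc_normal(0, 1, -1, 2) for _ in range(5000)]
```

Truncating a standard normal to [-1, 2] illustrates the asymmetric shapes the book discusses: the sample mean moves to roughly 0.23 rather than 0.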

  2. Impact of degree truncation on the spread of a contagious process on networks.

    Science.gov (United States)

    Harling, Guy; Onnela, Jukka-Pekka

    2018-03-01

    Understanding how person-to-person contagious processes spread through a population requires accurate information on connections between population members. However, such connectivity data, when collected via interview, is often incomplete due to partial recall, respondent fatigue or study design, e.g., fixed choice designs (FCD) truncate out-degree by limiting the number of contacts each respondent can report. Past research has shown how FCD truncation affects network properties, but its implications for predicted speed and size of spreading processes remain largely unexplored. To study the impact of degree truncation on predictions of spreading process outcomes, we generated collections of synthetic networks containing specific properties (degree distribution, degree-assortativity, clustering), and also used empirical social network data from 75 villages in Karnataka, India. We simulated FCD using various truncation thresholds and ran a susceptible-infectious-recovered (SIR) process on each network. We found that spreading processes propagated on truncated networks resulted in slower and smaller epidemics, with a sudden decrease in prediction accuracy at a level of truncation that varied by network type. Our results have implications beyond FCD to truncation due to any limited sampling from a larger network. We conclude that knowledge of network structure is important for understanding the accuracy of predictions of process spread on degree truncated networks.
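The effect of fixed-choice-design truncation on a spreading process, as studied above, can be sketched by capping each node's reported out-degree and running a discrete-time SIR process on the resulting network. This is a toy sketch on a small random graph with invented parameters, not the authors' simulation code:

```python
import random

def truncate_out_degree(adj, k, rng):
    """Simulate a fixed-choice design: each node reports at most k of its
    contacts; an undirected tie survives if either endpoint reports it."""
    kept = set()
    for node, nbrs in adj.items():
        sample = nbrs if len(nbrs) <= k else rng.sample(sorted(nbrs), k)
        for v in sample:
            kept.add(frozenset((node, v)))
    new_adj = {node: set() for node in adj}
    for edge in kept:
        a, b = tuple(edge)
        new_adj[a].add(b)
        new_adj[b].add(a)
    return new_adj

def sir_final_size(adj, beta, seed_node, rng):
    """Discrete-time SIR: each infectious node infects each susceptible
    neighbour with probability beta, then recovers after one step."""
    infected, recovered = {seed_node}, set()
    while infected:
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and v not in recovered and rng.random() < beta:
                    new.add(v)
        recovered |= infected
        infected = new - recovered
    return len(recovered)

rng = random.Random(42)
n, p = 300, 0.03                         # toy Erdos-Renyi-style graph
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:
            adj[i].add(j)
            adj[j].add(i)

full = sum(sir_final_size(adj, 0.3, 0, rng) for _ in range(20)) / 20
trunc = sum(sir_final_size(truncate_out_degree(adj, 2, rng), 0.3, 0, rng)
            for _ in range(20)) / 20
```

With out-degree capped at 2, the truncated network loses enough edges that the simulated epidemics are markedly smaller than on the full network, matching the paper's qualitative finding.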

  3. Multiple-scattering theory with a truncated basis set

    International Nuclear Information System (INIS)

    Zhang, X.; Butler, W.H.

    1992-01-01

    Multiple-scattering theory (MST) is an extremely efficient technique for calculating the electronic structure of an assembly of atoms. The wave function in MST is expanded in terms of spherical waves centered on each atom and indexed by their orbital and azimuthal quantum numbers, l and m. The secular equation which determines the characteristic energies can be truncated at a value of the orbital angular momentum l_max for which the higher angular momentum phase shifts, δ_l (l > l_max), are sufficiently small. Generally, the wave-function coefficients which are calculated from the secular equation are also truncated at l_max. Here we point out that this truncation of the wave function is not necessary and is in fact inconsistent with the truncation of the secular equation. A consistent procedure is described in which the states with higher orbital angular momenta are retained but with their phase shifts set to zero. We show that this treatment gives smooth, continuous, and correctly normalized wave functions and that the total charge density calculated from the corresponding Green function agrees with the Lloyd formula result. We also show that this augmented wave function can be written as a linear combination of Andersen's muffin-tin orbitals in the case of muffin-tin potentials, and can be used to generalize the muffin-tin orbital idea to full-cell potentials.

  4. The impact of a closed‐loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before‐and‐after study

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-01-01

    Objectives To assess the impact of a closed‐loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants Before‐and‐after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention Closed‐loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results Prescribing errors were identified in 3.8% of 2450 medication orders pre‐intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre‐intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre‐intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions A closed‐loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication‐related tasks increased. PMID:17693676

  5. Importance-truncated shell model for multi-shell valence spaces

    Energy Technology Data Exchange (ETDEWEB)

    Stumpf, Christina; Vobig, Klaus; Roth, Robert [Institut fuer Kernphysik, TU Darmstadt (Germany)

    2016-07-01

    The valence-space shell model is one of the workhorses of nuclear structure theory. In traditional applications, shell-model calculations are carried out using effective interactions constructed in a phenomenological framework for rather small valence spaces, typically spanned by one major shell. We improve on this traditional approach by addressing two main aspects. First, we use new effective interactions derived in an ab initio approach and, thus, establish a connection to the underlying nuclear interaction, providing access to single- and multi-shell valence spaces. Second, we extend the shell model to larger valence spaces by applying an importance-truncation scheme based on a perturbative importance measure. In this way, we reduce the model space to the basis states relevant for the description of a few target eigenstates and solve the eigenvalue problem in this physics-driven truncated model space. In particular, multi-shell valence spaces are not tractable otherwise. We combine the importance-truncated shell model with refined extrapolation schemes to approximately recover the exact result. We present first results obtained in the importance-truncated shell model with the newly derived ab initio effective interactions for multi-shell valence spaces, e.g., the sdpf shell.

  6. Flow equation of quantum Einstein gravity in a higher-derivative truncation

    International Nuclear Information System (INIS)

    Lauscher, O.; Reuter, M.

    2002-01-01

    Motivated by recent evidence indicating that quantum Einstein gravity (QEG) might be nonperturbatively renormalizable, the exact renormalization group equation of QEG is evaluated in a truncation of theory space which generalizes the Einstein-Hilbert truncation by the inclusion of a higher-derivative term (R²). The beta functions describing the renormalization group flow of the cosmological constant, Newton's constant, and the R² coupling are computed explicitly. The fixed point properties of the 3-dimensional flow are investigated, and they are confronted with those of the 2-dimensional Einstein-Hilbert flow. The non-Gaussian fixed point predicted by the latter is found to generalize to a fixed point on the enlarged theory space. In order to test the reliability of the R² truncation near this fixed point we analyze the residual scheme dependence of various universal quantities; it turns out to be very weak. The two truncations are compared in detail, and their numerical predictions are found to agree with a surprisingly high precision. Because of the consistency of the results it appears increasingly unlikely that the non-Gaussian fixed point is an artifact of the truncation. If it is present in the exact theory, QEG is probably nonperturbatively renormalizable and "asymptotically safe." We discuss how the conformal factor problem of Euclidean gravity manifests itself in the exact renormalization group approach and show that, in the R² truncation, the investigation of the fixed point is not afflicted with this problem. Also the Gaussian fixed point of the Einstein-Hilbert truncation is analyzed; it turns out that it does not generalize to a corresponding fixed point on the enlarged theory space.

  7. Approximate truncation robust computed tomography—ATRACT

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Maier, Andreas

    2013-01-01

    We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm aims at reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel original formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed, and reconstruction results from both simulated projections and first clinical data sets are presented.

  8. Stellar Disk Truncations: HI Density and Dynamics

    Science.gov (United States)

    Trujillo, Ignacio; Bakos, Judit

    2010-06-01

    Using The HI Nearby Galaxy Survey (THINGS) 21-cm observations of a sample of nearby (nearly face-on) galaxies, we explore whether the stellar disk truncation phenomenon produces any signature in the HI gas density and/or in the gas dynamics. Recent cosmological simulations suggest that the origin of the break in the surface brightness distribution is the appearance of a warp at the truncation position. This warp should produce a flaring of the gas distribution, increasing the velocity dispersion of the HI component beyond the break. We do not find, however, any evidence of this increase in the gas velocity dispersion profile.

  9. A truncated accretion disk in the galactic black hole candidate source H1743-322

    International Nuclear Information System (INIS)

    Sriram, Kandulapati; Agrawal, Vivek Kumar; Rao, Arikkala Raghurama

    2009-01-01

    To investigate the geometry of the accretion disk in the source H1743-322, we have carried out a detailed X-ray temporal and spectral study using RXTE pointed observations. We have selected all data pertaining to the Steep Power Law (SPL) state during the 2003 outburst of this source. We find anti-correlated hard X-ray lags in three of the observations, and the changes in the spectral and timing parameters (like the QPO frequency) confirm the idea of a truncated accretion disk in this source. Compiling data from similar observations of other sources, we find a correlation between the fractional change in the QPO frequency and the observed delay. We suggest that these observations indicate a definite size scale in the inner accretion disk (the radius of the truncated disk), and we explain the observed correlation using various disk parameters like the Compton cooling time scale, viscous time scale, etc.

  10. Cross-layer combining of adaptive modulation and truncated ARQ under cognitive radio resource requirements

    KAUST Repository

    Yang, Yuli; Ma, Hao; Aïssa, Sonia

    2012-01-01

    In addressing the issue of taking full advantage of the shared spectrum under imposed limitations in a cognitive radio (CR) network, we exploit a cross-layer design for the communications of secondary users (SUs), which combines adaptive modulation and coding (AMC) at the physical layer with truncated automatic repeat request (ARQ) protocol at the data link layer. To achieve high spectral efficiency (SE) while maintaining a target packet loss probability (PLP), switching among different transmission modes is performed to match the time-varying propagation conditions pertaining to the secondary link. Herein, by minimizing the SU's packet error rate (PER) with each transmission mode subject to the spectrum-sharing constraints, we obtain the optimal power allocation at the secondary transmitter (ST) and then derive the probability density function (pdf) of the received SNR at the secondary receiver (SR). Based on these statistics, the SU's packet loss rate and average SE are obtained in closed form, considering transmissions over block-fading channels with different distributions. Our results quantify the relation between the performance of a secondary link exploiting the cross-layer-designed adaptive transmission and the interference inflicted on the primary user (PU) in CR networks. © 1967-2012 IEEE.
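The interplay between the per-mode PER target and truncated ARQ follows a standard cross-layer argument: a packet is lost only if the initial transmission and all allowed retransmissions fail, so an end-to-end packet loss probability (PLP) target translates into a looser per-attempt PER target. A small sketch of that dimensioning step (an illustrative reconstruction assuming independent attempts, not this paper's spectrum-sharing derivation):

```python
def per_target(plp_target, max_retx):
    """Per-attempt packet error rate that meets an end-to-end packet loss
    probability target when truncated ARQ allows max_retx retransmissions."""
    return plp_target ** (1.0 / (max_retx + 1))

def end_to_end_plp(per, max_retx):
    """A packet is lost only if the first try and all retransmissions fail."""
    return per ** (max_retx + 1)
```

For example, a PLP target of 10^-3 with two allowed retransmissions relaxes the per-attempt PER target to 0.1, which is what lets AMC pick more aggressive transmission modes.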

  11. Residual-based Methods for Controlling Discretization Error in CFD

    Science.gov (United States)

    2015-08-24

    (The indexed excerpt of this report is garbled. It contains fragments of Eq. (25), a cell-averaged quantity involving det J, where J is the Jacobian of the coordinate transformation and the weights can be found from quadrature, together with partial citations: Layton, W., Lee, H.K., and Peterson, J. (2002). "A Defect-Correction Method for the Incompressible Navier-Stokes Equations," Applied Mathematics and Computation, Vol. 129, pp. 1-19; and Lee, D. and Tsuei, Y.M. (1992). "A Formula for Estimation of Truncation Errors of Convective Terms in a ...)

  12. Timing of the Cenozoic basins of Southern Mexico and its relationship with the Pacific truncation process: Subduction erosion or detachment of the Chortís block

    Science.gov (United States)

    Silva-Romo, Gilberto; Mendoza-Rosales, Claudia Cristina; Campos-Madrigal, Emiliano; Hernández-Marmolejo, Yoalli Bianii; de la Rosa-Mora, Orestes Antonio; de la Torre-González, Alam Israel; Bonifacio-Serralde, Carlos; López-García, Nallely; Nápoles-Valenzuela, Juan Ivan

    2018-04-01

    In the central sector of the Sierra Madre del Sur in Southern Mexico, between approximately 36 and 16 Ma ago and in a west-to-east direction, a diachronic process of formation of ∼north-south trending fault-bounded basins occurred. No tectono-sedimentary event in the period between 25 and 20 Ma is recognized in the study region, a period during which subduction erosion has been proposed to have truncated the continental crust of southern Mexico. The chronology, geometry and style of formation of the Eocene-Miocene fault-bounded basins are more congruent with crustal truncation by detachment of the Chortís block, thus bringing into question the subduction-erosion hypothesis for the Southern Mexico margin. Between Taxco and Tehuacán, using seven new laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS) U-Pb ages in magmatic zircons, we refine the stratigraphy of the Tepenene, Tehuitzingo, Atzumba and Tepelmeme basins. The analyzed basins present similar tectono-sedimentary evolutions, as follows: Stage 1, depocenter formation and filling by clastic rocks accumulated as alluvial fans; and Stage 2, lacustrine sedimentation characterized by calcareous and/or evaporite beds. Based on our results, we propose the following hypothesis: in Southern Mexico, during Eocene-Miocene times, fault-bounded basins with a general north-south trend formed diachronically within the framework of convergence between the North and South American plates, and these basins were left behind in the continental crust once the Chortís block had slipped towards the east. The onset of basin formation along left strike-slip faults during Eocene-Oligocene times can be associated with thermomechanical maturation of the crust, which caused the brittle/ductile transition level in the continental crust to shallow.

  13. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Science.gov (United States)

    2010-01-01

    ... record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL... § 1605.22 Claims for correction of Board or TSP record keeper errors; time limitations. (a) Filing claims... after that time, the Board or TSP record keeper may use its sound discretion in deciding whether to...

  14. Enhancing propagation characteristics of truncated localized waves in silica

    KAUST Repository

    Salem, Mohamed

    2011-07-01

    The spectral characteristics of truncated Localized Waves propagating in dispersive silica are analyzed. Numerical experiments show that the immunity of the truncated Localized Waves propagating in dispersive silica to decay and distortion is enhanced as the non-linearity of the relation between the transverse spatial spectral components and the wave vector gets stronger, in contrast to free-space propagating waves, which suffer from early decay and distortion. © 2011 IEEE.

  15. Uniform stable conformal convolutional perfectly matched layer for enlarged cell technique conformal finite-difference time-domain method

    International Nuclear Information System (INIS)

    Wang Yue; Wang Jian-Guo; Chen Zai-Gao

    2015-01-01

    Based on conformal construction of physical model in a three-dimensional Cartesian grid, an integral-based conformal convolutional perfectly matched layer (CPML) is given for solving the truncation problem of the open port when the enlarged cell technique conformal finite-difference time-domain (ECT-CFDTD) method is used to simulate the wave propagation inside a perfect electric conductor (PEC) waveguide. The algorithm has the same numerical stability as the ECT-CFDTD method. For the long-time propagation problems of an evanescent wave in a waveguide, several numerical simulations are performed to analyze the reflection error by sweeping the constitutive parameters of the integral-based conformal CPML. Our numerical results show that the integral-based conformal CPML can be used to efficiently truncate the open port of the waveguide. (paper)

  16. Firewalls as artefacts of inconsistent truncations of quantum geometries

    Science.gov (United States)

    Germani, Cristiano; Sarkar, Debajyoti

    2016-01-01

    In this paper we argue that a firewall is simply a manifestation of an inconsistent truncation of non-perturbative effects that unitarize the semiclassical black hole. Namely, we show that a naive truncation of quantum corrections to the Hawking spectrum at order O(e^{-S}) inexorably leads to a "localised" divergent energy density near the black hole horizon. Nevertheless, in the same approximation, a distant observer only sees a discretised spectrum and concludes that unitarity is achieved by O(e^{-S}) effects. This is due to the fact that the correct quantum corrections to the Hawking spectrum instead go like O(g^{tt} e^{-S}). Therefore, while far away from the horizon, where g^{tt} ≈ 1, quantum corrections are perturbative, they do diverge close to the horizon, where g^{tt} → ∞. Nevertheless, these "corrections" nicely re-sum so that correlation functions are smooth at the would-be black hole horizon. Thus, we conclude that the appearance of firewalls is just a signal of the breaking of the semiclassical approximation at the Page time, even for large black holes.

  17. Chaos and noise in a truncated Toda potential

    International Nuclear Information System (INIS)

    Habib, S.; Kandrup, H.E.; Mahon, M.E.

    1996-01-01

    Results are reported from a numerical investigation of orbits in a truncated Toda potential that is perturbed by weak friction and noise. Aside from the perturbations displaying a simple scaling in the amplitude of the friction and noise, it is found that even very weak friction and noise can induce an extrinsic diffusion through cantori on a time scale that is much shorter than that associated with intrinsic diffusion in the unperturbed system. The results have applications in galactic dynamics and in the formation of a beam halo in charged particle beams. copyright 1996 The American Physical Society

  18. Probabilistic error bounds for reduced order modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abdo, M.G.; Wang, C.; Abdel-Khalik, H.S., E-mail: abdo@purdue.edu, E-mail: wang1730@purdue.edu, E-mail: abdelkhalik@purdue.edu [Purdue Univ., School of Nuclear Engineering, West Lafayette, IN (United States)

    2015-07-01

    Reduced order modeling has proven to be an effective tool when repeated execution of reactor analysis codes is required. ROM operates on the assumption that the intrinsic dimensionality of the associated reactor physics models is sufficiently small when compared to the nominal dimensionality of the input and output data streams. By employing a truncation technique with roots in linear algebra matrix decomposition theory, ROM effectively discards all components of the input and output data that have negligible impact on reactor attributes of interest. This manuscript introduces a mathematical approach to quantify the errors resulting from the discarded ROM components. As supported by numerical experiments, the introduced analysis proves that the contribution of the discarded components could be upper-bounded with an overwhelmingly high probability. The reverse of this statement implies that the ROM algorithm can self-adapt to determine the level of the reduction needed such that the maximum resulting reduction error is below a given tolerance limit that is set by the user. (author)

  19. Influence of planning time and treatment complexity on radiation therapy errors.

    Science.gov (United States)

    Gensheimer, Michael F; Zeng, Jing; Carlson, Joshua; Spady, Phil; Jordan, Loucille; Kane, Gabrielle; Ford, Eric C

    2016-01-01

    Radiation treatment planning is a complex process with potential for error. We hypothesized that shorter time from simulation to treatment would result in rushed work and higher incidence of errors. We examined treatment planning factors predictive for near-miss events. Treatments delivered from March 2012 through October 2014 were analyzed. Near-miss events were prospectively recorded and coded for severity on a 0 to 4 scale; only grade 3-4 (potentially severe/critical) events were studied in this report. For 4 treatment types (3-dimensional conformal, intensity modulated radiation therapy, stereotactic body radiation therapy [SBRT], neutron), logistic regression was performed to test influence of treatment planning time and clinical variables on near-miss events. There were 2257 treatment courses during the study period, with 322 grade 3-4 near-miss events. SBRT treatments had more frequent events than the other 3 treatment types (18% vs 11%, P = .04). For the 3-dimensional conformal group (1354 treatments), univariate analysis showed several factors predictive of near-miss events: longer time from simulation to first treatment (P = .01), treatment of primary site versus metastasis (P < .001), longer treatment course (P < .001), and pediatric versus adult patient (P = .002). However, on multivariate regression only pediatric versus adult patient remained predictive of events (P = 0.02). For the intensity modulated radiation therapy, SBRT, and neutron groups, time between simulation and first treatment was not found to be predictive of near-miss events on univariate or multivariate regression. When controlling for treatment technique and other clinical factors, there was no relationship between time spent in radiation treatment planning and near-miss events. SBRT and pediatric treatments were more error-prone, indicating that clinical and technical complexity of treatments should be taken into account when targeting safety interventions. 

  20. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    Science.gov (United States)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling method, as well as from inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation. The efficiency of the coupled model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
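The downward delivery of parent heads onto child boundary nodes reduces, at its simplest, to interpolation in time between coarse parent time steps (spatial interpolation across boundary nodes is analogous). A minimal sketch of just that step, with hypothetical times and head values; the paper's scheme additionally handles spatial interpolation and adaptive local time stepping:

```python
def interp_head(times, heads, t):
    """Linearly interpolate parent-model heads (sampled on a coarse time
    grid `times`) onto a child-model boundary node at time t.
    Values outside the parent time window are held constant."""
    if t <= times[0]:
        return heads[0]
    if t >= times[-1]:
        return heads[-1]
    for (t0, h0), (t1, h1) in zip(zip(times, heads), zip(times[1:], heads[1:])):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return (1 - w) * h0 + w * h1
```

The child model can then call this at each of its (finer) time steps to set its boundary condition; the mismatch between this interpolant and the true parent solution is one source of the sub-model errors the study quantifies.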

  1. Transiently truncated and differentially regulated expression of midkine during mouse embryogenesis

    International Nuclear Information System (INIS)

    Chen Qin; Yuan Yuanyang; Lin Shuibin; Chang Youde; Zhuo Xinming; Wei Wei; Tao Ping; Ruan Lingjuan; Li Qifu; Li Zhixing

    2005-01-01

    Midkine (MK) is a retinoic acid responsive cytokine, mostly expressed in embryonic tissues. Aberrant expression of MK has been found in numerous cancers; in humans, a truncated MK is expressed specifically in tumor/cancer tissues. Here we report the discovery of a novel truncated form of MK transiently expressed during normal mouse embryonic development. In addition, MK is concentrated at the interface between developing epithelium and mesenchyme as well as in highly proliferating cells. Its expression, which is closely coordinated with angiogenesis and vasculogenesis, is spatiotemporally regulated, with peaks during the period of extensive organogenesis and in undifferentiated cells, tailing off in maturing cells, implying a role in nascent blood vessel (endothelial) signaling of tissue differentiation and in stem cell renewal/differentiation. Cloning and sequencing analysis revealed that the embryonic truncated MK, in which the conserved domain is deleted in-frame, presumably producing a novel secreted small peptide, is different from the truncated form in human cancer tissues, whose deletion results in a frame-shift mutation. Our data suggest that MK may play a role in epithelium-mesenchyme interactions, blood vessel signaling, and the decision between proliferation and differentiation. Detection of the transiently expressed truncated MK reveals a novel function in development and sheds light on its role in carcinogenesis.

  2. Time-series analysis of Nigeria rice supply and demand: Error ...

    African Journals Online (AJOL)

    The study examined a time-series analysis of Nigeria rice supply and demand with a view to determining any long-run equilibrium between them using the Error Correction Model approach (ECM). The data used for the study represents the annual series of 1960-2007 (47 years) for rice supply and demand in Nigeria, ...

  3. Modifications of Geometric Truncation of the Scattering Phase Function

    Science.gov (United States)

    Radkevich, A.

    2017-12-01

    The phase function (PF) of light scattering on large atmospheric particles has a very strong peak in the forward direction, constituting a challenge for accurate numerical calculations of radiance. Such accurate (and fast) evaluations are important in problems of remote sensing of the atmosphere. A scaling transformation replaces the original PF with a sum of a delta function and a new regular smooth PF. A number of methods to construct such a PF have been suggested. The delta-M and delta-fit methods require evaluation of the PF moments, which poses a numerical problem if a strongly anisotropic PF is given as a function of angle. Geometric truncation keeps the original PF unchanged outside the forward peak cone, replacing it with a constant within the cone. This approach is designed to preserve the asymmetry parameter. It has two disadvantages: 1) the PF has a discontinuity at the cone; 2) the choice of the cone is subjective, and no recommendations have been provided on the choice of the truncation angle. This choice affects both the truncation fraction and the value of the phase function within the forward cone. Both issues are addressed in this study. A simple functional form of the replacement PF is suggested. This functional form allows for a number of modifications; this study considers three versions providing a continuous PF. The considered modifications also each have one of three properties: preserving the asymmetry parameter, providing continuity of the first derivative of the PF, or preserving the mean scattering angle. The second problem mentioned above is addressed with a heuristic approach providing an unambiguous criterion for selection of the truncation angle. The approach showed good performance on liquid water and ice clouds with different particle size distributions. The suggested modifications were tested on different cloud PFs using both discrete-ordinates and Monte Carlo methods. It was shown that the modifications provide better accuracy of radiance computation compared to the original geometric truncation.
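Plain geometric truncation is easy to state concretely: clip the phase function to its cone-edge value inside the truncation cone, count the clipped energy as the delta-function fraction, and renormalize the remainder. A sketch using a Henyey-Greenstein phase function as an illustrative stand-in (the truncation angle and g below are arbitrary choices; the study's own replacement form and selection criterion are not reproduced):

```python
import math

def hg_pf(theta, g):
    """Henyey-Greenstein phase function, normalized so that
    (1/2) * integral of p(theta) * sin(theta) dtheta over [0, pi] equals 1."""
    mu = math.cos(theta)
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

def geometric_truncation(pf, theta_t, n=20000):
    """Clip pf to its cone-edge value inside [0, theta_t], account for the
    clipped energy as the delta-function fraction f, and renormalize.
    Returns (f, truncated-and-renormalized phase function)."""
    edge = pf(theta_t)

    def clipped(theta):
        return edge if theta < theta_t else pf(theta)

    h = math.pi / n
    # midpoint rule for f = (1/2) * integral of (pf - clipped) * sin(theta)
    f = 0.5 * sum((pf((i + 0.5) * h) - clipped((i + 0.5) * h))
                  * math.sin((i + 0.5) * h) * h for i in range(n))

    def new_pf(theta):
        return clipped(theta) / (1.0 - f)

    return f, new_pf
```

By construction the new PF integrates to one (the fraction f is carried by the delta peak), and it is continuous at the cone edge; the asymmetry parameter and the constant-inside-cone shape are exactly the aspects the modifications in this study refine.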

  4. Propagation of truncated modified Laguerre-Gaussian beams

    Science.gov (United States)

    Deng, D.; Li, J.; Guo, Q.

    2010-01-01

    By expanding the circ function into a finite sum of complex Gaussian functions and applying the Collins formula, the propagation of hard-edge diffracted modified Laguerre-Gaussian beams (MLGBs) through a paraxial ABCD system is studied, and an approximate closed-form propagation expression for hard-edge diffracted MLGBs is obtained. The transverse intensity distribution of an MLGB carrying finite power can be characterized by a single bright and symmetric ring during propagation when the aperture radius is very large. Starting from the definition of the generalized truncated second-order moments, the beam quality factor of MLGBs passed through a hard-edged circular aperture is investigated in a cylindrical coordinate system, and turns out to depend on the truncation radius and the beam order.

  5. Balanced truncation for linear switched systems

    DEFF Research Database (Denmark)

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2013-01-01

    In this paper, we present a theoretical analysis of the model reduction algorithm for linear switched systems from Shaker and Wisniewski (2011, 2009). This algorithm is reminiscent of the balanced truncation method for linear parameter varying systems (Wood et al., 1996) [3]. Specifically...

  6. Heat conduction errors and time lag in cryogenic thermometer installations

    Science.gov (United States)

    Warshawsky, I.

    1973-01-01

    Installation practices are recommended that will increase rate of heat exchange between the thermometric sensing element and the cryogenic fluid and that will reduce the rate of undesired heat transfer to higher-temperature objects. Formulas and numerical data are given that help to estimate the magnitude of heat-conduction errors and of time lag in response.

  7. Truncated Newton-Raphson Methods for Quasicontinuum Simulations

    National Research Council Canada - National Science Library

    Liang, Yu; Kanapady, Ramdev; Chung, Peter W

    2006-01-01

    .... In this research, we report the effectiveness of the truncated Newton-Raphson method and quasi-Newton method with low-rank Hessian update strategy that are evaluated against the full Newton-Raphson...

  8. Timing analysis for embedded systems using non-preemptive EDF scheduling under bounded error arrivals

    Directory of Open Access Journals (Sweden)

    Michael Short

    2017-07-01

    Embedded systems consist of one or more processing units which are completely encapsulated by the devices under their control, and they often have stringent timing constraints associated with their functional specification. Previous research has considered the performance of different types of task scheduling algorithm and developed associated timing analysis techniques for such systems. Although preemptive scheduling techniques have traditionally been favored, rapid increases in processor speeds combined with improved insights into the behavior of non-preemptive scheduling techniques have seen an increased interest in their use for real-time applications such as multimedia, automation and control. However when non-preemptive scheduling techniques are employed there is a potential lack of error confinement should any timing errors occur in individual software tasks. In this paper, the focus is upon adding fault tolerance in systems using non-preemptive deadline-driven scheduling. Schedulability conditions are derived for fault-tolerant periodic and sporadic task sets experiencing bounded error arrivals under non-preemptive deadline scheduling. A timing analysis algorithm is presented based upon these conditions and its run-time properties are studied. Computational experiments show it to be highly efficient in terms of run-time complexity and competitive ratio when compared to previous approaches.
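The baseline that the fault-tolerant conditions above extend is the classical exact schedulability test for sporadic tasks under non-preemptive EDF (Jeffay et al., 1991): a utilization bound plus a blocking condition checked over an interval of lengths. A sketch of that classical test only, not the paper's bounded-error-arrival analysis; integer task parameters are assumed and condition 2 is checked at every integer point for clarity rather than efficiency:

```python
def np_edf_schedulable(tasks):
    """Exact test (Jeffay et al. 1991) for sporadic tasks with deadlines equal
    to periods under non-preemptive EDF. tasks: list of (C, T) with integer
    execution time C and period T."""
    tasks = sorted(tasks, key=lambda ct: ct[1])          # by period
    # Condition 1: total utilization at most 1
    if sum(c / t for c, t in tasks) > 1.0:
        return False
    # Condition 2: for each task i, for all L with T_1 < L < T_i,
    # L >= C_i + sum_{j < i} floor((L - 1) / T_j) * C_j
    for i, (ci, ti) in enumerate(tasks):
        for L in range(tasks[0][1] + 1, ti):
            demand = ci + sum(((L - 1) // tj) * cj for cj, tj in tasks[:i])
            if L < demand:
                return False
    return True
```

The second condition captures the non-preemptive blocking effect: a long job of a long-period task, once started, cannot be preempted, so it must fit alongside the worst-case demand of all shorter-period tasks.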

  9. The Stars and Gas in Outer Parts of Galaxy Disks : Extended or Truncated, Flat or Warped?

    NARCIS (Netherlands)

    van der Kruit, P. C.; Funes, JG; Corsini, EM

    2008-01-01

    I review observations of truncations of stellar disks and models for their origin, compare observations of truncations in moderately inclined galaxies to those in edge-on systems and discuss the relation between truncations and H I-warps and their systematics and origin. Truncations are a common

  10. On truncated Taylor series and the position of their spurious zeros

    DEFF Research Database (Denmark)

    Christiansen, Søren; Madsen, Per A.

    2006-01-01

    A truncated Taylor series, or a Taylor polynomial, which may appear when treating the motion of gravity water waves, is obtained by truncating an infinite Taylor series for a complex, analytical function. For such a polynomial the position of the complex zeros is considered in case the Taylor...
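The spurious zeros of a truncated Taylor series can be inspected numerically. A minimal sketch, assuming the exponential function as the analytic function (the degree and choice of function are illustrative, not taken from the record):

```python
import math

import numpy as np

def taylor_poly_zeros(coeffs):
    """Zeros of a truncated Taylor series c_0 + c_1 z + ... + c_N z^N."""
    return np.roots(coeffs[::-1])  # np.roots expects the highest degree first

# Degree-12 Taylor polynomial of exp(z); coefficients are 1/k!
N = 12
coeffs = np.array([1.0 / math.factorial(k) for k in range(N + 1)])
zeros = taylor_poly_zeros(coeffs)

# exp(z) itself has no zeros, so every root found here is spurious;
# their moduli grow roughly linearly with the truncation order N.
print(np.sort(np.abs(zeros)))
```

Because the entire function has no zeros, all twelve roots are artefacts of the truncation, illustrating the "spurious zeros" the record refers to.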

  11. Analysis of potential errors in real-time streamflow data and methods of data verification by digital computer

    Science.gov (United States)

    Lystrom, David J.

    1972-01-01

    The magnitude, frequency, and types of errors inherent in real-time streamflow data are presented in part I. It was found that real-time data are generally less accurate than are historical data, primarily because real-time data are often used before errors can be detected and corrections applied.

  12. Considering the role of time budgets on copy-error rates in material culture traditions: an experimental assessment.

    Science.gov (United States)

    Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J

    2014-01-01

    Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.

  13. Effect of Truncating AUC at 12, 24 and 48 hr When Evaluating the Bioequivalence of Drugs with a Long Half-Life.

    Science.gov (United States)

    Moreno, Isabel; Ochoa, Dolores; Román, Manuel; Cabaleiro, Teresa; Abad-Santos, Francisco

    2016-01-01

    Bioequivalence studies of drugs with a long half-life require long periods of time for pharmacokinetic sampling. The latest update of the European guideline allows the area under the curve (AUC) truncated at 72 hr to be used as an alternative to AUC0-t as the primary parameter. The objective of this study was to evaluate the effect of truncating the AUC at 48, 24 and 12 hr on the acceptance of the bioequivalence criterion as compared with truncation at 72 hr in bioequivalence trials. The effect of truncated AUC on the within-individual coefficient of variation (CVw) and on the ratio of the formulations was also analysed. Twenty-eight drugs were selected from bioequivalence trials. Pharmacokinetic data were analysed using WinNonLin 2.0 based on the trapezoidal method. Analysis of variance (ANOVA) was performed to obtain the ratios and 90% confidence intervals for AUC at different time-points. The degree of agreement of AUC0-72 in relation to AUC0-48 and AUC0-24, according to the Landis and Koch classification, was 'almost perfect'. Statistically significant differences were observed when the CVw of AUC truncated at 72, 48 and 24 hr was compared with the CVw of AUC0-12. There were no statistically significant differences in the AUC ratio at any time-point. Compared to AUC0-72, Pearson's correlation coefficient for mean AUC, AUC ratio and AUC CVw was worse for AUC0-12 than AUC0-24 or AUC0-48. These preliminary results could suggest that AUC truncation at 24 or 48 hr is adequate to determine whether two formulations are bioequivalent. © 2015 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
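The effect of truncating the AUC can be sketched with the linear trapezoidal method the study used. The concentration-time profile below is hypothetical, chosen only to mimic a drug with a long half-life:

```python
import numpy as np

def truncated_auc(t, c, t_max):
    """AUC from 0 to t_max by the linear trapezoidal rule."""
    mask = t <= t_max
    tt, cc = t[mask], c[mask]
    return float(np.sum(0.5 * (cc[1:] + cc[:-1]) * np.diff(tt)))

# Hypothetical profile: mono-exponential decline, half-life about 23 hr
t = np.array([0.0, 1, 2, 4, 8, 12, 24, 48, 72])  # sampling times, hr
c = 10.0 * np.exp(-0.03 * t)                      # concentrations, ng/mL

# Truncate the AUC at the time-points compared in the study
aucs = {t_max: truncated_auc(t, c, t_max) for t_max in (12, 24, 48, 72)}
print(aucs)
```

With a profile like this, AUC0-48 already captures most of AUC0-72, which is the intuition behind accepting the shorter truncation points.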

  14. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    Science.gov (United States)

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in the BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information of the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.

  15. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea

    2013-03-16

We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our estimates hold without any restrictions on the time steps for dG with exact integration or Reynolds' quadrature. They involve a mild restriction on the time steps for the practical Runge-Kutta-Radau methods of any order. The key ingredients are the stability results shown earlier in Bonito et al. (Time-discrete higher order ALE formulations: stability, 2013) along with a novel ALE projection. Numerical experiments illustrate and complement our theoretical results. © 2013 Springer-Verlag Berlin Heidelberg.

  16. Closed-form kinetic parameter estimation solution to the truncated data problem

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Kadrmas, Dan J; Gullberg, Grant T

    2010-01-01

    In a dedicated cardiac single photon emission computed tomography (SPECT) system, the detectors are focused on the heart and the background is truncated in the projections. Reconstruction using truncated data results in biased images, leading to inaccurate kinetic parameter estimates. This paper has developed a closed-form kinetic parameter estimation solution to the dynamic emission imaging problem. This solution is insensitive to the bias in the reconstructed images that is caused by the projection data truncation. This paper introduces two new ideas: (1) it includes background bias as an additional parameter to estimate, and (2) it presents a closed-form solution for compartment models. The method is based on the following two assumptions: (i) the amount of the bias is directly proportional to the truncated activities in the projection data, and (ii) the background concentration is directly proportional to the concentration in the myocardium. In other words, the method assumes that the image slice contains only the heart and the background, without other organs, that the heart is not truncated, and that the background radioactivity is directly proportional to the radioactivity in the blood pool. As long as the background activity can be modeled, the proposed method is applicable regardless of the number of compartments in the model. For simplicity, the proposed method is presented and verified using a single compartment model with computer simulations using both noiseless and noisy projections.

  17. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    International Nuclear Information System (INIS)

    Chen, Ming; Yu, Hengyong

    2015-01-01

Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++, which are linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphic processing units

  18. Thermal-Induced Errors Prediction and Compensation for a Coordinate Boring Machine Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2014-01-01

To improve the precision of CNC machine tools, a thermal error model for the motorized spindle was proposed based on time series analysis, considering the length of cutting tools and thermal declination angles, and real-time error compensation was implemented. A five-point method was applied to measure radial thermal declinations and axial expansion of the spindle with eddy current sensors, solving the problem that the three-point measurement cannot obtain the radial thermal angle errors. Then the stationarity of the thermal error sequences was determined by the Augmented Dickey-Fuller test, and the autocorrelation/partial autocorrelation function was applied to identify the model pattern. By combining both Yule-Walker equations and information criteria, the order and parameters of the models were solved effectively, which improved the prediction accuracy and generalization ability. The results indicated that the prediction accuracy of the time series model could reach up to 90%. In addition, the axial maximum error decreased from 39.6 μm to 7 μm after error compensation, and the machining accuracy was improved by 89.7%. Moreover, the X/Y-direction accuracy can reach up to 77.4% and 86%, respectively, which demonstrated that the proposed methods of measurement, modeling, and compensation were effective.
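The Yule-Walker step mentioned in the abstract can be sketched as follows; the AR(2) "thermal drift" series below is synthetic and purely illustrative, not the paper's spindle data:

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients by solving the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocovariance estimates r[0..p]
    r = np.array([x[: n - k] @ x[k:] / n for k in range(p + 1)])
    # Toeplitz system R @ phi = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(2) series standing in for a stationary thermal-error sequence
rng = np.random.default_rng(0)
e = rng.normal(0.0, 0.1, 2000)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.7 * x[t - 1] - 0.2 * x[t - 2] + e[t]

print(yule_walker(x, 2))  # close to the true coefficients (0.7, -0.2)
```

In practice the model order would be chosen by information criteria, as the abstract describes, rather than fixed at 2.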

  19. The lamppost model: effects of photon trapping, the bottom lamp and disc truncation

    Science.gov (United States)

    Niedźwiecki, Andrzej; Zdziarski, Andrzej A.

    2018-04-01

    We study the lamppost model, in which the primary X-ray sources in accreting black-hole systems are located symmetrically on the rotation axis on both sides of the black hole surrounded by an accretion disc. We show the importance of the emission of the source on the opposite side to the observer. Due to gravitational light bending, its emission can increase the direct (i.e., not re-emitted by the disc) flux by as much as an order of magnitude. This happens for near to face-on observers when the disc is even moderately truncated. For truncated discs, we also consider effects of emission of the top source gravitationally bent around the black hole. We also present results for the attenuation of the observed radiation with respect to that emitted by the lamppost as functions of the lamppost height, black-hole spin and the degree of disc truncation. This attenuation, which is due to the time dilation, gravitational redshift and the loss of photons crossing the black-hole horizon, can be as severe as by several orders of magnitude for low lamppost heights. We also consider the contribution to the observed flux due to re-emission by optically-thick matter within the innermost stable circular orbit.

  20. A Line Search Multilevel Truncated Newton Algorithm for Computing the Optical Flow

    Directory of Open Access Journals (Sweden)

    Lluís Garrido

    2015-06-01

We describe the implementation details and give the experimental results of three optimization algorithms for dense optical flow computation. In particular, using a line search strategy, we evaluate the performance of the unilevel truncated Newton method (LSTN), a multiresolution truncated Newton (MR/LSTN) and a full multigrid truncated Newton (FMG/LSTN). We use three image sequences and four models of optical flow for performance evaluation. The FMG/LSTN algorithm is shown to lead to better optical flow estimation with less computational work than both the LSTN and MR/LSTN algorithms.
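The core idea of a line-search truncated Newton method is to compute the Newton direction only approximately, via a few conjugate-gradient iterations on the Hessian system, then backtrack along it. This is a generic sketch on a small convex quadratic, not the paper's optical-flow implementation:

```python
import numpy as np

def truncated_newton(f, grad, hess, x, max_cg=10, tol=1e-8, iters=50):
    """Line-search truncated Newton: inner CG solves H d = -g only approximately."""
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        d, r = np.zeros_like(x), -g.copy()
        p = r.copy()
        for _ in range(max_cg):  # truncated inner CG loop
            Hp = H @ p
            alpha = (r @ r) / (p @ Hp)
            d = d + alpha * p
            r_new = r - alpha * Hp
            if np.linalg.norm(r_new) < 1e-12:
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        step = 1.0  # Armijo backtracking line search
        while f(x + step * d) > f(x) + 1e-4 * step * (g @ d):
            step *= 0.5
        x = x + step * d
    return x

# Convex quadratic with minimizer m = (1, 2)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
m = np.array([1.0, 2.0])
f = lambda x: 0.5 * (x - m) @ A @ (x - m)
x_opt = truncated_newton(f, lambda x: A @ (x - m), lambda x: A, np.zeros(2))
print(x_opt)  # → approximately [1. 2.]
```

Capping the inner CG iterations (`max_cg`) is what makes the method "truncated": the direction is only an approximate Newton step, which is usually far cheaper than solving the Hessian system exactly.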

  1. Minimum Time Trajectory Optimization of CNC Machining with Tracking Error Constraints

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2014-01-01

An off-line optimization approach of high precision minimum time feedrate for CNC machining is proposed. Besides the ordinarily considered velocity, acceleration, and jerk constraints, the dynamic performance constraint of each servo drive is also considered in this optimization problem to improve the tracking precision along the optimized feedrate trajectory. Tracking error is applied to indicate the servo dynamic performance of each axis. By using variable substitution, the tracking error constrained minimum time trajectory planning problem is formulated as a nonlinear path constrained optimal control problem. The bang-bang constraint structure of the optimal trajectory is proved in this paper; then a novel constraint handling method is proposed to realize a convex optimization based solution of the nonlinear constrained optimal control problem. A simple ellipse feedrate planning test is presented to demonstrate the effectiveness of the approach. Then the practicability and robustness of the trajectory generated by the proposed approach are demonstrated by a butterfly contour machining example.

  2. Low-mode truncation methods in the sine-Gordon equation

    International Nuclear Information System (INIS)

    Xiong Chuyu.

    1991-01-01

In this dissertation, the author studies the chaotic and coherent motions (i.e., low-dimensional chaotic attractor) in some near integrable partial differential equations, particularly the sine-Gordon equation and the nonlinear Schroedinger equation. In order to study the motions, he uses low mode truncation methods to reduce these partial differential equations to some truncated models (low-dimensional ordinary differential equations). By applying many methods available to low-dimensional ordinary differential equations, he can understand the low-dimensional chaotic attractor of PDE's much better. However, there are two important questions one needs to answer: (1) How many modes is good enough for the low mode truncated models to capture the dynamics uniformly? (2) Is the chaotic attractor in a low mode truncated model close to the chaotic attractor in the original PDE? And how close is it? He has developed two groups of powerful methods to help to answer these two questions. They are the computation methods of continuation and local bifurcation, and local Lyapunov exponents and Lyapunov exponents. Using these methods, he concludes that the 2N-nls ODE is a good model for the sine-Gordon equation and the nonlinear Schroedinger equation provided one chooses a 'good' basis and uses 'enough' modes (where 'enough' depends on the parameters of the system but is small for the parameter studied here). Therefore, one can use the 2N-nls ODE to study the chaos of PDE's in more depth.

  3. Firewalls as artefacts of inconsistent truncations of quantum geometries

    Energy Technology Data Exchange (ETDEWEB)

    Germani, Cristiano [Max-Planck-Institut fuer Physik, Muenchen (Germany); Arnold Sommerfeld Center, Ludwig-Maximilians-University, Muenchen (Germany); Institut de Ciencies del Cosmos, Universitat de Barcelona (Spain); Sarkar, Debajyoti [Max-Planck-Institut fuer Physik, Muenchen (Germany); Arnold Sommerfeld Center, Ludwig-Maximilians-University, Muenchen (Germany)

    2016-01-15

In this paper we argue that a firewall is simply a manifestation of an inconsistent truncation of non-perturbative effects that unitarize the semiclassical black hole. Namely, we show that a naive truncation of quantum corrections to the Hawking spectrum at order O(e^{-S}) inexorably leads to a 'localised' divergent energy density near the black hole horizon. Nevertheless, in the same approximation, a distant observer only sees a discretised spectrum and concludes that unitarity is achieved by e^{-S} effects. This is due to the fact that instead, the correct quantum corrections to the Hawking spectrum go like g^{tt}e^{-S}. Therefore, while at a distance far away from the horizon, where g^{tt} ∼ 1, quantum corrections are perturbative, they do diverge close to the horizon, where g^{tt} → ∞. Nevertheless, these 'corrections' nicely re-sum so that correlation functions are smooth at the would-be black hole horizon. Thus, we conclude that the appearance of firewalls is just a signal of the breaking of the semiclassical approximation at the Page time, even for large black holes. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  4. Firewalls as artefacts of inconsistent truncations of quantum geometries

    International Nuclear Information System (INIS)

    Germani, Cristiano; Sarkar, Debajyoti

    2016-01-01

In this paper we argue that a firewall is simply a manifestation of an inconsistent truncation of non-perturbative effects that unitarize the semiclassical black hole. Namely, we show that a naive truncation of quantum corrections to the Hawking spectrum at order O(e^{-S}) inexorably leads to a 'localised' divergent energy density near the black hole horizon. Nevertheless, in the same approximation, a distant observer only sees a discretised spectrum and concludes that unitarity is achieved by e^{-S} effects. This is due to the fact that instead, the correct quantum corrections to the Hawking spectrum go like g^{tt}e^{-S}. Therefore, while at a distance far away from the horizon, where g^{tt} ∼ 1, quantum corrections are perturbative, they do diverge close to the horizon, where g^{tt} → ∞. Nevertheless, these 'corrections' nicely re-sum so that correlation functions are smooth at the would-be black hole horizon. Thus, we conclude that the appearance of firewalls is just a signal of the breaking of the semiclassical approximation at the Page time, even for large black holes. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  5. Cross-layer combining of adaptive modulation and truncated ARQ under cognitive radio resource requirements

    KAUST Repository

    Yang, Yuli

    2012-11-01

In addressing the issue of taking full advantage of the shared spectrum under imposed limitations in a cognitive radio (CR) network, we exploit a cross-layer design for the communications of secondary users (SUs), which combines adaptive modulation and coding (AMC) at the physical layer with a truncated automatic repeat request (ARQ) protocol at the data link layer. To achieve high spectral efficiency (SE) while maintaining a target packet loss probability (PLP), switching among different transmission modes is performed to match the time-varying propagation conditions pertaining to the secondary link. Herein, by minimizing the SU's packet error rate (PER) with each transmission mode subject to the spectrum-sharing constraints, we obtain the optimal power allocation at the secondary transmitter (ST) and then derive the probability density function (pdf) of the received SNR at the secondary receiver (SR). Based on these statistics, the SU's packet loss rate and average SE are obtained in closed form, considering transmissions over block-fading channels with different distributions. Our results quantify the relation between the performance of a secondary link exploiting the cross-layer-designed adaptive transmission and the interference inflicted on the primary user (PU) in CR networks. © 1967-2012 IEEE.

  6. N-terminally truncated POM121C inhibits HIV-1 replication.

    Directory of Open Access Journals (Sweden)

    Hideki Saito

Recent studies have identified host cell factors that regulate early stages of HIV-1 infection including viral cDNA synthesis and orientation of the HIV-1 capsid (CA) core toward the nuclear envelope, but it remains unclear how viral DNA is imported through the nuclear pore and guided to the host chromosomal DNA. Here, we demonstrate that N-terminally truncated POM121C, a component of the nuclear pore complex, blocks HIV-1 infection. This truncated protein is predominantly localized in the cytoplasm, does not bind to CA, does not affect viral cDNA synthesis, reduces the formation of 2-LTR and diminishes the amount of integrated proviral DNA. Studies with an HIV-1-murine leukemia virus (MLV) chimeric virus carrying the MLV-derived Gag revealed that Gag is a determinant of this inhibition. Intriguingly, mutational studies have revealed that the blockade by N-terminally-truncated POM121C is closely linked to its binding to importin-β/karyopherin subunit beta 1 (KPNB1). These results indicate that N-terminally-truncated POM121C inhibits HIV-1 infection after completion of reverse transcription and before integration, and suggest an important role for KPNB1 in HIV-1 replication.

  7. Near real-time geocoding of SAR imagery with orbit error removal.

    NARCIS (Netherlands)

    Smith, A.J.E.

    2003-01-01

    When utilizing knowledge of the spacecraft trajectory for near real-time geocoding of Synthetic Aperture Radar (SAR) images, the main problem is that predicted satellite orbits have to be used, which may be in error by several kilometres. As part of the development of a Dutch autonomous mobile

  8. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

    A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon time-weighted balanced stochastic model reduction method and singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of CD player and a model of the atmospheric storm track....

  9. Estimation of Panel Data Regression Models with Two-Sided Censoring or Truncation

    DEFF Research Database (Denmark)

    Alan, Sule; Honore, Bo E.; Hu, Luojia

    2014-01-01

This paper constructs estimators for panel data regression models with individual-specific heterogeneity and two-sided censoring and truncation. Following Powell (1986) the estimation strategy is based on moment conditions constructed from re-censored or re-truncated residuals. While these moment...

  10. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DEFF Research Database (Denmark)

    Kertzscher Schwencke, Gustavo Adolfo Vladimir; Andersen, Claus E.; Tanderup, Kari

    2014-01-01

Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction ......, and the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry....... of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time-efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most

  11. A generalized right truncated bivariate Poisson regression model with applications to health data.

    Science.gov (United States)

    Islam, M Ataharul; Chowdhury, Rafiqul I

    2017-01-01

A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or under-dispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.
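For intuition, right truncation simply renormalizes the Poisson mass over the retained support 0..T. A minimal sketch with hypothetical values (the rate and truncation point are not from the study):

```python
import math

def right_truncated_poisson_pmf(y, lam, T):
    """P(Y = y | Y <= T) for a Poisson(lam) variable right-truncated at T."""
    if not 0 <= y <= T:
        return 0.0
    pmf = lambda k: math.exp(-lam) * lam**k / math.factorial(k)
    # Renormalize the ordinary Poisson pmf over the truncated support
    return pmf(y) / sum(pmf(k) for k in range(T + 1))

# e.g. a count recorded on a bounded 0..6 scale
probs = [right_truncated_poisson_pmf(y, lam=2.5, T=6) for y in range(7)]
print(sum(probs))  # → 1.0 up to floating-point rounding
```

A bivariate version couples two such counts; the marginal-conditional approach mentioned above factors the joint pmf as a marginal times a conditional before estimation.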

  12. A Novel SCCA Approach via Truncated ℓ1-norm and Truncated Group Lasso for Brain Imaging Genetics.

    Science.gov (United States)

    Du, Lei; Liu, Kefei; Zhang, Tuo; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Han, Junwei; Guo, Lei; Saykin, Andrew J; Shen, Li

    2017-09-18

Brain imaging genetics, which studies the linkage between genetic variations and structural or functional measures of the human brain, has become increasingly important in recent years. Discovering the bi-multivariate relationship between genetic markers such as single-nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is one major task in imaging genetics. Sparse Canonical Correlation Analysis (SCCA) has been a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ1-norm or its variants to induce sparsity. The ℓ0-norm penalty is a perfect sparsity-inducing tool which, however, is an NP-hard problem. In this paper, we propose the truncated ℓ1-norm penalized SCCA to improve the performance and effectiveness of the ℓ1-norm based SCCA methods. Besides, we propose an efficient optimization algorithm to solve this novel SCCA problem. The proposed method is an adaptive shrinkage method via tuning τ. It can avoid the time intensive parameter tuning if given a reasonably small τ. Furthermore, we extend it to the truncated group-lasso (TGL), and propose the TGL-SCCA model to improve the group-lasso-based SCCA methods. The experimental results, compared with four benchmark methods, show that our SCCA methods identify better or similar correlation coefficients, and better canonical loading profiles than the competing methods. This demonstrates the effectiveness and efficiency of our methods in discovering interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/~shenlab/tools/tlpscca/. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
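The truncated ℓ1 penalty itself is simple to state: it grows like |w| for small coefficients but caps at τ, so large coefficients incur no further shrinkage. A sketch (τ and the weights below are arbitrary illustrations, not the paper's settings):

```python
import numpy as np

def truncated_l1(w, tau):
    """Truncated l1 penalty: |w| below tau, flat at tau above it."""
    return np.minimum(np.abs(w), tau)

w = np.array([0.05, 0.5, 2.0, -3.0])
# Small weights keep their l1 value; large ones are capped at tau,
# reducing the bias the plain l1 penalty imposes on strong signals.
print(truncated_l1(w, tau=1.0))
```

This capped behavior is what lets the penalty approximate the ℓ0-norm's selection behavior while remaining tractable.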

  13. Time dependence linear transport III convergence of the discrete ordinate method

    International Nuclear Information System (INIS)

    Wilson, D.G.

    1983-01-01

In this paper the uniform pointwise convergence of the discrete ordinate method for weak and strong solutions of the time dependent, linear transport equation posed in a multidimensional, rectangular parallelepiped with partially reflecting walls is established. The first result is that a sequence of discrete ordinate solutions converges uniformly on the quadrature points to a solution of the continuous problem provided that the corresponding sequence of truncation errors for the solution of the continuous problem converges to zero in the same manner. The second result is that continuity of the solution with respect to the velocity variables guarantees that the truncation errors in the quadrature formula go to zero and hence that the discrete ordinate approximations converge to the solution of the continuous problem as the discrete ordinates become dense. An existence theory for strong solutions of the continuous problem follows as a result.
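The mechanism behind the second result, quadrature truncation errors vanishing as the ordinate set becomes dense, can be illustrated with Gauss-Legendre quadrature applied to a smooth angular profile (the integrand here is an arbitrary stand-in, not from the paper):

```python
import numpy as np

def quad_error(f, exact, n):
    """Absolute truncation error of n-point Gauss-Legendre quadrature on [-1, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return abs(weights @ f(nodes) - exact)

f = lambda mu: np.exp(mu)   # smooth in the angular variable mu
exact = np.e - 1.0 / np.e   # exact integral of exp over [-1, 1]

# The truncation error falls off rapidly as the ordinates become dense
errors = [quad_error(f, exact, n) for n in (2, 4, 8)]
print(errors)
```

For a merely continuous integrand the decay is slower, but the error still tends to zero, which is exactly the continuity hypothesis the paper's second result relies on.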

  14. Identifying afterloading PDR and HDR brachytherapy errors using real-time fiber-coupled Al2O3:C dosimetry and a novel statistical error decision criterion

    DEFF Research Database (Denmark)

    Kertzscher, Gustavo; Andersen, Claus Erik; Siebert, Frank-André

    2011-01-01

    treatment errors, including interchanged pairs of afterloader guide tubes and 2–20mm source displacements, were monitored using a real-time fiber-coupled carbon doped aluminum oxide (Al2O3:C) crystal dosimeter that was positioned in the reconstructed tumor region. The error detection capacity was evaluated...

  15. Bounded real and positive real balanced truncation using Σ-normalised coprime factors

    NARCIS (Netherlands)

    Trentelman, H.L.

    2009-01-01

    In this article, we will extend the method of balanced truncation using normalised right coprime factors of the system transfer matrix to balanced truncation with preservation of half line dissipativity. Special cases are preservation of positive realness and bounded realness. We consider a half

  16. No chiral truncation of quantum log gravity?

    Science.gov (United States)

    Andrade, Tomás; Marolf, Donald

    2010-03-01

    At the classical level, chiral gravity may be constructed as a consistent truncation of a larger theory called log gravity by requiring that left-moving charges vanish. In turn, log gravity is the limit of topologically massive gravity (TMG) at a special value of the coupling (the chiral point). We study the situation at the level of linearized quantum fields, focussing on a unitary quantization. While the TMG Hilbert space is continuous at the chiral point, the left-moving Virasoro generators become ill-defined and cannot be used to define a chiral truncation. In a sense, the left-moving asymptotic symmetries are spontaneously broken at the chiral point. In contrast, in a non-unitary quantization of TMG, both the Hilbert space and charges are continuous at the chiral point and define a unitary theory of chiral gravity at the linearized level.

  17. Impact of habitat-specific GPS positional error on detection of movement scales by first-passage time analysis.

    Directory of Open Access Journals (Sweden)

    David M Williams

    Full Text Available Advances in animal tracking technologies have reduced but not eliminated positional error. While aware of such inherent error, scientists often proceed with analyses that assume exact locations. The results of such analyses then represent one realization in a distribution of possible outcomes. Evaluating results within the context of that distribution can strengthen or weaken our confidence in conclusions drawn from the analysis in question. We evaluated the habitat-specific positional error of stationary GPS collars placed under a range of vegetation conditions that produced a gradient of canopy cover. We explored how variation of positional error in different vegetation cover types affects a researcher's ability to discern scales of movement in analyses of first-passage time for white-tailed deer (Odocoileus virginianus). We placed 11 GPS collars in 4 different vegetative canopy cover types classified as the proportion of cover above the collar (0-25%, 26-50%, 51-75%, and 76-100%). We simulated the effect of positional error on individual movement paths using cover-specific error distributions at each location. The different cover classes did not introduce any directional bias in positional observations (1 m ≤ mean ≤ 6.51 m, 0.24 ≤ p ≤ 0.47), but the standard deviation of positional error of fixes increased significantly with increasing canopy cover class for the 0-25%, 26-50%, and 51-75% classes (SD = 2.18 m, 3.07 m, and 4.61 m, respectively) and then leveled off in the 76-100% cover class (SD = 4.43 m). We then added cover-specific positional errors to individual deer movement paths and conducted first-passage time analyses on the noisy and original paths. First-passage time analyses were robust to habitat-specific error in a forest-agriculture landscape. For deer in a fragmented forest-agriculture environment, and species that move across similar geographic extents, we suggest that first-passage time analysis is robust with regard to
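    The path-noising step described in this abstract can be sketched as follows. This is a minimal illustration, not the study's code: the coordinates and per-fix cover assignments are invented, while the cover-class standard deviations are the values reported above.

```python
import numpy as np

# Sketch of adding cover-specific positional error to a movement path.
# Path coordinates and cover classes per fix are hypothetical; the SDs
# are the habitat-specific values reported in the abstract.
rng = np.random.default_rng(3)
cover_sd = {"0-25%": 2.18, "26-50%": 3.07, "51-75%": 4.61, "76-100%": 4.43}

path_xy = np.array([[0.0, 0.0], [50.0, 10.0], [90.0, 40.0], [120.0, 80.0]])
covers = ["0-25%", "51-75%", "76-100%", "26-50%"]  # cover class at each fix

# Perturb each fix with zero-mean Gaussian noise scaled by its class SD.
sd = np.array([cover_sd[c] for c in covers])
noisy_xy = path_xy + rng.normal(0.0, sd[:, None], path_xy.shape)
```

    The first-passage time analysis would then be run on both `path_xy` and `noisy_xy` and the detected movement scales compared.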

  18. Phase correction and error estimation in InSAR time series analysis

    Science.gov (United States)

    Zhang, Y.; Fattahi, H.; Amelung, F.

    2017-12-01

    During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategy of SAR satellites, such as large spatial and temporal baselines with non-regular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least square inversion of an over-determined system. Such robust inversion allows us to focus more on the understanding of different components in InSAR time-series and their uncertainties. We present an open-source python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time-series, geometrical and atmospheric correction of InSAR data and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-Skymed and TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with application on the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our result shows precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b), with a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL); and a post-eruptive subsidence on the same
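    The weighted least-squares inversion of an interferogram network mentioned above can be illustrated with a toy example. This is not PySAR code; the dates, pair list, synthetic phases, and weights are all invented for illustration.

```python
import numpy as np

# Toy sketch: recover a displacement time series from a fully connected
# network of interferograms by (weighted) least squares. Unknowns are the
# phases at dates 1..N relative to the reference date 0.
dates = [0, 1, 2, 3]
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # small-baseline network

true_phase = np.array([0.0, 1.0, 2.5, 3.0])  # synthetic displacement signal
rng = np.random.default_rng(1)
obs = np.array([true_phase[j] - true_phase[i] for i, j in pairs])
obs += rng.normal(0, 0.01, len(pairs))       # interferogram noise

# Design matrix A: each row maps one interferogram to its two dates.
A = np.zeros((len(pairs), len(dates) - 1))
for r, (i, j) in enumerate(pairs):
    if j > 0: A[r, j - 1] = 1.0
    if i > 0: A[r, i - 1] = -1.0

W = np.diag(np.full(len(pairs), 1.0))        # e.g. coherence-based weights
ts = np.linalg.lstsq(np.sqrt(W) @ A, np.sqrt(W) @ obs, rcond=None)[0]
print(ts)  # ≈ [1.0, 2.5, 3.0]
```

    Because the network is over-determined (6 observations for 3 unknowns), the inversion averages down noise and leaves residuals that can feed the uncertainty quantification described above.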

  19. Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography

    Science.gov (United States)

    Borg, Leise; Jørgensen, Jakob S.; Frikel, Jürgen; Sporring, Jon

    2017-12-01

    Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars, drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts, a mathematical model of variable-truncation data as a function of metal bar radius and distance to sample is derived and verified numerically and with experimental data. The model accurately describes the arising variable-truncation artifacts across simulated variations of the experimental setup. Three variable-truncation artifact-reduction methods are proposed, all aimed at addressing sinogram discontinuities that are shown to be the source of the streaks. The ‘reduction to limited angle’ (RLA) method simply keeps only non-truncated projections; the ‘detector-directed smoothing’ (DDS) method smooths the discontinuities; while the ‘reflexive boundary condition’ (RBC) method enforces a zero derivative at the discontinuities. Experimental results using both simulated and real data show that the proposed methods effectively reduce variable-truncation artifacts. The RBC method is found to provide the best artifact reduction and preservation of image features using both visual and quantitative assessment. The analysis and artifact-reduction methods are designed in the context of FBP reconstruction, motivated by computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray tomography experiments.

  20. Amplitude reconstruction from complete photoproduction experiments and truncated partial-wave expansions

    International Nuclear Information System (INIS)

    Workman, R. L.; Tiator, L.; Wunderlich, Y.; Doring, M.; Haberzettl, H.

    2017-01-01

    Here, we compare the methods of amplitude reconstruction, for a complete experiment and a truncated partial-wave analysis, applied to the photoproduction of pseudoscalar mesons. The approach is pedagogical, showing in detail how the amplitude reconstruction (observables measured at a single energy and angle) is related to a truncated partial-wave analysis (observables measured at a single energy and a number of angles).

  1. Structure and dating errors in the geologic time scale and periodicity in mass extinctions

    Science.gov (United States)

    Stothers, Richard B.

    1989-01-01

    Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently-dated flood-basalt volcanic horizons tends to favor the Odin time scale.

  2. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples, did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Modified Truncated Multiplicity Analysis to Improve Verification of Uranium Fuel Cycle Materials

    International Nuclear Information System (INIS)

    LaFleur, A.; Miller, K.; Swinhoe, M.; Belian, A.; Croft, S.

    2015-01-01

    Accurate verification of 235U enrichment and mass in UF6 storage cylinders and the UO2F2 holdup contained in the process equipment is needed to improve international safeguards and nuclear material accountancy at uranium enrichment plants. Small UF6 cylinders (1.5'' and 5'' diameter) are used to store the full range of enrichments from depleted to highly-enriched UF6. For independent verification of these materials, it is essential that the 235U mass and enrichment measurements do not rely on facility operator declarations. Furthermore, in order to be deployed by IAEA inspectors to detect undeclared activities (e.g., during complementary access), it is also imperative that the measurement technique is quick, portable, and sensitive to a broad range of 235U masses. Truncated multiplicity analysis is a technique that reduces the variance in the measured count rates by only considering moments 1, 2, and 3 of the multiplicity distribution. This is especially important for reducing the uncertainty in the measured doubles and triples rates in environments with a high cosmic ray background relative to the uranium signal strength. However, we believe that the existing truncated multiplicity analysis throws away too much useful data by truncating the distribution after the third moment. This paper describes a modified truncated multiplicity analysis method that determines the optimal moment to truncate the multiplicity distribution based on the measured data. Experimental measurements of small UF6 cylinders and UO2F2 working reference materials were performed at Los Alamos National Laboratory (LANL). The data were analyzed using traditional and modified truncated multiplicity analysis to determine the optimal moment to truncate the multiplicity distribution to minimize the uncertainty in the measured count rates. The results from this analysis directly support nuclear safeguards at enrichment plants and provide a more accurate verification method for UF6

  4. Modeling the Effect of APC Truncation on Destruction Complex Function in Colorectal Cancer Cells

    Science.gov (United States)

    Barua, Dipak; Hlavacek, William S.

    2013-01-01

    In colorectal cancer cells, APC, a tumor suppressor protein, is commonly expressed in truncated form. Truncation of APC is believed to disrupt degradation of β-catenin, which is regulated by a multiprotein complex called the destruction complex. The destruction complex comprises APC, Axin, β-catenin, serine/threonine kinases, and other proteins. The kinases recruited by Axin mediate phosphorylation of β-catenin, which initiates its ubiquitination and proteosomal degradation. The mechanism of regulation of β-catenin degradation by the destruction complex and the role of truncation of APC in colorectal cancer are not entirely understood. Through formulation and analysis of a rule-based computational model, we investigated the regulation of β-catenin phosphorylation and degradation by APC and the effect of APC truncation on function of the destruction complex. The model integrates available mechanistic knowledge about site-specific interactions and phosphorylation of destruction complex components and is consistent with an array of published data. We find that the phosphorylated truncated form of APC can outcompete Axin for binding to β-catenin, provided that Axin is limiting, and thereby sequester β-catenin away from Axin and the Axin-recruited kinases. Full-length APC also competes with Axin for binding to β-catenin; however, full-length APC is able, through its SAMP repeats, which bind Axin and which are missing in truncated oncogenic forms of APC, to bring β-catenin into indirect association with Axin and the Axin-recruited kinases. Because our model indicates that the positive effects of truncated APC on β-catenin levels depend on phosphorylation of APC at the first 20-amino acid repeat, and because this phosphorylation is mediated by an Axin-recruited kinase, we suggest that this kinase is a potential target for therapeutic intervention in colorectal cancer. Specific inhibition of this kinase is predicted to limit binding of β-catenin to truncated

  5. Weighted-noise threshold based channel estimation for OFDM ...

    Indian Academy of Sciences (India)

    Existing optimal time-domain thresholds exhibit suboptimal behavior for completely unavailable KCS ... Compared with no truncation case, truncation improved the MSE ... channel estimation errors has been studied.

  6. A Lynden-Bell integral estimator for the tail index of right-truncated ...

    African Journals Online (AJOL)

    By means of a Lynden-Bell integral with deterministic threshold, Worms and Worms [A Lynden-Bell integral estimator for extremes of randomly truncated data. Statist. Probab. Lett. 2016; 109: 106-117] recently introduced an asymptotically normal estimator of the tail index for randomly right-truncated Pareto-type data.

  7. Truncated States Obtained by Iteration

    International Nuclear Information System (INIS)

    Cardoso, W. B.; Almeida, N. G. de

    2008-01-01

    We introduce the concept of truncated states obtained via iterative processes (TSI) and study its statistical features, making an analogy with dynamical systems theory (DST). As a specific example, we have studied TSI for the doubling and the logistic functions, which are standard functions in studying chaos. TSI for both the doubling and logistic functions exhibit certain similar patterns when their statistical features are compared from the point of view of DST

  8. Eliminating the domain error in local explicitly correlated second-order Møller-Plesset perturbation theory.

    Science.gov (United States)

    Werner, Hans-Joachim

    2008-09-14

    A new explicitly correlated local MP2-F12 method is proposed in which the error caused by truncating the virtual orbital space to pair-specific local domains is almost entirely removed. This is achieved by a simple modification of the ansatz for the explicitly correlated wave function, which makes it possible that the explicitly correlated terms correct both for the basis set incompleteness error as well as for the domain error in the LMP2. Benchmark calculations are presented for 21 molecules and 16 chemical reactions. The results demonstrate that the local approximations have hardly any effect on the accuracy of the computed correlation energies and reaction energies, and the LMP2-F12 reaction energies agree within 0.1-0.2 kcal/mol with estimated MP2 basis set limits.

  9. Family Therapy for the "Truncated" Nuclear Family.

    Science.gov (United States)

    Zuk, Gerald H.

    1980-01-01

    The truncated nuclear family consists of a two-generation group in which conflict has produced a polarization of values. The single-parent family is at special risk. Go-between process enables the therapist to depolarize sharply conflicted values and reduce pathogenic relating. (Author)

  10. Molecular dynamics simulations of lipid bilayers : major artifacts due to truncating electrostatic interactions

    NARCIS (Netherlands)

    Patra, M.; Karttunen, M.E.J.; Hyvönen, M.T.; Falck, E.; Lindqvist, P.; Vattulainen, I.

    2003-01-01

    We study the influence of truncating the electrostatic interactions in a fully hydrated pure dipalmitoylphosphatidylcholine (DPPC) bilayer through 20 ns molecular dynamics simulations. The computations in which the electrostatic interactions were truncated are compared to similar simulations using

  11. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  12. On the viability of the truncated Israel–Stewart theory in cosmology

    International Nuclear Information System (INIS)

    Shogin, Dmitry; Amundsen, Per Amund; Hervik, Sigbjørn

    2015-01-01

    We apply the causal Israel–Stewart theory of irreversible thermodynamics to model the matter content of the Universe as a dissipative fluid with bulk and shear viscosity. Along with the full transport equations we consider their widely used truncated version. By implementing a dynamical systems approach to Bianchi type IV and V cosmological models with and without cosmological constant, we determine the future asymptotic states of such Universes and show that the truncated Israel–Stewart theory leads to solutions essentially different from the full theory. The solutions of the truncated theory may also manifest unphysical properties. Finally, we find that in the full theory shear viscosity can give a substantial rise to dissipative fluxes, driving the fluid extremely far from equilibrium, where the linear Israel–Stewart theory ceases to be valid. (paper)

  13. Some effects of random dose measurement errors on analysis of atomic bomb survivor data

    International Nuclear Information System (INIS)

    Gilbert, E.S.

    1985-01-01

    The effects of random dose measurement errors on analyses of atomic bomb survivor data are described and quantified for several procedures. It is found that the ways in which measurement error is most likely to mislead are through downward bias in the estimated regression coefficients and through distortion of the shape of the dose-response curve. The magnitude of the bias with simple linear regression is evaluated for several dose treatments including the use of grouped and ungrouped data, analyses with and without truncation at 600 rad, and analyses which exclude doses exceeding 200 rad. Limited calculations have also been made for maximum likelihood estimation based on Poisson regression. 16 refs., 6 tabs
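    The downward bias (attenuation) of regression coefficients described above can be reproduced in a toy simulation. The dose distribution, error magnitude, and linear model below are hypothetical and are not the survivor data or the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: classical multiplicative dose-measurement
# error attenuates the fitted slope in simple linear regression.
n = 100_000
true_dose = rng.lognormal(mean=3.0, sigma=1.0, size=n)     # true dose
response = 1.0 + 0.02 * true_dose + rng.normal(0, 1.0, n)  # linear dose-response
measured = true_dose * rng.lognormal(0.0, 0.5, n)          # noisy measured dose

slope_true = np.polyfit(true_dose, response, 1)[0]
slope_meas = np.polyfit(measured, response, 1)[0]
print(slope_true, slope_meas)  # the slope fitted to noisy doses shrinks toward zero
```

    The size of the shrinkage depends on the error variance relative to the spread of true doses, which is why the paper evaluates the bias separately for grouped data, truncated doses, and excluded high-dose subsets.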

  14. The investigation of the truncated mbtA gene within the mycobactin cluster of Mycobacterium avium subspecies paratuberculosis as a novel diagnostic marker for real-time PCR.

    Science.gov (United States)

    de Kruijf, Marcel; Coffey, Aidan; O'Mahony, Jim

    2017-05-01

    The inability of Mycobacterium avium subspecies paratuberculosis (MAP) to produce endogenous mycobactin in-vitro is most likely due to the presence of a truncated mbtA gene within the mycobactin cluster of MAP. The main goal of this study was to investigate this unique mbtA truncation as a potential novel PCR diagnostic marker for MAP. Novel primers were designed that were located within the truncated region and the contiguous MAP2179 gene. Primers were evaluated against non-MAP isolates and no amplicons were generated. The detection limit of this mbtA-MAP2179 target was evaluated using a range of MAP DNA concentrations, MAP inoculated faecal material and 20 MAP isolates. The performance of mbtA-MAP2179 was compared to the established f57 target. The detection limits recorded for MAP K-10 DNA and from MAP K-10 inoculated faecal samples were 0.34 pg and 10⁴ CFU/g, respectively, for both f57 and mbtA-MAP2179. A detection limit of 10³ CFU/g was recorded for both targets, but not achieved consistently. The detection limit of MAP from inoculated faecal material was successful at 10³ CFU/g for mbtA-MAP2179 when FAM probe real-time PCR was used. A MAP cell concentration of 10² CFU/g was detected successfully, but again not consistently achieved. All 20 mycobacterial isolates were successfully identified as MAP by f57 and mbtA-MAP2179. Interestingly, the mbtA-MAP2179 real-time PCR assay resulted in the formation of a unique melting curve profile that contained two melting curve peaks rather than one single peak. This melting curve phenomenon was attributed towards the asymmetrical GC% distribution within the mbtA-MAP2179 amplicon. This study investigated the implementation of the mbtA-MAP2179 target as a novel diagnostic marker and the detection limits obtained with mbtA-MAP2179 were comparable to the established f57 target, making the mbtA-MAP2179 an adequate confirmatory target.
Moreover, the mbtA-MAP2179 target could be implemented in multiplex real-time PCR assays and

  15. Riesz Representation Theorem on Bilinear Spaces of Truncated Laurent Series

    Directory of Open Access Journals (Sweden)

    Sabarinsyah

    2017-06-01

    Full Text Available In this study a generalization of the Riesz representation theorem on non-degenerate bilinear spaces, particularly on spaces of truncated Laurent series, was developed. It was shown that any linear functional on a non-degenerate bilinear space is representable by a unique element of the space if and only if its kernel is closed. Moreover an explicit equivalent condition can be identified for the closedness property of the kernel when the bilinear space is a space of truncated Laurent series.

  16. Pressure-sensitive paint on a truncated cone in hypersonic flow at incidences

    International Nuclear Information System (INIS)

    Yang, L.; Erdem, E.; Zare-Behtash, H.; Kontis, K.; Saravanan, S.

    2012-01-01

    Highlights: ► Global pressure map over the truncated cone is obtained at various incidence angles in Mach 5 flow. ► Successful application of AA-PSP in hypersonic flow expands operation area of this technique. ► AA-PSP reveals complex three-dimensional pattern which is difficult for transducer to obtain. ► Quantitative data provides strong correlation with colour Schlieren and oil flow results. ► High spatial resolution pressure mappings identify small scale vortices and flow separation. - Abstract: The flow over a truncated cone is a classical and fundamental problem for aerodynamic research due to its three-dimensional and complicated characteristics. The flow is made more complex when examining high angles of incidence. Recently these types of flows have drawn more attention for the purposes of drag reduction in supersonic/hypersonic flows. In the present study the flow over a truncated cone at various incidences was experimentally investigated in a Mach 5 flow with a unit Reynolds number of 13.5 × 10⁶ m⁻¹. The cone semi-apex angle is 15° and the truncation ratio (truncated length/cone length) is 0.5. The incidence of the model varied from −12° to 12° with 3° intervals relative to the freestream direction. The external flow around the truncated cone was visualised by colour Schlieren photography, while the surface flow pattern was revealed using the oil flow method. The surface pressure distribution was measured using the anodized aluminium pressure-sensitive paint (AA-PSP) technique. Both top and side views of the pressure distribution on the model surface were acquired at various incidences. AA-PSP showed high pressure sensitivity and captured the complicated flow structures which correlated well with the colour Schlieren and oil flow visualisation results.

  17. Relationship between Brazilian airline pilot errors and time of day

    Directory of Open Access Journals (Sweden)

    M.T. de Mello

    2008-12-01

    Full Text Available Flight safety is one of the most important and frequently discussed issues in aviation. Recent accident inquiries have raised questions as to how the work of flight crews is organized and the extent to which these conditions may have been contributing factors to accidents. Fatigue is based on physiologic limitations, which are reflected in performance deficits. The purpose of the present study was to provide an analysis of the periods of the day in which pilots working for a commercial airline presented major errors. Errors made by 515 captains and 472 copilots were analyzed using data from flight operation quality assurance systems. To analyze the times of day (shifts) during which incidents occurred, we divided the light-dark cycle (24:00) in four periods: morning, afternoon, night, and early morning. The differences of risk during the day were reported as the ratio of morning to afternoon, morning to night and morning to early morning error rates. For the purposes of this research, level 3 events alone were taken into account, since these were the most serious in which company operational limits were exceeded or when established procedures were not followed. According to airline flight schedules, 35% of flights take place in the morning period, 32% in the afternoon, 26% at night, and 7% in the early morning. Data showed that the risk of errors increased by almost 50% in the early morning relative to the morning period (ratio of 1:1.46). For the period of the afternoon, the ratio was 1:1.04 and for the night a ratio of 1:1.05 was found. These results showed that the period of the early morning represented a greater risk of attention problems and fatigue.

  18. Relationship between Brazilian airline pilot errors and time of day.

    Science.gov (United States)

    de Mello, M T; Esteves, A M; Pires, M L N; Santos, D C; Bittencourt, L R A; Silva, R S; Tufik, S

    2008-12-01

    Flight safety is one of the most important and frequently discussed issues in aviation. Recent accident inquiries have raised questions as to how the work of flight crews is organized and the extent to which these conditions may have been contributing factors to accidents. Fatigue is based on physiologic limitations, which are reflected in performance deficits. The purpose of the present study was to provide an analysis of the periods of the day in which pilots working for a commercial airline presented major errors. Errors made by 515 captains and 472 co-pilots were analyzed using data from flight operation quality assurance systems. To analyze the times of day (shifts) during which incidents occurred, we divided the light-dark cycle (24:00) in four periods: morning, afternoon, night, and early morning. The differences of risk during the day were reported as the ratio of morning to afternoon, morning to night and morning to early morning error rates. For the purposes of this research, level 3 events alone were taken into account, since these were the most serious in which company operational limits were exceeded or when established procedures were not followed. According to airline flight schedules, 35% of flights take place in the morning period, 32% in the afternoon, 26% at night, and 7% in the early morning. Data showed that the risk of errors increased by almost 50% in the early morning relative to the morning period (ratio of 1:1.46). For the period of the afternoon, the ratio was 1:1.04 and for the night a ratio of 1:1.05 was found. These results showed that the period of the early morning represented a greater risk of attention problems and fatigue.

  19. On truncations of the exact renormalization group

    CERN Document Server

    Morris, T R

    1994-01-01

    We investigate the Exact Renormalization Group (ERG) description of (Z₂-invariant) one-component scalar field theory, in the approximation in which all momentum dependence is discarded in the effective vertices. In this context we show how one can perform a systematic search for non-perturbative continuum limits without making any assumption about the form of the lagrangian. Concentrating on the non-perturbative three dimensional Wilson fixed point, we then show that the sequence of truncations n = 2, 3, ..., obtained by expanding about the field φ = 0 and discarding all powers φ^{2n+2} and higher, yields solutions that at first converge to the answer obtained without truncation, but then cease to further converge beyond a certain point. No completely reliable method exists to reject the many spurious solutions that are also found. These properties are explained in terms of the analytic behaviour of the untruncated solutions -- which we describe in some detail.

  20. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
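    A minimal sketch of the measure, assuming the bounded relative absolute error is defined as |e|/(|e| + |e*|) with e* the benchmark's error, then averaged and unscaled; the forecast and benchmark series below are invented:

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (sketch).
    Each relative error is bounded to [0, 1], averaged, then unscaled.
    UMBRAE < 1 suggests the forecast beats the benchmark."""
    e = np.abs(np.asarray(actual) - np.asarray(forecast))
    e_star = np.abs(np.asarray(actual) - np.asarray(benchmark))
    brae = e / (e + e_star)        # bounded relative absolute error
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)   # unscale back to a relative-error scale

actual   = [10.0, 12.0, 13.0, 15.0]
forecast = [10.5, 12.2, 12.8, 15.1]  # close to the actual values
naive    = [ 9.0, 10.0, 12.0, 13.0]  # e.g. a random-walk benchmark
print(umbrae(actual, forecast, naive))  # < 1: better than the benchmark
```

    Bounding each term to [0, 1] before averaging is what gives the measure its resistance to outliers and scale independence, the two issues the abstract highlights in existing measures.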

  1. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
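The two methods can be illustrated with a toy linear model. The sensitivity matrix and run counts below are hypothetical; the sketch only demonstrates the point made in the abstract, that in the linear regime both approaches estimate the same systematic error matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: the observable in bin m shifts by sensitivity[m, j]
# per 1-sigma change of systematic parameter j (hypothetical numbers).
sensitivity = np.array([[0.5, -0.2, 0.1],
                        [0.1,  0.4, -0.3]])   # 2 data bins, 3 systematics

# Unisim: one MC run per parameter, each varied by +1 sigma.
unisim_shifts = sensitivity                    # shift of each bin per run
cov_unisim = unisim_shifts @ unisim_shifts.T   # error matrix estimate

# Multisim: every run varies all parameters, each drawn from N(0, 1).
n_runs = 20000
params = rng.standard_normal((n_runs, 3))
shifts = params @ sensitivity.T                # observable shift per run
cov_multisim = np.cov(shifts, rowvar=False)

# In the linear regime both estimate the same systematic error matrix.
print(np.round(cov_unisim, 3))
print(np.round(cov_multisim, 3))
```

The note's statistical comparison concerns what happens when each MC run itself carries finite statistics; this sketch omits that layer and shows only the shared linear-regime target of both estimators.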

  2. Resonant Excitation of a Truncated Metamaterial Cylindrical Shell by a Thin Wire Monopole

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Erentok, Aycan; Breinbjerg, Olav

    2009-01-01

    A truncated metamaterial cylindrical shell excited by a thin wire monopole is investigated using the integral equation technique as well as the finite element method. Simulations reveal a strong field singularity at the edge of the truncated cylindrical shell, which critically affects the matching...

  3. Squeezing in multi-mode nonlinear optical state truncation

    International Nuclear Information System (INIS)

    Said, R.S.; Wahiddin, M.R.B.; Umarov, B.A.

    2007-01-01

In this Letter, we show that multi-mode qubit states produced via nonlinear optical state truncation driven by classical external pumpings exhibit squeezing. We restrict our discussion to the two- and three-mode cases

  4. Effects of dating errors on nonparametric trend analyses of speleothem time series

    Directory of Open Access Journals (Sweden)

    M. Mudelsee

    2012-10-01

A fundamental problem in paleoclimatology is to take fully into account the various error sources when examining proxy records with quantitative methods of statistical time series analysis. Records from dated climate archives such as speleothems add extra uncertainty from the age determination to the other sources, which consist of measurement and proxy errors. This paper examines three stalagmite time series of oxygen isotopic composition (δ18O) from two caves in western Germany: the series AH-1 from the Atta Cave and the series Bu1 and Bu4 from the Bunker Cave. These records carry regional information about past changes in winter precipitation and temperature. U/Th and radiocarbon dating reveals that they cover the later part of the Holocene, the past 8.6 thousand years (ka). We analyse centennial- to millennial-scale climate trends by means of nonparametric Gasser–Müller kernel regression. Error bands around fitted trend curves are determined by combining (1) block bootstrap resampling, to preserve the noise properties (shape, autocorrelation) of the δ18O residuals, and (2) timescale simulations (models StalAge and iscam). The timescale error influences on centennial- to millennial-scale trend estimation are not excessively large. We find a "mid-Holocene climate double-swing", from warm to cold to warm winter conditions (6.5 ka to 6.0 ka to 5.1 ka), with warm–cold amplitudes of around 0.5‰ δ18O; this finding is documented by all three records with high confidence. We also quantify the Medieval Warm Period (MWP), the Little Ice Age (LIA) and the current warmth. Our analyses cannot unequivocally support the conclusion that current regional winter climate is warmer than that during the MWP.
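The trend-plus-error-band construction described above can be sketched generically. This is not the paper's pipeline: a Nadaraya–Watson smoother stands in for Gasser–Müller kernel regression, the series is synthetic, and the block length, bandwidth, and bootstrap count are arbitrary choices; only the moving-block idea (resampling residuals in blocks to preserve autocorrelation) is the point:

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel_trend(t, y, t_grid, bandwidth):
    """Gaussian-kernel Nadaraya-Watson trend (stand-in for Gasser-Mueller)."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def block_bootstrap_band(t, y, t_grid, bandwidth, block=20, n_boot=200):
    """Percentile error band from moving-block resampling of trend residuals."""
    trend = kernel_trend(t, y, t, bandwidth)
    resid = y - trend
    n = len(y)
    boots = np.empty((n_boot, len(t_grid)))
    for b in range(n_boot):
        starts = rng.integers(0, n - block + 1, size=int(np.ceil(n / block)))
        idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
        y_star = trend + resid[idx]        # blocks preserve autocorrelation
        boots[b] = kernel_trend(t, y_star, t_grid, bandwidth)
    lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
    return kernel_trend(t, y, t_grid, bandwidth), lo, hi

t = np.linspace(0.0, 8.6, 300)             # synthetic "ka" time axis
y = 0.5 * np.sin(2 * np.pi * t / 6) + 0.1 * rng.standard_normal(300)
fit, lo, hi = block_bootstrap_band(t, y, t, bandwidth=0.4)
print(np.all(lo <= hi))  # True
```

The paper additionally perturbs the time axis itself (timescale simulations), which would add a second resampling loop over simulated age models around this one.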

  5. truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models

    Directory of Open Access Journals (Sweden)

    Maria Karlsson

    2014-05-01

Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.

  6. Motion of isolated open vortex filaments evolving under the truncated local induction approximation

    Science.gov (United States)

    Van Gorder, Robert A.

    2017-11-01

    The study of nonlinear waves along open vortex filaments continues to be an area of active research. While the local induction approximation (LIA) is attractive due to locality compared with the non-local Biot-Savart formulation, it has been argued that LIA appears too simple to model some relevant features of Kelvin wave dynamics, such as Kelvin wave energy transfer. Such transfer of energy is not feasible under the LIA due to integrability, so in order to obtain a non-integrable model, a truncated LIA, which breaks the integrability of the classical LIA, has been proposed as a candidate model with which to study such dynamics. Recently Laurie et al. ["Interaction of Kelvin waves and nonlocality of energy transfer in superfluids," Phys. Rev. B 81, 104526 (2010)] derived truncated LIA systematically from Biot-Savart dynamics. The focus of the present paper is to study the dynamics of a section of common open vortex filaments under the truncated LIA dynamics. We obtain the analog of helical, planar, and more general filaments which rotate without a change in form in the classical LIA, demonstrating that while quantitative differences do exist, qualitatively such solutions still exist under the truncated LIA. Conversely, solitons and breather solutions found under the LIA should not be expected under the truncated LIA, as the existence of such solutions relies on the existence of an infinite number of conservation laws which is violated due to loss of integrability. On the other hand, similarity solutions under the truncated LIA can be quite different to their counterparts found for the classical LIA, as they must obey a t1/3 type scaling rather than the t1/2 type scaling commonly found in the LIA and Biot-Savart dynamics. This change in similarity scaling means that Kelvin waves are radiated at a slower rate from vortex kinks formed after reconnection events. 
The loss of soliton solutions and the difference in similarity scaling indicate that the dynamics emergent under the truncated LIA can differ markedly from those of the classical LIA.

  7. The combination of i-leader truncation and gemcitabine improves oncolytic adenovirus efficacy in an immunocompetent model.

    Science.gov (United States)

    Puig-Saus, C; Laborda, E; Rodríguez-García, A; Cascalló, M; Moreno, R; Alemany, R

    2014-02-01

Adenovirus (Ad) i-leader protein is a small protein of unknown function. The C-terminus truncation of the i-leader protein increases Ad release from infected cells and cytotoxicity. In the current study, we use the i-leader truncation to enhance the potency of an oncolytic Ad. In vitro, an i-leader truncated oncolytic Ad is released faster to the supernatant of infected cells, generates larger plaques, and is more cytotoxic in both human and Syrian hamster cell lines. In mice bearing human tumor xenografts, the i-leader truncation enhances oncolytic efficacy. However, in a Syrian hamster pancreatic tumor model, which is immunocompetent and less permissive to human Ad, antitumor efficacy is only observed when the i-leader truncated oncolytic Ad, but not the non-truncated version, is combined with gemcitabine. This synergistic effect observed in the Syrian hamster model was not seen in vitro or in immunodeficient mice bearing the same pancreatic hamster tumors, suggesting a role of the immune system in this synergism. These results highlight the value of the i-leader C-terminus truncation: it enhances the antitumor potency of an oncolytic Ad and provides synergistic effects with gemcitabine in the presence of an immune competent system.

  8. A Robust Ultra-Low Voltage CPU Utilizing Timing-Error Prevention

    OpenAIRE

    Hiienkari, Markus; Teittinen, Jukka; Koskinen, Lauri; Turnquist, Matthew; Mäkipää, Jani; Rantala, Arto; Sopanen, Matti; Kaltiokallio, Mikko

    2015-01-01

    To minimize energy consumption of a digital circuit, logic can be operated at sub- or near-threshold voltage. Operation at this region is challenging due to device and environment variations, and resulting performance may not be adequate to all applications. This article presents two variants of a 32-bit RISC CPU targeted for near-threshold voltage. Both CPUs are placed on the same die and manufactured in 28 nm CMOS process. They employ timing-error prevention with clock stretching to enable ...

  9. Review of current GPS methodologies for producing accurate time series and their error sources

    Science.gov (United States)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand on detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional scale geodetic phenomena hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step and mainly with three different strategies in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automatize all phases from analysis of GPS time series. 
This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e.g., …).

  10. ANTI-CORRELATED TIME LAGS IN THE Z SOURCE GX 5-1: POSSIBLE EVIDENCE FOR A TRUNCATED ACCRETION DISK

    Energy Technology Data Exchange (ETDEWEB)

    Sriram, K.; Choi, C. S. [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Rao, A. R., E-mail: astrosriram@yahoo.co.in [Tata Institute of Fundamental Research, Mumbai 400005 (India)

    2012-06-01

    We investigate the nature of the inner accretion disk in the neutron star source GX 5-1 by making a detailed study of time lags between X-rays of different energies. Using the cross-correlation analysis, we found anti-correlated hard and soft time lags of the order of a few tens to a few hundred seconds and the corresponding intensity states were mostly the horizontal branch (HB) and upper normal branch. The model independent and dependent spectral analysis showed that during these time lags the structure of the accretion disk significantly varied. Both eastern and western approaches were used to unfold the X-ray continuum and systematic changes were observed in soft and hard spectral components. These changes along with a systematic shift in the frequency of quasi-periodic oscillations (QPOs) made it substantially evident that the geometry of the accretion disk is truncated. Simultaneous energy spectral and power density spectral study shows that the production of the horizontal branch oscillations (HBOs) is closely related to the Comptonizing region rather than the disk component in the accretion disk. We found that as the HBO frequency decreases from the hard apex to upper HB, the disk temperature increases along with an increase in the coronal temperature, which is in sharp contrast with the changes found in black hole binaries where the decrease in the QPO frequency is accompanied by a decrease in the disk temperature and a simultaneous increase in the coronal temperature. We discuss the results in the context of re-condensation of coronal material in the inner region of the disk.

  11. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determine the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
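The truncated exponential law referred to above can be sketched numerically via inverse-CDF sampling. The scale and truncation values below are arbitrary illustrations, not fitted SRCMOD parameters; the sketch just shows the law's defining features, a hard upper bound and a mean pulled below the exponential scale:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_truncated_exponential(scale, upper, size, rng):
    """Inverse-CDF sampling of an exponential truncated at `upper`.

    pdf: f(x) = exp(-x/scale) / (scale * (1 - exp(-upper/scale))),
    for 0 <= x <= upper; the CDF inverts in closed form.
    """
    u = rng.random(size)
    # CDF: F(x) = (1 - exp(-x/scale)) / (1 - exp(-upper/scale))
    return -scale * np.log(1.0 - u * (1.0 - np.exp(-upper / scale)))

# Hypothetical values: a scale linked to average slip, a physical max slip (m).
slip = sample_truncated_exponential(scale=1.2, upper=5.0, size=100000, rng=rng)
print(slip.max() <= 5.0, slip.mean() < 1.2)  # True True
```

The truncation point plays the role of the rupture-specific maximum possible slip, mirroring how maximum magnitudes bound magnitude-frequency laws.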

  12. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran K. S.

    2016-07-13

Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determine the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.

  13. High-yield water-based synthesis of truncated silver nanocubes

    International Nuclear Information System (INIS)

    Chang, Yun-Min; Lu, I-Te; Chen, Chih-Yuan; Hsieh, Yu-Chi; Wu, Pu-Wei

    2014-01-01

Highlights: • Development of a water-based formula to fabricate truncated Ag nanocubes. • The sample exhibits (1 0 0), (1 1 0), and (1 1 1) on the facets, edges, and corners. • The sample shows three characteristic absorption peaks due to plasmon resonance. -- Abstract: A high-yield water-based hydrothermal synthesis was developed using silver nitrate, ammonia, glucose, and cetyltrimethylammonium bromide (CTAB) as precursors to synthesize truncated silver nanocubes with uniform sizes and in large quantities. With a fixed CTAB concentration, truncated silver nanocubes with sizes of 49.3 ± 4.1 nm were produced when the molar ratio of glucose/silver cation was maintained at 0.1. The sample exhibited (1 0 0), (1 1 0), and (1 1 1) planes on the facets, edges, and corners, respectively. In contrast, with a slightly larger glucose/silver cation ratio of 0.35, well-defined nanocubes with sizes of 70.9 ± 3.8 nm were observed with the (1 0 0) plane on six facets. When the ratio was further increased to 1.5, excess reduction of silver cations facilitated the simultaneous formation of nanoparticles with cubic, spherical, and irregular shapes. Consistent results were obtained from transmission electron microscopy, scanning electron microscopy, X-ray diffraction, and UV–visible absorption measurements.

  14. High-yield water-based synthesis of truncated silver nanocubes

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Yun-Min; Lu, I-Te; Chen, Chih-Yuan; Hsieh, Yu-Chi; Wu, Pu-Wei, E-mail: ppwu@mail.nctu.edu.tw

    2014-02-15

Highlights: • Development of a water-based formula to fabricate truncated Ag nanocubes. • The sample exhibits (1 0 0), (1 1 0), and (1 1 1) on the facets, edges, and corners. • The sample shows three characteristic absorption peaks due to plasmon resonance. -- Abstract: A high-yield water-based hydrothermal synthesis was developed using silver nitrate, ammonia, glucose, and cetyltrimethylammonium bromide (CTAB) as precursors to synthesize truncated silver nanocubes with uniform sizes and in large quantities. With a fixed CTAB concentration, truncated silver nanocubes with sizes of 49.3 ± 4.1 nm were produced when the molar ratio of glucose/silver cation was maintained at 0.1. The sample exhibited (1 0 0), (1 1 0), and (1 1 1) planes on the facets, edges, and corners, respectively. In contrast, with a slightly larger glucose/silver cation ratio of 0.35, well-defined nanocubes with sizes of 70.9 ± 3.8 nm were observed with the (1 0 0) plane on six facets. When the ratio was further increased to 1.5, excess reduction of silver cations facilitated the simultaneous formation of nanoparticles with cubic, spherical, and irregular shapes. Consistent results were obtained from transmission electron microscopy, scanning electron microscopy, X-ray diffraction, and UV–visible absorption measurements.

  15. Intersection spaces, spatial homology truncation, and string theory

    CERN Document Server

    Banagl, Markus

    2010-01-01

    Intersection cohomology assigns groups which satisfy a generalized form of Poincaré duality over the rationals to a stratified singular space. The present monograph introduces a method that assigns to certain classes of stratified spaces cell complexes, called intersection spaces, whose ordinary rational homology satisfies generalized Poincaré duality. The cornerstone of the method is a process of spatial homology truncation, whose functoriality properties are analyzed in detail. The material on truncation is autonomous and may be of independent interest to homotopy theorists. The cohomology of intersection spaces is not isomorphic to intersection cohomology and possesses algebraic features such as perversity-internal cup-products and cohomology operations that are not generally available for intersection cohomology. A mirror-symmetric interpretation, as well as applications to string theory concerning massless D-branes arising in type IIB theory during a Calabi-Yau conifold transition, are discussed.

  16. The Apparent Lack of Lorentz Invariance in Zero-Point Fields with Truncated Spectra

    Directory of Open Access Journals (Sweden)

    Daywitt W. C.

    2009-01-01

The integrals that describe the expectation values of the zero-point quantum-field-theoretic vacuum state are semi-infinite, as are the integrals for the stochastic electrodynamic vacuum. The unbounded upper limit to these integrals leads in turn to infinite energy densities and renormalization masses. A number of models have been put forward to truncate the integrals so that these densities and masses are finite. Unfortunately the truncation apparently destroys the Lorentz invariance of the integrals. This note argues that the integrals are naturally truncated by the graininess of the negative-energy Planck vacuum state from which the zero-point vacuum arises, and are thus automatically Lorentz invariant.

  17. Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography

    DEFF Research Database (Denmark)

    Borg, Leise; Jørgensen, Jakob Sauer; Frikel, Jürgen

    2017-01-01

Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars, drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts, a mathematical model of variable-truncation data as a function of metal bar radius and distance ... and artifact-reduction methods are designed in the context of FBP reconstruction, motivated by computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray ...

  18. Propagation of a general-type beam through a truncated fractional Fourier transform optical system.

    Science.gov (United States)

    Zhao, Chengliang; Cai, Yangjian

    2010-03-01

    Paraxial propagation of a general-type beam through a truncated fractional Fourier transform (FRT) optical system is investigated. Analytical formulas for the electric field and effective beam width of a general-type beam in the FRT plane are derived based on the Collins formula. Our formulas can be used to study the propagation of a variety of laser beams--such as Gaussian, cos-Gaussian, cosh-Gaussian, sine-Gaussian, sinh-Gaussian, flat-topped, Hermite-cosh-Gaussian, Hermite-sine-Gaussian, higher-order annular Gaussian, Hermite-sinh-Gaussian and Hermite-cos-Gaussian beams--through a FRT optical system with or without truncation. The propagation properties of a Hermite-cos-Gaussian beam passing through a rectangularly truncated FRT optical system are studied as a numerical example. Our results clearly show that the truncated FRT optical system provides a convenient way for laser beam shaping.
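For reference, the fractional Fourier transform underlying such propagation analyses can be written down explicitly. This is the standard textbook kernel of order p (with α = pπ/2), not a formula quoted from the paper; in the truncated system, the input field is additionally multiplied by a hard-aperture function before the Collins integral is applied:

```latex
% Standard FRT of order p, alpha = p*pi/2 (textbook form, an assumption
% here rather than the paper's own notation):
F_p[f](u) = \int_{-\infty}^{\infty} K_\alpha(u,x)\, f(x)\, \mathrm{d}x,
\qquad
K_\alpha(u,x) = \sqrt{\frac{1 - i\cot\alpha}{2\pi}}\,
\exp\!\left[\, i\,\frac{x^2 + u^2}{2}\,\cot\alpha - i\,u x \csc\alpha \right].
% Rectangular truncation of half-width a replaces f(x) by A(x) f(x),
% with A(x) = 1 for |x| <= a and A(x) = 0 otherwise.
```

At α = π/2 (p = 1) the kernel reduces to the ordinary Fourier transform, which is a useful sanity check on any implementation.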

  19. Acceptance Sampling Plans Based on Truncated Life Tests for Sushila Distribution

    Directory of Open Access Journals (Sweden)

    Amer Ibrahim Al-Omari

    2018-03-01

An acceptance sampling plan problem based on truncated life tests, when the lifetime follows a Sushila distribution, is considered in this paper. For various acceptance numbers, confidence levels and values of the ratio between fixed experiment time and particular mean lifetime, the minimum sample sizes required to ascertain a specified mean life were found. The operating characteristic function values of the suggested sampling plans and the producer's risk are presented. Some tables are provided and the results are illustrated by an example of a real data set.
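The minimum-sample-size search underlying such plans can be sketched generically. The Sushila CDF is not reproduced here, so an exponential lifetime stands in for it, and the choices t/mean = 0.5, acceptance number c = 2, and 95% confidence are illustrative; only the binomial search logic is the standard ingredient:

```python
from math import comb, exp

def min_sample_size(p_fail, c, confidence):
    """Smallest n such that a lot is accepted on <= c failures by the end of
    a truncated life test with probability at most 1 - confidence, when the
    true per-item failure probability by test end is p_fail."""
    target = 1.0 - confidence
    n = c + 1
    while True:
        p_accept = sum(comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)
                       for k in range(c + 1))
        if p_accept <= target:
            return n
        n += 1

# Stand-in lifetime model: exponential with the specified mean, so the
# probability an item fails before the test ends at t = 0.5 * mean is:
p = 1 - exp(-0.5)
print(min_sample_size(p, c=2, confidence=0.95))  # 14
```

With the actual Sushila CDF, only the computation of `p_fail` from the time-to-mean ratio would change; the search over n is identical.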

  20. Renal contrast-enhanced MR angiography: timing errors and accurate depiction of renal artery origins.

    Science.gov (United States)

    Schmidt, Maria A; Morgan, Robert

    2008-10-01

    To investigate bolus timing artifacts that impair depiction of renal arteries at contrast material-enhanced magnetic resonance (MR) angiography and to determine the effect of contrast agent infusion rates on artifact generation. Renal contrast-enhanced MR angiography was simulated for a variety of infusion schemes, assuming both correct and incorrect timing between data acquisition and contrast agent injection. In addition, the ethics committee approved the retrospective evaluation of clinical breath-hold renal contrast-enhanced MR angiographic studies obtained with automated detection of contrast agent arrival. Twenty-two studies were evaluated for their ability to depict the origin of renal arteries in patent vessels and for any signs of timing errors. Simulations showed that a completely artifactual stenosis or an artifactual overestimation of an existing stenosis at the renal artery origin can be caused by timing errors of the order of 5 seconds in examinations performed with contrast agent infusion rates compatible with or higher than those of hand injections. Lower infusion rates make the studies more likely to accurately depict the origin of the renal arteries. In approximately one-third of all clinical examinations, different contrast agent uptake rates were detected on the left and right sides of the body, and thus allowed us to confirm that it is often impossible to optimize depiction of both renal arteries. In three renal arteries, a signal void was found at the origin in a patent vessel, and delayed contrast agent arrival was confirmed. Computer simulations and clinical examinations showed that timing errors impair the accurate depiction of renal artery origins. (c) RSNA, 2008.

  1. Functional analysis of Rift Valley fever virus NSs encoding a partial truncation.

    Science.gov (United States)

    Head, Jennifer A; Kalveram, Birte; Ikegami, Tetsuro

    2012-01-01

Rift Valley fever virus (RVFV) belongs to the genus Phlebovirus of the family Bunyaviridae and causes high rates of abortion and fetal malformation in infected ruminants, as well as neurological disorders, blindness, or lethal hemorrhagic fever in humans. RVFV is classified as a category A priority pathogen and a select agent in the U.S., and currently there are no therapeutics available for RVF patients. The NSs protein, a major virulence factor of RVFV, inhibits host transcription including interferon (IFN)-β mRNA synthesis and promotes degradation of dsRNA-dependent protein kinase (PKR). NSs self-associates at the C-terminal 17 aa., while NSs at aa. 210-230 binds to Sin3A-associated protein (SAP30) to inhibit the activation of the IFN-β promoter. Thus, we hypothesized that NSs function(s) can be abolished by truncation of specific domains, and that co-expression of nonfunctional NSs with intact NSs will result in the attenuation of NSs function by a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6-30, 31-55, 56-80, 81-105, 106-130, 131-155, 156-180, 181-205, 206-230, 231-248 or 249-265 lack the functions of IFN-β mRNA synthesis inhibition and degradation of PKR. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa. 81-105, 106-130, 131-155, 156-180, 181-205, 206-230 or 231-248. Furthermore, none of the truncated NSs exhibited significant dominant-negative functions for NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that none of the truncated NSs, in contrast to intact NSs, interacts with RVFV NSs even in the presence of the intact C-terminus self-association domain. Our results suggest that conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that the co-expression of truncated NSs does not exhibit a dominant-negative phenotype.

  2. Functional analysis of Rift Valley fever virus NSs encoding a partial truncation.

    Directory of Open Access Journals (Sweden)

    Jennifer A Head

Rift Valley fever virus (RVFV) belongs to the genus Phlebovirus of the family Bunyaviridae and causes high rates of abortion and fetal malformation in infected ruminants, as well as neurological disorders, blindness, or lethal hemorrhagic fever in humans. RVFV is classified as a category A priority pathogen and a select agent in the U.S., and currently there are no therapeutics available for RVF patients. The NSs protein, a major virulence factor of RVFV, inhibits host transcription including interferon (IFN)-β mRNA synthesis and promotes degradation of dsRNA-dependent protein kinase (PKR). NSs self-associates at the C-terminal 17 aa., while NSs at aa. 210-230 binds to Sin3A-associated protein (SAP30) to inhibit the activation of the IFN-β promoter. Thus, we hypothesized that NSs function(s) can be abolished by truncation of specific domains, and that co-expression of nonfunctional NSs with intact NSs will result in the attenuation of NSs function by a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6-30, 31-55, 56-80, 81-105, 106-130, 131-155, 156-180, 181-205, 206-230, 231-248 or 249-265 lack the functions of IFN-β mRNA synthesis inhibition and degradation of PKR. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa. 81-105, 106-130, 131-155, 156-180, 181-205, 206-230 or 231-248. Furthermore, none of the truncated NSs exhibited significant dominant-negative functions for NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that none of the truncated NSs, in contrast to intact NSs, interacts with RVFV NSs even in the presence of the intact C-terminus self-association domain. Our results suggest that conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that the co-expression of truncated NSs does not exhibit a dominant-negative phenotype.

  3. Truncated Dual-Cap Nucleation Site Development

    Science.gov (United States)

    Matson, Douglas M.; Sander, Paul J.

    2012-01-01

    During heterogeneous nucleation within a metastable mushy zone, several geometries for nucleation site development must be considered. Traditional spherical dual-cap and crevice models are compared to a truncated dual cap to determine the activation energy and critical cluster growth kinetics in ternary Fe-Cr-Ni steel alloys. The activation energy results indicate that nucleation is more probable at grain boundaries within the solid than at the solid-liquid interface.

  4. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.; Lazarov, R.; Pasciak, J.; Zhou, Z.

    2014-01-01

    © 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; Hq(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  5. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.

    2014-05-30

    © 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; Hq(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  6. Symmetric truncations of the shallow-water equations

    International Nuclear Information System (INIS)

    Rouhi, A.; Abarbanel, H.D.I.

    1993-01-01

    Conservation of potential vorticity in Eulerian fluids reflects particle interchange symmetry in the Lagrangian fluid version of the same theory. The algebra associated with this symmetry in the shallow-water equations is studied here, and we give a method for truncating the degrees of freedom of the theory which preserves a maximal number of invariants associated with this algebra. The finite-dimensional symmetry associated with keeping only N modes of the shallow-water flow is SU(N). In the limit where the number of modes goes to infinity (N→∞) all the conservation laws connected with potential vorticity conservation are recovered. We also present a Hamiltonian which is invariant under this truncated symmetry and which reduces to the familiar shallow-water Hamiltonian when N→∞. All this provides a finite-dimensional framework for numerical work with the shallow-water equations which preserves not only energy and enstrophy but all other known conserved quantities consistent with the finite number of degrees of freedom. The extension of these ideas to other nearly two-dimensional flows is discussed.

  7. Model and Reduction of Inactive Times in a Maintenance Workshop Following a Diagnostic Error

    Directory of Open Access Journals (Sweden)

    T. Beda

    2011-04-01

    The majority of maintenance workshops in manufacturing plants are hierarchical. This arrangement permits a quick response in the event of a breakdown. The maintenance workshop reacts by evaluating the characteristics of the breakdown. Consequently, a diagnostic error at a given level of the decision-making process delays the restoration of the normal operating state. The consequences are not just financial losses, but also a loss of customer satisfaction. The goal of this paper is to model the inactive time of a maintenance workshop when an unpredicted catalectic breakdown has occurred and a diagnostic error has been made at some level of decision-making during the treatment of the breakdown. We show that the resulting expression for the inactive time depends only on the characteristics of the workshop. We then propose a method to reduce the inactive times.

  8. Hamiltonian truncation approach to quenches in the Ising field theory

    Directory of Open Access Journals (Sweden)

    T. Rakovszky

    2016-10-01

    In contrast to lattice systems, where powerful numerical techniques such as matrix product state based methods are available to study non-equilibrium dynamics, the non-equilibrium behaviour of continuum systems is much harder to simulate. We demonstrate here that Hamiltonian truncation methods can be efficiently applied to this problem by studying the quantum quench dynamics of the 1+1 dimensional Ising field theory using a truncated free fermionic space approach. After benchmarking the method with integrable quenches corresponding to changing the mass in a free Majorana fermion field theory, we study the effect of an integrability-breaking perturbation by the longitudinal magnetic field. In both the ferromagnetic and paramagnetic phases of the model we find persistent oscillations with frequencies set by the low-lying particle excitations, not only for small but even for moderate-size quenches. In the ferromagnetic phase these particles are the various non-perturbative confined bound states of the domain wall excitations, while in the paramagnetic phase the single magnon excitation governs the dynamics, allowing us to capture the time evolution of the magnetisation using a combination of known results from perturbation theory and form factor based methods. We point out that the dominance of low-lying excitations allows the numerical or experimental determination of the mass spectra through the study of quench dynamics.

  9. Autocorrelation as a source of truncated Lévy flights in foreign exchange rates

    Science.gov (United States)

    Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio

    2003-05-01

    We suggest that the ultraslow speed of convergence associated with truncated Lévy flights (Phys. Rev. Lett. 73 (1994) 2946) may well be explained by autocorrelations in data. We show how a particular type of autocorrelation generates power laws consistent with a truncated Lévy flight. Stock exchanges have been suggested to be modeled by a truncated Lévy flight (Nature 376 (1995) 46; Physica A 297 (2001) 509; Econom. Bull. 7 (2002) 1). Here foreign exchange rate data are taken instead. Scaling power laws in the “probability of return to the origin” are shown to emerge for most currencies. A novel approach to measure how distant a process is from a Gaussian regime is presented.
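The scaling behaviour described above can be probed numerically. The sketch below is purely illustrative (not the authors' data or method): it draws symmetric power-law steps hard-truncated at a cutoff, then estimates the "probability of return to the origin" of the summed walk at several horizons. The decay of this probability with time is the quantity whose scaling power laws the paper examines.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_levy_steps(n, alpha=1.5, cutoff=10.0):
    # Symmetric steps with P(|x| > s) ~ s**(-alpha), hard-truncated at |x| <= cutoff.
    # Truncation gives the walk a finite variance, so a Gaussian regime is
    # eventually reached -- but only after many steps (ultraslow convergence).
    mag = rng.random(n) ** (-1.0 / alpha)   # Pareto(alpha) magnitudes, minimum 1
    mag = np.minimum(mag, cutoff)
    sign = rng.choice([-1.0, 1.0], size=n)
    return sign * mag

# Estimate the probability density of return to the origin after t steps,
# averaged over many independent walkers.
walkers, bin_w = 20000, 4.0
p0 = {}
for t in (1, 4, 16, 64):
    pos = truncated_levy_steps(walkers * t).reshape(walkers, t).sum(axis=1)
    p0[t] = np.mean(np.abs(pos) < bin_w / 2) / bin_w
print(p0)  # decays with t
```

Plotting log p0 against log t for a wider range of horizons would show the crossover from Lévy-like to Gaussian scaling that truncation induces.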

  10. One-step green synthesis of cuprous oxide crystals with truncated octahedra shapes via a high pressure flux approach

    International Nuclear Information System (INIS)

    Li Benxian; Wang Xiaofeng; Xia Dandan; Chu Qingxin; Liu Xiaoyang; Lu Fengguo; Zhao Xudong

    2011-01-01

    Cuprous oxide (Cu2O) was synthesized via reactions between cupric oxide (CuO) and copper metal (Cu) at a low temperature of 300 °C. This process is green, environmentally friendly and energy efficient. Cu2O crystals with truncated octahedral morphology were grown under high pressure using sodium hydroxide (NaOH) and potassium hydroxide (KOH) in a 1:1 molar ratio as a flux. The growth mechanism of the Cu2O polyhedral microcrystals is proposed and discussed. - Graphical Abstract: Cu2O crystals with a truncated octahedral shape were synthesized in one step and in high yield via a high-pressure flux method for the first time; the method is green and environmentally friendly. The mechanisms of synthesis and crystal growth are discussed in this paper. Highlights: → Cuprous oxide was green-synthesized in one step by a high-pressure flux method. → The approach is based on the reverse dismutation reaction between cupric oxide and copper metal. → The process is green, environmentally friendly and energy efficient. → The synthesized Cu2O crystals had a truncated octahedral morphology.

  11. Error estimates for near-Real-Time Satellite Soil Moisture as Derived from the Land Parameter Retrieval Model

    NARCIS (Netherlands)

    Parinussa, R.M.; Meesters, A.G.C.A.; Liu, Y.Y.; Dorigo, W.; Wagner, W.; de Jeu, R.A.M.

    2011-01-01

    A time-efficient solution to estimate the error of satellite surface soil moisture from the land parameter retrieval model is presented. The errors are estimated using an analytical solution for soil moisture retrievals from this radiative-transfer-based model that derives soil moisture from

  12. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    International Nuclear Information System (INIS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-01-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum (SFQ) logic circuits is quantitatively investigated to establish a timing design guideline. We observe that the fluctuation in the set-up/hold time of SFQ logic gates caused by thermal noise is the most probable origin of logical errors in large-scale SFQ circuits. The timing margin appropriate for stable operation of a large-scale logic circuit is discussed, taking into account the fluctuation of the set-up/hold time and the timing jitter in SFQ circuits. As a case study, the dependence of the error rate of a 1-million-bit SFQ shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
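As a back-of-the-envelope companion to this kind of analysis (an illustrative model, not the authors' statistical method): if the set-up/hold-time fluctuation of each gate is Gaussian with standard deviation σ, a single gate violates a timing margin m with probability erfc(m/(σ√2))/2, and a circuit of N independent gates fails in a cycle with probability 1 − (1 − p)^N. The gate count and σ below are assumed values.

```python
import math

def gate_error_prob(margin_ps, sigma_ps):
    # Probability that a Gaussian set-up/hold-time fluctuation exceeds the margin
    return 0.5 * math.erfc(margin_ps / (sigma_ps * math.sqrt(2.0)))

def circuit_error_rate(margin_ps, sigma_ps, n_gates):
    # A clock cycle fails if any of n_gates independently violates its margin
    p = gate_error_prob(margin_ps, sigma_ps)
    return 1.0 - (1.0 - p) ** n_gates

sigma = 1.0  # assumed 1 ps r.m.s. timing fluctuation (illustrative)
for margin in (2, 4, 6, 8):
    print(margin, circuit_error_rate(margin, sigma, n_gates=1_000_000))
```

The steep dependence of the circuit-level rate on the margin is why a modest increase in timing margin (or a bias-voltage adjustment that shifts it) dominates the error rate of a million-gate circuit.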

  13. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)

    2016-11-15

    The relationship between the timing margin and the error rate of large-scale single flux quantum (SFQ) logic circuits is quantitatively investigated to establish a timing design guideline. We observe that the fluctuation in the set-up/hold time of SFQ logic gates caused by thermal noise is the most probable origin of logical errors in large-scale SFQ circuits. The timing margin appropriate for stable operation of a large-scale logic circuit is discussed, taking into account the fluctuation of the set-up/hold time and the timing jitter in SFQ circuits. As a case study, the dependence of the error rate of a 1-million-bit SFQ shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  14. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    Science.gov (United States)

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. 
The presented algorithm was able to identify systematic errors as small
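The detection step described above can be sketched as follows. This is a simplified stand-in under stated assumptions: a cubic polynomial fit replaces the paper's nonparametric smoother, the data are simulated, and the 3% threshold is illustrative rather than the paper's model-derived value.

```python
import numpy as np

rng = np.random.default_rng(1)
n_met, n_t = 12, 20
t = np.linspace(0, 1, n_t)

# Simulated metabolite time-courses: smooth linear trends plus 1% random noise
base = np.array([rng.uniform(1, 10) * (1 + rng.uniform(-0.5, 0.5) * t)
                 for _ in range(n_met)])
data = base * (1 + rng.normal(0, 0.01, size=base.shape))
# Inject a systematic dilution error: every metabolite off by +8% at timepoint 7
data[:, 7] *= 1.08

def percent_dev(trend):
    # Cubic polynomial fit as a simple stand-in for nonparametric smoothing
    fit = np.polyval(np.polyfit(t, trend, 3), t)
    return 100 * (trend - fit) / fit

dev = np.array([percent_dev(row) for row in data])
median_dev = np.median(dev, axis=0)                 # median across metabolites
flagged = np.where(np.abs(median_dev) > 3.0)[0]     # threshold in percent
print(flagged)
```

The key idea carried over from the abstract is that a dilution effect acts on *all* metabolites in a sample at once, so the per-timepoint median deviation separates it from random noise on individual trends.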

  15. Space, time, and the third dimension (model error)

    Science.gov (United States)

    Moss, Marshall E.

    1979-01-01

    The space-time tradeoff of hydrologic data collection (the ability to substitute spatial coverage for temporal extension of records or vice versa) is controlled jointly by the statistical properties of the phenomena that are being measured and by the model that is used to meld the information sources. The control exerted on the space-time tradeoff by the model and its accompanying errors has seldom been studied explicitly. The technique, known as Network Analyses for Regional Information (NARI), permits such a study of the regional regression model that is used to relate streamflow parameters to the physical and climatic characteristics of the drainage basin. The NARI technique shows that model improvement is a viable and sometimes necessary means of improving regional data collection systems. Model improvement provides an immediate increase in the accuracy of regional parameter estimation and also increases the information potential of future data collection. Model improvement, which can only be measured in a statistical sense, cannot be quantitatively estimated prior to its achievement; thus an attempt to upgrade a particular model entails a certain degree of risk on the part of the hydrologist.

  16. Vortex breakdown in a truncated conical bioreactor

    Energy Technology Data Exchange (ETDEWEB)

    Balci, Adnan; Brøns, Morten [DTU Compute, Technical University of Denmark, DK-2800 Kgs. Lyngby (Denmark); Herrada, Miguel A [E.S.I, Universidad de Sevilla, Camino de los Descubrimientos s/n, E-41092 (Spain); Shtern, Vladimir N, E-mail: mobr@dtu.dk [Shtern Research and Consulting, Houston, TX 77096 (United States)

    2015-12-15

    This numerical study explains the eddy formation and disappearance in a slow steady axisymmetric air–water flow in a vertical truncated conical container, driven by the rotating top disk. Numerous topological metamorphoses occur as the water height, H_w, and the bottom-sidewall angle, α, vary. It is found that the sidewall convergence (divergence) from the top to the bottom stimulates (suppresses) the development of vortex breakdown (VB) in both water and air. At α = 60°, the flow topology changes eighteen times as H_w varies. The changes are due to (a) competing effects of AMF (the air meridional flow) and swirl, which drive meridional motions of opposite directions in water, and (b) feedback of water flow on AMF. For small H_w, the AMF effect dominates. As H_w increases, the swirl effect dominates and causes VB. The water flow feedback produces and modifies air eddies. The results are of fundamental interest and can be relevant for aerial bioreactors. (paper)

  17. Vortex breakdown in a truncated conical bioreactor

    International Nuclear Information System (INIS)

    Balci, Adnan; Brøns, Morten; Herrada, Miguel A; Shtern, Vladimir N

    2015-01-01

    This numerical study explains the eddy formation and disappearance in a slow steady axisymmetric air–water flow in a vertical truncated conical container, driven by the rotating top disk. Numerous topological metamorphoses occur as the water height, H_w, and the bottom-sidewall angle, α, vary. It is found that the sidewall convergence (divergence) from the top to the bottom stimulates (suppresses) the development of vortex breakdown (VB) in both water and air. At α = 60°, the flow topology changes eighteen times as H_w varies. The changes are due to (a) competing effects of AMF (the air meridional flow) and swirl, which drive meridional motions of opposite directions in water, and (b) feedback of water flow on AMF. For small H_w, the AMF effect dominates. As H_w increases, the swirl effect dominates and causes VB. The water flow feedback produces and modifies air eddies. The results are of fundamental interest and can be relevant for aerial bioreactors. (paper)

  18. Rotating D0-branes and consistent truncations of supergravity

    International Nuclear Information System (INIS)

    Anabalón, Andrés; Ortiz, Thomas; Samtleben, Henning

    2013-01-01

    The fluctuations around the D0-brane near-horizon geometry are described by two-dimensional SO(9) gauged maximal supergravity. We work out the U(1)^4 truncation of this theory, whose scalar sector consists of five dilaton and four axion fields. We construct the full non-linear Kaluza–Klein ansatz for the embedding of the dilaton sector into type IIA supergravity. This yields a consistent truncation around a geometry which is the warped product of a two-dimensional domain wall and the sphere S^8. As an application, we consider the solutions corresponding to rotating D0-branes, which in the near-horizon limit approach AdS_2 × M^8 geometries, and discuss their thermodynamical properties. More generally, we study the appearance of such solutions in the presence of non-vanishing axion fields.

  19. A Robust Ultra-Low Voltage CPU Utilizing Timing-Error Prevention

    Directory of Open Access Journals (Sweden)

    Markus Hiienkari

    2015-04-01

    To minimize the energy consumption of a digital circuit, logic can be operated at sub- or near-threshold voltage. Operation in this region is challenging due to device and environment variations, and the resulting performance may not be adequate for all applications. This article presents two variants of a 32-bit RISC CPU targeted for near-threshold voltage. Both CPUs are placed on the same die and manufactured in a 28 nm CMOS process. They employ timing-error prevention with clock stretching to enable operation with minimal safety margins while maximizing performance and energy efficiency at a given operating point. Measurements show a minimum energy of 3.15 pJ/cycle at 400 mV, which corresponds to a 39% energy saving compared to operation based on static sign-off timing.

  20. Interpretations of systematic errors in the NCEP Climate Forecast System at lead times of 2, 4, 8, ..., 256 days

    Directory of Open Access Journals (Sweden)

    Siwon Song

    2012-09-01

    The climatology of mean bias errors (relative to 1-day forecasts) was examined in a 20-year hindcast set from version 1 of the Climate Forecast System (CFS), for forecast lead times of 2, 4, 8, 16, ... 256 days, verifying in different seasons. The results mostly confirm the simple expectation that atmospheric model biases should be evident at short lead (2–4 days), while soil moisture errors develop over days to weeks and ocean errors emerge over months. A further simplification is also evident: surface temperature bias patterns have a nearly fixed geographical structure, growing with different time scales over land and ocean. The geographical pattern has mostly warm and dry biases over land and a cool bias over the oceans, with two main exceptions: (1) deficient stratocumulus clouds cause warm biases in eastern subtropical oceans, and (2) high-latitude land is too cold in boreal winter. Further study of the east Pacific cold tongue-Intertropical Convergence Zone (ITCZ) complex shows a possible interaction between a rapidly expressed atmospheric model bias (a poleward shift of deep convection beginning at day 2) and slow ocean dynamics (erroneously cold upwelling along the equator) at leads > 1 month. Further study of the high-latitude land cold bias shows that it is a thermal wind balance aspect of the deep polar vortex, not just a near-surface temperature error under the wintertime inversion, suggesting that its development time scale of weeks to months may involve long-timescale processes in the atmosphere, not necessarily in the land model. Winter zonal wind errors are small in magnitude, but a refractive index map shows that they can cause modest errors in Rossby wave ducting. Finally, as a counterpoint to our initial expectations about error growth, a case of non-monotonic error growth is shown: velocity potential bias grows with lead on a time scale of weeks, then decays over months. It is hypothesized that compensations between land and ocean errors may

  1. A protein-truncating R179X variant in RNF186 confers protection against ulcerative colitis

    NARCIS (Netherlands)

    Rivas, Manuel A.; Graham, Daniel; Sulem, Patrick; Stevens, Christine; Desch, A. Nicole; Goyette, Philippe; Gudbjartsson, Daniel; Jonsdottir, Ingileif; Thorsteinsdottir, Unnur; Degenhardt, Frauke; Mucha, Soeren; Kurki, Mitja I.; Li, Dalin; D'Amato, Mauro; Annese, Vito; Vermeire, Severine; Weersma, Rinse K.; Halfvarson, Jonas; Paavola-Sakki, Paulina; Lappalainen, Maarit; Lek, Monkol; Cummings, Beryl; Tukiainen, Taru; Haritunians, Talin; Halme, Leena; Koskinen, Lotta L. E.; Ananthakrishnan, Ashwin N.; Luo, Yang; Heap, Graham A.; Visschedijk, Marijn C.; MacArthur, Daniel G.; Neale, Benjamin M.; Ahmad, Tariq; Anderson, Carl A.; Brant, Steven R.; Duerr, Richard H.; Silverberg, Mark S.; Cho, Judy H.; Palotie, Aarno; Saavalainen, Paivi; Kontula, Kimmo; Farkkila, Martti; McGovern, Dermot P. B.; Franke, Andre; Stefansson, Kari; Rioux, John D.; Xavier, Ramnik J.; Daly, Mark J.

    Protein-truncating variants protective against human disease provide in vivo validation of therapeutic targets. Here we used targeted sequencing to conduct a search for protein-truncating variants conferring protection against inflammatory bowel disease exploiting knowledge of common variants

  2. A min cut-set-wise truncation procedure for importance measures computation in probabilistic safety assessment

    Energy Technology Data Exchange (ETDEWEB)

    Duflot, Nicolas [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: nicolas.duflot@areva.com; Berenguer, Christophe [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: christophe.berenguer@utt.fr; Dieulle, Laurence [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: laurence.dieulle@utt.fr; Vasseur, Dominique [EPSNA Group (Nuclear PSA and Application), EDF Research and Development, 1, avenue du Gal de Gaulle, 92141 Clamart cedex (France)], E-mail: dominique.vasseur@edf.fr

    2009-11-15

    A truncation process aims to determine which of the minimal cut-sets (MCS) produced by a probabilistic safety assessment (PSA) model are significant. Several truncation processes have been proposed for evaluating the probability of core damage while ensuring a fixed accuracy level. However, the evaluation of new risk indicators such as importance measures requires the truncation process to be re-examined to ensure that the produced estimates are accurate enough. In this paper a new truncation process is developed that permits the importance measure of any basic event to be estimated from a single set of MCS with the desired accuracy level. The main contribution of this new method is an MCS-wise truncation criterion involving two thresholds: an absolute threshold and a new relative threshold concerning the potential probability of the MCS of interest. The method has been tested on a complete level 1 PSA model of a 900 MWe NPP developed by Electricite de France (EDF), and the results presented in this paper indicate that, for the same accuracy level, the proposed method produces a set of MCS whose size is significantly reduced.
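A minimal sketch of such a two-threshold, MCS-wise filter follows. The thresholds, the example cut-sets, and the rare-event product approximation are all illustrative assumptions; the paper's exact definition of the "potential probability" may differ.

```python
import math

def mcs_prob(mcs):
    # Probability of a minimal cut-set as the product of its basic-event
    # probabilities (rare-event approximation)
    return math.prod(mcs)

def truncate_mcs(mcs_list, abs_thr=1e-12, rel_thr=1e-6):
    # Keep an MCS only if it clears both an absolute floor and a relative
    # floor with respect to the most probable MCS in the set
    probs = [mcs_prob(m) for m in mcs_list]
    top = max(probs)
    return [m for m, p in zip(mcs_list, probs)
            if p >= abs_thr and p / top >= rel_thr]

cut_sets = [
    (1e-3, 1e-4),          # prob 1e-7
    (1e-2, 1e-3, 1e-2),    # prob 1e-7
    (1e-5, 1e-6),          # prob 1e-11: kept, clears both thresholds
    (1e-6, 1e-8),          # prob 1e-14: dropped by the absolute threshold
]
kept = truncate_mcs(cut_sets)
print(len(kept))
```

The point of the relative threshold is that an MCS too improbable to matter for core damage frequency may still matter for the importance measure of one of its basic events, so the cutoff must be tied to the MCS of interest rather than to a single global floor.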

  3. A min cut-set-wise truncation procedure for importance measures computation in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Duflot, Nicolas; Berenguer, Christophe; Dieulle, Laurence; Vasseur, Dominique

    2009-01-01

    A truncation process aims to determine which of the minimal cut-sets (MCS) produced by a probabilistic safety assessment (PSA) model are significant. Several truncation processes have been proposed for evaluating the probability of core damage while ensuring a fixed accuracy level. However, the evaluation of new risk indicators such as importance measures requires the truncation process to be re-examined to ensure that the produced estimates are accurate enough. In this paper a new truncation process is developed that permits the importance measure of any basic event to be estimated from a single set of MCS with the desired accuracy level. The main contribution of this new method is an MCS-wise truncation criterion involving two thresholds: an absolute threshold and a new relative threshold concerning the potential probability of the MCS of interest. The method has been tested on a complete level 1 PSA model of a 900 MWe NPP developed by Electricite de France (EDF), and the results presented in this paper indicate that, for the same accuracy level, the proposed method produces a set of MCS whose size is significantly reduced.

  4. Lymphoscintigraphy for sentinel lymph node detection in breast cancer: usefulness of image truncation

    International Nuclear Information System (INIS)

    Carrier, P.; Remp, H.J.; Chaborel, J.P.; Lallement, M.; Bussiere, F.; Darcourt, J.; Lallement, M.; Leblanc-Talent, P.; Machiavello, J.C.; Ettore, F.

    2004-01-01

    Sentinel lymph node (SLN) detection in breast cancer has recently been validated. It reduces the number of axillary dissections and their corresponding side effects. We tested a simple method of image truncation in order to improve the sensitivity of lymphoscintigraphy. This approach is justified by the magnitude of the uptake difference between the injection site and the SLN. We prospectively investigated SLN detection using a triple method (lymphoscintigraphy, blue dye and surgical radio detection) in 130 patients. An SLN was identified in 104 of the 130 patients (80%) using the standard images and in 126 of them (96.9%) using the truncated images. Blue dye detection and surgical radio detection had sensitivities of 76.9% and 98.5%, respectively. The false negative rate was 10.3%. 288 SLN were dissected, of which 31 were metastatic. Among the 19 patients with a metastatic SLN and more than one SLN detected, the metastatic SLN was not the hottest in 9 of them. 28 metastatic SLN were detected on truncated images versus only 19 on standard images. Truncation, which dramatically increases the sensitivity of lymphoscintigraphy, increases the number of dissected SLN and probably reduces the false negative rate. (author)

  5. Truncated forms of viral VP2 proteins fused to EGFP assemble into fluorescent parvovirus-like particles

    Directory of Open Access Journals (Sweden)

    Vuento Matti

    2006-12-01

    Fluorescence correlation spectroscopy (FCS) monitors random movements of fluorescent molecules in solution, giving information about the number and size of, for example, nanoparticles. The canine parvovirus VP2 structural protein as well as N-terminal deletion mutants of VP2 (-14, -23, and -40 amino acids) were fused to the C-terminus of the enhanced green fluorescent protein (EGFP). The proteins were produced in insect cells, purified, and analyzed by western blotting, confocal and electron microscopy as well as FCS. The non-truncated form, EGFP-VP2, diffused with a hydrodynamic radius of 17 nm, whereas the fluorescent mutants truncated by 14, 23 and 40 amino acids showed hydrodynamic radii of 7, 20 and 14 nm, respectively. These results show that the non-truncated EGFP-VP2 fusion protein and the EGFP-VP2 constructs truncated by 23 and by as much as 40 amino acids were able to form virus-like particles (VLPs). The fluorescent VLP harbouring VP2 truncated by 23 amino acids showed a somewhat larger hydrodynamic radius compared to the non-truncated EGFP-VP2. In contrast, the construct containing EGFP-VP2 truncated by 14 amino acids was not able to assemble into VLP-resembling structures. Formation of capsid structures was confirmed by confocal and electron microscopy. The number of fluorescent fusion protein molecules present within the different VLPs was determined by FCS. In conclusion, FCS provides a novel strategy to analyze virus assembly and gives valuable structural information for the strategic development of parvovirus-like particles.

  6. TH-AB-202-02: Real-Time Verification and Error Detection for MLC Tracking Deliveries Using An Electronic Portal Imaging Device

    International Nuclear Information System (INIS)

    J Zwan, B; Colvill, E; Booth, J; J O’Connor, D; Keall, P; B Greer, P

    2016-01-01

    Purpose: The added complexity of real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real time: (1) field size, (2) field location and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) in which real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay were quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries the MLC-defined field size was measured with an accuracy of 1.25 cm² (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate, respectively. Field location errors (i.e. tracking in the wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors were detected down to 3 mm. The method was not found to be sensitive to random MLC errors or individual MLC calibration errors of up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection and the system has been shown to be sensitive to a range of delivery errors.
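Two of the three comparison metrics (field size and field location; field shape is omitted for brevity) can be sketched as a per-frame check. Everything here is illustrative: the tolerances, the aperture representation, and the function names are assumptions, not the authors' implementation.

```python
def verify_frame(expected, measured, size_tol=1.25, loc_tol=0.3):
    """Compare a planned MLC aperture against one extracted from an EPID frame.

    expected/measured: dicts with 'area' (cm^2) and 'centroid' (x, y) in cm.
    Returns a list of the metrics that exceeded their tolerance.
    """
    errors = []
    # Metric 1: field size (aperture area)
    if abs(expected["area"] - measured["area"]) > size_tol:
        errors.append("field size")
    # Metric 2: field location (aperture centroid displacement)
    dx = expected["centroid"][0] - measured["centroid"][0]
    dy = expected["centroid"][1] - measured["centroid"][1]
    if (dx * dx + dy * dy) ** 0.5 > loc_tol:
        errors.append("field location")
    return errors

ok = verify_frame({"area": 25.0, "centroid": (0.0, 0.0)},
                  {"area": 24.5, "centroid": (0.05, -0.02)})
bad = verify_frame({"area": 25.0, "centroid": (0.0, 0.0)},
                   {"area": 25.2, "centroid": (0.35, 0.0)})
print(ok, bad)
```

In a real-time setting such a check would run on every frame, so the detection delay is set by the frame rate and by how many consecutive out-of-tolerance frames are required before an interlock fires.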

  7. Patient identification error among prostate needle core biopsy specimens--are we ready for a DNA time-out?

    Science.gov (United States)

    Suba, Eric J; Pfeifer, John D; Raab, Stephen S

    2007-10-01

    Patient identification errors in surgical pathology often involve switches of prostate or breast needle core biopsy specimens among patients. We assessed strategies for decreasing the occurrence of these uncommon and yet potentially catastrophic events. Root cause analyses were performed following 3 cases of patient identification error involving prostate needle core biopsy specimens. Patient identification errors in surgical pathology result from slips and lapses of automatic human action that may occur at numerous steps during pre-laboratory, laboratory and post-laboratory work flow processes. Patient identification errors among prostate needle biopsies may be difficult to entirely prevent through the optimization of work flow processes. A DNA time-out, whereby DNA polymorphic microsatellite analysis is used to confirm patient identification before radiation therapy or radical surgery, may eliminate patient identification errors among needle biopsies.

  8. Multi-scale data assimilation approaches and error characterisation applied to the inverse modelling of atmospheric constituent emission fields

    International Nuclear Information System (INIS)

    Koohkan, Mohammad Reza

    2012-01-01

(VOC) are carried out over Western Europe using EMEP stations. The uncertainties of the background values of the emissions, as well as the covariance matrix of the observation errors, are estimated according to the maximum likelihood principle. The prior probability density function of the control parameters is chosen to be either Gaussian or truncated-Gaussian distributed. Grid-size emission inventories are inverted under these two statistical assumptions and the two approaches are compared. With the Gaussian assumption, the departure between the posterior and the prior emission inventories is larger than with the truncated Gaussian assumption, yet the Gaussian approach does not provide better scores in a forecast experiment. (author) [fr
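The two prior choices compared in the thesis can be illustrated with standard distributions: a plain Gaussian versus a Gaussian truncated at zero, since emissions are non-negative. The numerical values below are invented for demonstration:

```python
import numpy as np
from scipy.stats import norm, truncnorm

# Hypothetical background emission value and prior spread (illustrative only).
prior_mean, prior_sd = 5.0, 4.0

gaussian = norm(loc=prior_mean, scale=prior_sd)

# scipy's truncnorm takes its bounds in standard-normal units:
a = (0.0 - prior_mean) / prior_sd                      # truncate at zero
trunc = truncnorm(a, np.inf, loc=prior_mean, scale=prior_sd)

p_neg_gauss = gaussian.cdf(0.0)   # nonzero: the plain Gaussian allows negative emissions
p_neg_trunc = trunc.cdf(0.0)      # zero: the truncated prior excludes them
```

Truncation also shifts the prior mass upward (`trunc.mean() > prior_mean`), which is one way the posterior inventories under the two assumptions come to differ.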

  9. Fluorometric graphene oxide-based detection of Salmonella enteritis using a truncated DNA aptamer.

    Science.gov (United States)

    Chinnappan, Raja; AlAmer, Saleh; Eissa, Shimaa; Rahamn, Anas Abdel; Abu Salah, Khalid M; Zourob, Mohammed

    2017-12-18

The work describes a fluorescence-based study for mapping the highest-affinity truncated aptamer from the full-length sequence and its integration into a graphene oxide platform for the detection of Salmonella Enteritidis. To identify the best truncated sequence, molecular beacons and a displacement assay design are applied. In the fluorescence displacement assay, the truncated aptamer was hybridized with fluorescein- and quencher-labeled complementary sequences to form a fluorescence/quencher pair. In the presence of S. Enteritidis, the aptamer dissociates from the complementary labeled oligonucleotides and the fluorescein/quencher pair thus becomes physically separated. This leads to an increase in fluorescence intensity. One of the truncated aptamers identified has a 2-fold lower dissociation constant (3.2 nM) compared to the full-length aptamer (6.3 nM). The truncated aptamer selected in this process was used to develop a fluorometric graphene oxide (GO) based assay. When the fluorescein-labeled aptamer is adsorbed on GO via π-stacking interaction, fluorescence is quenched. However, in the presence of the target (S. Enteritidis), the labeled aptamer is released from the surface to form a stable complex with the bacteria and fluorescence is restored in proportion to the quantity of bacteria present. The resulting assay has an unsurpassed detection limit of 25 cfu·mL⁻¹ in the best case. The cross-reactivity to Salmonella typhimurium, Staphylococcus aureus and Escherichia coli is negligible. The assay was applied to analyze doped milk samples and gave good recoveries. Thus, we believe that the truncated aptamer/graphene oxide platform is a potential tool for the detection of S. Enteritidis. Graphical abstract Fluorescently labelled aptamer against Salmonella Enteritidis was adsorbed on the surface of graphene oxide by π-stacking interaction. This results in quenching of the fluorescence of the label. Addition of Salmonella Enteritidis restores fluorescence, and this

  10. An analytical nodal method for time-dependent one-dimensional discrete ordinates problems

    International Nuclear Information System (INIS)

    Barros, R.C. de

    1992-01-01

In recent years, relatively little work has been done in developing time-dependent discrete ordinates (S_N) computer codes. Therefore, the topic of time integration methods certainly deserves further attention. In this paper, we describe a new coarse-mesh method for time-dependent monoenergetic S_N transport problems in slab geometry. This numerical method preserves the analytic solution of the transverse-integrated S_N nodal equations by constants, so we call our method the analytical constant nodal (ACN) method. For time-independent S_N problems in finite slab geometry and for time-dependent infinite-medium S_N problems, the ACN method generates numerical solutions that are completely free of truncation errors. Based on this positive feature, we expect the ACN method to be more accurate than conventional numerical methods for S_N transport calculations on coarse space-time grids

  11. Analytic Method for Pressure Recovery in Truncated Diffusers ...

    African Journals Online (AJOL)

    A prediction method is presented for the static pressure recovery in subsonic axisymmetric truncated conical diffusers. In the analysis, a turbulent boundary layer is assumed at the diffuser inlet and a potential core exists throughout the flow. When flow separation occurs, this approach cannot be used to predict the maximum ...

  12. Rotating D0-branes and consistent truncations of supergravity

    Energy Technology Data Exchange (ETDEWEB)

Anabalón, Andrés [Departamento de Ciencias, Facultad de Artes Liberales, Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Av. Padre Hurtado 750, Viña del Mar (Chile); Université de Lyon, Laboratoire de Physique, UMR 5672, CNRS École Normale Supérieure de Lyon, 46 allée d'Italie, F-69364 Lyon cedex 07 (France); Ortiz, Thomas; Samtleben, Henning [Université de Lyon, Laboratoire de Physique, UMR 5672, CNRS École Normale Supérieure de Lyon, 46 allée d'Italie, F-69364 Lyon cedex 07 (France)

    2013-12-18

The fluctuations around the D0-brane near-horizon geometry are described by two-dimensional SO(9) gauged maximal supergravity. We work out the U(1)^4 truncation of this theory whose scalar sector consists of five dilaton and four axion fields. We construct the full non-linear Kaluza–Klein ansatz for the embedding of the dilaton sector into type IIA supergravity. This yields a consistent truncation around a geometry which is the warped product of a two-dimensional domain wall and the sphere S^8. As an application, we consider the solutions corresponding to rotating D0-branes which in the near-horizon limit approach AdS_2×M_8 geometries, and discuss their thermodynamical properties. More generally, we study the appearance of such solutions in the presence of non-vanishing axion fields.

  13. Field of view extension and truncation correction for MR-based human attenuation correction in simultaneous MR/PET imaging

    International Nuclear Information System (INIS)

    Blumhagen, Jan O.; Ladebeck, Ralf; Fenchel, Matthias; Braun, Harald; Quick, Harald H.; Faul, David; Scheffler, Klaus

    2014-01-01

Purpose: In quantitative PET imaging, it is critical to accurately measure and compensate for the attenuation of the photons absorbed in the tissue. While in PET/CT the linear attenuation coefficients can be easily determined from a low-dose CT-based transmission scan, in whole-body MR/PET the computation of the linear attenuation coefficients is based on the MR data. However, a constraint of the MR-based attenuation correction (AC) is the MR-inherent field-of-view (FoV) limitation due to static magnetic field (B_0) inhomogeneities and gradient nonlinearities. Therefore, the MR-based human AC map may be truncated or geometrically distorted toward the edges of the FoV and, consequently, the PET reconstruction with MR-based AC may be biased. This is especially relevant laterally, where the patient's arms rest beside the body and are not fully covered. Methods: A method is proposed to extend the MR FoV by determining an optimal readout gradient field which locally compensates B_0 inhomogeneities and gradient nonlinearities. This technique was used to reduce truncation in AC maps of 12 patients, and the impact on the PET quantification was analyzed and compared to truncated data without applying the FoV extension and additionally to an established approach of PET-based FoV extension. Results: The truncation artifacts in the MR-based AC maps were successfully reduced in all patients, and the mean body volume was thereby increased by 5.4%. In some cases large patient-dependent changes in SUV of up to 30% were observed in individual lesions when compared to the standard truncated attenuation map. Conclusions: The proposed technique successfully extends the MR FoV in MR-based attenuation correction and shows an improvement of PET quantification in whole-body MR/PET hybrid imaging. In comparison to the PET-based completion of the truncated body contour, the proposed method is also applicable to specialized PET tracers with little uptake in the arms and might reduce the

  14. Directional errors of movements and their correction in a discrete tracking task. [pilot reaction time and sensorimotor performance

    Science.gov (United States)

    Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.

    1978-01-01

    Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.

  15. Double peak-induced distance error in short-time-Fourier-transform-Brillouin optical time domain reflectometers event detection and the recovery method.

    Science.gov (United States)

    Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi

    2015-10-01

The distance error caused by double peaks in Brillouin optical time domain reflectometer (BOTDR) systems is a kind of Brillouin scattering spectrum (BSS) deformation and is, to the best of the authors' knowledge, discussed and simulated for the first time in this paper. The double peak, as a kind of Brillouin spectrum deformation, is important in the enhancement of spatial resolution, measurement accuracy, and crack detection. Due to the variances of the peak powers of the BSS along the fiber, the measured starting point of a step-shape frequency transition region is shifted, which results in distance errors. A zero-padded short-time Fourier transform (STFT) can restore the transition-induced double peaks in the asymmetric and deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovery method based on double-peak detection and the corresponding BSS deformation can be applied to calculate the real starting point, which can improve the distance accuracy of the STFT-based BOTDR system.
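The core idea, a short-window FFT whose frequency sampling is refined by zero padding so that the local peak in a frequency-transition region can be located precisely, can be sketched as follows. The signal, sampling rate, and window parameters are illustrative stand-ins, not BOTDR data:

```python
import numpy as np

fs = 1000.0                        # sampling rate (arbitrary units)
t = np.arange(2048) / fs
# Two tones in succession mimic a step-shape Brillouin frequency shift.
sig = np.where(t < 1.0, np.sin(2 * np.pi * 100 * t),
                        np.sin(2 * np.pi * 150 * t))

win_len, pad = 128, 4096           # short analysis window, heavy zero padding
window = np.hanning(win_len)

def local_peak_freq(center):
    """Peak frequency of the zero-padded spectrum around sample `center`."""
    seg = sig[center - win_len // 2: center + win_len // 2] * window
    spec = np.abs(np.fft.rfft(seg, n=pad))     # zero-padded FFT of the window
    freqs = np.fft.rfftfreq(pad, d=1 / fs)
    return freqs[np.argmax(spec)]
```

Without padding the frequency grid is fs/win_len ≈ 7.8 units wide; padding to 4096 points refines it to ≈ 0.24, so the measured peak on either side of the transition lands much closer to the true tone frequency.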

  16. Accounting for baseline differences and measurement error in the analysis of change over time.

    Science.gov (United States)

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Generation of truncated recombinant form of tumor necrosis factor ...

    African Journals Online (AJOL)

Generation of truncated recombinant form of tumor necrosis factor ... as 6×His tagged using the E. coli BL21 (DE3) expression system. The protein was ... proapoptotic signaling cascade through TNFR1 [5], which is ...

  18. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet it is only an approximate solution for power-law indices >1.0 since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
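The filter construction can be illustrated in simplified form: 1/f^α noise is white noise passed through a fractional-difference filter, and the data covariance follows from the filter matrix, with the white-noise term simply added. This sketch uses Kasdin-style recursion coefficients and is not Langbein's code:

```python
import numpy as np

def powerlaw_filter(alpha, n):
    """Impulse response h of a 1/f^alpha noise-generating filter (length n).

    alpha = 0 reduces to white noise; alpha = 2 to a random walk.
    """
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return h

def covariance(alpha, sigma_pl, sigma_w, n):
    """Covariance of power-law plus white noise: sigma_pl^2 F F^T + sigma_w^2 I."""
    h = powerlaw_filter(alpha, n)
    F = np.zeros((n, n))
    for i in range(n):
        F[i, :i + 1] = h[i::-1]        # lower-triangular Toeplitz: F[i, j] = h[i - j]
    return sigma_pl ** 2 * F @ F.T + sigma_w ** 2 * np.eye(n)
```

For α = 2 the filter coefficients are all ones, so F is a cumulative-sum operator and the power-law part of the covariance is the familiar random-walk form min(i, j) + 1.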

  19. Neurodegeneration caused by expression of human truncated tau leads to progressive neurobehavioural impairment in transgenic rats.

    Science.gov (United States)

    Hrnkova, Miroslava; Zilka, Norbert; Minichova, Zuzana; Koson, Peter; Novak, Michal

    2007-01-26

Human truncated tau protein is an active constituent of the neurofibrillary degeneration in sporadic Alzheimer's disease. We have shown that the modified tau protein, when expressed as a transgene in rats, induced the AD-characteristic tau cascade consisting of tau hyperphosphorylation, formation of argyrophilic tangles and sarcosyl-insoluble tau complexes. These pathological changes led to functional impairment characterized by a variety of neurobehavioural symptoms. In the present study we have focused on the behavioural alterations induced by transgenic expression of human truncated tau. Transgenic rats underwent a battery of behavioural tests involving cognitive- and sensorimotor-dependent tasks accompanied by neurological assessment at the ages of 4.5, 6 and 9 months. Behavioural examination of these rats showed altered spatial navigation in the Morris water maze, resulting in less time spent in the target quadrant. Open field behaviour was not influenced by transgene expression. However, the beam walking test revealed that transgenic rats developed progressive sensorimotor disturbances related to the age of the tested animals. The disturbances were most pronounced at the age of 9 months (p<0.01). Neurological alterations indicating impaired reflex responses were additional features of the behavioural phenotype of this novel transgenic rat. These results allow us to suggest that neurodegeneration caused by the non-mutated human truncated tau derived from sporadic human AD results in neuronal dysfunction, consequently leading to progressive neurobehavioural impairment.

  20. Zlib: A numerical library for optimal design of truncated power series algebra and map parameterization routines

    International Nuclear Information System (INIS)

    Yan, Y.T.

    1996-11-01

A brief review of the Zlib development is given. Emphasized is the Zlib nerve system, which uses One-Step Index Pointers (OSIPs) for efficient computation and flexible use of the Truncated Power Series Algebra (TPSA). Also emphasized is the treatment of parameterized maps with an object-oriented language (e.g., C++). A parameterized map can be a Vector Power Series (Vps) or a Lie generator represented by the exponent of a Truncated Power Series (Tps), each coefficient of which is itself a truncated power series object

  1. Measuring a truncated disk in Aquila X-1

    DEFF Research Database (Denmark)

    King, Ashley L.; Tomsick, John A.; Miller, Jon M.

    2016-01-01

    We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner r...

  2. Causal analysis of ordinal treatments and binary outcomes under truncation by death.

    Science.gov (United States)

    Wang, Linbo; Richardson, Thomas S; Zhou, Xiao-Hua

    2017-06-01

It is common that in multi-arm randomized trials the outcome of interest is "truncated by death," meaning that it is only observed or well-defined conditional on an intermediate outcome. In this case, in addition to pairwise contrasts, the joint inference for all treatment arms is also of interest. Under a monotonicity assumption we present methods for both pairwise and joint causal analyses of ordinal treatments and binary outcomes in the presence of truncation by death. We illustrate via examples the appropriateness of our assumptions in different scientific contexts.

  3. Fusion events lead to truncation of FOS in epithelioid hemangioma of bone

    DEFF Research Database (Denmark)

    van IJzendoorn, David G P; de Jong, Danielle; Romagosa, Cleofe

    2015-01-01

    in exon 4 of the FOS gene and the fusion event led to the introduction of a stop codon. In all instances, the truncation of the FOS gene would result in the loss of the transactivation domain (TAD). Using FISH probes we found a break in the FOS gene in two additional cases, in none of these cases...... differential diagnosis of vascular tumors of bone. Our data suggest that the translocation causes truncation of the FOS protein, with loss of the TAD, which is thereby a novel mechanism involved in tumorigenesis....

  4. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
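The seed-dependence of the "error lottery" can be illustrated with a toy model: for each error level, performance is evaluated over several seeds and the spread, not a single draw, informs the error budget. The quadratic degradation law and all parameters below are invented stand-ins for a real FEL simulation such as FELEX:

```python
import numpy as np

def relative_gain(error_level, seed, n_periods=100):
    """Toy relative gain of a wiggler with random per-period field errors.

    Hypothetical model: gain falls off with the squared accumulated error.
    """
    rng = np.random.default_rng(seed)
    errors = rng.normal(0.0, error_level, n_periods)
    return np.exp(-np.sum(errors) ** 2 / n_periods)

# Scan error levels, evaluating each one over several seeds.
levels = [0.0, 0.01, 0.02, 0.05]
spread = {lvl: [relative_gain(lvl, seed) for seed in range(8)] for lvl in levels}
# At zero error the gain is exactly 1 for every seed; at finite error levels
# different seeds give visibly different relative gains.
```

Displaying min/max over seeds at each level, rather than one curve, is the multi-seed comparison the abstract describes.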

  5. Truncatable bootstrap equations in algebraic form and critical surface exponents

    Energy Technology Data Exchange (ETDEWEB)

    Gliozzi, Ferdinando [Dipartimento di Fisica, Università di Torino andIstituto Nazionale di Fisica Nucleare - sezione di Torino,Via P. Giuria 1, Torino, I-10125 (Italy)

    2016-10-10

We describe examples of drastic truncations of conformal bootstrap equations encoding much more information than that obtained by a direct numerical approach. A three-term truncation of the four point function of a free scalar in any space dimensions provides algebraic identities among conformal block derivatives which generate the exact spectrum of the infinitely many primary operators contributing to it. In boundary conformal field theories, we point out that the appearance of free parameters in the solutions of bootstrap equations is not an artifact of truncations, rather it reflects a physical property of permeable conformal interfaces which are described by the same equations. Surface transitions correspond to isolated points in the parameter space. We are able to locate them in the case of 3d Ising model, thanks to a useful algebraic form of 3d boundary bootstrap equations. It turns out that the low-lying spectra of the surface operators in the ordinary and the special transitions of 3d Ising model form two different solutions of the same polynomial equation. Their interplay yields an estimate of the surface renormalization group exponents, y_h=0.72558(18) for the ordinary universality class and y_h=1.646(2) for the special universality class, which compare well with the most recent Monte Carlo calculations. Estimates of other surface exponents as well as OPE coefficients are also obtained.

  6. Observation of the dispersion of wedge waves propagating along cylinder wedge with different truncations by laser ultrasound technique

    Science.gov (United States)

    Jia, Jing; Zhang, Yu; Han, Qingbang; Jing, Xueping

    2017-10-01

This research studies the influence of truncations on the dispersion of wedge waves propagating along a cylinder wedge, using the laser ultrasound technique. Wedge waveguide models with different truncations were built using the finite element method (FEM), and the dispersion curves were obtained with a 2D Fourier transform method. Multiple wedge wave modes were observed, in good agreement with the results estimated from Lagasse's empirical formula. We modelled cylinder wedges with a radius of 3 mm, angles of 20° and 60°, and truncations of 0 μm, 5 μm, 10 μm, 20 μm, 30 μm, 40 μm, and 50 μm, respectively. It was found that a non-ideal wedge tip causes abnormal dispersion of the cylinder wedge modes: as the truncation increases, the modes of the 20° cylinder wedge take on the characteristics of guided waves propagating along a hollow cylinder. The modes of the truncated 60° cylinder wedge likewise show the characteristics of guided waves propagating along a hollow cylinder and are clearly observed. The study can be used to evaluate and inspect wedge structures.

  7. Truncation artifact suppression in cone-beam radionuclide transmission CT using maximum likelihood techniques: evaluation with human subjects

    International Nuclear Information System (INIS)

    Manglos, S.H.

    1992-01-01

    Transverse image truncation can be a serious problem for human imaging using cone-beam transmission CT (CB-CT) implemented on a conventional rotating gamma camera. This paper presents a reconstruction method to reduce or eliminate the artifacts resulting from the truncation. The method uses a previously published transmission maximum likelihood EM algorithm, adapted to the cone-beam geometry. The reconstruction method is evaluated qualitatively using three human subjects of various dimensions and various degrees of truncation. (author)

  8. On the Truncated Pareto Distribution with applications

    OpenAIRE

    Zaninetti, Lorenzo; Ferraro, Mario

    2008-01-01

The Pareto probability distribution is widely applied in different fields such as finance, physics, hydrology, geology and astronomy. This note deals with an application of the Pareto distribution to astrophysics, more precisely to the statistical analysis of the masses of stars and the diameters of asteroids. In particular, a comparison between the usual Pareto distribution and its truncated version is presented. Finally a possible physical mechanism that produces Pareto tails for the distributio...
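The upper-truncated Pareto discussed in the note has a closed-form density and inverse CDF, so it is easy to evaluate and sample. The sketch below uses the standard formulas for a Pareto(a) restricted to [xmin, xmax]; variable names are ours:

```python
import numpy as np

def trunc_pareto_pdf(x, a, xmin, xmax):
    """PDF of a Pareto distribution with index a, truncated to [xmin, xmax]."""
    norm = 1.0 - (xmin / xmax) ** a          # mass the plain Pareto puts below xmax
    return a * xmin ** a / x ** (a + 1) / norm

def trunc_pareto_sample(u, a, xmin, xmax):
    """Inverse-CDF sampling: u uniform on (0, 1) maps to [xmin, xmax]."""
    norm = 1.0 - (xmin / xmax) ** a
    return xmin * (1.0 - u * norm) ** (-1.0 / a)
```

Unlike the plain Pareto, every draw lies in [xmin, xmax]: u = 0 maps to xmin and u = 1 to xmax, which is exactly the bounded support needed for finite samples such as asteroid diameters.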

  9. On the propagation of truncated localized waves in dispersive silica

    KAUST Repository

    Salem, Mohamed; Bagci, Hakan

    2010-01-01

    Propagation characteristics of truncated Localized Waves propagating in dispersive silica and free space are numerically analyzed. It is shown that those characteristics are affected by the changes in the relation between the transverse spatial

  10. Density Functional Theory and the Basis Set Truncation Problem with Correlation Consistent Basis Sets: Elephant in the Room or Mouse in the Closet?

    Science.gov (United States)

    Feller, David; Dixon, David A

    2018-03-08

    Two recent papers in this journal called into question the suitability of the correlation consistent basis sets for density functional theory (DFT) calculations, because the sets were designed for correlated methods such as configuration interaction, perturbation theory, and coupled cluster theory. These papers focused on the ability of the correlation consistent and other basis sets to reproduce total energies, atomization energies, and dipole moments obtained from "quasi-exact" multiwavelet results. Undesirably large errors were observed for the correlation consistent basis sets. One of the papers argued that basis sets specifically optimized for DFT methods were "essential" for obtaining high accuracy. In this work we re-examined the performance of the correlation consistent basis sets by resolving problems with the previous calculations and by making more appropriate basis set choices for the alkali and alkaline-earth metals and second-row elements. When this is done, the statistical errors with respect to the benchmark values and with respect to DFT optimized basis sets are greatly reduced, especially in light of the relatively large intrinsic error of the underlying DFT method. When judged with respect to high-quality Feller-Peterson-Dixon coupled cluster theory atomization energies, the PBE0 DFT method used in the previous studies exhibits a mean absolute deviation more than a factor of 50 larger than the quintuple zeta basis set truncation error.

  11. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2014-01-01

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...

  12. Impact of improved attenuation correction featuring a bone atlas and truncation correction on PET quantification in whole-body PET/MR.

    Science.gov (United States)

    Oehmigen, Mark; Lindemann, Maike E; Gratz, Marcel; Kirchner, Julian; Ruhlmann, Verena; Umutlu, Lale; Blumhagen, Jan Ole; Fenchel, Matthias; Quick, Harald H

    2018-04-01

Recent studies have shown an excellent correlation between PET/MR and PET/CT hybrid imaging in detecting lesions. However, a systematic underestimation of PET quantification in PET/MR has been observed. This is attributable to two methodological challenges of MR-based attenuation correction (AC): (1) lack of bone information, and (2) truncation of the MR-based AC maps (μmaps) along the patient arms. The aim of this study was to evaluate the impact of improved AC featuring a bone atlas and truncation correction on PET quantification in whole-body PET/MR. The MR-based Dixon method provides four-compartment μmaps (background air, lungs, fat, soft tissue) which served as a reference for PET/MR AC in this study. A model-based bone atlas provided bone tissue as a fifth compartment, while the HUGE method provided truncation correction. The study population comprised 51 patients with oncological diseases, all of whom underwent a whole-body PET/MR examination. Each whole-body PET dataset was reconstructed four times using standard four-compartment μmaps, five-compartment μmaps, four-compartment μmaps + HUGE, and five-compartment μmaps + HUGE. The SUV_max for each lesion was measured to assess the impact of each μmap on PET quantification. All four μmaps in each patient provided robust results for reconstruction of the AC PET data. Overall, SUV_max was quantified in 99 tumours and lesions. Compared to the reference four-compartment μmap, the mean SUV_max of all 99 lesions increased by 1.4 ± 2.5% when bone was added, by 2.1 ± 3.5% when HUGE was added, and by 4.4 ± 5.7% when bone + HUGE was added. Larger quantification bias of up to 35% was found for single lesions when bone and truncation correction were added to the μmaps, depending on their individual location in the body. The novel AC method, featuring a bone model and truncation correction, improved PET quantification in whole-body PET/MR imaging. Short reconstruction times, straightforward

  13. Impact of improved attenuation correction featuring a bone atlas and truncation correction on PET quantification in whole-body PET/MR

    Energy Technology Data Exchange (ETDEWEB)

    Oehmigen, Mark; Lindemann, Maike E. [University Hospital Essen, High Field and Hybrid MR Imaging, Essen (Germany); Gratz, Marcel; Quick, Harald H. [University Hospital Essen, High Field and Hybrid MR Imaging, Essen (Germany); University Duisburg-Essen, Erwin L. Hahn Institute for MR Imaging, Essen (Germany); Kirchner, Julian [University Dusseldorf, Department of Diagnostic and Interventional Radiology, Medical Faculty, Dusseldorf (Germany); Ruhlmann, Verena [University Hospital Essen, Department of Nuclear Medicine, Essen (Germany); Umutlu, Lale [University Hospital Essen, Department of Diagnostic and Interventional Radiology and Neuroradiology, Essen (Germany); Blumhagen, Jan Ole; Fenchel, Matthias [Siemens Healthcare GmbH, Erlangen (Germany)

    2018-04-15

    Recent studies have shown an excellent correlation between PET/MR and PET/CT hybrid imaging in detecting lesions. However, a systematic underestimation of PET quantification in PET/MR has been observed. This is attributable to two methodological challenges of MR-based attenuation correction (AC): (1) lack of bone information, and (2) truncation of the MR-based AC maps (μmaps) along the patient arms. The aim of this study was to evaluate the impact of improved AC featuring a bone atlas and truncation correction on PET quantification in whole-body PET/MR. The MR-based Dixon method provides four-compartment μmaps (background air, lungs, fat, soft tissue) which served as a reference for PET/MR AC in this study. A model-based bone atlas provided bone tissue as a fifth compartment, while the HUGE method provided truncation correction. The study population comprised 51 patients with oncological diseases, all of whom underwent a whole-body PET/MR examination. Each whole-body PET dataset was reconstructed four times using standard four-compartment μmaps, five-compartment μmaps, four-compartment μmaps + HUGE, and five-compartment μmaps + HUGE. The SUV{sub max} for each lesion was measured to assess the impact of each μmap on PET quantification. All four μmaps in each patient provided robust results for reconstruction of the AC PET data. Overall, SUV{sub max} was quantified in 99 tumours and lesions. Compared to the reference four-compartment μmap, the mean SUV{sub max} of all 99 lesions increased by 1.4 ± 2.5% when bone was added, by 2.1 ± 3.5% when HUGE was added, and by 4.4 ± 5.7% when bone + HUGE was added. Larger quantification bias of up to 35% was found for single lesions when bone and truncation correction were added to the μmaps, depending on their individual location in the body. The novel AC method, featuring a bone model and truncation correction, improved PET quantification in whole-body PET/MR imaging. Short reconstruction times, straightforward

  14. The Most Developmentally Truncated Fishes Show Extensive Hox Gene Loss and Miniaturized Genomes

    Science.gov (United States)

    Malmstrøm, Martin; Britz, Ralf; Matschiner, Michael; Tørresen, Ole K; Hadiaty, Renny Kurnia; Yaakob, Norsham; Tan, Heok Hui; Jakobsen, Kjetill Sigurd; Salzburger, Walter; Rüber, Lukas

    2018-01-01

    Abstract The world’s smallest fishes belong to the genus Paedocypris. These miniature fishes are endemic to an extreme habitat: the peat swamp forests in Southeast Asia, characterized by highly acidic blackwater. This threatened habitat is home to a large array of fishes, including a number of miniaturized but also developmentally truncated species. Especially the genus Paedocypris is characterized by profound, organism-wide developmental truncation, resulting in sexually mature individuals of <8 mm in length with a larval phenotype. Here, we report on evolutionary simplification in the genomes of two species of the dwarf minnow genus Paedocypris using whole-genome sequencing. The two species feature unprecedented Hox gene loss and genome reduction in association with their massive developmental truncation. We also show how other genes involved in the development of musculature, nervous system, and skeleton have been lost in Paedocypris, mirroring its highly progenetic phenotype. Further, our analyses suggest two mechanisms responsible for the genome streamlining in Paedocypris in relation to other Cypriniformes: severe intron shortening and reduced repeat content. As the first report on the genomic sequence of a vertebrate species with organism-wide developmental truncation, the results of our work enhance our understanding of genome evolution and how genotypes are translated to phenotypes. In addition, as a naturally simplified system closely related to zebrafish, Paedocypris provides novel insights into vertebrate development. PMID:29684203

  15. The Most Developmentally Truncated Fishes Show Extensive Hox Gene Loss and Miniaturized Genomes.

    Science.gov (United States)

    Malmstrøm, Martin; Britz, Ralf; Matschiner, Michael; Tørresen, Ole K; Hadiaty, Renny Kurnia; Yaakob, Norsham; Tan, Heok Hui; Jakobsen, Kjetill Sigurd; Salzburger, Walter; Rüber, Lukas

    2018-04-01

    The world's smallest fishes belong to the genus Paedocypris. These miniature fishes are endemic to an extreme habitat: the peat swamp forests in Southeast Asia, characterized by highly acidic blackwater. This threatened habitat is home to a large array of fishes, including a number of miniaturized but also developmentally truncated species. Especially the genus Paedocypris is characterized by profound, organism-wide developmental truncation, resulting in sexually mature individuals of <8 mm in length with a larval phenotype. Here, we report on evolutionary simplification in the genomes of two species of the dwarf minnow genus Paedocypris using whole-genome sequencing. The two species feature unprecedented Hox gene loss and genome reduction in association with their massive developmental truncation. We also show how other genes involved in the development of musculature, nervous system, and skeleton have been lost in Paedocypris, mirroring its highly progenetic phenotype. Further, our analyses suggest two mechanisms responsible for the genome streamlining in Paedocypris in relation to other Cypriniformes: severe intron shortening and reduced repeat content. As the first report on the genomic sequence of a vertebrate species with organism-wide developmental truncation, the results of our work enhance our understanding of genome evolution and how genotypes are translated to phenotypes. In addition, as a naturally simplified system closely related to zebrafish, Paedocypris provides novel insights into vertebrate development.

  16. Identification of a functionally distinct truncated BDNF mRNA splice variant and protein in Trachemys scripta elegans.

    Directory of Open Access Journals (Sweden)

    Ganesh Ambigapathy

    Brain-derived neurotrophic factor (BDNF) has a diverse functional role and complex pattern of gene expression. Alternative splicing of mRNA transcripts leads to further diversity of mRNAs and protein isoforms. Here, we describe the regulation of BDNF mRNA transcripts in an in vitro model of eyeblink classical conditioning and a unique transcript that forms a functionally distinct truncated BDNF protein isoform. Nine different mRNA transcripts from the BDNF gene of the pond turtle Trachemys scripta elegans (tBDNF) are selectively regulated during classical conditioning: exon I mRNA transcripts show no change, exon II transcripts are downregulated, while exon III transcripts are upregulated. One unique transcript that codes from exon II, tBDNF2a, contains a 40 base pair deletion in the protein coding exon that generates a truncated tBDNF protein. The truncated transcript and protein are expressed in the naïve untrained state and are fully repressed during conditioning when full-length mature tBDNF is expressed, thereby having an alternate pattern of expression in conditioning. Truncated BDNF is not restricted to turtles as a truncated mRNA splice variant has been described for the human BDNF gene. Further studies are required to determine the ubiquity of truncated BDNF alternative splice variants across species and the mechanisms of regulation and function of this newly recognized BDNF protein.

  17. Identification of a functionally distinct truncated BDNF mRNA splice variant and protein in Trachemys scripta elegans.

    Science.gov (United States)

    Ambigapathy, Ganesh; Zheng, Zhaoqing; Li, Wei; Keifer, Joyce

    2013-01-01

    Brain-derived neurotrophic factor (BDNF) has a diverse functional role and complex pattern of gene expression. Alternative splicing of mRNA transcripts leads to further diversity of mRNAs and protein isoforms. Here, we describe the regulation of BDNF mRNA transcripts in an in vitro model of eyeblink classical conditioning and a unique transcript that forms a functionally distinct truncated BDNF protein isoform. Nine different mRNA transcripts from the BDNF gene of the pond turtle Trachemys scripta elegans (tBDNF) are selectively regulated during classical conditioning: exon I mRNA transcripts show no change, exon II transcripts are downregulated, while exon III transcripts are upregulated. One unique transcript that codes from exon II, tBDNF2a, contains a 40 base pair deletion in the protein coding exon that generates a truncated tBDNF protein. The truncated transcript and protein are expressed in the naïve untrained state and are fully repressed during conditioning when full-length mature tBDNF is expressed, thereby having an alternate pattern of expression in conditioning. Truncated BDNF is not restricted to turtles as a truncated mRNA splice variant has been described for the human BDNF gene. Further studies are required to determine the ubiquity of truncated BDNF alternative splice variants across species and the mechanisms of regulation and function of this newly recognized BDNF protein.

  18. Propagation of coherently combined truncated laser beam arrays with beam distortions in non-Kolmogorov turbulence.

    Science.gov (United States)

    Tao, Rumao; Si, Lei; Ma, Yanxing; Zhou, Pu; Liu, Zejin

    2012-08-10

    The propagation properties of coherently combined truncated laser beam arrays with beam distortions through non-Kolmogorov turbulence are studied in detail both analytically and numerically. The analytical expressions for the average intensity and the beam width of coherently combined truncated laser beam arrays with beam distortions propagating through turbulence are derived based on the combination of statistical optics methods and the extended Huygens-Fresnel principle. The effect of beam distortions, such as amplitude modulation and phase fluctuation, is studied by numerical examples. The numerical results reveal that phase fluctuations have significant influence on the spreading of coherently combined truncated laser beam arrays in non-Kolmogorov turbulence, and the effects of the phase fluctuations can be negligible as long as the phase fluctuations are controlled under a certain level, i.e., a>0.05 for the situation considered in the paper. Furthermore, large phase fluctuations can convert the beam distribution rapidly to a Gaussian form, vary the spreading, weaken the optimum truncation effects, and suppress the dependence of spreading on the parameters of the non-Kolmogorov turbulence.

  19. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC can; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
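
    The varying-truncation mechanism itself can be sketched on a generic Robbins-Monro root-finding problem (the ERGM-specific MCMC gradient estimate is replaced here by a hypothetical noisy observation): whenever the iterate leaves the current truncation set, the algorithm restarts from the initial point, enlarges the set, and rewinds the gain sequence.

```python
import random

random.seed(4)

# Noisy observation of h(theta) = 2 - theta; the root theta* = 2 stands in
# for the ERGM parameter (in the paper, H would be an MCMC estimate).
def H(theta):
    return (2.0 - theta) + random.gauss(0.0, 1.0)

def varying_truncation_sa(theta0=0.0, n_iter=5000):
    bound = 0.5  # deliberately small initial truncation set [-0.5, 0.5]
    theta, k0, restarts = theta0, 0, 0
    for n in range(1, n_iter + 1):
        gain = 1.0 / (n - k0 + 10)
        cand = theta + gain * H(theta)
        if abs(cand) <= bound:
            theta = cand
        else:
            # Truncation: restart from the initial point with an enlarged
            # set and a rewound (large again) gain sequence.
            bound *= 2.0
            theta, k0, restarts = theta0, n, restarts + 1
    return theta, restarts

theta_hat, restarts = varying_truncation_sa()
```

    Because the initial set is too small to contain the root, the sketch triggers a few truncations before the set grows large enough for the iterates to settle near the root.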

  20. Effect of Synthetic Truncated Apolipoprotein C-I Peptide on Plasma Lipoprotein Cholesterol in Nonhuman Primates

    Directory of Open Access Journals (Sweden)

    Rampratap S. Kushwaha

    2004-01-01

    The present studies were conducted to determine whether a synthetic truncated apoC-I peptide that inhibits CETP activity in baboons would raise plasma HDL cholesterol levels in nonhuman primates with low HDL levels. We used 2 cynomolgus monkeys and 3 baboons fed a cholesterol- and fat-enriched diet. In cynomolgus monkeys, we injected synthetic truncated apoC-I inhibitor peptide at a dose of 20 mg/kg and, in baboons, at doses of 10, 15, and 20 mg/kg at weekly intervals. Blood samples were collected 3 times a week and VLDL + LDL and HDL cholesterol concentrations were measured. In cynomolgus monkeys, administration of the inhibitor peptide caused a rapid decrease in VLDL + LDL cholesterol concentrations (30%–60%) and an increase in HDL cholesterol concentrations (10%–20%). VLDL + LDL cholesterol concentrations returned to baseline levels in approximately 15 days. In baboons, administration of the synthetic inhibitor peptide caused a decrease in VLDL + LDL cholesterol (20%–60%) and an increase in HDL cholesterol (10%–20%). VLDL + LDL cholesterol returned to baseline levels by day 21, whereas HDL cholesterol concentrations remained elevated for up to 26 days. ApoA-I concentrations increased, whereas apoE and triglyceride concentrations decreased. Subcutaneous and intravenous administrations of the inhibitor peptide had similar effects on LDL and HDL cholesterol concentrations. There was no change in body weight, food consumption, or plasma IgG levels of any baboon during the study. These studies suggest that the truncated apoC-I peptide can be used to raise HDL in humans.

  1. Optimal auxiliary Hamiltonians for truncated boson-space calculations by means of a maximal-decoupling variational principle

    International Nuclear Information System (INIS)

    Li, C.

    1991-01-01

    A new method based on a maximal-decoupling variational principle is proposed to treat the Pauli-principle constraints for calculations of nuclear collective motion in a truncated boson space. The viability of the method is demonstrated through an application to the multipole form of boson Hamiltonians for the single-j and nondegenerate multi-j pairing interactions. While these boson Hamiltonians are Hermitian and contain only one- and two-boson terms, they are also the worst case for truncated boson-space calculations because they are not amenable to any boson truncations at all. By using auxiliary Hamiltonians optimally determined by the maximal-decoupling variational principle, however, truncations in the boson space become feasible and even yield reasonably accurate results. The method proposed here may thus be useful for doing realistic calculations of nuclear collective motion as well as for obtaining a viable interacting-boson-model type of boson Hamiltonian from the shell model

  2. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.; Salgado, Abner J.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.

  3. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k{sup 2}.

  4. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
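
    As a rough illustration of the two schemes, the toy model below (all sensitivities, sigmas, and run counts are hypothetical) compares the unisim and multisim variance estimates against the analytic answer for a purely linear observable:

```python
import random

random.seed(1)

# Toy linear model (hypothetical numbers): the observable shifts by
# a_i per unit change of systematic parameter s_i.
a = [0.5, 1.2, -0.8]      # sensitivities
sigma = [1.0, 0.5, 2.0]   # one-sigma size of each systematic

def observable(s):
    return sum(ai * si for ai, si in zip(a, s))

nominal = observable([0.0, 0.0, 0.0])

# Unisim: one MC run per systematic, varied by +1 standard deviation;
# the resulting shifts are added in quadrature.
unisim_var = 0.0
for i in range(len(a)):
    varied = [sigma[j] if j == i else 0.0 for j in range(len(a))]
    unisim_var += (observable(varied) - nominal) ** 2

# Multisim: every run varies all systematics at once, drawing each from
# its assumed normal distribution; the spread over runs is the estimate.
runs = [observable([random.gauss(0.0, s) for s in sigma]) for _ in range(20000)]
mean = sum(runs) / len(runs)
multisim_var = sum((r - mean) ** 2 for r in runs) / (len(runs) - 1)

# For a linear model the exact answer is sum_i (a_i * sigma_i)^2.
analytic_var = sum((ai * si) ** 2 for ai, si in zip(a, sigma))
```

    For a linear model the unisim quadrature sum is exact, while the multisim spread carries the Monte Carlo statistical error discussed in the abstract.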

  5. The robustness of truncated Airy beam in PT Gaussian potentials media

    Science.gov (United States)

    Wang, Xianni; Fu, Xiquan; Huang, Xianwei; Yang, Yijun; Bai, Yanfeng

    2018-03-01

    The robustness of a truncated Airy beam in parity-time (PT) symmetric Gaussian potentials media is numerically investigated. A high-peak-power beam sheds from the Airy beam due to the modulation of the medium, while the Airy wavefront still retains its self-bending and non-diffracting characteristics under the influence of the modulation parameters. Increasing the modulation factor reduces the maximum power of the center beam, while the opposite trend occurs as the modulation depth increases. The parabolic trajectory of the Airy wavefront, however, is not affected. By exploiting these unique features, the Airy beam can be used as a long-distance transmission source in PT-symmetric Gaussian potentials media.

  6. Influence of characteristics of time series on short-term forecasting error parameter changes in real time

    Science.gov (United States)

    Klevtsov, S. I.

    2018-05-01

    Physical factors such as temperature cause the parameters of a technical object to change. Monitoring these changes is necessary to prevent dangerous situations, and the monitoring is carried out in real time. In this paper, a time series is used to predict changes in a parameter. Forecasting makes it possible to detect a dangerous change in a parameter before the moment it occurs, giving the control system more time to prevent a dangerous situation. A simple time series was chosen so that the algorithm remains simple; the algorithm is executed in the microprocessor module as a background task. The efficiency of using the time series depends on its characteristics, which must be tuned. In this work, the influence of these characteristics on the prediction error of the monitored parameter was studied, taking into account the behavior of the parameter, and the values of the forecast lag were determined. When applied, these results improve the efficiency of monitoring a technical object during its operation.
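
    A minimal sketch of the idea, assuming Holt's double exponential smoothing as the forecasting model (the abstract does not specify one): the smoothing characteristics are the tunable parameters, and the mean absolute error of the lagged forecast measures their effect.

```python
import random

random.seed(3)

# Hypothetical drifting sensor parameter sampled in real time.
series = [10.0 + 0.02 * t + random.gauss(0.0, 0.05) for t in range(300)]

def mean_abs_forecast_error(series, alpha, beta, lag):
    # Holt's double exponential smoothing; forecast `lag` steps ahead.
    level, trend = series[0], 0.0
    errors = []
    for t, y in enumerate(series[:-lag]):
        if t > 0:
            prev = level
            level = alpha * y + (1 - alpha) * (level + trend)
            trend = beta * (level - prev) + (1 - beta) * trend
        errors.append(abs(series[t + lag] - (level + lag * trend)))
    return sum(errors) / len(errors)

# The smoothing characteristics control the error for a given forecast lag:
# heavy smoothing filters the noise, aggressive tracking amplifies it.
err_smooth = mean_abs_forecast_error(series, alpha=0.3, beta=0.1, lag=5)
err_jumpy = mean_abs_forecast_error(series, alpha=0.95, beta=0.9, lag=5)
```

    The whole loop is a handful of multiplications per sample, which is what makes it feasible as a background task on a microprocessor module.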

  7. Learning Mixtures of Truncated Basis Functions from Data

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Pérez-Bernabé, Inmaculada

    2014-01-01

    In this paper we investigate methods for learning hybrid Bayesian networks from data. First we utilize a kernel density estimate of the data in order to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. We also propose an alternative learning method that relies on the cumulative distribution function of the data. Empirical results demonstrate the usefulness of the approaches: even though the methods produce estimators that are slightly poorer than the state of the art (in terms of log-likelihood), they are significantly faster, and therefore indicate that the MoTBF framework can be used for inference and learning in reasonably sized domains. Furthermore, we show how a particular sub-class of MoTBF potentials (learnable by the proposed methods) can be exploited to significantly reduce complexity during inference.
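
    A minimal sketch of the first step, assuming a single degree-2 polynomial on one truncated interval and plain discretized least squares in place of the paper's convex optimization (sample sizes, bandwidth, and degree are all invented for illustration):

```python
import math
import random

random.seed(5)

# Hypothetical one-dimensional data on the truncated domain [0, 1].
data = [min(1.0, max(0.0, random.gauss(0.4, 0.15))) for _ in range(400)]

def kde(x, h=0.06):
    # Gaussian kernel density estimate of the data.
    c = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

# Translate the KDE into a polynomial potential on [0, 1] by discretized
# least squares, then renormalize so it integrates to one.
grid = [i / 100 for i in range(101)]
target = [kde(x) for x in grid]

def solve3(A, b):
    # Cramer's rule for the 3x3 normal equations.
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    return [det([[b[i] if c == j else A[i][c] for c in range(3)]
                 for i in range(3)]) / d for j in range(3)]

basis = lambda x: [1.0, x, x * x]
G = [[sum(basis(x)[r] * basis(x)[c] for x in grid) for c in range(3)]
     for r in range(3)]
rhs = [sum(basis(x)[r] * t for x, t in zip(grid, target)) for r in range(3)]
coef = solve3(G, rhs)

# Normalize: integral over [0,1] of c0 + c1*x + c2*x^2 is c0 + c1/2 + c2/3.
integral = coef[0] + coef[1] / 2 + coef[2] / 3
coef = [c / integral for c in coef]

def motbf(x):
    return coef[0] + coef[1] * x + coef[2] * x * x
```

    Real MoTBF learning uses richer (and piecewise) basis expansions, but the pipeline is the same: data to KDE, KDE to truncated basis potential, then normalization.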

  8. Scavenger receptor AI/II truncation, lung function and COPD

    DEFF Research Database (Denmark)

    Thomsen, M; Nordestgaard, B G; Tybjaerg-Hansen, A

    2011-01-01

    The scavenger receptor A-I/II (SRA-I/II) on alveolar macrophages is involved in recognition and clearance of modified lipids and inhaled particulates. A rare variant of the SRA-I/II gene, Arg293X, truncates the distal collagen-like domain, which is essential for ligand recognition. We tested whether the Arg293X truncation is associated with lung function and risk of COPD.

  9. Bound on quantum computation time: Quantum error correction in a critical environment

    International Nuclear Information System (INIS)

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2010-01-01

    We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.

  10. Typical Periods for Two-Stage Synthesis by Time-Series Aggregation with Bounded Error in Objective Function

    Energy Technology Data Exchange (ETDEWEB)

    Bahl, Björn; Söhler, Theo; Hennen, Maike; Bardow, André, E-mail: andre.bardow@ltt.rwth-aachen.de [Institute of Technical Thermodynamics, RWTH Aachen University, Aachen (Germany)

    2018-01-08

    Two-stage synthesis problems simultaneously consider here-and-now decisions (e.g., optimal investment) and wait-and-see decisions (e.g., optimal operation). The optimal synthesis of energy systems reveals such a two-stage character. The synthesis of energy systems involves multiple large time series such as energy demands and energy prices. Since problem size increases with the size of the time series, synthesis of energy systems leads to complex optimization problems. To reduce the problem size without losing solution quality, we propose a method for time-series aggregation to identify typical periods. Typical periods retain the chronology of time steps, which enables modeling of energy systems, e.g., with storage units or start-up cost. The aim of the proposed method is to obtain few typical periods with few time steps per period, while accurately representing the objective function of the full time series, e.g., cost. Thus, we determine the error of time-series aggregation as the cost difference between operating the optimal design for the aggregated time series and for the full time series. Thereby, we rigorously bound the maximum performance loss of the optimal energy system design. In an initial step, the proposed method identifies the best length of typical periods by autocorrelation analysis. Subsequently, an adaptive procedure determines aggregated typical periods employing the clustering algorithm k-medoids, which groups similar periods into clusters and selects one representative period per cluster. Moreover, the number of time steps per period is aggregated by a novel clustering algorithm maintaining chronology of the time steps in the periods. The method is iteratively repeated until the error falls below a threshold value. A case study based on a real-world synthesis problem of an energy system shows that time-series aggregation from 8,760 time steps to 2 typical periods with 2 time steps each results in an error smaller than the optimality gap of
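
    The clustering step can be sketched as a generic k-medoids pass over the daily periods of a synthetic demand series (profile shapes, noise levels, and k are invented for illustration; the paper's adaptive error-bounding loop and intra-period aggregation are omitted):

```python
import math
import random

random.seed(0)

# Synthetic hourly demand series: 28 days of 24 h, with a weekday and a
# weekend shape plus noise (all numbers are invented).
def day_profile(kind):
    base = [0.3, 0.8][kind]
    return [base + 0.5 * math.sin(2 * math.pi * h / 24) + random.uniform(-0.05, 0.05)
            for h in range(24)]

periods = [day_profile(1 if d % 7 >= 5 else 0) for d in range(28)]

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def k_medoids(data, k, iters=20):
    # Greedy farthest-point initialization, then alternate assignment and
    # medoid update (a lightweight PAM-style loop).
    medoids = [data[0]]
    while len(medoids) < k:
        medoids.append(max(data, key=lambda p: min(dist(p, m) for m in medoids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in data:
            clusters[min(range(k), key=lambda i: dist(p, medoids[i]))].append(p)
        new = [min(c, key=lambda p: sum(dist(p, q) for q in c)) if c else m
               for c, m in zip(clusters, medoids)]
        if new == medoids:
            break
        medoids = new
    return medoids

# Two typical periods, each an actual day from the data set.
typical = k_medoids(periods, k=2)
```

    Because each medoid is an actual period from the data, the chronology of time steps within a typical period is preserved, which is what enables storage and start-up-cost modeling.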

  11. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence interval...

  12. Controlling the error on target motion through real-time mesh adaptation: Applications to deep brain stimulation.

    Science.gov (United States)

    Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A

    2018-05-01

    An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied to simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomena occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach enables us to control, and thus to reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, for a given accuracy, saves computational time with respect to a uniformly finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications for increasing the accuracy, and controlling the computational expense, of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgery because the simulation taking place in the control loop of a robot needs to be accurate and to occur in real time. Copyright © 2018 John Wiley & Sons, Ltd.
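
    The refinement idea can be sketched in one dimension: bisect every element whose local interpolation-error indicator exceeds a tolerance, so that the mesh becomes fine only near the sharp feature at a hypothetical needle tip while staying coarse elsewhere.

```python
import math

# Field with a sharp feature at the (hypothetical) needle tip.
def field(x, tip=0.62):
    return math.exp(-((x - tip) / 0.01) ** 2)

def refine(mesh, tol, max_pass=12):
    # Bisect every element whose midpoint value deviates from the linear
    # interpolant across the element by more than tol.
    for _ in range(max_pass):
        new, changed = [mesh[0]], False
        for a, b in zip(mesh, mesh[1:]):
            mid = 0.5 * (a + b)
            if abs(field(mid) - 0.5 * (field(a) + field(b))) > tol:
                new += [mid, b]
                changed = True
            else:
                new.append(b)
        mesh = new
        if not changed:
            break
    return mesh

coarse = [i / 10 for i in range(11)]        # uniform coarse mesh on [0, 1]
adapted = refine(coarse, tol=1e-3)
```

    Elements near the tip end up several bisections deep while elements far away keep their original size, which is the mechanism that trades accuracy where it matters against cost elsewhere.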

  13. Errors in determination of soil water content using time-domain reflectometry caused by soil compaction around wave guides

    Energy Technology Data Exchange (ETDEWEB)

    Ghezzehei, T.A.

    2008-05-29

    Application of time domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR waveguides may have impact on measurement errors, to our knowledge, there has not been any quantification of this effect. In this paper, we introduce a method that estimates this error by combining two models: one that describes soil compaction around cylindrical objects and another that translates change in bulk density to evolution of soil water retention characteristics. Our analysis indicates that the compaction pattern depends on the mechanical properties of the soil at the time of installation. The relative error in water content measurement depends on the compaction pattern as well as the water content and water retention properties of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature indicate that the measurement errors of using a standard three-prong TDR waveguide could be up to 10%. We also show that the error scales linearly with the ratio of rod radius to the interradius spacing.
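
    To see how a permittivity bias around the rods propagates into the reading, one can push a perturbed permittivity through the widely used Topp et al. (1980) calibration; the +10% bias below is a purely illustrative stand-in for the compaction effect, not a value from the paper.

```python
# Topp et al. (1980) empirical calibration: volumetric water content
# theta from the apparent dielectric permittivity Ka measured by TDR.
def topp(ka):
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka ** 2 + 4.3e-6 * ka ** 3

# Compaction around the rods raises the local bulk density, and hence the
# permittivity the waveguide samples; a +10% bias in Ka is a hypothetical
# stand-in for that effect.
ka_true = 20.0
theta_true = topp(ka_true)
theta_biased = topp(1.10 * ka_true)
rel_error = (theta_biased - theta_true) / theta_true
```

    The nonlinearity of the calibration means the same permittivity bias yields a different relative water-content error at different moisture levels, consistent with the paper's finding that the error depends on water content and retention properties.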

  14. Error estimation in multitemporal InSAR deformation time series, with application to Lanzarote, Canary Islands

    Science.gov (United States)

    GonzáLez, Pablo J.; FernáNdez, José

    2011-10-01

    Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application in geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still the atmospheric propagation errors, which is why multitemporal interferometric techniques have been successfully developed using a series of interferograms. However, none of the standard multitemporal interferometric techniques, namely PS or SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of their precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). We describe the method that uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in recent decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. Deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
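
    The weighted least-squares inversion and Monte Carlo error propagation described above can be illustrated on a toy SB network (a sketch with assumed values, not the authors' implementation): three acquisitions yield three interferograms, the displacements at the two later dates are estimated by inverse-variance-weighted least squares, and the interferogram noise is then resampled to propagate the (co)variances into errors on the displacement estimates.

```python
import math
import random

def solve_wls(d, w):
    """Weighted least squares for u = (u2, u3) from interferograms
    d = (d12, d13, d23); design rows are [1,0], [0,1], [-1,1] (u1 = 0)."""
    A = [(1.0, 0.0), (0.0, 1.0), (-1.0, 1.0)]
    n11 = sum(wi * a[0] * a[0] for a, wi in zip(A, w))
    n12 = sum(wi * a[0] * a[1] for a, wi in zip(A, w))
    n22 = sum(wi * a[1] * a[1] for a, wi in zip(A, w))
    b1 = sum(wi * a[0] * di for a, wi, di in zip(A, w, d))
    b2 = sum(wi * a[1] * di for a, wi, di in zip(A, w, d))
    det = n11 * n22 - n12 * n12   # 2x2 normal equations, closed-form solve
    return ((n22 * b1 - n12 * b2) / det, (n11 * b2 - n12 * b1) / det)

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def monte_carlo_errors(d, sigma, trials=2000, seed=0):
    """Resample interferogram noise and re-solve to get displacement errors."""
    rng = random.Random(seed)
    w = [1.0 / s ** 2 for s in sigma]            # inverse-variance weights
    sols = [solve_wls([di + rng.gauss(0.0, s) for di, s in zip(d, sigma)], w)
            for _ in range(trials)]
    return std([u[0] for u in sols]), std([u[1] for u in sols])

d_obs = (3.0, 5.0, 2.0)                          # mm; consistent with u2=3, u3=5
sig_u2, sig_u3 = monte_carlo_errors(d_obs, sigma=(1.0, 1.0, 1.0))
print(sig_u2, sig_u3)                            # ≈ sqrt(2/3) ≈ 0.82 each here
```

The redundant third interferogram is what lets the weighted inversion beat the single-interferogram error, and the same resampling loop extends directly to larger networks.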

  15. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  16. Performance Comparison of Assorted Color Spaces for Multilevel Block Truncation Coding based Face Recognition

    OpenAIRE

    H.B. Kekre; Sudeep Thepade; Karan Dhamejani; Sanchit Khandelwal; Adnan Azmi

    2012-01-01

    The paper presents a performance analysis of Multilevel Block Truncation Coding based Face Recognition among widely used color spaces. In [1], Multilevel Block Truncation Coding was applied on the RGB color space up to four levels for face recognition. Better results were obtained when the proposed technique was implemented using Kekre’s LUV (K’LUV) color space [25]. This was the motivation to test the proposed technique using assorted color spaces. For experimental analysis, two face databas...

  17. The evidence for synthesis of truncated triangular silver nanoplates in the presence of CTAB

    International Nuclear Information System (INIS)

    He Xin; Zhao Xiujian; Chen Yunxia; Feng Jinyang

    2008-01-01

    Truncated triangular silver nanoplates were prepared by a solution-phase approach, which involved the seed-mediated growth of silver nanoparticles in the presence of cetyltrimethylammonium bromide (CTAB) at 40 deg. C. The result of X-ray diffraction indicates that the as-prepared nanoparticles are made of pure face-centered cubic silver. Transmission electron microscopy and atomic force microscopy studies show that the truncated triangular silver nanoplates, with edge lengths of 50 ± 5 nm and thicknesses of 27 ± 3 nm, are oriented differently on substrates of a copper grid and a fresh mica flake. The corners of these nanoplates are round. The selected area electron diffraction analysis reveals that the silver nanoplates are single crystals with an atomically flat surface. We determine the holistic morphology of truncated triangular silver nanoplates through the above measurements with the aid of computer-aided 3D perspective images.

  18. Cross-layer combining of power control and adaptive modulation with truncated ARQ for cognitive radios

    Institute of Scientific and Technical Information of China (English)

    CHENG Shi-lun; YANG Zhen

    2008-01-01

    To maximize throughput and to satisfy users' requirements in cognitive radios, a cross-layer optimization problem combining adaptive modulation and power control at the physical layer and truncated automatic repeat request at the medium access control layer is proposed. Simulation results show the combination of power control, adaptive modulation, and truncated automatic repeat request can regulate transmitter powers and increase the total throughput effectively.
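
    The cross-layer interplay can be sketched in a few lines: truncated ARQ caps retransmissions at n_max, so a packet is lost with probability PER^n_max, and the scheduler picks the highest-rate modulation whose residual loss still meets the target. The exponential PER model below is purely hypothetical (the paper's models and constants are not reproduced); only the structure of the mode selection is illustrated.

```python
import math

def per(snr_lin, rate):
    """Hypothetical packet-error-rate model for illustration only:
    PER rises with the modulation rate and falls with linear SNR."""
    return min(1.0, math.exp(-snr_lin / (2 ** rate - 1)))

def select_mode(snr_lin, rates=(1, 2, 3, 4, 5, 6), n_max=2, loss_target=1e-2):
    """Highest-rate mode whose residual loss PER**n_max (after truncated
    ARQ with at most n_max transmissions) meets the loss target."""
    best = None
    for r in rates:
        if per(snr_lin, r) ** n_max <= loss_target:
            best = r
    return best

for snr_db in (5, 10, 15, 20, 25):
    print(snr_db, "dB ->", select_mode(10 ** (snr_db / 10)))
```

The selected rate is non-decreasing in SNR, which is the qualitative behaviour the combined scheme is designed to exploit: retransmission headroom lets the link run a more aggressive constellation at a given SNR.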

  19. Dual scan CT image recovery from truncated projections

    Science.gov (United States)

    Sarkar, Shubhabrata; Wahi, Pankaj; Munshi, Prabhat

    2017-12-01

    There are computerized tomography (CT) scanners available commercially for imaging small objects and they are often categorized as mini-CT X-ray machines. One major limitation of these machines is their inability to scan large objects with good image quality because of the truncation of projection data. An algorithm is proposed in this work which enables such machines to scan large objects while maintaining the quality of the recovered image.

  20. Tau truncation is a productive posttranslational modification of neurofibrillary degeneration in Alzheimer's disease.

    Science.gov (United States)

    Kovacech, B; Novak, M

    2010-12-01

    Deposits of the misfolded neuronal protein tau are major hallmarks of neurodegeneration in Alzheimer's disease (AD) and other tauopathies. The etiology of the transformation process of the intrinsically disordered soluble protein tau into the insoluble misordered aggregate has attracted much attention. Tau undergoes multiple modifications in AD, most notably hyperphosphorylation and truncation. Hyperphosphorylation is widely regarded as the hottest candidate for the inducer of the neurofibrillary pathology. However, the true nature of the impetus that initiates the whole process in the human brain remains unknown. In AD, several site-specific tau cleavages were identified and became connected to the progression of the disease. In addition, western blot analyses of tau species in AD brains reveal multitudes of various truncated forms. In this review we summarize evidence showing that tau truncation alone is sufficient to induce the complete cascade of neurofibrillary pathology, including hyperphosphorylation and accumulation of misfolded insoluble forms of tau. Therefore, proteolytic abnormalities in the stressed neurons and production of aberrant tau cleavage products deserve closer attention and should be considered as early therapeutic targets for Alzheimer's disease.

  1. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. 
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  2. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...

  3. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  4. Post-event human decision errors: operator action tree/time reliability correlation

    International Nuclear Information System (INIS)

    Hall, R.E.; Fragola, J.; Wreathall, J.

    1982-11-01

    This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations
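
    A time reliability correlation of this kind maps the time available for a decision to a non-response probability. A minimal sketch, assuming a lognormal crew response time (a functional form commonly used for such correlations; the median and spread below are invented for illustration, not taken from the report):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_nonresponse(t_avail, t_median, sigma=0.8):
    """Probability the crew has NOT completed the decision by t_avail,
    assuming a lognormal response time (median t_median, log-sd sigma)."""
    return 1.0 - phi(math.log(t_avail / t_median) / sigma)

for t in (1, 5, 10, 30, 60):   # minutes available after the initiating event
    print(t, p_nonresponse(t, t_median=10.0))
```

By construction the curve passes through 0.5 at the median response time and falls monotonically as more time is available, which is the dominating-factor behaviour the operator action tree quantifies.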

  5. Post-event human decision errors: operator action tree/time reliability correlation

    Energy Technology Data Exchange (ETDEWEB)

    Hall, R E; Fragola, J; Wreathall, J

    1982-11-01

    This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations.

  6. Amplitude of Light Scattering by a Truncated Pyramid and Cone in the Rayleigh-Gans-Debye Approximation

    Directory of Open Access Journals (Sweden)

    Konstantin A. Shapovalov

    2013-01-01

    Full Text Available The article considers a general approach to calculating the form factor of a structured particle or particle system in the Rayleigh-Gans-Debye (RGD) approximation. Using this approach, formulas for the amplitude of light scattering by a truncated pyramid and a truncated cone are obtained in the RGD approximation. The light scattering indicatrices of a truncated pyramid and cone in the RGD approximation are then calculated.

  7. Protoplanetary disc truncation mechanisms in stellar clusters: comparing external photoevaporation and tidal encounters

    Science.gov (United States)

    Winter, A. J.; Clarke, C. J.; Rosotti, G.; Ih, J.; Facchini, S.; Haworth, T. J.

    2018-04-01

    Most stars form and spend their early life in regions of enhanced stellar density. Therefore the evolution of protoplanetary discs (PPDs) hosted by such stars is subject to the influence of other members of the cluster. Physically, PPDs might be truncated either by photoevaporation due to ultraviolet flux from massive stars, or by tidal truncation due to close stellar encounters. Here we aim to compare the two effects in real cluster environments. In this vein we first review the properties of well-studied stellar clusters with a focus on stellar number density, which largely dictates the degree of tidal truncation, and far-ultraviolet (FUV) flux, which is indicative of the rate of external photoevaporation. We then review the theoretical PPD truncation radius due to an arbitrary encounter, additionally taking into account the role of eccentric encounters that play a role in hot clusters with a 1D velocity dispersion σv ≳ 2 km/s. Our treatment is then applied statistically to varying local environments to establish a canonical threshold for the local stellar density (nc ≳ 10⁴ pc⁻³) for which encounters can play a significant role in shaping the distribution of PPD radii over a timescale ˜3 Myr. By combining theoretical mass loss rates due to FUV flux with viscous spreading in a PPD we establish a similar threshold for which a massive disc is completely destroyed by external photoevaporation. Comparing these thresholds in local clusters we find that if either mechanism has a significant impact on the PPD population then photoevaporation is always the dominant influence.

  8. A qualitative description of human error

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1992-11-01

    Human error makes an important contribution to the risk of reactor operation, and insights and analytical models are the main parts of human reliability analysis. These cover the concept of human error, its nature, its mechanism of generation, its classification, and human performance influencing factors. On an operating reactor, human error is defined as a task-human-machine mismatch, and a human error event is characterized by the erroneous action and its unfavorable result. Based on the time limitation for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action, and a human erroneous action may be generated at any stage of this process. Natural ways to classify human errors are presented. The human performance influencing factors, including personal, organizational and environmental factors, are also listed

  9. A qualitative description of human error

    Energy Technology Data Exchange (ETDEWEB)

    Zhaohuan, Li [Academia Sinica, Beijing, BJ (China). Inst. of Atomic Energy

    1992-11-01

    Human error makes an important contribution to the risk of reactor operation, and insights and analytical models are the main parts of human reliability analysis. These cover the concept of human error, its nature, its mechanism of generation, its classification, and human performance influencing factors. On an operating reactor, human error is defined as a task-human-machine mismatch, and a human error event is characterized by the erroneous action and its unfavorable result. Based on the time limitation for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action, and a human erroneous action may be generated at any stage of this process. Natural ways to classify human errors are presented. The human performance influencing factors, including personal, organizational and environmental factors, are also listed.

  10. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    International Nuclear Information System (INIS)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-01-01

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well

  11. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
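
    The attenuation mechanism is easy to reproduce: simulate counts from a Poisson log-linear model driven by a true exposure, then refit using a noisy (classical measurement error) version of that exposure. This single-pollutant sketch uses assumed parameter values, not the study's design; with unit-variance exposure and unit-variance error, the fitted slope is attenuated by roughly one half.

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's method; adequate for the small means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def fit_poisson(cov, y, iters=25):
    """Poisson regression log E[y] = a + b*cov via Newton-Raphson (2 params)."""
    a = b = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for ci, yi in zip(cov, y):
            mu = math.exp(a + b * ci)
            g0 += yi - mu
            g1 += (yi - mu) * ci
            h00 += mu
            h01 += mu * ci
            h11 += mu * ci * ci
        det = h00 * h11 - h01 * h01          # invert the 2x2 Hessian
        a += (h11 * g0 - h01 * g1) / det
        b += (h00 * g1 - h01 * g0) / det
    return b

rng = random.Random(0)
n, beta = 50_000, 0.2                        # true log-relative-risk per unit
x = [rng.gauss(0, 1) for _ in range(n)]      # true exposure
w = [xi + rng.gauss(0, 1) for xi in x]       # observed exposure with error
y = [poisson_draw(rng, math.exp(1.0 + beta * xi)) for xi in x]
print(fit_poisson(x, y), fit_poisson(w, y))  # second slope attenuated (≈ beta/2)
```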

  12. Five-dimensional truncation of the plane incompressible navier-stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Boldrighini, C [Camerino Univ. (Italy). Istituto di Matematica; Franceschini, V [Modena Univ. (Italy). Istituto Matematico

    1979-01-01

    A five-modes truncation of the Navier-Stokes equations for a two dimensional incompressible fluid on a torus is considered. A computer analysis shows that for a certain range of the Reynolds number the system exhibits a stochastic behaviour, approached through an involved sequence of bifurcations.

  13. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged less than 18 years. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  14. Exact method for the simulation of Coulombic systems by spherically truncated, pairwise r-1 summation

    International Nuclear Information System (INIS)

    Wolf, D.; Keblinski, P.; Phillpot, S.R.; Eggebrecht, J.

    1999-01-01

    Based on a recent result showing that the net Coulomb potential in condensed ionic systems is rather short ranged, an exact and physically transparent method permitting the evaluation of the Coulomb potential by direct summation over the r-1 Coulomb pair potential is presented. The key observation is that the problems encountered in determining the Coulomb energy by pairwise, spherically truncated r-1 summation are a direct consequence of the fact that the system summed over is practically never neutral. A simple method is developed that achieves charge neutralization wherever the r-1 pair potential is truncated. This enables the extraction of the Coulomb energy, forces, and stresses from a spherically truncated, usually charged environment in a manner that is independent of the grouping of the pair terms. The close connection of our approach with the Ewald method is demonstrated and exploited, providing an efficient method for the simulation of even highly disordered ionic systems by direct, pairwise r-1 summation with spherical truncation at rather short range, i.e., a method which fully exploits the short-ranged nature of the interactions in ionic systems. The method is validated by simulations of crystals, liquids, and interfacial systems, such as free surfaces and grain boundaries. copyright 1999 American Institute of Physics
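
    A minimal sketch of such a spherically truncated, charge-neutralized pair sum (here a damped variant, with parameter values chosen by us for illustration, not taken from the paper): the site potential of a central ion in a rock-salt lattice recovers the NaCl Madelung constant from a short-ranged, cutoff sum, with no Ewald reciprocal-space part.

```python
import math

def madelung_wolf(alpha=0.35, rc=12.0):
    """NaCl Madelung constant from a damped, shifted (charge-neutralized),
    spherically truncated pair sum over a rock-salt lattice (spacing 1)."""
    shift = math.erfc(alpha * rc) / rc        # neutralizing shift at the cutoff
    n = int(rc) + 1
    v = 0.0
    for x in range(-n, n + 1):
        for y in range(-n, n + 1):
            for z in range(-n, n + 1):
                if x == y == z == 0:
                    continue
                r = math.sqrt(x * x + y * y + z * z)
                if r >= rc:
                    continue
                q = -1.0 if (x + y + z) % 2 else 1.0   # alternating charges
                v += q * (math.erfc(alpha * r) / r - shift)
    v -= 2.0 * alpha / math.sqrt(math.pi)     # self-term of the damping charge
    return -v                                 # convention: V_site = -M

print(madelung_wolf())   # ≈ 1.7476, the NaCl Madelung constant
```

With these settings both the erfc tail beyond the cutoff and the neglected reciprocal-space contribution are tiny, which is the short-rangedness the paper exploits.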

  15. Optical asymmetric cryptography based on elliptical polarized light linear truncation and a numerical reconstruction technique.

    Science.gov (United States)

    Lin, Chao; Shen, Xueju; Wang, Zhisong; Zhao, Cheng

    2014-06-20

    We demonstrate a novel optical asymmetric cryptosystem based on the principle of elliptical polarized light linear truncation and a numerical reconstruction technique. The device of an array of linear polarizers is introduced to achieve linear truncation on the spatially resolved elliptical polarization distribution during image encryption. This encoding process can be characterized as confusion-based optical cryptography that involves no Fourier lens and diffusion operation. Based on the Jones matrix formalism, the intensity transmittance for this truncation is deduced to perform elliptical polarized light reconstruction based on two intensity measurements. Use of a quick response code makes the proposed cryptosystem practical, with versatile key sensitivity and fault tolerance. Both simulation and preliminary experimental results that support theoretical analysis are presented. An analysis of the resistance of the proposed method on a known public key attack is also provided.

  16. The Truncated Lognormal Distribution as a Luminosity Function for SWIFT-BAT Gamma-Ray Bursts

    Directory of Open Access Journals (Sweden)

    Lorenzo Zaninetti

    2016-11-01

    Full Text Available The determination of the luminosity function (LF) in gamma-ray bursts (GRBs) depends on the adopted cosmology, each one characterized by its corresponding luminosity distance. Here, we analyze three cosmologies: the standard cosmology, the plasma cosmology and the pseudo-Euclidean universe. The LF of the GRBs is first modeled by the lognormal distribution and the four broken power law and, second, by a truncated lognormal distribution. The truncated lognormal distribution acceptably fits the range in luminosity of GRBs as a function of the redshift.
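
    A truncated lognormal density of the kind used here is just the lognormal PDF renormalized over the fitted luminosity range. A quick sketch with arbitrary parameters, checking that the truncated density integrates to one:

```python
import math

def lognorm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def trunc_lognorm_pdf(x, mu, sigma, lo, hi):
    """Lognormal PDF restricted to [lo, hi] and renormalized."""
    if not lo <= x <= hi:
        return 0.0
    z = (math.log(x) - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2.0 * math.pi))
    return pdf / (lognorm_cdf(hi, mu, sigma) - lognorm_cdf(lo, mu, sigma))

# midpoint-rule check that the truncated density integrates to one
mu, sigma, lo, hi = 0.0, 1.0, 0.5, 8.0
n = 20000
h = (hi - lo) / n
area = h * sum(trunc_lognorm_pdf(lo + (i + 0.5) * h, mu, sigma, lo, hi)
               for i in range(n))
print(area)   # ≈ 1.0
```

In an LF fit, lo and hi would be set by the observed luminosity range, which is exactly where truncation improves on the untruncated lognormal.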

  17. Errors in 'BED'-derived estimates of HIV incidence will vary by place, time and age.

    Directory of Open Access Journals (Sweden)

    Timothy B Hallett

    2009-05-01

    Full Text Available The BED Capture Enzyme Immunoassay, believed to distinguish recent HIV infections, is being used to estimate HIV incidence, although an important property of the test--how specificity changes with time since infection--has not been measured. We construct hypothetical scenarios for the performance of the BED test, consistent with current knowledge, and explore how this could influence errors in BED estimates of incidence using a mathematical model of six African countries. The model is also used to determine the conditions and the sample sizes required for the BED test to reliably detect trends in HIV incidence. If the chance of misclassification by BED increases with time since infection, the overall proportion of individuals misclassified could vary widely between countries, over time, and across age-groups, in a manner determined by the historic course of the epidemic and the age-pattern of incidence. Under some circumstances, changes in BED estimates over time can approximately track actual changes in incidence, but large sample sizes (50,000+) will be required for recorded changes to be statistically significant. The relationship between BED test specificity and time since infection has not been fully measured, but, if it decreases, errors in estimates of incidence could vary by place, time and age-group. This means that post-assay adjustment procedures using parameters from different populations or at different times may not be valid. Further research is urgently needed into the properties of the BED test, and the rate of misclassification in a wide range of populations.
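
    The core of the bias argument can be sketched with a toy cohort: long-term infections misclassified as "recent" inflate a naive BED-style incidence estimate by an amount that depends on how many long-term infections the epidemic history has accumulated. Constant incidence and a constant false-recent probability are simplifying assumptions here; the paper's point is precisely that the misclassification probability likely grows with time since infection, which this sketch does not model.

```python
import random

def bed_estimate(times_since_infection, n_susceptible,
                 window=0.5, false_recent=0.0, seed=3):
    """Naive incidence estimate (per susceptible person-year): count
    'recent' classifications, divide by susceptibles times window length."""
    rng = random.Random(seed)
    recent = sum(1 for t in times_since_infection
                 if t < window or rng.random() < false_recent)
    return recent / (n_susceptible * window)

true_incidence, n_sus = 0.02, 10_000
rng = random.Random(1)
# epidemic history: constant incidence over the past 10 years
infected = [rng.uniform(0.0, 10.0)
            for _ in range(int(true_incidence * n_sus * 10))]
print(bed_estimate(infected, n_sus))                      # ≈ true_incidence
print(bed_estimate(infected, n_sus, false_recent=0.05))   # biased upward
```

The size of the upward bias scales with the stock of long-term infections, which is why the error varies with epidemic history, and hence by place, time and age-group.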

  18. Generation of truncated recombinant form of tumor necrosis factor ...

    African Journals Online (AJOL)

    Purpose: To produce truncated recombinant form of tumor necrosis factor receptor 1 (TNFR1), cysteine-rich domain 2 (CRD2) and CRD3 regions of the receptor were generated using pET28a and E. coli/BL21. Methods: DNA coding sequence of CRD2 and CRD3 was cloned into pET28a vector and the corresponding ...

  19. Exact fan-beam image reconstruction algorithm for truncated projection data acquired from an asymmetric half-size detector

    International Nuclear Information System (INIS)

    Leng Shuai; Zhuang Tingliang; Nett, Brian E; Chen Guanghong

    2005-01-01

    In this paper, we present a new algorithm designed for a specific data truncation problem in fan-beam CT. We consider a scanning configuration in which the fan-beam projection data are acquired from an asymmetrically positioned half-sized detector. Namely, the asymmetric detector only covers one half of the scanning field of view. Thus, the acquired fan-beam projection data are truncated at every view angle. If an explicit data rebinning process is not invoked, this data acquisition configuration will wreak havoc on many known fan-beam image reconstruction schemes including the standard filtered backprojection (FBP) algorithm and the super-short-scan FBP reconstruction algorithms. However, we demonstrate that a recently developed fan-beam image reconstruction algorithm which reconstructs an image via filtering a backprojection image of differentiated projection data (FBPD) survives the above fan-beam data truncation problem. Namely, we may exactly reconstruct the whole image object using the truncated data acquired in a full scan mode (2π angular range). We may also exactly reconstruct a small region of interest (ROI) using the truncated projection data acquired in a short-scan mode (less than 2π angular range). The most important characteristic of the proposed reconstruction scheme is that an explicit data rebinning process is not introduced. Numerical simulations were conducted to validate the new reconstruction algorithm

  20. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...... application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context....

  1. Feature Migration in Time: Reflection of Selective Attention on Speech Errors

    Science.gov (United States)

    Nozari, Nazbanou; Dell, Gary S.

    2012-01-01

    This article describes an initial study of the effect of focused attention on phonological speech errors. In 3 experiments, participants recited 4-word tongue twisters and focused attention on 1 (or none) of the words. The attended word was singled out differently in each experiment; participants were under instructions to avoid errors on the…

  2. Application of the AMPLE cluster-and-truncate approach to NMR structures for molecular replacement

    Energy Technology Data Exchange (ETDEWEB)

    Bibby, Jaclyn [University of Liverpool, Liverpool L69 7ZB (United Kingdom); Keegan, Ronan M. [Research Complex at Harwell, STFC Rutherford Appleton Laboratory, Didcot OX11 0FA (United Kingdom); Mayans, Olga [University of Liverpool, Liverpool L69 7ZB (United Kingdom); Winn, Martyn D. [Science and Technology Facilities Council Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Rigden, Daniel J., E-mail: drigden@liv.ac.uk [University of Liverpool, Liverpool L69 7ZB (United Kingdom)

    2013-11-01

    Processing of NMR structures for molecular replacement by AMPLE works well. AMPLE is a program developed for clustering and truncating ab initio protein structure predictions into search models for molecular replacement. Here, it is shown that its core cluster-and-truncate methods also work well for processing NMR ensembles into search models. Rosetta remodelling helps to extend success to NMR structures bearing low sequence identity or high structural divergence from the target protein. Potential future routes to improved performance are considered and practical, general guidelines on using AMPLE are provided.

  3. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235 U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235 U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor Series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235 U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
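
The variance approximation by Taylor-series expansion listed above can be sketched for a single product term. For a mass m = w·c (net weight times uranium concentration) with independent measurement errors, the first-order expansion gives (σ_m/m)² ≈ (σ_w/w)² + (σ_c/c)². The numerical values below are illustrative only, not data from the Fernald exercise:

```python
import numpy as np

# First-order (Taylor-series) variance propagation for a product m = w * c,
# the kind of term that is cumulated into a limit of error on inventory
# difference (LEID). Assumes independent errors; values are illustrative.
w, sigma_w = 1000.0, 2.0   # net weight (kg) and its measurement uncertainty
c, sigma_c = 0.85, 0.004   # uranium mass fraction and its uncertainty

m = w * c
rel_var = (sigma_w / w) ** 2 + (sigma_c / c) ** 2  # relative variances add
sigma_m = m * np.sqrt(rel_var)
print(m, sigma_m)
```

A quick Monte Carlo draw from the same error distributions reproduces this standard deviation closely, which is why the first-order approximation is adequate when relative errors are small.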

  4. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    Science.gov (United States)

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood was proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution was illustrated with an uncensored data set and its fit was compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood (−ℓ̂), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér–von Mises W⋆ statistics show that the EETE distribution provides a more reasonable fit than the other competing distributions.

  5. Truncation effects in the functional renormalization group study of spontaneous symmetry breaking

    International Nuclear Information System (INIS)

    Defenu, N.; Mati, P.; Márián, I.G.; Nándori, I.; Trombettoni, A.

    2015-01-01

    We study the occurrence of spontaneous symmetry breaking (SSB) for O(N) models using functional renormalization group techniques. We show that even the local potential approximation (LPA) when treated exactly is sufficient to give qualitatively correct results for systems with continuous symmetry, in agreement with the Mermin-Wagner theorem and its extension to systems with fractional dimensions. For general N (including the Ising model N=1) we study the solutions of the LPA equations for various truncations around the zero field using a finite number of terms (and different regulators), showing that SSB always occurs even where it should not. The SSB is signalled by Wilson-Fisher fixed points which for any truncation are shown to stay on the line defined by vanishing mass beta functions.

  6. The dorsal medial frontal cortex is sensitive to time on task, not response conflict or error likelihood.

    Science.gov (United States)

    Grinband, Jack; Savitskaya, Judith; Wager, Tor D; Teichert, Tobias; Ferrera, Vincent P; Hirsch, Joy

    2011-07-15

    The dorsal medial frontal cortex (dMFC) is highly active during choice behavior. Though many models have been proposed to explain dMFC function, the conflict monitoring model is the most influential. It posits that dMFC is primarily involved in detecting interference between competing responses thus signaling the need for control. It accurately predicts increased neural activity and response time (RT) for incompatible (high-interference) vs. compatible (low-interference) decisions. However, it has been shown that neural activity can increase with time on task, even when no decisions are made. Thus, the greater dMFC activity on incompatible trials may stem from longer RTs rather than response conflict. This study shows that (1) the conflict monitoring model fails to predict the relationship between error likelihood and RT, and (2) the dMFC activity is not sensitive to congruency, error likelihood, or response conflict, but is monotonically related to time on task. Copyright © 2010 Elsevier Inc. All rights reserved.

  7. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors that affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  8. Comparison of haptic guidance and error amplification robotic trainings for the learning of a timing-based motor task by healthy seniors.

    Science.gov (United States)

    Bouchard, Amy E; Corriveau, Hélène; Milot, Marie-Hélène

    2015-01-01

    With age, a decline in the temporal aspect of movement is observed, such as a longer movement execution time and a decreased timing accuracy. Robotic training can represent an interesting approach to help improve movement timing among the elderly. Two types of robotic training, haptic guidance (HG; demonstrating the correct movement for better movement planning and execution) and error amplification (EA; exaggerating movement errors for more rapid and complete learning), have been used successfully in young healthy subjects to boost timing accuracy. For healthy seniors, only HG training has been used so far, with significant and positive timing gains. The goal of the study was to evaluate and compare the impact of both HG and EA robotic trainings on the improvement of seniors' movement timing. Thirty-two healthy seniors (mean age 68 ± 4 years) learned to play a pinball-like game by triggering a one-degree-of-freedom hand robot at the proper time to make a flipper move and direct a falling ball toward a randomly positioned target. During HG and EA robotic trainings, the subjects' timing errors were decreased and increased, respectively, based on the subjects' timing errors in initiating a movement. Results showed that only HG training benefited learning, but the improvement did not generalize to untrained targets. Also, age had no influence on the efficacy of HG robotic training, meaning that the oldest subjects did not benefit more from HG training than the younger senior subjects. Using HG to teach the correct timing of movement seems to be a good strategy to improve motor learning for the elderly as for younger people. However, more studies are needed to assess the long-term impact of HG robotic training on improvement in movement timing.

  9. Artificial Neural Networks for Reducing Computational Effort in Active Truncated Model Testing of Mooring Lines

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan Becker

    2015-01-01

    simultaneously, this method is very demanding in terms of numerical efficiency and computational power. Therefore, this method has not yet proved to be feasible. It has recently been shown how a hybrid method combining classical numerical models and artificial neural networks (ANN) can provide a dramatic...... prior to the experiment, and with a properly trained ANN it is no problem to obtain accurate simulations much faster than real time, without any need for large computational capacity. The present study demonstrates how this hybrid method can be applied to the active truncated experiments yielding a system...

  10. Different truncation methods of AUC between Japan and the EU for bioequivalence assessment: influence on the regulatory judgment.

    Science.gov (United States)

    Oishi, Masayo; Chiba, Koji; Fukushima, Takashi; Tomono, Yoshiro; Suwa, Toshio

    2012-01-01

    In regulatory guidelines for bioequivalence (BE) assessment, the definitions of AUC for primary assessment differ among ICH countries, i.e., AUC from zero to the last sampling point (AUCall) in Japan, AUC from zero to infinity (AUCinf) or AUC from zero to the last measurable point (AUClast) in the US, and AUClast in the EU. To assure sufficient accuracy of truncated AUC for BE assessment, the ratio of truncated AUC (AUCall or AUClast) to AUCinf should be more than 80% in both the Japanese and EU guidelines. We investigated how the difference in the definition of truncated AUC affects BE assessment of sustained release (SR) formulations. Our simulation results demonstrated that AUCall/AUCinf could be ≥80% even when AUClast/AUCinf was <80%; thus, the difference in the definition of truncated AUC affected the judgment of the validity of truncated AUC for BE assessment, and AUCall could fail to detect an in vivo dissolution profile of a generic SR drug that differs substantially from that of the original drug.
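
The 80% adequacy criterion for a truncated AUC can be checked with a simple trapezoidal calculation. The concentration-time values below are hypothetical, and the terminal-slope extrapolation is one common way (not the only one) to estimate AUCinf:

```python
import numpy as np

# Hypothetical concentration-time profile (h, ng/mL); values illustrative only.
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
c = np.array([0.0, 40.0, 55.0, 48.0, 30.0, 18.0, 5.0])

# AUC to the last measurable point by the linear trapezoidal rule.
auc_last = float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0))

# Extrapolate to infinity with AUCinf = AUClast + Clast / lambda_z, where the
# terminal rate constant lambda_z comes from a log-linear fit of the tail.
lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
auc_inf = auc_last + c[-1] / lam_z

# Guideline criterion: the truncated AUC should capture at least 80% of AUCinf.
ratio = auc_last / auc_inf
print(f"AUClast/AUCinf = {ratio:.1%}, adequate: {ratio >= 0.80}")
```

The same check with AUCall instead of AUClast is what, per the abstract, can give a passing ratio even when AUClast/AUCinf fails, which is exactly the regulatory discrepancy the authors simulate.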

  11. The Statistical Analysis of Failure Time Data

    CERN Document Server

    Kalbfleisch, John D

    2011-01-01

    Contains additional discussion and examples on left truncation as well as material on more general censoring and truncation patterns. Introduces the martingale and counting process formulation in a new chapter. Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations. Presents new examples and applications of data analysis.

  12. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Dopamine reward prediction errors reflect hidden state inference across time

    Science.gov (United States)

    Starkweather, Clara Kwon; Babayan, Benedicte M.; Uchida, Naoshige; Gershman, Samuel J.

    2017-01-01

    Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected reward. The temporal difference (TD) learning model has been a cornerstone in understanding how dopamine RPEs could drive associative learning. Classically, TD learning imparts value to features that serially track elapsed time relative to observable stimuli. In the real world, however, sensory stimuli provide ambiguous information about the hidden state of the environment, leading to the proposal that TD learning might instead compute a value signal based on an inferred distribution of hidden states (a ‘belief state’). In this work, we asked whether dopaminergic signaling supports a TD learning framework that operates over hidden states. We found that dopamine signaling exhibited a striking difference between two tasks that differed only with respect to whether reward was delivered deterministically. Our results favor an associative learning rule that combines cached values with hidden state inference. PMID:28263301
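
As a sketch of the classical TD rule the abstract builds on, the value update V(s) ← V(s) + α·δ with reward prediction error δ = r + γV(s') − V(s) can be run on a toy deterministic chain. The states, rewards, and parameters here are illustrative, not the task used in the study:

```python
import numpy as np

# Tabular TD(0) on a 5-state chain with reward only on the final transition.
# delta = r + gamma*V(s') - V(s) is the textbook reward prediction error.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states + 1)               # V[n_states] is the terminal state
for _ in range(2000):
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0  # reward on entering terminal
        delta = r + gamma * V[s + 1] - V[s]    # reward prediction error
        V[s] += alpha * delta
print(V[:n_states])  # approaches [gamma^4, gamma^3, gamma^2, gamma, 1]
```

The values converge to discounted expected reward; the abstract's point is that under state ambiguity, s' should be replaced by an inferred belief state rather than an observable stimulus-locked state.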

  14. Real-time prediction of atmospheric Lagrangian coherent structures based on forecast data: An application and error analysis

    Science.gov (United States)

    BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.

    2013-09-01

    The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.
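
A minimal sketch of the FTLE computation underlying such LCS forecasts, using a toy saddle flow with a closed-form flow map rather than forecast wind fields: the flow-map Jacobian is estimated by finite differences, and the exponent is (1/|T|) ln √λ_max of the Cauchy-Green tensor.

```python
import numpy as np

def ftle(flow_map, x0, T, eps=1e-4):
    # Estimate the flow-map Jacobian at x0 by central differences on
    # perturbed initial conditions, form the Cauchy-Green tensor C = J^T J,
    # and return the finite-time Lyapunov exponent (1/|T|) ln sqrt(lambda_max).
    J = np.zeros((2, 2))
    for j in range(2):
        dp = np.array(x0, dtype=float)
        dm = np.array(x0, dtype=float)
        dp[j] += eps
        dm[j] -= eps
        J[:, j] = (flow_map(dp, T) - flow_map(dm, T)) / (2 * eps)
    C = J.T @ J
    lam_max = np.linalg.eigvalsh(C)[-1]
    return np.log(np.sqrt(lam_max)) / abs(T)

# Toy saddle flow dx/dt = x, dy/dt = -y with a closed-form flow map;
# its FTLE is 1 everywhere, which makes a convenient analytic check.
phi = lambda p, T: np.array([p[0] * np.exp(T), p[1] * np.exp(-T)])
print(ftle(phi, [0.5, 0.5], T=2.0))  # ~1.0
```

In the atmospheric setting, `flow_map` would be replaced by trajectory integration through the forecast wind field, and it is the error in that field which the paper propagates into the FTLE ridges.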

  15. A variable timestep generalized Runge-Kutta method for the numerical integration of the space-time diffusion equations

    International Nuclear Information System (INIS)

    Aviles, B.N.; Sutton, T.M.; Kelly, D.J. III.

    1991-09-01

    A generalized Runge-Kutta method has been employed in the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic timestep control. The efficiency of the Runge-Kutta method is enhanced by a block-factorization technique that exploits the sparse structure of the matrix system resulting from the space and energy discretized form of the time-dependent neutron diffusion equations. Preliminary numerical evaluation using a one-dimensional finite difference code shows the sparse matrix implementation of the generalized Runge-Kutta method to be highly accurate and efficient when compared to an optimized iterative theta method. 12 refs., 5 figs., 4 tabs
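
The embedded-pair idea, using a lower-order companion solution to estimate the local truncation error and drive automatic timestep control, can be sketched with a simple Heun(2)/Euler(1) pair. The paper's method is fourth order with an embedded third-order estimate; this is only an illustration of the principle:

```python
import numpy as np

def adaptive_heun(f, y0, t0, t1, h0=0.1, tol=1e-6):
    # Embedded Heun(2)/Euler(1) pair: the gap between the two solutions
    # estimates the local truncation error and drives the timestep, the
    # same embedded-pair principle as in the abstract's generalized
    # Runge-Kutta method (which pairs 4th- and 3rd-order solutions).
    t, y, h = t0, y0, h0
    while t < t1 - 1e-12:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                 # 1st-order (Euler) solution
        y_high = y + 0.5 * h * (k1 + k2)   # 2nd-order (Heun) solution
        err = abs(y_high - y_low)          # local truncation error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, y_high           # accept the step
        # Grow/shrink the step with a 0.9 safety factor, clipped to [0.2, 5].
        h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# Test problem y' = -3y, y(0) = 1 on [0, 2]; exact answer is exp(-6).
y_end = adaptive_heun(lambda t, y: -3.0 * y, 1.0, 0.0, 2.0)
print(y_end)
```

The solver takes small steps early, when the solution changes quickly, and larger steps later, which is exactly the behavior that makes embedded pairs efficient for the stiff space-time diffusion equations.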

  16. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data

    Directory of Open Access Journals (Sweden)

    I.E. Okorie

    2017-06-01

    Full Text Available The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood was proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution was illustrated with an uncensored data set and its fit was compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood (−ℓ̂), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér–von Mises W⋆ statistics show that the EETE distribution provides a more reasonable fit than the other competing distributions. Keywords: Mathematics, Applied mathematics
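
The fit criteria quoted in this record are generic: with maximized log-likelihood ℓ̂, k parameters, and n observations, AIC = 2k − 2ℓ̂ and BIC = k ln n − 2ℓ̂, smaller being better. A minimal sketch using a one-parameter exponential fit to synthetic data (the EETE density itself is not given in the abstract, so a simpler distribution stands in here):

```python
import numpy as np

# Synthetic exponential sample; scale and seed are arbitrary illustrations.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)

lam_hat = 1.0 / x.mean()                         # MLE of the exponential rate
ell_hat = np.sum(np.log(lam_hat) - lam_hat * x)  # maximized log-likelihood

k, n = 1, len(x)
aic = 2 * k - 2 * ell_hat        # Akaike information criterion
bic = k * np.log(n) - 2 * ell_hat  # Bayesian information criterion
print(aic, bic)
```

Comparing these values across candidate distributions fitted to the same data, as the paper does for EETE versus ETE and other three-parameter models, selects the model with the best penalized fit.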

  17. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
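
A plausible form of a truncated distance kernel is K(x, z) = max(ρ − ‖x − z‖₁, 0), which is zero once two points are farther apart than the truncation parameter ρ; the exact definition in the paper may differ, so treat this as an illustrative sketch of the locality property the abstract describes:

```python
import numpy as np

def tl1_kernel(X, Z, rho):
    # Truncated distance kernel sketch: K(x, z) = max(rho - ||x - z||_1, 0).
    # Piecewise linear, and exactly zero once two points are more than rho
    # apart in L1 distance, so each sample influences only a local subregion.
    d = np.abs(X[:, None, :] - Z[None, :, :]).sum(axis=-1)  # pairwise L1 distances
    return np.maximum(rho - d, 0.0)

X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
K = tl1_kernel(X, X, rho=2.0)
print(K)  # the far-away third point has zero kernel value with the others
```

Because such a Gram matrix can have negative eigenvalues, it is not positive semidefinite in general, consistent with the abstract's remark that only some classical kernel methods remain applicable.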

  18. Filter Factors of Truncated TLS Regularization with Multiple Observations

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, I.; Plešinger, Martin; Žáková, J.

    2017-01-01

    Roč. 62, č. 2 (2017), s. 105-120 ISSN 0862-7940 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : truncated total least squares * multiple right-hand sides * eigenvalues of rank-d update * ill-posed problem * regularization * filter factors Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 0.618, year: 2016 http://hdl.handle.net/10338.dmlcz/146698

  19. Identification of target genes for wild type and truncated HMGA2 in mesenchymal stem-like cells

    DEFF Research Database (Denmark)

    Henriksen, Jørn Mølgaard; Stabell, Marianne; Meza-Zepeda, Leonardo A

    2010-01-01

    The HMGA2 gene, coding for an architectural transcription factor involved in mesenchymal embryogenesis, is frequently deranged by translocation and/or amplification in mesenchymal tumours, generally leading to over-expression of shortened transcripts and a truncated protein.

  20. Free vibration of symmetric angle ply truncated conical shells under different boundary conditions using spline method

    Energy Technology Data Exchange (ETDEWEB)

    Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Pullepu, Babuji [S R M University, Chennai (India)

    2015-05-15

    Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the effects of the frequency parameter and angular frequencies under different boundary conditions, ply angles, material properties and other parameters. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been carried out and discussed.

  1. Free vibration of symmetric angle ply truncated conical shells under different boundary conditions using spline method

    International Nuclear Information System (INIS)

    Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y.; Pullepu, Babuji

    2015-01-01

    Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the effects of the frequency parameter and angular frequencies under different boundary conditions, ply angles, material properties and other parameters. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been carried out and discussed.

  2. Increased infectivity in human cells and resistance to antibody-mediated neutralization by truncation of the SIV gp41 cytoplasmic tail

    Directory of Open Access Journals (Sweden)

    Takeo Kuwata

    2013-05-01

    Full Text Available The role of antibodies in protecting the host from human immunodeficiency virus type 1 (HIV-1) infection is of considerable interest, particularly because the RV144 trial results suggest that antibodies contribute to protection. Although infection of nonhuman primates with simian immunodeficiency virus (SIV) is commonly used as an animal model of HIV-1 infection, the viral epitopes that elicit potent and broad neutralizing antibodies to SIV have not been identified. We isolated a monoclonal antibody (MAb), B404, that potently and broadly neutralizes various SIV strains. B404 targets a conformational epitope comprising the V3 and V4 loops of Env that is intensely exposed when Env binds CD4. B404-resistant variants were obtained by passaging viruses in the presence of increasing concentrations of B404 in PM1/CCR5 cells. Genetic analysis revealed that the Q733stop mutation, which truncates the cytoplasmic tail of gp41, was the first major substitution in Env during passage. The maximal inhibition by B404 and other MAbs was significantly decreased against a recombinant virus with a gp41 truncation compared with the parental SIVmac316. This indicates that the gp41 truncation was associated with resistance to antibody-mediated neutralization. The infectivities of the recombinant virus with the gp41 truncation were 7900-fold, 1000-fold, and 140-fold higher than those of SIVmac316 in PM1, PM1/CCR5, and TZM-bl cells, respectively. Immunoblotting analysis revealed that the gp41 truncation enhanced the incorporation of Env into virions. The effect of the gp41 truncation on infectivity was not obvious in the HSC-F macaque cell line, although the resistance of viruses harboring the gp41 truncation to neutralization was maintained. These results suggest that viruses with a truncated gp41 cytoplasmic tail were selected by increased infectivity in human cells and by acquiring resistance to neutralizing antibody.

  3. Spectroscopic characterization of a truncated hemoglobin from the nitrogen-fixing bacterium Herbaspirillum seropedicae.

    Science.gov (United States)

    Razzera, Guilherme; Vernal, Javier; Baruh, Debora; Serpa, Viviane I; Tavares, Carolina; Lara, Flávio; Souza, Emanuel M; Pedrosa, Fábio O; Almeida, Fábio C L; Terenzi, Hernán; Valente, Ana Paula

    2008-09-01

    The Herbaspirillum seropedicae genome sequence encodes a truncated hemoglobin typical of group II (Hs-trHb1) members of this family. We show that His-tagged recombinant Hs-trHb1 is monomeric in solution, and its optical spectrum resembles those of previously reported globins. NMR analysis allowed us to assign heme substituents. All data suggest that Hs-trHb1 undergoes a transition from an aquomet form in the ferric state to a hexacoordinate low-spin form in the ferrous state. The close positions of Ser-E7, Lys-E10, Tyr-B10, and His-CD1 in the distal pocket place them as candidates for heme coordination and ligand regulation. Peroxide degradation kinetics suggests easy access to the heme pocket, as the protein offered no protection against peroxide degradation when compared with free heme. The high solvent exposure of the heme may be due to the presence of a flexible loop in the access pocket, as suggested by a structural model obtained by using homologous globins as templates. The truncated hemoglobin described here has unique features among truncated hemoglobins and may function in the facilitation of O2 transfer and scavenging, playing an important role in the nitrogen-fixation mechanism.

  4. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...

  5. A generalized adjoint framework for sensitivity and global error estimation in time-dependent nuclear reactor simulations

    International Nuclear Information System (INIS)

    Stripling, H.F.; Anitescu, M.; Adams, M.L.

    2013-01-01

    Highlights: ► We develop an abstract framework for computing the adjoint to the neutron/nuclide burnup equations posed as a system of differential algebraic equations. ► We validate use of the adjoint for computing both sensitivity to uncertain inputs and for estimating global time discretization error. ► Flexibility of the framework is leveraged to add heat transfer physics and compute its adjoint without a reformulation of the adjoint system. ► Such flexibility is crucial for high performance computing applications. -- Abstract: We develop a general framework for computing the adjoint variable to nuclear engineering problems governed by a set of differential–algebraic equations (DAEs). The nuclear engineering community has a rich history of developing and applying adjoints for sensitivity calculations; many such formulations, however, are specific to a certain set of equations, variables, or solution techniques. Any change or addition to the physics model would require a reformulation of the adjoint problem and substantial difficulties in its software implementation. In this work we propose an abstract framework that allows for the modification and expansion of the governing equations, leverages the existing theory of adjoint formulation for DAEs, and results in adjoint equations that can be used to efficiently compute sensitivities for parametric uncertainty quantification. Moreover, as we justify theoretically and demonstrate numerically, the same framework can be used to estimate global time discretization error. We first motivate the framework and show that the coupled Bateman and transport equations, which govern the time-dependent neutronic behavior of a nuclear reactor, may be formulated as a DAE system with a power constraint. We then use a variational approach to develop the parameter-dependent adjoint framework and apply existing theory to give formulations for sensitivity and global time discretization error estimates using the adjoint
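
The abstract's adjoint framework targets DAE systems from reactor physics; the mechanics of a discrete adjoint can nevertheless be illustrated on a toy explicit-Euler solve of y' = −p·y, where backward-propagated multipliers reproduce the sensitivity d(y_N)/dp obtained by finite differences. Everything below is a hypothetical illustration, not the paper's formulation:

```python
import numpy as np

def forward(p, y0=1.0, h=0.01, N=200):
    # Explicit-Euler solve of y' = -p*y; returns the full trajectory,
    # which the adjoint sweep below needs.
    y = np.empty(N + 1)
    y[0] = y0
    for n in range(N):
        y[n + 1] = y[n] + h * (-p * y[n])
    return y

def adjoint_sensitivity(p, h=0.01, N=200):
    # Discrete adjoint of the Euler scheme for the objective g = y_N:
    # lambda_N = 1; lambda_n = lambda_{n+1} * (1 - h*p); the gradient
    # accumulates lambda_{n+1} * h * (df/dp), with df/dp = -y_n.
    y = forward(p, h=h, N=N)
    lam, grad = 1.0, 0.0
    for n in reversed(range(N)):
        grad += lam * h * (-y[n])
        lam *= (1.0 - h * p)
    return grad

p = 0.7
g_adj = adjoint_sensitivity(p)
# Verify against a central finite difference of the forward solve.
eps = 1e-6
g_fd = (forward(p + eps)[-1] - forward(p - eps)[-1]) / (2 * eps)
print(g_adj, g_fd)
```

The appeal the abstract emphasizes is that one backward sweep yields sensitivities with respect to many parameters at once, whereas finite differences need a forward solve per parameter; the same adjoint machinery can be reused for global time-discretization error estimates.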

  6. Truncation of a mannanase from Trichoderma harzianum improves its enzymatic properties and expression efficiency in Trichoderma reesei.

    Science.gov (United States)

    Wang, Juan; Zeng, Desheng; Liu, Gang; Wang, Shaowen; Yu, Shaowen

    2014-01-01

To obtain high expression efficiency of a mannanase gene, ThMan5A, cloned from Trichoderma harzianum MGQ2, both the full-length gene and a truncated gene (ThMan5AΔCBM) that contains only the catalytic domain were expressed in Trichoderma reesei QM9414 using the strong constitutive promoter of the gene encoding pyruvate decarboxylase (pdc), and each was purified to homogeneity. We found that truncation of the gene improved its expression efficiency as well as the enzymatic properties of the encoded protein. The recombinant strain expressing ThMan5AΔCBM produced 2,460 ± 45.1 U/ml of mannanase activity in the culture supernatant; 2.3-fold higher than when expressing the full-length ThMan5A gene. In addition, the truncated mannanase had superior thermostability compared with the full-length enzyme and retained 100% of its activity after incubation at 60 °C for 48 h. Our results clearly show that the truncated ThMan5A enzyme exhibited improved characteristics both in expression efficiency and in thermal stability. These characteristics suggest that ThMan5AΔCBM has potential applications in the food, feed, paper, and pulp industries.

  7. A discontinuous galerkin time domain-boundary integral method for analyzing transient electromagnetic scattering

    KAUST Repository

    Li, Ping

    2014-07-01

This paper presents an algorithm hybridizing the discontinuous Galerkin time domain (DGTD) method and a time domain boundary integral (BI) algorithm for 3-D open-region electromagnetic scattering analysis. The computational domain of DGTD is rigorously truncated by analytically evaluating the incoming numerical flux from outside the truncation boundary through the BI method, based on Huygens' principle. The proposed method has several advantages: it allows the truncation boundary to be conformal to arbitrary (convex/concave) scattering objects, and well-separated scatterers can be truncated by their local meshes without losing the physics (such as coupling/multiple scattering) of the problem, thus reducing the total number of mesh elements. Furthermore, low-frequency waves can be efficiently absorbed, and the field outside the truncation domain can be conveniently calculated using the same BI formulation. Numerical examples are benchmarked to demonstrate the accuracy and versatility of the proposed method.

  8. Blunt traumatic axillary artery truncation, in the absence of associated fracture.

    Science.gov (United States)

    Bokser, Emily; Caputo, William; Hahn, Barry; Greenstein, Josh

    2018-02-01

    Axillary artery injuries can be associated with both proximal humeral fractures (Naouli et al., 2016; Ng et al., 2016) [1,2] as well as shoulder dislocations (Leclerc et al., 2017; Karnes et al., 2016) [3,4]. We report a rare case of an isolated axillary artery truncation following blunt trauma without any associated fracture or dislocation. A 58-year-old male presented to the emergency department for evaluation after falling on his outstretched right arm. The patient was found to have an absent right radial pulse with decreased sensation to the right arm. Point of care ultrasound showed findings suspicious for traumatic axillary artery injury, and X-rays did not demonstrate any fracture. Computed tomography with angiography confirmed axillary artery truncation with active extravasation. The patient underwent successful vascular repair with an axillary artery bypass. Although extremity injuries are common in emergency departments, emergency physicians need to recognize the risk for vascular injuries, even without associated fracture or dislocation. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. The Error Reporting in the ATLAS TDAQ system

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2014-01-01

The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and the shift crew to track and address errors relating to the data-taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors happening at run-time to a place where they can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting Service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, or in distributed middleware, which can transport it to an expert system or dis...

  10. The Error Reporting in the ATLAS TDAQ System

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2015-01-01

The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and the shift crew to track and address errors relating to the data-taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors happening at run-time to a place where they can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting Service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, or in distributed middleware, which can transport it to an expert system or dis...

  11. Unified theory of fermion pair to boson mappings in full and truncated spaces

    International Nuclear Information System (INIS)

    Ginocchio, J.N.; Johnson, C.W.

    1995-01-01

    After a brief review of various mappings of fermion pairs to bosons, we rigorously derive a general approach. Following the methods of Marumori and Otsuka, Arima, and Iachello, our approach begins with mapping states and constructs boson representations that preserve fermion matrix elements. In several cases these representations factor into finite, Hermitian boson images times a projection or norm operator that embodies the Pauli principle. We pay particular attention to truncated boson spaces, and describe general methods for constructing Hermitian and approximately finite boson image Hamiltonians. This method is akin to that of Otsuka, Arima, and Iachello introduced in connection with the interacting boson model, but is more rigorous, general, and systematic

  12. Maltose binding protein-fusion enhances the bioactivity of truncated forms of pig myostatin propeptide produced in E. coli.

    Directory of Open Access Journals (Sweden)

    Sang Beum Lee

Full Text Available Myostatin (MSTN) is a potent negative regulator of skeletal muscle growth. MSTN propeptide (MSTNpro) inhibits MSTN binding to its receptor through complex formation with MSTN, implying that MSTNpro can be a useful agent to improve skeletal muscle growth in meat-producing animals. Four different truncated forms of pig MSTNpro containing N-terminal maltose binding protein (MBP) as a fusion partner were expressed in E. coli, and purified by the combination of affinity chromatography and gel filtration. The MSTN-inhibitory capacities of these proteins were examined in an in vitro gene reporter assay. A MBP-fused, truncated MSTNpro containing residues 42-175 (MBP-Pro42-175) exhibited the same MSTN-inhibitory potency as the full-sequence MSTNpro. Truncated MSTNpro proteins containing either residues 42-115 (MBP-Pro42-115) or 42-98 (MBP-Pro42-98) also exhibited MSTN-inhibitory capacity, even though the potencies were significantly lower than that of the full-sequence MSTNpro. In pull-down assays, MBP-Pro42-175, MBP-Pro42-115, and MBP-Pro42-98 demonstrated their binding to MSTN. MBP was removed from the truncated MSTNpro proteins by incubation with factor Xa to examine the potential role of MBP in the MSTN-inhibitory capacity of those proteins. Removal of MBP from MBP-Pro42-175 and MBP-Pro42-98 resulted in a 20-fold decrease in the MSTN-inhibitory capacity of Pro42-175 and abolition of the MSTN-inhibitory capacity of Pro42-98, indicating that MBP as a fusion partner enhanced the MSTN-inhibitory capacity of those truncated MSTNpro proteins. In summary, this study shows that MBP is a very useful fusion partner in enhancing the MSTN-inhibitory potency of truncated forms of MSTNpro proteins, and that MBP-fused pig MSTNpro consisting of amino acid residues 42-175 is sufficient to maintain the full MSTN-inhibitory capacity.

  13. The effects of road traffic noise on the students\\' errors in movement time anticipation; the role of introversion

    Directory of Open Access Journals (Sweden)

    I Alimohammadi

    2012-12-01

Full Text Available   Background and Aims: Traffic noise is one of the most important sources of urban noise pollution, causing various physical and mental effects, impairment of daily activities, sleep disturbance, hearing loss and reduced job performance. It can therefore significantly reduce concentration and increase the rate of traffic accidents. Individual differences, such as personality type, affect the impact of noise.   Methods: Traffic noise was measured and recorded in 10 arterial streets in Tehran; the average sound pressure level was 72.9 dB, and a two-hour recording was played for participants in an acoustic room. The sample consisted of 80 participants (40 cases and 40 controls) who were students of Tehran University of Medical Sciences. Personality type was determined using the Eysenck Personality Inventory (EPI) questionnaire. Movement time anticipation error before and after exposure to traffic noise was measured by the ZBA computerized test.   Results: The results revealed that movement time anticipation errors before exposure to traffic noise differed significantly between introverts and extraverts: introverts made fewer errors than extraverts before exposure, whereas extraverts made fewer errors than introverts after exposure to traffic noise.   Conclusion: According to the obtained results, noise had different effects on performance depending on personality type. Extraverts may be expected to adapt better to noise during mental performance, compared to people with the opposite personality trait.

  14. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.

    Science.gov (United States)

    Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J

    2016-04-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.

  15. Vintage errors: do real-time economic data improve election forecasts?

    Directory of Open Access Journals (Sweden)

    Mark Andreas Kayser

    2015-07-01

Full Text Available Economic performance is a key component of most election forecasts. When fitting models, however, most forecasters unwittingly assume that the actual state of the economy, a state best estimated by the multiple periodic revisions to official macroeconomic statistics, drives voter behavior. The difference in macroeconomic estimates between revised and original data vintages can be substantial, commonly over 100% (two-fold) for economic growth estimates, making the choice of which data release to use important for the predictive validity of a model. We systematically compare the predictions of four forecasting models for numerous US presidential elections using real-time and vintage data. We find that newer data are not better data for election forecasting: forecasting error increases with data revisions. This result suggests that voter perceptions of economic growth are influenced more by media reports about the economy, which are based on initial economic estimates, than by the actual state of the economy.

  16. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense

    Directory of Open Access Journals (Sweden)

    C.Y. Nishikawa

    2012-02-01

Full Text Available Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ54 co-factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH4Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription.

  17. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense

    Energy Technology Data Exchange (ETDEWEB)

    Nishikawa, C.Y.; Araújo, L.M.; Kadowaki, M.A.S.; Monteiro, R.A.; Steffens, M.B.R.; Pedrosa, F.O.; Souza, E.M.; Chubatsu, L.S. [Departamento de Bioquímica e Biologia Molecular, Universidade Federal do Paraná, Curitiba, PR (Brazil)

    2012-01-27

    Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ{sup 54} factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH{sub 4}Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription.

  18. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense.

    Science.gov (United States)

    Nishikawa, C Y; Araújo, L M; Kadowaki, M A S; Monteiro, R A; Steffens, M B R; Pedrosa, F O; Souza, E M; Chubatsu, L S

    2012-02-01

    Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ(54) co-factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH(4)Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription.

  19. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense

    International Nuclear Information System (INIS)

    Nishikawa, C.Y.; Araújo, L.M.; Kadowaki, M.A.S.; Monteiro, R.A.; Steffens, M.B.R.; Pedrosa, F.O.; Souza, E.M.; Chubatsu, L.S.

    2012-01-01

    Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ 54 factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH 4 Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription

  20. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    Science.gov (United States)

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on the detection and correction of publication bias in meta-analysis focuses on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations in which publication bias may be induced: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias under various situations.
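The paper's estimators cover fixed- and random-effects models; purely as a hedged illustration of the core idea (maximum likelihood on a left-truncated normal), the sketch below recovers an underlying mean effect from "published" effects that survived a significance cutoff. The true mean, σ, cutoff and sample size are all invented for the demo:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
mu_true, sigma, cutoff = 0.2, 1.0, 0.5     # hypothetical demo values
effects = rng.normal(mu_true, sigma, 20000)
published = effects[effects > cutoff]       # only "significant" effects appear

def neg_loglik(mu):
    # left-truncated normal likelihood: renormalize by P(X > cutoff | mu)
    log_tail = stats.norm.logsf(cutoff, loc=mu, scale=sigma)
    return -(stats.norm.logpdf(published, mu, sigma).sum()
             - published.size * log_tail)

mle = optimize.minimize_scalar(neg_loglik, bounds=(-2.0, 2.0), method="bounded").x
print(f"naive mean {published.mean():.3f}, truncation-corrected MLE {mle:.3f}")
```

The naive mean of the published effects is biased upward (roughly 1.2 here), while the truncated-normal MLE should land near the true 0.2, which is the bias-correction mechanism the abstract describes.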

  1. Organisation and melting of solution grown truncated lozenge polyethylene single crystals

    NARCIS (Netherlands)

    Loos, J.; Tian, M.

    2003-01-01

    Morphological features and the melting behaviour of truncated lozenge crystals have been studied. For the crystals investigated, the heights of the (110) and the (200) sectors were measured to be 14.5 and 12.7 nm, respectively, using atomic force microscopy (AFM) in contact and non-contact mode.

  2. Truncated SALL1 Impedes Primary Cilia Function in Townes-Brocks Syndrome

    DEFF Research Database (Denmark)

    Bozal-Basterra, Laura; Martín-Ruíz, Itziar; Pirone, Lucia

    2018-01-01

    by mutations in the gene encoding the transcriptional repressor SALL1 and is associated with the presence of a truncated protein that localizes to the cytoplasm. Here, we provide evidence that SALL1 mutations might cause TBS by means beyond its transcriptional capacity. By using proximity proteomics, we show...

  3. Performance evaluation of FSO system using wavelength and time diversity over malaga turbulence channel with pointing errors

    Science.gov (United States)

    Balaji, K. A.; Prabu, K.

    2018-03-01

There is an immense demand for high-bandwidth, high-data-rate systems, which is fulfilled by wireless optical communication, or free space optics (FSO). FSO has therefore gained a pivotal role in research, with the added advantages of being cost-effective and offering licence-free, huge bandwidth. Unfortunately, the optical signal in free space suffers from irradiance and phase fluctuations due to atmospheric turbulence and pointing errors, which deteriorate the signal and degrade the performance of the communication system over longer distances, which is undesirable. In this paper, we consider a polarization shift keying (POLSK) system combined with wavelength and time diversity techniques over the Malaga (M) distribution to mitigate turbulence-induced fading. We derive closed-form mathematical expressions for estimating the system's outage probability and average bit error rate (BER). From the results we infer that wavelength and time diversity schemes enhance the system's performance.
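A closed-form Malaga analysis is beyond a short sketch, but the qualitative benefit of diversity can be seen with a hedged Monte Carlo: below, log-normal fading stands in for the Malaga model, and independently faded copies of each BPSK symbol (as time/wavelength diversity would provide) are equal-gain combined. All parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def ber(n_branches, n_bits=200_000, snr_db=10.0, sigma_x=0.3):
    """Monte Carlo BER of BPSK combining n_branches independently faded copies."""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits) * 2 - 1            # +/-1 symbols
    y = np.zeros(n_bits)
    for _ in range(n_branches):
        # log-normal irradiance with E[h] = 1 (log-amplitude variance sigma_x^2)
        h = np.exp(rng.normal(-2 * sigma_x**2, 2 * sigma_x, n_bits))
        y += h * bits + rng.standard_normal(n_bits) / np.sqrt(snr)
    return np.mean(np.sign(y) != bits)

b1, b3 = ber(1), ber(3)
print(f"BER without diversity: {b1:.2e}, with 3-branch diversity: {b3:.2e}")
```

With three branches the combined signal rarely fades deeply, so the simulated BER drops sharply relative to the single-branch case, mirroring the diversity gain the abstract reports analytically.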

  4. Efficient Tridiagonal Preconditioner for the Matrix-Free Truncated Newton Method

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    2014-01-01

    Roč. 235, 25 May (2014), s. 394-407 ISSN 0096-3003 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : unconstrained optimization * large scale optimization * matrix-free truncated Newton method * preconditioned conjugate gradient method * preconditioners obtained by the directional differentiation * numerical algorithms Subject RIV: BA - General Mathematics Impact factor: 1.551, year: 2014

  5. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  6. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    Science.gov (United States)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
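The double-byte-error formulas in this paper require solving a quadratic over GF(256); as a hedged, much simpler illustration of the same "directly from the syndrome" idea, the sketch below corrects a single byte error, locating it from the ratio of two syndromes without any iterative locator-polynomial algorithm. The field polynomial 0x11D and the code parameters are chosen for the demo, not taken from the paper:

```python
# Build GF(256) log/antilog tables for the primitive polynomial x^8+x^4+x^3+x^2+1.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]          # wrap-around so EXP[a+b] needs no reduction

def syndrome(word, j):
    """S_j = sum_i word[i] * alpha^(i*j) -- zero for every valid codeword."""
    s = 0
    for i, c in enumerate(word):
        if c:
            s ^= EXP[(LOG[c] + i * j) % 255]
    return s

word = [0] * 15            # the all-zero word is a codeword of any linear code
word[9] ^= 0x5A            # inject a single byte error at position 9

S0, S1 = syndrome(word, 0), syndrome(word, 1)
pos = (LOG[S1] - LOG[S0]) % 255   # error location, read directly off the syndromes
word[pos] ^= S0                   # S0 is the error value; XOR corrects it
print(pos, hex(S0), word == [0] * 15)   # → 9 0x5a True
```

For a single error e at position p, S0 = e and S1 = e·α^p, so the location is just the discrete log of S1/S0; the paper's contribution is the analogous direct (non-iterative) solution for the two-error case.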

  7. Design of a real-time spectroscopic rotating compensator ellipsometer without systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Broch, Laurent, E-mail: laurent.broch@univ-lorraine.fr [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Stein, Nicolas [Institut Jean Lamour, Universite de Lorraine, UMR 7198 CNRS, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Zimmer, Alexandre [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 6303 CNRS, Universite de Bourgogne, 9 avenue Alain Savary BP 47870, F-21078 Dijon Cedex (France); Battie, Yann; Naciri, Aotmane En [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France)

    2014-11-28

We describe a spectroscopic ellipsometer in the visible domain (400–800 nm) based on rotating compensator technology using two detectors. The classical analyzer is replaced by a fixed Rochon birefringent beamsplitter which splits the incident light wave into two perpendicularly polarized waves, one oriented at + 45° and the other at − 45° with respect to the plane of incidence. Both emergent optical signals are analyzed by two identical CCD detectors which are synchronized by an optical encoder fixed on the shaft of the step-by-step motor of the compensator. The final spectrum is the result of the two averaged Ψ and Δ spectra acquired by both detectors. We show that Ψ and Δ spectra are acquired without systematic errors over a spectral range fixed from 400 to 800 nm. The acquisition time can be adjusted down to 25 ms. The setup was validated by monitoring the first steps of bismuth telluride film electrocrystallization. The results show that experimental growth parameters, such as film thickness and the volume fraction of deposited material, can be extracted with better trueness. - Highlights: • High-speed rotating compensator ellipsometer equipped with 2 detectors. • Ellipsometric angles acquired without systematic errors. • In-situ monitoring of electrocrystallization of a bismuth telluride thin layer. • High accuracy of fitted physical parameters.

  8. Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves

    Science.gov (United States)

    Misra, R.; Bora, A.; Dewangan, G.

    2018-04-01

Temporal analysis of radiation from astrophysical sources like Active Galactic Nuclei, X-ray binaries and gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally the estimates are made by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for estimating the errors on the cross-correlation, phase and time lag between two shorter light curves when they cannot be divided into segments. The estimates presented here thus allow for the analysis of light curves with a relatively small number of points, as well as for obtaining information on the longest time-scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the Active Galactic Nucleus Akn 564. The example shows that the estimates presented here allow for the analysis of light curves with a relatively small (∼1000) number of points.
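The error expressions themselves are in the paper; as a baseline, the point estimate they attach errors to can be sketched as a normalized cross-correlation of two evenly sampled light curves, maximized over lag. The synthetic curves and the 5-bin lag below are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_lag = 1000, 5
# a smoothed-noise "light curve" observed in two bands, band b lagging band a
base = np.convolve(rng.standard_normal(n + true_lag), np.ones(5) / 5, mode="same")
a = base[true_lag:] + 0.05 * rng.standard_normal(n)
b = base[:n] + 0.05 * rng.standard_normal(n)

def ccf(x, y, max_lag):
    """Normalized cross-correlation r(l) = <x(t) y(t+l)> for |l| <= max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.mean(x[max(0, -l):n - max(0, l)] * y[max(0, l):n - max(0, -l)])
                  for l in lags])
    return lags, r

lags, r = ccf(a, b, 20)
best = lags[np.argmax(r)]
print("estimated lag:", best)   # should recover a lag near +5
```

The paper's point is the uncertainty on such an estimate when the light curves are too short to segment; the sketch only shows the quantity being estimated.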

  9. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated with the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving nonsymmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  10. A truncated spherical shell model for nuclear collective excitations: Applications to the odd-mass systems, neutron-proton systems, and other topics

    International Nuclear Information System (INIS)

    Wu, Hua.

    1989-01-01

    One of the most elusive quantum systems in nature is the nucleus, a strongly interacting many-body system. In the hadronic (i.e., neutron and proton) phase, the primary concern of this thesis, the nucleus' single-particle excitations are intertwined with its various collective excitations. Although the underpinning of the nucleus is the spherical shell model, it is rendered powerless without a severe but intelligent truncation of the infinite Hilbert space. The recently proposed Fermion Dynamical Symmetry Model (FDSM) is precisely such a truncation scheme, in which a symmetry-dictated truncation is introduced in nuclear physics for the first time. In this thesis, extensions and explorations of the FDSM are made specifically to study odd-mass systems (where the most intricate mixing of single-particle and collective excitations is observed) and neutron-proton systems. In particular, the author finds that the previously successful phenomenological particle-rotor model of the Copenhagen school can now be well understood microscopically via the FDSM. Furthermore, the well-known Coriolis attenuation and variable-moment-of-inertia effects are naturally understood from the model as well. A computer code, FDUO, was written by the author to study, for the first time, the numerical implications of the FDSM. Several collective modes were found even when the system does not admit a group-chain description. In addition, the code is well suited to study the connection between level-statistical behavior (à la Gaussian Orthogonal Ensemble) and dynamical symmetry. It is found that there exist critical regions of the interaction parameter space where the system behaves chaotically. This information is certainly crucial to understanding quantum chaotic behavior.

  11. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  12. Family losses following truncation selection in populations of half-sib families

    Science.gov (United States)

    J. H. Roberds; G. Namkoong; H. Kang

    1980-01-01

    Family losses during truncation selection may be sizable in populations of half-sib families. Substantial losses may occur even in populations containing little or no variation among families. Heavier losses will occur, however, under conditions of high heritability where there is considerable family variation. Standard deviations and therefore variances of family loss...
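
    The loss process described above lends itself to a quick Monte Carlo check. The sketch below simulates truncation selection across half-sib families and reports the fraction of families losing all members; the variance split and selection intensity are hypothetical parameters, not the paper's model:

```python
import numpy as np

def family_loss(n_families, family_size, var_between, cut_fraction, rng=None):
    # Phenotype = family effect + within-family deviation (total variance 1).
    rng = np.random.default_rng(rng)
    fam = rng.standard_normal((n_families, 1)) * np.sqrt(var_between)
    dev = rng.standard_normal((n_families, family_size)) * np.sqrt(1 - var_between)
    pheno = fam + dev
    cutoff = np.quantile(pheno, cut_fraction)   # truncation point
    survivors = (pheno >= cutoff).sum(axis=1)   # selected members per family
    return (survivors == 0).mean()              # fraction of families lost entirely

lost = family_loss(n_families=200, family_size=10,
                   var_between=0.3, cut_fraction=0.8, rng=0)
```

    With more intense truncation (larger `cut_fraction`), a larger fraction of families loses every member, mirroring the family-loss effect discussed in the record.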

  13. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
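
    The contrast between coherent and stochastic (Pauli) error accumulation can already be seen for a single unencoded qubit. The sketch below is a generic illustration, not the paper's repetition-code calculation: it compares the bit-flip probability after n identical over-rotations with the prediction of the Pauli-twirled model of the same channel:

```python
import numpy as np

def coherent_flip_prob(eps, n):
    # n coherent rotations R_x(eps) compose to R_x(n*eps): amplitudes add,
    # so the flip probability initially grows quadratically in n.
    return np.sin(n * eps / 2.0) ** 2

def pauli_flip_prob(eps, n):
    # Pauli-twirled model: each step flips independently with p = sin^2(eps/2);
    # probabilities (not amplitudes) combine, giving linear early growth.
    p = np.sin(eps / 2.0) ** 2
    return 0.5 * (1.0 - (1.0 - 2.0 * p) ** n)

eps = 0.01    # hypothetical small rotation-angle error
steps = 100
coh = coherent_flip_prob(eps, steps)
pauli = pauli_flip_prob(eps, steps)
```

    After 100 cycles the coherent channel has flipped with probability sin²(0.5) ≈ 0.23, far above the twirled model's prediction, which is the qualitative effect the paper quantifies at the logical level.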

  14. Designing and evaluating an automated system for real-time medication administration error detection in a neonatal intensive care unit.

    Science.gov (United States)

    Ni, Yizhao; Lingren, Todd; Hall, Eric S; Leonard, Matthew; Melton, Kristin; Kirkendall, Eric S

    2018-05-01

    Timely identification of medication administration errors (MAEs) promises great benefits for mitigating medication errors and associated harm. Despite previous efforts utilizing computerized methods to monitor medication errors, sustaining effective and accurate detection of MAEs remains challenging. In this study, we developed a real-time MAE detection system and evaluated its performance prior to system integration into institutional workflows. Our prospective observational study included automated MAE detection of 10 high-risk medications and fluids for patients admitted to the neonatal intensive care unit at Cincinnati Children's Hospital Medical Center during a 4-month period. The automated system extracted real-time medication use information from the institutional electronic health records and identified MAEs using logic-based rules and natural language processing techniques. The MAE summary was delivered via a real-time messaging platform to promote reduction of patient exposure to potential harm. System performance was validated using a physician-generated gold standard of MAE events, and results were compared with those of current practice (incident reporting and trigger tools). Physicians identified 116 MAEs from 10,104 medication administrations during the study period. Compared to current practice, the sensitivity of automated MAE detection improved significantly, from 4.3% to 85.3% (P = .009), with a positive predictive value of 78.0%. Furthermore, the system showed potential to reduce patient exposure to potential harm following MAE events, from 256 min to 35 min.

  15. Nearly suppressed photoluminescence blinking of small-sized, blue-green-orange-red emitting single CdSe-based core/gradient alloy shell/shell quantum dots: correlation between truncation time and photoluminescence quantum yield.

    Science.gov (United States)

    Roy, Debjit; Mandal, Saptarshi; De, Chayan K; Kumar, Kaushalendra; Mandal, Prasun K

    2018-04-18

    CdSe-based core/gradient alloy shell/shell semiconductor quantum dots (CGASS QDs) have been shown to be optically quite superior compared to core-shell QDs. However, very little is known about CGASS QDs at the single particle level. Photoluminescence blinking dynamics of four differently emitting (blue (λem = 510), green (λem = 532), orange (λem = 591), and red (λem = 619)) single CGASS QDs having average sizes 600 nm). In this manuscript, we report nearly suppressed PL blinking behaviour of CGASS QDs with average sizes correlation between the event durations and found that residual memory exists in both the ON- and OFF-event durations. Positively correlated successive ON-ON and OFF-OFF event durations and negatively correlated (anti-correlated) ON-OFF event durations perhaps suggest the involvement of more than one type of trapping process within the blinking framework. The timescale corresponding to the additional exponential term has been assigned to hole trapping for ON-event duration statistics. Similarly, for OFF-event duration statistics, this component suggests hole detrapping. We found that the average duration of the exponential process for the ON-event durations is an order of magnitude higher than that of the OFF-event durations. This indicates that the holes are trapped for a significantly long time. When electron trapping is followed by such a hole trapping, long ON-event durations result. We have observed long ON-event durations, as high as 50 s. The competing charge tunnelling model has been used to account for the observed blinking behaviour in these CGASS QDs. Quite interestingly, the PLQY of all of these differently emitting QDs (an ensemble level property) could be correlated with the truncation time (a property at the single particle level). A respective concomitant increase-decrease of ON-OFF event truncation times with increasing PLQY is also indicative of a varying degree of suppression of the Auger recombination processes in these four

  16. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. LGI2 truncation causes a remitting focal epilepsy in dogs.

    Directory of Open Access Journals (Sweden)

    Eija H Seppälä

    2011-07-01

    Full Text Available One quadrillion synapses are laid down in the first two years of postnatal construction of the human brain, which are then pruned until age 10, leaving the 500 trillion synapses that compose the final network. Genetic epilepsies are the most common neurological diseases with onset during pruning, affecting 0.5% of 2-10-year-old children, and these epilepsies are often characterized by spontaneous remission. We previously described a remitting epilepsy in the Lagotto romagnolo canine breed. Here, we identify the gene defect and affected neurochemical pathway. We reconstructed a large Lagotto pedigree of around 34 affected animals. Using genome-wide association in 11 discordant sib-pairs from this pedigree, we mapped the disease locus to a 1.7 Mb region of homozygosity on chromosome 3, where we identified a protein-truncating mutation in the Lgi2 gene, a homologue of the human epilepsy gene LGI1. We show that LGI2, like LGI1, is neuronally secreted and acts on metalloproteinase-lacking members of the ADAM family of neuronal receptors, which function in synapse remodeling, and that LGI2 truncation, like LGI1 truncations, prevents secretion and ADAM interaction. The resulting epilepsy onsets at around seven weeks (equivalent to human two years) and remits by four months (human eight years), versus onset after age eight in the majority of human patients with LGI1 mutations. Finally, we show that Lgi2 is expressed highly in the immediate post-natal period until halfway through pruning, unlike Lgi1, which is expressed in the latter part of pruning and beyond. LGI2 acts at least in part through the same ADAM receptors as LGI1, but earlier, ensuring electrical stability (absence of epilepsy) during pruning years, preceding this same function performed by LGI1 in later years. LGI2 should be considered a candidate gene for common remitting childhood epilepsies, and the LGI2-to-LGI1 transition for mechanisms of childhood epilepsy remission.

  18. Medication errors versus time of admission in a subpopulation of stroke patients undergoing inpatient rehabilitation complications and considerations.

    Science.gov (United States)

    Pitts, Eric P

    2011-01-01

    This study looked at the medication ordering error frequency and the length of inpatient hospital stay in a subpopulation of stroke patients (n = 60) as a function of time of patient admission to an inpatient rehabilitation hospital service. A total of 60 inpatient rehabilitation patients, 30 arriving before 4 pm and 30 arriving after 4 pm, with an admitting diagnosis of stroke, were randomly selected from a larger sample (N = 426). There was a statistically significant increase in medication ordering errors and in the number of inpatient rehabilitation hospital days in the group of patients who arrived after 4 pm.

  19. A Novel Truncated Form of Serum Amyloid A in Kawasaki Disease.

    Directory of Open Access Journals (Sweden)

    John C Whitin

    Full Text Available Kawasaki disease (KD) is an acute vasculitis in children that can cause coronary artery abnormalities. Its diagnosis is challenging, and many cytokines, chemokines, acute phase reactants, and growth factors have failed evaluation as specific biomarkers to distinguish KD from other febrile illnesses. We performed protein profiling, comparing plasma from children with KD with that of febrile control (FC) subjects, to determine if there were specific proteins or peptides that could distinguish the two clinical states. Plasma from three independent cohorts from the blood of 68 KD and 61 FC subjects was fractionated by anion exchange chromatography, followed by surface-enhanced laser desorption ionization (SELDI) mass spectrometry of the fractions. The mass spectra of KD and FC plasma samples were analyzed for peaks that were statistically significantly different. A mass spectrometry peak with a mass of 7,860 Da had high intensity in acute KD subjects compared to subacute KD (p = 0.0003) and FC (p = 7.9 x 10^-10) subjects. We identified this peak as a novel truncated form of serum amyloid A with N-terminus at Lys-34 of the circulating form and validated its identity using a hybrid mass spectrum immunoassay technique. The truncated form of serum amyloid A was present in plasma of KD subjects when blood was collected in tubes containing protease inhibitors. This peak disappeared when the patients were examined after their symptoms resolved. Intensities of this peptide did not correlate with KD-associated laboratory values or with other mass spectrum peaks from the plasma of these KD subjects. Using SELDI mass spectrometry, we have discovered a novel truncated form of serum amyloid A that is elevated in the plasma of KD when compared with FC subjects. Future studies will evaluate its relevance as a diagnostic biomarker and its potential role in the pathophysiology of KD.

  20. A computational approach for fluid queues driven by truncated birth-death processes.

    NARCIS (Netherlands)

    Lenin, R.B.; Parthasarathy, P.R.

    2000-01-01

    In this paper, we analyze fluid queues driven by truncated birth-death processes with general birth and death rates. We compute the equilibrium distribution of the content of the fluid buffer by providing efficient numerical procedures to compute the eigenvalues and the eigenvectors of the
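
    The driving process itself is easy to construct numerically. A minimal sketch (hypothetical rates; this does not reproduce the paper's fluid-buffer spectral computation) builds the generator of a truncated birth-death process and checks its stationary distribution via detailed balance:

```python
import numpy as np

def bd_generator(birth, death):
    # Generator Q of a truncated birth-death process on states 0..n,
    # with birth[i] the rate i -> i+1 and death[i] the rate i+1 -> i.
    n = len(birth)
    Q = np.zeros((n + 1, n + 1))
    for i in range(n):
        Q[i, i + 1] = birth[i]
        Q[i + 1, i] = death[i]
    np.fill_diagonal(Q, -Q.sum(axis=1))  # rows of a generator sum to zero
    return Q

birth = np.array([1.0, 1.0, 1.0])   # hypothetical general birth rates
death = np.array([2.0, 2.0, 2.0])   # hypothetical general death rates
Q = bd_generator(birth, death)

# Stationary distribution from detailed balance: pi[i+1] = pi[i]*birth[i]/death[i]
pi = np.cumprod(np.concatenate(([1.0], birth / death)))
pi /= pi.sum()
```

    The paper's fluid-queue analysis then uses the eigenvalues and eigenvectors associated with such a generator to obtain the equilibrium buffer-content distribution.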

  1. Hamiltonian formulation of quantum error correction and correlated noise: Effects of syndrome extraction in the long-time limit

    Science.gov (United States)

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2008-07-01

    We analyze the long-time behavior of a quantum computer running a quantum error correction (QEC) code in the presence of a correlated environment. Starting from a Hamiltonian formulation of realistic noise models, and assuming that QEC is indeed possible, we find formal expressions for the probability of a given syndrome history and the associated residual decoherence encoded in the reduced density matrix. Systems with nonzero gate times (“long gates”) are included in our analysis by using an upper bound on the noise. In order to introduce the local error probability for a qubit, we assume that propagation of signals through the environment is slower than the QEC period (hypercube assumption). This allows an explicit calculation in the case of a generalized spin-boson model and a quantum frustration model. The key result is a dimensional criterion: If the correlations decay sufficiently fast, the system evolves toward a stochastic error model for which the threshold theorem of fault-tolerant quantum computation has been proven. On the other hand, if the correlations decay slowly, the traditional proof of this threshold theorem does not hold. This dimensional criterion bears many similarities to criteria that occur in the theory of quantum phase transitions.

  2. A truncated Kv1.1 protein in the brain of the megencephaly mouse: expression and interaction

    Directory of Open Access Journals (Sweden)

    Århem Peter

    2005-11-01

    Full Text Available Abstract Background The megencephaly mouse, mceph/mceph, is epileptic and displays a dramatically increased brain volume and neuronal count. The responsible mutation was recently revealed to be an eleven-base-pair deletion, leading to a frame shift, in the gene encoding the potassium channel Kv1.1. The predicted MCEPH protein is truncated at amino acid 230 out of 495. Truncated proteins are usually not expressed since nonsense mRNAs are most often degraded. However, high Kv1.1 mRNA levels in mceph/mceph brain indicated that it escaped this control mechanism. Therefore, we hypothesized that the truncated Kv1.1 would be expressed and dysregulate other Kv1 subunits in the mceph/mceph mice. Results We found that the MCEPH protein is expressed in the brain of mceph/mceph mice. MCEPH was found to lack mature (Golgi) glycosylation, but to be core glycosylated and trapped in the endoplasmic reticulum (ER). Interactions between MCEPH and other Kv1 subunits were studied in cell culture, Xenopus oocytes and the brain. MCEPH can form tetramers with Kv1.1 in cell culture and has a dominant negative effect on Kv1.2 and Kv1.3 currents in oocytes. However, it does not retain Kv1.2 in the ER of neurons. Conclusion The megencephaly mice express a truncated Kv1.1 in the brain, and constitute a unique tool to study Kv1.1 trafficking relevant for understanding epilepsy, ataxia and pathologic brain overgrowth.

  3. Non-linear buckling of an FGM truncated conical shell surrounded by an elastic medium

    International Nuclear Information System (INIS)

    Sofiyev, A.H.; Kuruoglu, N.

    2013-01-01

    In this paper, the non-linear buckling of a truncated conical shell made of functionally graded materials (FGMs) surrounded by an elastic medium has been studied using the large deformation theory with von Karman–Donnell-type kinematic non-linearity. A two-parameter foundation model (Pasternak-type) is used to describe the shell–foundation interaction. The FGM properties are assumed to vary continuously through the thickness direction. The fundamental relations and the modified Donnell-type non-linear stability and compatibility equations of the FGM truncated conical shell resting on the Pasternak-type elastic foundation are derived. By using the Superposition and Galerkin methods, the non-linear stability equations for the FGM truncated conical shell are solved. Finally, the influences of the Winkler foundation stiffness and shear subgrade modulus of the foundation, the compositional profiles, and the shell characteristics on the dimensionless critical non-linear axial load are investigated. The present results are compared with the available data for a special case. -- Highlights: • Nonlinear buckling of FGM conical shell surrounded by elastic medium is studied. • Pasternak foundation model is used to describe the shell–foundation interaction. • Nonlinear basic equations are derived. • Problem is solved by using Superposition and Galerkin methods. • Influences of various parameters on the nonlinear critical load are investigated

  4. Selective translational repression of truncated proteins from frameshift mutation-derived mRNAs in tumors.

    Directory of Open Access Journals (Sweden)

    Kwon Tae You

    2007-05-01

    Full Text Available Frameshift and nonsense mutations are common in tumors with microsatellite instability, and mRNAs from these mutated genes have premature termination codons (PTCs). Abnormal mRNAs containing PTCs are normally degraded by the nonsense-mediated mRNA decay (NMD) system. However, PTCs located within 50-55 nucleotides of the last exon-exon junction are not recognized by NMD (NMD-irrelevant), and some PTC-containing mRNAs can escape from the NMD system (NMD-escape). We investigated protein expression from NMD-irrelevant and NMD-escape PTC-containing mRNAs by Western blotting and transfection assays. We demonstrated that transfection of NMD-irrelevant PTC-containing genomic DNA of MARCKS generates truncated protein. In contrast, NMD-escape PTC-containing versions of hMSH3 and TGFBR2 generate normal levels of mRNA, but do not generate detectable levels of protein. Transfection of NMD-escape mutant TGFBR2 genomic DNA failed to generate expression of truncated proteins, whereas transfection of wild-type TGFBR2 genomic DNA or mutant PTC-containing TGFBR2 cDNA generated expression of wild-type protein and truncated protein, respectively. Our findings suggest a novel mechanism of gene expression regulation for PTC-containing mRNAs in which the deleterious transcripts are regulated either by NMD or by translational repression.

  5. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    Science.gov (United States)

    Li, Xingxing

    2014-05-01

    displacements is accompanied by a drift due to the potential uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real time. The TPP approach can overcome the convergence problem of precise point positioning (PPP), and also avoids the integration and de-trending process of the variometric approach. The performance of TPP is demonstrated to be at the few-centimeter level of displacement accuracy even for a twenty-minute interval with real-time precise orbit and clock products. In this study, we first present and compare the observation models and processing strategies of the current existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is also proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements is carefully analyzed and investigated. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that few-centimeter accuracy of coseismic displacements is achievable. Keywords: High-rate GPS; real-time GPS seismology; single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion.

  6. On the truncation of the azimuthal mode spectrum of high-order probes in probe-corrected spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Laitinen, Tommi

    2011-01-01

    Azimuthal mode (m mode) truncation of a high-order probe pattern in probe-corrected spherical near-field antenna measurements is studied in this paper. The results of this paper provide rules for appropriate and sufficient m-mode truncation for non-ideal first-order probes and odd-order probes wi...

  7. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    Science.gov (United States)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ‖Lx‖ subject to ‖Ax − b‖ = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating the rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
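
    The TRSVD step itself (range sketching with oversampling, then truncation to rank k) can be sketched as follows. This is the textbook randomized SVD, not the paper's full MTRSVD with general-form regularization, and the test matrix sizes are illustrative:

```python
import numpy as np

def truncated_rsvd(A, k, q=5, rng=None):
    # Rank-k TRSVD: sketch with k+q Gaussian test vectors, project,
    # take a small SVD, and keep only the leading k singular triplets.
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))   # Gaussian test matrix
    Y = A @ Omega                             # sample the range of A
    Q_mat, _ = np.linalg.qr(Y)                # orthonormal basis of the range
    B = Q_mat.T @ A                           # small (k+q) x n projection
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q_mat @ Ub
    return U[:, :k], s[:k], Vt[:k, :]         # truncate to rank k

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 100))  # rank 30
U, s, Vt = truncated_rsvd(A, k=30, q=10, rng=2)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

    When k + q covers the numerical rank, the truncated factorization reproduces A to near machine precision; the paper's MTRSVD then feeds such approximations into the regularized problem.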

  8. Distinct Ezrin Truncations Differentiate Metastases in Sentinel Lymph Nodes from Unaffected Lymph Node Tissues, from Primary Breast Tumors, and from Healthy Glandular Breast Tissues

    Directory of Open Access Journals (Sweden)

    Claudia Röwer

    2018-02-01

    Full Text Available BACKGROUND: Lymph node metastasis status is a prognostic factor for further lymph node involvement and for patient survival in breast cancer patients. Frozen section analysis of lymph nodes is a reliable method for detection of macro-metastases. However, this method is far less effective in detecting micro-metastases, calling for improved diagnostic procedures. METHODS: We investigated expression and truncation of ezrin in (i) sentinel lymph node metastases, (ii) unaffected axillary lymph nodes, (iii) primary breast tumors, and (iv) healthy glandular breast tissues using 2D gel electrophoresis, SDS-PAGE, and mass spectrometry in addition to Western blotting. RESULTS: Full-length ezrin (E1; amino acids 1–586) is present in all four investigated tissues. Two truncated ezrin forms, one missing about the first hundred amino acids (E2a) and the other lacking about 150 C-terminal amino acids (E2b), were detectable in primary tumor tissues and in sentinel lymph node metastases but not in glandular tissues. Strikingly, an ezrin truncation (E3), which consists approximately of amino acids 238–586, was found strongly expressed in all sentinel lymph node metastases. Moreover, an N-terminal ezrin fragment (E4), which consists approximately of amino acids 1–273, was identified in sentinel lymph node metastases as well. CONCLUSIONS: We show for the first time the existence of tissue-dependent specific ezrin truncations. The distinguished strong Western blot staining of ezrin E3 in sentinel lymph node metastases underlines its capability to substantiate the occurrence of lymph node (micro)metastases in breast cancer patients.

  9. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction.
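
    The report's models cover all known oscillator error sources; as a minimal sketch of the simulation side only, the following two-state model (phase error driven by a random-walk frequency error, with hypothetical noise densities) is directly implementable on a computer:

```python
import numpy as np

def simulate_clock(n, dt, q_phase, q_freq, rng=None):
    # Two-state clock error model: x = time (phase) error in seconds,
    # y = fractional frequency error; each driven by white noise.
    # A minimal sketch: real oscillator models add aging, flicker noise, etc.
    rng = np.random.default_rng(rng)
    x = np.zeros(n)
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = y[k - 1] + np.sqrt(q_freq * dt) * rng.standard_normal()
        x[k] = x[k - 1] + y[k - 1] * dt + np.sqrt(q_phase * dt) * rng.standard_normal()
    return x, y

# Hypothetical noise spectral densities for a stable oscillator.
x, y = simulate_clock(n=1000, dt=1.0, q_phase=1e-22, q_freq=1e-26, rng=0)
```

    A Kalman filter built on the same two-state transition model then provides the filtering and prediction error analysis the record describes.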

  10. Virtues and limitations of the truncated Holstein–Primakoff description of quantum rotors

    International Nuclear Information System (INIS)

    Hirsch, Jorge G; Castaños, Octavio; López-Peña, Ramón; Nahmad-Achar, Eduardo

    2013-01-01

    A Hamiltonian describing the collective behaviour of N interacting spins can be mapped to a bosonic one employing the Holstein–Primakoff realization, at the expense of having an infinite series in powers of the boson creation and annihilation operators. Truncating this series at quadratic terms allows one to obtain analytic solutions through a Bogoliubov transformation, which becomes exact in the limit N → ∞. The Hamiltonian exhibits a phase transition from single-spin excitations to a collective mode. In the vicinity of this phase transition, the truncated solutions predict the existence of singularities for a finite number of spins, which have no counterpart in the exact diagonalization. Renormalization allows one to extract from these divergences the exact behaviour of relevant observables with the number of spins around the phase transition, and to relate it to the universality class to which the model belongs. In this work a detailed analysis of these aspects is presented for the Lipkin model. (comment)

  11. Formation of truncated proteins and high-molecular-mass aggregates upon soft illumination of photosynthetic proteins

    DEFF Research Database (Denmark)

    Rinalducci, Sara; Campostrini, Natascia; Antonioli, Paolo

    2005-01-01

    Different spot profiles were observed in 2D gel electrophoresis of thylakoid membranes performed either under complete darkness or by exposing the sample for a short time to low visible light. In the latter case, a large number of new spots with lower molecular masses, ranging between 15,000 and 25,000 Da, were observed, and high-molecular-mass aggregates, seen as a smearing in the upper part of the gel, appeared in the region around 250 kDa. Identification of protein(s) contained in these new spots by MS/MS revealed that most of them are simply truncated proteins deriving from native ones...

  12. An audit strategy for time-to-event outcomes measured with error: application to five randomized controlled trials in oncology.

    Science.gov (United States)

    Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari

    2013-10-01

    Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and an HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.

  13. Heterozygous RFX6 protein truncating variants are associated with MODY with reduced penetrance.

    Science.gov (United States)

    Patel, Kashyap A; Kettunen, Jarno; Laakso, Markku; Stančáková, Alena; Laver, Thomas W; Colclough, Kevin; Johnson, Matthew B; Abramowicz, Marc; Groop, Leif; Miettinen, Päivi J; Shepherd, Maggie H; Flanagan, Sarah E; Ellard, Sian; Inagaki, Nobuya; Hattersley, Andrew T; Tuomi, Tiinamaija; Cnop, Miriam; Weedon, Michael N

    2017-10-12

    Finding new causes of monogenic diabetes helps understand glycaemic regulation in humans. To find novel genetic causes of maturity-onset diabetes of the young (MODY), we sequenced MODY cases with unknown aetiology and compared variant frequencies to large public databases. From 36 European patients, we identify two probands with novel RFX6 heterozygous nonsense variants. RFX6 protein truncating variants are enriched in the MODY discovery cohort compared to the European control population within ExAC (odds ratio = 131, P = 1 × 10^-4). We find similar results in non-Finnish European (n = 348, odds ratio = 43, P = 5 × 10^-5) and Finnish (n = 80, odds ratio = 22, P = 1 × 10^-6) replication cohorts. RFX6 heterozygotes have reduced penetrance of diabetes compared to common HNF1A and HNF4A-MODY mutations (27, 70 and 55% at 25 years of age, respectively). The hyperglycaemia results from beta-cell dysfunction and is associated with lower fasting and stimulated gastric inhibitory polypeptide (GIP) levels. Our study demonstrates that heterozygous RFX6 protein truncating variants are associated with MODY with reduced penetrance. Maturity-onset diabetes of the young (MODY) is the most common subtype of familial diabetes. Here, Patel et al. use targeted DNA sequencing of MODY patients and large-scale publicly available data to show that RFX6 heterozygous protein truncating variants cause MODY with reduced penetrance.

  14. A truncated conical beam model for analysis of the vibration of rat whiskers.

    Science.gov (United States)

    Yan, Wenyi; Kan, Qianhua; Kergrene, Kenan; Kang, Guozheng; Feng, Xi-Qiao; Rajan, Ramesh

    2013-08-09

    A truncated conical beam model is developed to study the vibration behaviour of a rat whisker. Translational and rotational springs are introduced to better represent the constraint conditions at the base of the whiskers in a living rat. Dimensional analysis shows that the natural frequency of a truncated conical beam with generic spring constraints at its ends is inversely proportional to the square root of the mass density. Under all the combinations of the classical free, pinned, sliding or fixed boundary conditions of a truncated conical beam, it is proved that the natural frequency can be expressed as f = α(r_b/L²)√(E/ρ), where the frequency coefficient α depends only on the ratio of the radii at the two ends of the beam. The natural frequencies of a representative rat whisker are predicted for two typical situations: freely whisking in air and with the tip touching an object. Our numerical results show that there exists a window where the natural frequencies of a rat whisker are very sensitive to changes in the rotational constraint at the base. This finding is also confirmed by the numerical results for 18 whiskers whose data are available in the literature. It can be concluded that the natural frequencies of a rat whisker can be adjusted within a wide range through manipulation of the follicle constraints at the whisker base by a behaving animal. Copyright © 2013 Elsevier Ltd. All rights reserved.
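
    The closed-form frequency expression above can be evaluated directly. A minimal sketch follows; all numerical values are illustrative assumptions, not data from the paper, and the coefficient α would in practice be taken from the paper's results for a given radius ratio and boundary condition:

    ```python
    import math

    def whisker_natural_frequency(alpha, r_b, L, E, rho):
        """Natural frequency of a truncated conical beam, f = alpha * (r_b / L**2) * sqrt(E / rho).

        alpha : dimensionless frequency coefficient (depends only on the ratio of the
                end radii and the boundary conditions; the value used below is illustrative)
        r_b   : base radius (m);  L : beam length (m)
        E     : Young's modulus (Pa);  rho : mass density (kg/m^3)
        """
        return alpha * (r_b / L**2) * math.sqrt(E / rho)

    # Illustrative numbers only: a 30 mm whisker with a 50 um base radius,
    # E = 3 GPa, rho = 1200 kg/m^3, and an assumed coefficient alpha = 0.56.
    f = whisker_natural_frequency(0.56, 50e-6, 30e-3, 3e9, 1200)
    ```

    Note that the inverse-square-root dependence on density stated in the abstract falls out directly: quadrupling ρ halves f.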

  15. Evaluation of drug administration errors in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Berdot Sarah

    2012-03-01

    Full Text Available Abstract Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong time errors, ten occurring simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, the drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. Identifying their determinants helps in designing targeted interventions.

  16. A statistical model for measurement error that incorporates variation over time in the target measure, with application to nutritional epidemiology.

    Science.gov (United States)

    Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor

    2015-11-30

    Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important. Copyright © 2015 John Wiley & Sons, Ltd.
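
    The difference between the fixed-exposure and time-varying analyses can be illustrated with a small simulation. This is a hedged sketch of a classical-error setup, not the authors' meta-analysis model; the variance components and the two slope definitions (long-term target versus day-specific target) are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000

    # True long-term mean intake per person, plus day-to-day (within-person) variation:
    mu = rng.normal(10.0, 2.0, n)          # between-person sd = 2
    t_day = mu + rng.normal(0.0, 3.0, n)   # within-person sd = 3 (time-varying target)

    # Self-report measures the day-specific intake with additional random error:
    q = t_day + rng.normal(0.0, 1.5, n)

    # Attenuation factor for the *long-term* target: slope of the regression of mu on q.
    lam = np.cov(mu, q)[0, 1] / np.var(q)

    # A fixed-exposure analysis would instead regress the day-specific value on the
    # report, giving a larger (less attenuated) slope:
    lam_fixed = np.cov(t_day, q)[0, 1] / np.var(q)
    ```

    With these variance components the time-varying attenuation factor is roughly 4/15.25 ≈ 0.26, while the fixed-exposure analysis gives roughly 13/15.25 ≈ 0.85, illustrating how ignoring the time element can substantially overstate instrument accuracy.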

  17. Error modeling for surrogates of dynamical systems using machine learning

    Science.gov (United States)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-12-01

    A machine-learning-based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests, LASSO) to map a large set of inexpensively computed 'error indicators' (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering), and subsequently constructs a 'local' regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance, and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We apply the proposed framework to model errors in reduced-order models of nonlinear oil–water subsurface flow simulations. The reduced-order models used in this work entail application of trajectory piecewise linearization with proper orthogonal decomposition. When the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
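
    A minimal sketch of the regression step is shown below, substituting plain ridge least squares for the high-dimensional regressors (random forests, LASSO) named in the abstract; the feature and target data are synthetic and all names are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical training data: each row holds 'error indicators' (features)
    # produced by the surrogate at one time instance; the target is the
    # surrogate-model error in the QoI, computed offline from paired
    # high-fidelity and surrogate simulations.
    n_train, n_feat = 500, 6
    X = rng.normal(size=(n_train, n_feat))
    true_w = np.array([0.8, -0.5, 0.3, 0.0, 0.1, -0.2])
    y = X @ true_w + 0.05 * rng.normal(size=n_train)   # time-instantaneous QoI error

    # Ridge regression as a simple stand-in for the paper's regressors.
    lam = 1e-2
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

    # Use (1): correct the surrogate QoI prediction at a new time instance:
    x_new = rng.normal(size=n_feat)
    predicted_error = x_new @ w
    # corrected_qoi = surrogate_qoi + predicted_error
    ```

    The clustering/locality step is omitted here; in the paper's framework a separate local regression model of this kind would be fit within each identified region of feature space.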

  18. Expression and characterization of a novel truncated rotavirus VP4 for the development of a recombinant rotavirus vaccine.

    Science.gov (United States)

    Li, Yijian; Xue, Miaoge; Yu, Linqi; Luo, Guoxing; Yang, Han; Jia, Lianzhi; Zeng, Yuanjun; Li, Tingdong; Ge, Shengxiang; Xia, Ningshao

    2018-04-12

    The outer capsid protein VP4 is an important target for the development of a recombinant rotavirus vaccine because it mediates the attachment and penetration of rotavirus. Due to the poor solubility of full-length VP4, VP8 was explored as a candidate rotavirus vaccine in past years. In previous studies, it was found that the N-terminal truncated VP8 protein, VP8-1 (aa26–231), could be expressed in soluble form with improved immunogenicity compared to the core of VP8 (aa65–223). However, this protein stimulated only a weak immune response when aluminum hydroxide was used as an adjuvant. In addition, it should be noted that the protective efficacy of VP4 is higher than that of VP8 and VP5. In this study, it was found that when the N-terminal 25 amino acids were deleted, the truncated VP4* (aa26–476), containing VP8 and the stalk domain of VP5, could be expressed in soluble form in E. coli and purified to homogeneous trimers. Furthermore, the truncated VP4* could induce high titers of neutralizing antibodies when aluminum adjuvant was used and conferred high protective efficacy in reducing the severity of diarrhea and rotavirus shedding in stools in animal models. The immunogenicity of the truncated VP4* was significantly higher than that of VP8* and VP5* alone. Taken together, the truncated VP4* (aa26–476), with enhanced immunogenicity and immunoprotectivity, could be considered a viable candidate for further development and has the potential to become a parenterally administered rotavirus vaccine. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Identification and purification of truncated insulin-like growth factor I from porcine uterus. Evidence for high biological potency

    International Nuclear Information System (INIS)

    Ogasawara, M.; Karey, K.P.; Marquardt, H.; Sirbasku, D.A.

    1989-01-01

    The authors report the completion of the purification of uterine-derived growth factors (UDGF) described previously by this laboratory. During isolation, the mitogenic activity was monitored by using the human MCF-7 breast cancer cells in serum-free Ham's F12 and Dulbecco's modified Eagle's medium (1:1, v/v) containing 15 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid, 200 μg/mL bovine serum albumin, and 10 μg/mL human transferrin. This medium sustained growth for several days in response to a single addition of growth factor. The mitogens were shown by protein microsequencing to be des(1→3) to des(1→6) forms of insulin-like growth factor I (truncated IGF-I). An M_r estimated by ¹²⁵I labeling, urea-sodium dodecyl sulfate-polyacrylamide gel electrophoresis, and autoradiography was consistent with a des(1→3[4]) Nα truncation. Immunoadsorption and radioimmunoassay confirmed immunological properties equivalent to IGF-I. Radioreceptor assays showed truncated IGF-I was functionally equivalent to recombinant IGF-I. They conclude that the major acid-stable low-M_r mitogenic activities isolated from uterus are very potent forms of truncated IGF-I capable of stimulating growth of epithelial and mesenchymal cells.

  20. Expression and characterization of truncated human heme oxygenase (hHO-1) and a fusion protein of hHO-1 with human cytochrome P450 reductase.

    Science.gov (United States)

    Wilks, A; Black, S M; Miller, W L; Ortiz de Montellano, P R

    1995-04-04

    A human heme oxygenase (hHO-1) gene without the sequence coding for the last 23 amino acids has been expressed in Escherichia coli behind the phoA promoter. The truncated enzyme is obtained in high yields as a soluble, catalytically active protein, making it available for the first time for detailed mechanistic studies. The purified, truncated hHO-1/heme complex is spectroscopically indistinguishable from that of the rat enzyme and converts heme to biliverdin when reconstituted with rat liver cytochrome P450 reductase. A self-sufficient heme oxygenase system has been obtained by fusing the truncated hHO-1 gene to the gene for human cytochrome P450 reductase without the sequence coding for the 20 amino acid membrane binding domain. Expression of the fusion protein in pCWori+ yields a protein that only requires NADPH for catalytic turnover. The failure of exogenous cytochrome P450 reductase to stimulate turnover and the insensitivity of the catalytic rate toward changes in ionic strength establish that electrons are transferred intramolecularly between the reductase and heme oxygenase domains of the fusion protein. The Vmax for the fusion protein is 2.5 times higher than that for the reconstituted system. Therefore, either the covalent tether does not interfere with normal docking and electron transfer between the flavin and heme domains or alternative but equally efficient electron transfer pathways are available that do not require specific docking.

  1. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and thereby ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.

  2. Generalized Runge-Kutta method for two- and three-dimensional space-time diffusion equations with a variable time step

    International Nuclear Information System (INIS)

    Aboanber, A.E.; Hamada, Y.M.

    2008-01-01

    Extensive knowledge of the spatial power distribution is required for the design and analysis of different types of current-generation reactors, which in turn requires the development of more sophisticated theoretical methods. Therefore, the need to develop new methods for multidimensional transient reactor analysis still exists. The objective of this paper is to develop a computationally efficient numerical method for solving the multigroup, multidimensional, static and transient neutron diffusion kinetics equations. A generalized Runge-Kutta method has been developed for the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic time step control. In addition, the A(α)-stability properties of the method are investigated. The analyses of two- and three-dimensional benchmark problems, as well as static and transient problems, demonstrate that very accurate solutions can be obtained with assembly-sized spatial meshes. Preliminary numerical evaluations using two- and three-dimensional finite difference codes showed that the presented generalized Runge-Kutta method is highly accurate and efficient when compared with other optimized iterative numerical and conventional finite difference methods.
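
    The embedded-pair idea, using a lower-order companion solution to estimate the local truncation error and drive automatic step-size control, can be sketched with the standard Bogacki–Shampine 3(2) pair. The paper's method is a fourth-order scheme with an embedded third-order solution; this smaller pair merely illustrates the same mechanism on a scalar test problem:

    ```python
    import math

    def rk23_adaptive(f, t, y, t_end, tol=1e-6):
        """Integrate y' = f(t, y) with the Bogacki-Shampine 3(2) embedded pair.

        The difference between the 3rd- and 2nd-order solutions estimates the
        local truncation error and controls the step size automatically."""
        h = (t_end - t) / 100.0
        while t < t_end:
            h = min(h, t_end - t)
            k1 = f(t, y)
            k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
            k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
            y3 = y + h * (2 * k1 + 3 * k2 + 4 * k3) / 9.0          # 3rd-order solution
            k4 = f(t + h, y3)
            y2 = y + h * (7 * k1 / 24 + k2 / 4 + k3 / 3 + k4 / 8)  # 2nd-order solution
            err = abs(y3 - y2)                                     # truncation error estimate
            if err <= tol:                                         # accept the step
                t, y = t + h, y3
            # Grow or shrink the step (clamped), aiming err -> tol:
            h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1.0 / 3.0)))
        return y

    # Test problem: y' = -y, y(0) = 1, so y(1) = exp(-1).
    y1 = rk23_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0)
    ```

    The same accept/reject and step-rescaling logic carries over to the stiff space-time diffusion setting, where the embedded estimate replaces an expensive step-doubling error check.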

  3. Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction

    Science.gov (United States)

    Cohen, A. C.

    1971-01-01

    A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
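
    The zero-truncated Poisson component, which conditions on at least one thunderstorm event occurring over the larger area, can be sketched as follows. The rate value below is illustrative, not taken from the report, and the negative binomial mixing stage is omitted:

    ```python
    import math

    def zero_truncated_poisson_pmf(k, lam):
        """P(K = k | K >= 1) for a Poisson(lam) variable, k = 1, 2, ...

        The zero class is removed and the remaining probability mass is
        renormalized by 1 - e^{-lam}, matching the 'truncated Poisson'
        component of the model."""
        if k < 1:
            return 0.0
        return math.exp(-lam) * lam**k / (math.factorial(k) * (1.0 - math.exp(-lam)))

    # Example: given that at least one thunderstorm event occurs over the larger
    # area (lam = 1.8 is an illustrative rate):
    probs = [zero_truncated_poisson_pmf(k, 1.8) for k in range(1, 40)]
    ```

    In the full model, the Poisson rate itself would be treated as heterogeneous across days, which is what brings in the negative binomial component.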

  4. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
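
    The expected drift distribution can be illustrated with a simple dynamic program, under the simplifying assumption of at most one insertion or deletion per transmitted bit (the paper's channel model may differ):

    ```python
    def drift_distribution(n, p_ins, p_del):
        """Pmf of the transmitter-receiver drift after n transmitted bits.

        Simplified channel: for each bit, an insertion (+1) occurs with
        probability p_ins, a deletion (-1) with p_del, and otherwise the drift
        is unchanged. Returns a dict mapping drift value -> probability."""
        pmf = {0: 1.0}
        p_none = 1.0 - p_ins - p_del
        for _ in range(n):
            nxt = {}
            for d, p in pmf.items():
                for step, q in ((+1, p_ins), (-1, p_del), (0, p_none)):
                    nxt[d + step] = nxt.get(d + step, 0.0) + p * q
            pmf = nxt
        return pmf

    pmf = drift_distribution(100, 0.01, 0.01)
    mean_drift = sum(d * p for d, p in pmf.items())
    ```

    Because the drift spreads like a random walk, its standard deviation grows as √n, which is what motivates choosing finite state space limits from the drift probability distribution rather than tracking all possible offsets.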

  5. Rhodium SPND's Error Reduction using Extended Kalman Filter combined with Time Dependent Neutron Diffusion Equation

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Park, Tong Kyu; Jeon, Seong Su

    2014-01-01

    The Rhodium SPND is accurate in steady-state conditions but responds slowly to changes in neutron flux. The slow response time of the Rhodium SPND precludes its direct use for control and protection purposes, especially when the nuclear power plant is used for load following. To shorten the response time of the Rhodium SPND, there have been some acceleration methods, but they could not reflect the neutron flux distribution in the reactor core. On the other hand, some methods for core power distribution monitoring could not consider the slow response time of the Rhodium SPND and the noise effect. In this paper, the time dependent neutron diffusion equation is directly used to estimate the reactor power distribution, and the extended Kalman filter method is used to correct the neutron flux measured by the Rhodium SPNDs and to shorten their response time. The extended Kalman filter is an effective tool to reduce the measurement error of Rhodium SPNDs, and even a simple FDM to solve the time dependent neutron diffusion equation can be an effective measure. This method reduces random errors of detectors and can follow the reactor power level without cross-section changes. This means the monitoring system need not calculate cross-sections at every time step, and computing time will be shortened. To minimize the delay of the Rhodium SPNDs, the conversion function h should be evaluated in a future study. The neutron reaction with Rh-103 has several decay chains with half-lives over 40 seconds, causing detection delay. The time dependent neutron diffusion equation will be combined with the decay chains. Power level and distribution changes corresponding to control rod movement will be tested with a more complicated reference code, as well as the xenon effect. With these efforts, the final result is expected to be used as a powerful monitoring tool for the nuclear reactor core.

  6. Rhodium SPND's Error Reduction using Extended Kalman Filter combined with Time Dependent Neutron Diffusion Equation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong Hun; Park, Tong Kyu; Jeon, Seong Su [FNC Technology Co., Ltd., Yongin (Korea, Republic of)

    2014-05-15

    The Rhodium SPND is accurate in steady-state conditions but responds slowly to changes in neutron flux. The slow response time of the Rhodium SPND precludes its direct use for control and protection purposes, especially when the nuclear power plant is used for load following. To shorten the response time of the Rhodium SPND, there have been some acceleration methods, but they could not reflect the neutron flux distribution in the reactor core. On the other hand, some methods for core power distribution monitoring could not consider the slow response time of the Rhodium SPND and the noise effect. In this paper, the time dependent neutron diffusion equation is directly used to estimate the reactor power distribution, and the extended Kalman filter method is used to correct the neutron flux measured by the Rhodium SPNDs and to shorten their response time. The extended Kalman filter is an effective tool to reduce the measurement error of Rhodium SPNDs, and even a simple FDM to solve the time dependent neutron diffusion equation can be an effective measure. This method reduces random errors of detectors and can follow the reactor power level without cross-section changes. This means the monitoring system need not calculate cross-sections at every time step, and computing time will be shortened. To minimize the delay of the Rhodium SPNDs, the conversion function h should be evaluated in a future study. The neutron reaction with Rh-103 has several decay chains with half-lives over 40 seconds, causing detection delay. The time dependent neutron diffusion equation will be combined with the decay chains. Power level and distribution changes corresponding to control rod movement will be tested with a more complicated reference code, as well as the xenon effect. With these efforts, the final result is expected to be used as a powerful monitoring tool for the nuclear reactor core.
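
    The core idea, a Kalman filter inverting the detector's slow response to recover the prompt flux, can be sketched in scalar form. This is a hedged toy model: the time constant, noise levels, and random-walk flux model are illustrative assumptions, and the actual scheme couples the filter to the time dependent neutron diffusion equation rather than to a scalar state:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    dt, tau = 1.0, 42.0          # seconds; tau is an illustrative detector lag constant
    a = np.exp(-dt / tau)        # delayed detector signal relaxes toward the true flux

    # State: [flux, detector]; flux modeled as a slow random walk.
    F = np.array([[1.0, 0.0],
                  [1.0 - a, a]])
    H = np.array([[0.0, 1.0]])            # only the delayed detector is measured
    Q = np.diag([1e-6, 1e-8])             # process noise covariance
    R = np.array([[1e-6]])                # measurement noise covariance

    x = np.array([1.0, 1.0])              # true state
    xh = np.array([1.0, 1.0])             # filter estimate
    P = np.eye(2) * 1e-2

    for k in range(600):
        flux = 1.0 if k < 100 else 1.2    # step change in reactor power
        x = np.array([flux, a * x[1] + (1 - a) * flux])
        z = H @ x + rng.normal(0.0, 1e-3, 1)

        # Kalman predict/update
        xh = F @ xh
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        xh = xh + (K @ (z - H @ xh)).ravel()
        P = (np.eye(2) - K @ H) @ P
    ```

    The filter's flux component xh[0] recovers the prompt power level from the lagged, noisy detector signal; in the paper this correction is applied per node of the diffusion-equation solution rather than to a single scalar.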

  7. Differential isotope dansylation labeling combined with liquid chromatography mass spectrometry for quantification of intact and N-terminal truncated proteins

    International Nuclear Information System (INIS)

    Tang, Yanan; Li, Liang

    2013-01-01

    Graphical abstract: -- Highlights: • LC–MS was developed for quantifying protein mixtures containing both intact and N-terminal truncated proteins. • ¹²C₂-Dansylation of the N-terminal amino acid of proteins was done first, followed by microwave-assisted acid hydrolysis. • The released ¹²C₂-dansyl labeled N-terminal amino acid was quantified using ¹³C₂-dansyl labeled amino acid standards. • The method provided accurate and precise results for quantifying intact and N-terminal truncated proteins within 8 h. -- Abstract: The N-terminal amino acids of proteins are important structural units for maintaining the biological function, localization, and interaction networks of proteins. Under different biological conditions, one or several N-terminal amino acids can be cleaved from an intact protein by processes such as proteolysis, resulting in changes to protein properties. Thus, the ability to quantify the N-terminal truncated forms of proteins is of great importance, particularly in the area of development and production of protein-based drugs, where the relative quantity of the intact protein and its truncated form needs to be monitored. In this work, we describe a rapid method for absolute quantification of protein mixtures containing intact and N-terminal truncated proteins. This method is based on dansylation labeling of the N-terminal amino acids of proteins, followed by microwave-assisted acid hydrolysis of the proteins into amino acids. It is shown that dansyl labeled amino acids are stable in acidic conditions and can be quantified by liquid chromatography mass spectrometry (LC–MS) with the use of isotope analog standards.

  8. Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment

    Science.gov (United States)

    Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.

    2016-11-01

    This paper reports five years of real-time soft error rate experimentation conducted with the same setup at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by the daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.

  9. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well known and widespread Latin proverb, states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper lie in analyzing errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  10. Nonlinear error dynamics for cycled data assimilation methods

    International Nuclear Information System (INIS)

    Moodey, Alexander J F; Lawless, Amos S; Potthast, Roland W E; Van Leeuwen, Peter Jan

    2013-01-01

    We investigate the error dynamics for cycled data assimilation systems, such that the inverse problem of state determination is solved at t_k, k = 1, 2, 3, …, with a first guess given by the state propagated via a dynamical system model M_k from time t_{k−1} to time t_k. In particular, for nonlinear dynamical systems M_k that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ‖e_k‖ := ‖x_k^(a) − x_k^(t)‖ between the estimated state x^(a) and the true state x^(t) over time. Clearly, observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate, if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system M_k under consideration. A data assimilation method is called stable, if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ‖e_k‖, depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H and the Lipschitz constants K^(1) and K^(2) on the lower and higher modes of M_k controlling the damping behaviour of the dynamics. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c‖R_α‖δ with some constant c. Since ‖R_α‖ → ∞ for α → 0, the constant might be large. Numerical examples for this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz ‘63 system. (paper)
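
    The stability mechanism described above reduces, in the simplest scalar case, to the recursion e_{k+1} = K·e_k + c·δ, which stays bounded for K < 1 with limit c·δ/(1 − K). A small illustrative sketch (the constants are arbitrary, not values from the paper):

    ```python
    def cycled_error_bound(K, c, delta, e0, steps):
        """Iterate the scalar error recursion e_{k+1} = K * e_k + c * delta.

        For K < 1 (a damping model/reconstruction cycle) the error settles
        toward the fixed point c * delta / (1 - K) regardless of the initial
        error e0, i.e. the assimilation is stable."""
        e = e0
        history = [e]
        for _ in range(steps):
            e = K * e + c * delta
            history.append(e)
        return history

    hist = cycled_error_bound(K=0.7, c=2.0, delta=0.01, e0=1.0, steps=60)
    limit = 2.0 * 0.01 / (1 - 0.7)   # fixed point c * delta / (1 - K)
    ```

    The trade-off in the abstract is visible here: shrinking α improves the damping constant K but inflates c through ‖R_α‖, so the asymptotic error floor c·δ/(1 − K) can grow even as stability is gained.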

  11. A Sensitivity Study of Human Errors in Optimizing Surveillance Test Interval (STI) and Allowed Outage Time (AOT) of Standby Safety System

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Shin, Won Ky; You, Young Woo; Yang, Hui Chang

    1998-01-01

    In most cases, the surveillance test intervals (STIs), allowed outage times (AOTs) and testing strategies of safety components in a nuclear power plant are prescribed in the plant technical specifications. In general, it is required that a standby safety system be redundant (i.e., composed of multiple components), and these components are tested by either a staggered or a sequential test strategy. In this study, a linear model is presented to incorporate the effects of human errors associated with testing into the evaluation of unavailability. The average unavailabilities of 1/4 and 2/4 redundant systems are computed considering human error and testing strategy. The adverse effects of testing on system unavailability, such as component wear and test-induced transients, have been modelled. The final outcome of this study is the optimized human error domain, obtained from a 3-D human error sensitivity analysis by selecting a finely classified segment. The results of the sensitivity analysis show that the STI and AOT can be optimized provided the human error probability is maintained within an allowable range. (authors)
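
    Why an optimal STI exists at all can be sketched with a textbook single-component unavailability model, not the paper's linear multi-component model; the failure rate, test duration and human error probability below are assumed values for illustration.

```python
import math

def avg_unavailability(T, lam=1e-5, tau=2.0, q_hep=1e-3):
    """Mean unavailability of a periodically tested standby component.

    T     -- surveillance test interval, hours
    lam   -- standby failure rate per hour (assumed value)
    tau   -- test duration, hours; the component is inoperable during the test
    q_hep -- human error probability of leaving the component misaligned
             after a test (assumed value)
    """
    return q_hep + lam * T / 2.0 + tau / T

# scan candidate STIs (hours) and take the minimizer
T_opt = min(range(100, 2001), key=avg_unavailability)
# analytic optimum of the convex trade-off: T* = sqrt(2*tau/lam)
T_star = math.sqrt(2 * 2.0 / 1e-5)
```

    Testing too often incurs test downtime and human error exposure; testing too rarely lets undetected failures accumulate. A larger human error probability q_hep raises the whole curve, which is why the optimization only holds while q_hep stays within an allowable range.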

  12. Galilean-invariant preconditioned central-moment lattice Boltzmann method without cubic velocity errors for efficient steady flow simulations

    Science.gov (United States)

    Hajabdollahi, Farzaneh; Premnath, Kannan N.

    2018-05-01

    Lattice Boltzmann (LB) models used for the computation of fluid flows represented by the Navier-Stokes (NS) equations on standard lattices can lead to non-Galilean-invariant (GI) viscous stress involving cubic velocity errors. This arises from the dependence of their third-order diagonal moments on the first-order moments for standard lattices, and strategies have recently been introduced to restore Galilean invariance without such errors using a modified collision operator involving corrections to either the relaxation times or the moment equilibria. Convergence acceleration in the simulation of steady flows can be achieved by solving the preconditioned NS equations, which contain a preconditioning parameter that can be used to tune the effective sound speed, thereby alleviating the numerical stiffness. In the present paper, we present a GI formulation of the preconditioned cascaded central-moment LB method used to solve the preconditioned NS equations, which is free of cubic velocity errors on a standard lattice, for steady flows. A Chapman-Enskog analysis reveals the structure of the spurious non-GI defect terms and demonstrates that the anisotropy of the resulting viscous stress depends on the preconditioning parameter, in addition to the fluid velocity. It is shown that a partial correction eliminating the cubic velocity defects is achieved by scaling the cubic velocity terms in the off-diagonal third-order moment equilibria with the square of the preconditioning parameter. Furthermore, we develop additional corrections based on extended moment equilibria involving gradient terms with coefficients dependent locally on the fluid velocity and the preconditioning parameter. Such parameter-dependent corrections eliminate the remaining truncation errors arising from the degeneracy of the diagonal third-order moments and fully restore Galilean invariance without cubic defects for the preconditioned LB scheme on a standard lattice. Several

  13. Medication Administration Errors Involving Paediatric In-Patients in a ...

    African Journals Online (AJOL)

    The drug mostly associated with error was gentamicin with 29 errors (1.2 %). Conclusion: During the study, a high frequency of error was observed. There is a need to modify the way information is handled and shared by professionals as wrong time error was the most implicated error. Attention should also be given to IV ...

  14. New definitions of pointing stability - ac and dc effects. [constant and time-dependent pointing error effects on image sensor performance

    Science.gov (United States)

    Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.

    1992-01-01

    For most imaging sensors, a constant (dc) pointing error is unimportant (unless large), but time-dependent (ac) errors degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.

  15. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  16. Truncated exponential-rigid-rotor model for strong electron and ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.; Fleischmann, H.H.

    1979-01-01

    A comprehensive study of exponential-rigid-rotor equilibria for strong electron and ion rings indicates the presence of a sizeable percentage of untrapped particles in all equilibria with aspect-ratios R/a approximately <4. Such aspect-ratios are required in fusion-relevant rings. Significant changes in the equilibria are observed when untrapped particles are excluded by the use of a truncated exponential-rigid-rotor distribution function. (author)

  17. Integral equation solution for truncated slab structures by using a fringe current formulation

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Toccafondi, A.; Maci, S.

    1999-01-01

    Full-wave solutions of truncated dielectric slab problems are interesting for a variety of engineering applications, in particular patch antennas on finite ground planes. For this application a canonical reference solution is that of a semi-infinite slab illuminated by a line source. Standard int...

  18. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Science.gov (United States)

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  19. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Directory of Open Access Journals (Sweden)

    Martin Spüler

    2015-03-01

    Full Text Available When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG.

  20. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    Science.gov (United States)

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    The scientific article is focused on the research of the notion of medical error; its medical and legal aspects have been considered. The necessity of legislative consolidation of the notion of «medical error» and criteria for its legal assessment have been grounded. In writing the article, we used the empirical method together with general scientific and comparative legal methods. The concept of medical error in its civil and legal aspects was compared from the point of view of Ukrainian, European and American scholars. It has been noted that the problem of medical errors has been known since ancient times, and throughout the world, regardless of the level of development of medicine, there is no country where doctors never make errors. According to the statistics, medical errors are among the first five causes of death worldwide. At the same time, the provision of medical services concerns practically everyone. Since human life and health are recognized in Ukraine as the highest social value, medical services must be of high quality and effective. The provision of substandard medical services causes harm to health, and sometimes to the lives of people; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of «medical error» must receive legislative consolidation. Moreover, the legal assessment of medical errors must be based on uniform principles enshrined in legislation and confirmed by judicial practice.

  1. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  2. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    Science.gov (United States)

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.

  3. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Assessing error sources for Landsat time series analysis for tropical test sites in Viet Nam and Ethiopia

    Science.gov (United States)

    Schultz, Michael; Verbesselt, Jan; Herold, Martin; Avitabile, Valerio

    2013-10-01

    Researchers who use remotely sensed data can spend half of their total effort analysing prior data. If this data preprocessing does not match the application, the time spent on data analysis can increase considerably and can lead to inaccuracies. Despite the existence of a number of methods for pre-processing Landsat time series, each method has shortcomings, particularly for mapping forest changes under varying illumination, data availability and atmospheric conditions. Based on the requirements of mapping forest changes as defined by the United Nations (UN) Reducing Emissions from Forest Degradation and Deforestation (REDD) program, the accurate reporting of the spatio-temporal properties of these changes is necessary. We compared the impact of three fundamentally different radiometric preprocessing techniques, Moderate Resolution Atmospheric TRANsmission (MODTRAN), Second Simulation of a Satellite Signal in the Solar Spectrum (6S) and simple Dark Object Subtraction (DOS), on mapping forest changes using Landsat time series data. A modification of Breaks For Additive Season and Trend (BFAST) monitor was used to jointly map the spatial and temporal agreement of forest changes at test sites in Ethiopia and Viet Nam. The suitability of the pre-processing methods for the occurring forest change drivers was assessed using recently captured ground truth and high resolution data (1000 points). A method for creating robust generic forest maps used for the sampling design is presented. An assessment of error sources was performed, identifying haze as a major source of commission error in the time series analysis.

  5. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important things for an organization. The selection of an appropriate forecasting method is also important, but the percentage error of a method matters more if decision makers are to adopt the right one. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least squares method resulted in a percentage of 9.77%, and it was decided that the least squares method is suitable for time series and trend data.
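
    Both error measures are straightforward to compute alongside a least squares trend fit. The sketch below uses a hypothetical demand series, so the resulting percentage will not reproduce the 9.77% figure from the abstract.

```python
def least_squares_trend(y):
    # fit y_t = a + b*t by ordinary least squares, t = 0..n-1
    n = len(y)
    t = range(n)
    t_bar, y_bar = sum(t) / n, sum(y) / n
    b = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y)) \
        / sum((ti - t_bar) ** 2 for ti in t)
    a = y_bar - b * t_bar
    return [a + b * ti for ti in t]

def mad(actual, forecast):
    # Mean Absolute Deviation: average magnitude of the forecast errors
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    # Mean Absolute Percentage Error, in percent
    return 100.0 / len(actual) * sum(abs((a - f) / a)
                                     for a, f in zip(actual, forecast))

sales = [112, 118, 132, 129, 141, 135, 148, 155]   # hypothetical demand series
fit = least_squares_trend(sales)
scores = mad(sales, fit), mape(sales, fit)         # roughly 3.2 units, 2.4 %
```

    MAD is expressed in the units of the data, which makes it easy to interpret but hard to compare across series; MAPE is unit-free, which is why it is the usual basis for accepting or rejecting a forecasting method.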

  6. Error threshold ghosts in a simple hypercycle with error prone self-replication

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2008-01-01

    A delayed transition because of mutation processes is shown to happen in a simple hypercycle composed by two indistinguishable molecular species with error prone self-replication. The appearance of a ghost near the hypercycle error threshold causes a delay in the extinction and thus in the loss of information of the mutually catalytic replicators, in a kind of information memory. The extinction time, τ, scales near the bifurcation threshold according to the universal square-root scaling law, i.e. τ ∼ (Q_hc − Q)^(−1/2), typical of dynamical systems close to a saddle-node bifurcation. Here, Q_hc represents the bifurcation point named the hypercycle error threshold, involved in the change among the asymptotic stability phase and the so-called Random Replication State (RRS) of the hypercycle; and the parameter Q is the replication quality factor. The ghost involves a longer transient towards extinction once the saddle-node bifurcation has occurred, being extremely long near the bifurcation threshold. The role of this dynamical effect is expected to be relevant in fluctuating environments. Such a phenomenon should also be found in larger hypercycles when considering the hypercycle species in competition with their error tail. The implications of the ghost in the survival and evolution of error prone self-replicating molecules with hypercyclic organization are discussed
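
    The quoted square-root scaling law can be checked numerically on the saddle-node normal form dx/dt = μ + x², where μ plays the role of Q_hc − Q. This is a generic illustration of ghost-induced delays near a saddle-node bifurcation, not a simulation of the hypercycle model itself.

```python
def passage_time(mu, dt=1e-4):
    """Transit time through the ghost region of the saddle-node normal form
    dx/dt = mu + x**2, integrated from x = -1 to x = +1 by forward Euler."""
    x, t = -1.0, 0.0
    while x < 1.0:
        x += dt * (mu + x * x)
        t += dt
    return t

# quadrupling the distance to the bifurcation should halve the transient,
# since tau ~ mu**(-1/2) (analytically tau ~ pi/sqrt(mu) for small mu)
t_near = passage_time(0.01)   # close to the bifurcation: long delay
t_far = passage_time(0.04)    # four times farther away
ratio = t_near / t_far        # expected to be near sqrt(0.04/0.01) = 2
```

    The slow crawl happens near x = 0, where the vanished fixed-point pair leaves a narrow bottleneck; that bottleneck is the "ghost" responsible for the long extinction transient.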

  7. The renormalized Hamiltonian truncation method in the large E_T expansion

    Energy Technology Data Exchange (ETDEWEB)

    Elias-Miró, J. [SISSA and INFN, I-34136 Trieste (Italy); Montull, M. [Institut de Física d’Altes Energies (IFAE), Barcelona Institute of Science and Technology (BIST), Campus UAB, E-08193 Bellaterra (Spain); Riembau, M. [Institut de Física d’Altes Energies (IFAE), Barcelona Institute of Science and Technology (BIST), Campus UAB, E-08193 Bellaterra (Spain); DESY, Notkestrasse 85, 22607 Hamburg (Germany)

    2016-04-22

    Hamiltonian Truncation Methods are a useful numerical tool to study strongly coupled QFTs. In this work we present a new method to compute the exact corrections, at any order, in the Hamiltonian Truncation approach presented by Rychkov et al. in refs. http://dx.doi.org/10.1103/PhysRevD.91.085011; http://dx.doi.org/10.1103/PhysRevD.93.065014; http://dx.doi.org/10.1103/PhysRevD.91.025005. The method is general but as an example we calculate the exact g^2 and some of the g^3 contributions for the ϕ^4 theory in two dimensions. The coefficients of the local expansion calculated in ref. http://dx.doi.org/10.1103/PhysRevD.91.085011 are shown to be given by phase space integrals. In addition we find new approximations to speed up the numerical calculations and implement them to compute the lowest energy levels at strong coupling. A simple diagrammatic representation of the corrections and various tests are also introduced.

  8. PLK1 has tumor-suppressive potential in APC-truncated colon cancer cells.

    Science.gov (United States)

    Raab, Monika; Sanhaji, Mourad; Matthess, Yves; Hörlin, Albrecht; Lorenz, Ioana; Dötsch, Christina; Habbe, Nils; Waidmann, Oliver; Kurunci-Csacsko, Elisabeth; Firestein, Ron; Becker, Sven; Strebhardt, Klaus

    2018-03-16

    The spindle assembly checkpoint (SAC) acts as a molecular safeguard in ensuring faithful chromosome transmission during mitosis, which is regulated by a complex interplay between phosphatases and kinases including PLK1. Adenomatous polyposis coli (APC) germline mutations cause aneuploidy and are responsible for familial adenomatous polyposis (FAP). Here we study the role of PLK1 in colon cancer cells with chromosomal instability promoted by APC truncation (APC-ΔC). The expression of APC-ΔC in colon cells reduces the accumulation of mitotic cells upon PLK1 inhibition, accelerates mitotic exit and increases the survival of cells with enhanced chromosomal abnormalities. The inhibition of PLK1 in mitotic, APC-ΔC-expressing cells reduces the kinetochore levels of Aurora B and hampers the recruitment of SAC components, suggesting a compromised mitotic checkpoint. Furthermore, Plk1 inhibition (RNAi, pharmacological compounds) promotes the development of adenomatous polyps in two independent Apc^Min/+ mouse models. High PLK1 expression significantly increases the survival of colon cancer patients expressing a truncated APC.

  9. Design Optimization for a Truncated Catenary Mooring System for Scale Model Test

    Directory of Open Access Journals (Sweden)

    Climent Molins

    2015-11-01

    Full Text Available One of the main challenges when testing floating offshore platforms is scaling the mooring system, particularly given the increased depths at which such platforms are intended to operate. The paper proposes the use of truncated mooring systems to emulate the real mooring system by solving an optimization problem. This approach can be an interesting option when the existing testing facilities do not have enough available space. As part of the development of a new spar platform made of concrete for Floating Offshore Wind Turbines (FOWTs), called Windcrete, a station-keeping system with catenary-shaped lines was selected. The test facility available for the planned experiments had a significant width constraint. Therefore, an algorithm was developed to optimize the design of the scaled truncated mooring system using lines of different weights. The optimization process matches the quasi-static behavior of the scaled mooring system as closely as possible to that of the real mooring system within its expected maximum displacement range, where the catenary line provides the restoring forces through its suspended line length.
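
    One building block of such an optimization is the quasi-static force-excursion relation of a single catenary line. The sketch below assumes a line tangent to the seabed at the touchdown point (no anchor uplift) and uses hypothetical dimensions, not Windcrete's actual mooring data.

```python
import math

def fairlead_offset(H, L, h, w):
    """Horizontal anchor-to-fairlead distance for a catenary mooring line
    tangent to the seabed at touchdown (no anchor uplift).
    H: horizontal tension [N], L: total line length [m],
    h: water depth [m], w: submerged weight per unit length [N/m]."""
    a = H / w                            # catenary parameter
    s = math.sqrt(h * (h + 2.0 * a))     # suspended line length
    x_s = a * math.asinh(s / a)          # horizontal span of the hanging part
    return (L - s) + x_s                 # the grounded part lies on the seabed

def horizontal_tension(X, L, h, w, lo=1.0, hi=1e8):
    # invert fairlead_offset(H) = X by bisection (the offset is monotone in H)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if fairlead_offset(mid, L, h, w) < X:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical line: 600 m of chain, 200 m water depth, w = 1 kN/m
offsets = [500.0, 520.0, 540.0]          # fairlead excursions [m]
forces = [horizontal_tension(X, L=600.0, h=200.0, w=1000.0) for X in offsets]
# the restoring force stiffens rapidly as more line lifts off the seabed
```

    An optimizer like the one in the paper would adjust the truncated lines' lengths and weights so that their force-excursion curve, computed this way, tracks the full-depth curve over the platform's expected displacement range.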

  10. Radiologic errors, past, present and future.

    Science.gov (United States)

    Berlin, Leonard

    2014-01-01

    During the 10-year period beginning in 1949 with the publication of five articles in two radiology journals and the UK's The Lancet, a California radiologist named L.H. Garland almost single-handedly shocked the entire medical and especially the radiologic community. He focused their attention on a fact now known and accepted by all, but at that time not previously recognized and acknowledged only with great reluctance: that a substantial degree of observer error was prevalent in radiologic interpretation. In the more than half-century that followed, Garland's pioneering work has been affirmed and reaffirmed by numerous researchers. Retrospective studies disclosed then, and still disclose today, that diagnostic errors in radiologic interpretations of plain radiographic (as well as CT, MR, ultrasound, and radionuclide) images hover in the 30% range, not too dissimilar to the error rates in clinical medicine. Seventy percent of these errors are perceptual in nature, i.e., the radiologist does not "see" the abnormality on the imaging exam, perhaps due to poor conspicuity, satisfaction of search, or simply the "inexplicable psycho-visual phenomena of human perception." The remainder are cognitive errors: the radiologist sees an abnormality but fails to render a correct diagnosis by attaching the wrong significance to what is seen, perhaps due to inadequate knowledge, or an alliterative or judgmental error. Computer-assisted detection (CAD), a technology that for the past two decades has been utilized primarily in mammographic interpretation, increases sensitivity but at the same time decreases specificity; whether it reduces errors is debatable. Efforts to reduce diagnostic radiological errors continue, but the degree to which they will be successful remains to be determined.

  11. Measurement error and timing of predictor values for multivariable risk prediction models are poorly reported.

    Science.gov (United States)

    Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D

    2018-05-18

    Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error, and the intended moment of model use was extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as at high risk of error; however, this was not accounted for in model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.

  12. Differential isotope dansylation labeling combined with liquid chromatography mass spectrometry for quantification of intact and N-terminal truncated proteins

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Yanan; Li, Liang, E-mail: Liang.Li@ualberta.ca

    2013-08-20

    Highlights: • LC–MS was developed for quantifying protein mixtures containing both intact and N-terminal truncated proteins. • ¹²C₂-Dansylation of the N-terminal amino acid of proteins was done first, followed by microwave-assisted acid hydrolysis. • The released ¹²C₂-dansyl labeled N-terminal amino acid was quantified using ¹³C₂-dansyl labeled amino acid standards. • The method provided accurate and precise results for quantifying intact and N-terminal truncated proteins within 8 h. -- Abstract: The N-terminal amino acids of proteins are important structural units for maintaining the biological function, localization, and interaction networks of proteins. Under different biological conditions, one or several N-terminal amino acids can be cleaved from an intact protein by processes such as proteolysis, changing the protein's properties. Thus, the ability to quantify the N-terminal truncated forms of proteins is of great importance, particularly in the development and production of protein-based drugs, where the relative quantities of the intact protein and its truncated form need to be monitored. In this work, we describe a rapid method for absolute quantification of protein mixtures containing intact and N-terminal truncated proteins. This method is based on dansylation labeling of the N-terminal amino acids of proteins, followed by microwave-assisted acid hydrolysis of the proteins into amino acids. It is shown that dansyl labeled amino acids are stable in acidic conditions and can be quantified by liquid chromatography mass spectrometry (LC–MS) with the use of isotope analog standards.

  13. Increased frequency of FBN1 truncating and splicing variants in Marfan syndrome patients with aortic events.

    Science.gov (United States)

    Baudhuin, Linnea M; Kotzer, Katrina E; Lagerstedt, Susan A

    2015-03-01

    Marfan syndrome is a systemic disorder that typically involves FBN1 mutations and cardiovascular manifestations. We investigated FBN1 genotype-phenotype correlations with aortic events (aortic dissection and prophylactic aortic surgery) in patients with Marfan syndrome. Genotype and phenotype information from probands (n = 179) with an FBN1 pathogenic or likely pathogenic variant was assessed. A higher frequency of truncating or splicing FBN1 variants was observed in Ghent criteria-positive patients with an aortic event (n = 34) as compared with all other probands (n = 145) without a reported aortic event (79 vs. 39%; P Marfan syndrome patients with FBN1 truncating and splicing variants. Genet Med 17(3): 177-187.

  14. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, the Federal Rules of Evidence 702 mandate that judges consider factors such as peer review to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  15. The interaction of the flux errors and transport errors in modeled atmospheric carbon dioxide concentrations

    Science.gov (United States)

    Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.

    2017-12-01

    Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (or bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are impacted by limited atmospheric data, uncertain prior flux estimates and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction, in the CO2 atmospheric simulation. These errors are assessed by an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four CO2 global inversion models which have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. The flux ensemble members overall overestimate CO2 concentrations. They also show larger temporal variability than the observations. These results suggest that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada. We hypothesize that the former is caused by the 3-hourly downscaling algorithm from which the nighttime respiration dominates the daytime modeled CO2 signals and that the latter

  16. Three-dimensional free vibration of functionally graded truncated conical shells subjected to thermal environment

    Energy Technology Data Exchange (ETDEWEB)

    Malekzadeh, P., E-mail: p_malekz@yahoo.com [Department of Mechanical Engineering, Persian Gulf University, Bushehr 75168 (Iran, Islamic Republic of); Fiouz, A.R.; Sobhrouyan, M. [Department of Civil Engineering, Persian Gulf University, Bushehr 75168 (Iran, Islamic Republic of)

    2012-01-15

    A three-dimensional (3D) free vibration analysis of functionally graded (FG) truncated conical shells subjected to a thermal environment is presented. The material properties are assumed to be temperature-dependent and graded in the radial direction, varying according to a simple power law distribution. The initial thermal stresses are obtained accurately by solving the thermoelastic equilibrium equations and by considering the two-dimensional axisymmetric temperature distribution in the shell. The differential quadrature method (DQM), an efficient and accurate numerical tool, is adopted to solve the thermal and thermo-mechanical governing equations. For this purpose, a mapping technique is employed to transform the cross section of the shell into the computational domain of DQM. The convergence behavior of the method is numerically demonstrated and comparison studies with the available solutions in the literature are performed. The effects of temperature dependence of material properties, geometrical parameters, material graded index, and thermal and mechanical boundary conditions on the frequency parameters of the FG truncated conical shells are investigated. - Highlights: • 3D free vibration analysis of functionally graded truncated conical shells is presented. • Two-dimensional axisymmetric temperature distribution in the shell is assumed. • The material properties are assumed to be temperature-dependent. • Initial thermal stresses due to the thermal environment are evaluated accurately and included. • The effects of different parameters on the non-dimensional frequencies are presented.
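The differential quadrature method named above replaces derivatives with weighted sums of function values at grid points. A minimal sketch of a first-order DQ weighting matrix follows; this is the textbook Lagrange-based construction, not the authors' mapped shell solver, and the function name is ours:

```python
import numpy as np

def dq_matrix(x):
    """First-order differential quadrature weighting matrix on grid x:
    (D @ f)[i] approximates f'(x[i]).  Off-diagonal weights follow the
    Lagrange formula a_ij = M(x_i) / ((x_i - x_j) * M(x_j)); diagonal
    entries make each row sum to zero (derivative of a constant is 0)."""
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    m = diff.prod(axis=1)              # M(x_i) = prod_{k != i} (x_i - x_k)
    d = m[:, None] / (diff * m[None, :])
    np.fill_diagonal(d, 0.0)
    np.fill_diagonal(d, -d.sum(axis=1))
    return d

# The approximation is exact for polynomials of degree < n:
x = np.cos(np.linspace(0.0, np.pi, 5))  # Chebyshev-Gauss-Lobatto grid
d = dq_matrix(x)
deriv = d @ x**3                        # equals 3*x**2 to rounding error
```

On n points the matrix differentiates any polynomial of degree below n exactly, which is why DQM attains high accuracy with few grid points.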

  17. Correlations between chaos in a perturbed sine-Gordon equation and a truncated model system

    International Nuclear Information System (INIS)

    Bishop, A.R.; Flesch, R.; Forest, M.G.; Overman, E.A.

    1990-01-01

    The purpose of this paper is to present a first step toward providing coordinates and associated dynamics for low-dimensional attractors in nearly integrable partial differential equations (pdes), in particular, where the truncated system reflects salient geometric properties of the pde. This is achieved by correlating: (1) numerical results on the bifurcations to temporal chaos with spatial coherence of the damped, periodically forced sine-Gordon equation with periodic boundary conditions; (2) an interpretation of the spatial and temporal bifurcation structures of this perturbed integrable system with regard to the exact structure of the sine-Gordon phase space; (3) a model dynamical systems problem, which is itself a perturbed integrable Hamiltonian system, derived from the perturbed sine-Gordon equation by a finite mode Fourier truncation in the nonlinear Schroedinger limit; and (4) the bifurcations to chaos in the truncated phase space. In particular, attention is focused on a potential source of chaos in both the pde and the model ordinary differential equation systems: the existence of homoclinic orbits in the unperturbed integrable phase space and their continuation in the perturbed problem. The evidence presented here supports the thesis that the chaotic attractors of the weakly perturbed periodic sine-Gordon system consist of low-dimensional metastable attracting states together with intermediate states that are O(1) unstable and correspond to homoclinic states in the integrable phase space. It is surmised that the chaotic dynamics on these attractors is due to the perturbation of these homoclinic integrable configurations.

  18. Truncating SLC5A7 mutations underlie a spectrum of dominant hereditary motor neuropathies.

    Science.gov (United States)

    Salter, Claire G; Beijer, Danique; Hardy, Holly; Barwick, Katy E S; Bower, Matthew; Mademan, Ines; De Jonghe, Peter; Deconinck, Tine; Russell, Mark A; McEntagart, Meriel M; Chioza, Barry A; Blakely, Randy D; Chilton, John K; De Bleecker, Jan; Baets, Jonathan; Baple, Emma L; Walk, David; Crosby, Andrew H

    2018-04-01

    To identify the genetic cause of disease in 2 previously unreported families with forms of distal hereditary motor neuropathies (dHMNs). The first family comprises individuals affected by dHMN type V, which lacks the cardinal clinical feature of vocal cord paralysis characteristic of dHMN-VII observed in the second family. Next-generation sequencing was performed on the proband of each family. Variants were annotated and filtered, initially focusing on genes associated with neuropathy. Candidate variants were further investigated and confirmed by dideoxy sequence analysis and cosegregation studies. Thorough patient phenotyping was completed, comprising clinical history, examination, and neurologic investigation. dHMNs are a heterogeneous group of peripheral motor neuron disorders characterized by length-dependent neuropathy and progressive distal limb muscle weakness and wasting. We previously reported a dominant-negative frameshift mutation located in the concluding exon of the SLC5A7 gene encoding the choline transporter (CHT), leading to protein truncation, as the likely cause of dominantly-inherited dHMN-VII in an extended UK family. In this study, our genetic studies identified distinct heterozygous frameshift mutations located in the last coding exon of SLC5A7, predicted to result in the truncation of the CHT C-terminus, as the likely cause of the condition in each family. This study corroborates C-terminal CHT truncation as a cause of autosomal dominant dHMN, confirming upper limb predominating over lower limb involvement, and broadening the clinical spectrum arising from CHT malfunction.

  19. Tailor-made dimensions of diblock copolymer truncated micelles on a solid by UV irradiation.

    Science.gov (United States)

    Liou, Jiun-You; Sun, Ya-Sen

    2015-09-28

    We investigated the structural evolution of truncated micelles in ultrathin films of polystyrene-block-poly(2-vinylpyridine), PS-b-P2VP, of monolayer thickness on bare silicon substrates (SiOx/Si) upon UV irradiation in air- (UVIA) and nitrogen-rich (UVIN) environments. The structural evolution of micelles upon UV irradiation was monitored using GISAXS measurements in situ, while the surface morphology was probed using atomic force microscopy ex situ and the chemical composition using X-ray photoelectron spectroscopy (XPS). This work provides clear evidence for the interpretation of the relationship between the structural evolution and photochemical reactions in PS-b-P2VP truncated micelles upon UVIA and UVIN. Under UVIA treatment, photolysis and cross-linking reactions coexisted within the micelles; photolysis occurred mainly at the top of the micelles, whereas cross-linking occurred preferentially at the bottom. The shape and size of UVIA-treated truncated micelles were controlled predominantly by oxidative photolysis reactions, which depended on the concentration gradient of free radicals and oxygen along the micelle height. Because of an interplay between photolysis and photo-crosslinking, the scattering length densities (SLD) of PS and P2VP remained constant. In contrast, UVIN treatments enhanced the contrast in SLD between the PS shell and the P2VP core as cross-linking dominated over photolysis in the presence of nitrogen. The enhancement of the SLD contrast was due to the various degrees of cross-linking under UVIN for the PS and P2VP blocks.

  20. Zero-truncated panel Poisson mixture models: Estimating the impact on tourism benefits in Fukushima Prefecture.

    Science.gov (United States)

    Narukawa, Masaki; Nohara, Katsuhito

    2018-04-01

    This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
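Restricting attention to respondents who visited the site, as above, amounts to conditioning a Poisson count on being positive. A minimal sketch of the resulting zero-truncated Poisson distribution follows; the function name is ours, and the paper's mixture and panel structure are not reproduced:

```python
import math

def zt_poisson_pmf(k, lam):
    """PMF of a zero-truncated Poisson: P(K = k | K > 0) for k = 1, 2, ...
    Obtained by dividing the ordinary Poisson PMF by 1 - P(K = 0)."""
    if k < 1:
        return 0.0
    log_p = k * math.log(lam) - lam - math.lgamma(k + 1)
    return math.exp(log_p) / (1.0 - math.exp(-lam))

# Truncation at zero shifts probability mass upward, so the mean
# lam / (1 - exp(-lam)) exceeds the untruncated mean lam:
lam = 2.0
mean = sum(k * zt_poisson_pmf(k, lam) for k in range(1, 200))
```

The upward shift of the mean is exactly why naive (untruncated) Poisson regression on visitors-only data would be biased, motivating the zero-truncated likelihood.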

  1. Hybrid online sensor error detection and functional redundancy for systems with time-varying parameters.

    Science.gov (United States)

    Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali

    2017-12-01

    Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subject to interference from other devices. Many different types of sensor errors, such as outliers, missing values, drifts and corruption with noise, may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals and replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors of people with type 1 diabetes. More than 50,000 CGM sensor errors were added to original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimates computed by the functional redundancy system.
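The abstract does not spell out the ORKF itself, but a common way to make a Kalman filter outlier-robust is to inflate the measurement noise whenever the normalized innovation squared trips a gate, so a single gross outlier barely moves the state. A one-dimensional sketch under that assumption (names and thresholds are ours, not the authors' ORKF):

```python
def robust_kf(measurements, q=0.01, r=1.0, nis_gate=9.0):
    """1-D random-walk Kalman filter that inflates the measurement noise
    when the normalized innovation squared exceeds nis_gate, so isolated
    outliers are heavily down-weighted instead of tracked."""
    x, p = measurements[0], r
    estimates = []
    for z in measurements:
        p = p + q                        # predict (random-walk state)
        nu = z - x                       # innovation
        s = p + r                        # nominal innovation variance
        # inflate R for gated (outlier-like) innovations
        r_eff = r if nu * nu / s <= nis_gate else r * (nu * nu / (s * nis_gate))
        k = p / (p + r_eff)              # gain with effective noise
        x = x + k * nu
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# A constant signal with one gross outlier: the estimate stays near 10.
zs = [10.0] * 20
zs[10] = 100.0
est = robust_kf(zs)
```

Detected-and-rejected samples are exactly the ones a functional redundancy model (LW-PLS in the paper) would be asked to replace.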

  2. Non-perturbative methodologies for low-dimensional strongly-correlated systems: From non-Abelian bosonization to truncated spectrum methods.

    Science.gov (United States)

    James, Andrew J A; Konik, Robert M; Lecheminant, Philippe; Robinson, Neil J; Tsvelik, Alexei M

    2018-02-26

    We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we start by providing a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb-Liniger model, 1  +  1D quantum chromodynamics, as well as Landau-Ginzburg theories. In the final part we move our attention to consider truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. We describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.

  3. Non-perturbative methodologies for low-dimensional strongly-correlated systems: From non-Abelian bosonization to truncated spectrum methods

    Science.gov (United States)

    James, Andrew J. A.; Konik, Robert M.; Lecheminant, Philippe; Robinson, Neil J.; Tsvelik, Alexei M.

    2018-04-01

    We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we start by providing a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb–Liniger model, 1  +  1D quantum chromodynamics, as well as Landau–Ginzburg theories. In the final part we move our attention to consider truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. We describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.

  4. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, Juan J. [Stanford University; Iaccarino, Gianluca [Stanford University

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A
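Non-intrusive stochastic collocation, one of the stochastic expansions named above, treats the simulation as a black box evaluated at quadrature nodes of the input distribution. A minimal sketch for a single standard-normal uncertain input (function name and toy model are ours, not the project's codes):

```python
import numpy as np

def collocation_moments(model, n_pts):
    """Non-intrusive stochastic collocation with one N(0,1) input:
    evaluate the model at Gauss-Hermite (probabilists') nodes and
    combine with normalized quadrature weights to estimate the
    output mean and variance."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_pts)
    w = weights / weights.sum()          # raw weights sum to sqrt(2*pi)
    y = np.array([model(z) for z in nodes])
    mean = w @ y
    var = w @ (y - mean) ** 2
    return mean, var

# Toy model y = z**2 with z ~ N(0,1): exact mean 1, variance 2,
# recovered exactly because 5-point quadrature integrates degree <= 9.
mean, var = collocation_moments(lambda z: z * z, 5)
```

The appeal for uncertainty quantification is that each node is an independent deterministic run, so existing solvers need no modification.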

  5. Characterization of mTOR-Responsive Truncated mRNAs in Cell Proliferation

    Science.gov (United States)

    2017-07-01

    These findings identify a previously uncharacterized role for mTOR in modulating the 3'-UTR length of mRNAs by alternative polyadenylation (APA). Another ... outcome of APA in the mTOR-activated transcriptome is an early termination of mRNA transcription to produce truncated mRNAs with polyadenylation in ... for exhaustive analysis of alternative cleavage and polyadenylation (APA) events (Figure 1). In IntMAP, first the position of multiple

  6. Truncated conformal space approach to scaling Lee-Yang model

    International Nuclear Information System (INIS)

    Yurov, V.P.; Zamolodchikov, Al.B.

    1989-01-01

    A numerical approach to 2D relativistic field theories is suggested. Considering a field theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the conformal field theory space of states and its matrix elements can be extracted from the conformal field theory. Truncation of the space at a reasonable level results in a finite-dimensional problem for numerical analysis. The nonunitary field theory with the ultraviolet region controlled by the minimal conformal theory μ(2/5) is studied in detail. 9 refs.; 17 figs
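The truncate-and-diagonalize recipe can be illustrated on a toy problem. The sketch below is generic, in the spirit of the method rather than the Lee-Yang model itself: the unperturbed "conformal" Hamiltonian is replaced by a harmonic oscillator and the relevant perturbation by x⁴, with the basis cut at n_trunc states:

```python
import numpy as np

def truncated_spectrum(n_trunc, g):
    """Toy truncated-spectrum calculation: diagonalize H = H0 + g*V in
    the lowest n_trunc eigenstates of H0.  Here H0 is a harmonic
    oscillator (hbar = omega = m = 1) and V = x^4, standing in for the
    CFT Hamiltonian plus relevant perturbation."""
    n = np.arange(n_trunc)
    h0 = np.diag(n + 0.5)
    a = np.diag(np.sqrt(n[1:]), k=1)        # lowering operator
    x = (a + a.T) / np.sqrt(2.0)            # position operator
    h = h0 + g * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(h)

# Convergence with truncation level, as in truncated-space practice:
e8 = truncated_spectrum(8, 0.1)[0]
e32 = truncated_spectrum(32, 0.1)[0]
# both approach the known ground-state energy ~0.5591 for g = 0.1
```

Raising the cutoff trades matrix size for accuracy, which is exactly the tension the renormalization-group improvements in records 2-3 above are designed to relieve.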

  7. Phospholipid lateral diffusion in phosphatidylcholine-sphingomyelin-cholesterol monolayers; Effects of oxidatively truncated phosphatidylcholines

    Czech Academy of Sciences Publication Activity Database

    Parkkila, P.; Štefl, Martin; Olžyńska, Agnieszka; Hof, Martin; Kinnunen, P. K. J.

    2015-01-01

    Roč. 1848, č. 1 (2015), s. 167-173 ISSN 0005-2736 R&D Projects: GA ČR GBP208/12/G016 Institutional support: RVO:61388955 Keywords : Oxidatively truncated phosphatidylcholines * Lateral diffusion * Fluorescence correlation spectroscopy Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.687, year: 2015

  8. Expression of a truncated receptor protein tyrosine phosphatase kappa in the brain of an adult transgenic mouse

    DEFF Research Database (Denmark)

    Shen, P; Canoll, P D; Sap, J

    1999-01-01

    ...-6596]. Nevertheless, since the transgene's expression is driven by the endogenous RPTP-kappa promoter, distribution of the truncated RPTP-kappa/beta-geo fusion protein should reflect the regional and cellular expression of wild-type RPTP-kappa, and thus may identify sites where RPTP-kappa is important. Towards that goal, we have used this mouse model to map the distribution of the truncated RPTP-kappa/beta-geo fusion protein in the adult mouse brain using beta-galactosidase as a marker enzyme. Visualization of the beta-galactosidase activity revealed a non-random pattern of expression, and identified cells...

  9. The Error Reporting in the ATLAS TDAQ System

    Science.gov (United States)

    Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos

    2015-05-01

    The ATLAS Error Reporting provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where they can be intercepted in real time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware, which can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each of them exploits advanced features of the given language to simplify end-user program writing. For example, because declaring rich hierarchies of exception classes in C++ requires considerable boilerplate, a number of macros have been designed to generate them at compile time. Using this approach a software developer can write a single line of code to generate a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue. When a corresponding error occurs at run time, the program just needs to create an instance of that class passing relevant values to one
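The one-line declaration of a parameterized exception hierarchy described above has a rough Python analogue: classes can be created programmatically with type(), so a single call plays the role of the ERS declaration macro. This is an illustrative analogy only; the helper below is hypothetical and is not the real ERS API:

```python
def declare_issue(name, base=Exception, fields=()):
    """Create an exception class carrying named context fields, loosely
    mimicking an ERS-style declaration macro (hypothetical helper, not
    part of the real ERS API)."""
    def __init__(self, message, **kwargs):
        base.__init__(self, message)
        for f in fields:                       # attach context parameters
            setattr(self, f, kwargs.get(f))
    return type(name, (base,), {"__init__": __init__})

# One line per issue type, forming a catchable hierarchy:
DataTakingIssue = declare_issue("DataTakingIssue", fields=("run_number",))
SensorIssue = declare_issue("SensorIssue", base=DataTakingIssue,
                            fields=("run_number", "channel"))

try:
    raise SensorIssue("readout timeout", run_number=1234, channel=7)
except DataTakingIssue as err:    # caught through the generated hierarchy
    caught = err
```

The point of the hierarchy, in ERS as here, is that handlers can intercept whole families of issues at the level of specificity they care about.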

  10. Entanglement entropy from the truncated conformal space

    Directory of Open Access Journals (Sweden)

    T. Palmai

    2016-08-01

    Full Text Available A new numerical approach to entanglement entropies of the Rényi type is proposed for one-dimensional quantum field theories. The method extends the truncated conformal spectrum approach and we will demonstrate that it is especially suited to study the crossover from massless to massive behavior when the subsystem size is comparable to the correlation length. We apply it to different deformations of massless free fermions, corresponding to the scaling limit of the Ising model in transverse and longitudinal fields. For massive free fermions the exactly known crossover function is reproduced already in very small system sizes. The new method treats ground states and excited states on the same footing, and the applicability for excited states is illustrated by reproducing Rényi entropies of low-lying states in the transverse field Ising model.

  11. The statistical error of Green's function Monte Carlo

    International Nuclear Information System (INIS)

    Ceperley, D.M.

    1986-01-01

    The statistical error in the ground state energy as calculated by Green's Function Monte Carlo (GFMC) is analyzed and a simple approximate formula is derived which relates the error to the number of steps of the random walk, the variational energy of the trial function, and the time step of the random walk. Using this formula it is argued that as the thermodynamic limit is approached with N identical molecules, the computer time needed to reach a given error per molecule increases as N{sup b} where 0.5 < b < 1.5, and as the nuclear charge Z of a system is increased the computer time necessary to reach a given error grows as Z{sup 5.5}. Thus GFMC simulations will be most useful for calculating the properties of low Z elements. The implications for choosing the optimal trial function from a series of trial functions are also discussed.
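The abstract's specific error formula is not reproduced in this record, but the statistical error of a correlated Monte Carlo time series such as a GFMC energy estimate is commonly estimated by blocking: the naive standard error of the mean underestimates the true error when samples are correlated, and averaging within blocks longer than the correlation time removes the bias. A minimal sketch (the AR(1) test series is our own stand-in for random-walk data):

```python
import random
import statistics

def blocked_error(samples, block_size):
    """Blocking estimate of the statistical error of the mean of a
    correlated series: average within blocks, then take the standard
    error of the (nearly independent) block means."""
    n_blocks = len(samples) // block_size
    means = [statistics.fmean(samples[i * block_size:(i + 1) * block_size])
             for i in range(n_blocks)]
    return statistics.stdev(means) / n_blocks ** 0.5

# AR(1) series with correlation 0.9: the blocked error grows with block
# size until blocks exceed the correlation time, exposing the bias of
# the naive (block_size = 1) estimate.
random.seed(0)
x, series = 0.0, []
for _ in range(100_000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    series.append(x)
err1 = blocked_error(series, 1)
err100 = blocked_error(series, 100)
```

For this series the block-100 error is several times the naive one, which is the kind of correction any error-versus-computer-time analysis like the abstract's must build in.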

  12. Quasi-SU(3) truncation scheme for even-even sd-shell nuclei

    International Nuclear Information System (INIS)

    Vargas, C.E.; Hirsch, J.G.; Draayer, J.P.

    2001-01-01

    The quasi-SU(3) symmetry was uncovered in full pf and sdg shell-model calculations for both even-even and odd-even nuclei. It manifests itself through a dominance of single-particle and quadrupole-quadrupole terms in a Hamiltonian used to describe well-deformed nuclei. A practical consequence of the quasi-SU(3) symmetry is an efficient basis truncation scheme. In [C.E. Vargas et al., Phys. Rev. C 58 (1998) 1488] it is shown that when this type of Hamiltonian is diagonalized in an SU(3) basis, only a few irreducible representations (irreps) of SU(3) are needed to describe the yrast band: the leading S=0 irrep augmented with the leading S=1 irreps in the proton and neutron subspaces. In the present article the quasi-SU(3) truncation scheme is used, in conjunction with a 'realistic but schematic' Hamiltonian that includes the most important multipole terms, to describe the energy spectra and B(E2) transition strengths of {sup 20,22}Ne, {sup 24}Mg and {sup 28}Si. The effect of the size of the Hilbert space on both sets of observables is discussed, as well as the structure of the yrast band and the importance of the various terms in the Hamiltonian. The limitations of the model are explicitly discussed.

  13. Sources of Error in Satellite Navigation Positioning

    Directory of Open Access Journals (Sweden)

    Jacek Januszewski

    2017-09-01

    Full Text Available Uninterrupted information about the user's position can generally be obtained from a satellite navigation system (SNS). At the time of this writing (January 2017), two global SNSs, GPS and GLONASS, are fully operational; two more global systems, Galileo and BeiDou, are under construction. In each SNS the accuracy of the user's position is affected by three main factors: the accuracy of each satellite position, the accuracy of the pseudorange measurement, and the satellite geometry. The user's position error is a function of both the pseudorange error, called UERE (User Equivalent Range Error), and the user/satellite geometry, expressed by the Dilution Of Precision (DOP) coefficient. This error is decomposed into two types of errors: the signal-in-space ranging error, called URE (User Range Error), and the user equipment error (UEE). Detailed analyses of URE, UEE, UERE and DOP coefficients, and the changes of DOP coefficients on different days, are presented in this paper.
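In the error budget above, the 1-sigma position error is approximately UERE multiplied by a DOP factor that depends only on satellite geometry. A minimal sketch of how DOP is computed from receiver-to-satellite unit vectors (the example constellation is ours):

```python
import numpy as np

def dop(unit_vectors):
    """Dilution-of-precision factors from receiver-to-satellite unit
    vectors.  Each row of the design matrix G is [-ux, -uy, -uz, 1]:
    three position unknowns plus the receiver clock bias."""
    u = np.asarray(unit_vectors, dtype=float)
    g = np.hstack([-u, np.ones((u.shape[0], 1))])
    q = np.linalg.inv(g.T @ g)              # geometry covariance factor
    gdop = np.sqrt(np.trace(q))             # geometric DOP
    pdop = np.sqrt(q[0, 0] + q[1, 1] + q[2, 2])  # position DOP
    return gdop, pdop

# Four well-spread satellites: three on the horizon at 120-degree
# azimuth spacing, one at the zenith.
sats = [[1.0, 0.0, 0.0],
        [-0.5, 0.866025403784, 0.0],
        [-0.5, -0.866025403784, 0.0],
        [0.0, 0.0, 1.0]]
gdop, pdop = dop(sats)
# 1-sigma position error is then roughly UERE * pdop
```

Clustered satellites make G.T @ G nearly singular and DOP large, which is why geometry enters the error budget multiplicatively rather than additively.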

  14. Different types of errors in saccadic task are sensitive to either time of day or chronic sleep restriction.

    Directory of Open Access Journals (Sweden)

    Barbara Wachowicz

    Full Text Available Circadian rhythms and restricted sleep length affect cognitive functions and, consequently, the performance of day-to-day activities. To date, no more than a few studies have explored the consequences of these factors for oculomotor behaviour. We implemented a spatial cuing paradigm in an eye tracking experiment conducted at four times of day after one week of rested wakefulness and after one week of chronic partial sleep restriction. Our aim was to verify whether these conditions affect the number of various saccadic task errors. Interestingly, we found that failures in response selection, i.e. premature responses and direction errors, were prone to time-of-day variations, whereas failures in response execution, i.e. omissions and commissions, were considerably affected by sleep deprivation. The former can be linked to the cue facilitation mechanism, and the latter to wake-state instability and a diminished ability of top-down inhibition. Together, these results may be interpreted in terms of the distinctive sensitivity of the orienting and alerting systems to fatigue. Saccadic eye movements proved to be a novel and effective measure with which to study the susceptibility of attentional systems to time factors; thus, this approach is recommended for future research.

  15. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  16. Analysis of errors of radiation relay, (1)

    International Nuclear Information System (INIS)

    Koyanagi, Takami; Nakajima, Sinichi

    1976-01-01

    The statistical error of a liquid level controlled by a radiation relay is analysed and a method of minimizing the error is proposed. The method reduces to the problem of optimally setting the time constant of the radiation relay. The equations for obtaining the value of the time constant are presented, and the numerical results are shown in a table and plotted in a figure. The optimum time constant of the upper level control relay is entirely different from that of the lower level control relay. (auth.)

  17. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1982-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determines HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.
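The error-rate extraction described above can be sketched as a simple frequency estimate: an HEP for a task element is the number of observed errors divided by the number of opportunities, optionally with an uncertainty interval. The counts and the normal-approximation interval below are illustrative assumptions, not figures from NUREG/CR-1278 or the EPRI/ORNL data.

```python
import math

def hep_estimate(errors, opportunities, z=1.96):
    """Point estimate of a Human Error Probability plus a rough
    normal-approximation confidence interval (illustrative only)."""
    p = errors / opportunities
    se = math.sqrt(p * (1 - p) / opportunities)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical example: 3 errors of omission in 120 simulated task executions.
p, lo, hi = hep_estimate(errors=3, opportunities=120)
```

With few observed errors the interval is wide, which is why large simulator data sets are needed before such HEPs can support Probabilistic Risk Assessment calculations.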

  19. Maximum error-bounded Piecewise Linear Representation for online stream approximation

    KAUST Repository

    Xie, Qing; Pang, Chaoyi; Zhou, Xiaofang; Zhang, Xiangliang; Deng, Ke

    2014-01-01

    Given a time-series data stream, the goal of error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments that approximate the stream such that the approximation error does not exceed a prescribed error bound. In this work, we take the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error at each data point, and we aim to design algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms have been designed effectively in a transformed space rather than the time-value space, while optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms, named OptimalPLR and GreedyPLR, to construct error-bounded PLR for data streams in the time domain. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, while GreedyPLR is an alternative for settings that demand high efficiency or operate in resource-constrained environments. To evaluate the superiority of OptimalPLR, we theoretically analyzed and compared it with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and the transformed space, and found that OptimalPLR is more efficient in practice. Extensive empirical results demonstrate the effectiveness and efficiency of the proposed algorithms.
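The greedy flavour of error-bounded PLR described in this record can be sketched as follows. This is an illustrative re-implementation in the spirit of GreedyPLR, not the authors' code: each segment's line is anchored at the segment's first point, and the segment is extended while some slope keeps the line within ±eps of every covered point, which gives a single linear-time pass.

```python
def greedy_plr(points, eps):
    """Greedy L-infinity error-bounded PLR (illustrative sketch).

    points: list of (t, v) pairs with strictly increasing t.
    Returns a list of (start_index, end_index) segment ranges; for each
    segment there exists a line within eps of every covered point.
    """
    segments = []
    i, n = 0, len(points)
    while i < n:
        t0, v0 = points[i]
        lo_slope, hi_slope = float("-inf"), float("inf")
        j = i + 1
        while j < n:
            t, v = points[j]
            dt = t - t0
            # Slopes that keep the line through (t0, v0) within eps of (t, v).
            lo_slope = max(lo_slope, (v - eps - v0) / dt)
            hi_slope = min(hi_slope, (v + eps - v0) / dt)
            if lo_slope > hi_slope:
                break  # no single line fits; close the segment before j
            j += 1
        segments.append((i, j - 1))
        i = j
    return segments

# Example: a linear run followed by a jump forces a second segment.
pts = [(0, 0), (1, 1), (2, 2), (3, 10), (4, 11)]
segs = greedy_plr(pts, eps=0.5)  # -> [(0, 2), (3, 4)]
```

Anchoring each line at the segment's first point keeps the feasibility test to a constant-time slope-interval intersection per point; an optimal algorithm such as OptimalPLR must instead track the full feasible region of (slope, intercept) pairs to guarantee the minimal segment count.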
