WorldWideScience

Sample records for incremental tangent formulation

  1. Degradation theories of concrete and development of a new deviatoric model in incremental tangent formulation: limit analysis applied to case of anchor bolts embedded in concrete

    International Nuclear Information System (INIS)

    Ung Quoc, H.

    2003-12-01

This research was carried out within the general framework of the study of concrete behaviour. Its objective was the development of a new behaviour model satisfying the particular requirements of industrial exploitation. After an analysis of different existing models, a first development concerned models based on the smeared crack theory. A new formulation of the theory made it possible to overcome the stress-locking problem. However, the analysis showed that some limits inherent to this approach persisted in spite of this improvement. An analysis of the physical mechanisms of concrete degradation was then carried out and led to the development of the new damage model MODEV. The general formulation of this model is based on thermodynamics and applies to heterogeneous, brittle materials. The MODEV model considers two damage mechanisms: extension and sliding. The model also considers that the relative tangential displacement between microcrack lips is responsible for strain irreversibility; the rate of inelastic strain thus becomes a function of the damage and of the heterogeneity index of the material. The unilateral effect is taken into account as an elastic hardening or softening process according to the re-closing or re-opening of cracks. The model is written within the framework of non-standard generalised materials in incremental tangent formulation and implemented in the general finite element code SYMPHONIE. The model has been validated against several tests from the literature. The second part of this research concerned the development of the CHEVILAB software. This simulation tool, based on the limit analysis approach, permits the evaluation of the ultimate load capacity of anchor bolts. The kinematic approach of limit analysis has been adapted to the anchor problem by considering several specific failure mechanisms.
This approach was then validated by comparison with the

  2. Degradation theories of concrete and development of a new deviatoric model in incremental tangent formulation: limit analysis applied to case of anchor bolts embedded in concrete; Theorie de degradation du beton et developpement d'un nouveau modele d'endommagement en formulation incrementale tangente: calcul a la rupture applique au cas des chevilles de fixation ancrees dans le beton

    Energy Technology Data Exchange (ETDEWEB)

    Ung Quoc, H

    2003-12-15

This research was carried out within the general framework of the study of concrete behaviour. Its objective was the development of a new behaviour model satisfying the particular requirements of industrial exploitation. After an analysis of different existing models, a first development concerned models based on the smeared crack theory. A new formulation of the theory made it possible to overcome the stress-locking problem. However, the analysis showed that some limits inherent to this approach persisted in spite of this improvement. An analysis of the physical mechanisms of concrete degradation was then carried out and led to the development of the new damage model MODEV. The general formulation of this model is based on thermodynamics and applies to heterogeneous, brittle materials. The MODEV model considers two damage mechanisms: extension and sliding. The model also considers that the relative tangential displacement between microcrack lips is responsible for strain irreversibility; the rate of inelastic strain thus becomes a function of the damage and of the heterogeneity index of the material. The unilateral effect is taken into account as an elastic hardening or softening process according to the re-closing or re-opening of cracks. The model is written within the framework of non-standard generalised materials in incremental tangent formulation and implemented in the general finite element code SYMPHONIE. The model has been validated against several tests from the literature. The second part of this research concerned the development of the CHEVILAB software. This simulation tool, based on the limit analysis approach, permits the evaluation of the ultimate load capacity of anchor bolts. The kinematic approach of limit analysis has been adapted to the anchor problem by considering several specific failure mechanisms.
This approach was then validated by comparison with the

  3. Implicit implementation and consistent tangent modulus of a viscoplastic model for polymers

    OpenAIRE

    ACHOUR, Nadia; CHATZIGEORGIOU, George; MERAGHNI, Fodil; CHEMISKY, Yves; FITOUSSI, Joseph

    2015-01-01

    In this work, the phenomenological viscoplastic DSGZ model (Duan et al., 2001 [13]), developed for glassy or semi-crystalline polymers, is numerically implemented in a three-dimensional framework, following an implicit formulation. The computational methodology is based on the radial return mapping algorithm. This implicit formulation leads to the definition of the consistent tangent modulus which permits the implementation in incremental micromechanical scale transition analysis. The extende...

  4. Variational formulation for dissipative continua and an incremental J-integral

    Science.gov (United States)

    Rahaman, Md. Masiur; Dhas, Bensingh; Roy, D.; Reddy, J. N.

    2018-01-01

    Our aim is to rationally formulate a proper variational principle for dissipative (viscoplastic) solids in the presence of inertia forces. As a first step, a consistent linearization of the governing nonlinear partial differential equations (PDEs) is carried out. An additional set of complementary (adjoint) equations is then formed to recover an underlying variational structure for the augmented system of linearized balance laws. This makes it possible to introduce an incremental Lagrangian such that the linearized PDEs, including the complementary equations, become the Euler-Lagrange equations. Continuous groups of symmetries of the linearized PDEs are computed and an analysis is undertaken to identify the variational groups of symmetries of the linearized dissipative system. Application of Noether's theorem leads to the conservation laws (conserved currents) of motion corresponding to the variational symmetries. As a specific outcome, we exploit translational symmetries of the functional in the material space and recover, via Noether's theorem, an incremental J-integral for viscoplastic solids in the presence of inertia forces. Numerical demonstrations are provided through a two-dimensional plane strain numerical simulation of a compact tension specimen of annealed mild steel under dynamic loading.
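For orientation, the classical path-independent J-integral, of which the paper derives an incremental, inertia-aware analogue via Noether's theorem, has the standard textbook form (this is the static elastic baseline, not the paper's incremental expression):

```latex
J \;=\; \int_{\Gamma} \left( W \,\mathrm{d}y \;-\; T_i \,\frac{\partial u_i}{\partial x}\,\mathrm{d}s \right),
```

where $W$ is the strain energy density, $T_i$ the traction vector, $u_i$ the displacement field, and $\Gamma$ any contour enclosing the crack tip.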

  5. Complete Tangent Stiffness for eXtended Finite Element Method by including crack growth parameters

    DEFF Research Database (Denmark)

    Mougaard, J.F.; Poulsen, P.N.; Nielsen, L.O.

    2013-01-01

The eXtended Finite Element Method (XFEM) is a useful tool for modeling the growth of discrete cracks in structures made of concrete and other quasi-brittle and brittle materials. However, in a standard application of XFEM, the tangent stiffness is not complete. This is a result of not including the crack geometry parameters, such as the crack length and the crack direction, directly in the virtual work formulation. For efficiency, it is essential to obtain a complete tangent stiffness. A new method is presented in this work to include, in incremental form, the crack growth parameters on equal terms with the degrees of freedom in the FEM equations. The complete tangential stiffness matrix is based on the virtual work together with the constitutive conditions at the crack tip. Introducing the crack growth parameters as direct unknowns, both equilibrium equations and the crack tip criterion can be handled...

  6. System incremental cost calculations using the participation factor load-flow formulation

    International Nuclear Information System (INIS)

    Meisel, J.

    1993-01-01

    The load-flow problem is reformulated such that the use of a slack-bus generator is included only as a special case. This reformulation, known as the participation factor load-flow, includes a total mismatch variable and a defined participation vector, which, in general, distributes this mismatch to all system buses. The slack-bus constraint can still be obtained by defining a particular participation vector. In using the participation factor load-flow in the transpose Jacobian approach to the economic optimal dispatch problem, the paper shows that the value of the system-λ can be controlled such that this value represents the minimal incremental change in generation costs per unit change in system total demand with this demand distributed according to the specified participation vector. Methods using the conventional B-coefficient loss formulas or slack-bus load-flows give system-λ values whereby the unit change in demand must be placed on a fictitious single load-bus or on the slack-bus, respectively. Having a system-λ value which more accurately represents a proposed energy interchange between interconnected systems is very important in developing valid costs for each system. An extensive 28-bus, 8-generator system is included to illustrate these results
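The distribution step described above can be sketched in a few lines: a total mismatch is shared among buses according to a participation vector, with the slack-bus formulation recovered as the special case of a unit participation vector. This is a minimal illustration of that one idea only, not a load-flow solver; the function name and interface are this sketch's own, not the paper's.

```python
import numpy as np

def distribute_mismatch(total_mismatch, participation):
    """Distribute a total power mismatch (MW) over buses according to a
    participation vector whose entries sum to 1."""
    alpha = np.asarray(participation, dtype=float)
    assert np.isclose(alpha.sum(), 1.0), "participation factors must sum to 1"
    return alpha * total_mismatch

# General case: the mismatch is shared by all buses.
shares = distribute_mismatch(30.0, [0.5, 0.3, 0.2])

# Slack-bus special case: the whole mismatch lands on one bus.
slack = distribute_mismatch(30.0, [0.0, 0.0, 1.0])
```

With this convention, the system-λ obtained from the optimal dispatch measures the cost of an incremental demand change distributed according to `participation`, rather than one placed artificially on a slack bus.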

  7. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    Science.gov (United States)

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic programming, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programmings. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
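The local tangent space estimation that underlies the regularizer can be sketched with local PCA: take a point's nearest neighbours, centre them, and read the principal directions off an SVD. This is a generic illustration of that building block, not the TiSVM/TiTSVM optimization itself; the function name and parameters are this sketch's own.

```python
import numpy as np

def local_tangent_basis(X, i, k=10, d=1):
    """Estimate an orthonormal basis of the d-dimensional tangent space at
    point X[i] from its k nearest neighbours via local PCA."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(dists)[:k]]          # k nearest neighbours (incl. X[i])
    centered = nbrs - nbrs.mean(axis=0)
    # Principal directions are the leading right singular vectors.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:d]                            # rows span the estimated tangent space

# Noisy samples from a line in R^2; the tangent should align with (1, 2)/sqrt(5).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
X = np.column_stack([t, 2.0 * t]) + 0.001 * rng.standard_normal((50, 2))
basis = local_tangent_basis(X, 25, k=10, d=1)
```

In the full method, adjacent tangent spaces estimated this way are related by connections, and the regularizer penalizes functions that deviate from being linear along the manifold.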

  8. A tangent subsolar merging line

    International Nuclear Information System (INIS)

    Crooker, N.U.; Siscoe, G.L.; Toffoletto, F.R.

    1990-01-01

    The authors describe a global magnetospheric model with a single subsolar merging line whose position is determined neither locally by the relative orientations and strengths of the merging fields nor globally by the orientation of a separator line--the governing parameters of most previous models--but by the condition of tangential contact between the external field and the magnetopause. As in previous models, the tilt of the merging line varies with IMF orientation, but here it also depends upon the ratio of Earth's magnetic flux that leaks out of the magnetopause to IMF flux that penetrates in. In the limiting case treated by Alekseyev and Belen'kaya, with no leakage of Earth's field and total IMF penetration, the merging line forms a great circle around a spherical magnetosphere where undeviated IMF lines lie tangent to its surface. This tangent merging line lies perpendicular to the IMF. They extend their work to the case of finite leakage and partial penetration, which distort the IMF into a draped pattern, thus changing the locus of tangency to the sphere. In the special case where the penetrating IMF flux is balanced by an equal amount of Earth flux leakage, the tangent merging line bisects the angle between the IMF and Earth's northward subsolar field. This result is identical to the local merging line model result for merging fields with equal magnitude. Here a global flux balance condition replaces the local equal magnitude condition

  9. Arctic curves in path models from the tangent method

    Science.gov (United States)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.
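The Gessel–Viennot matrices mentioned above have binomial-coefficient entries, and by the Lindström–Gessel–Viennot lemma their determinants count families of non-intersecting lattice paths; the abstract's LU decomposition can be illustrated numerically on a toy instance. The classical Pascal-type matrix below has every leading minor equal to 1, so LU factorization needs no pivoting and the determinant is 1. This is a small illustration of the determinant/LU mechanics, not the paper's asymptotic one-point-function analysis.

```python
import numpy as np
from math import comb

# Pascal-type Gessel-Viennot matrix with entries C(i+j, i); its determinant
# counts a family of non-intersecting lattice paths (LGV lemma) and equals 1.
n = 5
M = np.array([[comb(i + j, i) for j in range(n)] for i in range(n)], dtype=float)

def lu_nopivot(A):
    """Doolittle LU without pivoting (valid here: all leading minors are
    nonzero). Returns L (unit lower, below the diagonal) and U packed in
    one matrix."""
    A = A.copy()
    m = A.shape[0]
    for k in range(m - 1):
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A

LU = lu_nopivot(M)
det_M = np.prod(np.diag(LU))   # det = product of U's diagonal entries
```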

  10. On stability of Kummer surfaces' tangent bundle

    International Nuclear Information System (INIS)

    Bozhkov, Y.D.

    1988-10-01

    In this paper we propose an explicit approximation of the Kaehler-Einstein-Calabi-Yau metric on the Kummer surfaces, which are manifolds of type K3. It is constructed by gluing 16 pieces of the Eguchi-Hanson metric and 16 pieces of the Euclidean metric. Two estimates on its curvature are proved. Then we prove an estimate on the first eigenvalue of a covariant differential operator of second order. This enables us to apply Taubes' iteration procedure to obtain that there exists an anti-self-dual connection on the considered Kummer surface. In fact, it is a Hermitian-Einstein connection from which we conclude that Kummer surfaces' co-tangent bundle is stable and therefore their tangent bundle is stable too. (author). 40 refs

  11. Sharp inequalities for tangent function with applications

    Directory of Open Access Journals (Sweden)

    Hui-Lin Lv

    2017-05-01

In this article, we present new bounds for the function $e^{t\cot(t)-1}$ on the interval $(0, \pi/2)$ and find sharp estimates for the sine integral and the Catalan constant, based on a new monotonicity criterion for the quotient of power series, which refines the Redheffer- and Becker-Stark-type inequalities for the tangent function.

  12. The differential geometry of higher order jets and tangent bundles

    International Nuclear Information System (INIS)

    De Leon, M.; Rodrigues, P.R.

    1985-01-01

    This chapter is devoted to the study of basic geometrical notions required for the development of the main object of the text. Some facts about Jet theory are reviewed. A particular case of Jet manifolds is considered: the tangent bundle of higher order. It is shown that this jet bundle possesses in a canonical way a certain kind of geometric structure, the so called almost tangent structure of higher order, and which is a generalization of the almost tangent geometry of the tangent bundle. Another important fact examined is the extension of the notion of 'spray' to higher order tangent bundles. (Auth.)

  13. Hazy spaces, tangent spaces, manifolds and groups

    International Nuclear Information System (INIS)

    Dodson, C.T.J.

    1977-03-01

    The results on hazy spaces and the developments leading to hazy manifolds and groups are summarized. Proofs have appeared elsewhere so here examples are considered and some motivation for definitions and constructions in the theorems is analyzed. It is shown that quite simple ideas, intuitively acceptable, lead to remarkable similarity with the theory of differentiable manifolds. Hazy n manifolds have tangent bundles that are hazy 2n manifolds and there are hazy manifold structures for groups. Products and submanifolds are easily constructed and in particular the hazy n-sphere manifolds as submanifolds of the standard hazy manifold Zsup(n+1)

  14. Legislative Bargaining and Incremental Budgeting

    OpenAIRE

    Dhammika Dharmapala

    2002-01-01

    The notion of 'incrementalism', formulated by Aaron Wildavsky in the 1960's, has been extremely influential in the public budgeting literature. In essence, it entails the claim that legislators engaged in budgetary policymaking accept past allocations, and decide only on the allocation of increments to revenue. Wildavsky explained incrementalism with reference to the cognitive limitations of lawmakers and their desire to reduce conflict. This paper uses a legislative bargaining framework to u...

  15. Tangent hyperbolic circular frequency diverse array radars

    Directory of Open Access Journals (Sweden)

    Sarah Saeed

    2016-03-01

Frequency diverse arrays (FDAs) with uniform frequency offset (UFO) have been in the spotlight of research for the past few years, while non-uniform frequency offsets in FDAs have received little attention. This study investigates the tangent hyperbolic (TH) function as a frequency offset selection scheme in circular FDAs (CFDAs). The investigation reveals a three-dimensional single-maximum beampattern, which promises to enhance system detection capability and signal-to-interference-plus-noise ratio. Furthermore, by utilising the versatility of the TH function, a highly configurable array system is achieved, in which the beampatterns of three different FDA configurations can be generated just by adjusting a single function parameter. This study further examines the utility of the proposed TH-CFDA in some practical radar scenarios.
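The idea of a tanh-shaped offset scheme can be sketched as follows: element n radiates at f0 + Δf·tanh(αn), so a single parameter α sweeps the array from near-uniform offsets (small α) to saturated, nearly constant offsets (large α). The function name, parameter names, and exact mapping here are this sketch's assumptions, not necessarily the paper's notation.

```python
import numpy as np

def th_offsets(n_elements, delta_f, alpha):
    """Tangent-hyperbolic frequency offsets for an FDA: element n gets an
    offset delta_f * tanh(alpha * n) above the carrier. One parameter
    (alpha) morphs the scheme between near-linear and saturated spacing."""
    n = np.arange(n_elements)
    return delta_f * np.tanh(alpha * n)

offsets = th_offsets(8, 1e3, 0.5)   # offsets in Hz for an 8-element array
```

Because tanh is strictly increasing and bounded by 1, the offsets grow monotonically with element index but never exceed Δf.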

  16. Tangent Lines without Derivatives for Quadratic and Cubic Equations

    Science.gov (United States)

    Carroll, William J.

    2009-01-01

    In the quadratic equation, y = ax[superscript 2] + bx + c, the equation y = bx + c is identified as the equation of the line tangent to the parabola at its y-intercept. This is extended to give a convenient method of graphing tangent lines at any point on the graph of a quadratic or a cubic equation. (Contains 5 figures.)
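The claim is easy to verify numerically: the difference between the parabola y = ax² + bx + c and the candidate tangent y = bx + c is exactly ax², which has a double root at x = 0, i.e. the line touches the curve at its y-intercept. A quick check:

```python
import numpy as np

a, b, c = 2.0, -3.0, 1.0
x = np.linspace(-1.0, 1.0, 201)
parabola = a * x**2 + b * x + c
line = b * x + c

# The gap between curve and candidate tangent is a*x^2: non-negative for
# a > 0, and zero only at x = 0 (a double root, hence tangency there).
gap = parabola - line
```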

  17. A Tangent Bundle Theory for Visual Curve Completion.

    Science.gov (United States)

    Ben-Yosef, Guy; Ben-Shahar, Ohad

    2012-07-01

    Visual curve completion is a fundamental perceptual mechanism that completes the missing parts (e.g., due to occlusion) between observed contour fragments. Previous research into the shape of completed curves has generally followed an "axiomatic" approach, where desired perceptual/geometrical properties are first defined as axioms, followed by mathematical investigation into curves that satisfy them. However, determining psychophysically such desired properties is difficult and researchers still debate what they should be in the first place. Instead, here we exploit the observation that curve completion is an early visual process to formalize the problem in the unit tangent bundle R(2) × S(1), which abstracts the primary visual cortex (V1) and facilitates exploration of basic principles from which perceptual properties are later derived rather than imposed. Exploring here the elementary principle of least action in V1, we show how the problem becomes one of finding minimum-length admissible curves in R(2) × S(1). We formalize the problem in variational terms, we analyze it theoretically, and we formulate practical algorithms for the reconstruction of these completed curves. We then explore their induced visual properties vis-à-vis popular perceptual axioms and show how our theory predicts many perceptual properties reported in the corresponding perceptual literature. Finally, we demonstrate a variety of curve completions and report comparisons to psychophysical data and other completion models.

  18. Some applications on tangent bundle with Kaluza-Klein metric

    Directory of Open Access Journals (Sweden)

    Murat Altunbaş

    2017-01-01

    Full Text Available In this paper, differential equations of geodesics; parallelism, incompressibility and closeness conditions of the horizontal and complete lift of the vector fields are investigated with respect to Kaluza-Klein metric on tangent bundle.

  19. Microwave dielectric tangent losses in KDP and DKDP crystals

    Indian Academy of Sciences (India)

    By adding cubic and quartic phonon anharmonic interactions in the pseudospin lattice coupled mode (PLCM) model for KDP-type crystals and using double-time temperature dependent Green's function method, expressions for soft mode frequency, dielectric constant and dielectric tangent loss are obtained. Using model ...

  20. Adaptive order search and tangent-weighted trade-off for motion estimation in H.264

    Directory of Open Access Journals (Sweden)

    Srinivas Bachu

    2018-04-01

Motion estimation and compensation play a major role in video compression by reducing the temporal redundancy of input videos. A variety of block search patterns have been developed for matching blocks with reduced computational complexity, without affecting visual quality. In this paper, block motion estimation is achieved by integrating square and hexagonal search patterns with adaptive order. The proposed algorithm, called AOSH (Adaptive Order Square-Hexagonal) Search, finds the best matching block with a reduced number of search points. The searching function is formulated here as a trade-off criterion, and a tangent-weighted function is newly developed to evaluate the matching point. The proposed AOSH search algorithm and the tangent-weighted trade-off criterion are applied to the block estimation process to enhance visual quality and compression performance. The proposed method is validated using three videos, namely football, garden and tennis. The quantitative performance of the proposed and existing methods is analysed using the structural similarity index (SSIM) and the peak signal-to-noise ratio (PSNR). The results show that the proposed method offers better visual quality than the existing methods. Keywords: Block motion estimation, Square search, Hexagon search, H.264, Video coding
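The basic block-matching step that search patterns like AOSH accelerate can be shown with an exhaustive square-window search minimizing the sum of absolute differences (SAD). This is the brute-force baseline only; AOSH's adaptive square/hexagon ordering and tangent-weighted criterion prune this search, and the function names here are this sketch's own.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def square_search(ref, cur, top, left, bs=8, radius=4):
    """Motion vector for the current block by testing every offset in a
    (2*radius+1)^2 square window against the reference frame."""
    block = cur[top:top + bs, left:left + bs]
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                cost = sad(block, ref[y:y + bs, x:x + bs])
                if best is None or cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv, best

# Synthetic check: the current frame is the reference shifted by (2, 3),
# so the best match for an interior block should be mv = (2, 3) with SAD 0.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(np.roll(ref, -2, axis=0), -3, axis=1)
mv, cost = square_search(ref, cur, 8, 8)
```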

  1. Reverse-Tangent Injection in a Centrifugal Compressor

    Science.gov (United States)

    Skoch, Gary J.

    2007-01-01

    Injection of working fluid into a centrifugal compressor in the reverse tangent direction has been invented as a way of preventing flow instabilities (stall and surge) or restoring stability when stall or surge has already commenced. The invention applies, in particular, to a centrifugal compressor, the diffuser of which contains vanes that divide the flow into channels oriented partly radially and partly tangentially. In reverse-tangent injection, a stream or jet of the working fluid (the fluid that is compressed) is injected into the vaneless annular region between the blades of the impeller and the vanes of the diffuser. As used here, "reverse" signifies that the injected flow opposes (and thereby reduces) the tangential component of the velocity of the impeller discharge. At the same time, the injected jet acts to increase the radial component of the velocity of the impeller discharge.

  2. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
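What an AD system like Tangent computes can be illustrated with a toy forward-mode implementation via dual numbers. To be clear about the assumptions: this is not Tangent's source-code-transformation mechanism, nor its API; it is a minimal operator-overloading sketch of the derivative semantics such a tool targets.

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0: carries a value and its
    derivative through arithmetic (forward-mode AD)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def grad(f):
    """Return a function computing f'(x) -- analogous in spirit to an AD
    library's grad, though implemented by overloading, not by SCT."""
    return lambda x: f(Dual(x, 1.0)).dot

f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
df = grad(f)
```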

  3. Experience on tangent delta norms adopted for repaired generator

    International Nuclear Information System (INIS)

    Misra, N.N.; Sood, D.K.

    2005-01-01

Repair techniques for generators are crucial for avoiding prolonged forced outages, and sound, well-judged decisions become essential in many cases. The unit under discussion had failed on account of a flashover in the exciter-end overhang windings. The failure resulted in damage to the stator bars as well as the generator core. The damaged end packets of the stator core were replaced at site. All winding bars were removed from the stator core and the damaged bars were replaced with new ones. The remaining bars were given tangent delta tests to assess their suitability for reuse. An acceptance norm of 0.6% tip-up from 0.2 pu to 0.6 pu of rated stator voltage was adopted. Some of the bars outside the acceptable tangent delta limits were shifted close to the neutral so that the standard tan delta norms were met. This was felt necessary because the lead time for procurement of new bars was more than six months. The norms adopted here will be of much use to operating utilities. The unit under discussion was rated 67.5 MW at 50 Hz and 0.85 pf lag, and had logged 66160.46 operating hours before failure. (author)

  4. Subsheaves in the tangent bundle: Integrability, stability and positivity

    International Nuclear Information System (INIS)

    Peternell, T.

    2001-01-01

Special subsheaves E of the tangent bundle T_X of a complex projective manifold X often carry important geometric information on X. The most important special properties E can have are integrability, maximality and positivity. Integrable subsheaves define a foliation on X, and then the structure of the leaves of the foliation is of particular interest. The best situation is when the leaves are compact; unfortunately this does not happen very often and is extremely difficult to verify. Maximality means that E is a maximal destabilising subsheaf with respect to a given polarisation; this leads to stability properties of the tangent bundle. Finally, positivity means that E is an ample vector bundle or ample coherent sheaf in the sense of algebraic geometry. In the special case E = T_X, we obtain cum grano salis manifolds with positive curvature. We study stability properties of the tangent bundle of a Fano manifold X. This is deeply related to the existence problem for a Kaehler-Einstein metric, but we concentrate purely on the algebraic aspects of stability. If b_2(X) = 1, then T_X is expected to be semi-stable (or even stable) with respect to the anti-canonical polarisation -K_X = det T_X. We explain several results which yield the conjecture for large classes of Fano manifolds. Some of the methods are of cohomological nature; some use foliations. If b_2(X) ≥ 2, then T_X is not always semi-stable. There should be a geometric reason for this failure, namely the existence of a Fano fibration whose relative tangent sheaf destabilises T_X. In the final section we first study ample subsheaves E in T_X. There is a general conjecture which should interpolate between two important, nowadays already classical, theorems of Wahl and Mori. Wahl's theorem says that if E has rank 1, then X is projective space. Mori's theorem gives the same conclusion when E = T_X. Wahl's theorem especially is applied several times in the previous sections. Now X should be a projective space also in the

  5. Quantum information entropies for a squared tangent potential well

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Shishan [Information and Engineering College, DaLian University, 116622 (China); Sun, Guo-Hua, E-mail: sunghdb@yahoo.com [Centro Universitario Valle de Chalco, Universidad Autónoma del Estado de México, Valle de Chalco Solidaridad, Estado de México, 56615 (Mexico); Dong, Shi-Hai, E-mail: dongsh2@yahoo.com [Departamento de Física, Escuela Superior de Física y Matemáticas, Instituto Politécnico Nacional, Unidad Profesional Adolfo López Mateos, Edificio 9, México D.F. 07738 (Mexico); Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States); Draayer, J.P., E-mail: draayer@sura.org [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States)

    2014-01-10

The particle in a symmetrical squared tangent potential well is studied by examining its Shannon information entropy and standard deviations. The position and momentum information entropy densities ρ_s(x), ρ_s(p) and probability densities ρ(x), ρ(p) are illustrated for different potential ranges L and potential depths U. We present analytical position information entropies S_x for the lowest two states. We observe that the Bialynicki-Birula–Mycielski (BBM) inequality on the sum of the position and momentum entropies S_x and S_p is satisfied. Some eigenstates exhibit entropy squeezing in position; the entropy squeezing in position is compensated by an increase in momentum entropy. We also note that S_x increases with the potential range L, while it decreases with the potential depth U. The variation of S_p is contrary to that of S_x.
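The position entropy in question is S_x = -∫ ρ(x) ln ρ(x) dx for a normalized probability density on the well. As a stand-in (the squared-tangent-well eigenstates are not reproduced here), the sketch below evaluates S_x numerically for the infinite-well ground-state density; the paper's states would be handled identically once ρ(x) is supplied.

```python
import numpy as np

# Position Shannon entropy S_x = -∫ rho ln(rho) dx on a well of width L.
# Stand-in density: infinite-well ground state |psi|^2 = (2/L) cos^2(pi x / L).
L = 2.0
x = np.linspace(-L / 2, L / 2, 20001)
dx = x[1] - x[0]
rho = (2.0 / L) * np.cos(np.pi * x / L) ** 2

norm = rho.sum() * dx                     # normalization check, ≈ 1
safe_rho = np.where(rho > 0, rho, 1.0)    # avoid log(0) at the walls
S_x = -(np.where(rho > 0, rho * np.log(safe_rho), 0.0)).sum() * dx
```

For L = 2 the analytic value of this stand-in entropy is π/8-type: S_x = -(8/π)·∫₀^{π/2} cos²u ln(cos u) du ≈ 0.386, which the quadrature reproduces.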

  6. Quantum information entropies for a squared tangent potential well

    International Nuclear Information System (INIS)

    Dong, Shishan; Sun, Guo-Hua; Dong, Shi-Hai; Draayer, J.P.

    2014-01-01

The particle in a symmetrical squared tangent potential well is studied by examining its Shannon information entropy and standard deviations. The position and momentum information entropy densities ρ_s(x), ρ_s(p) and probability densities ρ(x), ρ(p) are illustrated for different potential ranges L and potential depths U. We present analytical position information entropies S_x for the lowest two states. We observe that the Bialynicki-Birula–Mycielski (BBM) inequality on the sum of the position and momentum entropies S_x and S_p is satisfied. Some eigenstates exhibit entropy squeezing in position; the entropy squeezing in position is compensated by an increase in momentum entropy. We also note that S_x increases with the potential range L, while it decreases with the potential depth U. The variation of S_p is contrary to that of S_x.

  7. The Cretaceous superchron geodynamo: Observations near the tangent cylinder

    Science.gov (United States)

    Tarduno, John A.; Cottrell, Rory D.; Smirnov, Alexei V.

    2002-01-01

    If relationships exist between the frequency of geomagnetic reversals and the morphology, secular variation, and intensity of Earth's magnetic field, they should be best expressed during superchrons, intervals tens of millions of years long lacking reversals. Here we report paleomagnetic and paleointensity data from lavas of the Cretaceous Normal Polarity Superchron that formed at high latitudes near the tangent cylinder that surrounds the solid inner core. The time-averaged field recorded by these lavas is remarkably strong and stable. When combined with global results available from lower latitudes, these data define a time-averaged field that is overwhelmingly dominated by the axial dipole (octupole components are insignificant). These observations suggest that the basic features of the geomagnetic field are intrinsically related. Superchrons may reflect times when the nature of core–mantle boundary heat flux allows the geodynamo to operate at peak efficiency. PMID:12388778

  8. Modified Einstein and Finsler Like Theories on Tangent Lorentz Bundles

    CERN Document Server

    Stavrinos, Panayiotis; Vacaru, Sergiu I.

    2014-01-01

    We study modifications of general relativity, GR, with nonlinear dispersion relations which can be geometrized on tangent Lorentz bundles. Such modified gravity theories, MGTs, can be modeled by gravitational Lagrange density functionals $f(\\mathbf{R},\\mathbf{T},F)$ with generalized/modified scalar curvature $\\mathbf{R}$, trace of matter field tensors $\\mathbf{T}$, and modified Finsler-like generating function $F$. In particular, extensions of GR with extra-dimensional "velocity/momentum" coordinates are defined. For four-dimensional models, we prove that it is possible to decouple and integrate in very general forms the gravitational fields for $f(\\mathbf{R},\\mathbf{T},F)$-modified gravity using nonholonomic 2+2 splitting and nonholonomic Finsler-like variables $F$. We study the modified motion and Newtonian limits of massive test particles on nonlinear geodesics approximated with effective extra forces orthogonal to the four-velocity. We compute the constraints on the magnitude of extra-acceleration...

  9. Construction of the Tangent to a Cycloid Proposed by Wallis and Fermat

    Directory of Open Access Journals (Sweden)

    Loredana Biacino

    2017-02-01

    Full Text Available In this paper some methods used in the XVII century for the construction of the tangent to a cycloid at a point are presented: the kinematical method employed by Roberval, the classical geometrical method used by Wallis, and Fermat's construction as a consequence of his method of tangents. Keywords: Cycloid, Tangent to a curve, Kinematical method of tangents, Fermat's method of tangents.

  10. Tangent-Impulse Interception for a Hyperbolic Target

    Directory of Open Access Journals (Sweden)

    Dongzhe Wang

    2014-01-01

    Full Text Available The two-body interception problem with an upper-bounded tangent impulse, for an interceptor on an elliptic parking orbit colliding with a nonmaneuvering target on a hyperbolic orbit, is studied. Firstly, four special initial true anomalies, whose velocity vectors are parallel to either of the asymptotes of the target's hyperbolic orbit, are obtained by using the Newton-Raphson method. For different impulse points, the solution-existence ranges of the target true anomaly for any conic transfer are discussed in detail. Then, the time-of-flight equation is solved by the secant method for a single-variable piecewise function of the target true anomaly. Considering the sphere of influence of the Earth and the upper bound on the fuel, all feasible solutions are obtained for different impulse points. Finally, a numerical example is provided to apply the proposed technique to all feasible solutions and the global minimum-time solution with initial coasting time.
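
    The root-finding step described above can be sketched generically. This is a plain secant iteration applied to a stand-in function, not the paper's actual time-of-flight equation:

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant iteration for f(x) = 0, the root-finding scheme named in the
    abstract (the function solved here is only illustrative)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1 - f0) < 1e-300:   # avoid division by a vanishing secant slope
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# Toy stand-in for a time-of-flight equation: solve cos(t) = t
root = secant(lambda t: math.cos(t) - t, 0.0, 1.0)
```

    The same iteration applies to any single-variable function of the transfer time once the orbital geometry is fixed.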

  11. Planning Through Incrementalism

    Science.gov (United States)

    Lasserre, Ph.

    1974-01-01

    An incremental model of decisionmaking is discussed and compared with the Comprehensive Rational Approach. A model of reconciliation between the two approaches is proposed, and examples are given in the field of economic development and educational planning. (Author/DN)

  12. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...

  13. Unified treatment of microscopic boundary conditions and efficient algorithms for estimating tangent operators of the homogenized behavior in the computational homogenization method

    Science.gov (United States)

    Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic

    2017-03-01

    This work provides a unified treatment of arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition, either by direct constraint elimination or by Lagrange multiplier elimination. The macroscopic tangent operators are computed in an efficient way from a linear system with multiple right-hand sides, whose left-hand-side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of right-hand-side vectors equals the number of macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often follows a direct factorization procedure, the computation of the macroscopic tangent operators can then reuse this factorized matrix at a reduced computational cost.
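
    The reuse of a single factorization for several right-hand sides can be illustrated with a small Doolittle LU sketch; the matrix and right-hand sides below are illustrative stand-ins, not a real microscopic stiffness system:

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting (adequate for the small,
    well-conditioned stand-in matrix used here)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Forward/back substitution reusing an existing factorization."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# Stiffness-like matrix factorized once...
K = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
L_, U_ = lu_factor(K)
# ...then reused for several right-hand sides (one per macroscopic variable)
rhs_set = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
sensitivities = [lu_solve(L_, U_, b) for b in rhs_set]
```

    Factorizing once and back-substituting per right-hand side is exactly the cost saving the abstract refers to: the O(n³) work is paid a single time.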

  14. Limit structure of future null infinity tangent-topology of the event horizon and gravitational wave tail

    International Nuclear Information System (INIS)

    Tomizawa, Shinya; Siino, Masaru

    2006-01-01

    We investigated the relation between the behaviour of gravitational waves at late time and the limit structure of the future null infinity tangent, which will determine the topology of the event horizon far in the future. In the present paper, we mainly consider a spacetime with two black holes. Although in most cases the black holes coalesce and the event horizon is topologically a single sphere far in the future, there are several possibilities in which the black holes never coalesce, and we give exact solutions as examples. In our formulation, the tangent vector of future null infinity is, under conformal embedding, related to the number of black holes far in the future through the Poincaré-Hopf theorem. Under the conformal embedding, the topology of the event horizon far in the future will be affected by the geometrical structure of future null infinity. In this paper, we relate the behaviour of the Weyl curvature to this limit behaviour of the generator vector of future null infinity. We show that if the Weyl curvature decays sufficiently slowly at late time in the neighbourhood of future null infinity, the two black holes never coalesce.

  15. Defense Agencies Initiative Increment 2 (DAI Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Defense Agencies Initiative Increment 2 (DAI Inc 2). In an ADM dated September 23, 2013, the MDA established Increment 2 as a MAIS program to include budget formulation; grants financial... Acronyms: RDT&E - Research, Development, Test, and Evaluation; SAE - Service Acquisition Executive; TBD - To Be Determined; TY - Then

  16. Characterization of model errors in the calculation of tangent heights for atmospheric infrared limb measurements

    Directory of Open Access Journals (Sweden)

    M. Ridolfi

    2014-12-01

    Full Text Available We review the main factors driving the calculation of the tangent height of spaceborne limb measurements: the ray-tracing method, the refractive index model and the assumed atmosphere. We find that commonly used ray-tracing and refraction models are very accurate, at least in the mid-infrared. The factor with the largest effect on the tangent height calculation is the assumed atmosphere. Using a climatological model in place of the real atmosphere may cause tangent height errors of up to ±200 m. Depending on the adopted retrieval scheme, these errors may have a significant impact on the derived profiles.

  17. Solution of D dimensional Dirac equation for hyperbolic tangent potential using NU method and its application in material properties

    Energy Technology Data Exchange (ETDEWEB)

    Suparmi, A., E-mail: soeparmi@staff.uns.ac.id; Cari, C., E-mail: cari@staff.uns.ac.id; Pratiwi, B. N., E-mail: namakubetanurpratiwi@gmail.com [Physics Department, Faculty of Mathematics and Science, Sebelas Maret University, Jl. Ir. Sutami 36A Kentingan Surakarta 57126 (Indonesia); Deta, U. A. [Physics Department, Faculty of Science and Mathematics Education and Teacher Training, Surabaya State University, Surabaya (Indonesia)

    2016-02-08

    The analytical solution of the D-dimensional Dirac equation for the hyperbolic tangent potential is investigated using the Nikiforov-Uvarov method. In the case of spin symmetry, the D-dimensional Dirac equation reduces to the D-dimensional Schrödinger equation. The D-dimensional relativistic energy spectra are obtained from the D-dimensional relativistic energy eigenvalue equation using MATLAB. The corresponding D-dimensional radial wave functions are formulated in terms of generalized Jacobi polynomials. The thermodynamic properties of materials are generated from the non-relativistic energy eigenvalues in the classical limit. In the non-relativistic limit, the relativistic energy equation reduces to the non-relativistic energy. The thermal quantities of the system, the partition function and the specific heat, are expressed in terms of the error function and the imaginary error function, which are numerically evaluated using MATLAB.
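
    The thermal quantities mentioned above can be sketched numerically from a generic bound-state spectrum; the levels below are illustrative stand-ins, not the paper's D-dimensional eigenvalues:

```python
import math

def partition_function(levels, beta):
    """Z(beta) = sum_n exp(-beta * E_n) over a finite set of energy levels."""
    return sum(math.exp(-beta * e) for e in levels)

def specific_heat(levels, beta, h=1e-4):
    """C = beta^2 * d^2(ln Z)/d(beta)^2, evaluated by central differences.
    This equals beta^2 * Var(E) and is therefore positive."""
    lnz = lambda b: math.log(partition_function(levels, b))
    d2 = (lnz(beta + h) - 2.0 * lnz(beta) + lnz(beta - h)) / h**2
    return beta**2 * d2

# Illustrative square-well-like spectrum E_n ~ n^2 (not the paper's values)
levels = [n**2 for n in range(1, 50)]
C = specific_heat(levels, beta=1.0)
```

    The closed-form error-function expressions in the paper arise when such sums are approximated by integrals; the numerical route above sidesteps that step entirely.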

  18. Tangent Orbital Rendezvous Using Linear Relative Motion with J2 Perturbations

    Directory of Open Access Journals (Sweden)

    Gang Zhang

    2013-01-01

    Full Text Available The tangent-impulse coplanar orbit rendezvous problem is studied based on the linear relative motion for J2-perturbed elliptic orbits. There are three cases: (1) only the first impulse is tangent; (2) only the second impulse is tangent; (3) both impulses are tangent. For a given initial impulse point, the first two problems can be transformed into finding all roots of a single-variable function of the transfer time, which can be done by the secant method. The bitangent rendezvous problem requires the solutions of the first two problems simultaneously. By considering the initial coasting time, the bitangent rendezvous solution is obtained with a difference function. A numerical example for two coplanar elliptic orbits with J2 perturbations is given to verify the efficiency of the proposed techniques.

  19. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2005-01-01

    This volume is the first of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald during the period March 9-22, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present first volume contains the following lectures: "Lévy Processes in Euclidean Spaces and Groups" by David Applebaum, "Locally Compact Quantum Groups" by Johan Kustermans, "Quantum Stochastic Analysis" by J. Martin Lindsay, and "Dilations, Cocycles and Product Systems" by B.V. Rajarama Bhat.

  20. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2006-01-01

    This is the second of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald in March, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present second volume contains the following lectures: "Random Walks on Finite Quantum Groups" by Uwe Franz and Rolf Gohm, "Quantum Markov Processes and Applications in Physics" by Burkhard Kümmerer, "Classical and Free Infinite Divisibility and Lévy Processes" by Ole E. Barndorff-Nielsen and Steen Thorbjornsen, and "Lévy Processes on Quantum Groups and Dual Groups" by Uwe Franz.

  1. Efficient incremental relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2013-07-01

    We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify and forward and decode and forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.

  2. Tangent stiffness matrices for projection methods in elasto-plasticity

    International Nuclear Information System (INIS)

    Gruttmann, F.; Stein, E.

    1988-01-01

    In classical elastoplasticity with the von Mises yield condition and associated flow rule, it is necessary to integrate the plastic strain rate. The radial return integration algorithm is employed to calculate the elastoplastic stresses. In the context of the finite element method, the formulation and numerical solution of nonlinear problems in continuum mechanics are based on the weak form of the momentum balance equation (principle of virtual work). The solution of the nonlinear equations is achieved by the Newton-Raphson method, in which a sequence of linear problems is solved. If the linear problem is obtained by consistent linearization, one obtains a quadratic rate of convergence. (orig.)
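
    The radial return idea can be sketched in one dimension, where the von Mises condition reduces to |σ| ≤ σ_y. This is a minimal illustration with perfect plasticity, not the full three-dimensional algorithm:

```python
def radial_return_1d(eps_total, E, sigma_y, eps_p_old):
    """One-step radial return for 1D perfect plasticity.
    Returns (stress, updated plastic strain, consistent tangent).
    Material data below are illustrative, not from the paper."""
    sigma_trial = E * (eps_total - eps_p_old)   # elastic trial stress
    f_trial = abs(sigma_trial) - sigma_y        # trial yield function
    if f_trial <= 0.0:                          # elastic step: trial state is final
        return sigma_trial, eps_p_old, E
    # Plastic correction: scale the trial stress back onto the yield surface
    dgamma = f_trial / E                        # plastic multiplier increment
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign
    eps_p_new = eps_p_old + dgamma * sign
    # Consistent tangent degenerates to 0 for 1D perfect plasticity
    return sigma, eps_p_new, 0.0

# Example: steel-like values, strain well beyond yield
sigma, eps_p, Ct = radial_return_1d(0.01, E=200e3, sigma_y=250.0, eps_p_old=0.0)
```

    The consistent tangent returned here (the derivative of the updated stress with respect to the strain) is what a Newton-Raphson loop needs for the quadratic convergence the abstract mentions.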

  3. Composition Feature of the Element Tangent Stiffness Matrix of Geometrically Nonlinear 2D Frame Structures

    Directory of Open Access Journals (Sweden)

    Romanas Karkauskas

    2011-04-01

    Full Text Available The expressions for the finite element method tangent stiffness matrix of geometrically nonlinear structures are not fully presented in publications; usually only the small-displacement stiffness matrices are given. To solve various problems of structural analysis or design, and to trace the real deflection path of a structure, a fully described analytical expression of the tangent matrix is necessary. This paper presents a technique for generating the tangent stiffness matrix from the stationarity conditions of the total potential energy of the discrete body, considering a geometrically nonlinear 2D frame element and taking account of inter-element interaction forces only. The derivative of the internal-force vector with respect to the nodal displacements is the tangent stiffness matrix. The analytical expressions, in terms of nodal displacements, of the matrices forming the 2D frame element tangent stiffness matrix are presented in the article. The suggested methodology has been checked by symbolic calculations in MATLAB. The analytical expression of the stiffness matrix has been obtained. Article in Lithuanian.
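
    The defining property above, that the tangent stiffness is the derivative of the internal-force vector with respect to the nodal displacements, can be checked numerically by finite differences. The internal-force function below is a toy stand-in, not the paper's 2D frame expressions:

```python
def numerical_tangent(internal_force, u, h=1e-7):
    """Tangent stiffness K[i][j] = d f_i / d u_j by forward differences.
    internal_force: callable mapping a displacement list to a force list."""
    f0 = internal_force(u)
    n = len(u)
    K = [[0.0] * n for _ in range(n)]
    for j in range(n):
        up = list(u)
        up[j] += h
        fj = internal_force(up)
        for i in range(n):
            K[i][j] = (fj[i] - f0[i]) / h
    return K

# Toy geometrically nonlinear internal-force vector (illustrative only)
def f_int(u):
    return [2.0 * u[0] + u[0]**2, 3.0 * u[1] + u[0] * u[1]]

K = numerical_tangent(f_int, [0.1, 0.2])
```

    Such a finite-difference check is a common way to validate a hand-derived analytical tangent: the two matrices should agree to within the differencing error.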

  4. Incremental localized boundary-domain integro-differential equations of elastic damage mechanics for inhomogeneous body

    OpenAIRE

    Mikhailov, SE

    2006-01-01

    Copyright @ 2006 Tech Science Press A quasi-static mixed boundary value problem of elastic damage mechanics for a continuously inhomogeneous body is considered. Using the two-operator Green-Betti formula and the fundamental solution of an auxiliary homogeneous linear elasticity with frozen initial, secant or tangent elastic coefficients, a boundary-domain integro-differential formulation of the elasto-plastic problem with respect to the displacement rates and their gradients is derived. Usin...

  5. Tangent unit-vector fields: Nonabelian homotopy invariants and the Dirichlet energy

    KAUST Repository

    Majumdar, Apala; Robbins, J.M.; Zyskin, Maxim

    2009-01-01

    energy, E (H), for continuous tangent maps of arbitrary homotopy type H. The expression for E (H) involves a topological invariant - the spelling length - associated with the (nonabelian) fundamental group of the n-times punctured two-sphere, π1 (S2 - {s1

  6. An equation satisfied by the tangent to a shear-free, geodesic, null congruence

    International Nuclear Information System (INIS)

    Hogan, P.A.; Dublin Inst. for Advanced Studies

    1987-01-01

    A tensorial equation satisfied by the tangent to a shear-free geodesic, null congruence is presented. If the congruence is neither twist-free nor expansion-free then the equation defines a second, unique, null direction previously obtained, using the spinor formalism, by Somers. Some further properties of the equation are discussed. (orig.)

  7. A combined Preisach–Hyperbolic Tangent model for magnetic hysteresis of Terfenol-D

    Energy Technology Data Exchange (ETDEWEB)

    Talebian, Soheil [Department of Mechanical Engineering, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Hojjat, Yousef, E-mail: yhojjat@modares.ac.ir [Department of Mechanical Engineering, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Ghodsi, Mojtaba [Department of Mechanical and Industrial Engineering, Sultan Qaboos University, Muscat (Oman); Karafi, Mohammad Reza [Department of Mechanical Engineering, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Mirzamohammadi, Shahed [Department of Mechanical Engineering, Shahid Rajaee University, Tehran (Iran, Islamic Republic of)

    2015-12-15

    This study presents a new model using the combination of Preisach and Hyperbolic Tangent models, to predict the magnetic hysteresis of Terfenol-D at different frequencies. Initially, a proper experimental setup was fabricated and used to obtain different magnetic hysteresis curves of Terfenol-D; such as major, minor and reversal loops. Then, it was shown that the Hyperbolic Tangent model is precisely capable of modeling the magnetic hysteresis of the Terfenol-D for both rate-independent and rate-dependent cases. Empirical equations were proposed with respect to magnetic field frequency which can calculate the non-dimensional coefficients needed by the model. These empirical equations were validated at new frequencies of 100 Hz and 300 Hz. Finally, the new model was developed through the combination of Preisach and Hyperbolic Tangent models. In the combined model, analytical relations of the Hyperbolic Tangent model for the first order reversal loops determined the weighting function of the Preisach model. This model reduces the required experiments and errors due to numerical differentiations generally needed for characterization of the Preisach function. In addition, it can predict the rate-dependent hysteresis as well as rate-independent hysteresis. - Highlights: • Different hysteresis curves of Terfenol-D are experimentally obtained at 0–200 Hz. • A new model is presented using combination of Preisach and Hyperbolic Tangent models. • The model predicts both rate-independent and rate-dependent hystereses of Terfenol-D. • The analytical model reduces the numerical errors and number of required experiments.
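
    The shape of a Hyperbolic Tangent hysteresis branch can be sketched as an anhysteretic tanh curve shifted by a coercive field. The constants below are illustrative, not fitted Terfenol-D values:

```python
import math

def tanh_branch(H, Hc, Ms, a, ascending=True):
    """Hyperbolic-tangent hysteresis branch: the ascending branch lags the
    applied field H by the coercive field Hc, the descending branch leads it.
    Ms (saturation) and a (shape constant) are illustrative parameters."""
    shift = Hc if ascending else -Hc
    return Ms * math.tanh((H - shift) / a)

# At the same field, the descending branch sits above the ascending one,
# which is what opens the hysteresis loop
H = 50.0
up = tanh_branch(H, Hc=10.0, Ms=1.0, a=40.0, ascending=True)
down = tanh_branch(H, Hc=10.0, Ms=1.0, a=40.0, ascending=False)
```

    In the combined model of the paper, analytical branch expressions of this kind supply the first-order reversal loops from which the Preisach weighting function is identified.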

  8. Doubly stratified MHD tangent hyperbolic nanofluid flow due to permeable stretched cylinder

    Science.gov (United States)

    Nagendramma, V.; Leelarathnam, A.; Raju, C. S. K.; Shehzad, S. A.; Hussain, T.

    2018-06-01

    An investigation is presented to analyze the effects of a heat source and sink in a doubly stratified MHD incompressible tangent hyperbolic nanofluid flow due to the stretching of a cylinder embedded in a porous space. To develop the mathematical model of the tangent hyperbolic nanofluid, Brownian motion and thermophoresis are accounted for. The governing equations of continuity, momentum, and the thermal and solutal boundary layers are recast into sets of non-linear expressions. These expressions are solved with a Runge-Kutta scheme in MATLAB. The impacts of sundry parameters are illustrated graphically, and physical quantities of engineering interest, such as the skin friction and the Nusselt and Sherwood numbers, are examined by computing numerical values. It is clear that the power-law index parameter and the curvature parameter show a favorable effect on the momentum boundary layer thickness, whereas the Weissenberg number reveals an inimical influence.

  9. Tangent unit-vector fields: Nonabelian homotopy invariants and the Dirichlet energy

    KAUST Repository

    Majumdar, Apala

    2009-10-01

    Let O be a closed geodesic polygon in S2. Maps from O into S2 are said to satisfy tangent boundary conditions if the edges of O are mapped into the geodesics which contain them. Taking O to be an octant of S2, we evaluate the infimum Dirichlet energy, E (H), for continuous tangent maps of arbitrary homotopy type H. The expression for E (H) involves a topological invariant - the spelling length - associated with the (nonabelian) fundamental group of the n-times punctured two-sphere, π1 (S2 - {s1, ..., sn}, *). These results have applications for the theoretical modelling of nematic liquid crystal devices. To cite this article: A. Majumdar et al., C. R. Acad. Sci. Paris, Ser. I 347 (2009). © 2009 Académie des sciences.

  10. Boundary Layer Studies on a Spinning Tangent-Ogive-Cylinder Model

    Science.gov (United States)

    1975-07-01

    An experimental investigation of the Magnus effect on a seven-caliber tangent-ogive-cylinder model in supersonic flow is reported. The... Keywords: Three-Dimensional Boundary Layer; Compressible Flow; Body of Revolution; Magnus Effects; Boundary Layer... factors have resulted in renewed interest in the study of the Magnus effect. This report describes an experimental study of the effects of spin on

  11. Characterization of the Unit Tangent Sphere Bundle with $ g $-Natural Metric and Almost Contact B-metric Structure

    Directory of Open Access Journals (Sweden)

    Farshad Firuzi

    2017-06-01

    Full Text Available We consider the unit tangent sphere bundle of a Riemannian manifold $(M,g)$ as a $(2n+1)$-dimensional manifold and we equip it with a pseudo-Riemannian $g$-natural almost contact B-metric structure. Then, by computing the coefficients of the structure tensor $F$, we completely characterize the unit tangent sphere bundle equipped with this structure with respect to the relevant classification of almost contact B-metric structures, and determine a class to which the unit tangent sphere bundle with the mentioned structure belongs. Also, we find some curvature conditions such that the mentioned structure satisfies each of the eleven basic classes.

  12. FEM Simulation of Incremental Shear

    International Nuclear Information System (INIS)

    Rosochowski, Andrzej; Olejnik, Lech

    2007-01-01

    A popular way of producing ultrafine-grained metals on a laboratory scale is severe plastic deformation. This paper introduces incremental shear, a new severe plastic deformation process. A finite element method simulation is carried out for various tool geometries and process kinematics. It has been established that, for the successful realisation of the process, the inner radius of the channel as well as the feeding increment should be approximately 30% of the billet thickness. The angle at which the reciprocating die works the material can be 30 degrees. When compared to equal channel angular pressing, incremental shear shows basic similarities in the mode of material flow and a few technological advantages which make it an attractive alternative to the known severe plastic deformation processes. The most promising characteristic of incremental shear is the possibility of processing very long billets in a continuous way, which makes the process more industrially relevant.

  13. On kinematical minimum principles for rates and increments in plasticity

    International Nuclear Information System (INIS)

    Zouain, N.

    1984-01-01

    The optimization approach to elastoplastic analysis is discussed, showing that some minimum principles related to numerical methods can be derived by means of duality and penalization procedures. Three minimum principles for velocity and plastic-multiplier-rate fields are presented in the framework of perfect plasticity. The first is the classical Greenberg formulation. The second, due to Capurso, is developed here with a different motivation, and modified by penalization of constraints so as to arrive at a third principle for rates. The counterparts of these optimization formulations in terms of discrete increments of displacements and plastic multipliers are discussed. The third of these minimum principles for finite increments is recognized to be closely related to Maier's formulation of holonomic plasticity. (Author)

  14. FDTD Stability: Critical Time Increment

    OpenAIRE

    Z. Skvor; L. Pauk

    2003-01-01

    A new approach is presented for determining the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries. The maximal time increment corresponds to a characteristic value of a Helmholtz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of ...
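
    For comparison, the well-known Courant limit for the standard Yee scheme on a uniform Cartesian mesh, which the approach above generalizes to curvilinear meshes, can be computed directly:

```python
import math

def courant_limit(dx, dy, dz, c=299792458.0):
    """Courant stability limit for the Yee FDTD scheme on a uniform
    Cartesian mesh: dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)).
    c defaults to the vacuum speed of light in m/s."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

# Maximal stable time step for a 1 mm cubic cell
dt_max = courant_limit(1e-3, 1e-3, 1e-3)
```

    On a cubic cell this reduces to dt ≤ Δx/(c√3); the eigenvalue method of the paper recovers such limits as characteristic values when the geometry is no longer Cartesian.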

  15. Applicability of the mα-tangent Method to Estimate Plastic Limit Loads of Elbows and Branch Junctions

    Energy Technology Data Exchange (ETDEWEB)

    Gim, Jae-Min; Kim, Sang-Hyun; Bae, Kyung-Dong; Kim, Yun-Jae [Korea Univ., Seoul (Korea, Republic of); Kim, Jong-Sung [Sejong Univ., Seoul (Korea, Republic of)

    2017-06-15

    In this study, the limit loads calculated by the mα-tangent method, based on linear finite element analysis, are compared with closed-form solutions proposed by various authors. Elbows and branch junctions, which are representative structures of piping systems, are selected as the objects of the analysis. The applicability of the mα-tangent method is investigated by applying it to cases with various geometries. Internal pressure and in-plane bending moment are considered, and the mα-tangent method is in good agreement with the existing solutions in the case of elbows. However, the limit loads calculated by the mα-tangent method for branch junctions do not agree well with the existing solutions and do not show any clear tendency. The reason is that the results are biased by the stress concentration at the discontinuities.

  16. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-increasing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of those methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  17. A Few Comments on Determining the Shapes of Hyperboloid Cooling Towers by the Means of Ambient Tangents Method

    OpenAIRE

    Jasińska, Elżbieta; Preweda, Edward

    2004-01-01

    The paper presents considerations on determining the parameters of location and shape of a model hyperboloid cooling tower by means of the ambient tangents method. Attention is drawn to the method of determining the so-called length of the tangent, understood as the horizontal distance between the station and the points of tangency with the structure, as well as to the impact of the calculation of this length on the shell parameters being determined. The calculations of...

  18. Without derivatives or limits: from visual and geometrical points of view to algebraic methods for identifying tangent lines

    Science.gov (United States)

    Vivier, L.

    2013-07-01

    Usually, the tangent line is considered to be a calculus notion. However, it is also a graphical and an algebraic notion. The graphical frame, where our primary conceptions are conceived, could give rise to algebraic methods to obtain the tangent line to a curve. In this pre-calculus perspective, two methods are described and discussed according to their potential for secondary students and teacher training.

  19. Overview of TANGENT (Tandem Aerosol Nucleation and Growth ENvironment Tube) 2017 IOP Study

    Science.gov (United States)

    Tiszenkel, L.

    2017-12-01

    New particle formation consists of two steps: nucleation and growth of nucleated particles. However, most laboratory studies have been conducted under conditions where these two processes are convoluted together, thereby hampering the detailed understanding of the effect of chemical species and atmospheric conditions on two processes. The objective of the Tandem Aerosol Nucleation and Growth ENvironment Tube (TANGENT) laboratory study is to investigate aerosol nucleation and growth properties independently by separating these two processes in two different flow tubes. This research is a collaboration between the University of Alabama in Huntsville and the University of Delaware. In this poster we will present the experimental setup of TANGENT and summarize the key results from the first IOP (intense observation period) experiments undertaken during Summer 2017. Nucleation takes place in a temperature- and RH-controlled fast flow reactor (FT-1) where sulfuric acid forms from OH radicals and sulfur dioxide. Sulfuric acid and impurity base compounds are detected with chemical ionization mass spectrometers (CIMS). Particle sizes and number concentrations of newly nucleated particles are measured with a scanning mobility particle sizer (SMPS) and particle size magnifier (PSM), providing concentrations of particles between 1-100 nm. The nucleation particles are transferred directly to the growth tube (FT-2) where oxidants and biogenic organic precursors are added to grow nucleated nanoparticles. Sizes of particles after growth are analyzed with an additional SMPS and elemental chemical composition of 50 nm and above particles detected with a nano-aerosol mass spectrometer (NAMS). TANGENT provides the unique ability to conduct experiments that can monitor and control reactant concentrations, aerosol size and aerosol chemical composition during nucleation and growth. Experiments during this first IOP study have elucidated the effects of sulfur dioxide, particle size

  20. Enabling Incremental Query Re-Optimization.

    Science.gov (United States)

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
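
    The incremental recomputation idea can be illustrated with a toy sketch. This is our own construction, not the authors' datalog-based system: cache a cost at every node of a plan tree and, when new runtime cost information arrives at a leaf, propagate the change only along the path to the root instead of re-enumerating all plans. The `Node` structure and costs here are hypothetical.

```python
# Toy sketch of incremental plan-cost maintenance (hypothetical structure,
# not the paper's datalog-based implementation).
class Node:
    def __init__(self, name, cost=0.0, left=None, right=None):
        self.name, self.left, self.right = name, left, right
        self.parent = None
        self.cost = cost          # node's own cost estimate
        self.total = cost         # cached cost of the whole subtree
        for child in (left, right):
            if child:
                child.parent = self

    def recompute(self):
        """Refresh the cached subtree cost from the children's caches."""
        self.total = self.cost + sum(c.total for c in (self.left, self.right) if c)

def update_leaf_cost(leaf, new_cost):
    """New runtime statistics arrive: touch only the root path, O(depth)."""
    leaf.cost = leaf.total = new_cost
    node = leaf.parent
    while node:
        node.recompute()
        node = node.parent

a, b = Node("scan(A)", 10), Node("scan(B)", 20)
j = Node("join", 5, a, b); j.recompute()
root = Node("project", 1, j); root.recompute()
update_leaf_cost(b, 50)       # e.g. a stream's input rate changed
print(root.total)             # 66 = 1 + 5 + 10 + 50
```

    A real optimizer must also re-run pruning when cheaper alternative plans become viable; the point here is only that cached costs make each update proportional to the affected path, in the spirit of the paper's incremental view-maintenance formulation.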

  1. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of a database back-end and a visualizer front-end into one tightly coupled system. The main aim is to reduce the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We...... also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO), which considers visible objects and enables incremental visualization along the observer movement...... path. IVVO is a novel solution which allows data to be visualized and loaded on the fly from the database and which regards visibilities of objects. We run a set of experiments to show that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses...

  2. Maximal violation of Bell's inequalities for algebras of observables in tangent spacetime regions

    International Nuclear Information System (INIS)

    Summers, S.J.; Werner, R.

    1988-01-01

    We continue our study of Bell's inequalities and quantum field theory. It is shown in considerably broader generality than in our previous work that algebras of local observables corresponding to complementary wedge regions maximally violate Bell's inequality in all normal states. Pairs of commuting von Neumann algebras that maximally violate Bell's inequalities in all normal states are characterized. Algebras of local observables corresponding to tangent double cones are shown to maximally violate Bell's inequalities in all normal states in dilatation-invariant theories, in free quantum field models, and in a class of interacting models. Further, it is proven that such algebras are not split in any theory with an ultraviolet scaling limit

  3. Study on Remote Monitoring System of Crossing and Spanning Tangent Tower

    Science.gov (United States)

    Chen, Da-bing; Zhang, Nai-long; Zhang, Meng-ge; Wang, Ze-hua; Zhang, Yan

    2017-05-01

    In order to grasp the vibration state of overhead transmission lines and ensure their operational security, a remote monitoring system for crossing and spanning tangent towers was studied. The system automatically collects the displacement, velocity and acceleration of the tower together with local weather data, and displays them on computers at a remote monitoring centre through a wireless network, realizing real-time collection and transmission of vibration signals. Application results show that the system performs well in terms of reliability and accuracy. The system can be used for remote monitoring of transmission towers of UHV power transmission lines and in large spanning areas.

  4. Incremental Trust in Grid Computing

    DEFF Research Database (Denmark)

    Brinkløv, Michael Hvalsøe; Sharp, Robin

    2007-01-01

    This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and ...... of Grid computing systems....

  5. Convergent systems vs. incremental stability

    NARCIS (Netherlands)

    Rüffer, B.S.; Wouw, van de N.; Mueller, M.

    2013-01-01

    Two similar stability notions are considered; one is the long established notion of convergent systems, the other is the younger notion of incremental stability. Both notions require that any two solutions of a system converge to each other. Yet these stability concepts are different, in the sense

  6. Comparison of the Tangent Linear Properties of Tracer Transport Schemes Applied to Geophysical Problems.

    Science.gov (United States)

    Kent, James; Holdaway, Daniel

    2015-01-01

    A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
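
    The linearity analysis described above can be mimicked in a few lines. The following is a schematic 1D setup of our own, not the GEOS-5 code: for a genuinely linear scheme, the finite-difference perturbation matches the scheme applied to the perturbation exactly, while a nonlinear fix-up (here a simple positivity clip, standing in for a flux limiter) breaks that identity.

```python
import numpy as np

# Sketch: quantify how linear a transport scheme is. For a linear scheme,
# step(q + eps*d) - step(q) == eps * step(d) to machine precision.
def upwind(q, c=0.5):
    """First-order upwind advection step, periodic domain; linear in q."""
    return q - c * (q - np.roll(q, 1))

def upwind_clipped(q, c=0.5):
    """Same scheme with a positivity clip -- a simple nonlinearity."""
    return np.maximum(upwind(q, c), 0.0)

def linearity_gap(step, q, d, eps=1e-6):
    """Max-norm mismatch between the perturbation response and eps*step(d)."""
    fd = step(q + eps * d) - step(q)
    return np.max(np.abs(fd - eps * step(d)))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
q = np.sin(x)               # state with negative lobes (clip will act)
d = np.cos(3 * x)           # arbitrary perturbation

print(linearity_gap(upwind, q, d))          # ~0: scheme is exactly linear
print(linearity_gap(upwind_clipped, q, d))  # O(eps): the clip breaks linearity
```

    The same finite-difference-versus-tangent-linear comparison is the standard correctness check used when building TLM code for a dynamical core.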

  7. Evolution of cooperation driven by incremental learning

    Science.gov (United States)

    Li, Pei; Duan, Haibin

    2015-02-01

    It has been shown that the details of microscopic rules in structured populations can have a crucial impact on the ultimate outcome in evolutionary games. Alternative formulations of strategies and their revision processes, exploring how strategies are actually adopted and spread within the interaction network, therefore need to be studied. In the present work, we formulate the strategy update rule as an incremental learning process, wherein knowledge is refreshed according to one's own experience learned from the past (self-learning) and that gained from social interaction (social-learning). More precisely, we propose a continuous version of strategy update rules, introducing the willingness to cooperate W to better capture the flexibility of decision-making behavior. Importantly, the newly gained knowledge, including self-learning and social learning, is weighted by the parameter ω, establishing a strategy update rule involving an innovative element. Moreover, we quantify the macroscopic features of the emerging patterns to inspect the underlying mechanisms of the evolutionary process using six cluster characteristics. To further support our results, we examine the time evolution of these characteristics. Our results might provide insights for understanding cooperative behaviors and have several important implications for understanding how individuals adjust their strategies under real-life conditions.
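
    A minimal sketch of such a continuous update rule is given below. The variable names W and ω follow the abstract, but the functional form (payoff-driven self-learning term, imitation-based social term, and their ω-weighted blend) is our assumption for illustration, not the paper's actual equations.

```python
import random

# Hypothetical continuous strategy update: blend self-learning with
# social learning, weighted by omega, and keep W (willingness to
# cooperate) inside [0, 1]. Functional form assumed, not the paper's.
def update_willingness(W, self_payoff, neighbor_payoff, neighbor_W,
                       omega, eta=0.1):
    """Return the updated willingness to cooperate."""
    self_term = W + eta * (self_payoff - 0.5)              # reinforce if doing well
    social_term = neighbor_W if neighbor_payoff > self_payoff else W
    W_new = (1 - omega) * self_term + omega * social_term  # omega weights new knowledge
    return min(1.0, max(0.0, W_new))                       # clamp to [0, 1]

def acts_cooperatively(W):
    """Interpret W as the probability of cooperating in the next round."""
    return random.random() < W

print(update_willingness(0.5, 1.0, 0.2, 0.9, omega=0.3))   # 0.535
```

    The clamp keeps the rule well defined at the boundaries, where a player is fully committed to cooperation or defection.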

  8. Optimality Conditions in Differentiable Vector Optimization via Second-Order Tangent Sets

    International Nuclear Information System (INIS)

    Jimenez, Bienvenido; Novo, Vicente

    2004-01-01

    We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Frechet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional so that there is no gap with necessary conditions. Lagrange multiplier rules are also given

  9. Covariant Renormalizable Modified and Massive Gravity Theories on (Non) Commutative Tangent Lorentz Bundles

    CERN Document Server

    Vacaru, Sergiu I

    2014-01-01

    The fundamental field equations in modified gravity (including general relativity; massive and bimetric theories; Hořava-Lifshitz, HL; Einstein-Finsler gravity extensions etc.) possess an important decoupling property with respect to nonholonomic frames with 2 (or 3) +2+2+... spacetime decompositions. This allows us to construct exact solutions with generic off-diagonal metrics depending on all spacetime coordinates via generating and integration functions containing (un-)broken symmetry parameters. Such nonholonomic configurations/models have a nice ultraviolet behavior and seem to be ghost free and (super)renormalizable in the sense of covariant and/or massive modifications of HL gravity. The apparent noncommutativity and breaking of Lorentz invariance by quantum effects can be encoded into fibers of noncommutative tangent Lorentz bundles for corresponding "partner" anisotropically induced theories. We show how the constructions can be extended to include conjectured covariant renormalizable models with...

  10. Learning in fully recurrent neural networks by approaching tangent planes to constraint surfaces.

    Science.gov (United States)

    May, P; Zhou, E; Lee, C W

    2012-10-01

    In this paper we present a new variant of the online real time recurrent learning algorithm proposed by Williams and Zipser (1989). Whilst the original algorithm utilises gradient information to guide the search towards the minimum training error, it is very slow in most applications and often gets stuck in local minima of the search space. It is also sensitive to the choice of learning rate and requires careful tuning. The new variant adjusts weights by moving to the tangent planes to constraint surfaces. It is simple to implement and requires no parameters to be set manually. Experimental results show that this new algorithm gives significantly faster convergence whilst avoiding problems like local minima. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. CATALOG OF OBSERVED TANGENTS TO THE SPIRAL ARMS IN THE MILKY WAY GALAXY

    International Nuclear Information System (INIS)

    Vallée, Jacques P.

    2014-01-01

    From the Sun's location in the Galactic disk, one can use different arm tracers (CO, H I, thermal or ionized or relativistic electrons, masers, cold and hot dust, etc.) to locate a tangent to each spiral arm in the disk of the Milky Way. We present a master catalog of the astronomically observed tangents to the Galaxy's spiral arms, using different arm tracers from the literature. Some arm tracers can have slightly divergent results from several papers, so a mean value is taken—see the Appendix for CO, H II, and masers. The catalog of means currently consists of 63 mean tracer entries, spread over many arms (Carina, Crux-Centaurus, Norma, Perseus origin, near 3 kpc, Scutum, Sagittarius), stemming from 107 original arm tracer entries. Additionally, we updated and revised a previous statistical analysis of the angular offset and linear separation from the mid-arm for each different mean arm tracer. Given enough arm tracers, and summing and averaging over all four spiral arms, one could determine if arm tracers have separate and parallel lanes in the Milky Way. This statistical analysis allows a cross-cut of a Galactic spiral arm to be made, confirming a recent discovery of a linear separation between arm tracers. Here, from the mid-arm's CO to the inner edge's hot dust, the arm halfwidth is about 340 pc; doubling would yield a full arm width of 680 pc. We briefly compare these observations with the predictions of many spiral arm theories, notably the density wave theory

  12. CATALOG OF OBSERVED TANGENTS TO THE SPIRAL ARMS IN THE MILKY WAY GALAXY

    Energy Technology Data Exchange (ETDEWEB)

    Vallée, Jacques P., E-mail: jacques.vallee@nrc-cnrc.gc.ca [Herzberg Astrophysics, National Research Council Canada, National Science Infrastructure portfolio, 5071 West Saanich Road, Victoria, BC, V9E 2E7 (Canada)

    2014-11-01

    From the Sun's location in the Galactic disk, one can use different arm tracers (CO, H I, thermal or ionized or relativistic electrons, masers, cold and hot dust, etc.) to locate a tangent to each spiral arm in the disk of the Milky Way. We present a master catalog of the astronomically observed tangents to the Galaxy's spiral arms, using different arm tracers from the literature. Some arm tracers can have slightly divergent results from several papers, so a mean value is taken—see the Appendix for CO, H II, and masers. The catalog of means currently consists of 63 mean tracer entries, spread over many arms (Carina, Crux-Centaurus, Norma, Perseus origin, near 3 kpc, Scutum, Sagittarius), stemming from 107 original arm tracer entries. Additionally, we updated and revised a previous statistical analysis of the angular offset and linear separation from the mid-arm for each different mean arm tracer. Given enough arm tracers, and summing and averaging over all four spiral arms, one could determine if arm tracers have separate and parallel lanes in the Milky Way. This statistical analysis allows a cross-cut of a Galactic spiral arm to be made, confirming a recent discovery of a linear separation between arm tracers. Here, from the mid-arm's CO to the inner edge's hot dust, the arm halfwidth is about 340 pc; doubling would yield a full arm width of 680 pc. We briefly compare these observations with the predictions of many spiral arm theories, notably the density wave theory.

  13. Torsion in a gravity theory with SO(k) x SO(d-k) as tangent group

    International Nuclear Information System (INIS)

    Viswanathan, K.S.; Wong, B.; Simon Fraser Univ., Burnaby, British Columbia

    1985-01-01

    We consider a d-dimensional theory of gravity where the tangent group is SO(k) x SO(d-k) rather than SO(d) as in Riemannian theories. This theory has nonvanishing torsion (which is required if the theory is to yield gauge fields). The torsion is determined consistently in terms of vielbein derivatives. (orig.)

  14. Aspects regarding the Calculation of the Dielectric Loss Angle Tangent between the Windings of a Rated 40 MVA Transformer

    Directory of Open Access Journals (Sweden)

    Cristinel Popescu

    2015-09-01

    Full Text Available The paper aims to identify how to determine the dielectric loss angle tangent of electric transformers in transformer stations. The authors carried out a case study on the dielectric formed between the high- and medium-voltage windings of a 40 MVA rated transformer.

  15. Unmanned Maritime Systems Incremental Acquisition Approach

    Science.gov (United States)

    2016-12-01

    MBA professional report: Unmanned Maritime Systems Incremental Acquisition Approach, by Thomas Driscoll, Lieutenant... Approved for public release; distribution is unlimited. Abstract: The purpose of this MBA report is to explore and understand the issues

  16. Incremental deformation: A literature review

    Directory of Open Access Journals (Sweden)

    Nasulea Daniel

    2017-01-01

    Full Text Available Nowadays customer requirements are permanently changing, and in accordance with them the tendency in modern industry is to implement flexible manufacturing processes. In recent decades, metal forming has gained the attention of researchers and considerable changes have occurred. Because for a small number of parts the conventional metal forming processes are expensive and time-consuming in terms of design and manufacturing preparation, manufacturers and researchers have become interested in flexible processes. One of the most investigated flexible processes in metal forming is incremental sheet forming (ISF). ISF is an advanced flexible manufacturing process which allows complex 3D products to be manufactured without expensive dedicated tools. In most cases an ISF process needs only the following: a simple tool, a fixing device for the sheet metal blank and a universal CNC machine. Using this process, axisymmetric parts can be manufactured, usually on a CNC lathe, as well as complex asymmetric parts using CNC milling machines, robots or dedicated equipment. This paper aims to present the current status of incremental sheet forming technologies in terms of process parameters and their influences, wall thickness distribution, springback effect, formability, surface quality and the current main research directions.

  17. Application of the Double-Tangent Construction of Coexisting Phases to Any Type of Phase Equilibrium for Binary Systems Modeled with the Gamma-Phi Approach

    Science.gov (United States)

    Jaubert, Jean-Noël; Privat, Romain

    2014-01-01

    The double-tangent construction of coexisting phases is an elegant approach to visualize all the multiphase binary systems that satisfy the equality of chemical potentials and to select the stable state. In this paper, we show how to perform the double-tangent construction of coexisting phases for binary systems modeled with the gamma-phi…
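
    For readers who want to experiment numerically, the double-tangent construction can be recovered from sampled data via the lower convex hull. This is our own illustration with an assumed toy Gibbs-energy curve, not taken from the paper: hull vertices lie on the stable branches, and the hull edge that bridges the non-convex region is the common tangent, whose endpoints are the coexisting compositions.

```python
import numpy as np

# Numerical double-tangent construction via the lower convex hull of a
# sampled molar Gibbs energy curve g(x). Curve points *not* on the hull
# lie inside the miscibility gap; the hull edge spanning them is the
# common tangent.
def double_tangent(x, g):
    """Return (x1, x2): endpoints of the widest hull edge (common tangent)."""
    hull = [0, 1]
    for i in range(2, len(x)):          # left-to-right lower-hull scan
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # drop b if it lies on or above the chord from a to i
            if (g[b] - g[a]) * (x[i] - x[a]) >= (g[i] - g[a]) * (x[b] - x[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    spans = [(x[j] - x[i], i, j) for i, j in zip(hull, hull[1:])]
    _, i, j = max(spans)
    return x[i], x[j]

# toy double-well curve (assumed model data) with minima near x=0.2, x=0.8;
# the linear tilt does not move the tangency points
x = np.linspace(0.01, 0.99, 400)
g = (x - 0.2) ** 2 * (x - 0.8) ** 2 + 0.05 * x
x1, x2 = double_tangent(x, g)
print(x1, x2)   # coexisting compositions, close to the two minima
```

    Because adding a linear function of x shifts both the curve and any candidate tangent line by the same amount, the hull-based construction is insensitive to such tilts, which is exactly the property that makes the graphical double-tangent criterion work.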

  18. FDTD Stability: Critical Time Increment

    Directory of Open Access Journals (Sweden)

    Z. Skvor

    2003-06-01

    Full Text Available A new approach suitable for determination of the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries is presented. The maximal time increment corresponds to a characteristic value of a Helmholtz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of precision etc.), the maximal stable time increment is obtained from the highest characteristic value. The FD system is solved by an iterative method, which uses only slightly altered original FDTD formulae. The Courant condition yields a stable time increment, but in certain cases the maximum increment is slightly greater [2].
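
    For reference, the conservative Courant bound that the eigenvalue method can slightly exceed has a simple closed form on a uniform Cartesian Yee grid. The sketch below shows that standard baseline formula, not the paper's characteristic-value computation.

```python
import math

# Classic 3-D Courant stability bound for FDTD on a uniform Cartesian grid:
#   dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))
def courant_dt(dx, dy, dz, c=299792458.0):
    """Largest time increment satisfying the Courant condition (seconds)."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

dt = courant_dt(1e-3, 1e-3, 1e-3)   # 1 mm cubic cells, vacuum
print(dt)                           # ~1.93e-12 s
```

    The paper's point is that solving the discrete Helmholtz characteristic-value problem with the same mesh and boundaries can certify a slightly larger stable increment than this closed-form bound.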

  19. Incremental Observer Relative Data Extraction

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    2004-01-01

    The visual exploration of large databases calls for a tight coupling of database and visualization systems. Current visualization systems typically fetch all the data and organize it in a scene tree that is then used to render the visible data. For immersive data explorations in a Cave...... or a Panorama, where an observer is in the data space, this approach is far from optimal. A more scalable approach is to make the database system observer-aware and to restrict the communication between the database and visualization systems to the relevant data. In this paper VR-tree, an extension of the R......-tree, is used to index visibility ranges of objects. We introduce a new operator for incremental Observer Relative data Extraction (iORDE). We propose the Volatile Access STructure (VAST), a lightweight main memory structure that is created on the fly and is maintained during visual data explorations. VAST...

  20. Dosimetric comparison of intensity modulated radiotherapy techniques and standard wedged tangents for whole breast radiotherapy

    International Nuclear Information System (INIS)

    Fong, Andrew; Bromley, Regina; Beat, Mardi; Vien, Din; Dineley, Jude; Morgan, Graeme

    2009-01-01

    Full text: Prior to introducing intensity modulated radiotherapy (IMRT) for whole breast radiotherapy (WBRT) into our department we undertook a comparison of the dose parameters of several IMRT techniques and standard wedged tangents (SWT). Our aim was to improve the dose distribution to the breast and to decrease the dose to organs at risk (OAR): heart, lung and contralateral breast (Contra Br). Treatment plans for 20 women (10 right-sided and 10 left-sided) previously treated with SWT for WBRT were used to compare (a) SWT; (b) electronic compensators IMRT (E-IMRT); (c) tangential beam IMRT (T-IMRT); (d) coplanar multi-field IMRT (CP-IMRT); and (e) non-coplanar multi-field IMRT (NCP-IMRT). Plans for the breast were compared for (i) dose homogeneity (DH); (ii) conformity index (CI); (iii) mean dose; (iv) maximum dose; (v) minimum dose; and doses to OAR were calculated for (vi) heart; (vii) lung and (viii) Contra Br. Compared with SWT, all plans except CP-IMRT gave improvement in at least two of the seven parameters evaluated. T-IMRT and NCP-IMRT resulted in significant improvement in all parameters except DH, and both gave significant reductions in doses to OAR. As initial evaluation indicates that NCP-IMRT is likely to be too time-consuming to introduce on a large scale, T-IMRT is the preferred technique for WBRT in our department.
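
    Two of the plan metrics compared above can be sketched from voxel dose data. Definitions of dose homogeneity and conformity vary between departments, so the formulas below are one common convention assumed for illustration, not necessarily the ones used in this study.

```python
import numpy as np

# Illustrative plan metrics from voxel dose arrays (conventions assumed).
def dose_homogeneity(doses):
    """(D2% - D98%) / median dose -- smaller is more homogeneous."""
    d2, d98 = np.percentile(doses, [98, 2])
    return (d2 - d98) / np.median(doses)

def conformity_index(target_mask, dose, prescription):
    """Fraction of the target volume receiving at least the prescription dose."""
    covered = np.count_nonzero(target_mask & (dose >= prescription))
    return covered / np.count_nonzero(target_mask)

# toy voxel data (assumed): 4 target voxels, prescription 49 Gy
dose = np.array([50.0, 50.0, 48.0, 52.0])
target = np.ones(4, dtype=bool)
print(conformity_index(target, dose, 49.0))   # 0.75
```

    Real treatment planning systems compute these from full 3D dose grids and structure masks; the toy arrays above only fix the definitions.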

  1. Three dimensional peristaltic flow of hyperbolic tangent fluid in non-uniform channel having flexible walls

    Directory of Open Access Journals (Sweden)

    M. Ali Abbas

    2016-03-01

    Full Text Available In the present analysis, three-dimensional peristaltic flow of a hyperbolic tangent fluid in a non-uniform channel has been investigated. We have considered that the pressure is uniform over the whole cross section and that inertial effects can be neglected. For this purpose we consider laminar flow under the long wavelength (λ→∞) and creeping flow (Re→0) approximations. The resulting highly nonlinear equations are solved with the help of the homotopy perturbation method. The influence of various physical parameters of interest is demonstrated graphically for wall tension, mass characterization, the damping nature of the wall, wall rigidity, wall elastance, aspect ratio and the Weissenberg number. In the present investigation we found that the magnitude of the velocity is maximum in the center of the channel, whereas it is minimum near the walls. Streamlines are also drawn to discuss the trapping mechanism for all the physical parameters. A comparison has also been presented between Newtonian and non-Newtonian fluids.

  2. On excursion increments in heartbeat dynamics

    International Nuclear Information System (INIS)

    Guzmán-Vargas, L.; Reyes-Ramírez, I.; Hernández-Pérez, R.

    2013-01-01

    We study correlation properties of excursion increments of heartbeat time series from healthy subjects and heart failure patients. We construct the excursion time based on the original heartbeat time series, representing the time employed by the walker to return to the local mean value. Next, detrended fluctuation analysis and the fractal dimension method are applied to the magnitude and sign of the increments between successive excursions for the two groups. Our results show that for the magnitude series of excursion increments both groups display long-range correlations with similar correlation exponents, indicating that large (small) increments (decrements) are more likely to be followed by large (small) increments (decrements). For the sign sequences and for both groups, we find that increments are short-range anti-correlated, which is noticeable under heart failure conditions
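
    The magnitude/sign analysis described above rests on detrended fluctuation analysis (DFA). The following is a generic DFA-1 sketch of our own, not the authors' code, applied to the magnitude of increments of a white-noise surrogate series, for which the exponent should sit near the uncorrelated baseline of 0.5 (heartbeat magnitude series, per the abstract, exceed this).

```python
import numpy as np

# Generic DFA-1: scaling exponent of a series, e.g. the magnitude or sign
# of excursion increments.
def dfa_exponent(series, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis with linear detrending (DFA-1)."""
    y = np.cumsum(series - np.mean(series))      # integrated profile
    flucts = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)          # non-overlapping windows
        t = np.arange(s)
        msq = []
        for seg in segs:
            a, b = np.polyfit(t, seg, 1)         # local linear trend
            msq.append(np.mean((seg - (a * t + b)) ** 2))
        flucts.append(np.sqrt(np.mean(msq)))
    # the DFA exponent alpha is the slope of log F(s) versus log s
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)        # uncorrelated surrogate series
mag = np.abs(np.diff(x))             # magnitude of increments, as in the study
print(round(dfa_exponent(mag), 2))   # near 0.5: no long-range memory
```

    An exponent above 0.5 signals persistent long-range correlations (the healthy and heart-failure magnitude series in the study), while a value below 0.5 signals anti-correlation (the sign series).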

  3. Increment memory module for spectrometric data recording

    International Nuclear Information System (INIS)

    Zhuchkov, A.A.; Myagkikh, A.I.

    1988-01-01

    An incremental memory unit designed to record differential energy spectra of nuclear radiation is described. Using ROM as the incrementing device has reduced the number of elements and simplified information readout from the unit. The memory is organized as 2048 channels of 12 bits. The device connects directly to the bus of microprocessor systems similar to the KR 580. The maximal incrementation time is 3 μs. The unit can also be used in multichannel counting mode
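
    The behaviour of such a multichannel incremental memory can be modelled in a few lines. This is our own software model of a 2048-channel, 12-bit spectrum memory; the saturation-on-overflow policy is an assumption for illustration, since the abstract does not say how the original hardware handles a full channel.

```python
# Behavioral model (assumed, not the original hardware) of a 2048-channel
# x 12-bit incremental spectrometric memory: each detected event
# increments one channel, saturating at the 12-bit ceiling.
class IncrementMemory:
    CHANNELS = 2048
    MAXCOUNT = (1 << 12) - 1    # 4095, largest 12-bit count

    def __init__(self):
        self.mem = [0] * self.CHANNELS

    def increment(self, channel):
        """Register one event in `channel`; clamp at 4095 (assumed policy)."""
        if self.mem[channel] < self.MAXCOUNT:
            self.mem[channel] += 1

m = IncrementMemory()
for _ in range(5000):           # overflow test on a single channel
    m.increment(7)
print(m.mem[7])                 # 4095: counts saturate instead of wrapping
```

    Feeding each digitized pulse height to `increment` builds up the differential energy spectrum channel by channel, which is exactly the read-modify-write cycle the hardware unit performs in at most 3 μs.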

  4. Small Diameter Bomb Increment II (SDB II)

    Science.gov (United States)

    2015-12-01

    Selected Acquisition Report (SAR), RCS: DD-A&T(Q&A)823-439, Small Diameter Bomb Increment II (SDB II), as of FY 2017 President's Budget. DoD Component: Air Force. Joint Participants: Department of the Navy. Mission and Description: Small Diameter Bomb Increment II (SDB II) is a joint interest United States Air Force (USAF) and Department of the Navy

  5. Consequence of nanofluid on peristaltic transport of a hyperbolic tangent fluid model in the occurrence of apt (tending) magnetic field

    International Nuclear Information System (INIS)

    Akram, Safia; Nadeem, S.

    2014-01-01

    In the current study, the influence of nanofluid on peristaltic transport of a hyperbolic tangent fluid model in the presence of an inclined (tending) magnetic field has been discussed. The governing equations of a nanofluid are first modeled and then simplified under the lubrication approach. The coupled nonlinear equations of temperature and nanoparticle volume fraction are solved analytically using a homotopy perturbation technique. The analytical solutions for the stream function and pressure gradient are obtained using a perturbation technique. Graphical results are also presented to show the behavior of various physical parameters. - Highlights: • The main motivation of this work is to examine the behavior of nanofluids in peristaltic flows. • Few articles are available on this in the literature, and none addresses the new hyperbolic tangent fluid model in an asymmetric channel. • This study aims to fill that gap in the literature

  6. Consequence of nanofluid on peristaltic transport of a hyperbolic tangent fluid model in the occurrence of apt (tending) magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Akram, Safia, E-mail: safia_akram@yahoo.com [Department of Basic Sciences, MCS, National University of Sciences and Technology, Rawalpindi 46000 (Pakistan); Nadeem, S. [Department of Mathematics, Quaid-i-Azam University 45320, Islamabad 44000 (Pakistan)

    2014-05-01

    In the current study, the influence of nanofluid on peristaltic transport of a hyperbolic tangent fluid model in the presence of an inclined (tending) magnetic field has been discussed. The governing equations of a nanofluid are first modeled and then simplified under the lubrication approach. The coupled nonlinear equations of temperature and nanoparticle volume fraction are solved analytically using a homotopy perturbation technique. The analytical solutions for the stream function and pressure gradient are obtained using a perturbation technique. Graphical results are also presented to show the behavior of various physical parameters. - Highlights: • The main motivation of this work is to examine the behavior of nanofluids in peristaltic flows. • Few articles are available on this in the literature, and none addresses the new hyperbolic tangent fluid model in an asymmetric channel. • This study aims to fill that gap in the literature.

  7. Singularities of plane complex curves and limits of Kähler metrics with cone singularities. I: Tangent Cones

    Directory of Open Access Journals (Sweden)

    Borbon Martin de

    2017-02-01

    Full Text Available The goal of this article is to provide a construction and classification, in the case of two complex dimensions, of the possible tangent cones at points of limit spaces of non-collapsed sequences of Kähler-Einstein metrics with cone singularities. The proofs and constructions are completely elementary; nevertheless they have an intrinsic beauty. In a few words: tangent cones correspond to spherical metrics with cone singularities in the projective line by means of the Kähler quotient construction with respect to the S1-action generated by the Reeb vector field, except in the irregular case ℂβ₁×ℂβ₂ with β₂/β₁ ∉ ℚ.

  8. Incremental Query Rewriting with Resolution

    Science.gov (United States)

    Riazanov, Alexandre; Aragão, Marcelo A. T.

    We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a resolution-based first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent translation of these schematic answers to SQL queries which are evaluated using a conventional relational DBMS. We call our method incremental query rewriting, because an original semantic query is rewritten into a (potentially infinite) series of SQL queries. In this chapter, we outline the main idea of our technique - using abstractions of databases and constrained clauses for deriving schematic answers, and provide completeness and soundness proofs to justify the applicability of this technique to the case of resolution for FOL without equality. The proposed method can be directly used with regular RDBs, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.

  9. Screening of mucoadhesive vaginal gel formulations

    Directory of Open Access Journals (Sweden)

    Ana Ochoa Andrade

    2014-12-01

    Full Text Available Rational design of vaginal drug delivery formulations requires special attention to vehicle properties that optimize vaginal coating and retention. The aim of the present work was to perform a screening of mucoadhesive vaginal gels formulated with carbomer or carrageenan in binary combination with a second polymer (carbomer, guar or xanthan gum). The gels were characterised using in vitro adhesion, spreadability and leakage potential studies, as well as rheological measurements (stress and frequency sweep tests) and the effect of dilution with simulated vaginal fluid (SVF) on spreadability. Results were analysed using analysis of variance and multiple factor analysis. The combination of polymers enhanced the adhesion of both primary gelling agents, carbomer and carrageenan. From the rheological point of view, all formulations presented similar behaviour, prevalently elastic and characterised by loss tangent values well below 1. No correlation between rheological and adhesion behaviour was found. Carbomer and carrageenan gels containing the highest percentage of xanthan gum displayed good in vitro mucoadhesion and spreadability, minimal leakage potential and high resistance to dilution. The positive results obtained with carrageenan-xanthan gum-based gels can encourage the use of natural biocompatible adjuvants in the composition of vaginal products, a formulation field that is currently under the synthetic domain.
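
    The "loss tangent well below 1" criterion used to classify the gels as prevalently elastic comes directly from oscillatory rheometry. The sketch below shows the standard relation; the moduli values are assumed numbers for illustration, not data from the study.

```python
# Loss tangent from oscillatory rheometry: tan(delta) = G'' / G'.
# tan(delta) < 1 means storage dominates loss, i.e. predominantly
# elastic (gel-like) behaviour, as reported for these formulations.
def loss_tangent(g_storage, g_loss):
    """Return tan(delta) given storage modulus G' and loss modulus G''."""
    return g_loss / g_storage

tan_delta = loss_tangent(g_storage=120.0, g_loss=18.0)   # Pa, assumed values
print(round(tan_delta, 2), "elastic" if tan_delta < 1.0 else "viscous")
```

    In a frequency sweep, a gel whose tan(delta) stays well below 1 across the tested range keeps its elastic network intact, which is the behaviour the authors report even after dilution with SVF.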

  10. Internal Physical Features of a Land Surface Model Employing a Tangent Linear Model

    Science.gov (United States)

    Yang, Runhua; Cohn, Stephen E.; daSilva, Arlindo; Joiner, Joanna; Houser, Paul R.

    1997-01-01

    The Earth's land surface, including its biomass, is an integral part of the Earth's weather and climate system. Land surface heterogeneity, such as the type and amount of vegetative covering, has a profound effect on local weather variability and therefore on regional variations of the global climate. Surface conditions affect local weather and climate through a number of mechanisms. First, they determine the re-distribution of the net radiative energy received at the surface, through the atmosphere, from the sun. A certain fraction of this energy increases the surface ground temperature, another warms the near-surface atmosphere, and the rest evaporates surface water, which in turn creates clouds and causes precipitation. Second, they determine how much rainfall and snowmelt can be stored in the soil and how much instead runs off into waterways. Finally, surface conditions influence the near-surface concentration and distribution of greenhouse gases such as carbon dioxide. The processes through which these mechanisms interact with the atmosphere can be modeled mathematically, to within some degree of uncertainty, on the basis of underlying physical principles. Such a land surface model provides predictive capability for surface variables including ground temperature, surface humidity, and soil moisture and temperature. This information is important for agriculture and industry, as well as for addressing fundamental scientific questions concerning global and local climate change. In this study we apply a methodology known as tangent linear modeling to help us understand more deeply the behavior of the Mosaic land surface model, a model that has been developed over the past several years at NASA/GSFC. This methodology allows us to examine, directly and quantitatively, the dependence of prediction errors in land surface variables upon different vegetation conditions. The work also highlights the importance of accurate soil moisture information. Although surface
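    As a toy illustration of the methodology (not the Mosaic model itself; the two-variable "land model" below is made up), a tangent linear model is the Jacobian of one model step, and its fidelity can be checked against finite differences:

```python
import numpy as np

# Toy tangent linear model (TLM): linearize one step of a made-up 2-variable
# "land model" (T = ground temperature, w = soil moisture) and verify the TLM
# against finite differences of the nonlinear step.

def step(x, dt=0.1):
    T, w = x
    return np.array([T + dt * (-0.4 * T * w + 1.0),
                     w + dt * (0.2 * (1.0 - w) - 0.1 * w * T)])

def tlm(x, p, dt=0.1):
    T, w = x
    # Jacobian of step() with respect to (T, w), applied to perturbation p
    J = np.array([[1 + dt * (-0.4 * w), dt * (-0.4 * T)],
                  [dt * (-0.1 * w),     1 + dt * (-0.2 - 0.1 * T)]])
    return J @ p

x = np.array([2.0, 0.3])            # base state
p = np.array([1.0, -0.5])           # perturbation direction
for eps in (1e-2, 1e-3, 1e-4):
    err = np.linalg.norm(step(x + eps * p) - step(x) - eps * tlm(x, p))
    print(eps, err)                 # error shrinks like eps**2
```

    The quadratic shrinkage of the error confirms that the TLM propagates small perturbations, and hence prediction-error sensitivities, to first order.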

  11. Hyperbolic tangent variational approximation for interfacial profiles of binary polymer blends

    International Nuclear Information System (INIS)

    Lifschitz, M.; Freed, K.F.; Tang, H.

    1995-01-01

    Contemporary theories of binary polymer blend interfaces incorporate such features of real polymer blends as compressibility, local correlations, monomer structure, etc. However, these theories require complicated numerical schemes, and their solutions often cannot be interpreted in a physically clear fashion. We develop a variational formalism for computing interfacial properties of binary polymer blends based on a hyperbolic tangent representation for the interfaces. While such an analysis is straightforward in the incompressible limit, the extension to compressible binary blends requires two distinct width parameters and nontrivial analysis. When the profile width parameters are chosen to minimize the excess free energy of a phase separated binary blend, then the interfacial properties computed from our simplified interfacial theory closely match those computed with the much more sophisticated (and computationally intensive) treatments. Significant attention is devoted to describing the interfacial properties of blends in the regime intermediate between the strong and the weak segregation limits as well as to extrapolating between these limits. The extension of the square gradient theory to the Tang-Freed quartic approximation provides a more precise definition of the weak segregation limit, but the treatment is found to overestimate both the interfacial tension and width in the strong segregation limit. The width parameters for the different components of a strongly asymmetric compressible blend vary to a lesser extent than an asymptotic analysis in the bulk suggests. This finding indicates that the central portion of the profile contributes the most in the minimization of the excess free energy with respect to the variational width parameters. copyright 1995 American Institute of Physics
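    In its simplest (incompressible) form, the hyperbolic tangent ansatz for the composition profile across the interface can be written as (a standard textbook form, not quoted from the paper):

```latex
\phi(z) \;=\; \frac{\phi_\alpha + \phi_\beta}{2}
        \;+\; \frac{\phi_\beta - \phi_\alpha}{2}\,\tanh\!\left(\frac{z}{w}\right)
```

    Here \phi_\alpha and \phi_\beta are the coexisting bulk compositions and w is the variational width parameter chosen to minimize the excess free energy; the compressible case discussed above requires two such width parameters, one per component.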

  12. Tangent modulus in numerical integration of constitutive relations and its influence on convergence of N-R method

    Directory of Open Access Journals (Sweden)

    Poruba Z.

    2009-06-01

    For the numerical solution of elasto-plastic problems using the Newton-Raphson method on the global equilibrium equations, it is necessary to determine the tangent modulus at each integration point. To achieve the quadratic convergence of the Newton-Raphson method, it is convenient to use the so-called algorithmic tangent modulus, which is consistent with the integration scheme used. For simpler models, for example the Chaboche combined hardening model, it can be determined analytically. For more complex macroscopic models, it is often necessary to use an approximation approach. This possibility is presented in this contribution for the radial return method applied to the Chaboche model. An example solved in the software Ansys corresponds to a line contact problem under the assumption of Coulomb friction. The study shows that the number of Newton-Raphson iterations is higher when the continuum tangent modulus is used, and many times higher with the modified Newton-Raphson (initial stiffness) method.
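    The convergence claim can be illustrated with a minimal one-dimensional sketch (arbitrary parameters, with a cubic "internal force" standing in for an elasto-plastic law): Newton-Raphson with the exact tangent converges quadratically, while the initial-stiffness (modified Newton-Raphson) variant needs many more iterations.

```python
# 1D sketch: solve F_ext = k*u + b*u**3 by Newton-Raphson, comparing the
# exact tangent with the never-updated initial stiffness.

def solve(F_ext, tangent, k=10.0, b=0.1, tol=1e-10, max_iter=200):
    u, iters = 0.0, 0
    residual = lambda u: F_ext - (k * u + b * u**3)   # out-of-balance force
    while abs(residual(u)) > tol and iters < max_iter:
        u += residual(u) / tangent(u, k, b)
        iters += 1
    return u, iters

consistent = lambda u, k, b: k + 3.0 * b * u**2   # exact derivative of the internal force
initial    = lambda u, k, b: k                    # initial stiffness, never updated

u1, n1 = solve(30.0, consistent)
u2, n2 = solve(30.0, initial)
print(n1, n2)   # far fewer iterations with the consistent tangent
```

    Both variants reach the same solution; only the iteration count differs, which is exactly the effect reported for the continuum versus algorithmic tangent modulus above.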

  13. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    Science.gov (United States)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
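    The linearity test described above can be reproduced offline in a few lines (a sketch: first-order upwind stands in for a linear scheme and a minmod-limited upwind correction for a shape-preserving one; neither is the GEOS-5 PPM):

```python
import numpy as np

# For a linear scheme M, M(q + p) - M(q) equals M(p) - M(0) exactly;
# flux limiters break this, especially near shocks.

def upwind(q, c=0.5):                     # first-order upwind, periodic, CFL number c
    return q - c * (q - np.roll(q, 1))

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited(q, c=0.5):                    # upwind plus a minmod-limited correction
    dq = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
    flux = q + 0.5 * (1 - c) * dq         # reconstructed face value
    return q - c * (flux - np.roll(flux, 1))

x = np.linspace(0, 1, 64, endpoint=False)
q = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # square wave: two shocks
p = 1e-3 * np.sin(2 * np.pi * x)                # small perturbation

for M in (upwind, limited):
    err = np.max(np.abs(M(q + p) - M(q) - (M(p) - M(0 * p))))
    print(M.__name__, err)   # ~machine zero for upwind; O(p) near shocks for limited
```

    The `M(0 * p)` term makes the test valid for affine schemes as well; the nonzero defect of the limited scheme is the nonlinear behaviour that disqualifies it from use inside a tangent linear or adjoint model.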

  14. Incremental Support Vector Machine Framework for Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuichi Motai

    2007-01-01

    Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multiple-video streams collected by homogeneous sites. The technique is based on an adaptation of the least squares SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by visual behavior data acquisition and an online learning phase, during which the cluster head performs an ensemble of model aggregations based on the sensor node inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single-camera sensing, especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system, which makes it even more attractive for distributed sensor network communication.
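    One building block of such incremental learning can be sketched as follows (an illustration, not the paper's full framework): the LS-SVM training system involves a kernel matrix plus a ridge term, and adding one sample can reuse the previous inverse via a Schur-complement border update instead of refactoring from scratch.

```python
import numpy as np

# Grow the inverse of the regularized kernel matrix K + I/gamma by one
# row/column when a new sample arrives, using a Schur-complement update.

def rbf(X, Z, width=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-width * d2)

def grow_inverse(Ainv, u, d):
    """Inverse of the bordered matrix [[A, u], [u^T, d]], given Ainv = A^-1."""
    Au = Ainv @ u
    s = d - u @ Au                            # scalar Schur complement
    n = Ainv.shape[0]
    new = np.empty((n + 1, n + 1))
    new[:n, :n] = Ainv + np.outer(Au, Au) / s
    new[:n, n] = new[n, :n] = -Au / s
    new[n, n] = 1.0 / s
    return new

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
reg = 1.0 / 10.0                              # I/gamma regularizer, gamma = 10

Ainv = np.linalg.inv(rbf(X[:5], X[:5]) + reg * np.eye(5))
u = rbf(X[:5], X[5:6])[:, 0]                  # kernel column of the new sample
Ainv6 = grow_inverse(Ainv, u, 1.0 + reg)      # rbf(x, x) = 1 on the diagonal

full = np.linalg.inv(rbf(X, X) + reg * np.eye(6))
print(np.allclose(Ainv6, full))               # identical to refactoring
```

    The update costs O(n^2) instead of the O(n^3) of a fresh factorization, which is the kind of saving that matters on power- and bandwidth-constrained sensor nodes.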

  15. Power calculation of linear and angular incremental encoders

    Science.gov (United States)

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

    Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method of power calculation for linear and angular incremental photoelectric encoders, used to find the optimum parameters for their components, such as light emitters, photodetectors, linear and angular scales, optical components, etc. It analyzes methods and devices that permit high resolutions in the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam, usually formed by a condenser lens, passes through the measuring unit, and its value changes depending on the movement of a scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photodetector block for processing in the electronics block. The starting point for the power calculation is therefore the required value of the optical signal at the input of the photodetector block, such that it can be reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.

  16. An electron tomography algorithm for reconstructing 3D morphology using surface tangents of projected scattering interfaces

    Science.gov (United States)

    Petersen, T. C.; Ringer, S. P.

    2010-03-01

    Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized Aluminium tip is also presented to demonstrate practical analysis for a real specimen.
    Program summary
    Program title: STOMO version 1.0
    Catalogue identifier: AEFS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2988
    No. of bytes in distributed program, including test data, etc.: 191 605
    Distribution format: tar.gz
    Programming language: C/C++
    Computer: PC
    Operating system: Windows XP
    RAM: Depends upon the size of experimental data as input, ranging from 200 Mb to 1.5 Gb
    Supplementary material: Sample output files, for the test run provided, are available.
    Classification: 7.4, 14
    External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html)
    Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular

  17. Efficient Incremental Checkpointing of Java Programs

    DEFF Research Database (Denmark)

    Lawall, Julia Laetitia; Muller, Gilles

    2000-01-01

    This paper investigates the optimization of language-level checkpointing of Java programs. First, we describe how to systematically associate incremental checkpoints with Java classes. While being safe, the genericness of this solution induces substantial execution overhead. Second, to solve...

  18. Implicit implementation and consistent tangent modulus of a viscoplastic model for polymers

    OpenAIRE

    ACHOUR-RENAULT, Nadia; CHATZIGEORGIOU, George; MERAGHNI, Fodil; CHEMISKY, Yves; FITOUSSI, Joseph

    2015-01-01

    In this work, the phenomenological viscoplastic DSGZ model [Duan, Y., Saigal, A., Greif, R., Zimmerman, M. A., 2001. A Uniform Phenomenological Constitutive Model for Glassy and Semicrystalline Polymers. Polymer Engineering and Science 41 (8), 1322-1328], developed for glassy or semi-crystalline polymers, is numerically implemented in a three-dimensional framework, following an implicit formulation. The computational methodology is based on the radial return mapping algorithm. This implicit fo...

  19. ATTENUATION OF DIFFRACTED MULTIPLES WITH AN APEX-SHIFTED TANGENT-SQUARED RADON TRANSFORM IN IMAGE SPACE

    Directory of Open Access Journals (Sweden)

    Alvarez Gabriel

    2006-12-01

    In this paper, we propose a method to attenuate diffracted multiples with an apex-shifted tangent-squared Radon transform in angle-domain common image gathers (ADCIGs). Usually, where diffracted multiples are a problem, the wave field propagation is complex and the moveout of primaries and multiples in data space is irregular. The method handles the complexity of the wave field propagation by wave-equation migration, provided that the migration velocities are reasonably accurate. As a result, the moveout of the multiples is well behaved in the ADCIGs. For 2D data, the apex-shifted tangent-squared Radon transform maps the 2D image space into a 3D model cube whose dimensions are depth, curvature and apex-shift distance.
    Well-corrected primaries map to or near the zero curvature plane and specularly-reflected multiples map to or near the zero apex-shift plane. Diffracted multiples map elsewhere in the cube according to their curvature and apex-shift distance. Thus, specularly reflected as well as diffracted multiples can be attenuated simultaneously. This approach is illustrated with a segment of a 2D seismic line over a large salt body in the Gulf of Mexico. It is shown that ignoring the apex shift compromises the attenuation of the diffracted multiples, whereas the approach proposed attenuates both the specularly-reflected and the diffracted multiples without compromising the primaries.

  20. Variations in breast tangent radiotherapy: a survey of practice in New South Wales and the Australian Capital Territory

    International Nuclear Information System (INIS)

    Veness, M.J.; Delaney, G.; Berry, M.

    1999-01-01

    The breast is a complex anatomical structure where achieving a homogeneous dose distribution with radiation treatment is difficult. Despite obvious similarities in the approach to such treatment (using tangents), there is variation in the process of simulation, planning and treatment between radiation oncologists. Previous Australasian studies in the treatment of lung cancer, prostate cancer and Hodgkin's disease highlighted considerable variation in many areas of treatment. As part of a multicentre breast phantom study involving 10 radiation oncology departments throughout New South Wales (NSW) and the Australian Capital Territory (ACT), a 22-question survey was distributed. The aim of the survey was to assess the extent of variation in the approach to the simulation, planning and treatment of early breast cancer using tangents. Responses from 10 different radiation oncology departments revealed variation in most areas of the survey. There is no reason to assume similar variations do not occur Australasia-wide. Studies involving overseas radiation oncologists also reveal a wide variation in treating early breast cancer. The consequences of such variations remain unclear. Copyright (1999) Blackwell Science Pty Ltd

  1. Growth increments in teeth of Diictodon (Therapsida

    Directory of Open Access Journals (Sweden)

    J. Francis Thackeray

    1991-09-01

    Growth increments circa 0.02 mm in width have been observed in sectioned tusks of Diictodon from the Late Permian lower Beaufort succession of the South African Karoo, dated between about 260 and 245 million years ago. Mean growth increments show a decline from relatively high values in the Tropidostoma/Endothiodon Assemblage Zone, to lower values in the Aulacephalodon/Cistecephalus zone, declining still further in the Dicynodon lacerticeps/Whaitsia zone at the end of the Permian. These changes coincide with gradual changes in carbon isotope ratios measured from Diictodon tooth apatite. It is suggested that the decline in growth increments is related to environmental changes associated with a decline in primary production, which contributed to the decline in abundance and ultimate extinction of Diictodon.

  2. Theory of Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Martins, P.A.F.; Bay, Niels; Skjødt, Martin

    2008-01-01

    This paper presents a closed-form theoretical analysis modelling the fundamentals of single point incremental forming and explaining the experimental and numerical results available in the literature for the past couple of years. The model is based on membrane analysis with bi-directional in-plane contact friction and is focused on the extreme modes of deformation that are likely to be found in single point incremental forming processes. The overall investigation is supported by experimental work performed by the authors and data retrieved from the literature.

  3. On the Magnetic Squashing Factor and the Lie Transport of Tangents

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Roger B.; Pontin, David I.; Hornig, Gunnar [University of Dundee Nethergate, Dundee (United Kingdom)

    2017-10-20

    The squashing factor (or squashing degree) of a vector field is a quantitative measure of the deformation of the field line mapping between two surfaces. In the context of solar magnetic fields, it is often used to identify gradients in the mapping of elementary magnetic flux tubes between various flux domains. Regions where these gradients in the mapping are large are referred to as quasi-separatrix layers (QSLs), and are a continuous extension of separators and separatrix surfaces. These QSLs are observed to be potential sites for the formation of strong electric currents, and are therefore important for the study of magnetic reconnection in three dimensions. Since the squashing factor, Q , is defined in terms of the Jacobian of the field line mapping, it is most often calculated by first determining the mapping between two surfaces (or some approximation of it) and then numerically differentiating. Tassev and Savcheva have introduced an alternative method, in which they parameterize the change in separation between adjacent field lines, and then integrate along individual field lines to get an estimate of the Jacobian without the need to numerically differentiate the mapping itself. But while their method offers certain computational advantages, it is formulated on a perturbative description of the field line trajectory, and the accuracy of this method is not entirely clear. Here we show, through an alternative derivation, that this integral formulation is, in principle, exact. We then demonstrate the result in the case of a linear, 3D magnetic null, which allows for an exact analytical description and direct comparison to numerical estimates.
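    With D the Jacobian of the field line mapping between the two surfaces, the squashing factor takes the standard form (as commonly written in the QSL literature, not quoted from this paper):

```latex
D =
\begin{pmatrix}
\partial X/\partial x & \partial X/\partial y \\
\partial Y/\partial x & \partial Y/\partial y
\end{pmatrix}
=
\begin{pmatrix} a & b \\ c & d \end{pmatrix},
\qquad
Q \;=\; \frac{a^{2} + b^{2} + c^{2} + d^{2}}{\left|\, a d - b c \,\right|}
```

    The integral formulation discussed above evolves the entries of D (equivalently, tangent vectors Lie-transported along the field line), so that Q is accumulated without numerically differentiating the mapping itself.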

  4. Crystallization Formulation Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Crystallization Formulation Lab fills a critical need in the process development and optimization of current and new explosives and energetic formulations. The...

  5. On Newton-Raphson formulation and algorithm for displacement based structural dynamics problem with quadratic damping nonlinearity

    Directory of Open Access Journals (Sweden)

    Koh Kim Jie

    2017-01-01

    Quadratic damping nonlinearity is challenging for displacement-based structural dynamics problems, as the problem is nonlinear in the time derivative of the primitive variable. For such nonlinearity, the formulation of the tangent stiffness matrix is not lucid in the literature. Consequently, ambiguity related to the kinematics update arises when implementing the time integration-iterative algorithm. In the present work, an Euler-Bernoulli beam vibration problem with quadratic damping nonlinearity is addressed, since the main source of quadratic damping nonlinearity is drag force estimation, which is generally valid only for slender structures. Employing the Newton-Raphson formulation, the tangent stiffness components associated with the quadratic damping nonlinearity require velocity input for their evaluation. For this reason, two mathematically equivalent algorithm structures with different kinematics arrangements are tested. Both algorithm structures result in the same accuracy and convergence characteristics of the solution.
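    A single-degree-of-freedom sketch of the ingredients above (using backward Euler rather than the paper's beam discretization; parameters are invented): the quadratic damping force c*v*|v| contributes 2*c*|v| to the tangent via the chain rule, and the velocity needed for that tangent is recovered from the current displacement iterate.

```python
# Single-DOF m*a + c*v*|v| + k*x = 0, backward-Euler in time, with a
# Newton-Raphson loop per step using the consistent tangent in displacement.

m, c, k, dt = 1.0, 0.4, 25.0, 0.01

def newton_step(x_n, v_n, tol=1e-12, max_iter=30):
    x = x_n                                         # initial guess
    for _ in range(max_iter):
        v = (x - x_n) / dt                          # velocity from displacement iterate
        a = (v - v_n) / dt
        R = m * a + c * v * abs(v) + k * x          # dynamic residual
        if abs(R) < tol:
            break
        # consistent tangent: d(c*v*|v|)/dv = 2*c*|v|, with dv/dx = 1/dt
        K = m / dt**2 + 2.0 * c * abs(v) / dt + k
        x -= R / K
    return x, (x - x_n) / dt

x, v = 1.0, 0.0                                     # released from rest at x = 1
for _ in range(int(2.0 / dt)):                      # integrate for 2 seconds
    x, v = newton_step(x, v)
print(abs(x) < 1.0)                                 # damping shrinks the amplitude
```

    Evaluating the tangent at the current velocity iterate, rather than the converged velocity, is exactly the kinematics-arrangement question the abstract refers to; both arrangements converge to the same solution.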

  6. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically

  7. Incremental Integrity Checking: Limitations and Possibilities

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2005-01-01

    Integrity checking is an essential means for the preservation of the intended semantics of a deductive database. Incrementality is the only feasible approach to checking and can be obtained with respect to given update patterns by exploiting query optimization techniques. By reducing the problem to query containment, we show that no procedure exists that always returns the best incremental test (aka simplification of integrity constraints), according to any reasonable criterion measuring the checking effort. In spite of this theoretical limitation, we develop an effective procedure...

  8. History Matters: Incremental Ontology Reasoning Using Modules

    Science.gov (United States)

    Cuenca Grau, Bernardo; Halaschek-Wiener, Christian; Kazakov, Yevgeny

    The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning—that is, reasoning that reuses information obtained from previous versions of an ontology—based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.

  9. Five-axis Control Processing Using NC Machine Tools : A Tool Posture Decision Using the Tangent Slope at a Cut Point on a Work

    OpenAIRE

    小島, 龍広; 西田, 知照; 扇谷, 保彦

    2003-01-01

    This report deals with the way to decide tool posture and the way to analytically calculate tool path for work shapes requiring 5-axis control machining. In the tool path calculation, basic equations are derived using the principle that the tangent slope at a cut point on a work and the one at a cutting point on a tool edge are identical. A tool posture decision procedure using the tangent slope at each cut point on a work is proposed for any shape of tool edge. The validity of the way t...

  10. The balanced scorecard: an incremental approach model to health care management.

    Science.gov (United States)

    Pineno, Charles J

    2002-01-01

    The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decision makers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.

  11. Design of methodology for incremental compiler construction

    Directory of Open Access Journals (Sweden)

    Pavel Haluza

    2011-01-01

    The paper deals with possibilities of incremental compiler construction. It presents compiler construction possibilities both for languages with a fixed set of lexical units and for languages with a variable set of lexical units. The methodology design for incremental compiler construction is based on the known algorithms for standard compiler construction and is derived for both groups of languages. The group of languages with a fixed set of lexical units comprises languages in which each lexical unit has a constant meaning, e.g., common programming languages. For this group of languages, the paper addresses the problem of incremental semantic analysis, which is based on incremental parsing. In the group of languages with a variable set of lexical units (e.g., the professional typographic system TeX), it is possible to change the meaning of any character in the input file arbitrarily at any time during processing. The change takes effect immediately, and its validity may be limited in scope or extend to the end of the input. For this group of languages, the paper addresses the case in which macros temporarily change the category of arbitrary characters.
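    The TeX-like situation can be made concrete with a toy lexer (illustrative only; the directive syntax \<char><category>; is invented): the category table is mutable state, so a directive in the middle of the input changes how every later character is tokenized.

```python
# Toy lexer with a variable set of lexical units (TeX-like "catcodes").
# A \<char><category>; directive reassigns a character's category mid-stream.

DEFAULT_CATCODES = {"\\": "escape", "%": "comment"}

def lex(text):
    catcode = dict(DEFAULT_CATCODES)          # per-run copy; directives mutate it
    tokens, i = [], 0
    while i < len(text):
        ch = text[i]
        cat = catcode.get(ch, "letter")
        if cat == "escape":                   # \<char><category>; directive
            end = text.index(";", i)
            catcode[text[i + 1]] = text[i + 2:end]
            i = end + 1
        elif cat == "comment":                # skip to end of line, emit nothing
            while i < len(text) and text[i] != "\n":
                i += 1
        else:
            tokens.append(ch)
            i += 1
    return tokens

# '%' starts as a comment character, then is demoted to an ordinary letter:
print("".join(lex("a%hidden\n\\%letter;b%visible")))
```

    An incremental compiler for such a language must track where each category change occurred, since re-lexing any later portion of the input depends on that state.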

  12. Existing School Buildings: Incremental Seismic Retrofit Opportunities.

    Science.gov (United States)

    Federal Emergency Management Agency, Washington, DC.

    The intent of this document is to provide technical guidance to school district facility managers for linking specific incremental seismic retrofit opportunities to specific maintenance and capital improvement projects. The linkages are based on logical affinities, such as technical fit, location of the work within the building, cost saving…

  14. The Cognitive Underpinnings of Incremental Rehearsal

    Science.gov (United States)

    Varma, Sashank; Schleisman, Katrina B.

    2014-01-01

    Incremental rehearsal (IR) is a flashcard technique that has been developed and evaluated by school psychologists. We discuss potential learning and memory effects from cognitive psychology that may explain the observed superiority of IR over other flashcard techniques. First, we propose that IR is a form of "spaced practice" that…

  15. Statistics of wind direction and its increments

    International Nuclear Information System (INIS)

    Doorn, Eric van; Dhruva, Brindesh; Sreenivasan, Katepalli R.; Cassella, Victor

    2000-01-01

    We study some elementary statistics of wind direction fluctuations in the atmosphere for a wide range of time scales (10^-4 s to 1 h), and in both vertical and horizontal planes. In the plane parallel to the ground surface, the direction time series consists of two parts: a constant drift due to large weather systems moving with the mean wind speed, and fluctuations about this drift. The statistics of the direction fluctuations show a rough similarity to Brownian motion but depend, in detail, on the wind speed. This dependence manifests itself quite clearly in the statistics of wind-direction increments over various intervals of time. These increments are intermittent during periods of low wind speeds but Gaussian-like during periods of high wind speeds. (c) 2000 American Institute of Physics

  16. Evolving effective incremental SAT solvers with GP

    OpenAIRE

    Bader, Mohamed; Poli, R.

    2008-01-01

    Hyper-heuristics can be defined simply as heuristics for choosing other heuristics: a way of combining existing heuristics to generate new ones. Here, a hyper-heuristic framework is used for evolving effective incremental (Inc*) solvers for SAT. We test the evolved heuristics (IncHH) against other known local search heuristics on a variety of benchmark SAT problems.

  17. Teraflop-scale Incremental Machine Learning

    OpenAIRE

    Özkural, Eray

    2011-01-01

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-u...

  18. Shakedown analysis by finite element incremental procedures

    International Nuclear Information System (INIS)

    Borkowski, A.; Kleiber, M.

    1979-01-01

It is a common occurrence in many practical problems that external loads are variable and the exact time-dependent history of loading is unknown. Instead, the load is characterized by a given loading domain: a convex polyhedron in the n-dimensional space of load parameters. The problem is then to check whether a structure shakes down to a variable loading as defined above, i.e. responds elastically after a few elasto-plastic cycles, or not. Such a check can be performed by an incremental procedure. One should reproduce incrementally a simple cyclic process which consists of proportional load paths that connect the origin of the load space with the corners of the loading domain. It was proved that if a structure shakes down to such a loading history, then it is able to adapt itself to an arbitrary load path contained in the loading domain. The main advantage of such an approach is the possibility of using existing incremental finite-element computer codes. (orig.)

  19. Laboratory Studies of Temperature and Relative Humidity Dependence of Aerosol Nucleation during the TANGENT 2017 IOP Study

    Science.gov (United States)

    Ouyang, Q.; Tiszenkel, L.; Stangl, C. M.; Krasnomowitz, J.; Johnston, M. V.; Lee, S.

    2017-12-01

In this poster, we will present recent measurements of the temperature and relative humidity dependence of aerosol nucleation of sulfuric acid under conditions representative of the ground level to the free troposphere. Aerosol nucleation is critically dependent on temperature, but current global aerosol models use nucleation algorithms that are independent of temperature and relative humidity due to the lack of experimental data. Thus, these models fail to simulate nucleation in a wide range of altitude and latitude conditions. We are currently conducting the Tandem Aerosol Nucleation and Growth Environment Tube (TANGENT) intense observation period (IOP) experiments to investigate aerosol nucleation and growth properties independently, during nucleation and growth. Nucleation takes place from sulfuric acid, water and some base compounds in a fast flow nucleation tube (FT-1). Nucleation precursors are detected with two chemical ionization mass spectrometers (CIMS) and newly nucleated particles are measured with a particle size magnifier (PSM) and a scanning mobility particle sizer (SMPS). These particles then grow further in the second flow tube (FT-2) in the presence of oxidants of biogenic organic compounds. Chemical compositions of the grown particles are further analyzed with a nano-aerosol mass spectrometer (NAMS). Our experimental results will provide a robust algorithm for aerosol nucleation and growth rates as a function of temperature and relative humidity.

  20. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    International Nuclear Information System (INIS)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-01-01

Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension-reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve the running state identification. The effectiveness of the proposed method is verified by a running state identification case in a gearbox, and the results confirm the improved accuracy of the running state identification. (paper)
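The statistical part of such a mixed-domain feature set can be sketched as follows. This is a generic condition-monitoring feature vector (mean, RMS, standard deviation, kurtosis, crest factor), not necessarily the exact set used in the paper, and the WPD, AR and entropy features are omitted:

```python
import math

def statistical_features(signal):
    """Common time-domain condition-monitoring statistics of a vibration
    signal: mean, RMS, standard deviation, kurtosis, crest factor."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    var = sum((x - mean) ** 2 for x in signal) / n
    kurt = sum((x - mean) ** 4 for x in signal) / (n * var ** 2)
    crest = max(abs(x) for x in signal) / rms
    return {"mean": mean, "rms": rms, "std": math.sqrt(var),
            "kurtosis": kurt, "crest_factor": crest}

# A unit-amplitude square-wave-like signal: RMS = 1 and crest factor = 1.
feats = statistical_features([1.0, -1.0] * 50)
```

In a full pipeline these statistics would be concatenated with the AR coefficients and WPD energies before being passed to DSS-LTSA for fusion.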

  1. Simulation and comparison of perturb and observe and incremental ...

    Indian Academy of Sciences (India)

Perturb and Observe (P & O) algorithm and incremental conductance algorithm. Keywords: solar array; insolation; MPPT; modelling; P & O; incremental conductance.
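The incremental conductance rule referenced in the title follows from dP/dV = I + V dI/dV: at the maximum power point dI/dV = -I/V, and the sign of the comparison tells the tracker which way to move the operating voltage. A sketch under stated assumptions; the P-V curve below is a hypothetical stand-in for a real array model:

```python
def inc_cond_step(v, i, v_prev, i_prev, step=0.2):
    """One incremental-conductance update: climb toward dP/dV = 0 by
    comparing the incremental conductance dI/dV against -I/V."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return v                  # at the MPP: hold
        return v + step if di > 0 else v - step
    if di / dv == -i / v:
        return v                      # dP/dV = 0: at the MPP
    if di / dv > -i / v:              # dP/dV > 0: MPP at higher voltage
        return v + step
    return v - step                   # dP/dV < 0: MPP at lower voltage

# Hypothetical PV array: concave P-V curve with its maximum at 18 V.
def current(v):
    power = 120.0 - (v - 18.0) ** 2
    return power / v

v_prev, v = 10.0, 10.2
for _ in range(100):
    v_prev, v = v, inc_cond_step(v, current(v), v_prev, current(v_prev))
# v now oscillates within one step of the 18 V maximum power point.
```

Unlike P & O, this rule can in principle detect the exact MPP (when the two conductances match) rather than perpetually perturbing around it.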

  2. 48 CFR 3432.771 - Provision for incremental funding.

    Science.gov (United States)

    2010-10-01

48 Federal Acquisition Regulations System: Provision for incremental funding. Section 3432.771, DEPARTMENT OF EDUCATION ..., Incremental Funding, in a solicitation if a cost-reimbursement contract using incremental funding is ...

  3. Incremental learning for automated knowledge capture

    Energy Technology Data Exchange (ETDEWEB)

    Benz, Zachary O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Basilico, Justin Derrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dixon, Kevin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wendt, Jeremy Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-12-01

People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, requires models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effects. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and are only secondarily, if at all, concerned with issues such as speed, memory use, or the ability to be incrementally updated. Thus, when new data arrive, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill-suited for use in dynamic, time-critical, high-consequence decision-making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.

  4. Two models of minimalist, incremental syntactic analysis.

    Science.gov (United States)

    Stabler, Edward P

    2013-07-01

    Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.

  5. Incremental Nonnegative Matrix Factorization for Face Recognition

    Directory of Open Access Journals (Sweden)

    Wen-Sheng Chen

    2008-01-01

Full Text Available Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost of large matrix decomposition is expensive. The other is that repetitive learning must be conducted whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
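For context, batch NMF iterates simple multiplicative updates, and the repetitive-learning cost that incremental variants target comes from rerunning these from scratch whenever samples are added. A pure-Python sketch of the standard Lee-Seung updates, given here as a baseline rather than the proposed INMF:

```python
def matmul(A, B):
    """Plain matrix product of two lists-of-lists."""
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def frob_err(V, W, H):
    """Squared Frobenius reconstruction error ||V - WH||^2."""
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

def nmf(V, W, H, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~= WH (entries stay >= 0)."""
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(len(H[0]))] for i in range(len(H))]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(len(W[0]))] for i in range(len(W))]
    return W, H

# Small rank-2 nonnegative data matrix and fixed positive initial factors.
V = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [1.0, 1.0, 1.0]]
W0 = [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]
H0 = [[0.5, 0.5, 0.5], [0.4, 0.6, 0.5]]
err_before = frob_err(V, W0, H0)
W, H = nmf(V, W0, H0)
err_after = frob_err(V, W, H)   # monotonically non-increasing under the updates
```

An incremental scheme such as INMF avoids repeating this whole loop when new face samples arrive, updating the factors block by block instead.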

  6. [Incremental cost effectiveness of multifocal cataract surgery].

    Science.gov (United States)

    Pagel, N; Dick, H B; Krummenauer, F

    2007-02-01

Supplementation of cataract patients with multifocal intraocular lenses involves an additional financial investment when compared to the corresponding monofocal supplementation, which is usually not funded by German health care insurers. In the context of recent resource allocation discussions, however, the cost effectiveness of multifocal cataract surgery could become an important rationale. Therefore an evidence-based estimation of its cost effectiveness was carried out. Three independent meta-analyses were implemented to estimate the gain in uncorrected near visual acuity and best corrected visual acuity (vision lines) as well as the predictability (fraction of patients without need for reading aids) of multifocal supplementation. Study reports published between 1995 and 2004 (English or German language) were screened for appropriate key words. Meta effects in visual gain and predictability were estimated by means and standard deviations of the reported effect measures. Cost data were estimated by German DRG rates and individual lens costs; the cost effectiveness of multifocal cataract surgery was then computed in terms of its marginal cost effectiveness ratio (MCER) for each clinical benefit endpoint; the incremental costs of multifocal versus monofocal cataract surgery were further estimated by means of their respective incremental cost effectiveness ratio (ICER). An independent meta-analysis estimated the complication profiles to be expected after monofocal and multifocal cataract surgery in order to evaluate expectable complication-associated additional costs of both procedures; the marginal and incremental cost effectiveness estimates were adjusted accordingly. A sensitivity analysis comprised cost variations of +/- 10 % and utility variations alongside the meta effect estimate's 95 % confidence intervals. Total direct costs from the health care insurer's perspective were estimated at 3363 euro, associated with a visual meta benefit in best corrected visual
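The ICER used here has a one-line definition: incremental cost divided by incremental effect. A sketch with illustrative numbers; only the 3363 euro figure comes from the abstract, while the comparator cost and the effect values are invented for the example:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of clinical benefit (e.g. euro per vision line gained)."""
    d_effect = effect_new - effect_old
    if d_effect == 0:
        raise ValueError("no incremental benefit: ICER undefined")
    return (cost_new - cost_old) / d_effect

# Illustrative only: multifocal surgery at 3363 euro vs a hypothetical
# monofocal comparator at 3000 euro, with a hypothetical gain of
# 1.5 vision lines in uncorrected near visual acuity.
ratio = icer(3363.0, 1.5, 3000.0, 0.0)   # euro per vision line gained
```

A sensitivity analysis like the one described would recompute this ratio over cost variations of +/- 10 % and over the confidence interval of the effect estimate.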

  7. The Dynamic Character of the Flow Over a 3.5 Caliber Tangent-Ogive Cylinder in Steady and Maneuvering States at High Incidence

    OpenAIRE

    Zeiger, Matthew D.

    2014-01-01

    Although complex, inconsistent and fickle, the time-averaged flow over a stationary slender forebody is generally well-understood. However, the nature of unsteady, time-varying flows over slender forebodies - whether due to the natural unsteadiness or forced maneuvering - is not well-understood. This body of work documents three experimental investigations into the unsteadiness of the flow over a 3.5 caliber tangent-ogive cylinder at high angles of incidence. The goal of the investigations...

  8. Comparison of measured and calculated contralateral breast doses in whole breast radiotherapy for VMAT and standard tangent techniques

    International Nuclear Information System (INIS)

    Tse, T.L.J; Bromley, R.; Booth, J.; Gray, A.

    2011-01-01

Full text: Objective This study aims to evaluate the accuracy of calculated dose with the Eclipse analytical anisotropic algorithm (AAA) for the contralateral breast (CB) in left-sided breast radiotherapy for dual-arc VMAT and standard wedged tangent (SWT) techniques. Methods and materials Internal and surface CB doses were measured with EBT2 film in an anthropomorphic phantom mounted with C-cup and D-cup breasts. The measured point dose was approximated by averaging doses over the 4 x 4 mm2 central region of each 2 x 2 cm2 piece of film. The dose in the target region of the breast was also measured. The measured results were compared to AAA calculations with calculation grids of 1, 2.5 and 5 mm. Results In SWT plans, the average ratios of calculation to measurement for internal doses were 0.63 ± 0.081 and 0.51 ± 0.28 in the medial and lateral aspects, respectively. Corresponding ratios for surface doses were 0.88 ± 0.22 and 0.38 ± 0.38. In VMAT plans, however, the calculation accuracies showed little dependence on the measurement locations; the ratios were 0.78 ± 0.11 and 0.81 ± 0.085 for internal and surface doses. In general, finer calculation resolutions did not necessarily improve the estimates of internal doses. For surface doses, using the smaller 1 mm grid size could improve the calculation accuracies on the medial but not the lateral aspects of the CB. Conclusion In all plans, AAA had a tendency to underestimate both internal and surface CB doses. Overall, it produces more accurate results in VMAT than SWT plans.

  9. A simple extension of contraction theory to study incremental stability properties

    DEFF Research Database (Denmark)

    Jouffroy, Jerome

Contraction theory is a recent tool enabling the study of the stability of nonlinear system trajectories with respect to one another, and therefore belongs to the class of incremental stability methods. In this paper, we extend the original definition of contraction theory to incorporate in an explicit manner the control input of the considered system. Such an extension, called universal contraction, is quite analogous in spirit to the well-known Input-to-State Stability (ISS). It serves as a simple formulation of incremental ISS, external stability, and detectability in a differential setting. The hierarchical combination result of contraction theory is restated in this framework, and a differential small-gain theorem is derived from results already available in Lyapunov theory.

  10. Incremental fold tests of remagnetized carbonate rocks

    Science.gov (United States)

    Van Der Voo, R.; van der Pluijm, B.

    2017-12-01

    Many unmetamorphosed carbonates all over the world are demonstrably remagnetized, with the age of the secondary magnetizations typically close to that of the nearest orogeny in space and time. This observation did not become compelling until the mid-1980's, when the incremental fold test revealed the Appalachian carbonates to carry a syn-deformational remanence of likely Permian age (Scotese et al., 1982, Phys. Earth Planet. Int., v. 30, p. 385-395; Cederquist et al., 2006, Tectonophysics v. 422, p. 41-54). Since that time scores of Appalachian and Rocky Mountain carbonate rocks have added results to the growing database of paleopoles representing remagnetizations. Late Paleozoic remagnetizations form a cloud of results surrounding the reference poles of the Laurentian APWP. Remagnetizations in other locales and with inferred ages coeval with regional orogenies (e.g., Taconic, Sevier/Laramide, Variscan, Indosinian) are also ubiquitous. To be able to transform this cornucopia into valuable anchor-points on the APWP would be highly desirable. This may indeed become feasible, as will be explained next. Recent studies of faulted and folded carbonate-shale sequences have shown that this deformation enhances the illitization of smectite (Haines & van der Pluijm, 2008, Jour. Struct. Geol., v. 30, p. 525-538; Fitz-Diaz et al., 2014, International Geol. Review, v. 56, p. 734-755). 39Ar-40Ar dating of the authigenic illite (neutralizing any detrital illite contribution by taking the intercept of a mixing line) yields, therefore, the age of the deformation. We know that this date is also the age of the syndeformational remanence; thus we have the age of the corresponding paleopole. Results so far are obtained for the Canadian and U.S. Rocky Mountains and for the Spanish Cantabrian carbonates (Tohver et al., 2008, Earth Planet. Sci. Lett., v. 274, p. 524-530) and make good sense in accord with geological knowledge. Incremental fold tests are the tools used for this

  11. Incremental passivity and output regulation for switched nonlinear systems

    Science.gov (United States)

    Pang, Hongbo; Zhao, Jun

    2017-10-01

    This paper studies incremental passivity and global output regulation for switched nonlinear systems, whose subsystems are not required to be incrementally passive. A concept of incremental passivity for switched systems is put forward. First, a switched system is rendered incrementally passive by the design of a state-dependent switching law. Second, the feedback incremental passification is achieved by the design of a state-dependent switching law and a set of state feedback controllers. Finally, we show that once the incremental passivity for switched nonlinear systems is assured, the output regulation problem is solved by the design of global nonlinear regulator controllers comprising two components: the steady-state control and the linear output feedback stabilising controllers, even though the problem for none of subsystems is solvable. Two examples are presented to illustrate the effectiveness of the proposed approach.

  12. Audits of radiopharmaceutical formulations

    International Nuclear Information System (INIS)

    Castronovo, F.P. Jr.

    1992-01-01

    A procedure for auditing radiopharmaceutical formulations is described. To meet FDA guidelines regarding the quality of radiopharmaceuticals, institutional radioactive drug research committees perform audits when such drugs are formulated away from an institutional pharmacy. All principal investigators who formulate drugs outside institutional pharmacies must pass these audits before they can obtain a radiopharmaceutical investigation permit. The audit team meets with the individual who performs the formulation at the site of drug preparation to verify that drug formulations meet identity, strength, quality, and purity standards; are uniform and reproducible; and are sterile and pyrogen free. This team must contain an expert knowledgeable in the preparation of radioactive drugs; a radiopharmacist is the most qualified person for this role. Problems that have been identified by audits include lack of sterility and apyrogenicity testing, formulations that are open to the laboratory environment, failure to use pharmaceutical-grade chemicals, inadequate quality control methods or records, inadequate training of the person preparing the drug, and improper unit dose preparation. Investigational radiopharmaceutical formulations, including nonradiolabeled drugs, must be audited before they are administered to humans. A properly trained pharmacist should be a member of the audit team

  13. Performance Evaluation of Incremental K-means Clustering Algorithm

    OpenAIRE

    Chakraborty, Sanjay; Nagwani, N. K.

    2014-01-01

The incremental K-means clustering algorithm has already been proposed and analysed in [Chakraborty and Nagwani, 2011]. It is a very innovative approach which is applicable in a periodically incremental environment and deals with a bulk of updates. In this paper the performance evaluation is done for this incremental K-means clustering algorithm using an air pollution database. This paper also describes the comparison on the performance evaluations between existing K-means clustering and i...
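A common way to make k-means incremental is MacQueen's sequential variant, in which each arriving point nudges its nearest centroid by a running-mean step instead of triggering a full re-clustering of the database. This sketch illustrates that idea on 1-D data; it is not necessarily the exact algorithm of the cited paper:

```python
def incremental_kmeans(stream, centroids):
    """MacQueen-style sequential k-means: each new point updates the
    nearest centroid with a running-mean step, so the whole dataset
    never needs to be re-clustered in bulk."""
    counts = [0] * len(centroids)
    for x in stream:
        j = min(range(len(centroids)), key=lambda k: abs(x - centroids[k]))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]   # running mean
    return centroids, counts

# Two well-separated 1-D clusters of "pollution readings" near 1.0 and 10.0.
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0, 0.95, 10.05]
centroids, counts = incremental_kmeans(data, [0.0, 5.0])
```

Periodic batches of updates can simply be fed into the same loop, which is the scenario the paper's evaluation targets.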

  14. Towards a multiconfigurational method of increments

    Science.gov (United States)

    Fertitta, E.; Koch, D.; Paulus, B.; Barcza, G.; Legeza, Ö.

    2018-06-01

    The method of increments (MoI) allows one to successfully calculate cohesive energies of bulk materials with high accuracy, but it encounters difficulties when calculating dissociation curves. The reason is that its standard formalism is based on a single Hartree-Fock (HF) configuration whose orbitals are localised and used for the many-body expansion. In situations where HF does not allow a size-consistent description of the dissociation, the MoI cannot be guaranteed to yield proper results either. Herein, we address the problem by employing a size-consistent multiconfigurational reference for the MoI formalism. This leads to a matrix equation where a coupling derived by the reference itself is employed. In principle, such an approach allows one to evaluate approximate values for the ground as well as excited states energies. While the latter are accurate close to the avoided crossing only, the ground state results are very promising for the whole dissociation curve, as shown by the comparison with density matrix renormalisation group benchmarks. We tested this two-state constant-coupling MoI on beryllium rings of different sizes and studied the error introduced by the constant coupling.
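The bookkeeping behind the many-body expansion of the MoI can be made concrete: one-body increments are the subsystem energies, and each higher-order increment subtracts all lower-order increments of its subsets, so the full expansion telescopes back to the total energy. A sketch with a hypothetical additive-plus-pairwise "correlation energy" function (the group labels and energy values are invented for illustration):

```python
from itertools import combinations

def method_of_increments(groups, energy):
    """Many-body increments: eps(i) = E(i); each higher-order increment
    subtracts all lower-order increments of its proper subsets."""
    eps = {}
    for order in range(1, len(groups) + 1):
        for subset in combinations(groups, order):
            lower = sum(eps[s] for o in range(1, order)
                        for s in combinations(subset, o))
            eps[subset] = energy(subset) - lower
    return eps

# Hypothetical energy: one-body terms plus pair couplings (no 3-body term).
def energy(subset):
    one_body = {"A": -1.0, "B": -2.0, "C": -1.5}
    pair = {("A", "B"): -0.3, ("A", "C"): -0.1, ("B", "C"): -0.2}
    e = sum(one_body[g] for g in subset)
    e += sum(v for k, v in pair.items() if set(k) <= set(subset))
    return e

eps = method_of_increments(("A", "B", "C"), energy)
total = sum(eps.values())   # telescopes back to energy(("A", "B", "C"))
```

Because the model energy here is exactly pairwise, the three-body increment vanishes; in a real correlated calculation the higher-order increments are small but nonzero, which is what makes truncating the expansion useful.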

  15. Natural Gas pipelines: economics of incremental capacity

    International Nuclear Information System (INIS)

    Kimber, M.

    2000-01-01

A number of gas transmission pipeline systems in Australia exhibit capacity constraints, and yet there is little evidence of creative or innovative processes from either the service providers or the regulators which might provide a market-based response to these constraints. There is no provision in the Code in its current form to allow it to accommodate these processes. This aspect is one of many that require review to make the Code work. It is unlikely that the current members of the National Gas Pipeline Advisory Committee (NGPAC) or its advisers have sufficient understanding of the analysis of risk and the consequential commercial drivers to implement the necessary changes. As a result, the Code will increasingly lose touch with the commercial realities of the energy market and will continue to inhibit investment in new and expanded infrastructure where market risk is present. The recent report prepared for the Business Council of Australia indicates a need to revitalise the energy reform process. It is important for the Australian energy industry to provide leadership and advice to governments to continue the process of reform, and, in particular, to amend the Code to make it more relevant. These amendments must include a mechanism by which price signals can be generated to provide timely and effective information for existing service providers or new entrants to install incremental pipeline capacity.

  16. Reactive decontamination formulation

    Science.gov (United States)

    Giletto, Anthony [College Station, TX; White, William [College Station, TX; Cisar, Alan J [Cypress, TX; Hitchens, G Duncan [Bryan, TX; Fyffe, James [Bryan, TX

    2003-05-27

    The present invention provides a universal decontamination formulation and method for detoxifying chemical warfare agents (CWA's) and biological warfare agents (BWA's) without producing any toxic by-products, as well as, decontaminating surfaces that have come into contact with these agents. The formulation includes a sorbent material or gel, a peroxide source, a peroxide activator, and a compound containing a mixture of KHSO.sub.5, KHSO.sub.4 and K.sub.2 SO.sub.4. The formulation is self-decontaminating and once dried can easily be wiped from the surface being decontaminated. A method for decontaminating a surface exposed to chemical or biological agents is also disclosed.

  17. Power variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, J.M.; Podolskij, Mark

    2009-01-01

We develop the asymptotic theory for the realised power variation of the processes X=•G, where G is a Gaussian process with stationary increments. More specifically, under some mild assumptions on the variance function of the increments of G and certain regularity conditions on the path of the process ... a chaos representation.
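The object studied here, the realised p-th power variation of a sampled path, is simply the sum of absolute increments raised to the p-th power. A minimal sketch on a deterministic toy path (for p = 2 this is the realised quadratic variation):

```python
def power_variation(path, p):
    """Realised p-th power variation: sum of |increment|^p along a
    discretely sampled path."""
    return sum(abs(b - a) ** p for a, b in zip(path, path[1:]))

path = [0.0, 1.0, 3.0, 2.0]
qv = power_variation(path, 2)    # quadratic variation: 1 + 4 + 1
pv1 = power_variation(path, 1)   # total variation:     1 + 2 + 1
```

The asymptotic theory in the record concerns the limit of suitably normalised versions of this statistic as the sampling frequency grows.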

  18. Preparation of radiopharmaceutical formulations

    International Nuclear Information System (INIS)

    Simon, J.; Garlich, J.R.; Frank, R.K.; McMillan, K.

    1998-01-01

    Radiopharmaceutical formulations for complexes comprising at least one radionuclide complexed with a ligand, or its physiologically-acceptable salts thereof, especially 153 samarium-ethylenediaminetetramethylenephosphonic acid, which optionally contains a divalent metal ion, e.g. calcium, and is frozen, thawed, and then administered by injection. Alternatively, the radiopharmaceutical formulations must contain the divalent metal and are frozen only if the time before administration is sufficiently long to cause concern for radiolysis of the ligand. 2 figs., 9 tabs

  19. Tariff formulation and equalization

    International Nuclear Information System (INIS)

    Svartsund, Trond

    2003-01-01

    The primary goal of the transmission tariff is to provide for socioeconomic use of the transmission grid. The present tariff structure is basically right. The responsibility for the formulation of the tariff resides with the local grid owner. This must take place in agreement with the current regulations which are passed by the authorities. The formulation must be adaptable to the local requirements. EBL (Norwegian Electricity Industry Association) is content with the current regulations

  20. Tangent map intermittency as an approximate analysis of intermittency in a high dimensional fully stochastic dynamical system: The Tangled Nature model.

    Science.gov (United States)

    Diaz-Ruelas, Alvaro; Jeldtoft Jensen, Henrik; Piovani, Duccio; Robledo, Alberto

    2016-12-01

    It is well known that low-dimensional nonlinear deterministic maps close to a tangent bifurcation exhibit intermittency and this circumstance has been exploited, e.g., by Procaccia and Schuster [Phys. Rev. A 28, 1210 (1983)], to develop a general theory of 1/f spectra. This suggests it is interesting to study the extent to which the behavior of a high-dimensional stochastic system can be described by such tangent maps. The Tangled Nature (TaNa) Model of evolutionary ecology is an ideal candidate for such a study, a significant model as it is capable of reproducing a broad range of the phenomenology of macroevolution and ecosystems. The TaNa model exhibits strong intermittency reminiscent of punctuated equilibrium and, like the fossil record of mass extinction, the intermittency in the model is found to be non-stationary, a feature typical of many complex systems. We derive a mean-field version for the evolution of the likelihood function controlling the reproduction of species and find a local map close to tangency. This mean-field map, by our own local approximation, is able to describe qualitatively only one episode of the intermittent dynamics of the full TaNa model. To complement this result, we construct a complete nonlinear dynamical system model consisting of successive tangent bifurcations that generates time evolution patterns resembling those of the full TaNa model in macroscopic scales. The switch from one tangent bifurcation to the next in the sequences produced in this model is stochastic in nature, based on criteria obtained from the local mean-field approximation, and capable of imitating the changing set of types of species and total population in the TaNa model. The model combines full deterministic dynamics with instantaneous parameter random jumps at stochastically drawn times. 
In spite of the limitations of our approach, which entails a drastic collapse of degrees of freedom, the description of a high-dimensional model system in terms of a low
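The laminar episodes characteristic of type-I (tangent-bifurcation) intermittency can be reproduced with the canonical local map x -> x + u x^2, whose orbit crawls through the narrow channel near the tangency at x = 0 before escaping in a burst. A sketch with illustrative parameter values, not fitted to the TaNa model:

```python
def laminar_length(x0, u=1.0, threshold=0.1, cap=100000):
    """Iterate the local map x -> x + u*x^2 (tangent to the identity at
    x = 0) and count the steps the orbit spends below `threshold`,
    i.e. the length of one laminar phase."""
    x, steps = x0, 0
    while x < threshold and steps < cap:
        x = x + u * x * x
        steps += 1
    return steps

# Re-injection closer to the tangency point yields a longer laminar
# episode (roughly 1/x0 steps), the mechanism behind type-I intermittency.
near = laminar_length(0.001)
far = laminar_length(0.01)
```

In the stochastic model described above, the re-injection point after each burst is drawn at random, which makes the sequence of laminar-phase lengths itself a random process.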

  1. A tangent-ring optical TWDM-MAN enabling three-level transregional reconfigurations and shared protections by multipoint distributed control

    Science.gov (United States)

    Gou, Kaiyu; Gan, Chaoqin; Zhang, Xiaoyu; Zhang, Yuchao

    2018-03-01

An optical time-and-wavelength-division-multiplexing metro-access network (TWDM-MAN) is proposed and demonstrated in this paper. Through the reuse of the tangent-ring optical distribution network and the design of a distributed control mechanism, ONUs needing to communicate with each other can be flexibly connected, making up three kinds of reconfigurable networks. Owing to the natural advantage of the ring topology for protection, three-level comprehensive protections covering both feeder and distribution fibers are also achieved. In addition, a distributed wavelength allocation (DWA) scheme is designed to support efficient parallel upstream transmission. Analyses of capacity, congestion and transmission simulation show that the network achieves good performance.

  2. Quasi-brittle damage modeling based on incremental energy relaxation combined with a viscous-type regularization

    Science.gov (United States)

    Langenfeld, K.; Junker, P.; Mosler, J.

    2018-05-01

    This paper deals with a constitutive model suitable for the analysis of quasi-brittle damage in structures. The model is based on incremental energy relaxation combined with a viscous-type regularization. A similar approach—which also represents the inspiration for the improved model presented in this paper—was recently proposed in Junker et al. (Contin Mech Thermodyn 29(1):291-310, 2017). Within this work, the model introduced in Junker et al. (2017) is critically analyzed first. This analysis leads to an improved model which shows the same features as that in Junker et al. (2017), but which (i) eliminates unnecessary model parameters, (ii) can be better interpreted from a physics point of view, (iii) can capture a fully softened state (zero stresses), and (iv) is characterized by a very simple evolution equation. In contrast to the cited work, this evolution equation is (v) integrated fully implicitly and (vi) the resulting time-discrete evolution equation can be solved analytically providing a numerically efficient closed-form solution. It is shown that the final model is indeed well-posed (i.e., its tangent is positive definite). Explicit conditions guaranteeing this well-posedness are derived. Furthermore, by additively decomposing the stress rate into deformation- and purely time-dependent terms, the functionality of the model is explained. Illustrative numerical examples confirm the theoretical findings.

  3. One Step at a Time: SBM as an Incremental Process.

    Science.gov (United States)

    Conrad, Mark

    1995-01-01

    Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…

  4. Incrementality in naming and reading complex numerals: Evidence from eyetracking

    NARCIS (Netherlands)

    Korvorst, M.H.W.; Roelofs, A.P.A.; Levelt, W.J.M.

    2006-01-01

    Individuals speak incrementally when they interleave planning and articulation. Eyetracking, along with the measurement of speech onset latencies, can be used to gain more insight into the degree of incrementality adopted by speakers. In the current article, two eyetracking experiments are reported

  5. Lifetime costs of lung transplantation : Estimation of incremental costs

    NARCIS (Netherlands)

    VanEnckevort, PJ; Koopmanschap, MA; Tenvergert, EM; VanderBij, W; Rutten, FFH

    1997-01-01

    Despite an expanding number of centres which provide lung transplantation, information about the incremental costs of lung transplantation is scarce. From 1991 until 1995, in The Netherlands a technology assessment was performed which provided information about the incremental costs of lung

  6. Finance for incremental housing: current status and prospects for expansion

    NARCIS (Netherlands)

    Ferguson, B.; Smets, P.G.S.M.

    2010-01-01

    Appropriate finance can greatly increase the speed and lower the cost of incremental housing - the process used by much of the low/moderate-income majority of most developing countries to acquire shelter. Informal finance continues to dominate the funding of incremental housing. However, new sources

  7. Validation of the periodicity of growth increment deposition in ...

    African Journals Online (AJOL)

    Validation of the periodicity of growth increment deposition in otoliths from the larval and early juvenile stages of two cyprinids from the Orange–Vaal river ... Linear regression models were fitted to the known age post-fertilisation and the age estimated using increment counts to test the correspondence between the two for ...

  8. 76 FR 73475 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2011-11-29

    ... Benefits Business Transformation, Increment I, 76 FR 53764 (Aug. 29, 2011). The final rule removed form... [CIS No. 2481-09; Docket No. USCIS-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY: U.S. Citizenship and Immigration Services, DHS. ACTION: Final...

  9. 76 FR 53763 - Immigration Benefits Business Transformation, Increment I

    Science.gov (United States)

    2011-08-29

    ..., 100, et al. Immigration Benefits Business Transformation, Increment I; Final Rule #0;#0;Federal... Benefits Business Transformation, Increment I AGENCY: U.S. Citizenship and Immigration Services, DHS... USCIS is engaged in an enterprise-wide transformation effort to implement new business processes and to...

  10. The Time Course of Incremental Word Processing during Chinese Reading

    Science.gov (United States)

    Zhou, Junyi; Ma, Guojie; Li, Xingshan; Taft, Marcus

    2018-01-01

    In the current study, we report two eye movement experiments investigating how Chinese readers process incremental words during reading. These are words where some of the component characters constitute another word (an embedded word). In two experiments, eye movements were monitored while the participants read sentences with incremental words…

  11. Creating Helical Tool Paths for Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Skjødt, Martin; Hancock, Michael H.; Bay, Niels

    2007-01-01

    Single point incremental forming (SPIF) is a relatively new sheet forming process. A sheet is clamped in a rig and formed incrementally using a rotating single point tool in the form of a rod with a spherical end. The process is often performed on a CNC milling machine and the tool movement...

  12. On conditional scalar increment and joint velocity-scalar increment statistics

    International Nuclear Information System (INIS)

    Zhang Hengbin; Wang Danhong; Tong Chenning

    2004-01-01

    Conditional velocity and scalar increment statistics are usually studied in the context of Kolmogorov's refined similarity hypotheses and are considered universal (quasi-Gaussian) for inertial-range separations. In such analyses the locally averaged energy and scalar dissipation rates are used as conditioning variables. Recent studies have shown that certain local turbulence structures can be captured when the local scalar variance (φ²)_r and the local kinetic energy k_r are used as the conditioning variables. We study the conditional increments using these conditioning variables, which also provide the local turbulence scales. Experimental data obtained in the fully developed region of an axisymmetric turbulent jet are used to compute the statistics. The conditional scalar increment probability density function (PDF) conditioned on (φ²)_r is found to be close to Gaussian for (φ²)_r small compared with its mean, and sub-Gaussian and bimodal for large (φ²)_r; it is therefore not universal. We find that the different shapes of the conditional PDFs are related to the instantaneous degree of non-equilibrium (production larger than dissipation) of the local scalar. There is further evidence of this from the PDF conditioned on both (φ²)_r and χ_r, which is largely a function of (φ²)_r/χ_r, a measure of the degree of non-equilibrium. The velocity-scalar increment joint PDF is close to joint Gaussian and quad-modal for equilibrium and non-equilibrium local velocity and scalar, respectively. The latter shape is associated with a combination of the ramp-cliff and plane strain structures. Kolmogorov's refined similarity hypotheses also predict a dependence of the conditional PDF on the degree of non-equilibrium. Therefore, the quasi-Gaussian (joint) PDF, previously observed in the context of Kolmogorov's refined similarity hypotheses, is only one of the conditional PDF shapes of inertial-range turbulence. The present study suggests that

  13. Efficiency of Oral Incremental Rehearsal versus Written Incremental Rehearsal on Students' Rate, Retention, and Generalization of Spelling Words

    Science.gov (United States)

    Garcia, Dru; Joseph, Laurice M.; Alber-Morgan, Sheila; Konrad, Moira

    2014-01-01

    The purpose of this study was to examine the efficiency of an incremental rehearsal oral versus an incremental rehearsal written procedure on a sample of primary grade children's weekly spelling performance. Participants included five second and one first grader who were in need of help with their spelling according to their teachers. An…

  14. Granulated decontamination formulations

    Science.gov (United States)

    Tucker, Mark D.

    2007-10-02

    A decontamination formulation and method of making that neutralizes the adverse health effects of both chemical and biological compounds, especially chemical warfare (CW) and biological warfare (BW) agents, and toxic industrial chemicals. The formulation provides solubilizing compounds that serve to effectively render the chemical and biological compounds, particularly CW and BW compounds, susceptible to attack, and at least one reactive compound that serves to attack (and detoxify or kill) the compound. The formulation includes at least one solubilizing agent, a reactive compound, a sorbent additive, and water. A highly adsorbent sorbent additive (e.g., amorphous silica, sorbitol, mannitol, etc.) is used to "dry out" one or more liquid ingredients into a dry, free-flowing powder that has an extended shelf life, and is more convenient to handle and mix in the field.

  15. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    To overcome the shortcomings of traditional dimensionality reduction algorithms, an incremental tensor principal component analysis (ITPCA) algorithm based on an updated-SVD technique is proposed in this paper. The paper establishes the theoretical relationship between PCA, 2DPCA, MPCA, and the graph embedding framework, and derives in detail the incremental learning procedures for adding a single sample and multiple samples. Experiments on handwritten digit recognition demonstrate that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA), while also having lower time and space complexity.
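The incremental idea behind such methods can be sketched in the vector (matrix) case by pooling per-batch means and scatter matrices into the exact full-data covariance and eigendecomposing once at the end — a covariance-update sketch under stated assumptions, not the updated-SVD scheme of ITPCA or its tensor generalization:

```python
import numpy as np

def incremental_pca(batches, k):
    """Pool per-batch means and scatter matrices into the exact full-data
    covariance, then eigendecompose. (Covariance-update sketch of
    incremental PCA, not the updated-SVD scheme used by ITPCA.)"""
    n, mean, scatter = 0, None, None
    for X in batches:
        m = X.shape[0]
        bmean = X.mean(axis=0)
        Xc = X - bmean
        if n == 0:
            mean, scatter = bmean, Xc.T @ Xc
        else:
            delta = bmean - mean  # parallel (Chan-style) combination of scatters
            scatter = scatter + Xc.T @ Xc + (n * m / (n + m)) * np.outer(delta, delta)
            mean = (n * mean + m * bmean) / (n + m)
        n += m
    vals, vecs = np.linalg.eigh(scatter / (n - 1))
    order = np.argsort(vals)[::-1][:k]  # top-k principal axes
    return vecs[:, order], vals[order]
```

Because the scatter combination is exact, the result matches batch PCA on the concatenated data; true updated-SVD methods additionally avoid storing the full covariance.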

  16. Nonlinear finite element formulation for analyzing shape memory alloy cylindrical panels

    International Nuclear Information System (INIS)

    Mirzaeifar, R; Shakeri, M; Sadighi, M

    2009-01-01

    In this paper, a general incremental displacement-based finite element formulation capable of modeling material nonlinearities based on first-order shear deformation theory (FSDT) is developed for cylindrical shape memory alloy (SMA) shells. The Boyd–Lagoudas phenomenological model with polynomial hardening, in conjunction with a 3D incremental convex cutting-plane explicit algorithm, is implemented to provide the SMA constitutive model in the finite element formulation. Several numerical examples are presented demonstrating the performance of the proposed formulation in stress, deflection and phase transformation analysis of the pseudoelastic behavior of shape memory cylindrical panels with various boundary conditions. It is also shown that the presented formulation can be applied to plates and to beams with rectangular cross section.

  17. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek; Skiadopoulos, Spiros; Kalnis, Panos

    2017-01-01

    Existing methods either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving graphs.

  18. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab; Canim, Mustafa; Sadoghi, Mohammad; Bhatta, Bishwaranjan; Chang, Yuan-Chi; Kalnis, Panos

    2017-01-01

    Many modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach to the continuous frequent subgraph mining problem.

  19. Increment and mortality in a virgin Douglas-fir forest.

    Science.gov (United States)

    Robert W. Steele; Norman P. Worthington

    1955-01-01

    Is there any basis to the forester's rule of thumb that virgin forests eventually reach an equilibrium where increment and mortality approximately balance? Are we wasting potential timber volume by failing to salvage mortality in old-growth stands?

  20. Mission Planning System Increment 5 (MPS Inc 5)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report Mission Planning System Increment 5 (MPS Inc 5) Defense Acquisition Management Information...President’s Budget RDT&E - Research, Development, Test, and Evaluation SAE - Service Acquisition Executive TBD - To Be Determined TY - Then Year...Phone: 845-9625 DSN Fax: Date Assigned: May 19, 2014 Program Information Program Name Mission Planning System Increment 5 (MPS Inc 5) DoD

  1. Shredder: GPU-Accelerated Incremental Storage and Computation

    OpenAIRE

    Bhatotia, Pramod; Rodrigues, Rodrigo; Verma, Akshat

    2012-01-01

    Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation and evaluation of Shredder, a high performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-ba...
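Content-based chunking, the operation Shredder offloads to the GPU, can be sketched on the CPU with a windowed polynomial rolling hash that declares a chunk boundary whenever the hash's low bits are zero (an illustrative sketch; Shredder's actual Rabin-fingerprint kernels and GPU pipeline are more involved):

```python
from collections import deque

def chunk(data: bytes, window: int = 16, mask: int = 0x1F,
          min_size: int = 64, max_size: int = 1024):
    """Content-defined chunking with a windowed polynomial rolling hash.
    A boundary is declared when the hash's low bits are all zero, so
    boundaries depend on content rather than on byte offsets."""
    P, M = 31, 1 << 32
    pw = pow(P, window, M)        # weight of the byte leaving the window
    chunks, start, h = [], 0, 0
    buf = deque()                 # the last `window` bytes
    for i, b in enumerate(data):
        buf.append(b)
        h = (h * P + b) % M
        if len(buf) > window:
            h = (h - buf.popleft() * pw) % M
        size = i - start + 1
        if size >= max_size or (size >= min_size and h & mask == 0):
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

Because boundaries follow the content, inserting bytes near the front of a file perturbs only nearby chunks, which is what makes deduplication by chunk hash effective.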

  2. On the instability increments of a stationary pinch

    International Nuclear Information System (INIS)

    Bud'ko, A.B.

    1989-01-01

    The stability of a stationary pinch against helical modes is studied numerically. It is shown that when the plasma pressure decreases sufficiently rapidly towards the pinch boundary, for example for an isothermal diffusion pinch with a Gaussian density distribution, the m=0 modes are the fastest growing instabilities. Instability increments are calculated. A simple analytical expression is obtained for the maximum growth increment of the sausage instability for self-similar Gaussian profiles.

  3. Biometrics Enabling Capability Increment 1 (BEC Inc 1)

    Science.gov (United States)

    2016-03-01

    modal biometrics submissions to include iris, face, palm and finger prints from biometrics collection devices, which will support the Warfighter in...2016 Major Automated Information System Annual Report Biometrics Enabling Capability Increment 1 (BEC Inc 1) Defense Acquisition Management...Phone: 227-3119 DSN Fax: Date Assigned: July 15, 2015 Program Information Program Name Biometrics Enabling Capability Increment 1 (BEC Inc 1) DoD

  4. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    OpenAIRE

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution err...

  5. A Syntactic-Semantic Approach to Incremental Verification

    OpenAIRE

    Bianculli, Domenico; Filieri, Antonio; Ghezzi, Carlo; Mandrioli, Dino

    2013-01-01

    Software verification of evolving systems is challenging mainstream methodologies and tools. Formal verification techniques often conflict with the time constraints imposed by change management practices for evolving systems. Since changes in these systems are often local to restricted parts, an incremental verification approach could be beneficial. This paper introduces SiDECAR, a general framework for the definition of verification procedures, which are made incremental by the framework...

  6. Logistics Modernization Program Increment 2 (LMP Inc 2)

    Science.gov (United States)

    2016-03-01

    Sections 3 and 4 of the LMP Increment 2 Business Case, ADM), key functional requirements, Critical Design Review (CDR) Reports, and Economic ...from the 2013 version of the LMP Increment 2 Economic Analysis and replace it with references to the Economic Analysis that will be completed...of ( inbound /outbound) IDOCs into the system. LMP must be able to successfully process 95% of ( inbound /outbound) IDOCs into the system. Will meet

  7. Organization Strategy and Structural Differences for Radical Versus Incremental Innovation

    OpenAIRE

    John E. Ettlie; William P. Bridges; Robert D. O'Keefe

    1984-01-01

    The purpose of this study was to test a model of the organizational innovation process that suggests that the strategy-structure causal sequence is differentiated by radical versus incremental innovation. That is, unique strategy and structure will be required for radical innovation, especially process adoption, while more traditional strategy and structure arrangements tend to support new product introduction and incremental process adoption. This differentiated theory is strongly supported ...

  8. Incremental Learning for Place Recognition in Dynamic Environments

    OpenAIRE

    Luo, Jie; Pronobis, Andrzej; Caputo, Barbara; Jensfelt, Patric

    2007-01-01

    Vision-based place recognition is a desirable feature for an autonomous mobile system. In order to work in realistic scenarios, visual recognition algorithms should be adaptive, i.e. they should be able to learn from experience and adapt continuously to changes in the environment. This paper presents a discriminative incremental learning approach to template-based place recognition. We use a recently introduced version of the incremental SVM, which ...

  9. MRI: Modular reasoning about interference in incremental programming

    OpenAIRE

    Oliveira, Bruno C. D. S; Schrijvers, Tom; Cook, William R

    2012-01-01

    Incremental Programming (IP) is a programming style in which new program components are defined as increments of other components. Examples of IP mechanisms include: Object-oriented programming (OOP) inheritance, aspect-oriented programming (AOP) advice and feature-oriented programming (FOP). A characteristic of IP mechanisms is that, while individual components can be independently defined, the composition of components makes those components become tightly coupled, sh...

  10. Incremental short daily home hemodialysis: a case series

    OpenAIRE

    Toth-Manikowski, Stephanie M.; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-01-01

    Background Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. Case presentation From 2011 to 2015, we initiated 5 incident hemodialysis patients on an ...

  11. Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-04-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data: the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation. Analysis based on European Centre for Medium-Range Weather Forecasts (ECMWF) data suggests that the positive correlation area (PCA) corresponds mainly to anticyclonic flow, negative vorticity, and downward airflow, while the negative correlation area (NCA) corresponds to cyclonic flow, positive vorticity, and upward airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation. The April correlation results are supported by the analysis of the vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated because of the substantial increase of humidity along with the northward progress of the ITCZ and its significant impact on the increments.

  12. Entity versus incremental theories predict older adults' memory performance.

    Science.gov (United States)

    Plaks, Jason E; Chasteen, Alison L

    2013-12-01

    The authors examined whether older adults' implicit theories regarding the modifiability of memory in particular (Studies 1 and 3) and abilities in general (Study 2) would predict memory performance. In Study 1, individual differences in older adults' endorsement of the "entity theory" (a belief that one's ability is fixed) or "incremental theory" (a belief that one's ability is malleable) of memory were measured using a version of the Implicit Theories Measure (Dweck, 1999). Memory performance was assessed with a free-recall task. Results indicated that the higher the endorsement of the incremental theory, the better the free recall. In Study 2, older and younger adults' theories were measured using a more general version of the Implicit Theories Measure that focused on the modifiability of abilities in general. Again, for older adults, the higher the incremental endorsement, the better the free recall. Moreover, as predicted, implicit theories did not predict younger adults' memory performance. In Study 3, participants read mock news articles reporting evidence in favor of either the entity or incremental theory. Those in the incremental condition outperformed those in the entity condition on reading span and free-recall tasks. These effects were mediated by pretask worry such that, for those in the entity condition, higher worry was associated with lower performance. Taken together, these studies suggest that variation in entity versus incremental endorsement represents a key predictor of older adults' memory performance. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  13. Incremental short daily home hemodialysis: a case series.

    Science.gov (United States)

    Toth-Manikowski, Stephanie M; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-07-05

    Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. From 2011 to 2015, we initiated 5 incident hemodialysis patients on an incremental home hemodialysis regimen. The biochemical parameters of all patients remained stable on the incremental hemodialysis regimen and they consistently achieved standard Kt/Vurea targets. Of the two patients with follow-up >6 months, residual kidney function was preserved for ≥2 years. Importantly, the patients were able to transition to home hemodialysis without automatically requiring 5 sessions per week at the outset and gradually increased the number of treatments and/or dialysate volume as the residual kidney function declined. An incremental home hemodialysis regimen can be safely prescribed and may improve acceptability of home hemodialysis. Reducing hemodialysis frequency by even one treatment per week can reduce the number of fistula or graft cannulations or catheter connections by >100 per year, an important consideration for patient well-being, access longevity, and access-related infections. The incremental hemodialysis approach, supported by national guidelines, can be considered for all home hemodialysis patients with residual kidney function.
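The ">100 per year" figure above follows from simple arithmetic, assuming the usual two needle cannulations (one arterial, one venous) per fistula or graft hemodialysis session:

```python
# One fewer hemodialysis session per week avoids two cannulations per week.
cannulations_per_session = 2     # one arterial + one venous needle (typical assumption)
sessions_saved_per_week = 1
weeks_per_year = 52
saved_per_year = cannulations_per_session * sessions_saved_per_week * weeks_per_year
print(saved_per_year)  # 104, consistent with the ">100 per year" claim
```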

  14. Magnetohydrodynamics (MHD flow of a tangent hyperbolic fluid with nanoparticles past a stretching sheet with second order slip and convective boundary condition

    Directory of Open Access Journals (Sweden)

    Wubshet Ibrahim

    This article presents the effect of thermal radiation on magnetohydrodynamic flow of a tangent hyperbolic fluid with nanoparticles past an enlarging sheet with second-order slip and a convective boundary condition. A condition of zero normal flux of nanoparticles at the wall is used for the concentration boundary condition, a current topic that has yet to be studied extensively. The solution for the velocity, temperature and nanoparticle concentration is governed by the parameters: power-law index n, Weissenberg number We, Biot number Bi, Prandtl number Pr, velocity slip parameters δ and γ, Lewis number Le, Brownian motion parameter Nb and thermophoresis parameter Nt. A similarity transformation is used to transform the governing non-linear boundary-value problem into coupled higher-order non-linear ordinary differential equations. The resulting equations were solved numerically using the function bvp4c from MATLAB for different values of the emerging parameters. Numerical results are presented through graphs and tables for the velocity, temperature, concentration, skin friction coefficient and local Nusselt number. The results indicate that the skin friction coefficient Cf decreases as the values of the Weissenberg number We and the slip parameters γ and δ increase, and rises as the value of the power-law index n increases. The local Nusselt number −θ′(0) decreases as the slip parameters γ and δ, radiation parameter Nr, Weissenberg number We, thermophoresis parameter Nt and power-law index n increase. However, the local Nusselt number increases as the Biot number Bi increases. Keywords: Tangent hyperbolic fluid, Second order slip flow, MHD, Convective boundary condition, Radiation effect, Passive control of nanoparticles
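The record's similarity ODEs are solved with MATLAB's bvp4c; the same workflow in SciPy's solve_bvp is sketched below on the classical Blasius boundary-layer equation as a stand-in (the tangent hyperbolic system itself involves more parameters and coupled energy/concentration equations):

```python
import numpy as np
from scipy.integrate import solve_bvp

# Blasius equation f''' + 0.5*f*f'' = 0 with f(0) = f'(0) = 0 and
# f'(eta -> inf) = 1, written as the first-order system y = [f, f', f'']
# on a truncated domain [0, 10] (standing in for "infinity").
def rhs(eta, y):
    return np.vstack([y[1], y[2], -0.5 * y[0] * y[2]])

def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0.0, 10.0, 100)
y_guess = np.zeros((3, eta.size))
y_guess[0] = eta**2 / 20.0   # crude guess consistent with f' rising to 1
y_guess[1] = eta / 10.0
y_guess[2] = 0.1

sol = solve_bvp(rhs, bc, eta, y_guess)
print(sol.y[2, 0])   # wall value f''(0); the classical Blasius result is ~0.332
```

bvp4c and solve_bvp share the same shape of problem statement: a first-order system, a boundary-condition residual function, an initial mesh, and an initial guess.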

  15. Systematic Equation Formulation

    DEFF Research Database (Denmark)

    Lindberg, Erik

    2007-01-01

    A tutorial giving a very simple introduction to the set-up of the equations used as a model for an electrical/electronic circuit. The aim is to find a method which is as simple and general as possible with respect to implementation in a computer program. The “Modified Nodal Approach”, MNA, and the “Controlled Source Approach”, CSA, for systematic equation formulation are investigated. It is suggested that the kernel of the PSpice program based on MNA is reprogrammed.
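As a minimal illustration of the MNA set-up discussed in the tutorial (a toy example of my own, not taken from it): each resistor stamps its conductance into the nodal matrix, and each voltage source adds one extra current unknown and one extra equation:

```python
import numpy as np

# MNA for: 10 V source from node 1 to ground, R1 between nodes 1 and 2,
# R2 between node 2 and ground. Unknowns: v1, v2 and the source current.
R1, R2, V = 1e3, 2e3, 10.0
n, m = 2, 1                          # 2 node voltages + 1 branch current
A = np.zeros((n + m, n + m))
z = np.zeros(n + m)

def stamp_resistor(A, a, b, R):
    """Stamp a resistor's conductance between nodes a and b (0 = ground)."""
    g = 1.0 / R
    for i, j, s in ((a, a, +g), (b, b, +g), (a, b, -g), (b, a, -g)):
        if i and j:                  # ground row/column is omitted
            A[i - 1, j - 1] += s

stamp_resistor(A, 1, 2, R1)
stamp_resistor(A, 2, 0, R2)
A[0, n] = A[n, 0] = 1.0              # source current enters node 1; v1 = V
z[n] = V

v1, v2, i_src = np.linalg.solve(A, z)
print(v2)   # voltage divider: 10 * R2/(R1 + R2) ≈ 6.667 V
```

The stamping discipline is what makes MNA attractive for program implementation: each element touches only its own rows and columns, independently of the rest of the netlist.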

  16. Support vector machine incremental learning triggered by wrongly predicted samples

    Science.gov (United States)

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

    According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and may migrate old samples between the SV set and the non-support-vector (NSV) set; at the same time the learning model should be updated based on the SVs. However, it is not clear at that moment which of the old samples will move between the SV and NSV sets. Additionally, the learning model may be updated unnecessarily, which does not greatly increase its accuracy but does decrease the training speed. Therefore, how to choose new SVs from the old sets during the incremental stages, and when to process incremental steps, greatly influences the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and, simultaneously, to use wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.
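The "update only when the incoming sample is wrongly predicted" trigger can be sketched with a toy linear classifier (perceptron-style updates; this illustrates the triggering idea only, not the paper's KKT-based incremental SVM):

```python
import numpy as np

def error_triggered_fit(X, y, lr=0.1, epochs=20):
    """Train a linear separator sample by sample, performing an update only
    when the current model predicts the incoming sample wrongly (the trigger).
    Toy perceptron-style sketch, not the paper's KKT-based incremental SVM."""
    w, b, updates = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # wrongly predicted -> trigger update
                w += lr * yi * xi
                b += lr * yi
                updates += 1
    return w, b, updates
```

On easy data the trigger fires only a handful of times, which is exactly the efficiency argument: most presentations leave the model untouched.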

  17. Three routes forward for biofuels: Incremental, leapfrog, and transitional

    International Nuclear Information System (INIS)

    Morrison, Geoff M.; Witcover, Julie; Parker, Nathan C.; Fulton, Lew

    2016-01-01

    This paper examines three technology routes for lowering the carbon intensity of biofuels: (1) a leapfrog route that focuses on major technological breakthroughs in lignocellulosic pathways at new, stand-alone biorefineries; (2) an incremental route in which improvements are made to existing U.S. corn ethanol and soybean biodiesel biorefineries; and (3) a transitional route in which biotechnology firms gain experience growing, handling, or chemically converting lignocellulosic biomass in a lower-risk fashion than leapfrog biorefineries by leveraging existing capital stock. We find the incremental route is likely to involve the largest production volumes and greenhouse gas benefits until at least the mid-2020s, but transitional and leapfrog biofuels together have far greater long-term potential. We estimate that the Renewable Fuel Standard, California's Low Carbon Fuel Standard, and federal tax credits provided an incentive of roughly $1.5–2.5 per gallon of leapfrog biofuel between 2012 and 2015, but that regulatory elements in these policies mostly incentivize lower-risk incremental investments. Adjustments in policy may be necessary to bring a greater focus on transitional technologies that provide targeted learning and cost reduction opportunities for leapfrog biofuels. - Highlights: • Three technological pathways are compared that lower carbon intensity of biofuels. • Incremental changes lead to faster greenhouse gas reductions. • Leapfrog changes lead to greatest long-term potential. • Two main biofuel policies (RFS and LCFS) are largely incremental in nature. • Transitional biofuels offer medium-risk, medium reward pathway.

  18. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Science.gov (United States)

    Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua

    2014-01-01

    To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also trained to adapt in a timely manner to the target appearance variation. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252

  19. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Directory of Open Access Journals (Sweden)

    Ming Xue

    2014-02-01

    To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also trained to adapt in a timely manner to the target appearance variation. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks.

  20. Drug delivery and formulations.

    Science.gov (United States)

    Breitkreutz, Jörg; Boos, Joachim

    2011-01-01

    Paediatric drug delivery is a major challenge in drug development. Because of the heterogeneous nature of the patient group, ranging from newborns to adolescents, there is a need to use appropriate excipients, drug dosage forms and delivery devices for different age groups. So far, there is a lack of suitable and safe drug formulations for children, especially for the very young and seriously ill patients. The new EU legislation will enforce paediatric clinical trials and drug development. Current advances in paediatric drug delivery include interesting new concepts such as fast-dissolving drug formulations, including orodispersible tablets and oral thin strips (buccal wafers), and multiparticulate dosage forms based on mini-tabletting or pelletization technologies. Parenteral administration is likely to remain the first choice for children in the neonatal period and for emergency cases. Alternative routes of administration include transdermal, pulmonary and nasal drug delivery systems. A few products are already available on the market, but others still need further investigations and clinical proof of concept.

  1. Ether formulations of relativity

    International Nuclear Information System (INIS)

    Duffy, M.C.

    1980-01-01

    Contemporary ether theories are surveyed and criticised, especially those formally identical to orthodox Relativity. The historical development of Relativity, Special and General, in terms of an ether, is briefly indicated. Classical interpretations of Generalized Relativity using ether are compared to Euclidean formulations using a background space. The history of a sub-group of theories, formulating a 'new' Relativity involving modified transforms, is outlined. According to the theory with which they agree, recent supposed detections of drift are classified and criticised. Cosmological evidence suggesting an ether is mentioned. Only ether theories formally identical to Relativity have been published in depth. They stand criticised as being contrary to the positivist spirit. The history of mechanical analogues is traced, from Hartley's representing gravitating matter as spherical standing waves, to recent suggestions that vortex-sponge might model electromagnetic, quantum, uncertainty and faster-than-light phenomena. Contemporary theories are particular physical theories, themselves 'second interpretations' of a primary mathematical model. Mechanical analogues are auxiliary, not necessary, to ether theory, disclosing relationships between classical and non-classical descriptions of assemblies changing state. The ether-relativity polemic, part of a broader dispute about relativity, is founded on mistaken conceptions of the roles of mathematical and physical models, mechanical analogues; and a distorted view of history, which indicates that ether theories have become relativistic. (author)

  2. Making context explicit for explanation and incremental knowledge acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Brezillon, P. [Univ. Paris (France)

    1996-12-31

    Intelligent systems may be improved by making context explicit in problem solving. This is a lesson drawn from a study of the reasons why a number of knowledge-based systems (KBSs) failed. We discuss the interest to make context explicit in explanation generation and incremental knowledge acquisition, two important aspects of intelligent systems that aim to cooperate with users. We show how context can be used to better explain and incrementally acquire knowledge. The advantages of using context in explanation and incremental knowledge acquisition are discussed through SEPIT, an expert system for supporting diagnosis and explanation through simulation of power plants. We point out how the limitations of such systems may be overcome by making context explicit.

  3. Martingales, nonstationary increments, and the efficient market hypothesis

    Science.gov (United States)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-06-01

    We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
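
    The point that a test for a martingale is, in practice, a test for uncorrelated increments can be illustrated with a small sketch. The simulated driftless Gaussian random walk is a stand-in for market data, not the paper's data set:

```python
import random

def increment_autocorr(x, lag=1):
    """Lag autocorrelation of the increments x(t+1) - x(t)."""
    inc = [b - a for a, b in zip(x, x[1:])]
    n = len(inc) - lag
    mean = sum(inc) / len(inc)
    var = sum((d - mean) ** 2 for d in inc) / len(inc)
    cov = sum((inc[i] - mean) * (inc[i + lag] - mean) for i in range(n)) / n
    return cov / var

random.seed(0)
# A driftless Gaussian random walk is a martingale with uncorrelated increments.
x = [0.0]
for _ in range(20000):
    x.append(x[-1] + random.gauss(0.0, 1.0))
print(abs(increment_autocorr(x)) < 0.05)  # True: increments look uncorrelated
```

    As the abstract notes, passing such a test does not make the process Markovian: memory may still hide in higher-order correlations that simple increment autocorrelations cannot see.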

  4. Single point incremental forming: Formability of PC sheets

    Science.gov (United States)

    Formisano, A.; Boccarusso, L.; Carrino, L.; Lambiase, F.; Minutolo, F. Memola Capece

    2018-05-01

    Recent research on Single Point Incremental Forming of polymers has only begun to explore the possibility of expanding the materials capability window of this flexible forming process beyond metals, by demonstrating the workability of thermoplastic polymers at room temperature. Given the different behaviour of polymers compared to metals, several aspects need further study to better understand how these materials behave when incrementally formed. Thus, the aim of this work is to investigate the formability of incrementally formed polycarbonate thin sheets. To this end, an experimental investigation at room temperature was conducted involving formability tests; varying wall angle cone and pyramid frusta were manufactured by processing polycarbonate sheets with different thicknesses and using tools with different diameters, in order to draw conclusions on the formability of polymer sheets through the evaluation of the forming angles and the observation of the failure mechanisms.

  5. Motion-Induced Blindness Using Increments and Decrements of Luminance

    Directory of Open Access Journals (Sweden)

    Stine Wm Wren

    2017-10-01

    Full Text Available Motion-induced blindness describes the disappearance of stationary elements of a scene when other, perhaps non-overlapping, elements of the scene are in motion. We measured the effects of increment (200.0 cd/m2) and decrement (15.0 cd/m2) targets and masks presented on a grey background (108.0 cd/m2), tapping into putative ON- and OFF-channels, on the rate of target disappearance psychophysically. We presented two-frame motion, which has coherent motion energy, and dynamic Glass patterns and dynamic anti-Glass patterns, which do not have coherent motion energy. Using the method of constant stimuli, participants viewed stimuli of varying durations (3.1 s, 4.6 s, 7.0 s, 11 s, or 16 s) in a given trial and then indicated whether or not the targets vanished during that trial. Psychometric function midpoints were used to define absolute threshold mask duration for the disappearance of the target. 95% confidence intervals for threshold disappearance times were estimated using a bootstrap technique for each of the participants across two experiments. Decrement masks were more effective than increment masks with increment targets. Increment targets were easier to mask than decrement targets. Distinct mask pattern types had no effect, suggesting that perceived coherence contributes to the effectiveness of the mask. The ON/OFF dichotomy clearly carries its influence to the level of perceived motion coherence. Further, the asymmetry in the effects of increment and decrement masks on increment and decrement targets might lead one to speculate that they reflect the ‘importance’ of detecting decrements in the environment.

  6. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors.

    Science.gov (United States)

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual's processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people's moral character is fixed (entity theorists) and individuals who hold the implicit belief that people's moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2-4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  7. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    Directory of Open Access Journals (Sweden)

    Niwen Huang

    2017-08-01

    Full Text Available Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2–4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  8. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated...... experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime of under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce...
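
    The kind of algorithm analyzed can be illustrated with a simplified sketch in the spirit of the Pearce–Kelly approach: when an inserted edge (u, v) violates the current order, only the affected region between the positions of v and u is reordered. This is an illustrative reimplementation, not the authors' code:

```python
class IncrementalTopOrder:
    """Maintain a topological order of a DAG under edge insertions.

    Simplified Pearce-Kelly-style variant: an edge (u, v) means u must
    precede v; a violating insertion triggers a local reordering of the
    positions between pos[v] and pos[u] only.
    """

    def __init__(self, n):
        self.succ = [set() for _ in range(n)]
        self.ord = list(range(n))   # position -> node
        self.pos = list(range(n))   # node -> position

    def add_edge(self, u, v):
        self.succ[u].add(v)
        lb, ub = self.pos[v], self.pos[u]
        if lb >= ub:
            return                  # order already consistent
        # Nodes in the affected region [lb, ub] reachable from v.
        reach, stack = set(), [v]
        while stack:
            x = stack.pop()
            if x in reach:
                continue
            reach.add(x)
            if x == u:
                raise ValueError("cycle detected")
            stack.extend(w for w in self.succ[x] if lb <= self.pos[w] <= ub)
        # Move the nodes reachable from v after the rest of the region;
        # relative order within each group is preserved.
        region = [self.ord[i] for i in range(lb, ub + 1)]
        reordered = ([x for x in region if x not in reach]
                     + [x for x in region if x in reach])
        for i, x in enumerate(reordered, start=lb):
            self.ord[i] = x
            self.pos[x] = i
```

    Because positions strictly increase along every existing edge, any path from v back to u must stay inside the affected region, so the local search suffices for both reordering and cycle detection.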

  9. Apparatus for electrical-assisted incremental forming and process thereof

    Science.gov (United States)

    Roth, John; Cao, Jian

    2018-04-24

    A process and apparatus for forming a sheet metal component using an electric current passing through the component. The process can include providing an incremental forming machine, the machine having at least one arcuate tipped tool and at least one electrode spaced a predetermined distance from the arcuate tipped tool. The machine is operable to perform a plurality of incremental deformations on the sheet metal component using the arcuate tipped tool. The machine is also operable to apply an electric direct current through the electrode into the sheet metal component at the predetermined distance from the arcuate tipped tool while the machine is forming the sheet metal component.

  10. Single-point incremental forming and formability-failure diagrams

    DEFF Research Database (Denmark)

    Silva, M.B.; Skjødt, Martin; Atkins, A.G.

    2008-01-01

    In a recent work [1], the authors constructed a closed-form analytical model that is capable of dealing with the fundamentals of single point incremental forming and explaining the experimental and numerical results published in the literature over the past couple of years. The model is based...... of deformation that are commonly found in general single point incremental forming processes; and (ii) to investigate the formability limits of SPIF in terms of ductile damage mechanics and the question of whether necking does, or does not, precede fracture. Experimentation by the authors together with data...

  11. Short-term load forecasting with increment regression tree

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jingfei; Stenzel, Juergen [Darmstadt University of Techonology, Darmstadt 64283 (Germany)

    2006-06-15

    This paper presents a new regression tree method for short-term load forecasting. Both increment and non-increment trees are built from the historical data to provide the data-space partition and input variable selection. A support vector machine is applied to the samples at the regression tree nodes for further fine regression. Results from different tree nodes are integrated through a weighted-average method to obtain the comprehensive forecasting result. The effectiveness of the proposed method is demonstrated through its application to an actual system. (author)
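
    The integration step, combining per-node forecasts through a weighted average, can be sketched as follows. Weighting by node sample count is an assumption made here for illustration; the paper's actual weighting scheme is not given in the abstract:

```python
def combine_node_forecasts(node_forecasts):
    """Weighted average of per-node forecasts.

    node_forecasts: list of (forecast, weight) pairs, where the weight
    might be, e.g., the number of training samples in the regression-tree
    node (an illustrative assumption).
    """
    total = sum(w for _, w in node_forecasts)
    return sum(f * w for f, w in node_forecasts) / total

# A node trained on 3 samples outweighs one trained on a single sample.
print(combine_node_forecasts([(100.0, 3), (110.0, 1)]))  # 102.5
```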

  12. Vozy formule 1

    OpenAIRE

    Zbožínek, Adam

    2009-01-01

    This work presents the basic rules and requirements for the construction and use of Formula 1 cars. The main focus is on aerodynamics, which is the most important discipline in this motorsport; the work also covers the basic factors concerning the car's engine, the wheels, the new KERS technology, and the design of the steering wheel.

  13. Assessment of strategy formulation

    DEFF Research Database (Denmark)

    Acur, Nuran; Englyst, Linda

    2006-01-01

    of the success criteria through face-to-face interviews with 46 managers, workshops involving 40 managers, and two in-depth case studies. The success criteria have been slightly modified due to the empirical results, to yield the assessment tool. Findings – The resulting assessment tool integrates three generic...... approaches to strategy assessment, namely the goal-centred, comparative and improvement approaches, as found in the literature. Furthermore, it encompasses three phases of strategy formulation processes: strategic thinking, strategic planning and embedding of strategy. The tool reflects that the different......, but cases and managerial perceptions indicate that the need for accurate and detailed plans might be overrated in the literature, as implementation relies heavily on continuous improvement and empowerment. Concerning embedding, key aspects relate both to the goal-centred and improvement approaches, while...

  14. Development of corotational formulated FEM for application to 30m class large deployable reflector

    International Nuclear Information System (INIS)

    Ozawa, Satoru; Fujiwara, Yuuichi; Tsujihata, Akio

    2010-01-01

    JAXA, the Japan Aerospace Exploration Agency, is developing a corotational formulated finite element analysis method and its software 'Origami/ETS' for the development of 30 m class large deployable reflectors. Because the deployable reflector is composed of beams, cables and mesh, the analysis method is generalized for finite elements with multiple nodes, which are commonly used in linear finite element analyses. Large displacements and rotations are taken into account by the corotational formulation. The tangent stiffness matrix for finite elements with multiple nodes is obtained as follows: the geometric stiffness matrix of two-node elements is derived by taking the variation of the element's corotational matrix in the virtual work of finite elements undergoing large displacement; the geometric stiffness matrix for three-node elements is derived similarly; as an extension of the two- and three-node element theories, the geometric stiffness matrix for multiple-node elements is derived; with this geometric stiffness matrix, the tangent stiffness matrix is obtained. The analysis method is applied to the deployment analysis and static structural analysis of the 30 m class large deployable reflector. In the deployment analysis, it is confirmed that the method stably analyzes the deployment motion between the deployed and stowed configurations of the reflector. In the static analysis, it is confirmed that the mesh structure is analyzed successfully. The 30 m class large deployable reflector is still being developed and is about to undergo several tests with its prototypes. This analysis method will be used in the tests and verifications of the reflector.
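
    In generic corotational formulations, the tangent stiffness combines a rotated material stiffness with a geometric part arising from the variation of the corotational rotation. A schematic sketch of this split (not the paper's exact derivation) is:

```latex
% Internal force in the global frame, with R the element's corotational
% rotation and bars denoting quantities in the local (corotated) frame:
\mathbf{f}_{\mathrm{int}} = \mathbf{R}\,\bar{\mathbf{f}}_{\mathrm{int}}(\bar{\mathbf{u}}),
\qquad
% Tangent stiffness = rotated material stiffness + geometric stiffness
% arising from the variation of R with the displacements:
\mathbf{K}_T = \frac{\partial \mathbf{f}_{\mathrm{int}}}{\partial \mathbf{u}}
             = \mathbf{R}\,\bar{\mathbf{K}}\,\mathbf{R}^{\mathsf{T}} + \mathbf{K}_G .
```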

  15. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    Science.gov (United States)

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply detrended fluctuation analysis (DFA) to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume do exhibit long memory. To determine whether the volatility clustering is due to inherent higher-order correlations not detected by applying the DFA directly to the logarithmic increments of the KTB futures, we shuffle the original tick data of futures prices and generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the DFA results on volatilities and traded volumes may support the hypothesis concerning price changes.
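
    A minimal version of the DFA procedure (cumulative profile, piecewise linear detrending, slope of the log–log fluctuation curve) can be sketched as follows. White noise is used here as a stand-in series; the scale choices are illustrative:

```python
import numpy as np

def dfa_exponent(series, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: slope of log F(n) versus log n.

    Uncorrelated (white) noise gives alpha ~ 0.5; long memory gives
    alpha > 0.5.
    """
    profile = np.cumsum(series - np.mean(series))
    log_n, log_F = [], []
    for n in scales:
        m = len(profile) // n
        F2 = []
        for i in range(m):
            seg = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            F2.append(np.mean((seg - trend) ** 2))
        log_n.append(np.log(n))
        log_F.append(0.5 * np.log(np.mean(F2)))
    return np.polyfit(log_n, log_F, 1)[0]

rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.normal(size=20000))
print(round(alpha, 2))  # near 0.5 for uncorrelated increments
```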

  16. Playing by the rules? Analysing incremental urban developments

    NARCIS (Netherlands)

    Karnenbeek, van Lilian; Janssen-Jansen, Leonie

    2018-01-01

    Current urban developments are often considered outdated and static, and the argument follows that they should become more adaptive. In this paper, we argue that existing urban developments are already adaptive and incremental. Given this flexibility in urban development, understanding changes in the

  17. Size, Stability and Incremental Budgeting Outcomes in Public Universities.

    Science.gov (United States)

    Schick, Allen G.; Hills, Frederick S.

    1982-01-01

    Examined the influence of relative size in the analysis of total dollar and workforce budgets, and changes in total dollar and workforce budgets when correlational/regression methods are used. Data suggested that size dominates the analysis of total budgets, and is not a factor when discretionary dollar increments are analyzed. (JAC)

  18. The National Institute of Education and Incremental Budgeting.

    Science.gov (United States)

    Hastings, Anne H.

    1979-01-01

    The National Institute of Education's (NIE) history demonstrates that the relevant criteria for characterizing budgeting as incremental are not the predictability and stability of appropriations but the conditions of complexity, limited information, multiple factors, and imperfect agreement on ends; NIE's appropriations were dominated by political…

  19. Generation of Referring Expressions: Assessing the Incremental Algorithm

    Science.gov (United States)

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…

  20. Object class hierarchy for an incremental hypertext editor

    Directory of Open Access Journals (Sweden)

    A. Colesnicov

    1995-02-01

    Full Text Available The design of an object class hierarchy is considered in the context of a hypertext editor implementation. The following basic classes were selected: the editor's coordinate system, the memory manager, the text buffer executing basic editing operations, the inherited hypertext buffer, the edit window, and the multi-window shell. Special hypertext editing features, support for incremental hypertext creation, and further generalizations are discussed.
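
    The class split described (a base text buffer with elementary editing operations, plus an inherited hypertext buffer) can be sketched as follows. The class and method names here are illustrative, not those of the original editor:

```python
class TextBuffer:
    """Base buffer executing basic editing operations."""

    def __init__(self):
        self.lines = [""]

    def insert(self, row, col, text):
        line = self.lines[row]
        self.lines[row] = line[:col] + text + line[col:]

    def delete(self, row, col, length):
        line = self.lines[row]
        self.lines[row] = line[:col] + line[col + length:]


class HypertextBuffer(TextBuffer):
    """Inherited buffer that additionally tracks hypertext links."""

    def __init__(self):
        super().__init__()
        self.links = {}  # (row, col) -> target node name

    def add_link(self, row, col, target):
        self.links[(row, col)] = target


buf = HypertextBuffer()
buf.insert(0, 0, "See also: glossary")
buf.add_link(0, 10, "glossary-node")
```

    Inheriting the hypertext buffer from the text buffer lets hypertext creation proceed incrementally: plain editing operations keep working, and link anchors are layered on top.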

  1. Bipower variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, José Manuel; Podolskij, Mark

    2009-01-01

    Convergence in probability and central limit laws of bipower variation for Gaussian processes with stationary increments and for integrals with respect to such processes are derived. The main tools of the proofs are some recent powerful techniques of Wiener/Itô/Malliavin calculus for establishing...

  2. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    KAUST Repository

    Zhang, Chuxu; Liu, Chuang; Yu, Lu; Zhang, Zi-Ke; Zhou, Tao

    2017-01-01

    success academic careers. In this work, given a set of young researchers who have published the first first-author paper recently, we solve the problem of how to effectively predict the top k% researchers who achieve the highest citation increment in Δt

  3. Revisiting the fundamentals of single point incremental forming by

    DEFF Research Database (Denmark)

    Silva, Beatriz; Skjødt, Martin; Martins, Paulo A.F.

    2008-01-01

    Knowledge of the physics behind the fracture of material at the transition between the inclined wall and the corner radius of the sheet is of great importance for understanding the fundamentals of single point incremental forming (SPIF). How the material fractures, what is the state of strain...

  4. Some theoretical aspects of capacity increment in gaseous diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Coates, J. H.; Guais, J. C.; Lamorlette, G.

    1975-09-01

    Facing the sharply growing demand for enrichment services, the implementation of new capacities must be part of an optimized scheme spread out in time. In this paper the alternative solutions are studied first for a single increment decision, and then within an optimum schedule. The limits of the analysis are discussed.

  5. Respiratory ammonia output and blood ammonia concentration during incremental exercise

    NARCIS (Netherlands)

    Ament, W; Huizenga, [No Value; Kort, E; van der Mark, TW; Grevink, RG; Verkerke, GJ

    The aim of this study was to investigate whether the increase of ammonia concentration and lactate concentration in blood was accompanied by an increased expiration of ammonia during graded exercise. Eleven healthy subjects performed an incremental cycle ergometer test. Blood ammonia, blood lactate

  6. Incremental concept learning with few training examples and hierarchical classification

    NARCIS (Netherlands)

    Bouma, H.; Eendebak, P.T.; Schutte, K.; Azzopardi, G.; Burghouts, G.J.

    2015-01-01

    Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible

  7. Factors for Radical Creativity, Incremental Creativity, and Routine, Noncreative Performance

    Science.gov (United States)

    Madjar, Nora; Greenberg, Ellen; Chen, Zheng

    2011-01-01

    This study extends theory and research by differentiating between routine, noncreative performance and 2 distinct types of creativity: radical and incremental. We also use a sensemaking perspective to examine the interplay of social and personal factors that may influence a person's engagement in a certain level of creative action versus routine,…

  8. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...

  9. Incremental exercise test performance with and without a respiratory ...

    African Journals Online (AJOL)

    Incremental exercise test performance with and without a respiratory gas collection system. Industrial-type mask wear is thought to impair exercise performance through increased respiratory dead space, flow ...

  10. 78 FR 22770 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2013-04-17

    ...-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY...: Background On August 29, 2011, DHS issued a final rule titled, Immigration Benefits Business Transformation... business processes. In this notice, we are correcting three technical errors. DATES: The effective date of...

  11. Minimizing System Modification in an Incremental Design Approach

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing the system modification cost. We consider an incremental design process that starts from an already existing system running a set of applications. We...

  12. Incremental cryptography and security of public hash functions ...

    African Journals Online (AJOL)

    An investigation of incremental algorithms for cryptographic functions was initiated. The problem, for collision-free hashing, is to design a scheme for which there exists an efficient “update” algorithm: this algorithm is given the hash function H, the hash h = H(M) of message M and the “replacement request” (j, m), and outputs ...
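
    The "update" interface described, given H, h = H(M) and a replacement request (j, m), can be illustrated with a toy randomize-then-combine scheme in the style of Bellare and Micciancio's additive combiner. This sketch shows the interface only; the modulus choice is arbitrary and no security claim is made:

```python
import hashlib

P = 2 ** 127 - 1  # modulus for the additive combiner (toy choice)

def block_hash(index, block):
    """Randomize step: hash each (index, block) pair independently."""
    data = index.to_bytes(8, "big") + block.encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def full_hash(blocks):
    """Combine step: modular sum of the per-block hashes."""
    return sum(block_hash(i, b) for i, b in enumerate(blocks)) % P

def update_hash(h, j, old_block, new_block):
    """Incremental update for the replacement request (j, m):
    subtract the old block's contribution and add the new one."""
    return (h - block_hash(j, old_block) + block_hash(j, new_block)) % P

blocks = ["alpha", "beta", "gamma"]
h = full_hash(blocks)
h2 = update_hash(h, 1, "beta", "BETA")
blocks[1] = "BETA"
print(h2 == full_hash(blocks))  # True: the update matches recomputation
```

    The update costs two block hashes regardless of message length, which is the efficiency property incremental cryptography is after.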

  13. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  14. Evidence combination for incremental decision-making processes

    NARCIS (Netherlands)

    Berrada, Ghita; van Keulen, Maurice; de Keijzer, Ander

    The establishment of a medical diagnosis is an incremental process highly fraught with uncertainty. At each step of this painstaking process, it may be beneficial to be able to quantify the uncertainty linked to the diagnosis and steadily update the uncertainty estimation using available sources of

  15. Geometry of finite deformations and time-incremental analysis

    Czech Academy of Sciences Publication Activity Database

    Fiala, Zdeněk

    2016-01-01

    Roč. 81, May (2016), s. 230-244 ISSN 0020-7462 Institutional support: RVO:68378297 Keywords : solid mechanics * finite deformations * time-incremental analysis * Lagrangian system * evolution equation of Lie type Subject RIV: BE - Theoretical Physics Impact factor: 2.074, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020746216000330

  16. Mixed convection and heat generation/absorption aspects in MHD flow of tangent-hyperbolic nanoliquid with Newtonian heat/mass transfer

    Science.gov (United States)

    Qayyum, Sajid; Hayat, Tasawar; Shehzad, Sabir Ali; Alsaedi, Ahmed

    2018-03-01

    This article concentrates on the magnetohydrodynamic (MHD) stagnation point flow of a tangent hyperbolic nanofluid in the presence of buoyancy forces. The flow is caused by a stretching surface. Characteristics of heat transfer are examined under the influence of thermal radiation and heat generation/absorption. Newtonian conditions for heat and mass transfer are employed. The nanofluid model includes Brownian motion and thermophoresis. The governing nonlinear partial differential system of the problem is transformed into a system of nonlinear ordinary differential equations through appropriate variables. The impact of the embedded parameters on the velocity, temperature and nanoparticle concentration fields is presented graphically. Numerical computations are made to obtain the values of the skin friction coefficient and the local Nusselt and Sherwood numbers. It is concluded that the velocity field is enhanced with the mixed convection parameter, while the reverse trend is observed for the power-law index. The Brownian motion parameter has opposite effects on the temperature and the heat transfer rate. Moreover, the solutal conjugate parameter affects the concentration and the local Sherwood number in a similar way.

  17. Hyperbolic tangent digital filters applied to smoothing perturbations in resistivity data from Moroccan phosphate

    Directory of Open Access Journals (Sweden)

    M. Amrani

    2008-06-01

    Full Text Available Low-pass and band-pass filters can be designed by combining hyperbolic tangent functions in the frequency domain, using the scaling and shifting theorems of the Fourier transform. The corresponding time-domain filter functions can be derived analytically from the frequency-domain expressions. Smoothness parameters control the slopes in the cutoff regions and allow the construction of relatively short filters while reducing oscillations in the filter's time-domain response. Different smoothness parameters can be chosen for the low and high cutoff frequencies when designing band-pass filters. Following the scheme proposed in this article, the other filter types can be derived easily.
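    As a minimal sketch of the construction (not the authors' implementation; the cutoff `fc` and smoothness parameter `s` are illustrative names), a low-pass response can be built from two opposing tanh edges, and a band-pass from the difference of two low-passes with separate smoothness parameters:

```python
import math

def tanh_lowpass(f, fc, s):
    """Smooth low-pass response: ~1 inside |f| < fc, ~0 outside.

    s controls the steepness of the cutoff slopes (smaller s = sharper).
    """
    return 0.5 * (math.tanh((f + fc) / s) - math.tanh((f - fc) / s))

def tanh_bandpass(f, f_lo, f_hi, s_lo, s_hi):
    """Band-pass as the difference of two low-pass responses;
    separate smoothness parameters for the low and high cutoffs."""
    return tanh_lowpass(f, f_hi, s_hi) - tanh_lowpass(f, f_lo, s_lo)

# Response is ~1 in the pass band and ~0 far outside it.
print(round(tanh_lowpass(0.0, fc=10.0, s=1.0), 3))   # → 1.0
print(round(tanh_lowpass(30.0, fc=10.0, s=1.0), 3))  # → 0.0
```

    Sampling such a response over a frequency grid and inverse-transforming it would give the corresponding time-domain filter coefficients.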

  18. Baseline LAW Glass Formulation Testing

    International Nuclear Information System (INIS)

    Kruger, Albert A.; Mooers, Cavin; Bazemore, Gina; Pegg, Ian L.; Hight, Kenneth; Lai, Shan Tao; Buechele, Andrew; Rielley, Elizabeth; Gan, Hao; Muller, Isabelle S.; Cecil, Richard

    2013-01-01

    The major objective of the baseline glass formulation work was to develop and select glass formulations that are compliant with contractual and processing requirements for each of the LAW waste streams. Other objectives of the work included preparation and characterization of glasses with respect to the properties of interest, optimization of sulfate loading in the glasses, evaluation of the ability to achieve waste loading limits, testing to demonstrate compatibility of glass melts with melter materials of construction, development of glass formulations to support ILAW qualification activities, and identification of glass formulation issues with respect to contract specifications and processing requirements.

  19. Accelerated Optimization in the PDE Framework: Formulations for the Active Contour Case

    KAUST Repository

    Yezzi, Anthony; Sundaramoorthi, Ganesh

    2017-01-01

    Following the seminal work of Nesterov, accelerated optimization methods have been used to powerfully boost the performance of first-order, gradient-based parameter estimation in scenarios where second-order optimization strategies are either inapplicable or impractical. Not only does accelerated gradient descent converge considerably faster than traditional gradient descent, but it also performs a more robust local search of the parameter space by initially overshooting and then oscillating back as it settles into a final configuration, thereby selecting only local minimizers with a basin of attraction large enough to contain the initial overshoot. This behavior has made accelerated and stochastic gradient search methods particularly popular within the machine learning community. In their recent PNAS 2016 paper, Wibisono, Wilson, and Jordan demonstrate how a broad class of accelerated schemes can be cast in a variational framework formulated around the Bregman divergence, leading to continuum-limit ODEs. We show how their formulation may be further extended to infinite-dimensional manifolds (starting here with the geometric space of curves and surfaces) by substituting the Bregman divergence with inner products on the tangent space and explicitly introducing a distributed mass model which evolves in conjunction with the object of interest during the optimization process. The co-evolving mass model, which is introduced purely for the sake of endowing the optimization with helpful dynamics, also links the resulting class of accelerated PDE-based optimization schemes to fluid dynamical formulations of optimal mass transport.
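    A one-dimensional sketch of the overshoot-and-settle dynamics described above (plain Nesterov momentum on a toy quadratic; this illustrates the accelerated behavior only, not the paper's PDE/manifold formulation):

```python
def grad(x):  # gradient of f(x) = 0.5 * x**2, minimum at x = 0
    return x

def run(accelerated, x0=5.0, steps=100, lr=0.1, mu=0.9):
    """Return the iterate trajectory for plain or Nesterov-accelerated
    gradient descent on the toy quadratic above."""
    x, v, xs = x0, 0.0, []
    for _ in range(steps):
        if accelerated:
            v = mu * v - lr * grad(x + mu * v)  # look-ahead gradient
            x = x + v
        else:
            x = x - lr * grad(x)
        xs.append(x)
    return xs

gd, nag = run(False), run(True)
# Both settle near the minimizer, but only the accelerated run
# overshoots past it (goes negative) and oscillates back.
print(abs(gd[-1]) < 1e-3, abs(nag[-1]) < 1e-3, min(gd) > 0, min(nag) < 0)
# → True True True True
```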

  1. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
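    The idea of projecting each iterate onto a convex set of admissible solutions, rather than adding a regularization term, can be illustrated schematically (this is not the authors' motion-estimation setup; the nonnegative orthant below merely stands in for the convex subspace of regularized candidates):

```python
def project(x):
    """Projection onto the convex set of admissible solutions.
    Here C = {x : x >= 0}, standing in for the regularized subspace."""
    return [max(xi, 0.0) for xi in x]

def grad_data_term(x, A, b):
    # gradient of the data term 0.5 * ||A x - b||^2, i.e. A^T (A x - b)
    r = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i]
         for i in range(len(b))]
    return [sum(A[i][j] * r[i] for i in range(len(b)))
            for j in range(len(x))]

def projected_gradient(A, b, x0, lr=0.1, steps=200):
    """Gradient step on the data term, then projection of the iterate."""
    x = x0[:]
    for _ in range(steps):
        g = grad_data_term(x, A, b)
        x = project([xi - lr * gi for xi, gi in zip(x, g)])
    return x

A = [[1.0, 0.0], [0.0, 1.0]]
b = [2.0, -3.0]          # unconstrained optimum would be (2, -3)
x = projected_gradient(A, b, [0.0, 0.0])
print([round(v, 3) for v in x])  # → [2.0, 0.0]
```

    The negative component of the unconstrained optimum is clipped by the projection, so the iterates converge to the closest admissible solution instead.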

  2. Will Incremental Hemodialysis Preserve Residual Function and Improve Patient Survival?

    Science.gov (United States)

    Davenport, Andrew

    2015-01-01

    The progressive loss of residual renal function in peritoneal dialysis patients is associated with increased mortality. It has been suggested that incremental dialysis may help preserve residual renal function and improve patient survival. Residual renal function depends upon both patient-related and dialysis-associated factors. Maintaining patients in an over-hydrated state may be associated with better preservation of residual renal function, but any benefit comes with a significant risk of cardiovascular consequences. Notably, only observational studies have reported an association between dialysis patient survival and residual renal function; causality has not been established. The tenuous connections between residual renal function and outcomes and between incremental hemodialysis and residual renal function should temper our enthusiasm for interventions in this area. PMID:25385441

  3. Table incremental slow injection CE-CT in lung cancer

    International Nuclear Information System (INIS)

    Yoshida, Shoji; Maeda, Tomoho; Morita, Masaru

    1988-01-01

    The purpose of this study is to evaluate tumor enhancement in lung cancer using a table-incremental study with slow injection of contrast media. Early serial images of 8 slices were obtained during the slow injection (1.5 ml/sec) of contrast media. Following the early images, delayed images of the same 8 slices were taken 2 minutes later. Characteristic enhancement patterns of the primary cancer and metastatic mediastinal lymph nodes were recognized in this study. Enhancement of the primary lesion was classified into 4 patterns: irregular geographic, heterogeneous, homogeneous and rim-enhanced. In mediastinal metastatic lymphadenopathy, three enhancement patterns were obtained: heterogeneous, homogeneous and ring-enhanced. Some enhancement patterns were characteristic of the histopathological findings of the lung cancer. Using this incremental slow-injection CE-CT, precise information about the relationship between the lung cancer and adjacent mediastinal structures was obtained, and distinct staining patterns of the tumor and mediastinal lymph nodes were recognized. (author)

  4. Final Safety Analysis Report (FSAR) for Building 332, Increment III

    Energy Technology Data Exchange (ETDEWEB)

    Odell, B. N.; Toy, Jr., A. J.

    1977-08-31

    This Final Safety Analysis Report (FSAR) supplements the Preliminary Safety Analysis Report (PSAR), dated January 18, 1974, for Building 332, Increment III of the Plutonium Materials Engineering Facility located at the Lawrence Livermore Laboratory (LLL). The FSAR, in conjunction with the PSAR, shows that the completed increment provides facilities for safely conducting the operations as described. These documents satisfy the requirements of ERDA Manual Appendix 6101, Annex C, dated April 8, 1971. The format and content of this FSAR comply with the basic requirements of the letter of request from ERDA San to LLL, dated March 10, 1972. Included as appendices in support of the FSAR are the Building 332 Operational Safety Procedure and the LLL Disaster Control Plan.

  5. Decoupled Simulation Method For Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Sebastiani, G.; Brosius, A.; Tekkaya, A. E.; Homberg, W.; Kleiner, M.

    2007-01-01

    Within the scope of this article, a decoupling algorithm to reduce computing time in finite element analyses of incremental forming processes is investigated. Based on the given position of the small forming zone, the presented algorithm separates a finite element model into an elastic and an elasto-plastic deformation zone. By including the elastic response of the structure by means of model simplifications, the costly iteration in the elasto-plastic zone can be restricted to the small forming zone and to a few supporting elements in order to reduce computation time. Since the forming zone moves along the specimen, an update of both the forming zone with its elastic boundary and the supporting structure is needed after several increments. The paper discusses the algorithmic implementation of the approach and introduces several strategies to implement the denoted elastic boundary condition at the boundary of the plastic forming zone.
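    The zone bookkeeping can be sketched as follows (a toy partition of nodes by distance to the tool, with hypothetical radii; the actual algorithm operates on the FE mesh and stiffness structure, not raw coordinates):

```python
def split_zones(nodes, tool_pos, r_plastic, r_support):
    """Partition node indices around the current tool position:
    nodes within r_plastic get the full elasto-plastic iteration,
    a surrounding band up to r_support carries the elastic response,
    and the rest is frozen until the next zone update."""
    plastic, support, frozen = [], [], []
    for i, (x, y) in enumerate(nodes):
        d = ((x - tool_pos[0])**2 + (y - tool_pos[1])**2) ** 0.5
        if d <= r_plastic:
            plastic.append(i)
        elif d <= r_support:
            support.append(i)
        else:
            frozen.append(i)
    return plastic, support, frozen

# 1D row of nodes along the tool path, tool currently at x = 2
nodes = [(float(i), 0.0) for i in range(10)]
plastic, support, frozen = split_zones(nodes, (2.0, 0.0), 1.0, 3.0)
print(plastic, support)  # → [1, 2, 3] [0, 4, 5]
```

    As the tool advances, the partition is recomputed every few increments, which corresponds to the periodic zone update described above.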

  6. Incremental exposure facilitates adaptation to sensory rearrangement. [vestibular stimulation patterns

    Science.gov (United States)

    Lackner, J. R.; Lobovits, D. N.

    1978-01-01

    Visual-target pointing experiments were performed on 24 adult volunteers in order to compare the relative effectiveness of incremental (stepwise) and single-step exposure conditions on adaptation to visual rearrangement. The differences between the preexposure and postexposure scores served as an index of the adaptation elicited during the exposure period. It is found that both single-step and stepwise exposure to visual rearrangement elicit compensatory changes in sensorimotor coordination. However, stepwise exposure, when compared to single-step exposure in terms of the average magnitude of visual displacement over the exposure period, clearly enhances the rate of adaptation. It seems possible that the enhancement of adaptation to unusual patterns of sensory stimulation produced by incremental exposure reflects a general principle of sensorimotor function.

  7. Thermomechanical simulations and experimental validation for high speed incremental forming

    Science.gov (United States)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high-speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to reproduce the material behavior during the high-speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material presents the same performance as in conventional-speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high-speed process, substantially confirming the experimental evidence.

  8. Automobile sheet metal part production with incremental sheet forming

    Directory of Open Access Journals (Sweden)

    İsmail DURGUN

    2016-02-01

    Full Text Available Nowadays, the effect of global warming is increasing drastically, leading to increased interest in energy efficiency and sustainable production methods. As a result of these adverse conditions, national and international project platforms, OEMs (Original Equipment Manufacturers) and SMEs (Small and Mid-size Manufacturers) perform many studies or improve existing methodologies within the scope of advanced manufacturing techniques. In this study, the advanced and sustainable production method "Incremental Sheet Metal Forming (ISF)" was used for the sheet metal forming process. A vehicle fender was manufactured with and without a die by using different toolpath strategies and die sets. At the end of the study, the results were investigated with respect to the methods and parameters used. Keywords: Incremental sheet metal forming, Metal forming

  9. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error exponentially converges to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. The results are illustrated by application to a simple model of an underwater vehicle.
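    A scalar toy version of such an observer (the gain below is hand-picked for illustration; the paper's design obtains the gain by solving LMIs): since sin is globally Lipschitz, it satisfies an incremental quadratic bound, and a sufficiently large output-injection gain makes the estimation error contract to zero:

```python
import math

def simulate(x0, xhat0, L=5.0, dt=0.001, steps=5000):
    """Euler simulation of the plant x' = -2x + sin(x) with measured
    output y = x, and the observer xhat' = -2*xhat + sin(xhat) + L*(y - xhat).
    Because |sin(a) - sin(b)| <= |a - b|, the error obeys
    |e|' <= -(2 + L - 1)|e|, so it decays exponentially for L > -1."""
    x, xhat = x0, xhat0
    for _ in range(steps):
        y = x                                   # measured output
        x += dt * (-2 * x + math.sin(x))
        xhat += dt * (-2 * xhat + math.sin(xhat) + L * (y - xhat))
    return x, xhat

x, xhat = simulate(x0=1.0, xhat0=-1.0)
print(abs(x - xhat) < 1e-3)  # → True
```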

  10. Fault-tolerant incremental diagnosis with limited historical data

    OpenAIRE

    Gillblad, Daniel; Holst, Anders; Steinert, Rebecca

    2006-01-01

    In many diagnosis situations it is desirable to perform a classification in an iterative and interactive manner. All relevant information may not be available initially and must be acquired manually or at a cost. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. Here, we will describe how to create an incremental classification system based on a statistical model that is trained from empirical data...

  11. Diagnosis of small hepatocellular carcinoma by incremental dynamic CT

    International Nuclear Information System (INIS)

    Uchida, Masafumi; Kumabe, Tsutomu; Edamitsu, Osamu

    1993-01-01

    Thirty cases of pathologically confirmed small hepatocellular carcinoma were examined by Incremental Dynamic CT (ICT). ICT scanned the whole liver with single-breath-hold technique; therefore, effective early contrast enhancement could be obtained for diagnosis. Among the 30 tumors, 26 were detected. The detection rate was 87%. A high detection rate was obtained in tumors more than 20 mm in diameter. Twenty-two of 26 tumors could be diagnosed correctly. ICT examination was useful for detection of small hepatocellular carcinoma. (author)

  12. A parallel ILP algorithm that incorporates incremental batch learning

    OpenAIRE

    Nuno Fonseca; Rui Camacho; Fernado Silva

    2003-01-01

    In this paper we tackle the problems of efficiency and scalability faced by Inductive Logic Programming (ILP) systems. We propose the use of parallelism to improve efficiency and the use of incremental batch learning to address the scalability problem. We describe a novel parallel algorithm that incorporates into ILP the method of incremental batch learning. The theoretical complexity of the algorithm indicates that a linear speedup can be achieved.

  13. Public Key Infrastructure Increment 2 (PKI Inc 2)

    Science.gov (United States)

    2016-03-01

    across the Global Information Grid (GIG) and at rest. Using authoritative data, obtained via face-to-face identity proofing, PKI creates a credential ... operating on a network by provision of assured PKI-based credentials for any device on that network. PKI Increment One made significant ... provide assured/secure validation of revocation of an electronic/digital credential. 2. DoD PKI shall support assured revocation status requests of

  14. The intermetallic ThRh5: microstructure and enthalpy increments

    International Nuclear Information System (INIS)

    Banerjee, Aparna; Joshi, A.R.; Kaity, Santu; Mishra, R.; Roy, S.B.

    2013-01-01

    Actinide intermetallics are one of the most interesting and important series of compounds. The thermochemistry of these compounds plays a significant role in understanding the nature of bonding in alloys and nuclear fuel performance. In the present paper we report the synthesis and characterization of the thorium-based intermetallic compound ThRh5(s) by SEM/EDX techniques. The mechanical properties and the enthalpy increment as a function of temperature of the alloy have been measured. (author)

  15. Systematic Luby Transform codes as incremental redundancy scheme

    CSIR Research Space (South Africa)

    Grobler, TL

    2011-09-01

    Full Text Available Systematic Luby Transform Codes as Incremental Redundancy Scheme. T. L. Grobler, E. R. Ackermann, J. C. Olivier and A. J. van Zyl. Department of Electrical, Electronic and Computer Engineering, University of Pretoria, Pretoria 0002, South Africa. Email: trienkog...@gmail.com, etienne.ackermann@ieee.org. Defence, Peace, Safety and Security (DPSS), Council for Scientific and Industrial Research (CSIR), Pretoria 0001, South Africa. Department of Mathematics and Applied Mathematics, University of Pretoria, Pretoria 0002, South Africa.

  16. Efficient Incremental Garbage Collection for Workstation/Server Database Systems

    OpenAIRE

    Amsaleg , Laurent; Gruber , Olivier; Franklin , Michael

    1994-01-01

    Projet RODIN; We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a workstation/server environment. The algorithm is incremental and runs concurrently with client transactions; however, it does not hold any locks on data and does not require callbacks to clients. It is fault tolerant, but performs very little logging. The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation ...

  17. Health level seven interoperability strategy: big data, incrementally structured.

    Science.gov (United States)

    Dolin, R H; Rogers, B; Jaffe, C

    2015-01-01

    Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby data structured incrementally gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral work flows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data and ultimately a greater flow of structured data vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.

  18. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek

    2017-10-17

    Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCentral is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCentral scales to large graphs. We demonstrate with real datasets that the serial implementation of iCentral is up to 3.7 times faster than existing serial methods. Our parallel implementation, which scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.
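    For context, the baseline that incremental methods like iCentral improve on is a full recomputation of betweenness centrality (Brandes' algorithm) over the whole graph after every edge update. A self-contained sketch of that baseline for unweighted, undirected graphs (this is the standard algorithm, not iCentral itself):

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted betweenness centrality
    (undirected graph; each node pair counted once)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting numbers of shortest paths (sigma)
        dist = {s: 0}
        sigma = {v: 0.0 for v in adj}
        sigma[s] = 1.0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # back-propagate path dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}  # undirected: halve

# Path graph 0-1-2-3-4: the middle node lies on the most shortest paths.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(betweenness(adj)[2])  # → 4.0
```

    After an edge insertion, this baseline rebuilds everything from scratch; iCentral avoids exactly that by restricting the recomputation to the biconnected component affected by the update.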

  19. Conservation of wildlife populations: factoring in incremental disturbance.

    Science.gov (United States)

    Stewart, Abbie; Komers, Petr E

    2017-06-01

    Progressive anthropogenic disturbance can alter ecosystem organization potentially causing shifts from one stable state to another. This potential for ecosystem shifts must be considered when establishing targets and objectives for conservation. We ask whether a predator-prey system response to incremental anthropogenic disturbance might shift along a disturbance gradient and, if it does, whether any disturbance thresholds are evident for this system. Development of linear corridors in forested areas increases wolf predation effectiveness, while high density of development provides a safe-haven for their prey. If wolves limit moose population growth, then wolves and moose should respond inversely to land cover disturbance. Using general linear model analysis, we test how the rate of change in moose (Alces alces) density and wolf (Canis lupus) harvest density are influenced by the rate of change in land cover and proportion of land cover disturbed within a 300,000 km² area in the boreal forest of Alberta, Canada. Using logistic regression, we test how the direction of change in moose density is influenced by measures of land cover change. In response to incremental land cover disturbance, moose declines occurred where 43% of land cover was disturbed and wolf density declined. Wolves and moose appeared to respond inversely to incremental disturbance with the balance between moose decline and wolf increase shifting at about 43% of land cover disturbed. Conservation decisions require quantification of disturbance rates and their relationships to predator-prey systems because ecosystem responses to anthropogenic disturbance shift across disturbance gradients.

  20. Context-dependent incremental timing cells in the primate hippocampus.

    Science.gov (United States)

    Sakon, John J; Naya, Yuji; Wirth, Sylvia; Suzuki, Wendy A

    2014-12-23

    We examined timing-related signals in primate hippocampal cells as animals performed an object-place (OP) associative learning task. We found hippocampal cells with firing rates that incrementally increased or decreased across the memory delay interval of the task, which we refer to as incremental timing cells (ITCs). Three distinct categories of ITCs were identified. Agnostic ITCs did not distinguish between different trial types. The remaining two categories of cells signaled time and trial context together: One category of cells tracked time depending on the behavioral action required for a correct response (i.e., early vs. late release), whereas the other category of cells tracked time only for those trials cued with a specific OP combination. The context-sensitive ITCs were observed more often during sessions where behavioral learning was observed and exhibited reduced incremental firing on incorrect trials. Thus, single primate hippocampal cells signal information about trial timing, which can be linked with trial type/context in a learning-dependent manner.

  1. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    Science.gov (United States)

    2010-06-01

    Sampling (MIS)? • Technique of combining many increments of soil from a number of points within an exposure area • Developed by Enviro Stat (Trademarked ... Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous ... into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  2. Novel Formulations for Antimicrobial Peptides

    Directory of Open Access Journals (Sweden)

    Ana Maria Carmona-Ribeiro

    2014-10-01

    Full Text Available Peptides in general hold much promise as a major ingredient in novel supramolecular assemblies. They may become essential in vaccine design, antimicrobial chemotherapy, cancer immunotherapy, food preservation, organ transplants, design of novel materials for dentistry, formulations against diabetes and other important strategic applications. This review discusses how novel formulations may improve the therapeutic index of antimicrobial peptides by protecting their activity and improving their bioavailability. The diversity of novel formulations using lipids, liposomes, nanoparticles, polymers, micelles, etc., within the limits of nanotechnology may also provide novel applications going beyond antimicrobial chemotherapy.

  4. An Incremental Physically-Based Model of P91 Steel Flow Behaviour for the Numerical Analysis of Hot-Working Processes

    Directory of Open Access Journals (Sweden)

    Alberto Murillo-Marrodán

    2018-04-01

    Full Text Available This paper is aimed at modelling the flow behaviour of P91 steel at high temperature and over a wide range of strain rates, for constant and also variable strain-rate deformation conditions such as those in real hot-working processes. For this purpose, an incremental physically-based model is proposed for the P91 steel flow behaviour. This formulation considers the effects of dynamic recovery (DRV) and dynamic recrystallization (DRX) on the mechanical properties of the material, using only the flow stress, strain rate and temperature as state variables, and not the accumulated strain. Therefore, it reproduces accurately the flow stress, work hardening and work softening not only under constant but also under transient deformation conditions. To accomplish this study, the material is characterised experimentally by means of uniaxial compression tests conducted at a temperature range of 900–1270 °C and at strain rates in the range of 0.005–10 s−1. Finally, the proposed model is implemented in commercial finite element (FE) software to provide evidence of the performance of the proposed formulation. The experimental compression tests are simulated using the novel model and the well-known Hansel–Spittel formulation. In conclusion, the incremental physically-based model shows accurate results when work softening is present, especially under variable strain-rate deformation conditions. Hence, the present formulation is appropriate for the simulation of hot-working processes typically conducted at industrial scale.
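    The advantage of an incremental law whose state is only (flow stress, strain rate, temperature) is that strain-rate jumps are handled naturally. The paper's actual constitutive equations are not reproduced here; the following is a generic Voce-type incremental sketch with that same property, and all constants are illustrative, not fitted to P91:

```python
import math

def sigma_sat(strain_rate, T, A=500.0, m=0.13, Q=40000.0):
    """Hypothetical saturation stress (MPa): rises with strain rate,
    falls with temperature (normalized to 500 MPa at 1/s and 1273 K)."""
    return (A * strain_rate**m
            * math.exp(Q / (8.314 * T)) / math.exp(Q / (8.314 * 1273.0)))

def flow_stress(path, theta0=2000.0):
    """Integrate d(sigma)/d(eps) = theta0 * (1 - sigma / sigma_sat):
    the state is only the current stress, strain rate and temperature,
    never the accumulated strain, so rate jumps need no special handling."""
    sigma = 0.0
    for strain_rate, T, d_eps in path:   # piecewise-constant segments
        steps = 200                      # explicit Euler sub-steps
        for _ in range(steps):
            sigma += (d_eps / steps) * theta0 * (1 - sigma / sigma_sat(strain_rate, T))
    return sigma

# Deformation at 1273 K with a strain-rate jump from 0.01/s to 1/s at eps = 0.3:
# the stress re-hardens toward the new, higher saturation level.
sigma = flow_stress([(0.01, 1273.0, 0.3), (1.0, 1273.0, 0.3)])
```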

  5. Evaluation of incremental reactivity and its uncertainty in Southern California.

    Science.gov (United States)

    Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G

    2003-04-15

    The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
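    The uncertainty-propagation step can be sketched as follows (illustrative numbers, not the study's parameters): Latin hypercube sampling draws exactly one value from each of n equal-probability strata of an uncertain rate parameter, and the coefficient of variation (COV) of the propagated outputs is computed from the resulting sample:

```python
import random

def latin_hypercube(n, low, high, rng):
    """One-dimensional Latin hypercube sample: one uniform draw from
    each of n equal-width strata, returned in shuffled order."""
    width = (high - low) / n
    samples = [low + (i + rng.random()) * width for i in range(n)]
    rng.shuffle(samples)
    return samples

rng = random.Random(0)
n = 1000
# Hypothetical +/-30% uncertainty on an oxidation rate constant k
ks = latin_hypercube(n, 0.7, 1.3, rng)
# Toy reactivity response: ozone increment taken proportional to k
reactivities = [0.5 * k for k in ks]
mean = sum(reactivities) / n
var = sum((r - mean) ** 2 for r in reactivities) / (n - 1)
cov = var ** 0.5 / mean   # coefficient of variation
print(round(mean, 3))  # → 0.5
```

    In the study itself the response is a photochemical trajectory model rather than a linear map, so the output COV reflects the full chemistry, not just the input spread.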

  6. Product Quality Modelling Based on Incremental Support Vector Machine

    International Nuclear Information System (INIS)

    Wang, J; Zhang, W; Qin, B; Shi, W

    2012-01-01

    Incremental support vector machine (ISVM) learning is a method developed in recent years on the foundations of statistical learning theory. It is suited to problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) conditions; then the distance from each margin vector to the final decision hyperplane is calculated to evaluate its importance, and margin vectors whose distance exceeds a specified value are removed; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise, but also preserve the important ones. The MISVM has been evaluated on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method improves prediction accuracy and training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
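The pruning step of the described MISVM (measuring each margin vector's distance to the decision hyperplane and dropping those beyond a cutoff) can be sketched for a linear SVM as follows; the data and threshold are illustrative:

```python
import math

def filter_margin_vectors(vectors, w, b, max_distance):
    """Keep margin vectors whose distance to the hyperplane
    w.x + b = 0 stays within max_distance; drop the rest as likely
    noise or redundant points, per the MISVM pruning idea."""
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    kept = []
    for x in vectors:
        dist = abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / norm_w
        if dist <= max_distance:
            kept.append(x)
    return kept

# Points far from the hyperplane x0 + x1 = 1 are pruned; the point
# sitting on it survives.
kept = filter_margin_vectors([[0.5, 0.5], [3.0, 3.0]], [1.0, 1.0], -1.0, 1.0)
```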

  7. Incremental Innovation and Competitive Pressure in the Presence of Discrete Innovation

    DEFF Research Database (Denmark)

    Ghosh, Arghya; Kato, Takao; Morita, Hodaka

    2017-01-01

    Technical progress consists of improvements made upon the existing technology (incremental innovation) and innovative activities aiming at entirely new technology (discrete innovation). Incremental innovation is often of limited relevance to the new technology invented by successful discrete...

  8. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    NARCIS (Netherlands)

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2006-01-01

    Many product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  9. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    NARCIS (Netherlands)

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2007-01-01

    Many product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  10. Effective properties of linear viscoelastic heterogeneous media: Internal variables formulation and extension to ageing behaviours

    International Nuclear Information System (INIS)

    Ricaud, J.M.; Masson, R.; Masson, R.

    2009-01-01

    The Laplace-Carson transform classically used for homogenization of linear viscoelastic heterogeneous media yields integral formulations of effective behaviours. These are far less convenient than internal variables formulations with respect to computational aspects, as well as to theoretical extensions to closely related problems such as ageing viscoelasticity. Noticing that the collocation method is usually adopted to invert the Laplace-Carson transforms, we first remark that this approximation is equivalent to an internal variables formulation, which is exact in some specific situations. This result is illustrated for a two-phase composite with phases obeying a compressible Maxwellian behaviour. Next, an incremental formulation allows the previous general framework to be extended, at each time step, to ageing viscoelasticity. Finally, with the help of a creep test of a porous viscoelastic matrix reinforced with elastic inclusions, it is shown that the method yields accurate predictions (compared with reference results provided by periodic cell finite element computations). (authors)
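For reference, the Laplace-Carson transform of a function f is, in its standard form (quoted from the general literature, not reproduced from the paper):

```latex
f^{*}(p) \;=\; p \int_{0}^{+\infty} f(t)\, e^{-pt}\, \mathrm{d}t
```

In the usual collocation setting, the inverse is approximated by a finite Dirichlet series, f(t) ≈ Σᵢ fᵢ (1 − exp(−t/τᵢ)), and each exponential term evolves exactly like an internal variable with relaxation time τᵢ; this is the equivalence the abstract refers to.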

  11. An Approach to Incremental Design of Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to incremental design of distributed embedded systems for hard real-time applications. We start from an already existing system running a set of applications, and the design problem is to implement new functionality on this system. Thus, we propose mapping...... strategies of functionality so that the already running functionality is not disturbed and there is a good chance that, later, new functionality can easily be mapped on the resulting system. The mapping and scheduling for hard real-time embedded systems are considered in the context of a realistic communication...

  12. From incremental to fundamental substitution in chemical alternatives assessment

    DEFF Research Database (Denmark)

    Fantke, Peter; Weber, Roland; Scheringer, Martin

    2015-01-01

    to similarity in chemical structures and, hence, similar hazard profiles between phase-out and substitute chemicals, leading to a rather incremental than fundamental substitution. A hampered phase-out process, the lack of implementing Green Chemistry principles in chemicals design, and lack of Sustainable...... an integrated approach of all stakeholders involved toward more fundamental and function-based substitution by greener and more sustainable alternatives. Our recommendations finally constitute a starting point for identifying further research needs and for improving current alternatives assessment practice....

  13. Automating the Incremental Evolution of Controllers for Physical Robots

    DEFF Research Database (Denmark)

    Faina, Andres; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    the evolution of digital objects.…” The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration...... of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range...

  14. Transferring the Incremental Capacity Analysis to Lithium-Sulfur Batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Kalogiannis, Theodoros; Purkayastha, Rajlakshmi

    2017-01-01

    In order to investigate the battery degradation and to estimate their health, various techniques can be applied. One of them, which is widely used for Lithium-ion batteries, is the incremental capacity analysis (ICA). In this work, we apply the ICA to Lithium-Sulfur batteries, which differ in many...... aspects from Lithium-ion batteries and possess unique behavior. One of the challenges of applying the ICA to Lithium-Sulfur batteries is the representation of the IC curves, as their voltage profiles are often non-monotonic, resulting in more complex IC curves. The ICA is at first applied to charge...

  15. Failure mechanisms in single-point incremental forming of metals

    DEFF Research Database (Denmark)

    Silva, Maria B.; Nielsen, Peter Søe; Bay, Niels

    2011-01-01

    The last years saw the development of two different views on how failure develops in single-point incremental forming (SPIF). Today, researchers are split between those claiming that fracture is always preceded by necking and those considering that fracture occurs with suppression of necking. Each...... on formability limits and development of fracture. The unified view conciliates the aforementioned different explanations on the role of necking in fracture and is consistent with the experimental observations that have been reported in the past years. The work is performed on aluminium AA1050-H111 sheets...

  16. Single Point Incremental Forming using a Dummy Sheet

    DEFF Research Database (Denmark)

    Skjødt, Martin; Silva, Beatriz; Bay, Niels

    2007-01-01

    A new version of single point incremental forming (SPIF) is presented. This version includes a dummy sheet on top of the work piece, thus forming two sheets instead of one. The dummy sheet, which is in contact with the rotating tool pin, is discarded after forming. The new set-up influences....... The possible influence of friction between the two sheets is furthermore investigated. The results show that the use of a dummy sheet reduces wear of the work piece to almost zero, but also causes a decrease in formability. Bulging of the planar sides of the pyramid is reduced and surface roughness...

  17. Neonates need tailored drug formulations.

    Science.gov (United States)

    Allegaert, Karel

    2013-02-08

    Drugs are powerful tools for improving outcomes in neonates. Despite this fact, and in contrast to tailored perfusion equipment, incubators or ventilators for neonates, we still commonly use drug formulations initially developed for adults. We would like to make the point that drug formulations given to neonates need to be tailored for this age group. Besides the obvious need to search for active compounds that take the pathophysiology of the newborn into account, this includes the dosage and formulation. The dosage or concentration should facilitate the administration of low amounts and be flexible, since clearance is lower in neonates, with additional extensive between-individual variability. Formulations need to be tailored for dosage variability in the low ranges and also to the clinical characteristics of neonates. A specific focus of interest during neonatal drug development is therefore the need to quantify and limit excipient exposure based on the available knowledge of their safety or toxicity. Until such tailored vials and formulations become available, compounding practices for drug formulations in neonates should be evaluated to guarantee correct dosing, product stability and safety.

  18. Microcanonical formulation of quantum field theories

    International Nuclear Information System (INIS)

    Iwazaki, A.

    1984-03-01

    A microcanonical formulation of Euclidean quantum field theories is presented. In the formulation, correlation functions are given by a microcanonical ensemble average of fields. Furthermore, the perturbative equivalence of the formulation and the standard functional formulation is proved, and the equipartition law is derived in our formulation. (author)

  19. Switch-mode High Voltage Drivers for Dielectric Electro Active Polymer (DEAP) Incremental Actuators

    DEFF Research Database (Denmark)

    Thummala, Prasanth

    voltage DC-DC converters for driving the DEAP based incremental actuators. The DEAP incremental actuator technology has the potential to be used in various industries, e.g., automotive, space and medicine. The DEAP incremental actuator consists of three electrically isolated and mechanically connected...

  20. 21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Short increment sensitivity index (SISI) adapter. 874.1070 Section 874.1070 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... increment sensitivity index (SISI) adapter. (a) Identification. A short increment sensitivity index (SISI...

  1. Incremental Learning of Skill Collections based on Intrinsic Motivation

    Directory of Open Access Journals (Sweden)

    Jan Hendrik Metzen

    2013-07-01

    Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on him and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period.

  2. Optimal Output of Distributed Generation Based On Complex Power Increment

    Science.gov (United States)

    Wu, D.; Bao, H.

    2017-12-01

    In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy sources, represented by wind power and photovoltaic generation, have been widely adopted. New energy generation is connected to the distribution network in the form of distributed generation and consumed by local load. However, as the scale of distributed generation connected to the network increases, optimizing its power output becomes more and more important and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment, whose essence is the analysis of the power grid under steady-state power flow. After analyzing the results, we can obtain the complex scaling function equation between the power supplies; the coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of the variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  3. Phase retrieval via incremental truncated amplitude flow algorithm

    Science.gov (United States)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF algorithm and the TAF algorithm, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, and improves performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it can obtain a higher success rate and faster convergence speed than other algorithms. In particular, for noiseless random Gaussian signals, ITAF can accurately recover any real-valued signal from magnitude measurements whose number is about 2.5 times the signal length, close to the theoretical limit (about 2 times the signal length). And it usually converges to the optimal solution within 20 iterations, which is much less than state-of-the-art algorithms.

  4. Adaptive Incremental Genetic Algorithm for Task Scheduling in Cloud Environments

    Directory of Open Access Journals (Sweden)

    Kairong Duan

    2018-05-01

    Cloud computing is a new commercial model that enables customers to acquire large amounts of virtual resources on demand. Resources including hardware and software can be delivered as services and measured by specific usage of storage, processing, bandwidth, etc. In Cloud computing, task scheduling is the process of mapping cloud tasks to Virtual Machines (VMs). When binding tasks to VMs, the scheduling strategy has an important influence on the efficiency of the datacenter and the related energy consumption. Although many traditional scheduling algorithms have been applied on various platforms, they may not work efficiently due to the large number of user requests, the variety of computation resources and the complexity of the Cloud environment. In this paper, we tackle the task scheduling problem, which aims to minimize makespan, with a Genetic Algorithm (GA). We propose an incremental GA with adaptive probabilities of crossover and mutation. The mutation and crossover rates change over the generations and also vary between individuals. Large numbers of tasks are randomly generated to simulate various scales of the task scheduling problem in a Cloud environment. Based on the instance types of Amazon EC2, we implemented virtual machines with different computing capacities on CloudSim. We compared the performance of the adaptive incremental GA with that of Standard GA, Min-Min, Max-Min, Simulated Annealing and the Artificial Bee Colony Algorithm in finding the optimal scheme. Experimental results show that the proposed algorithm can achieve feasible solutions with acceptable makespan in less computation time.
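The adaptive-rate idea (crossover and mutation probabilities that change with the generation and vary between individuals) can be sketched as follows; the specific scaling rule and constants are illustrative, not those of the paper:

```python
def adaptive_rates(fitness, f_avg, f_max, generation, n_generations,
                   pc_high=0.9, pc_low=0.5, pm_high=0.1, pm_low=0.01):
    """Per-individual crossover/mutation probabilities: below-average
    individuals get the high rates, fitter ones get proportionally
    lower rates, and everything decays as generations progress.
    The constants and the linear decay rule are illustrative."""
    decay = 1.0 - generation / n_generations
    if f_max == f_avg or fitness <= f_avg:
        pc, pm = pc_high, pm_high
    else:
        scale = (f_max - fitness) / (f_max - f_avg)
        pc = pc_low + (pc_high - pc_low) * scale
        pm = pm_low + (pm_high - pm_low) * scale
    return pc * decay, pm * decay
```

The effect is that good individuals are disturbed less (protecting promising schedules) while the whole population explores less aggressively as the search converges.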

  5. Incremental support vector machines for fast reliable image recognition

    International Nuclear Information System (INIS)

    Makili, L.; Vega, J.; Dormido-Canto, S.

    2013-01-01

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system based on conformal predictors, using Support Vector Machines (SVM) as the underlying algorithm, has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such a conformal-predictor-based classifier is a computationally intensive task, since it implies training several SVM models to classify a single example, and performing this training from scratch takes a significant amount of time. In order to improve classification time efficiency, an approach to incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to that obtained with standard SVM as the underlying algorithm, and there is a significant improvement in time efficiency.

  6. An Incremental Weighted Least Squares Approach to Surface Light Fields

    Science.gov (United States)

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
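The incremental aspect (updating a least squares fit as images arrive rather than in batch) can be sketched for a single 1-D linear node as follows; the actual method fits low-degree polynomials of exitant radiance per WLS node center, so this is only the bookkeeping idea:

```python
class IncrementalWLS:
    """Fit y ~ a*x + b by accumulating weighted normal-equation sums
    as samples arrive, so no batch of images needs to be stored."""
    def __init__(self):
        self.sw = self.swx = self.swy = self.swxx = self.swxy = 0.0

    def add(self, x, y, w=1.0):
        self.sw += w
        self.swx += w * x
        self.swy += w * y
        self.swxx += w * x * x
        self.swxy += w * x * y

    def solve(self):
        det = self.sw * self.swxx - self.swx ** 2
        a = (self.sw * self.swxy - self.swx * self.swy) / det
        b = (self.swxx * self.swy - self.swx * self.swxy) / det
        return a, b

model = IncrementalWLS()
for x, y in [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]:  # samples of y = 2x + 1
    model.add(x, y)
a, b = model.solve()
```

Because the sums are sufficient statistics, the model can be queried (for previewing) after any prefix of the data, which is the "human-in-the-loop" property the abstract describes.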

  7. STS-102 Expedition 2 Increment and Science Briefing

    Science.gov (United States)

    2001-01-01

    Merri Sanchez, Expedition 2 Increment Manager, John Uri, Increment Scientist, and Lybrease Woodard, Lead Payload Operations Director, give an overview of the upcoming activities and objectives of the Expedition 2's (E2's) mission in this prelaunch press conference. Ms. Sanchez describes the crew rotation of Expedition 1 to E2, the timeline E2 will follow during their stay on the International Space Station (ISS), and the various flights going to the ISS and what each will bring to ISS. Mr. Uri gives details on the on-board experiments that will take place on the ISS in the fields of microgravity research, commercial, earth, life, and space sciences (such as radiation characterization, H-reflex, colloids formation and interaction, protein crystal growth, plant growth, fermentation in microgravity, etc.). He also gives details on the scientific facilities to be used (laboratory racks and equipment such as the human torso facsimile or 'phantom torso'). Ms. Woodard gives an overview of Marshall Flight Center's role in the mission. Computerized simulations show the installation of the Space Station Remote Manipulator System (SSRMS) onto the ISS and the installation of the airlock using SSRMS. Live footage shows the interior of the ISS, including crew living quarters, the Progress Module, and the Destiny Laboratory. The three then answer questions from the press.

  8. Efficient incremental relaying for packet transmission over fading channels

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-07-01

    In this paper, we propose a novel relaying scheme for packet transmission over fading channels which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying (EIR) scheme with both amplify-and-forward and decode-and-forward relaying. We compare the performance of the EIR scheme with the threshold-based incremental relaying (TIR) scheme. It is shown that the efficiency of the TIR scheme is better for lower values of the threshold; for higher threshold values, however, it is outperformed by the EIR scheme. In addition, three new threshold-based adaptive EIR schemes are devised to further improve the efficiency of the EIR scheme. We calculate the packet error rate and the efficiency of these new schemes to provide analytical insight. © 2014 IEEE.
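The core decision rule (spend the relay slot only when the destination feeds back a direct-transmission failure) can be sketched with a toy fading simulation; the exponential SNR model, additive combining and all parameters are illustrative, not the paper's analysis:

```python
import random

def simulate_eir(n_packets, snr_mean, decode_snr, rng=None):
    """Count channel uses when the relay slot is spent only after the
    destination feeds back a direct-link decode failure. SNRs are
    drawn exponential (Rayleigh fading power); combining is additive."""
    rng = rng or random.Random(1)
    uses = delivered = 0
    for _ in range(n_packets):
        uses += 1                                  # direct transmission
        direct = rng.expovariate(1.0 / snr_mean)
        if direct >= decode_snr:
            delivered += 1
            continue
        uses += 1                                  # relay retransmission
        if direct + rng.expovariate(1.0 / snr_mean) >= decode_snr:
            delivered += 1
    return uses, delivered

uses, delivered = simulate_eir(1000, snr_mean=5.0, decode_snr=3.0)
# Fewer than 2*n_packets channel uses: the relay slot is saved
# whenever the direct link succeeds on its own.
```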

  9. Incremental learning of concept drift in nonstationary environments.

    Science.gov (United States)

    Elwell, Ryan; Polikar, Robi

    2011-10-01

    We introduce an ensemble-of-classifiers approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from environments that experience constant or variable rates of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, as do other members of the Learn++ family of algorithms, that is, without requiring access to previously seen data. Learn++.NSE trains one new classifier for each batch of data it receives, and combines these classifiers using dynamically weighted majority voting. The novelty of the approach is in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and react accordingly to, changes in the underlying data distributions, as well as to a possible recurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn++.NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release the data used in this paper. © 2011 IEEE.
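The combination step (dynamically weighted majority voting based on time-adjusted accuracy) can be sketched as follows; the sigmoid recency weighting loosely follows the Learn++.NSE recipe, and the constants are illustrative:

```python
import math

def weighted_majority_vote(predictions, error_history, a=0.5, b=5):
    """Each classifier votes with weight log((1-e)/e), where e is its
    error averaged over past batches with a sigmoid emphasis on recent
    ones; errors are clipped to (0, 0.5] so weights stay non-negative."""
    T = len(error_history[0])
    omega = [1.0 / (1.0 + math.exp(-a * (t - (T - b)))) for t in range(T)]
    norm = sum(omega)
    votes = {}
    for pred, errs in zip(predictions, error_history):
        e = sum(w * er for w, er in zip(omega, errs)) / norm
        e = min(max(e, 1e-6), 0.5)
        votes[pred] = votes.get(pred, 0.0) + math.log((1.0 - e) / e)
    return max(votes, key=votes.get)

# The classifier whose error has been falling on recent batches ("B")
# outweighs the one whose error has been rising ("A").
winner = weighted_majority_vote(["A", "B"],
                                [[0.10, 0.40, 0.45], [0.45, 0.20, 0.05]])
```

Weighting recent batches more heavily is what lets the ensemble track drift while still re-activating old classifiers if an earlier distribution recurs.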

  10. Incremental learning of skill collections based on intrinsic motivation

    Science.gov (United States)

    Metzen, Jan H.; Kirchner, Frank

    2013-01-01

    Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on him and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period. PMID:23898265

  11. Incremental support vector machines for fast reliable image recognition

    Energy Technology Data Exchange (ETDEWEB)

    Makili, L., E-mail: makili_le@yahoo.com [Instituto Superior Politécnico da Universidade Katyavala Bwila, Benguela (Angola); Vega, J. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Dormido-Canto, S. [Dpto. Informática y Automática – UNED, Madrid (Spain)

    2013-10-15

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system based on conformal predictors, using Support Vector Machines (SVM) as the underlying algorithm, has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such a conformal-predictor-based classifier is a computationally intensive task, since it implies training several SVM models to classify a single example, and performing this training from scratch takes a significant amount of time. In order to improve classification time efficiency, an approach to incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to that obtained with standard SVM as the underlying algorithm, and there is a significant improvement in time efficiency.

  12. Incremental first pass technique to measure left ventricular ejection fraction

    International Nuclear Information System (INIS)

    Kocak, R.; Gulliford, P.; Hoggard, C.; Critchley, M.

    1980-01-01

    An incremental first pass technique was devised to assess the acute effects of any drug on left ventricular ejection fraction (LVEF), with or without a physiological stress. In particular, the effects of the vasodilator isosorbide dinitrate on LVEF before and after exercise were studied in 11 patients who had suffered cardiac failure. This was achieved by recording the passage of 99mTc-pertechnetate through the heart at each stage of the study using a gamma camera computer system. Consistent values for four consecutive first pass measurements without exercise or drug in normal subjects illustrated the reproducibility of the technique. There was no significant difference between LVEF values obtained at rest and exercise before or after oral isosorbide dinitrate, with the exception of one patient with gross mitral regurgitation. The advantages of the incremental first pass technique are that the patient need not be in sinus rhythm, the effects of physiological intervention may be studied, and tests may be repeated at various intervals during long-term follow-up of patients. A disadvantage of the method is the limitation on the number of sequential measurements which can be carried out, due to the amount of radioactivity injected. (U.K.)
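For orientation, in count-based first pass radionuclide studies the ejection fraction is conventionally obtained from background-corrected end-diastolic (ED) and end-systolic (ES) counts (standard formula from the general literature, not quoted from this paper):

```latex
\mathrm{LVEF} \;=\; \frac{C_{\mathrm{ED}} - C_{\mathrm{ES}}}{C_{\mathrm{ED}} - C_{\mathrm{bg}}}
```

where C_bg is the background count estimate; using counts rather than geometric volumes is what frees the technique from assumptions about ventricular shape.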

  13. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    KAUST Repository

    Zhang, Chuxu

    2017-08-02

    Predicting the fast-rising young researchers (the Academic Rising Stars) of the future provides useful guidance to the research community, e.g., offering competitive candidates to universities for young faculty hiring, as they are expected to have successful academic careers. In this work, given a set of young researchers who have recently published their first first-author paper, we solve the problem of how to effectively predict the top k% of researchers who achieve the highest citation increment in Δt years. We explore a series of factors that can drive an author to be fast-rising and design a novel pairwise citation increment ranking (PCIR) method that leverages those factors to predict the academic rising stars. Experimental results on the large ArnetMiner dataset with over 1.7 million authors demonstrate the effectiveness of PCIR. Specifically, it outperforms all given benchmark methods, with over 8% average improvement. Further analysis demonstrates that temporal features are the best indicators for rising star prediction, while venue features are less relevant.

  14. Tactile friction of topical formulations.

    Science.gov (United States)

    Skedung, L; Buraczewska-Norin, I; Dawood, N; Rutland, M W; Ringstad, L

    2016-02-01

    Tactile perception is essential for all types of topical formulations (cosmetic, pharmaceutical, medical device), and the possibility of predicting the sensorial response with instrumental methods instead of sensory testing would save time and cost in early-stage product development. Here, we report on an instrumental evaluation method using tactile friction measurements to estimate perceptual attributes of topical formulations. Friction was measured between an index finger and an artificial skin substrate after application of formulations, using a force sensor. Both model formulations of liquid crystalline phase structures with significantly different tactile properties, and commercial pharmaceutical moisturizing creams that are more similar in tactile feel, were investigated. Friction coefficients were calculated as the ratio of the friction force to the applied load. The structures of the model formulations and phase transitions as a result of water evaporation were identified using optical microscopy. The friction device could distinguish friction coefficients between the phase structures, as well as between the commercial creams after spreading and absorption into the substrate. In addition, phase transitions resulting in alterations in the feel of the formulations could be detected. A correlation was established between skin hydration and friction coefficient, where hydrated skin gave rise to higher friction. A link between skin smoothening and finger friction was also established for the commercial moisturizing creams, although further investigations are needed to analyse this and correlations with other sensorial attributes in more detail. The present investigation shows that tactile friction measurements have potential as an alternative or complement in the evaluation of perception of topical formulations. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
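The friction coefficient used in the study is simply the ratio of the friction force to the applied load; averaged over a sliding stroke it can be computed as follows (the force values are illustrative):

```python
def friction_coefficient(friction_forces, normal_loads):
    """Mean of the per-sample ratio mu = F_f / N over a sliding stroke."""
    ratios = [f / n for f, n in zip(friction_forces, normal_loads)]
    return sum(ratios) / len(ratios)

# Illustrative friction-force trace (N) at a constant 0.6 N applied load.
mu = friction_coefficient([0.30, 0.36, 0.33], [0.6, 0.6, 0.6])
```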

  15. Numerical integration of some new unified plasticity-creep formulations

    International Nuclear Information System (INIS)

    Krieg, R.D.

    1977-01-01

The usual constitutive description of metals at high temperature treats creep as a phenomenon which must be added to time-independent phenomena. A new approach is now being advocated by some, principally metallurgists: they treat the inelastic strain as a unified quantity, incapable of being separated into time-dependent and time-independent parts. This paper examines the behavior of the differential formulations reported in the literature together with one proposed by the author. These formulations are capable of representing primary and secondary creep, cyclic hardening to a stable cyclic stress-strain loop, conventional plasticity behavior, and a Bauschinger effect which may be creep induced and discernible at either fast or slow loading rates. The new unified formulations lead to very non-linear systems of equations which are very well behaved in some regions and very stiff in others, where the word 'stiff' is used in the mathematical sense. Simple conventional methods of integrating incremental constitutive equations are observed to be totally inadequate. A method of numerically integrating the equations is presented. (Auth.)
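The stiffness problem the abstract refers to can be illustrated on a model relaxation equation. This is not the author's integration scheme, only a sketch of why implicit methods are used for stiff constitutive equations:

```python
# Illustrative sketch (not the paper's method): backward Euler integrates a
# stiff linear relaxation  dy/dt = -k*(y - y_eq)  stably even when k*h >> 1,
# a regime where explicit Euler would oscillate or diverge.

def backward_euler_relaxation(y0, y_eq, k, h, steps):
    y = y0
    for _ in range(steps):
        # Implicit update: y_new = y + h * (-k * (y_new - y_eq)),
        # solved in closed form for this linear model.
        y = (y + h * k * y_eq) / (1.0 + h * k)
    return y

# k*h = 100, far beyond the explicit stability limit; the solution still
# relaxes smoothly toward y_eq = 1.
y_final = backward_euler_relaxation(y0=0.0, y_eq=1.0, k=1000.0, h=0.1, steps=50)
```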

  16. Incremental Optimization of Hub and Spoke Network for the Spokes’ Numbers and Flow

    Directory of Open Access Journals (Sweden)

    Yanfeng Wang

    2015-01-01

Full Text Available The hub-and-spoke network problem is solved as part of a strategic decision-making process which may have a profound effect on the future of enterprises. Given an existing network structure, the number of spokes and the flow change over time because of different sources of uncertainty. Hence, the incremental optimization of the hub-and-spoke network problem is considered in this paper, in which policy makers adopt a series of strategies to cope with the change, such as setting up new hubs, adjusting the capacity level of original hubs, or closing some original hubs. The objective is to minimize the total cost, which includes the setup costs for the new hubs, the closure and adjustment costs for the original hubs, and the flow routing costs. Two mixed-integer linear programming formulations are proposed and analyzed for this problem. A computational analysis is presented using China Deppon Logistics as an example, and we analyze the changes in the solutions driven by the number of spokes and the flow. The tests also allow an analysis of the effect of parameter variation on the network.

  17. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

    International Nuclear Information System (INIS)

    Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.

    2007-01-01

In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes typically used in sheet metal forming simulation, is evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping method (IVR), which was implemented in the in-house code DD3TRIM. The IVR method rests on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state variable quantities at the corresponding Gauss point. Hence, in a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of each donor Gauss volume that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is performed incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables using the finite element shape functions or moving least squares interpolants. The performance of the three remapping strategies is addressed with two tests. The first remapping test was taken from the literature: it consists in remapping a rotating symmetrical mesh successively, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular-element-shape target mesh from a given regular-element-shape donor mesh and then proceeding with the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low and with a stable evolution along the number of remapping procedures when compared with the
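The weighted-average transfer at the heart of the IVR description can be sketched as follows (a minimal illustration of the averaging rule only; the incremental intersection-volume computation itself is the hard part and is not shown):

```python
# Hedged sketch of the IVR weighted-average transfer: the value assigned to
# a target Gauss volume is the average of donor values, weighted by the
# volume of each donor Gauss volume lying inside the target volume.

def remap_value(donor_values, intersect_volumes):
    """donor_values[i]: state variable carried by donor Gauss volume i.
    intersect_volumes[i]: volume of donor i inside the target Gauss volume."""
    total = sum(intersect_volumes)
    if total == 0.0:
        raise ValueError("target volume intersects no donor volume")
    return sum(v * w for v, w in zip(donor_values, intersect_volumes)) / total

# Two donors contribute 3:1 by intersected volume:
val = remap_value([10.0, 2.0], [0.75, 0.25])  # -> 8.0
```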

  18. Decontamination formulation with sorbent additive

    Science.gov (United States)

    Tucker, Mark D.; Comstock, Robert H.

    2007-10-16

    A decontamination formulation and method of making that neutralizes the adverse health effects of both chemical and biological compounds, especially chemical warfare (CW) and biological warfare (BW) agents, and toxic industrial chemicals. The formulation provides solubilizing compounds that serve to effectively render the chemical and biological compounds, particularly CW and BW compounds, susceptible to attack, and at least one reactive compound that serves to attack (and detoxify or kill) the compound. The formulation includes at least one solubilizing agent, a reactive compound, a bleaching activator, a sorbent additive, and water. The highly adsorbent, water-soluble sorbent additive (e.g., sorbitol or mannitol) is used to "dry out" one or more liquid ingredients, such as the liquid bleaching activator (e.g., propylene glycol diacetate or glycerol diacetate) and convert the activator into a dry, free-flowing powder that has an extended shelf life, and is more convenient to handle and mix in the field.

  19. Rapid Prototyping by Single Point Incremental Forming of Sheet Metal

    DEFF Research Database (Denmark)

    Skjødt, Martin

    2008-01-01

    . The process is incremental forming since plastic deformation takes place in a small local zone underneath the forming tool, i.e. the sheet is formed as a summation of the movement of the local plastic zone. The process is slow and therefore only suited for prototypes or small batch production. On the other...... in the plastic zone. Using these it is demonstrated that the growth rate of accumulated damage in SPIF is small compared to conventional sheet forming processes. This combined with an explanation why necking is suppressed is a new theory stating that SPIF is limited by fracture and not necking. The theory...... SPIF. A multi stage strategy is presented which allows forming of a cup with vertical sides in about half of the depth. It is demonstrated that this results in strain paths which are far from straight, but strains are still limited by a straight fracture line in the principal strain space. The multi...

  20. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

    Time series datasets are a canonical form of high velocity Big Data, and often generated by pervasive sensors, such as found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.

  1. Incremental and developmental perspectives for general-purpose learning systems

    Directory of Open Access Journals (Sweden)

    Fernando Martínez-Plumed

    2017-02-01

    Full Text Available The stupefying success of Artificial Intelligence (AI) for specific problems, from recommender systems to self-driving cars, has not yet been matched by similar progress in general AI systems capable of coping with a variety of different problems. This dissertation deals with the long-standing problem of creating more general AI systems, through the analysis of their development and the evaluation of their cognitive abilities. It presents a declarative general-purpose learning system and a developmental and lifelong approach for knowledge acquisition, consolidation and forgetting. It also analyses the use of more ability-oriented evaluation techniques for AI evaluation and provides further insight for the understanding of the concepts of development and incremental learning in AI systems.

  2. Improving process performance in Incremental Sheet Forming (ISF)

    International Nuclear Information System (INIS)

    Ambrogio, G.; Filice, L.; Manco, G. L.

    2011-01-01

    Incremental Sheet Forming (ISF) is a relatively new process in which a sheet clamped along its borders is progressively deformed by a hemispherical tool. The tool motion is CNC controlled and the path is designed using a CAD-CAM approach, with the aim of reproducing the final shape contour, as in surface milling. The absence of a dedicated setup and the related high flexibility are the main strengths of the process and the reason why several researchers have focused their attention on ISF. On the other hand, the slowness of the process is its most relevant drawback, limiting wider industrial application. In this paper, a first attempt to overcome this limitation is presented, considering a substantial speed increase with respect to the values currently used.

  3. Curcumin nanodisks: formulation and characterization

    OpenAIRE

    Ghosh, Mistuni; Singh, Amareshwar T. K.; Xu, Wenwei; Sulchek, Todd; Gordon, Leo I.; Ryan, Robert O.

    2010-01-01

    Nanodisks (ND) are nanoscale, disk-shaped phospholipid bilayers whose edge is stabilized by apolipoproteins. In the present study, ND were formulated with the bioactive polyphenol, curcumin, at a 6:1 phospholipid:curcumin molar ratio. Atomic force microscopy revealed that curcumin-ND are particles with diameters

  4. Covariant Formulation of Hooke's Law.

    Science.gov (United States)

    Gron, O.

    1981-01-01

    Introducing a four-vector strain and a four-force stress, Hooke's law is written as a four-vector equation. This formulation is shown to clarify seemingly paradoxical results in connection with uniformly accelerated motion, and rotational motion with angular acceleration. (Author/JN)

  5. Hamiltonian formulation of the supermembrane

    International Nuclear Information System (INIS)

    Bergshoeff, E.; Sezgin, E.; Tanii, Y.

    1987-06-01

    The Hamiltonian formulation of the supermembrane theory in eleven dimensions is given. The covariant split of the first and second class constraints is exhibited, and their Dirac brackets are computed. Gauge conditions are imposed in such a way that the reparametrizations of the membrane with divergence free 2-vectors are unfixed. (author). 10 refs

  6. Distance-independent individual tree diameter-increment model for Thuya [Tetraclinis articulata (Vahl.) Mast.] stands in Tunisia

    Directory of Open Access Journals (Sweden)

    T. Sghaier

    2013-12-01

    Full Text Available Aim of study: The aim of the work was to develop an individual tree diameter-increment model for Thuya (Tetraclinis articulata) in Tunisia. Area of study: The natural Tetraclinis articulata stands at Jbel Lattrech in north-eastern Tunisia. Material and methods: Data came from 200 trees located in 50 sample plots. The diameter at age t and the diameter increment for the last five years, obtained from cores taken at breast height, were measured for each tree. Four difference equations derived from the base functions of Richards, Lundqvist, Hossfeld IV and Weibull were tested using age-independent formulations of the growth functions. Both numerical and graphical analyses were used to evaluate the performance of the candidate models. Main results: Based on the analysis, the age-independent difference equation derived from the Richards base function was selected. Two of the three parameters (growth rate and shape parameter) of the retained model were related to site quality, represented by a Growth Index, stand density, and the basal area in larger trees divided by the diameter of the subject tree, expressing inter-tree competition. Research highlights: The proposed model can be useful for predicting the diameter growth of Tetraclinis articulata in Tunisia when age is not available or for trees growing in uneven-aged stands. Keywords: Age-independent growth model; difference equations; Tetraclinis articulata; Tunisia.
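An age-independent difference form of the Richards function can be sketched as below. This uses the standard algebraic-difference derivation of d(t) = A(1 - e^(-kt))^m; the parameterization and all numbers are assumptions for illustration, not the fitted values from the study:

```python
import math

# Hedged sketch: age-independent (algebraic difference) Richards projection.
# From d1 = A*(1 - exp(-k*t1))**m one recovers exp(-k*t1) without knowing t1,
# then advances it by dt to project the diameter.  Parameters are invented.

def richards_increment(d1, A, k, m, dt):
    """Project diameter d1 (cm) forward by dt years; A is the asymptote."""
    base = 1.0 - (d1 / A) ** (1.0 / m)   # equals exp(-k * t1)
    return A * (1.0 - base * math.exp(-k * dt)) ** m

# A 10 cm tree projected 5 years ahead with illustrative parameters:
d2 = richards_increment(d1=10.0, A=50.0, k=0.05, m=2.0, dt=5.0)
```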

  7. A user-friendly tool for incremental haemodialysis prescription.

    Science.gov (United States)

    Casino, Francesco Gaetano; Basile, Carlo

    2018-01-05

    There is a recently heightened interest in incremental haemodialysis (IHD), the main advantage of which could likely be a better preservation of the residual kidney function of the patients. The implementation of IHD, however, is hindered by many factors, among them, the mathematical complexity of its prescription. The aim of our study was to design a user-friendly tool for IHD prescription, consisting of only a few rows of a common spreadsheet. The keystone of our spreadsheet was the following fundamental concept: the dialysis dose to be prescribed in IHD depends only on the normalized urea clearance provided by the native kidneys (KRUn) of the patient for each frequency of treatment, according to the variable target model recently proposed by Casino and Basile (The variable target model: a paradigm shift in the incremental haemodialysis prescription. Nephrol Dial Transplant 2017; 32: 182-190). The first step was to put in sequence a series of equations in order to calculate, firstly, KRUn and, then, the key parameters to be prescribed for an adequate IHD; the second step was to compare KRUn values obtained with our spreadsheet with KRUn values obtainable with the gold standard Solute-solver (Daugirdas JT et al., Solute-solver: a web-based tool for modeling urea kinetics for a broad range of hemodialysis schedules in multiple patients. Am J Kidney Dis 2009; 54: 798-809) in a sample of 40 incident haemodialysis patients. Our spreadsheet provided excellent results. The differences with Solute-solver were clinically negligible. This was confirmed by the Bland-Altman plot built to analyse the agreement between KRUn values obtained with the two methods: the difference was 0.07 ± 0.05 mL/min/35 L. Our spreadsheet is a user-friendly tool able to provide clinically acceptable results in IHD prescription. Two immediate consequences could derive: (i) a larger dissemination of IHD might occur; and (ii) our spreadsheet could represent a useful tool for an ineludibly
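The Bland-Altman agreement check mentioned at the end of the abstract reduces to the mean and standard deviation of the paired differences. A minimal sketch (the KRUn pairs below are made up for illustration; the study reports 0.07 ± 0.05 mL/min/35 L):

```python
import statistics

# Hedged sketch of the Bland-Altman summary statistics: bias is the mean of
# the paired differences between two methods, spread is their standard
# deviation.  The KRUn-like values below are invented, not the study's data.

def bland_altman(method_a, method_b):
    diffs = [a - b for a, b in zip(method_a, method_b)]
    return statistics.mean(diffs), statistics.stdev(diffs)

bias, sd = bland_altman([2.1, 2.5, 3.0, 2.8], [2.0, 2.45, 2.9, 2.75])
```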

  8. An Incremental Approach to Support Realization of Modularization Benefits

    DEFF Research Database (Denmark)

    Hansen, Poul H. Kyvsgård; Sun, Hongyi

    2010-01-01

    In general, the phenomenon of managing modularization is not well known. The cause-effect relationships between modularization and realized benefits are complex and comprehensive. Though a number of research works have contributed to the study of the phenomenon of efficient and effective...... modularization management it is far from clarified. Recognizing the need for further empirical research, we studied 40 modularity cases. Then we develop a research framework with the purpose of uncovering the current state. Furthermore, we formulate a tentative model aiming at guiding the platform management...

  9. Langevin formulation of quantum dynamics

    International Nuclear Information System (INIS)

    Roncadelli, M.

    1989-03-01

    We first show that nonrelativistic quantum mechanics formulated at imaginary ℏ can formally be viewed as the Fokker-Planck description of a frictionless Brownian motion, which occurs (in general) in an absorbing medium. We next offer a new formulation of quantum mechanics, which is basically the Langevin treatment of this Brownian motion. Explicitly, we derive a noise-average representation for the transition probability W(X'',t''|X',t'), in terms of the solutions to a Langevin equation with a Gaussian white noise. Upon analytic continuation back to real ℏ, W(X'',t''|X',t') becomes the propagator of the original Schroedinger equation. Our approach allows for a straightforward application to quantum dynamical problems of the mathematical techniques of classical stochastic processes. Moreover, computer simulations of quantum mechanical systems can be carried out by using numerical programs based on the Langevin dynamics. (author). 19 refs, 1 tab

  10. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    OpenAIRE

    Vermeulen, Patrick; Bosch, Frans; Volberda, Henk

    2007-01-01

    Many product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation. In this paper, we use an institutional perspective to investigate why established firms in the financial services industry struggle with their complex incremental product innovation efforts. We ar...

  11. Optimization of chlorphenesin emulgel formulation

    OpenAIRE

    Mohamed, Magdy I.

    2004-01-01

    This study was conducted to develop an emulgel formulation of chlorphenesin (CHL) using 2 types of gelling agents: hydroxypropylmethyl cellulose (HPMC) and Carbopol 934. The influence of the type of the gelling agent and the concentration of both the oil phase and emulsifying agent on the drug release from the prepared emulgels was investigated using a 2³ factorial design. The prepared emulgels were evaluated for their physical appearance, rheological behavior, drug release, antifungal activi...

  12. Incremental cost of PACS in a medical intensive care unit

    Science.gov (United States)

    Langlotz, Curtis P.; Cleff, Bridget; Even-Shoshan, Orit; Bozzo, Mary T.; Redfern, Regina O.; Brikman, Inna; Seshadri, Sridhar B.; Horii, Steven C.; Kundel, Harold L.

    1995-05-01

    Our purpose is to determine the incremental costs (or savings) due to the introduction of picture archiving and communication systems (PACS) and computed radiology (CR) in a medical intensive care unit (MICU). Our economic analysis consists of three measurement methods. The first method is an assessment of the direct costs to the radiology department, implemented in a spreadsheet model. The second method consists of a series of brief observational studies to measure potential changes in personnel costs that might not be reflected in administrative claims. The third method (results not reported here) is a multivariate modeling technique which estimates the independent effect of PACS/CR on the cost of care (estimated from administrative claims data), while controlling for clinical case-mix variables. Our direct cost model shows no cost savings to the radiology department after the introduction of PACS in the medical intensive care unit. Savings in film supplies and film library personnel are offset by increases in capital equipment costs and PACS operation personnel. The results of observational studies to date demonstrate significant savings in clinician film-search time, but no significant change in technologist time or lost films. Our model suggests that direct radiology costs will increase after the limited introduction of PACS/CR in the MICU. Our observational studies show a small but significant effect on clinician film-search time from the introduction of PACS/CR in the MICU, but no significant effect on other variables. The projected costs of a hospital-wide PACS are currently under study.

  13. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Science.gov (United States)

    Gan, Wensheng; Zhang, Binbin

    2015-01-01

    Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the purchased items from a customer may have various factors, such as profit or quantity. High-utility mining was designed to solve the limitations of association-rule mining by considering both the quantity and profit measures. Most algorithms of high-utility mining are designed to handle a static database. Fewer studies handle dynamic high-utility mining with transaction insertion, which requires database rescans and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on the utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038
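The utility measure that distinguishes high-utility mining from frequency-based association rules can be sketched in a few lines (item names, profits and quantities below are invented for illustration):

```python
# Hedged sketch of the standard utility measure in high-utility mining:
# an item's utility in a transaction is quantity * unit profit (internal *
# external utility), and a transaction's utility is the sum over its items.
# All item names and numbers here are invented.

profit = {"apple": 2, "milk": 5, "bread": 1}   # external utility per unit
transaction = {"apple": 3, "bread": 2}         # item -> purchased quantity

def transaction_utility(txn, unit_profit):
    return sum(qty * unit_profit[item] for item, qty in txn.items())

tu = transaction_utility(transaction, profit)  # 3*2 + 2*1 = 8
```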

  14. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab

    2017-08-22

    Frequent subgraph mining is a core graph operation used in many domains, such as graph data management and knowledge exploration, bioinformatics and security. Most existing techniques target static graphs. However, modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible, due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach for continuous frequent subgraph mining problem on a single large evolving graph. We adapt the notion of “fringe” to the graph context, that is the set of subgraphs on the border between frequent and infrequent subgraphs. IncGM+ maintains fringe subgraphs and exploits them to prune the search space. To boost the efficiency, we propose an efficient index structure to maintain selected embeddings with minimal memory overhead. These embeddings are utilized to avoid redundant expensive subgraph isomorphism operations. Moreover, the proposed system supports batch updates. Using large real-world graphs, we experimentally verify that IncGM+ outperforms existing methods by up to three orders of magnitude, scales to much larger graphs and consumes less memory.

  15. A novel instrument for generating angular increments of 1 nanoradian

    Science.gov (United States)

    Alcock, Simon G.; Bugnar, Alex; Nistea, Ioana; Sawhney, Kawal; Scott, Stewart; Hillman, Michael; Grindrod, Jamie; Johnson, Iain

    2015-12-01

    Accurate generation of small angles is of vital importance for calibrating angle-based metrology instruments used in a broad spectrum of industries including mechatronics, nano-positioning, and optic fabrication. We present a novel, piezo-driven, flexure device capable of reliably generating micro- and nanoradian angles. Unlike many such instruments, Diamond Light Source's nano-angle generator (Diamond-NANGO) does not rely on two separate actuators or rotation stages to provide coarse and fine motion. Instead, a single Physik Instrumente NEXLINE "PiezoWalk" actuator provides millimetres of travel with nanometre resolution. A cartwheel flexure efficiently converts displacement from the linear actuator into rotary motion with minimal parasitic errors. Rotation of the flexure is directly measured via a Magnescale "Laserscale" angle encoder. Closed-loop operation of the PiezoWalk actuator, using high-speed feedback from the angle encoder, ensures that the Diamond-NANGO's output drifts by only ˜0.3 nrad rms over ˜30 min. We show that the Diamond-NANGO can reliably move with unprecedented 1 nrad (˜57 ndeg) angular increments over a range of >7000 μrad. An autocollimator, interferometer, and capacitive displacement sensor are used to independently confirm the Diamond-NANGO's performance by simultaneously measuring the rotation of a reflective cube.

  16. An incremental anomaly detection model for virtual machines

    Science.gov (United States)

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    Self-Organizing Map (SOM) algorithm as an unsupervised learning method has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which gives the algorithm low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform. PMID:29117245
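A weighted Euclidean distance of the kind the abstract names can be sketched as follows (the weight values are illustrative; the paper's actual weighting scheme is not specified here):

```python
import math

# Hedged sketch of a Weighted Euclidean Distance (WED): each feature
# dimension gets a weight so that noisy or less informative metrics
# contribute less to the distance between an input vector and a SOM
# neuron's weight vector.  The weights below are invented.

def weighted_euclidean(x, y, w):
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

# Second dimension down-weighted to a quarter of the first:
d = weighted_euclidean([1.0, 2.0], [4.0, 6.0], [1.0, 0.25])  # sqrt(9 + 4)
```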

  17. Validation of daily increments periodicity in otoliths of spotted gar

    Science.gov (United States)

    Snow, Richard A.; Long, James M.; Frenette, Bryan D.

    2017-01-01

    Accurate age and growth information is essential for successful management of fish populations and for understanding early life history. We validated daily increment deposition, including the timing of first ring formation, for spotted gar (Lepisosteus oculatus) through 127 days post hatch. Fry were produced from hatchery-spawned specimens, and up to 10 individuals per week were sacrificed and their otoliths (sagitta, lapillus, and asteriscus) removed for daily age estimation. Daily age estimates for all three otolith pairs were significantly related to known age. The strongest relationships existed for measurements from the sagitta (r2 = 0.98) and the lapillus (r2 = 0.99), with the asteriscus (r2 = 0.95) the lowest. All age prediction models resulted in a slope near unity, indicating that ring deposition occurred approximately daily. Initiation of ring formation varied among otolith types, with deposition beginning 3, 7, and 9 days post hatch for the sagitta, lapillus, and asteriscus, respectively. Results of this study suggest that otoliths are useful for estimating the daily age of juvenile spotted gar; these data may be used to back-calculate hatch dates, estimate early growth rates, and correlate with environmental factors that influence spawning in wild populations. This early life history information will be valuable in better understanding the ecology of this species.

  18. An incremental anomaly detection model for virtual machines.

    Directory of Open Access Journals (Sweden)

    Hancui Zhang

    Full Text Available Self-Organizing Map (SOM) algorithm as an unsupervised learning method has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which gives the algorithm low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform.

  19. VOLATILITAS RELEVANSI NILAI INCREMENTAL DARI LABA DAN NILAI BUKU

    Directory of Open Access Journals (Sweden)

    B. Linggar Yekti Nugraheni

    2012-03-01

    Full Text Available This research investigates the value relevance volatility pattern of earnings and book value of equity. It is predicted that the volatility of value relevance is associated with the time horizon. Using the Ohlson model, the hypotheses developed are: (1) earnings and book value of equity are positively associated with the stock price; (2) there is a decreasing or increasing pattern of incremental value relevance. The sample used in this research is manufacturing companies listed on the ISE (Indonesia Stock Exchange) during the 1998-2007 period. The results show that earnings and book value of equity are positively related to the stock price, and that the value relevance of earnings decreased while that of book value of equity increased during the observation period.

  20. Automated Dimension Determination for NMF-based Incremental Collaborative Filtering

    Directory of Open Access Journals (Sweden)

    Xiwei Wang

    2015-12-01

    Full Text Available The nonnegative matrix factorization (NMF) based collaborative filtering techniques have achieved great success in product recommendations. It is well known that in NMF, the dimensions of the factor matrices have to be determined in advance. Moreover, data is growing fast; thus in some cases, the dimensions need to be changed to reduce the approximation error. The recommender systems should be capable of updating new data in a timely manner without sacrificing prediction accuracy. In this paper, we propose an NMF-based data update approach with automated dimension determination for collaborative filtering purposes. The approach can determine the dimensions of the factor matrices and update them automatically. It exploits a nearest-neighborhood-based clustering algorithm to cluster users and items according to their auxiliary information, and uses the clusters as constraints in NMF. The dimensions of the factor matrices are associated with the cluster quantities. When new data becomes available, the incremental clustering algorithm determines whether to increase the number of clusters or merge the existing clusters. Experiments on three different datasets (MovieLens, Sushi, and LibimSeTi) were conducted to examine the proposed approach. The results show that our approach can update the data quickly and provide encouraging prediction accuracy.

  1. Incremental Dynamic Analysis of Koyna Dam under Repeated Ground Motions

    Science.gov (United States)

    Zainab Nik Azizan, Nik; Majid, Taksiah A.; Nazri, Fadzli Mohamed; Maity, Damodar; Abdullah, Junaidah

    2018-03-01

    This paper presents an incremental dynamic analysis (IDA) of a concrete gravity dam under single and repeated earthquake loadings to identify the limit state of the dam. Seven ground motions with horizontal and vertical components, based on real repeated earthquakes worldwide, were considered as seismic input in the nonlinear dynamic analysis. All ground motions were converted to response spectra and scaled to the developed elastic response spectrum in order to match the characteristics of the ground motion to the soil type. The scaling depended on the fundamental period T1 of the dam. The Koyna dam was selected as a case study, assuming a rigid foundation and no sliding. IDA curves for the Koyna dam were developed for single and repeated ground motions, and the performance level of the dam was identified. The IDA curves for repeated ground motions are stiffer than those for single ground motions. The ultimate-state displacement for a single event is 45.59 mm, decreasing to 39.33 mm under repeated events, a reduction of about 14%. This shows that the performance level of the dam under seismic loading depends on the ground motion pattern.

  2. Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation

    Science.gov (United States)

    Roberts, Seán G.

    2018-01-01

    This paper discusses the maximum robustness approach for studying cases of adaptation in language. We live in an age where we have more data on more languages than ever before, and more data to link it with from other domains. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. However, there is not much discussion of the overall approach to research in this area. There are outstanding questions about how to formalize theories, what the criteria are for directing research and how to integrate results from different methods into a clear assessment of a hypothesis. This paper addresses some of those issues by suggesting an approach which is causal, incremental and robust. It illustrates the approach with reference to a recent claim that dry environments select against the use of precise contrasts in pitch. Study 1 replicates a previous analysis of the link between humidity and lexical tone with an alternative dataset and finds that it is not robust. Study 2 performs an analysis with a continuous measure of tone and finds no significant correlation. Study 3 addresses a more recent analysis of the link between humidity and vowel use and finds that it is robust, though the effect size is small and the robustness of the measurement of vowel use is low. Methodological robustness of the general theory is addressed by suggesting additional approaches including iterated learning, a historical case study, corpus studies, and studying individual speech. PMID:29515487

  3. Numerical Simulation of Incremental Sheet Forming by Simplified Approach

    Science.gov (United States)

    Delamézière, A.; Yu, Y.; Robert, C.; Ayed, L. Ben; Nouari, M.; Batoz, J. L.

    2011-01-01

    The Incremental Sheet Forming (ISF) is a process which can transform a flat metal sheet into a complex 3D part using a hemispherical tool. The final geometry of the product is obtained by the relative movement between this tool and the blank. The main advantage of the process is that the cost of the tool is very low compared to deep drawing with rigid tools. The main disadvantage is the very low velocity of the tool and thus the large amount of time needed to form the part. Classical contact algorithms give good agreement with experimental results, but are time consuming. A Simplified Approach for the contact management between the tool and the blank in ISF is presented here. The general principle of this approach is to impose the displacement of the nodes in contact with the tool at a given tool position. On a benchmark part, the CPU time of the present Simplified Approach is significantly reduced compared with a classical simulation performed with Abaqus implicit.

  4. Distribution of incremental static stress caused by earthquakes

    Directory of Open Access Journals (Sweden)

    Y. Y. Kagan

    1994-01-01

    Full Text Available Theoretical calculations, simulations and measurements of rotation of earthquake focal mechanisms suggest that the stress in earthquake focal zones follows the Cauchy distribution, which is one of the stable probability distributions (with the value of the exponent α equal to 1). We review the properties of the stable distributions and show that the Cauchy distribution is expected to approximate the stress caused by earthquakes occurring over geologically long intervals of a fault zone development. However, the stress caused by recent earthquakes recorded in instrumental catalogues should follow symmetric stable distributions with the value of α significantly less than one. This is explained by a fractal distribution of earthquake hypocentres: the dimension of a hypocentre set, δ, is close to zero for short-term earthquake catalogues and asymptotically approaches 2¼ for long time intervals. We use the Harvard catalogue of seismic moment tensor solutions to investigate the distribution of incremental static stress caused by earthquakes. The stress measured in the focal zone of each event is approximated by stable distributions. In agreement with theoretical considerations, the exponent value of the distribution approaches zero as the time span of an earthquake catalogue (ΔT) decreases. For large stress values α increases. We surmise that this is caused by the δ increase for small inter-earthquake distances due to location errors.
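The defining property of a stable law with α = 1 that this analysis relies on can be illustrated numerically. The NumPy sketch below (with made-up sample sizes, not the catalogue computations) shows that averaging many standard Cauchy draws does not narrow the distribution, unlike the Gaussian case where the spread shrinks by √n.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 100, 20_000

# Means of n i.i.d. standard Cauchy draws, versus single draws.
means = rng.standard_cauchy((reps, n)).mean(axis=1)
single = rng.standard_cauchy(reps)

def iqr(x):
    """Interquartile range, a spread measure that exists even when the
    mean and variance are undefined, as for the Cauchy law."""
    q1, q3 = np.percentile(x, [25, 75])
    return q3 - q1

# The standard Cauchy has quartiles at -1 and +1, so its IQR is 2.
# Because alpha = 1 is stable, the mean of n draws has the same IQR;
# a Gaussian sample mean would instead shrink it by sqrt(n) = 10.
iqr_mean, iqr_single = iqr(means), iqr(single)
```

This heavy-tailed, non-averaging behaviour is why stress contributions summed over many earthquakes can remain Cauchy-distributed rather than converging to a Gaussian.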

  5. Noise masking of S-cone increments and decrements.

    Science.gov (United States)

    Wang, Quanhong; Richters, David P; Eskew, Rhea T

    2014-11-12

    S-cone increment and decrement detection thresholds were measured in the presence of bipolar, dynamic noise masks. Noise chromaticities were the L-, M-, and S-cone directions, as well as L-M, L+M, and achromatic (L+M+S) directions. Noise contrast power was varied to measure threshold Energy versus Noise (EvN) functions. S+ and S- thresholds were similarly, and weakly, raised by achromatic noise. However, S+ thresholds were much more elevated by S, L+M, L-M, L- and M-cone noises than were S- thresholds, even though the noises consisted of two symmetric chromatic polarities of equal contrast power. A linear cone combination model accounts for the overall pattern of masking of a single test polarity well. L and M cones have opposite signs in their effects upon raising S+ and S- thresholds. The results strongly indicate that the psychophysical mechanisms responsible for S+ and S- detection, presumably based on S-ON and S-OFF pathways, are distinct, unipolar mechanisms, and that they have different spatiotemporal sampling characteristics, or contrast gains, or both. © 2014 ARVO.

  6. Business Collaboration in Food Networks: Incremental Solution Development

    Directory of Open Access Journals (Sweden)

    Harald Sundmaeker

    2014-10-01

    Full Text Available The paper presents an approach for incremental solution development based on the usage of the currently developed Internet-based FIspace business collaboration platform. A key element is the clear segmentation of infrastructures that are either internal or external to the collaborating business entity in the food network. On the one hand, the approach enables differentiation between specific centralised as well as decentralised ways of data storage and hosting of IT-based functionalities. The selection of specific data-exchange protocols and data models is facilitated. On the other hand, the supported solution design and subsequent development focus on reusable “software Apps” that can be used on their own and incorporate a clear added value for the business actors. It is outlined how to push the development and introduction of Apps that do not require basic changes to the existing infrastructure. The paper presents an example based on the development of a set of Apps for the exchange of product-quality-related information in food networks, specifically addressing fresh fruits and vegetables. It combines workflow support for data exchange from farm to retail with quality feedback information to facilitate business process improvement. Finally, the latest status of the FIspace platform development is outlined, including key features and potential ways for real users and software developers to use the FIspace platform, which is initiated by science and industry.

  7. Automating the Incremental Evolution of Controllers for Physical Robots.

    Science.gov (United States)

    Faíña, Andrés; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    Evolutionary robotics is challenged with some key problems that must be solved, or at least mitigated extensively, before it can fulfill some of its promises to deliver highly autonomous and adaptive robots. The reality gap and the ability to transfer phenotypes from simulation to reality constitute one such problem. Another lies in the embodiment of the evolutionary processes, which links to the first, but focuses on how evolution can act on real agents and occur independently from simulation, that is, going from being, as Eiben, Kernbach, & Haasdijk [2012, p. 261] put it, "the evolution of things, rather than just the evolution of digital objects.…" The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range of problems amenable to embodied evolution.

  8. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Directory of Open Access Journals (Sweden)

    Jerry Chun-Wei Lin

    2015-01-01

    Full Text Available Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the purchased items from a customer may have various factors, such as profit or quantity. High-utility mining was designed to solve the limitations of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle static databases. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on the utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns.
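The utility-list algorithm itself is not given in the abstract; as a baseline illustration of the measure it optimizes (quantity times unit profit, summed over the transactions containing an itemset), here is a brute-force sketch with hypothetical transactions and profits:

```python
from itertools import combinations

# Hypothetical transactions: item -> purchased quantity.
transactions = [
    {"a": 2, "b": 1},
    {"a": 1, "c": 3},
    {"a": 2, "b": 2, "c": 1},
    {"b": 4},
]
profit = {"a": 3, "b": 1, "c": 2}  # assumed unit profits

def utility(itemset):
    """Total utility of an itemset over all transactions containing it."""
    total = 0
    for t in transactions:
        if all(i in t for i in itemset):
            total += sum(t[i] * profit[i] for i in itemset)
    return total

def high_utility_itemsets(min_util):
    """Exhaustively enumerate itemsets meeting the utility threshold."""
    items = sorted(profit)
    return {combo: utility(combo)
            for r in range(1, len(items) + 1)
            for combo in combinations(items, r)
            if utility(combo) >= min_util}

hui = high_utility_itemsets(min_util=15)
```

An incremental algorithm such as the one proposed maintains utility lists so that inserting new transactions does not require rerunning this exhaustive enumeration over the whole database.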

  9. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparison of the velocity increments between the rigid body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into functions of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  10. Incremental benefits from increasing the production of koi fish Cyprinus carpio var. koi culture

    Directory of Open Access Journals (Sweden)

    Iis Diatin

    2017-07-01

    Full Text Available ABSTRACT Koi fish is one of the species included in the intensification program of ornamental fish production. Production of koi has reached only 82.04% of the national production target, making it a potential area for development. The objective of the study was to assess the additional financial benefit of increasing production through stocking pattern modification. The research was performed as a case study of Pokdakan PBC Fish Farm (PPFF), a koi fish farm in Sukabumi. The financial analysis consisted of business analysis, investment criteria, and sensitivity analysis. Stocking pattern management could increase ornamental fish production and its benefit margin up to 1.5 times. The investment criteria showed an NPV of IDR3,824 million, a net B/C ratio of 4.96, an IRR of 86.0%, and a payback period of 1.7 years. Koi fish farming was sensitive to a decline in survival rate and insensitive to a rise in formulated feed price. Keywords: business analysis, koi, investment criteria, production pattern, sensitivity

  11. Appropriate use of the increment entropy for electrophysiological time series.

    Science.gov (United States)

    Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin

    2018-04-01

    The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q should take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
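To make the roles of m and q concrete, here is a minimal sketch of an increment-entropy style computation based on my reading of the published definition (each increment coded by its sign plus a q-level quantized magnitude, then Shannon entropy over words of m consecutive codes); the exact quantization and normalization in the authors' IncrEn may differ.

```python
import math
from collections import Counter

def increment_entropy(x, m=2, q=2):
    """Sketch of increment entropy: code increments by sign and quantized
    magnitude, then return the normalized entropy of length-m code words."""
    inc = [x[i + 1] - x[i] for i in range(len(x) - 1)]
    std = (sum(v * v for v in inc) / len(inc)) ** 0.5 or 1.0
    def code(v):
        sign = (v > 0) - (v < 0)
        mag = min(q, int(abs(v) * q / std))  # q magnitude levels, capped
        return (sign, mag)
    codes = [code(v) for v in inc]
    words = Counter(tuple(codes[i:i + m]) for i in range(len(codes) - m + 1))
    n = sum(words.values())
    h = -sum((c / n) * math.log(c / n) for c in words.values())
    # Normalize by the maximum word entropy (alphabet of 2(q+1)+1 codes).
    return h / (m * math.log(2 * (q + 1) + 1))

# A linear ramp has a single repeated increment word (entropy 0);
# a chaotic logistic-map series produces many word types.
ramp = list(range(200))
chaotic = [0.1]
for _ in range(400):
    chaotic.append(3.99 * chaotic[-1] * (1 - chaotic[-1]))
```

With this coding, regular signals score near 0 and irregular ones score higher, which is the qualitative behaviour the parameter study above depends on.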

  12. An Incremental Type-2 Meta-Cognitive Extreme Learning Machine.

    Science.gov (United States)

    Pratama, Mahardhika; Zhang, Guangquan; Er, Meng Joo; Anavatti, Sreenatha

    2017-02-01

    Existing extreme learning algorithms have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with the four issues in this paper. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM as a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated in 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable prediction, while sustaining low complexity.

  13. New tablet formulation of tacrolimus with smaller interindividual variability may become a better treatment option than the conventional capsule formulation in organ transplant patients

    Directory of Open Access Journals (Sweden)

    Kim YK

    2017-09-01

    Full Text Available Yu Kyong Kim,1 Anhye Kim,1,2 Shin Jung Park,3 Howard Lee1,4 1Department of Clinical Pharmacology and Therapeutics, Seoul National University College of Medicine and Hospital, Seoul, 2Clinical Trial Center, Ajou University Medical Center, Suwon, 3Research Institute, Chong Kun Dang Pharmaceutical Corp, Yongin, 4Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Suwon, Republic of Korea Abstract: To evaluate the pharmacokinetic (PK) and tolerability profiles of a new tablet formulation of tacrolimus and its interindividual variability (IIV) in the systemic exposure, and to compare them with those of the conventional capsule formulation, a randomized, open-label, two-treatment, two-period, two-sequence, crossover study was performed in 47 healthy males. The capsule or tablet formulation of tacrolimus was orally administered, and serial blood samples were collected up to 96 hours after dosing. Whole-blood tacrolimus concentration was determined using liquid chromatography–tandem mass spectrometry. The maximum whole-blood tacrolimus concentration (Cmax) and the area under the whole-blood tacrolimus concentration–time curve from 0 hour to the last quantifiable concentration (AUClast) were compared between the two formulations. The similarity factor (f2) of the in vitro dissolution profiles was calculated. The geometric mean ratio (90% confidence interval) of tablet to capsule was 0.9680 (0.8873–1.0560) and 1.0322 (0.9359–1.1385) for Cmax and AUClast, respectively. The IIV of Cmax and AUClast of the tablet was smaller than that of the capsule. The f2 values were >50 in all media. Both formulations were well tolerated. Thus, the tablet formulation of tacrolimus has smaller IIV in the systemic exposure than the capsule, while having comparable PK and tolerability profiles, which may render it a better treatment option for organ transplant patients. Keywords: new formulation, incrementally

  14. Perfume formulation: words and chats.

    Science.gov (United States)

    Ellena, Céline

    2008-06-01

    What does it mean to create fragrances with materials from chemistry and/or from nature? How are they used to display their characteristic differences, their own personality? Is it easier to create with synthetic raw materials or with essential oils? This review explains why a perfume formulation corresponds in fact to a conversation, an interplay between synthetic and natural perfumery materials. A synthetic raw material carries a single information, and usually is very linear. Its smell is uniform, clear, and faithful. Natural raw materials, on the contrary, provide a strong, complex and generous image. While a synthetic material can be seen as a single word, a natural one such as rose oil could be compared to chatting: cold, warm, sticky, heavy, transparent, pepper, green, metallic, smooth, watery, fruity... full of information. Yet, if a very small amount of the natural material is used, nothing happens, the fragrance will not change. However, if a large amount is used, the rose oil will swallow up everything else. The fragrance will smell of nothing else except rose! To formulate a perfume is not to create a culinary recipe, with only dosing the ingredients in well-balanced amounts. To formulate rather means to flexibly knit materials together with a lively stitch, meeting or repelling each other, building a pleasant form, which is neither fixed, nor solid, nor rigid. A fragrance has an overall structure, which ranges from a clear sound, made up of stable, unique, and linear items, to a background chat, comfortable and reassuring. But that does, of course, not mean that there is only one way of creating a fragrance!

  15. Formulation of soy oil products

    Directory of Open Access Journals (Sweden)

    Woerfel, John B.

    1995-12-01

    Full Text Available The paper comments on different formulations of soy oil products, such as salad and cooking oils, margarine, shortenings, commercial shortenings, frying shortenings, and fluid shortenings. Hydrogenation and its influence on the final products is also included.


  16. Formulation and Characterization of Sustained Release Floating ...

    African Journals Online (AJOL)

    Purpose: To formulate sustained release gastroretentive microballoons of metformin hydrochloride with the objective of improving its bioavailability. Methods: Microballoons of metformin hydrochloride were formulated by solvent evaporation and diffusion method using varying mixtures of hydroxypropyl methylcellulose ...

  17. Bioequivalence assessment of two formulations of ibuprofen

    KAUST Repository

    Al-Talla, Zeyad; Akrawi, Sabah H; Tolley, Luke T; Sioud, Salim H; Zaater, Mohammed F; Emwas, Abdul-Hamid M

    2011-01-01

    Background: This study assessed the relative bioavailability of two formulations of ibuprofen. The first formulation was Doloraz, produced by Al-Razi Pharmaceutical Company, Amman, Jordan. The second formulation was Brufen, manufactured by Boots

  18. Modern approach to relativity theory (radar formulation)

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1991-01-01

    The main peculiarities of the radar formulation of the relativity theory are presented. This formulation operates with the retarded (light) distances and relativistic or radar length introduced on their basis. 21 refs.; 1 tab

  19. Numerical integration of some new unified plasticity-creep formulations

    International Nuclear Information System (INIS)

    Krieg, R.D.

    1977-01-01

    The unified formulations seem to lead to very non-linear systems of equations which are very well behaved in some regions and very stiff in other regions, where the word 'stiff' is used in the mathematical sense. Simple conventional methods of integrating incremental constitutive equations are observed to be totally inadequate. A method of numerically integrating the equations is presented. Automatic step size determination based on accuracy and stability is a necessary expense. In the region where accuracy is the limiting condition the equations can be integrated directly. A forward Euler predictor with a trapezoidal corrector is used in the paper. In the region where stability is the limiting condition, direct integration methods become inefficient and an implicit integrator which is suited to stiff equations must be used. A backward Euler method is used in the paper. It is implemented with a Picard iteration method in which a Newton method is used to predict inelastic strain rate and speed convergence in a Newton-Raphson manner. This allows an analytic expression for the Jacobian to be used, where a full Newton-Raphson would require a numerical approximation to the Jacobian. The starting procedure for the iteration is an adaptation of time-independent plasticity ideas. Because of the inherent capability of the unified plasticity-creep formulations, it is felt that these theories will become accepted in the metallurgical community. Structural analysts will then be required to incorporate these formulations and must be prepared to face the difficult implementation inherent in these models. This paper is an attempt to shed some light on the difficulties and expenses involved.
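The two regimes described above can be demonstrated on the scalar stiff test equation y' = -ky, used here as an illustrative stand-in for the constitutive equations: the explicit forward-Euler predictor with trapezoidal corrector blows up once the step exceeds its stability limit, while a backward-Euler step solved by Newton iteration remains stable at the same step size.

```python
# y' = -k*y, exact solution exp(-k*t): stiff when h*k is large.
k, h, steps = 50.0, 0.05, 40      # h*k = 2.5 exceeds the explicit limit of 2

def f(y):
    return -k * y

def explicit_heun(y):
    """Forward-Euler predictor followed by a trapezoidal corrector."""
    yp = y + h * f(y)
    return y + 0.5 * h * (f(y) + f(yp))

def backward_euler(y):
    """Implicit step: solve g(z) = z - y - h*f(z) = 0 by Newton iteration.
    For this linear f the analytic Jacobian is dg/dz = 1 + h*k."""
    z = y
    for _ in range(20):
        g = z - y - h * f(z)
        z -= g / (1.0 + h * k)
    return z

ye = yi = 1.0
for _ in range(steps):
    ye = explicit_heun(ye)        # amplification factor 1.625 per step: diverges
    yi = backward_euler(yi)       # amplification factor 1/3.5 per step: decays
```

The exact solution decays essentially to zero over this interval; only the implicit integrator reproduces that, which is why the paper switches to backward Euler in the stability-limited regime.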

  20. Lactate and ammonia concentration in blood and sweat during incremental cycle ergometer exercise

    NARCIS (Netherlands)

    Ament, W; Huizenga, [No Value; Mook, GA; Gips, CH; Verkerke, GJ

    It is known that the concentrations of ammonia and lactate in blood increase during incremental exercise. Sweat also contains lactate and ammonia. The aim of the present study was to investigate the physiological response of lactate and ammonia in plasma and sweat during a stepwise incremental cycle

  1. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    NARCIS (Netherlands)

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is
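The chapter's new recursive algorithm is not reproduced in this summary; for orientation, here is a sketch of the well-known sorted-input incremental construction in the style of Daciuk et al. (all names and structure are my own): suffixes of each previous word are minimized against a register so that equivalent states are shared as words are added.

```python
class State:
    """Automaton state: outgoing transitions plus a finality flag."""
    def __init__(self):
        self.trans = {}
        self.final = False

    def key(self):
        # Children are already registered, so their ids canonically
        # identify their right languages.
        return (self.final,
                tuple(sorted((c, id(s)) for c, s in self.trans.items())))

class Dawg:
    """Incremental minimal acyclic DFA for words inserted in sorted order."""
    def __init__(self):
        self.root = State()
        self.register = {}      # right-language key -> canonical state
        self.unchecked = []     # (parent, char, child) not yet minimized
        self.prev = ""

    def insert(self, word):
        if word <= self.prev:
            raise ValueError("words must be inserted in strictly sorted order")
        cp = 0                  # length of common prefix with previous word
        while cp < len(self.prev) and cp < len(word) and word[cp] == self.prev[cp]:
            cp += 1
        self._minimize(cp)      # minimize the suffix no longer shared
        node = self.unchecked[-1][2] if self.unchecked else self.root
        for ch in word[cp:]:
            nxt = State()
            node.trans[ch] = nxt
            self.unchecked.append((node, ch, nxt))
            node = nxt
        node.final = True
        self.prev = word

    def _minimize(self, downto):
        while len(self.unchecked) > downto:
            parent, ch, child = self.unchecked.pop()
            k = child.key()
            if k in self.register:
                parent.trans[ch] = self.register[k]   # share equivalent state
            else:
                self.register[k] = child

    def finish(self):
        self._minimize(0)

    def lookup(self, word):
        node = self.root
        for ch in word:
            if ch not in node.trans:
                return False
            node = node.trans[ch]
        return node.final

dawg = Dawg()
for w in ["tap", "taps", "top", "tops"]:   # must arrive sorted
    dawg.insert(w)
dawg.finish()
states = len(dawg.register) + 1            # +1 for the root
```

On this four-word set the shared "p"/"ps" suffixes collapse, giving 5 states where a plain trie needs 8, which is the compactness property that makes such automata attractive for spell-checker dictionaries.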

  2. Incremental Beliefs of Ability, Achievement Emotions and Learning of Singapore Students

    Science.gov (United States)

    Luo, Wenshu; Lee, Kerry; Ng, Pak Tee; Ong, Joanne Xiao Wei

    2014-01-01

    This study investigated the relationships of students' incremental beliefs of math ability to their achievement emotions, classroom engagement and math achievement. A sample of 273 secondary students in Singapore were administered measures of incremental beliefs of math ability, math enjoyment, pride, boredom and anxiety, as well as math classroom…

  3. Formulation, Preparation, and Characterization of Polyurethane Foams

    Science.gov (United States)

    Pinto, Moises L.

    2010-01-01

    Preparation of laboratory-scale polyurethane foams is described with formulations that are easy to implement in experiments for undergraduate students. Particular attention is given to formulation aspects that are based on the main chemical reactions occurring in polyurethane production. This allows students to develop alternative formulations to…

  4. Performance Evaluation of Abrasive Grinding Wheel Formulated ...

    African Journals Online (AJOL)

    This paper presents a study on the formulation and manufacture of abrasive grinding wheel using locally formulated silicon carbide abrasive grains. Six local raw material substitutes were identified through pilot study and with the initial mix of the identified materials, a systematic search for an optimal formulation of silicon ...

  5. Optimization of chlorphenesin emulgel formulation.

    Science.gov (United States)

    Mohamed, Magdy I

    2004-10-11

    This study was conducted to develop an emulgel formulation of chlorphenesin (CHL) using 2 types of gelling agents: hydroxypropylmethyl cellulose (HPMC) and Carbopol 934. The influence of the type of the gelling agent and the concentration of both the oil phase and emulsifying agent on the drug release from the prepared emulgels was investigated using a 2³ factorial design. The prepared emulgels were evaluated for their physical appearance, rheological behavior, drug release, antifungal activity, and stability. Commercially available CHL topical powder was used for comparison. All the prepared emulgels showed acceptable physical properties concerning color, homogeneity, consistency, spreadability, and pH value. They also exhibited higher drug release and antifungal activity than the CHL powder. It was found that the emulsifying agent concentration had the most pronounced effect on the drug release from the emulgels, followed by the oil phase concentration and finally the type of the gelling agent. The drug release from all the emulgels was found to follow a diffusion-controlled mechanism. Rheological studies revealed that the CHL emulgels exhibited a shear-thinning behavior with thixotropy. Stability studies showed that the physical appearance, rheological properties, drug release, and antifungal activity of all the prepared emulgels remained unchanged upon storage for 3 months. As a general conclusion, it was suggested that the CHL emulgel formulation prepared with HPMC, with the oil phase concentration at its low level and the emulsifying agent concentration at its high level, was the formula of choice since it showed the highest drug release and antifungal activity.
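A 2³ factorial design estimates each factor's main effect from all eight runs. The sketch below uses made-up release responses (the paper's data are not in the abstract), with factors coded -1/+1: A = gelling agent type, B = oil phase concentration, C = emulsifying agent concentration.

```python
from itertools import product

# All eight runs of a 2^3 design, factors coded -1 (low) / +1 (high).
runs = list(product([-1, 1], repeat=3))
# Hypothetical drug-release responses (%), one per run, in run order.
response = [62, 75, 58, 70, 66, 80, 60, 73]

def main_effect(factor):
    """Average response at the high level minus average at the low level."""
    hi = [y for r, y in zip(runs, response) if r[factor] == 1]
    lo = [y for r, y in zip(runs, response) if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(i) for i, name in enumerate("ABC")}
```

With these hypothetical numbers the emulsifier effect (C) dominates, followed by the oil phase (B) and the gelling agent type (A), mirroring the ranking reported above.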

  6. Formulation development for PREPP concreted waste forms

    International Nuclear Information System (INIS)

    Neilson, R.M. Jr.; Welch, J.M.

    1984-05-01

    Analysis of variance and logistic regression techniques have been used to develop models describing the effects of formulation variables and their interactions on compressive strength, solidification, free-standing water, and workability of hydraulic cement grouts incorporating simulated Process Experimental Pilot Plant (PREPP) wastes. These models provide the basis for specifications of grout formulations to solidify these wastes. The experimental test matrix, formulation preparation, and test methods employed are described. The development of analytical models for formulation behavior and the conclusions drawn regarding appropriate formulation variable ranges are discussed. 13 references, 9 figures, 15 tables

  7. Energy policy formulation for Pakistan

    International Nuclear Information System (INIS)

    Riaz, T.

    1981-01-01

    Pakistan is a low income, low energy consumption country. In view of the close interdependence between economic growth and energy consumption, she will need increasing energy supplies in order to maintain her economic growth. This paper develops an energy sector optimization model for the Pakistan economy, which consists of production models for five energy industries, i.e., oil, gas, coal, electricity (including electricity generated in nuclear power plants) and non-commercial fuels. The model is first used to forecast energy balances for the period 1975-2006. The model is then employed to formulate a long-term comprehensive energy policy for Pakistan. Finally the suggested policy is compared with the current official energy programme. (author)

  8. On Scalar Energy: Mathematical Formulation

    International Nuclear Information System (INIS)

    Hathout, A.M.

    2011-01-01

    A new kind of electromagnetic wave (EMW), which exists only in the vacuum of empty space, is discussed and mathematically formulated in this paper. The mathematical existence of this energy was first proposed in a series of groundbreaking equations by the Scottish mathematician James Clerk Maxwell in the mid-1800's. This energy is called scalar energy. It is characterized by both particle-like and wave-like behavior. The waves of this energy are called longitudinal EMW, to distinguish them from the transverse EM waves we are familiar with in daily life. Tesla's name for this energy is scalar energy or zero point energy. This paper aims to explain in more detail and to verify the scalar EM concept in vacuum.

  9. Do otolith increments allow correct inferences about age and growth of coral reef fishes?

    Science.gov (United States)

    Booth, D. J.

    2014-03-01

    Otolith increment structure is widely used to estimate age and growth of marine fishes. Here, I test the accuracy of long-term otolith increment analysis of the lemon damselfish Pomacentrus moluccensis in describing age and growth characteristics. I compare the number of putative annual otolith increments (as a proxy for actual age) and the widths of these increments (as proxies for somatic growth) with actual tagged fish-length data, based on a 6-year dataset, the longest time course for a coral reef fish. Estimated age from otoliths corresponded closely with actual age in all cases, confirming annual increment formation. However, otolith increment widths were poor proxies for actual growth in length (linear regression r² = 0.44-0.90, n = 6 fish) and were clearly of limited value in estimating annual growth. Up to 60 % of the annual growth variation was missed using otolith increments, suggesting that long-term back-calculations of otolith growth characteristics of reef fish populations should be interpreted with caution.
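    The proxy test above amounts to regressing measured annual growth on increment widths and inspecting the coefficient of determination. A minimal sketch, with invented numbers for one hypothetical fish (not the study's data), is:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)      # least-squares line
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative numbers only: otolith increment widths (mm) versus measured
# annual length growth (mm) for one tagged fish over six years.
widths = [0.42, 0.35, 0.30, 0.24, 0.20, 0.18]
growth = [14.0, 10.5, 9.8, 6.9, 6.1, 4.0]
print(round(r_squared(widths, growth), 2))
```

    An r² well below 1 here means a corresponding fraction of the year-to-year growth variation is not captured by increment widths, which is the abstract's point about back-calculation.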

  10. Dental caries increments and related factors in children with type 1 diabetes mellitus.

    Science.gov (United States)

    Siudikiene, J; Machiulskiene, V; Nyvad, B; Tenovuo, J; Nedzelskiene, I

    2008-01-01

    The aim of this study was to analyse possible associations between caries increments and selected caries determinants in children with type 1 diabetes mellitus and their age- and sex-matched non-diabetic controls, over 2 years. A total of 63 (10-15 years old) diabetic and non-diabetic pairs were examined for dental caries, oral hygiene and salivary factors. Salivary flow rates, buffer effect, concentrations of mutans streptococci, lactobacilli, yeasts, total IgA and IgG, protein, albumin, amylase and glucose were analysed. Means of 2-year decayed/missing/filled surface (DMFS) increments were similar in diabetics and their controls. Over the study period, both unstimulated and stimulated salivary flow rates remained significantly lower in diabetic children compared to controls. No differences were observed in the counts of lactobacilli, mutans streptococci or yeast growth during follow-up, whereas salivary IgA, protein and glucose concentrations were higher in diabetics than in controls throughout the 2-year period. Multivariable linear regression analysis showed that children with higher 2-year DMFS increments were older at baseline and had higher salivary glucose concentrations than children with lower 2-year DMFS increments. Likewise, higher 2-year DMFS increments in diabetics versus controls were associated with greater increments in salivary glucose concentrations in diabetics. Higher increments in active caries lesions in diabetics versus controls were associated with greater increments of dental plaque and greater increments of salivary albumin. Our results suggest that, in addition to dental plaque as a common caries risk factor, diabetes-induced changes in salivary glucose and albumin concentrations are indicative of caries development among diabetics. Copyright 2008 S. Karger AG, Basel.

  11. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    Science.gov (United States)

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

    Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences for the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited a significant accumulation during 2010-2014. The spatiotemporal Kriging technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region, whereas the Pb concentrations exhibited an obvious tendency from the northern part to the central part. Spatial overlay analysis was then used to obtain absolute and relative concentration increments of adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with multiple linear regression was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic; in particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural activity sources. Copyright © 2017
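    The PCA-plus-multiple-linear-regression apportionment step can be illustrated with a small synthetic sketch. Everything here is invented for illustration: the two-source setup, the source profiles, and the noise level are placeholders, not the study's data or loadings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two synthetic pollution "sources" (say, industry and traffic) driving the
# increments of four metals; every number here is invented for illustration.
n = 100
industry = rng.gamma(2.0, 1.0, n)
traffic = rng.gamma(2.0, 0.5, n)
X = np.column_stack([
    1.0 * industry + 0.1 * traffic,   # Cd
    0.8 * industry + 0.3 * traffic,   # Cu
    0.1 * industry + 1.0 * traffic,   # Pb
    0.2 * industry + 0.9 * traffic,   # Zn
]) + rng.normal(0.0, 0.05, (n, 4))

# PCA via SVD of the standardized increment matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :2] * s[:2]                  # leading two component scores

# MLR: regress the total metal increment on the component scores.
y = X.sum(axis=1)
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Apportion the modeled variation between the two retained components.
contrib = np.abs(coef[1:]) * scores.std(axis=0)
shares = 100.0 * contrib / contrib.sum()
print(shares.round(1))
```

    The printed shares play the role of the percentage apportionment between source categories; in the actual study the retained components are interpreted as industry, agriculture, and traffic before the regression step.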

  12. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not merely theoretical: it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimating the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large and medium-sized cities, and for such cities urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution, which poses problems of interpretation when these increments are used to define strategic options in air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve our confidence in the model results.
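    The decomposition described above (city impact = urban increment plus two correction terms) can be made concrete with a small sketch. The concentration values are illustrative placeholders, not SHERPA output.

```python
def city_impact_terms(c_urb, c_rur, c_urb0, c_rur0):
    """Decompose the true city impact into the measurable urban increment
    plus two correction terms (all concentrations in the same units).

    c_urb,  c_rur  : urban / rural background with city emissions on
    c_urb0, c_rur0 : the same locations with city emissions set to zero
    """
    impact = c_urb - c_urb0            # true city impact (model-only quantity)
    increment = c_urb - c_rur          # measurable urban increment
    spillover = c_rur - c_rur0         # city influence on the rural site
    bg_mismatch = c_rur0 - c_urb0      # unequal zero-city backgrounds
    # Identity: impact = increment + spillover + bg_mismatch.
    assert abs(impact - (increment + spillover + bg_mismatch)) < 1e-9
    return impact, increment, spillover, bg_mismatch

# Illustrative PM2.5 numbers (ug/m3): the increment understates the impact
# whenever the city also raises rural concentrations.
print(city_impact_terms(c_urb=22.0, c_rur=15.0, c_urb0=12.0, c_rur0=14.0))
```

    The two correction terms are exactly the two assumptions in the abstract: the increment equals the impact only when `spillover` and `bg_mismatch` both vanish.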

  13. Annual increments, specific gravity and energy of Eucalyptus grandis by gamma-ray attenuation technique

    International Nuclear Information System (INIS)

    Rezende, M.A.; Guerrini, I.A.; Ferraz, E.S.B.

    1990-01-01

    Annual increments in volume, mass and energy, together with specific gravity, of Eucalyptus grandis at thirteen years of age were determined, taking into account measurements of the calorific value of the wood. It was observed that the calorific value of the wood decreases slightly, while the specific gravity increases significantly with age. The so-called culmination age for the annual volume increment was determined to be around the fourth year of growth, while for the annual mass and energy increments it was around the eighth year. These results show that a tree at a particular age may show no significant growth in volume, yet still grow in mass and energy. (author)
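    The notion of a culmination age, the age at which the annual increment peaks, can be sketched as follows. The cumulative volumes are made-up numbers, not the study's Eucalyptus grandis data.

```python
# Cumulative stem volume (m^3) by age in years; illustrative numbers only.
volume = {1: 0.01, 2: 0.05, 3: 0.14, 4: 0.30, 5: 0.44, 6: 0.55, 7: 0.63}

# Current annual increment (CAI): the volume added during each year.
# The culmination age is the year in which the CAI peaks.
cai = {age: volume[age] - volume.get(age - 1, 0.0) for age in volume}
culmination = max(cai, key=cai.get)
print(culmination)  # -> 4 for these illustrative numbers
```

    The same computation applied to cumulative mass or energy series gives the (later) mass and energy culmination ages the abstract reports.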

  14. Sustained change blindness to incremental scene rotation: a dissociation between explicit change detection and visual memory.

    Science.gov (United States)

    Hollingworth, Andrew; Henderson, John M

    2004-07-01

    In a change detection paradigm, the global orientation of a natural scene was incrementally changed in 1 degree intervals. In Experiments 1 and 2, participants demonstrated sustained change blindness to incremental rotation, often coming to consider a significantly different scene viewpoint as an unchanged continuation of the original view. Experiment 3 showed that participants who failed to detect the incremental rotation nevertheless reliably detected a single-step rotation back to the initial view. Together, these results demonstrate an important dissociation between explicit change detection and visual memory. Following a change, visual memory is updated to reflect the changed state of the environment, even if the change was not detected.

  15. Quality Characteristics of Frankfurters Formulated with Apricot Pomace Obtained from Apricot Juice Processing

    Directory of Open Access Journals (Sweden)

    Çilem Purma Adıbelli

    2017-03-01

    In this study, the effects of dried apricot pomace (AP) on the technological, nutritional and sensory quality of frankfurters were investigated. Frankfurters formulated with 5% AP showed better quality compared to those with 10% and 15% AP. Protein and fat content decreased as the concentration of added AP rose above 5%. AP addition resulted in lower pH and energy values. Frankfurters formulated with AP had higher cooking and process yield values. AP addition decreased the lightness and increased the yellowness of the samples. Addition of 5% AP yielded good sensory scores. The results indicate that apricot pomace could be an effective functional ingredient in emulsion-type meat products.

  16. Slag-based saltstone formulations

    International Nuclear Information System (INIS)

    Langton, C.A.

    1987-01-01

    Approximately 400 x 10⁶ liters of low-level alkaline salt solution will be treated at the Savannah River Plant (SRP) Defense Waste Processing Facility (DWPF) prior to disposal in concrete vaults at SRP. Treatment involves removal of Cs⁺ and Sr²⁺ followed by solidification and stabilization of potential contaminants in saltstone, a hydrated ceramic waste form. Chromium, technetium, and nitrate releases from saltstone can be significantly reduced by substituting hydraulic blast furnace slag for portland cement in the formulation designs. Slag-based mixes are also compatible with Class F fly ash used in saltstone as a functional extender to control heat of hydration and reduce permeability. A monolithic waste form is produced by the hydration of the slag and fly ash. Soluble ion release (NO₃⁻) is controlled by the saltstone microstructure. Chromium and technetium are less leachable from slag mixes compared to cement-based waste forms because these species are chemically reduced to a lower valence state by ferrous iron in the slag and precipitated as relatively insoluble phases, such as Cr(OH)₃ and TcO₂. 5 refs., 4 figs., 4 tabs

  17. Slag-based saltstone formulations

    International Nuclear Information System (INIS)

    Langton, C.A.

    1987-08-01

    Approximately 400 x 10⁶ L of low-level alkaline salt solution will be treated at the Savannah River Plant (SRP) Defense Waste Processing Facility (DWPF) prior to disposal in concrete vaults at SRP. Treatment involves removal of Cs⁺ and Sr²⁺, followed by solidification and stabilization of potential contaminants in saltstone, a hydrated ceramic wasteform. Chromium, technetium, and nitrate releases from saltstone can be significantly reduced by substituting hydraulic blast furnace slag for portland cement in the formulation designs. Slag-based mixes are also compatible with the Class F flyash used in saltstone as a functional extender to control heat of hydration and reduce permeability. (Class F flyash is also locally available at SRP.) A monolithic wasteform is produced by the hydration of the slag and flyash. Soluble ion release (NO₃⁻) is controlled by the saltstone microstructure. Chromium and technetium are less leachable from slag mixes because these species are chemically reduced to a lower valence state by ferrous iron in the slag and are precipitated as relatively insoluble phases, such as Cr(OH)₃ and TcO₂. 3 refs., 3 figs., 2 tabs

  18. Policy formulation of public acceptance

    International Nuclear Information System (INIS)

    Kasai, Akihiro

    1978-01-01

    Since 1970, a new policy for gaining public acceptance of electric power plant siting has been formulated and applied. Two specific characteristics distinguish this new policy: planning and enforcement conducted by local public organizations, aimed at building up the local economy around plant sites, and adjustment of the requirements of the fishery industry. The background of this public acceptance policy, its history, and the actual problems concerning compensation for power plant siting are reviewed. A new proposal, recommended to MITI by the Policy and Science Laboratory in 1977, is explained. It is based on promoting power plant siting through public participation, with the redevelopment of regional societies as its basis. The problems concerning the industrial structures of farm villages, fishing villages, and areas of commerce and industry should be systematized and explained from the viewpoints of outside impact, the characteristics of local areas, and the siting problems in this new proposal. Finally, the siting process and its effectiveness should be put in order. (Nakai, Y.)

  19. Hamiltonian formulation of reduced magnetohydrodynamics

    International Nuclear Information System (INIS)

    Morrison, P.J.; Hazeltine, R.D.

    1983-07-01

    Reduced magnetohydrodynamics (RMHD) has become a principal tool for understanding nonlinear processes, including disruptions, in tokamak plasmas. Although analytical studies of RMHD turbulence have been useful, the model's impressive ability to simulate tokamak fluid behavior has been revealed primarily by numerical solution. The present work describes a new analytical approach, not restricted to turbulent regimes, based on Hamiltonian field theory. It is shown that the nonlinear (ideal) RMHD system, in both its high-beta and low-beta versions, can be expressed in Hamiltonian form. Thus a Poisson bracket, [ , ], is constructed such that each RMHD field quantity ξᵢ evolves according to ∂ξᵢ/∂t = [ξᵢ, H], where H is the total field energy. The new formulation makes RMHD accessible to the methodology of Hamiltonian mechanics; it has led, in particular, to the recognition of new RMHD invariants and even exact, nonlinear RMHD solutions. A canonical version of the Poisson bracket, which requires the introduction of additional fields, leads to a nonlinear variational principle for time-dependent RMHD.

  20. Formulation of disperse systems science and technology

    CERN Document Server

    Tadros, Tharwat F

    2014-01-01

    This book comprehensively presents the science and technology behind the formulation of disperse systems such as emulsions, suspensions, foams and others. Starting with a general introduction, the book covers a broad range of topics, including the role of different classes of surfactants, stability of disperse systems, formulation of different dispersions, evaluation of formulations and many more. Many examples are included, too. Written by the experienced author and editor Tharwat Tadros, this book is indispensable for every scientist working in the field.

  1. Formulated arthropod cadavers for pest suppression

    OpenAIRE

    2001-01-01

    Pesticidal and/or antimicrobial biological agent-infected arthropod cadavers are formulated by applying a coating agent once on the surface of the cadaver which either (a) prevents the cadavers from sticking together and/or rupturing or (b) acts as an adhesive for a powder or granule applied to the cadaver to prevent sticking and rupturing. The formulated cadavers maintain or improve infectivity, reproducibility, and survivability. The formulated cadavers can be partially desiccated to improv...

  2. Multiple excitation of supports - Part 1. Formulation

    International Nuclear Information System (INIS)

    Galeao, A.C.N.R.; Barbosa, H.J.C.

    1980-12-01

    The formulation and solution of a simple specific problem of support movement are presented. The formulation is extended to the general case of infinitesimal elasticity, where approximate solutions are obtained by the variational formulation with spatial discretization by the Finite Element Method. Finally, the usual numerical techniques for treating the resulting system of ordinary differential equations are discussed: direct integration, modal superposition, and spectral response. (E.G.) [pt

  3. Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Martin, Corless

    2004-01-01

    We consider the problem of state estimation for a nonlinear time-varying system whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error converges exponentially to zero.

  4. Performance and delay analysis of hybrid ARQ with incremental redundancy over double rayleigh fading channels

    KAUST Repository

    Chelli, Ali; Zedini, Emna; Alouini, Mohamed-Slim; Barry, John R.; Pä tzold, Matthias

    2014-01-01

    the performance of HARQ from an information theoretic perspective. Analytical expressions are derived for the \\epsilon-outage capacity, the average number of transmissions, and the average transmission rate of HARQ with incremental redundancy assuming a maximum

  5. Incremental Learning of Perceptual Categories for Open-Domain Sketch Recognition

    National Research Council Canada - National Science Library

    Lovett, Andrew; Dehghani, Morteza; Forbus, Kenneth

    2007-01-01

    .... This paper describes an incremental learning technique for opendomain recognition. Our system builds generalizations for categories of objects based upon previous sketches of those objects and uses those generalizations to classify new sketches...

  6. Performance of hybrid-ARQ with incremental redundancy over relay channels

    KAUST Repository

    Chelli, Ali; Alouini, Mohamed-Slim

    2012-01-01

    In this paper, we consider a relay network consisting of a source, a relay, and a destination. The source transmits a message to the destination using hybrid automatic repeat request (HARQ) with incremental redundancy (IR). The relay overhears

  7. Robust flight control using incremental nonlinear dynamic inversion and angular acceleration prediction

    NARCIS (Netherlands)

    Sieberling, S.; Chu, Q.P.; Mulder, J.A.

    2010-01-01

    This paper presents a flight control strategy based on nonlinear dynamic inversion. The approach presented, called incremental nonlinear dynamic inversion, uses properties of general mechanical systems and nonlinear dynamic inversion by feeding back angular accelerations. Theoretically, feedback of

  8. An Environment for Incremental Development of Distributed Extensible Asynchronous Real-time Systems

    Science.gov (United States)

    Ames, Charles K.; Burleigh, Scott; Briggs, Hugh C.; Auernheimer, Brent

    1996-01-01

    Incremental parallel development of distributed real-time systems is difficult. Architectural techniques and software tools developed at the Jet Propulsion Laboratory's (JPL's) Flight System Testbed make feasible the integration of complex systems in various stages of development.

  9. MUNIX and incremental stimulation MUNE in ALS patients and control subjects

    DEFF Research Database (Denmark)

    Furtula, Jasna; Johnsen, Birger; Christensen, Peter Broegger

    2013-01-01

    This study compares the new Motor Unit Number Estimation (MUNE) technique, MUNIX, with the more common incremental stimulation MUNE (IS-MUNE) with respect to reproducibility in healthy subjects and as a potential biomarker of disease progression in patients with ALS....

  10. Effect of multiple forming tools on geometrical and mechanical properties in incremental sheet forming

    Science.gov (United States)

    Wernicke, S.; Dang, T.; Gies, S.; Tekkaya, A. E.

    2018-05-01

    The trend toward a higher variety of products requires economical manufacturing processes suitable for the production of prototypes and small batches. In the case of complex hollow-shaped parts, single point incremental forming (SPIF) represents a highly flexible process. This flexibility, however, comes along with a very long process time. To decrease the process time, a new incremental forming approach with multiple forming tools is investigated. The influence of two incremental forming tools on the resulting mechanical and geometrical component properties, compared to SPIF, is presented. Sheets made of EN AW-1050A were formed into frustums of a pyramid using different tool-path strategies, and several variations of the tool-path strategy are analyzed. A time saving between 40% and 60% was observed, depending on the tool-path and the radii of the forming tools, while the mechanical properties remained unchanged. This knowledge can increase the cost efficiency of incremental forming processes.

  11. Unified performance analysis of hybrid-ARQ with incremental redundancy over free-space optical channels

    KAUST Repository

    Zedini, Emna; Chelli, Ali; Alouini, Mohamed-Slim

    2014-01-01

    In this paper, we carry out a unified performance analysis of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) from an information theoretic perspective over a point-to-point free-space optical (FSO) system. First, we

  12. Superspace formulation of new nonlinear sigma models

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1983-07-01

    The superspace formulation of two classes of supersymmetric nonlinear σ-models is presented. Two alternative N=1 superspace formulations are given for the d=2 supersymmetric nonlinear σ-models with Killing vector potentials: formulation (a) uses an active central charge, while formulation (b) uses a spurion superfield without inducing a classical breakdown of supersymmetry. The N=2 vector multiplet is used to construct a new class of d=4 nonlinear σ-models which, when reduced to d=2, possess N=4 supersymmetry. Implications of these two classes of nonlinear σ-models for N>=4 superfield supergravity are discussed. (author)

  13. Incremental Reconstruction of Urban Environments by Edge-Points Delaunay Triangulation

    OpenAIRE

    Romanoni, Andrea; Matteucci, Matteo

    2016-01-01

    Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power is a limited resource and a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, by carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of urban landscape, we ...

  14. The Boundary Between Planning and Incremental Budgeting: Empirical Examination in a Publicly-Owned Corporation

    OpenAIRE

    S. K. Lioukas; D. J. Chambers

    1981-01-01

    This paper is a study within the field of public budgeting. It focuses on the capital budget, and it attempts to model and analyze the capital budgeting process using a framework previously developed in the literature of incremental budgeting. Within this framework the paper seeks to determine empirically whether the movement of capital expenditure budgets can be represented as the routine application of incremental adjustments over an existing base of allocations and whether further, forward...

  15. Global Combat Support System - Army Increment 2 (GCSS-A Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. Program Name: Global Combat Support System - Army Increment 2 (GCSS-A Inc 2). DoD Component: Army.

  16. Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. Program Name: Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A). DoD Component: Air Force. Acquisition Program Baseline (APB) dated March 9, 2015.

  17. Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. Program Name: Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B). DoD Component: Air Force.

  18. Applying CLSM to increment core surfaces for histometric analyses: A novel advance in quantitative wood anatomy

    OpenAIRE

    Wei Liang; Ingo Heinrich; Gerhard Helle; I. Dorado Liñán; T. Heinken

    2013-01-01

    A novel procedure has been developed to conduct cell structure measurements on increment core samples of conifers. The procedure combines readily available hardware and software equipment. The essential part of the procedure is the application of a confocal laser scanning microscope (CLSM) which captures images directly from increment cores surfaced with the advanced WSL core-microtome. Cell wall and lumen are displayed with a strong contrast due to the monochrome black and green nature of th...

  19. A System to Derive Optimal Tree Diameter Increment Models from the Eastwide Forest Inventory Data Base (EFIDB)

    Science.gov (United States)

    Don C. Bragg

    2002-01-01

    This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...

  20. Estimation of incremental reactivities for multiple-day scenarios: an application to ethane and dimethoxymethane

    Science.gov (United States)

    Stockwell, William R.; Geiger, Harald; Becker, Karl H.

    Single-day scenarios are used, by definition, to calculate incremental reactivities (Carter, J. Air Waste Management Assoc. 44 (1994) 881-899), but even unreactive organic compounds may have a non-negligible effect on ozone concentrations if multiple-day scenarios are considered. The concentrations of unreactive compounds and their products may build up over a multiple-day period, and the oxidation products may be highly reactive or highly unreactive, affecting the overall incremental reactivity of the organic compound. We have developed a method for calculating incremental reactivities over multiple days, based on a standard scenario for polluted European conditions. This method was used to estimate maximum incremental reactivities (MIR) and maximum ozone incremental reactivities (MOIR) for ethane and dimethoxymethane for scenarios ranging from 1 to 6 days. It was found that the incremental reactivities increased as the length of the simulation period increased. The MIR of ethane increased faster than the value for dimethoxymethane as the scenarios became longer. The MOIRs of ethane and dimethoxymethane increased, but the change was more modest for scenarios longer than 3 days, and the MOIRs of both volatile organic compounds were equal within the uncertainties of their chemical mechanisms by the 5-day scenario. These results show that dimethoxymethane has an ozone forming potential on a per-mass basis that is only somewhat greater than that of ethane if multiple-day scenarios are considered.
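    The definitional calculation behind an incremental reactivity (ozone change per unit of VOC added, evaluated at increasing scenario lengths) can be sketched as follows. All concentration values are invented for illustration, not model output.

```python
def incremental_reactivity(o3_base, o3_perturbed, dvoc):
    """Incremental reactivity: change in ozone formed per unit of a VOC
    added to the scenario (same units as the ozone series per VOC unit)."""
    return (o3_perturbed - o3_base) / dvoc

# Illustrative multi-day behavior: the perturbed-minus-base ozone gap keeps
# growing as carried-over oxidation products react on later days.
base =      [120.0, 138.0, 150.0, 157.0]   # peak O3 (ppb), days 1-4, base run
perturbed = [120.4, 139.0, 151.6, 159.1]   # same scenario + 1 unit of VOC
for day, (b, p) in enumerate(zip(base, perturbed), 1):
    print(day, round(incremental_reactivity(b, p, 1.0), 2))
```

    A growing per-day value is exactly the abstract's finding that incremental reactivities increase with the length of the simulation period.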

  1. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture.

    Science.gov (United States)

    Chen, C L Philip; Liu, Zhulin

    2018-01-01

    A Broad Learning System (BLS) that aims to offer an alternative way of learning in deep structure is proposed in this paper. Deep structures and learning suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers; moreover, they require a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense in the "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process when the network needs to be expanded. Two incremental learning algorithms are given: one for the increment of the feature nodes (or filters in a deep structure) and one for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case in which a system that has already been modeled encounters a new incoming input; specifically, the system can be remodeled incrementally without retraining from the beginning. Model reduction using singular value decomposition is conducted to simplify the final structure, with satisfactory results. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology (MNIST) database and the NYU NORB object recognition benchmark dataset demonstrate the effectiveness of the proposed BLS.
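    A minimal sketch of the broad structure (random mapped-feature nodes, nonlinear enhancement nodes, and output weights solved by a ridge-regularized pseudoinverse) is given below. For brevity the expansion step simply re-solves the output weights, where the paper uses an incremental pseudoinverse update; all sizes, data, and the toy target function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_pinv(A, lam=1e-3):
    """Ridge-regularized pseudoinverse used to solve for output weights."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)

def enhance(Z, We):
    """Enhancement nodes: nonlinear expansion of the mapped features."""
    return np.tanh(Z @ We)

# Toy regression data (illustrative only).
X = rng.normal(size=(200, 5))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2).reshape(-1, 1)

# Mapped-feature nodes: random linear maps of the raw input.
Wf = rng.normal(size=(5, 20))
Z = X @ Wf

# Flat ("broad") network: feature nodes side by side with enhancement nodes.
We = rng.normal(size=(20, 10))
A = np.hstack([Z, enhance(Z, We)])
W_out = ridge_pinv(A) @ y
err0 = float(np.mean((A @ W_out - y) ** 2))

# Broad expansion: append extra enhancement nodes and re-solve only the
# output weights, leaving the earlier random weights untouched.
We_new = rng.normal(size=(20, 10))
A2 = np.hstack([A, enhance(Z, We_new)])
W_out2 = ridge_pinv(A2) @ y
err1 = float(np.mean((A2 @ W_out2 - y) ** 2))
print(err0, err1)
```

    The point of the flat layout is visible here: widening the network touches only the last linear solve, never the earlier nodes, which is what makes the incremental updates in the paper cheap.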

  2. Calculation of the increment reduction in spruce stands by charcoal smoke

    Energy Technology Data Exchange (ETDEWEB)

    Guede, J

    1954-01-01

    Chronic damage to spruce trees by charcoal smoke, often hardly noticeable from their outward appearance but causing marked reductions of wood increment, can be determined by means of a calculation based on increment cores. Sulfurous acid anhydride causes the closure of the stomata of the needles, by which the circulation of water is checked; assimilation and wood increment are thereby reduced. The cores are taken from uninjured trees belonging to the dominant class. These trees are subject to irregular variations in the trend of growth only through atmospheric influences and disturbances in the circulation of water. The decrease of increment of a stand can be judged by the trend of growth of the basal area of sample trees. Two methods are applied. In the first method, the difference between the mean total increment before the damage was caused and that after it is calculated from the yield table, deriving the site quality classes from the basal area growth of dominant stems. This is possible by using the mean diameter of each age class and the frequency curve of basal area for each site class. In the other method, the reduction of basal area increment of sample trees is measured directly. The total reduction of a stand can be judged by the share of the dominant stem class in the total current growth of the basal area of a sound stand and by the percent reduction of the sample trees.
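    The core-comparison idea, reduced to its simplest arithmetic (a mean-increment baseline before the damage versus after), can be illustrated as follows; the ring values are hypothetical:

```python
def increment_reduction_percent(pre_damage_rings, post_damage_rings):
    """Percent loss of basal-area increment after smoke damage began,
    using the mean annual increment before the damage as the baseline.
    Simplified version of the direct-measurement method described above."""
    pre_mean = sum(pre_damage_rings) / len(pre_damage_rings)
    post_mean = sum(post_damage_rings) / len(post_damage_rings)
    return 100.0 * (pre_mean - post_mean) / pre_mean

# Annual basal-area increments (cm^2/yr) read from an increment core,
# hypothetical values for the periods before and after damage onset:
loss = increment_reduction_percent([12.0, 11.5, 12.5], [8.0, 7.5, 8.5])
```

    In practice the baseline would also be corrected for the age trend of growth, which is what the yield-table variant of the method supplies.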

  3. Glass Ceramic Formulation Data Package

    International Nuclear Information System (INIS)

    Crum, Jarrod V.; Rodriguez, Carmen P.; McCloy, John S.; Vienna, John D.; Chung, Chul-Woo

    2012-01-01

    A glass ceramic waste form is being developed for treatment of secondary waste streams generated by aqueous reprocessing of commercial used nuclear fuel (Crum et al. 2012b). The waste stream contains a mixture of transition metals, alkali, alkaline earths, and lanthanides, several of which exceed the solubility limits of a single-phase borosilicate glass (Crum et al. 2009; Caurant et al. 2007). A multi-phase glass ceramic waste form allows incorporation of insoluble components of the waste by designed crystallization into durable, heat-tolerant phases. The glass ceramic formulation and processing target the formation of the following three stable crystalline phases: (1) powellite (XMoO4), where X can be Ca, Sr, Ba, and/or Ln; (2) oxyapatite (YxZ(10-x)Si6O26), where Y is an alkaline earth and Z is Ln; and (3) lanthanide borosilicate (Ln5BSi2O13). These three phases incorporate the waste components that are above the solubility limit of a single-phase borosilicate glass. The glass ceramic is designed to be a single-phase melt, just like a borosilicate glass, and then to crystallize upon slow cooling to form the targeted phases. The slow cooling schedule is based on the centerline cooling profile of a 2-foot-diameter canister such as the Hanford High-Level Waste canister. Up to this point, crucible testing has been used for glass ceramic development, with the cold crucible induction melter (CCIM) targeted as the ultimate processing technology for the waste form. Idaho National Laboratory (INL) will conduct a scaled CCIM test in FY2012 with a glass ceramic to demonstrate the processing behavior. This Data Package documents the laboratory studies of the glass ceramic composition to support the CCIM test. Pacific Northwest National Laboratory (PNNL) measured melt viscosity, electrical conductivity, and crystallization behavior upon cooling to identify a processing window (temperature range) for melter operation and cooling profiles necessary to crystallize the targeted phases in the

  4. A New Resistance Formulation for Carbon Nanotubes

    Directory of Open Access Journals (Sweden)

    Ji-Huan He

    2008-01-01

    Full Text Available A new resistance formulation for carbon nanotubes is suggested using fractal approach. The new formulation is also valid for other nonmetal conductors including nerve fibers, conductive polymers, and molecular wires. Our theoretical prediction agrees well with experimental observation.

  5. Advanced Query Formulation in Deductive Databases.

    Science.gov (United States)

    Niemi, Timo; Jarvelin, Kalervo

    1992-01-01

    Discusses deductive databases and database management systems (DBMS) and introduces a framework for advanced query formulation for end users. Recursive processing is described, a sample extensional database is presented, query types are explained, and criteria for advanced query formulation from the end user's viewpoint are examined. (31…

  6. Aerosol formulation and clinical efficacy of bronchodilators

    NARCIS (Netherlands)

    Zanen, Pieter

    1998-01-01

    The subject of this thesis is the improvement of the formulation of inhaled aerosols. It is well known that the formulation of inhaled drugs is not optimal: the major part of the delivered mass does not reach the lower airways. This phenomenon is due to the size of the inhaled particles, which

  7. Hamiltonian formulation of anomaly free chiral bosons

    International Nuclear Information System (INIS)

    Abdalla, E.; Abdalla, M.C.B.; Devecchi, F.P.; Zadra, A.

    1988-01-01

    Starting from an anomaly-free Lagrangian formulation for chiral scalars, which includes a Wess-Zumino term (to cancel the anomaly), we formulate the corresponding Hamiltonian problem. We then use the (quantum) Siegel invariance to choose a particular gauge, which turns out to coincide with the formulation obtained by Floreanini and Jackiw. (author)

  8. Undefined cellulase formulations hinder scientific reproducibility.

    Science.gov (United States)

    Himmel, Michael E; Abbas, Charles A; Baker, John O; Bayer, Edward A; Bomble, Yannick J; Brunecky, Roman; Chen, Xiaowen; Felby, Claus; Jeoh, Tina; Kumar, Rajeev; McCleary, Barry V; Pletschke, Brett I; Tucker, Melvin P; Wyman, Charles E; Decker, Stephen R

    2017-01-01

    In the shadow of a burgeoning biomass-to-fuels industry, biological conversion of lignocellulose to fermentable sugars in a cost-effective manner is key to the success of second-generation and advanced biofuel production. For the effective comparison of one cellulase preparation to another, cellulase assays are typically carried out with one or more engineered cellulase formulations or natural exoproteomes of known performance serving as positive controls. When these formulations have unknown composition, as is the case with several widely used commercial products, it becomes impossible to compare or reproduce work done today with work done in the future, where, for example, such preparations may not be available. Therefore, experimental reproducibility, a critical tenet of science publishing, is endangered by the continued use of these undisclosed products. We propose the introduction of standard procedures and materials to produce specific and reproducible cellulase formulations. These formulations are to serve as yardsticks to measure improvements and performance of new cellulase formulations.

  9. Formulation of heat absorbing glasses

    Directory of Open Access Journals (Sweden)

    Álvarez-Casariego, Pedro

    1996-06-01

    Full Text Available In the thermal exchanges between buildings and the environment, glazing is an element of major importance, for it largely influences the so-called Solar Heat Gain and Thermal Losses. These parameters can be modified by applying different types of coatings onto the glass surface or by adding colorant compounds during glass melting. The latter is a cheaper way to control the Solar Heat Gain. Knowledge of the laws governing the interaction between colorant compounds and solar radiation allows us to define glass formulations achieving specific aesthetic requirements and solar energy absorption. In this paper two examples of application of the modelling of glass colorants' spectral absorptance are presented. The first is aimed at obtaining a glass with high luminous transmittance and low solar energy transmittance, and the second at obtaining a glass with a neutral colour appearance and minimized solar energy transmittance. Calculation formulas are defined together with the photometric properties so obtained. These types of glasses are particularly suitable for use as building and automotive glazing, for they retain the mechanical characteristics and transformation possibilities of standard glass.


  10. Stem analysis program (GOAP for evaluating of increment and growth data at individual tree

    Directory of Open Access Journals (Sweden)

    Gafura Aylak Özdemir

    2016-07-01

    Full Text Available Stem analysis is a method for evaluating in detail the increment and growth data of an individual tree over past periods, and it is widely used in various forestry disciplines. The raw data of a stem analysis consist of annual ring counts and measurements performed on cross-sections taken from an individual tree by the section method. Evaluating these raw data takes considerable time. Thus, computer software was developed in this study to perform stem analysis quickly and efficiently. The software, which evaluates the raw stem-analysis data numerically and graphically, was programmed as a macro using the Visual Basic for Applications feature of MS Excel 2013, currently the most widely used spreadsheet program. In this software, the height growth model is formed from two different approaches, and individual tree volume (by the section method), cross-sectional area, the increments of diameter, height and volume, volume increment percent, and the stem form factor at breast height are calculated for the desired period lengths. These calculated values are given as tables. The development of diameter, height, volume, the increments of these variables, volume increment percent, and the stem form factor at breast height according to periodic age is given as charts. A stem model showing the development of the diameter, height, and shape of the individual tree in past periods can also be exported from the software as a chart.
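    As a rough illustration of the section method underlying such software, the sketch below computes section and stem volumes with Smalian's formula and a periodic volume increment between two analyses. The formula choice and all measurements are assumptions for illustration, not taken from the GOAP program itself:

```python
import math

def section_volume(d_lower, d_upper, length):
    """Smalian's formula: section volume (m^3) from the two end diameters (cm)
    and the section length (m). A common choice for the section method."""
    a_lower = math.pi * (d_lower / 200.0) ** 2   # end areas in m^2
    a_upper = math.pi * (d_upper / 200.0) ** 2
    return 0.5 * (a_lower + a_upper) * length

def stem_volume(diameters, section_length):
    """Total stem volume from successive cross-section diameters (cm)."""
    return sum(section_volume(d1, d2, section_length)
               for d1, d2 in zip(diameters, diameters[1:]))

# Hypothetical cross-section diameters at 2 m intervals for two ring counts
# of the same stem (age 40 and age 50), giving a periodic volume increment:
v_age_40 = stem_volume([30.0, 26.0, 21.0, 14.0, 6.0], 2.0)
v_age_50 = stem_volume([34.0, 30.0, 25.0, 18.0, 9.0], 2.0)
periodic_increment = v_age_50 - v_age_40
```

    Dividing the periodic increment by the period length (here 10 years) would give the mean annual volume increment tabulated by such programs.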

  11. Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-09-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data: the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert, the lack of the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas: one with high positive correlation (>0.5), one with low correlation, and one with high negative correlation. Analyses based on European Centre for Medium-Range Weather Forecasts (ECMWF) data suggest that the positive-correlation area (PCA) and the negative-correlation area (NCA) correspond mainly to anticyclonic and cyclonic flow, negative and positive vorticity, and downward and upward airflow, respectively. These findings are associated with the interaction between dust-forced heating/cooling and atmospheric circulation. This paper contributes to a better understanding of dust radiative processes missed in the model.

  12. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    Science.gov (United States)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must simultaneously satisfy the conditions of static and kinematic admissibility and consistency after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted to those obtained using step-by-step incremental integration.
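    The alternating local/global structure of a LATIN-style iteration can be conveyed on a single-DOF toy problem with a plateau-like stress-strain law, treating the entire loading path at once. This sketch only illustrates the iteration pattern (local constitutive stage along a search direction, then a global admissibility stage); it is not the Zaki-Moumni model or the paper's 3D implementation, and every constant is invented:

```python
import math

def latin_solve(load_path, g, E_search=1.0, n_iter=60):
    """LATIN-style iteration on one DOF: for every load level in the path
    (handled together, not step by step), alternate a local stage that solves
    the constitutive law g along a search direction of modulus E_search, and a
    global stage that enforces static admissibility (stress = applied load)."""
    eps = [0.0] * len(load_path)
    sig = [0.0] * len(load_path)
    for _ in range(n_iter):
        for t, load in enumerate(load_path):
            # Local stage: find e with g(e) + E*e = sig + E*eps (bisection,
            # valid because the left side is monotone in e).
            rhs = sig[t] + E_search * eps[t]
            lo, hi = -10.0, 10.0
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if g(mid) + E_search * mid < rhs:
                    lo = mid
                else:
                    hi = mid
            e_hat = 0.5 * (lo + hi)
            s_hat = g(e_hat)
            # Global stage: enforce equilibrium and correct the strain
            # back along the same search direction.
            sig[t] = load
            eps[t] = e_hat + (load - s_hat) / E_search
    return eps, sig

law = lambda e: 2.0 * math.tanh(e)   # plateau-like stress-strain law (toy)
path = [0.5, 1.0, 1.5]               # quasi-static load levels
eps, sig = latin_solve(path, law)
```

    At convergence every point of the path satisfies both equilibrium (sig equals the applied load) and the constitutive law (g(eps) equals sig), which is the admissibility-plus-consistency requirement described in the abstract.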

  13. Canonical operator formulation of nonequilibrium thermodynamics

    International Nuclear Information System (INIS)

    Mehrafarin, M.

    1992-09-01

    A novel formulation of nonequilibrium thermodynamics is proposed which emphasises the fundamental role played by the Boltzmann constant k in fluctuations. The equivalence of this and the stochastic formulation is demonstrated. The k → 0 limit of this theory yields the classical deterministic description of nonequilibrium thermodynamics. The new formulation possesses unique features which bear two important results, namely the thermodynamic uncertainty principle and the quantisation of the entropy production rate. Such a theory becomes indispensable whenever fluctuations play a significant role. (author)

  14. Application of UV Imaging in Formulation Development

    DEFF Research Database (Denmark)

    Sun, Yu; Østergaard, Jesper

    2017-01-01

    Defining formulation behavior after exposure to aqueous environments and pharmaceutical performance is critical in pharmaceutical development, manufacturing and quality control of drugs. UV imaging has been explored as a tool for qualitative and quantitative characterization of drug dissolution...... related to the structural properties of the drug substance or formulation can be monitored. UV imaging is a non-intrusive and simple-to-operate analytical technique which holds potential for providing a mechanistic foundation for formulation development. This review aims to cover applications of UV...

  15. Formulation of 11-dimensional supergravity in superspace

    International Nuclear Information System (INIS)

    Cremmer, E.; Ferrara, S.

    1980-01-01

    We formulate on-shell 11-dimensional supergravity in superspace and express its equations of motion in terms of purely geometrical quantities. All torsion and curvature components are solved in terms of a single superfield W_rstu, totally antisymmetric in its (flat vector) indices. The dimensional reduction of this formulation is expected to be related to the superspace formulation of N = 8 extended supergravity and might explain the origin of the hidden (local) SU(8) and (global) E_7 symmetries present in this theory. (orig.)

  16. An exact approach for aggregated formulations

    DEFF Research Database (Denmark)

    Gamst, Mette; Spoorendonk, Simon; Røpke, Stefan

    Aggregating formulations is a powerful approach for making problems take on tractable forms. Aggregation may lead to loss of information, i.e. the aggregated formulation may be an approximation of the original problem. In a branch-and-bound context, aggregation can also complicate branching, e.g. when...... optimality cannot be guaranteed by branching on aggregated variables. We present a generic exact solution method to remedy the drawbacks of aggregation. It combines the original and aggregated formulations and applies Benders' decomposition. We apply the method to the Split Delivery Vehicle Routing Problem....

  17. Formulation and integration of constitutive models describing large deformations in thermoplasticity and thermoviscoplasticity

    International Nuclear Information System (INIS)

    Jansohn, W.

    1997-10-01

    This report deals with the formulation and numerical integration of constitutive models in the framework of finite-deformation thermomechanics. Based on the concept of dual variables, plasticity and viscoplasticity models exhibiting nonlinear kinematic hardening as well as nonlinear isotropic hardening rules are presented. Care is taken that the evolution equations governing the hardening response fulfill the intrinsic dissipation inequality in every admissible process. In view of the development of an efficient numerical integration procedure, simplified versions of these constitutive models are considered. In these versions, the thermoelastic strains are assumed to be small and a simplified kinematic hardening rule is considered. Additionally, in view of an implementation into the ABAQUS finite element code, the elasticity law is approximated by a hypoelasticity law. For the simplified constitutive models, an implicit time-integration algorithm is developed. First, in order to obtain a numerically objective integration scheme, use is made of the Hughes-Winget algorithm. In the resulting system of ordinary differential equations, three differential operators representing different physical effects can be distinguished. The structure of this system of differential equations allows an operator-split scheme to be applied, which leads to an efficient integration scheme for the constitutive equations. By linearizing the integration algorithm, the consistent tangent modulus is derived. In this way, the quadratic convergence of Newton's method used to solve the basic finite element equations (i.e. the finite element discretization of the governing thermomechanical field equations) is preserved. The resulting integration scheme is implemented as a user subroutine UMAT in ABAQUS. The properties of the applied algorithm are first examined by test calculations on a single element under tension-compression loading. For demonstrating the capabilities of the constitutive theory
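    The ingredients named above (implicit integration, a return mapping, and a consistent tangent obtained by linearizing the algorithm) can be illustrated on the simplest possible case: 1D rate-independent plasticity with linear isotropic hardening, where the backward-Euler update has a closed form. The material constants are invented, and the report's full thermoviscoplastic model is not reproduced:

```python
def radial_return_1d(eps, state, E=200e3, sy=250.0, H=10e3):
    """Backward-Euler return mapping for 1D plasticity with linear isotropic
    hardening (units: MPa). state = (plastic strain, accumulated plastic
    strain). Returns (stress, new state, consistent tangent), where the
    tangent is obtained by linearizing the discrete algorithm itself."""
    eps_p, p = state
    sig_trial = E * (eps - eps_p)                 # elastic predictor
    f_trial = abs(sig_trial) - (sy + H * p)       # trial yield function
    if f_trial <= 0.0:
        return sig_trial, (eps_p, p), E           # elastic step: tangent = E
    dgamma = f_trial / (E + H)                    # plastic corrector, closed form
    sign = 1.0 if sig_trial > 0.0 else -1.0
    sig = sig_trial - E * dgamma * sign
    new_state = (eps_p + dgamma * sign, p + dgamma)
    tangent = E * H / (E + H)                     # consistent (algorithmic) tangent
    return sig, new_state, tangent

# Two strain increments: the first elastic, the second elastic-plastic.
sig1, state, tangent1 = radial_return_1d(0.001, (0.0, 0.0))
sig2, state, tangent2 = radial_return_1d(0.002, state)
```

    Returning this algorithmic tangent (rather than the continuum one) to the global Newton loop is exactly what preserves quadratic convergence, the point made in the abstract.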

  18. Gauge formulation of gravitation theories. I. The Poincare, de Sitter, and conformal cases

    International Nuclear Information System (INIS)

    Ivanov, E.A.; Niederle, J.

    1982-01-01

    The gauge formulations of various gravitation theories are discussed. They are based on the approach in which the group Diff R^4 acts on x^μ and in which we attach to every x^μ a tangent space with group of action H. The group H does not act on x^μ and plays the role of an internal (global) symmetry group as in the standard Yang-Mills theory. The matter fields in the theory transform according to representations of H and are assumed to be scalars of Diff R^4. The full invariance group of the Lagrangian is then of the form H^loc × Diff R^4, where H^loc is a local gauge group obtained from H exactly as in the Yang-Mills theory. The approach has two characteristic features: (i) the group H^loc must be spontaneously broken in order to exclude redundant gauge fields (the Lorentz connections) from the theory in a way covariant with respect to the gauge transformations; (ii) to different H there correspond different gravitational theories, all invariant under Diff R^4 but differing in backgrounds. Thus if H is isomorphic to the Poincare group, the corresponding gauge theory turns out to be equivalent to the usual Einstein or Einstein-Cartan theory of gravity with Minkowski space as a background. The other choices for H considered in the paper are the de Sitter groups and the conformal group. They yield the Einstein theory with a negative (or positive) cosmological term in the corresponding de Sitter space, and the Weyl or Cartan-Weyl theory (depending on the realization of the conformal group), respectively.

  19. Incremental Validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF).

    Science.gov (United States)

    Siegling, A B; Vesely, Ashley K; Petrides, K V; Saklofske, Donald H

    2015-01-01

    This study examined the incremental validity of the adult short form of the Trait Emotional Intelligence Questionnaire (TEIQue-SF) in predicting 7 construct-relevant criteria beyond the variance explained by the Five-factor model and coping strategies. Additionally, the relative contributions of the questionnaire's 4 subscales were assessed. Two samples of Canadian university students completed the TEIQue-SF, along with measures of the Big Five, coping strategies (Sample 1 only), and emotion-laden criteria. The TEIQue-SF showed consistent incremental effects beyond the Big Five or the Big Five and coping strategies, predicting all 7 criteria examined across the 2 samples. Furthermore, 2 of the 4 TEIQue-SF subscales accounted for the measure's incremental validity. Although the findings provide good support for the validity and utility of the TEIQue-SF, directions for further research are emphasized.
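    The hierarchical-regression logic behind such incremental validity analyses amounts to a Delta-R^2 between nested models: fit the criterion on the base block, then on the base block plus the new predictors, and compare. The sketch below uses simulated data (all coefficients, sample sizes, and the overlap structure are invented), not the study's:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def incremental_validity(X_base, X_new, y):
    """Delta-R^2 when the new predictors are entered after the base block."""
    return r_squared(np.column_stack([X_base, X_new]), y) - r_squared(X_base, y)

rng = np.random.default_rng(1)
big5 = rng.standard_normal((300, 5))                      # base block (simulated)
tei = 0.6 * big5[:, :1] + rng.standard_normal((300, 1))   # overlaps with base,
                                                          # plus unique variance
y = big5 @ rng.standard_normal(5) + 0.8 * tei[:, 0] + rng.standard_normal(300)
delta_r2 = incremental_validity(big5, tei, y)
```

    Because the models are nested, Delta-R^2 is never negative; incremental validity is the claim that it is meaningfully greater than zero after the base block is already in.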

  20. Radical and Incremental Innovation Preferences in Information Technology: An Empirical Study in an Emerging Economy

    Directory of Open Access Journals (Sweden)

    Tarun K. Sen

    2011-11-01

    Full Text Available Innovation in information technology is a primary driver for growth in developed economies. Research indicates that countries go through three stages in the adoption of innovation strategies: buying innovation through global trade; incremental innovation from other countries by enhancing efficiency; and, at the most developed stage, radically innovating independently for competitive advantage. The first two stages of innovation maturity depend more on cross-border trade than the third stage. In this paper, we find that IT professionals in an emerging economy such as India believe in radical innovation over incremental innovation (adaptation) as a growth strategy, even though competitive advantage may rest in adaptation. The results of the study report the preference for innovation strategies among IT professionals in India and its implications for other rapidly growing emerging economies.

  1. Determining frustum depth of 304 stainless steel plates with various diameters and thicknesses by incremental forming

    Energy Technology Data Exchange (ETDEWEB)

    Golabi, Sa' id [University of Kashan, Kashan (Iran, Islamic Republic of); Khazaali, Hossain [Bu-Ali Sina University, Hamedan (Iran, Islamic Republic of)

    2014-08-15

    Nowadays, incremental forming is increasingly popular because of its flexibility and cost savings. However, no engineering data are available to manufacturers for forming simple shapes like a frustum by incremental forming, and either expensive experimental tests or finite element analysis (FEA) must be employed to determine the depth of a frustum, considering thickness, material, cone diameter, wall angle, feed rate, tool diameter, etc. In this study, the finite element technique, confirmed by experimental study, was employed to develop applicable curves for determining the depth of frustums made from 304 stainless steel (SS304) sheet with various cone angles, thicknesses from 0.3 to 1 mm, and major diameters from 50 to 200 mm using incremental forming. Using these curves, the frustum angle and its depth can be predicted knowing its thickness and major diameter. The effects of feed rate, vertical pitch and tool diameter on frustum depth and surface quality were also addressed in this study.

  2. EFFECT OF COST INCREMENT DISTRIBUTION PATTERNS ON THE PERFORMANCE OF JIT SUPPLY CHAIN

    Directory of Open Access Journals (Sweden)

    Ayu Bidiawati J.R

    2008-01-01

    Full Text Available Cost is an important consideration in supply chain (SC) optimisation. This is due to the emphasis placed on cost reduction in order to optimise profit. Some researchers use cost as one of their performance measures, and others propose ways of accurately calculating cost. As a product moves across the SC, the product cost also increases. This paper studied the effect of cost increment distribution patterns on the performance of a JIT supply chain. In particular, it is necessary to know whether inventory allocation across the SC needs to be modified to accommodate different cost increment distribution patterns. It was found that the funnel is still the best card distribution pattern for a JIT-SC regardless of the cost increment distribution pattern used.

  3. Incremental Validity of the DSM-5 Section III Personality Disorder Traits With Respect to Psychosocial Impairment.

    Science.gov (United States)

    Simms, Leonard J; Calabrese, William R

    2016-02-01

    Traditional personality disorders (PDs) are associated with significant psychosocial impairment. DSM-5 Section III includes an alternative hybrid personality disorder (PD) classification approach, with both type and trait elements, but relatively little is known about the impairments associated with Section III traits. Our objective was to study the incremental validity of Section III traits--compared to normal-range traits, traditional PD criterion counts, and common psychiatric symptomatology--in predicting psychosocial impairment. To that end, 628 current/recent psychiatric patients completed measures of PD traits, normal-range traits, traditional PD criteria, psychiatric symptomatology, and psychosocial impairments. Hierarchical regressions revealed that Section III PD traits incrementally predicted psychosocial impairment over normal-range personality traits, PD criterion counts, and common psychiatric symptomatology. In contrast, the incremental effects for normal-range traits, PD symptom counts, and common psychiatric symptomatology were substantially smaller than for PD traits. These findings have implications for PD classification and the impairment literature more generally.

  4. Ethical leadership: meta-analytic evidence of criterion-related and incremental validity.

    Science.gov (United States)

    Ng, Thomas W H; Feldman, Daniel C

    2015-05-01

    This study examines the criterion-related and incremental validity of ethical leadership (EL) with meta-analytic data. Across 101 samples published over the last 15 years (N = 29,620), we observed that EL demonstrated acceptable criterion-related validity with variables that tap followers' job attitudes, job performance, and evaluations of their leaders. Further, followers' trust in the leader mediated the relationships of EL with job attitudes and performance. In terms of incremental validity, we found that EL significantly, albeit weakly in some cases, predicted task performance, citizenship behavior, and counterproductive work behavior-even after controlling for the effects of such variables as transformational leadership, use of contingent rewards, management by exception, interactional fairness, and destructive leadership. The article concludes with a discussion of ways to strengthen the incremental validity of EL. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  5. Incremental net social benefit associated with using nuclear-fueled power plants

    International Nuclear Information System (INIS)

    Maoz, I.

    1976-12-01

    The incremental net social benefit (INSB) resulting from nuclear-fueled, rather than coal-fired, electric power generation is assessed. The INSB is defined as the difference between the 'incremental social benefit' (ISB), generated by the cheaper technology of electric power generation, and the 'incremental social cost' (ISC), associated with the increased power production induced by the cheaper technology. Section 2 focuses on the theoretical and empirical problems associated with the assessment of the long-run price elasticity of the demand for electricity, and the theoretical-econometric considerations that lead to reasonable estimates of price elasticities of demand among those provided by recent empirical studies. Section 3 covers the theoretical and empirical difficulties associated with the construction of the long-run social marginal cost (LRSMC) curves for electricity. Sections 4 and 5 discuss the assessment methodology and provide numerical examples for the calculation of the INSB resulting from nuclear-fueled power generation.
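    On the simplest welfare-economics reading, the ISB of a price fall is the consumer-surplus gain, i.e. the area under the demand curve between the old and new prices, from which the ISC of the induced extra generation is subtracted to obtain the INSB. The constant-elasticity demand curve and every number below are hypothetical illustrations, not the report's values:

```python
def demand(p, q0=100.0, p0=50.0, elasticity=-0.8):
    """Constant-elasticity electricity demand curve (invented parameters)."""
    return q0 * (p / p0) ** elasticity

def incremental_social_benefit(p_old, p_new, n=10000):
    """ISB of a price fall: area under the demand curve between the two
    prices, computed with the trapezoid rule."""
    h = (p_old - p_new) / n
    prices = [p_new + i * h for i in range(n + 1)]
    quantities = [demand(p) for p in prices]
    return h * (0.5 * quantities[0] + sum(quantities[1:-1]) + 0.5 * quantities[-1])

# Hypothetical: price falls from 50 to 45; the induced extra output carries
# an assumed incremental social cost of 120 (same monetary units).
isb = incremental_social_benefit(50.0, 45.0)
insb = isb - 120.0
```

    The sign of insb is then the policy-relevant quantity: a positive value means the cheaper technology yields a net social gain even after its induced social costs.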

  6. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
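    The incremental strategy described above can be sketched on a planar two-link arm: at each step the end-effector is nudged toward the instantaneous desired position through the Jacobian pseudoinverse, with joint increments clamped to a speed limit, so no closed-form inverse (with its multiple solutions) is ever needed. The arm geometry, limits, and target are invented, and the vision-based pose estimate is replaced by a fixed point:

```python
import numpy as np

L1, L2 = 1.0, 0.8   # link lengths of the toy planar arm (invented)

def fk(q):
    """Forward kinematics: end-effector position of the 2-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def incremental_ik_step(q, target, dq_max=0.05):
    """One incremental step toward the instantaneous desired position,
    with joint increments clamped to the speed limit dq_max (rad/step)."""
    dx = target - fk(q)
    dq = np.linalg.pinv(jacobian(q)) @ dx
    return q + np.clip(dq, -dq_max, dq_max)

q = np.array([0.3, 0.5])            # current joint configuration
target = np.array([1.2, 0.9])       # stand-in for the vision-predicted position
for _ in range(200):
    q = incremental_ik_step(q, target)
err = float(np.linalg.norm(fk(q) - target))
```

    Because each step starts from the current configuration, the iteration simply tracks the nearest branch of the inverse kinematics, which is the robustness argument made in the abstract.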

  7. OXYGEN UPTAKE KINETICS DURING INCREMENTAL- AND DECREMENTAL-RAMP CYCLE ERGOMETRY

    Directory of Open Access Journals (Sweden)

    Fadıl Özyener

    2011-09-01

    Full Text Available The pulmonary oxygen uptake (VO2 response to incremental-ramp cycle ergometry typically demonstrates lagged-linear first-order kinetics with a slope of ~10-11 ml·min-1·W-1, both above and below the lactate threshold (ӨL, i.e. there is no discernible VO2 slow component (or "excess" VO2 above ӨL. We were interested in determining whether a reverse ramp profile would yield the same response dynamics. Ten healthy males performed a maximal incremental ramp (15-30 W·min-1, depending on fitness. On another day, the work rate (WR was increased abruptly to the incremental maximum and then decremented at the same rate of 15-30 W·min-1 (step-decremental ramp. Five subjects also performed a sub-maximal ramp-decremental test from 90% of ӨL. VO2 was determined breath-by-breath from continuous monitoring of respired volumes (turbine and gas concentrations (mass spectrometer. The incremental-ramp VO2-WR slope was 10.3 ± 0.7 ml·min-1·W-1, whereas that of the descending limb of the decremental ramp was 14.2 ± 1.1 ml·min-1·W-1 (p < 0.005. The sub-maximal decremental-ramp slope, however, was only 9.8 ± 0.9 ml·min-1·W-1: not significantly different from that of the incremental ramp. This suggests that the VO2 response in the supra-ӨL domain of incremental-ramp exercise manifests not actual, but pseudo, first-order kinetics.

  8. Chemical-Based Formulation Design: Virtual Experimentation

    DEFF Research Database (Denmark)

    Conte, Elisa; Gani, Rafiqul

    This paper presents a software tool, the virtual Product-Process Design laboratory (virtual PPD-lab), and the virtual experimental scenarios for the design/verification of consumer-oriented liquid formulated products where the software can be used. For example, the software can be employed for the design......, the additives and/or their mixtures (formulations). Therefore, the experimental resources can focus on a few candidate product formulations to find the best product. The virtual PPD-lab allows various options for experimentations related to design and/or verification of the product. For example, the selection...... design, model adaptation). All of the above helps to perform virtual experiments by blending chemicals together and observing their predicted behaviour. The paper will highlight the application of the virtual PPD-lab in the design and/or verification of different consumer products (paint formulation...

  9. Understanding Pesticide Risks: Toxicity and Formulation

    OpenAIRE

    Muntz, Helen; Miller, Rhonda; Alston, Diane

    2016-01-01

    This fact sheet provides information about pesticide risks to human health, primary means of pesticide exposure, standardized measures of pesticide toxicity, pesticide signal words and type of pesticide formulations.

  10. Emulsifying Formulation of Rosuvastatin Calcium for Improved ...

    African Journals Online (AJOL)

    solubility study, liquid formulations were prepared using LAS/Capryol 90: Maisine 35-1 as oil phase and. Tween 20 with ... evaluated for globule size, zeta potential, and emulsion properties. ..... surfactants which decreases the electrostatic.

  11. Drug Nanoparticle Formulation Using Ascorbic Acid Derivatives

    Directory of Open Access Journals (Sweden)

    Kunikazu Moribe

    2011-01-01

    Full Text Available Drug nanoparticle formulation using ascorbic acid derivatives and its therapeutic uses have recently been introduced. Hydrophilic ascorbic acid derivatives such as ascorbyl glycoside have been used not only as antioxidants but also as food and pharmaceutical excipients. In addition to drug solubilization, drug nanoparticle formation was observed using ascorbyl glycoside. Hydrophobic ascorbic acid derivatives such as ascorbyl mono- and di-n-alkyl fatty acid derivatives are used either as drugs or carrier components. Ascorbyl n-alkyl fatty acid derivatives have been formulated as antioxidants or anticancer drugs for nanoparticle formulations such as micelles, microemulsions, and liposomes. ASC-P vesicles called aspasomes are submicron-sized particles that can encapsulate hydrophilic drugs. Several transdermal and injectable formulations of ascorbyl n-alkyl fatty acid derivatives were used, including ascorbyl palmitate.

  12. Dynamic psychiatry and the psychodynamic formulation

    African Journals Online (AJOL)

    processes and psychiatric disorders are biological, the range ... The formulation furthermore helps with the initial orientation towards the patient: it anticipates and predicts how the patient ..... contributed to problems with his sexual identity.

  13. Formulation of Thermosensitive Hydrogel Containing Cyclodextrin ...

    African Journals Online (AJOL)

    Materials. Chitosan (deacetylation degree, DDA = 80 %) was obtained from HiMedia Laboratories Pvt. ... Sterile formulations were ... Chilled β-GP aqueous solution (sterilized through ..... generally decreasing away from the center of the tumor.

  14. Current advances on polynomial resultant formulations

    Science.gov (United States)

    Sulaiman, Surajo; Aris, Nor'aini; Ahmad, Shamsatun Nahar

    2017-08-01

    The availability of computer algebra systems (CAS) has led to the resurrection of the resultant method for eliminating one or more variables from a system of polynomials. The resultant matrix method has advantages over the Groebner basis and Ritt-Wu methods, whose complexity and storage requirements are high. This paper focuses on current resultant matrix formulations and investigates their ability, or otherwise, to produce optimal resultant matrices. A determinantal formula that gives the exact resultant, or a formulation that minimizes the presence of extraneous factors in the resultant, is often sought when the conditions for its existence can be determined. We present some applications of elimination theory via resultant formulations, and examples are given to explain each of the presented settings.
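    One concrete resultant formulation in this line of work is the classical Sylvester matrix, whose determinant is the resultant of two univariate polynomials (zero exactly when they share a root). A minimal sketch using exact fraction-based elimination, for illustration only and not tied to any specific formulation surveyed in the paper:

```python
from fractions import Fraction

def sylvester_matrix(p, q):
    """Sylvester matrix of polynomials p, q given as coefficient lists,
    highest degree first (e.g. x^2 - 1 -> [1, 0, -1])."""
    m, n = len(p) - 1, len(q) - 1            # degrees of p and q
    size = m + n
    M = [[0] * size for _ in range(size)]
    for i in range(n):                       # n shifted copies of p
        for j, c in enumerate(p):
            M[i][i + j] = c
    for i in range(m):                       # m shifted copies of q
        for j, c in enumerate(q):
            M[n + i][i + j] = c
    return M

def resultant(p, q):
    """Resultant = determinant of the Sylvester matrix, computed by exact
    Gaussian elimination over the rationals."""
    M = [[Fraction(c) for c in row] for row in sylvester_matrix(p, q)]
    n, det = len(M), Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)               # singular: common root exists
        if piv != col:
            M[col], M[piv] = M[piv], M[col]  # row swap flips the sign
            det = -det
        det *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    return det
```

    For example, x² - 1 and x - 1 share the root x = 1, so their resultant vanishes, while x² - 1 and x - 2 do not.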

  15. Formulation and Characterization of Biodegradable Medicated ...

    African Journals Online (AJOL)

    PEG)-600, tributyl citrate, PEG-200, PEG-300, PEG-400, PEG-4000, triethyl citrate and castor oil. The gum formulations were characterized for the following parameters: texture profile analysis (TPA), biodegradation, in vitro drug release using a ...

  16. A symplectic formulation of relativistic particle dynamics

    International Nuclear Information System (INIS)

    Tulczyjew, W.M.

    1976-12-01

    Particle mechanics is formulated in terms of symplectic relations and infinitesimal symplectic relations. Generating functions of symplectic relations are shown to be classical counterparts of Green's functions of wave mechanics. (orig.) [de

  17. A symplectic formulation of relativistic particle dynamics

    International Nuclear Information System (INIS)

    Tulczyjew, W.M.

    1977-01-01

    Particle mechanics is formulated in terms of symplectic relations and infinitesimal symplectic relations. Generating functions of symplectic relations are shown to be classical counterparts of Green's functions of wave mechanics. (author)

  18. Paclitaxel Albumin-stabilized Nanoparticle Formulation

    Science.gov (United States)

    This page contains brief information about paclitaxel albumin-stabilized nanoparticle formulation and a collection of links to more information about the use of this drug, research results, and ongoing clinical trials.

  19. Maximal power output during incremental exercise by resistance and endurance trained athletes.

    Science.gov (United States)

    Sakthivelavan, D S; Sumathilatha, S

    2010-01-01

    This study was aimed at comparing the maximal power output by resistance trained and endurance trained athletes during incremental exercise. Thirty male athletes who received resistance training (Group I) and thirty male athletes of similar age group who received endurance training (Group II) for a period of more than 1 year were chosen for the study. Physical parameters were measured and exercise stress testing was done on a cycle ergometer with a portable gas analyzing system. The maximal progressive incremental cycle ergometer power output at peak exercise and carbon dioxide production at VO2max were measured. Highly significant (P biofeedback and perk up the athlete's performance.

  20. BMI and BMI SDS in childhood: annual increments and conditional change

    OpenAIRE

    Brannsether-Ellingsen, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Juliusson, Petur Benedikt

    2016-01-01

    Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m2) and BMI SDS are summarised by...

  1. Learning in Different Modes: The Interaction Between Incremental and Radical Change

    DEFF Research Database (Denmark)

    Petersen, Anders Hedegaard; Boer, Harry; Gertsen, Frank

    2004-01-01

    The objective of the study presented in this article is to contribute to the development of theory on continuous innovation, i.e. the combination of operationally effective exploitation and strategically flexible exploration. A longitudinal case study is presented of the interaction between...... incremental and radical change in a Danish company, observed through the lens of organizational learning. The radical change process is described in five phases, each of which had its own effects on incremental change initiatives in the company. The research identified four factors explaining these effects, all...

  2. A program for the numerical control of a pulse increment system

    Energy Technology Data Exchange (ETDEWEB)

    Gray, D.C.

    1963-08-21

    This report will describe the important features of the development of magnetic tapes for the numerical control of a pulse-increment system consisting of a modified Gorton lathe and its associated control unit developed by L. E. Foley of Equipment Development Service, Engineering Services, General Electric Co., Schenectady, N.Y. Included is a description of CUPID (Control and Utilization of Pulse Increment Devices), a FORTRAN program for the design of these tapes on the IBM 7090 computer, and instructions for its operation.

  3. Incremental Approach to the Technology of Test Design for Industrial Projects

    Directory of Open Access Journals (Sweden)

    P. D. Drobintsev

    2014-01-01

    Full Text Available The paper presents an approach to reducing the effort of developing test suites for industrial software products, based on an incremental technology. The main problems to be solved by the incremental technology are the fully automated design of test scenarios and a significant reduction of test explosion. The proposed approach provides solutions to these problems through the joint work of designer and customer, through the integration of symbolic verification with the automatic generation of test suites, and through the use of an efficient technology based on the VRS/TAT toolset.

  4. Incremental electrohydraulic forming - A new approach for the manufacture of structured multifunctional sheet metal blanks

    Science.gov (United States)

    Djakow, Eugen; Springer, Robert; Homberg, Werner; Piper, Mark; Tran, Julian; Zibart, Alexander; Kenig, Eugeny

    2017-10-01

    Electrohydraulic Forming (EHF) processes permit the production of complex, sharp-edged geometries even when high-strength materials are used. Unfortunately, the forming zone is often limited as compared to other sheet metal forming processes. The use of a special industrial-robot-based tool setup and an incremental process strategy could provide a promising solution for this problem. This paper describes such an innovative approach using an electrohydraulic incremental forming machine, which can be employed to manufacture the large multifunctional and complex part geometries in steel, aluminium, magnesium and reinforced plastic that are employed in lightweight constructions or heating elements.

  5. The Boltzmann equation in the difference formulation

    Energy Technology Data Exchange (ETDEWEB)

    Szoke, Abraham [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brooks III, Eugene D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    First we recall the assumptions that are needed for the validity of the Boltzmann equation and for the validity of the compressible Euler equations. We then present the difference formulation of these equations and make a connection with the time-honored Chapman - Enskog expansion. We discuss the hydrodynamic limit and calculate the thermal conductivity of a monatomic gas, using a simplified approximation for the collision term. Our formulation is more consistent and simpler than the traditional derivation.

  6. Limitations of high dose carrier based formulations.

    Science.gov (United States)

    Yeung, Stewart; Traini, Daniela; Tweedie, Alan; Lewis, David; Church, Tanya; Young, Paul M

    2018-06-10

    This study was performed to investigate how increasing the active pharmaceutical ingredient (API) content within a formulation affects the dispersion of particles and the aerosol performance efficiency of a carrier-based dry powder inhalable (DPI) formulation, using a custom dry powder inhaler (DPI) development rig. Five formulations with varying concentrations of the API beclomethasone dipropionate (BDP), between 1% and 30% (w/w), were formulated as a multi-component carrier system containing coarse lactose and fine lactose with magnesium stearate. The morphology of the formulation and each component was investigated using scanning electron micrographs, while the particle size was measured by laser diffraction. The aerosol performance, in terms of aerodynamic diameter, was assessed using the British Pharmacopoeia Apparatus E cascade impactor (Next Generation Impactor). Chemical analysis of the API was performed by high performance liquid chromatography (HPLC). Increasing the concentration of BDP in the blend resulted in increasing numbers and size of individual agglomerates and densely packed BDP multi-layers on the surface of the lactose carrier. BDP present within the multi-layer did not disperse as individual primary particles but as dense agglomerates, which led to a decrease in aerosol performance and an increased percentage of BDP deposition within the Apparatus E induction port and pre-separator. As the BDP concentration in the blends increases, the aerosol performance of the formulation decreases in an inversely proportional manner. Concurrently, the percentage of API deposition in the induction port and pre-separator could also be linked to the amount of micronized particles (BDP and micronized composite carrier) present in the formulation. The effect of such dose increases on the behaviour of aerosol dispersion was investigated to gain greater insight into the development and optimisation of higher-dosed carrier-based formulations.

  7. Chemicals-Based Formulation Design: Virtual Experimentations

    DEFF Research Database (Denmark)

    Conte, Elisa; Gani, Rafiqul

    2011-01-01

    This paper presents a systematic procedure for virtual experimentations related to the design of liquid formulated products. All the experiments that need to be performed when designing a liquid formulated product (lotion), such as ingredients selection and testing, solubility tests, property mea...... on the design of an insect repellent lotion will show that the software is an essential instrument in decision making, and that it reduces time and resources since experimental efforts can be focused on one or few product alternatives....

  8. Comments on alternate formulations for preequilibrium decay

    International Nuclear Information System (INIS)

    Blann, M.

    1978-01-01

    The physical and mathematical differences of several formulations for preequilibrium decay are discussed. Mathematical models and examples are presented or referred to in order to illustrate what the author believes to be errors in the exciton formulation as being due to improper inclusion of spectator effects. An earlier work of Gadioli et al. is reinterpreted, and quotations therein to work of the present author are corrected

  9. The coevent formulation of quantum theory

    International Nuclear Information System (INIS)

    Wallden, Petros

    2013-01-01

    Understanding quantum theory has been a subject of debate since its birth. Many different formulations and interpretations have been proposed. Here we examine a recent novel formulation, namely the coevent formulation. It is a histories formulation and has as its starting point the Feynman path integral and the decoherence functional. The new ontology turns out to be that of a coarse-grained history. We start with a quantum measure defined on the space of histories, and the existence of zero covers rules out single histories as potential realities (the Kochen-Specker theorem cast in histories form is a special case of a zero cover). We see that allowing coarse-grained histories as potential realities avoids the previous paradoxes, maintains deductive non-contextual logic (albeit non-Boolean) and gives rise to a unique classical domain. Moreover, we can recover the probabilistic predictions of quantum theory with the use of Cournot's principle. This formulation, being both a realist formulation and based on histories, is conceptually well suited for the purposes of quantum gravity and cosmology.

  10. An herbal formulation for hemorrhoids treatment

    Directory of Open Access Journals (Sweden)

    S. Dehdari

    2017-11-01

    Full Text Available Background and objectives: Hemorrhoids is the most painful rectal disease. Straining and pregnancy seem to play chief roles in the development of hemorrhoids. Symptoms of hemorrhoids may include bleeding, inflammation and pain. Despite current medical efforts, many discomforts of hemorrhoids have not been handled. The aim of the present study was to formulate and evaluate Itrifal-e muqil (IM) tablets to achieve the desired pharmaceutical properties. Methods: Quality control tests of Allium ampeloprasum L., Commiphora mukul (Hook. ex Stocks) Engl., Phyllanthus emblica L., Terminalia chebula Retz. and Terminalia bellerica Retz. were performed. Afterwards, different formulations were prepared and their physical properties were evaluated. Subsequently, the formulation was coated and its physicochemical characteristics were assessed. Results: All of the herbs demonstrated good results in quality control tests according to the United States Pharmacopeia (USP). Formulation-1, which was prepared exactly according to the manufacturing process of IM described in traditional medicine manuscripts, did not show suitable pharmaceutical properties. Among the different formulations, Formulation-3, which consisted of A. ampeloprasum, C. mukul, P. emblica, T. chebula and T. bellerica, displayed the best outcomes across the different tests. Conclusion: Modern pharmaceutical approaches can be excellently adapted for IM preparations.

  11. Enthalpy increment measurements of Sr3Zr2O7(s) and Sr4Zr3O10(s)

    International Nuclear Information System (INIS)

    Banerjee, A.; Dash, S.; Prasad, R.; Venugopal, V.

    1998-01-01

    Enthalpy increment measurements on Sr3Zr2O7(s) and Sr4Zr3O10(s) were carried out using a Calvet micro-calorimeter. The enthalpy increment values were least-squares analyzed with the constraints that H°(T) - H°(298.15 K) equals zero at T = 298.15 K and that Cp°(298.15 K) equals the estimated value. The dependence of the enthalpy increment on temperature is given. (orig.)
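    The constrained fit described above (the increment forced to zero at 298.15 K and its slope forced to the estimated heat capacity) is an equality-constrained linear least-squares problem. A generic sketch follows; the four-term fitting basis and the temperature scaling are illustrative assumptions, since the abstract does not give the fitting function:

```python
import numpy as np

def constrained_lsq(X, y, C, d):
    """Minimize ||X b - y||^2 subject to C b = d, via the KKT system."""
    n, k = X.shape[1], C.shape[0]
    KKT = np.block([[X.T @ X, C.T],
                    [C, np.zeros((k, k))]])
    rhs = np.concatenate([X.T @ y, d])
    return np.linalg.solve(KKT, rhs)[:n]     # drop the Lagrange multipliers

def enthalpy_fit(T, H, Cp0, T0=298.15):
    """Fit H(T) - H(T0) = a*x + b*x^2 + c/x + e with x = T/1000 (rescaled
    for conditioning), constrained so the increment is zero at T0 and its
    slope dH/dT at T0 equals Cp0 (hypothetical basis, for illustration)."""
    x = np.asarray(T, float) / 1000.0
    x0 = T0 / 1000.0
    X = np.column_stack([x, x ** 2, 1.0 / x, np.ones_like(x)])
    C = np.array([[x0, x0 ** 2, 1.0 / x0, 1.0],        # value at T0 is 0
                  [1.0, 2 * x0, -1.0 / x0 ** 2, 0.0]])  # slope at T0 is Cp0
    d = np.array([0.0, 1000.0 * Cp0])                   # dH/dT = (dH/dx)/1000
    return constrained_lsq(X, np.asarray(H, float), C, d)
```

    With exact synthetic data the fit recovers the generating coefficients and satisfies both constraints to machine precision.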

  12. Formulation studies for mirtazapine orally disintegrating tablets.

    Science.gov (United States)

    Yıldız, Simay; Aytekin, Eren; Yavuz, Burçin; Bozdağ Pehlivan, Sibel; Ünlü, Nurşen

    2016-01-01

    Orally disintegrating tablets (ODTs) have recently gained much attention as a means to fulfill the needs of pediatric, geriatric, and psychiatric patients with dysphagia. The aim of this study was to develop new ODT formulations containing mirtazapine, an antidepressant drug molecule with a bitter taste, by using simple and inexpensive preparation methods such as coacervation and direct compression, and to compare their characteristics with those of the reference product (Remeron SolTab). The coacervation method was chosen for taste masking of mirtazapine. In vitro characterization studies such as diameter and thickness, weight variation, tablet hardness, tablet friability and disintegration time were performed on the tablet formulations. Wetting time and in vitro dissolution tests of the developed ODTs were also studied using 900 mL 0.1 N HCl, 900 mL pH 6.8 phosphate buffer or 900 mL pH 4.5 acetate buffer at 37 ± 0.2 °C as dissolution media. The ratio of Eudragit® E-100 was chosen as 6% (w/w), since the dissolution profile of A1 (6% Eudragit® E-100) was found to be closer to the reference product than A2 (4% Eudragit® E-100) and A3 (8% Eudragit® E-100). The Group D, E and F formulations presented better results in terms of disintegration time. Dissolution results indicated that the Group E and F formulations showed optimum properties in all three dissolution media. Formulations D1, D4, D5, E3, E4, F1 and F5 were found suitable as ODT formulations due to their favorable disintegration times and dissolution profiles. The developed mirtazapine ODTs were found promising in terms of showing characteristics similar to the original formulation.

  13. TURVA-2012: Formulation of radionuclide release scenarios

    International Nuclear Information System (INIS)

    Marcos, Nuria; Hjerpe, Thomas; Snellman, Margit; Ikonen, Ari; Smith, Paul

    2014-01-01

    TURVA-2012 is Posiva's safety case in support of the Preliminary Safety Analysis Report (PSAR) and application for a construction licence for a repository for disposal of spent nuclear fuel at the Olkiluoto site in south-western Finland. This paper gives a summary of the scenarios and the methodology followed in formulating them as described in TURVA-2012: Formulation of Radionuclide Release Scenarios (Posiva, 2013). The scenarios are further analysed in TURVA-2012: Assessment of Radionuclide Release Scenarios for the Repository System and TURVA-2012: Biosphere Assessment (Posiva, 2012a, 2012b). The formulation of scenarios takes into account the safety functions of the main barriers of the repository system and the uncertainties in the features, events, and processes (FEP) that may affect the entire disposal system (i.e. repository system plus the surface environment) from the emplacement of the first canister until the far future. In the report TURVA-2012: Performance Assessment (2012d), the performance of the engineered and natural barriers has been assessed against the loads expected during the evolution of the repository system and the site. Uncertainties have been identified and these are taken into account in the formulation of radionuclide release scenarios. The uncertainties in the FEP and evolution of the surface environment are taken into account in formulating the surface environment scenarios used ultimately in estimating radiation exposure. Formulating radionuclide release scenarios for the repository system links the reports Performance Assessment and Assessment of Radionuclide Release Scenarios for the Repository System. The formulation of radionuclide release scenarios for the surface environment brings together biosphere description and the surface environment FEP and is the link to the assessment of the surface environment scenarios summarised in TURVA-2012: Biosphere Assessment. (authors)

  14. The Trait Emotional Intelligence Questionnaire: Internal Structure, Convergent, Criterion, and Incremental Validity in an Italian Sample

    Science.gov (United States)

    Andrei, Federica; Smith, Martin M.; Surcinelli, Paola; Baldaro, Bruno; Saklofske, Donald H.

    2016-01-01

    This study investigated the structure and validity of the Italian translation of the Trait Emotional Intelligence Questionnaire. Data were self-reported from 227 participants. Confirmatory factor analysis supported the four-factor structure of the scale. Hierarchical regressions also demonstrated its incremental validity beyond demographics, the…

  15. One Size Does Not Fit All: Managing Radical and Incremental Creativity

    Science.gov (United States)

    Gilson, Lucy L.; Lim, Hyoun Sook; D'Innocenzo, Lauren; Moye, Neta

    2012-01-01

    This research extends creativity theory by re-conceptualizing creativity as a two-dimensional construct (radical and incremental) and examining the differential effects of intrinsic motivation, extrinsic rewards, and supportive supervision on perceptions of creativity. We hypothesize and find two distinct types of creativity that are associated…

  16. Relating annual increments of the endangered Blanding's turtle plastron growth to climate.

    Science.gov (United States)

    Richard, Monik G; Laroque, Colin P; Herman, Thomas B

    2014-05-01

    This research is the first published study to report a relationship between climate variables and plastron growth increments of turtles, in this case the endangered Nova Scotia Blanding's turtle (Emydoidea blandingii). We used techniques and software common to the discipline of dendrochronology to successfully cross-date our growth increment data series, to detrend and average our series of 80 immature Blanding's turtles into one common chronology, and to seek correlations between the chronology and environmental temperature and precipitation variables. Our cross-dated chronology had a series intercorrelation of 0.441 (above 99% confidence interval), an average mean sensitivity of 0.293, and an average unfiltered autocorrelation of 0.377. Our master chronology represented increments from 1975 to 2007 (33 years), with index values ranging from a low of 0.688 in 2006 to a high of 1.303 in 1977. Univariate climate response function analysis on mean monthly air temperature and precipitation values revealed a positive correlation with the previous year's May temperature and current year's August temperature; a negative correlation with the previous year's October temperature; and no significant correlation with precipitation. These techniques for determining growth increment response to environmental variables should be applicable to other turtle species and merit further exploration.

  17. An Empirical Analysis of Incremental Capital Structure Decisions Under Managerial Entrenchment

    NARCIS (Netherlands)

    de Jong, A.; Veld, C.H.

    1998-01-01

    We study incremental capital structure decisions of Dutch companies. From 1977 to 1996 these companies have made 110 issues of public and private seasoned equity and 137 public issues of straight debt. Managers of Dutch companies are entrenched. For this reason a discrepancy exists between

  18. Volatilities, traded volumes, and the hypothesis of price increments in derivative securities

    Science.gov (United States)

    Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik

    2007-08-01

    A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices shows no long-memory property, while the volatility and the traded volume exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to the logarithmic increments of KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. A comparison of the three tick-data sets showed that the higher-order correlation inherent in the logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes supports the hypothesis of price changes.
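    The central tool of this record, detrended fluctuation analysis, can be sketched in a few lines. This is the standard first-order DFA (integrate, detrend linearly in windows, average the residual fluctuations, then read off the scaling exponent), not the authors' exact code:

```python
import numpy as np

def dfa(x, scales):
    """First-order DFA: fluctuation F(s) for each window size s,
    computed on the integrated, mean-centred profile of x."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))   # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segments = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Remove a least-squares linear trend from each window.
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t)
                 for seg in segments]
        F.append(np.sqrt(np.mean(np.square(resid))))
    return np.array(F)

def dfa_exponent(x, scales):
    """Scaling exponent alpha: ~0.5 for uncorrelated data (no long memory),
    >0.5 for persistent long-range correlations."""
    return np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
```

    White noise yields alpha near 0.5 (the "no long memory" case reported for the logarithmic increments), while persistent series yield larger exponents.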

  19. BMI and BMI SDS in childhood: annual increments and conditional change.

    Science.gov (United States)

    Brannsether, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Júlíusson, Pétur Benedikt

    2017-02-01

    Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m2) and BMI SDS are summarised by percentiles. Differences according to sex, age, height, weight, initial BMI and weight status on the BMI and BMI SDS increments were assessed with multiple linear regression. Results: BMI increments depended significantly on sex, height, weight and initial BMI. Changes in BMI SDS depended significantly only on the initial BMI SDS. The distribution of conditional change in BMI SDS using a two-correlation model was close to normal (mean = 0.11, SD = 1.02, n = 1167), with 3.2% (2.3-4.4%) of the observations below -2 SD and 2.8% (2.0-4.0%) above +2 SD. Conclusion: Conditional change in BMI SDS can be used to detect unexpectedly large changes in BMI SDS. Although this method requires the use of a computer, it may be clinically useful to detect aberrant weight development.
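    The conditional-change idea can be illustrated with the usual one-correlation form: given the correlation r between two annual SDS measurements, the follow-up score is standardised against its expectation given the baseline. The formula below is the textbook bivariate-normal version, for illustration; the paper's two-correlation model is a refinement of it:

```python
import math

def conditional_change_sds(z1, z2, r):
    """Conditional change in SDS: how unusual the follow-up score z2 is
    given the baseline score z1, under a bivariate normal model with
    inter-measurement correlation r. Values beyond +/-2 flag unexpectedly
    large 1-year changes."""
    return (z2 - r * z1) / math.sqrt(1.0 - r * r)
```

    With r = 0 this reduces to z2 itself; as r grows, staying at the same SDS becomes the expected course and any deviation from it is amplified.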

  20. Investigating Postgraduate College Admission Interviews: Generalizability Theory Reliability and Incremental Predictive Validity

    Science.gov (United States)

    Arce-Ferrer, Alvaro J.; Castillo, Irene Borges

    2007-01-01

    The use of face-to-face interviews is controversial for college admissions decisions in light of the lack of availability of validity and reliability evidence for most college admission processes. This study investigated reliability and incremental predictive validity of a face-to-face postgraduate college admission interview with a sample of…

  1. Software designs of image processing tasks with incremental refinement of computation.

    Science.gov (United States)

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent non-incremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest-based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
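    The bitplane-based incremental computation described above can be sketched for 2-D convolution: because convolution is linear, processing the image's bitplanes most-significant-first means that terminating early still yields the result up to the computed precision. A toy illustration (not the authors' released designs):

```python
import numpy as np

def bitplanes(img, nbits=8):
    """Split an unsigned-integer image into bitplanes, most significant first."""
    return [((img >> p) & 1).astype(np.float64)
            for p in range(nbits - 1, -1, -1)]

def conv2d_valid(a, k):
    """Naive 'valid' 2-D correlation, used here as the reference result."""
    H, W = a.shape
    h, w = k.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(a[i:i + h, j:j + w] * k)
    return out

def incremental_conv(img, kernel, planes_used, nbits=8):
    """Approximate convolution from the top `planes_used` bitplanes only;
    each additional plane refines the previous accumulator, so the
    computation can be stopped early at reduced precision."""
    acc = None
    for p, plane in enumerate(bitplanes(img, nbits)[:planes_used]):
        term = float(1 << (nbits - 1 - p)) * conv2d_valid(plane, kernel)
        acc = term if acc is None else acc + term
    return acc
```

    Using all eight planes reproduces the exact convolution; fewer planes trade precision for fewer clock cycles.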

  2. How to Perform Precise Soil and Sediment Sampling? One solution: The Fine Increment Soil Collector (FISC)

    International Nuclear Information System (INIS)

    Mabit, L.; Toloza, A.; Meusburger, K.; Alewell, C.; Iurian, A-R.; Owens, P.N.

    2014-01-01

    Soil and sediment related research for terrestrial agri-environmental assessments requires accurate depth-incremental sampling to perform detailed analysis of the physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow the collection of soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”

  3. Per tree estimates with n-tree distance sampling: an application to increment core data

    Science.gov (United States)

    Thomas B. Lynch; Robert F. Wittwer

    2002-01-01

    Per tree estimates using the n trees nearest a point can be obtained by using a ratio of per unit area estimates from n-tree distance sampling. This ratio was used to estimate average age by d.b.h. classes for cottonwood trees (Populus deltoides Bartr. ex Marsh.) on the Cimarron National Grassland. Increment...
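
    The ratio idea can be sketched directly; the estimator shape is the standard ratio-of-per-unit-area form, while the data, the function name and the plot-area convention (a circle through the n-th nearest tree) are illustrative assumptions, not the paper's method details.

```python
# Hedged sketch of a ratio-of-per-unit-area estimate from n-tree distance
# sampling. Each sample point's n nearest trees define a circular plot whose
# radius is the distance to the n-th tree; every tree on it then represents
# 1/area trees per unit area. Average age = (per-area age total) /
# (per-area tree count). All numbers are illustrative.

import math

def ratio_mean_age(sample_points, n):
    """sample_points: per point, a list of (age, distance) for the n nearest trees."""
    num = den = 0.0
    for trees in sample_points:
        r = max(d for _, d in trees)          # distance to the n-th nearest tree
        expansion = 1.0 / (math.pi * r * r)   # trees per unit area each tree represents
        num += sum(age for age, _ in trees) * expansion
        den += n * expansion
    return num / den

points = [[(34, 3.0), (41, 4.5)], [(28, 2.0), (30, 5.0)]]
mean_age = ratio_mean_age(points, n=2)
# A ratio estimate stays inside the range of the observed ages.
assert 28 <= mean_age <= 41
```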

  4. Literature Review of Data on the Incremental Costs to Design and Build Low-Energy Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, W. D.

    2008-05-14

    This document summarizes findings from a literature review into the incremental costs associated with low-energy buildings. The goal of this work is to help establish as firm an analytical foundation as possible for the Building Technology Program's cost-effective net-zero energy goal in the year 2025.

  5. Analogical reasoning: An incremental or insightful process? What cognitive and cortical evidence suggests.

    Science.gov (United States)

    Antonietti, Alessandro; Balconi, Michela

    2010-06-01

    Abstract The step-by-step, incremental nature of analogical reasoning can be questioned, since analogy making appears to be an insight-like process. This alternative view of analogical thinking can be integrated in Speed's model, even though the alleged role played by dopaminergic subcortical circuits needs further supporting evidence.

  6. A diameter increment model for Red Fir in California and Southern Oregon

    Science.gov (United States)

    K. Leroy Dolph

    1992-01-01

    Periodic (10-year) diameter increment of individual red fir trees in California and southern Oregon can be predicted from the initial diameter and crown ratio of each tree, site index, percent slope, and aspect of the site. The model actually predicts the natural logarithm of the change in squared diameter inside bark between the start and the end of a 10-year growth period....

  7. Is It that Difficult to Find a Good Preference Order for the Incremental Algorithm?

    Science.gov (United States)

    Krahmer, Emiel; Koolen, Ruud; Theune, Mariet

    2012-01-01

    In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order.…

  8. How to Perform Precise Soil and Sediment Sampling? One solution: The Fine Increment Soil Collector (FISC)

    Energy Technology Data Exchange (ETDEWEB)

    Mabit, L.; Toloza, A. [Soil and Water Management and Crop Nutrition Laboratory, IAEA, Seibersdorf (Austria); Meusburger, K.; Alewell, C. [Environmental Geosciences, Department of Environmental Sciences, University of Basel, Basel (Switzerland); Iurian, A-R. [Babes-Bolyai University, Faculty of Environmental Science and Engineering, Cluj-Napoca (Romania); Owens, P. N. [Environmental Science Program and Quesnel River Research Centre, University of Northern British Columbia, Prince George, British Columbia (Canada)

    2014-07-15

    Soil and sediment related research for terrestrial agrienvironmental assessments requires accurate depth incremental sampling to perform detailed analysis of physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow collecting soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”.

  9. Evaluating growth assumptions using diameter or radial increments in natural even-aged longleaf pine

    Science.gov (United States)

    John C. Gilbert; Ralph S. Meldahl; Jyoti N. Rayamajhi; John S. Kush

    2010-01-01

    When using increment cores to predict future growth, one often assumes future growth is identical to past growth for individual trees. Once this assumption is accepted, a decision has to be made between which growth estimate should be used, constant diameter growth or constant basal area growth. Often, the assumption of constant diameter growth is used due to the ease...

  10. Diagnostic value of triphasic incremental helical CT in early and progressive gastric carcinoma

    International Nuclear Information System (INIS)

    Gao Jianbo; Yan Xuehua; Li Mengtai; Guo Hua; Chen Xuejun; Guan Sheng; Zhang Xiefu; Li Shuxin; Yang Xiaopeng

    2001-01-01

    Objective: To investigate the helical CT enhancement characteristics of gastric carcinoma, and the diagnostic and preoperative staging value of triphasic incremental helical CT of the stomach with the water-filling method. Methods: Both double-contrast barium examination and triphasic incremental helical CT of the stomach with the water-filling method were performed in 46 patients with gastric carcinoma. Results: (1) The normal gastric wall exhibited a one-layered structure in 18 patients and a two- or three-layered structure in 28 patients in the arterial and portal venous phases. (2) Two cases of early gastric cancer showed marked enhancement in the arterial and portal venous phases and obvious attenuation of enhancement in the equilibrium phase. By contrast, 32 of the 44 advanced gastric carcinomas showed greater enhancement in the venous phase than in the arterial phase (t = 4.226, P < 0.05). (3) The overall accuracy of triphasic incremental helical CT in determining TNM stage was 81.0%. Conclusion: Different types of gastric carcinoma have different enhancement features. Triphasic incremental helical CT is more accurate than conventional CT in the preoperative staging of gastric carcinoma

  11. On the search for an appropriate metric for reaction time to suprathreshold increments and decrements.

    Science.gov (United States)

    Vassilev, Angel; Murzac, Adrian; Zlatkova, Margarita B; Anderson, Roger S

    2009-03-01

    Weber contrast, DeltaL/L, is a widely used contrast metric for aperiodic stimuli. Zele, Cao & Pokorny [Zele, A. J., Cao, D., & Pokorny, J. (2007). Threshold units: A correct metric for reaction time? Vision Research, 47, 608-611] found that neither Weber contrast nor its transform to detection-threshold units equates human reaction times in response to luminance increments and decrements under selective rod stimulation. Here we show that their rod reaction times are equated when plotted against the spatial luminance ratio between the stimulus and its background (L(max)/L(min), the larger and smaller of background and stimulus luminances). Similarly, reaction times to parafoveal S-cone selective increments and decrements from our previous studies [Murzac, A. (2004). A comparative study of the temporal characteristics of processing of S-cone incremental and decremental signals. PhD thesis, New Bulgarian University, Sofia, Murzac, A., & Vassilev, A. (2004). Reaction time to S-cone increments and decrements. In: 7th European conference on visual perception, Budapest, August 22-26. Perception, 33, 180 (Abstract).], are better described by the spatial luminance ratio than by Weber contrast. We assume that the type of stimulus detection by temporal (successive) luminance discrimination, by spatial (simultaneous) luminance discrimination or by both [Sperling, G., & Sondhi, M. M. (1968). Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 58, 1133-1145.] determines the appropriateness of one or other contrast metric for reaction time.
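
    The two metrics contrasted in the study can be written down directly from the definitions given above (the numeric example is ours): for equal-magnitude Weber contrasts, an increment and a decrement produce different spatial luminance ratios.

```python
# Weber contrast dL/L versus the spatial luminance ratio Lmax/Lmin between
# stimulus and background, as discussed above. Example values are illustrative.

def weber_contrast(L_stim, L_bg):
    return (L_stim - L_bg) / L_bg

def spatial_luminance_ratio(L_stim, L_bg):
    return max(L_stim, L_bg) / min(L_stim, L_bg)

L_bg = 100.0
increment, decrement = 150.0, 50.0      # +/-50 units around the background

# Equal-magnitude Weber contrasts (+0.5 and -0.5)...
assert abs(weber_contrast(increment, L_bg)) == abs(weber_contrast(decrement, L_bg))
# ...but different spatial luminance ratios (1.5 versus 2.0), so the two
# metrics order increments and decrements differently.
assert spatial_luminance_ratio(increment, L_bg) == 1.5
assert spatial_luminance_ratio(decrement, L_bg) == 2.0
```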

  12. Lead 210 and moss-increment dating of two Finnish Sphagnum hummocks

    International Nuclear Information System (INIS)

    El-Daoushy, F.

    1982-01-01

    A comparison is presented of 210Pb dating data with moss-increment dates of selected peat material from Finland. The measurements of 210Pb were carried out by determining the granddaughter product 210Po by means of isotope dilution. The ages in 210Pb yr were calculated using the constant initial concentration and the constant rate of supply models. (U.K.)
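
    The two models named above rest on the standard 210Pb decay relation; this is a minimal sketch of the conventional age equations (symbols and the half-life value are the textbook ones, not taken from the paper).

```python
# Hedged sketch of the two 210Pb age models: constant initial concentration
# (CIC) uses the unsupported 210Pb concentration at a depth relative to the
# surface; constant rate of supply (CRS) uses the cumulative inventory below
# the depth relative to the total. Both invert exponential decay.

import math

DECAY = math.log(2) / 22.3          # 210Pb decay constant (half-life ~22.3 yr)

def cic_age(conc_at_depth, conc_at_surface):
    return math.log(conc_at_surface / conc_at_depth) / DECAY

def crs_age(inventory_below, inventory_total):
    return math.log(inventory_total / inventory_below) / DECAY

# One half-life of decay corresponds to ~22.3 yr in either model:
assert abs(cic_age(50.0, 100.0) - 22.3) < 1e-9
assert abs(crs_age(25.0, 100.0) - 44.6) < 1e-9
```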

  13. Successive 1-Month Weight Increments in Infancy Can Be Used to Screen for Faltering Linear Growth.

    Science.gov (United States)

    Onyango, Adelheid W; Borghi, Elaine; de Onis, Mercedes; Frongillo, Edward A; Victora, Cesar G; Dewey, Kathryn G; Lartey, Anna; Bhandari, Nita; Baerug, Anne; Garza, Cutberto

    2015-12-01

    Linear growth faltering in the first 2 y contributes greatly to a high stunting burden, and prevention is hampered by the limited capacity in primary health care for timely screening and intervention. This study aimed to determine an approach to predicting long-term stunting from consecutive 1-mo weight increments in the first year of life. By using the reference sample of the WHO velocity standards, the analysis explored patterns of consecutive monthly weight increments among healthy infants. Four candidate screening thresholds of successive increments that could predict stunting were considered, and one was selected for further testing. The selected threshold was applied in a cohort of Bangladeshi infants to assess its predictive value for stunting at ages 12 and 24 mo. Between birth and age 12 mo, 72.6% of infants in the WHO sample tracked within 1 SD of their weight and length. The selected screening criterion ("event") was 2 consecutive monthly increments below the 15th percentile. Bangladeshi infants were born relatively small and, on average, tracked downward from approximately age 6 mo. If the screening strategy is effective, the estimated preventable proportion in the group who experienced the event would be 34% at 12 mo and 24% at 24 mo. This analysis offers an approach for frontline workers to identify children at risk of stunting, allowing for timely initiation of preventive measures. It opens avenues for further investigation into evidence-informed application of the WHO growth velocity standards. © 2015 American Society for Nutrition.
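
    The screening criterion stated above (an "event" = two consecutive monthly increments below the 15th percentile) reduces to a simple scan; this sketch assumes the increments have already been converted to percentiles of the WHO velocity standards.

```python
# Hedged sketch of the screening rule: flag an infant once two consecutive
# 1-month weight increments fall below the 15th percentile. Inputs are
# assumed to be percentiles against the WHO weight-increment standards.

def flag_faltering_risk(increment_percentiles, threshold=15, run_length=2):
    below = 0
    for p in increment_percentiles:
        below = below + 1 if p < threshold else 0
        if below >= run_length:
            return True
    return False

assert flag_faltering_risk([40, 12, 9, 30]) is True    # two in a row below P15
assert flag_faltering_risk([12, 30, 10, 20]) is False  # never consecutive
```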

  14. Raising Cervical Cancer Awareness: Analysing the Incremental Efficacy of Short Message Service

    Science.gov (United States)

    Lemos, Marina Serra; Rothes, Inês Areal; Oliveira, Filipa; Soares, Luisa

    2017-01-01

    Objective: To evaluate the incremental efficacy of a Short Message Service (SMS) combined with a brief video intervention in increasing the effects of a health education intervention for cervical cancer prevention, over and beyond a video-alone intervention, with respect to key determinants of health behaviour change--knowledge, motivation and…

  15. The Interpersonal Measure of Psychopathy: Construct and Incremental Validity in Male Prisoners

    Science.gov (United States)

    Zolondek, Stacey; Lilienfeld, Scott O.; Patrick, Christopher J.; Fowler, Katherine A.

    2006-01-01

    The authors examined the construct and incremental validity of the Interpersonal Measure of Psychopathy (IM-P), a relatively new instrument designed to detect interpersonal behaviors associated with psychopathy. Observers of videotaped Psychopathy Checklist-Revised (PCL-R) interviews rated male prisoners (N = 93) on the IM-P. The IM-P correlated…

  16. Between structures and norms : Assessing tax increment financing for the Dutch spatial planning toolkit

    NARCIS (Netherlands)

    Root, Liz; Van Der Krabben, Erwin; Spit, Tejo

    2015-01-01

    The aim of the paper is to assess the institutional (mis)fit of tax increment financing for the Dutch spatial planning financial toolkit. By applying an institutionally oriented assessment framework, we analyse the interconnectivity of Dutch municipal finance and spatial planning structures and

  17. Analytic description of the frictionally engaged in-plane bending process incremental swivel bending (ISB)

    Science.gov (United States)

    Frohn, Peter; Engel, Bernd; Groth, Sebastian

    2018-05-01

    Kinematic forming processes shape geometries through their process parameters, allowing more universal process utilization across geometric configurations. The kinematic forming process Incremental Swivel Bending (ISB) bends sheet metal strips or profiles in plane. The sequence for bending an arc increment is composed of the steps clamping, bending, force release and feed. The bending moment is frictionally engaged by two clamping units in a laterally adjustable bending pivot. A minimum clamping force that prevents the material from slipping through the clamping units is a crucial criterion for achieving a well-defined incremental arc. Therefore, an analytic description of a single bent increment is developed in this paper. The bending moment is calculated from the uniaxial stress distribution over the profile's width, depending on the bending pivot's position. Using a Coulomb-based friction model, the necessary clamping force is described as a function of friction, offset, the dimensions of the clamping tools, strip thickness, and material parameters. Boundaries for the uniaxial stress calculation are given in dependence of friction, tool dimensions and strip thickness. The results indicate that moving the bending pivot to an eccentric position significantly affects the process's bending moment and, hence, the clamping force, which is given as a function of yield stress and hardening exponent. FE simulations validate the model with satisfactory agreement.
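
    The minimum-clamping-force criterion can be reduced to a one-line Coulomb friction balance; this simplified sketch omits the paper's treatment of offset, stress distribution and hardening, and all numbers are illustrative.

```python
# Simplified sketch: the frictional grip of the clamping units must transmit
# the bending moment, mu * F_clamp * lever >= M_bend, so the minimum clamping
# force is M_bend / (mu * lever). The full model described above additionally
# accounts for offset, tool dimensions, strip thickness and hardening.

def min_clamping_force(bending_moment, mu, effective_lever):
    """Smallest normal force keeping the strip from slipping (Coulomb model)."""
    return bending_moment / (mu * effective_lever)

# Illustrative values: 500 N*m moment, mu = 0.15, 0.05 m effective lever.
force = min_clamping_force(500.0, 0.15, 0.05)
assert force > 0
# Doubling the friction coefficient halves the required clamping force:
assert abs(min_clamping_force(500.0, 0.30, 0.05) - force / 2) < 1e-6
```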

  18. On critical cases in limit theory for stationary increments Lévy driven moving averages

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Podolskij, Mark

    averages. The limit theory heavily depends on the interplay between the given order of the increments, the considered power, the Blumenthal-Getoor index of the driving pure jump Lévy process L and the behavior of the kernel function g at 0. In this work we will study the critical cases, which were...

  19. TCAM-based High Speed Longest Prefix Matching with Fast Incremental Table Updates

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Kragelund, A.; Berger, Michael Stübert

    2013-01-01

    and consequently a higher throughput of the network search engine, since the TCAM down time caused by incremental updates is eliminated. The LPM scheme is described in HDL for FPGA implementation and compared to an existing scheme for customized CAM circuits. The paper shows that the proposed scheme can process...
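
    For context, longest prefix matching itself (independent of the TCAM update scheme, which the abstract does not detail) can be sketched in software; the table layout here is an assumption for illustration only.

```python
# Hedged software sketch of longest prefix matching (LPM): among all stored
# prefixes that cover an address, return the next hop of the longest one.
# A TCAM performs this lookup in hardware in a single cycle; this linear
# scan only illustrates the matching rule itself.

def lpm(table, addr, width=8):
    """table: {(prefix_value, prefix_len): next_hop}; addr: width-bit int."""
    best_len, best_hop = -1, None
    for (value, plen), hop in table.items():
        shift = width - plen
        if addr >> shift == value >> shift and plen > best_len:
            best_len, best_hop = plen, hop
    return best_hop

table = {(0b10000000, 1): "A", (0b10100000, 3): "B"}
assert lpm(table, 0b10110001) == "B"   # both prefixes match; /3 wins over /1
assert lpm(table, 0b10010001) == "A"   # only the /1 prefix matches
assert lpm(table, 0b00010001) is None  # no prefix matches
```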

  20. Differentiating Major and Incremental New Product Development: The Effects of Functional and Numerical Workforce Flexibility

    NARCIS (Netherlands)

    Kok, R.A.W.; Ligthart, P.E.M.

    2014-01-01

    This study seeks to explain the differential effects of workforce flexibility on incremental and major new product development (NPD). Drawing on the resource-based theory of the firm, human resource management research, and innovation management literature, the authors distinguish two types of

  1. Real Time Implementation of Incremental Fuzzy Logic Controller for Gas Pipeline Corrosion Control

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Jayapalan

    2014-01-01

    A robust virtual-instrumentation-based fuzzy incremental corrosion controller is presented to protect metallic gas pipelines. The controller output depends on the error and the change in error of the controlled variable. For corrosion control purposes, the pipe-to-soil potential is considered the process variable. The proposed fuzzy incremental controller is designed using a very simple control rule base and the most natural and unbiased membership functions. The proposed scheme is tested over a wide range of pipe-to-soil potential control. A performance comparison between the conventional proportional-integral type and the proposed fuzzy incremental controller is made in terms of several performance criteria such as peak overshoot, settling time, and rise time. Results show that the proposed controller outperforms its conventional counterpart in each case. The designed controller can be placed in automatic mode without waiting for the initial polarization to stabilize. Initial startup curves of the proportional-integral controller and the fuzzy incremental controller are reported. This controller can be used to protect metallic structures such as pipelines, tanks, concrete structures, ships, and offshore structures.
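
    A minimal sketch of an incremental (velocity-form) fuzzy controller of this general kind, with invented membership functions and a tiny averaged rule table (the paper's actual rule base and scaling are not given in the abstract):

```python
# Hedged sketch: the controller computes an output *change* from the error e
# and change-in-error de via fuzzy rules, then the change is integrated into
# the actuator command. Membership functions and gains are illustrative.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_increment(e, de):
    """Defuzzified output change (centroid of weighted rule consequents)."""
    sets = {"N": (-2, -1, 0), "Z": (-1, 0, 1), "P": (0, 1, 2)}
    out = {"N": -1.0, "Z": 0.0, "P": 1.0}         # rule-consequent singletons
    num = den = 0.0
    for le, ve in sets.items():
        for lde, vde in sets.items():
            w = min(tri(e, *ve), tri(de, *vde))   # rule firing strength
            num += w * (out[le] + out[lde]) / 2   # tiny averaged rule table
            den += w
    return num / den if den else 0.0

# Positive error drives a positive output change, and vice versa.
assert fuzzy_increment(1.0, 0.0) > 0
assert fuzzy_increment(-1.0, 0.0) < 0
assert fuzzy_increment(0.0, 0.0) == 0.0
```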

  2. On the Perturb-and-Observe and Incremental Conductance MPPT methods for PV systems

    DEFF Research Database (Denmark)

    Sera, Dezso; Mathe, Laszlo; Kerekes, Tamas

    2013-01-01

    This paper presents a detailed analysis of the two most well-known hill-climbing MPPT algorithms, the Perturb-and-Observe (P&O) and Incremental Conductance (INC). The purpose of the analysis is to clarify some common misconceptions in the literature regarding these two trackers, therefore helping...
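
    The INC decision rule analysed in the paper follows from dP/dV = 0 at the maximum power point, i.e. dI/dV = -I/V; here is a minimal sketch (the step size and the exact-equality test are illustrative simplifications of a practical tracker).

```python
# Hedged sketch of the Incremental Conductance (INC) rule: compare the
# incremental conductance dI/dV with the negative instantaneous conductance
# -I/V to decide which way to move the operating voltage.

def inc_step(v, i, v_prev, i_prev, step=0.5):
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return 0.0                     # no change: hold
        return step if di > 0 else -step
    if di / dv == -i / v:
        return 0.0                         # at the maximum power point
    return step if di / dv > -i / v else -step

# Left of the MPP (dP/dV > 0) the rule raises the voltage...
assert inc_step(v=20.0, i=4.9, v_prev=19.0, i_prev=5.0) > 0
# ...right of the MPP (dP/dV < 0) it lowers the voltage.
assert inc_step(v=25.0, i=3.0, v_prev=24.0, i_prev=4.0) < 0
```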

  3. Gradient nanostructured surface of a Cu plate processed by incremental frictional sliding

    DEFF Research Database (Denmark)

    Hong, Chuanshi; Huang, Xiaoxu; Hansen, Niels

    2015-01-01

    The flat surface of a Cu plate was processed by incremental frictional sliding at liquid nitrogen temperature. The surface treatment results in a hardened gradient surface layer as thick as 1 mm in the Cu plate, which contains a nanostructured layer on the top with a boundary spacing of the order...

  4. A gradient surface produced by combined electroplating and incremental frictional sliding

    DEFF Research Database (Denmark)

    Yu, Tianbo; Hong, Chuanshi; Kitamura, K.

    2017-01-01

    A Cu plate was first electroplated with a Ni layer, with a thickness controlled to be between 1 and 2 mu m. The coated surface was then deformed by incremental frictional sliding with liquid nitrogen cooling. The combined treatment led to a multifunctional surface with a gradient in strain...

  5. The effects of the pine processionary moth on the increment of ...

    African Journals Online (AJOL)

    2009-05-18

    ... sycophanta L. (Coleoptera: Carabidae) used against the pine processionary moth (Thaumetopoea pityocampa Den. & Schiff.) (Lepidoptera: Thaumetopoeidae) in biological control. T. J. Zool. 30:181-185. Kanat M, Sivrikaya F (2005). Effect of the pine processionary moth on diameter increment of Calabrian ...

  6. Incrementally Detecting Change Types of Spatial Area Object: A Hierarchical Matching Method Considering Change Process

    Directory of Open Access Journals (Sweden)

    Yanhui Wang

    2018-01-01

    Detecting and extracting the change types of spatial area objects can track area objects' spatiotemporal change patterns and provide a change backtracking mechanism for incrementally updating spatial datasets. To address the problems of the high complexity of detection methods, the high redundancy rate of detection factors, and the low degree of automation during the incremental update process, we consider the change process of area objects in an integrated way and propose a hierarchical matching method to detect the nine types of changes of area objects, while minimizing the complexity of the algorithm and the redundancy rate of detection factors. We illustrate in detail the identification, extraction, and database entry of change types, and how we achieve a close connection and organic coupling of incremental information extraction and object type-of-change detection so as to characterize the whole change process. The experimental results show that this method can successfully detect incremental information about area objects in practical applications, with the overall accuracy reaching above 90%, which is much higher than the existing weighted matching method, making it quite feasible and applicable. It helps establish the correspondence between new-version and old-version objects, and facilitates linked update processing and quality control of spatial data.
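
    As a toy illustration of type-of-change detection between two dataset versions (the paper distinguishes nine change types via hierarchical matching; the four coarse types and the fingerprint representation here are our simplification):

```python
# Hedged sketch: compare old- and new-version area objects by identifier and
# a geometry fingerprint, and classify each object's change type. Real
# matching must also handle splits, merges and re-identification.

def detect_changes(old, new):
    """old/new: {object_id: geometry_fingerprint}. Returns {id: change_type}."""
    changes = {}
    for oid in old.keys() | new.keys():
        if oid not in new:
            changes[oid] = "disappeared"
        elif oid not in old:
            changes[oid] = "newly created"
        elif old[oid] != new[oid]:
            changes[oid] = "geometry changed"
        else:
            changes[oid] = "unchanged"
    return changes

old = {"a": 0xBEEF, "b": 0xCAFE}
new = {"b": 0xCAFE, "c": 0xF00D}
assert detect_changes(old, new) == {
    "a": "disappeared", "b": "unchanged", "c": "newly created",
}
```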

  7. Combining Compact Representation and Incremental Generation in Large Games with Sequential Strategies

    DEFF Research Database (Denmark)

    Bosansky, Branislav; Xin Jiang, Albert; Tambe, Milind

    2015-01-01

    representation of sequential strategies and linear programming, or by incremental strategy generation of iterative double-oracle methods. In this paper, we present a novel hybrid of these two approaches: the compact-strategy double-oracle (CS-DO) algorithm, which combines the advantages of the compact representation...

  8. A power-driven increment borer for sampling high-density tropical wood

    Czech Academy of Sciences Publication Activity Database

    Krottenthaler, S.; Pitsch, P.; Helle, G.; Locosselli, G. M.; Ceccantini, G.; Altman, Jan; Svoboda, M.; Doležal, Jiří; Schleser, G.; Anhuf, D.

    2015-01-01

    Vol. 36, November (2015), pp. 40-44. ISSN 1125-7865. R&D Projects: GA ČR GAP504/12/1952; GA ČR(CZ) GA14-12262S. Institutional support: RVO:67985939. Keywords: tropical dendrochronology * tree sampling methods * increment cores. Subject RIV: EF - Botanics. Impact factor: 2.107, year: 2015

  9. Substructuring in the implicit simulation of single point incremental sheet forming

    NARCIS (Netherlands)

    Hadoush, A.; van den Boogaard, Antonius H.

    2009-01-01

    This paper presents a direct substructuring method to reduce the computing time of implicit simulations of single point incremental forming (SPIF). Substructuring is used to divide the finite element (FE) mesh into several non-overlapping parts. Based on the hypothesis that plastic deformation is

  10. Incremental Validity of the WJ III COG: Limited Predictive Effects beyond the GIA-E

    Science.gov (United States)

    McGill, Ryan J.; Busse, R. T.

    2015-01-01

    This study is an examination of the incremental validity of Cattell-Horn-Carroll (CHC) broad clusters from the Woodcock-Johnson III Tests of Cognitive Abilities (WJ III COG) for predicting scores on the Woodcock-Johnson III Tests of Achievement (WJ III ACH). The participants were children and adolescents, ages 6-18 (n = 4,722), drawn from the WJ…

  11. Feasibility of Incremental 2-Times Weekly Hemodialysis in Incident Patients With Residual Kidney Function

    Directory of Open Access Journals (Sweden)

    Andrew I. Chin

    2017-09-01

    Discussion: More than 50% of incident HD patients with RKF have adequate kidney urea clearance to be considered for 2-times weekly HD. When ultrafiltration volume and blood pressure stability are additionally taken into account, more than one-fourth of the total cohort could optimally start HD in an incremental fashion.

  12. A Self-Organizing Incremental Neural Network based on local distribution learning.

    Science.gov (United States)

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically, and the number of nodes will not grow unlimitedly. As the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A density-based denoising process is designed to reduce the influence of noise. Experiments show that LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.
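
    The vigilance-driven node insertion described above can be sketched minimally (no edges, merging or denoising, which LD-SOINN adds on top; the class name and parameter values are illustrative):

```python
# Hedged minimal sketch of vigilance-based incremental node learning in the
# spirit of SOINN-family networks: a sample far from every existing node
# (beyond the vigilance radius) becomes a new node; otherwise it refines the
# nearest node's position.

import math

class TinyIncrementalNet:
    def __init__(self, vigilance):
        self.vigilance = vigilance
        self.nodes = []                    # each node: [position, hit_count]

    def learn(self, x):
        if not self.nodes:
            self.nodes.append([list(x), 1])
            return
        node = min(self.nodes, key=lambda n: math.dist(n[0], x))
        if math.dist(node[0], x) > self.vigilance:
            self.nodes.append([list(x), 1])    # new knowledge: add a node
        else:
            node[1] += 1                       # move the winner toward x
            lr = 1.0 / node[1]
            node[0] = [p + lr * (xi - p) for p, xi in zip(node[0], x)]

net = TinyIncrementalNet(vigilance=1.0)
for p in [(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9), (0.1, 0.1)]:
    net.learn(p)
assert len(net.nodes) == 2   # two clusters discovered incrementally
```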

  13. 29 CFR 825.205 - Increments of FMLA leave for intermittent or reduced schedule leave.

    Science.gov (United States)

    2010-07-01

    ... intermittent leave or working a reduced leave schedule to commence or end work mid-way through a shift, such as... per week, but works only 20 hours a week under a reduced leave schedule, the employee's ten hours of... 29 Labor 3 2010-07-01 2010-07-01 false Increments of FMLA leave for intermittent or reduced...

  14. Lipid Based Formulations of Biopharmaceutics Classification System (BCS) Class II Drugs: Strategy, Formulations, Methods and Saturation

    Directory of Open Access Journals (Sweden)

    Šoltýsová I.

    2016-12-01

    Active ingredients in pharmaceuticals differ in their physico-chemical properties, and their bioavailability therefore varies. The most frequently used and most convenient route of administration of medicines is oral; however, many drugs are poorly soluble in water and thus are not sufficiently effective or suitable for such administration. For this reason, a system of lipid based formulations (LBF) was developed. A series of formulations was prepared and tested in water and biorelevant media. On the basis of selection criteria, the formulations with the best emulsification potential, good dispersion in the environment, and physical stability were selected. Samples of structurally different drugs included in Class II of the Biopharmaceutics Classification System (BCS) were obtained, namely Griseofulvin, Glibenclamide, Carbamazepine, Haloperidol, Itraconazol, Triclosan, Praziquantel and Rifaximin, for testing of maximal saturation in formulations prepared from commercially available excipients. Methods were developed for the preparation of formulations, observation and description of emulsification, determination of the maximum solubility of the drug samples in the respective formulation, and subsequent analysis. Saturation of the formulations with the drugs showed that the formulations 80 % XA and 20 % Xh, and 35 % XF and 65 % Xh, were best able to dissolve the drugs, which supports the hypothesis that it is desirable to identify a limited series of formulations that could be generally applied for this purpose.

  15. High-Level Waste Glass Formulation Model Sensitivity Study 2009 Glass Formulation Model Versus 1996 Glass Formulation Model

    International Nuclear Information System (INIS)

    Belsher, J.D.; Meinert, F.L.

    2009-01-01

    This document presents the differences between two HLW glass formulation models (GFM): The 1996 GFM and 2009 GFM. A glass formulation model is a collection of glass property correlations and associated limits, as well as model validity and solubility constraints; it uses the pretreated HLW feed composition to predict the amount and composition of glass forming additives necessary to produce acceptable HLW glass. The 2009 GFM presented in this report was constructed as a nonlinear optimization calculation based on updated glass property data and solubility limits described in PNNL-18501 (2009). Key mission drivers such as the total mass of HLW glass and waste oxide loading are compared between the two glass formulation models. In addition, a sensitivity study was performed within the 2009 GFM to determine the effect of relaxing various constraints on the predicted mass of the HLW glass.
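
    As a toy illustration of what a glass formulation model's optimization does (the real GFM uses fitted property correlations, solubility limits and nonlinear optimization; the constraint functions and numbers below are made up):

```python
# Hedged sketch: choose the maximum waste oxide loading fraction w that keeps
# every property constraint satisfied, by scanning candidate loadings. Real
# glass formulation models solve this with fitted correlations and a
# nonlinear optimizer rather than a grid scan.

def max_waste_loading(constraints, step=0.001):
    w, best = 0.0, 0.0
    while w <= 1.0:
        if all(ok(w) for ok in constraints):
            best = w                  # highest loading seen that passes all limits
        w += step
    return best

# Illustrative constraints on the loading fraction w:
constraints = [
    lambda w: 0.4 + 0.8 * w <= 1.05,  # a viscosity-like property limit
    lambda w: w <= 0.55,              # a solubility-like limit
]
assert abs(max_waste_loading(constraints) - 0.55) < 0.01
```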

  16. [Spatiotemporal variation of Populus euphratica's radial increment at lower reaches of Tarim River after ecological water transfer].

    Science.gov (United States)

    An, Hong-Yan; Xu, Hai-Liang; Ye, Mao; Yu, Pu-Ji; Gong, Jun-Jun

    2011-01-01

    Taking the Populus euphratica at the lower reaches of the Tarim River as the test object, and using dendrohydrological methods, this paper studied the spatiotemporal variation of P. euphratica's branch radial increment after ecological water transfer. There was a significant difference in the mean radial increment before and after the ecological water transfer: the radial increment after the water transfer was 125% greater than that before it. During the period of ecological water transfer, the radial increment increased with increasing water transfer quantity, and there was a positive correlation between the annual radial increment and the total water transfer quantity (R2 = 0.394), suggesting that the radial increment of P. euphratica could be taken as a performance indicator of ecological water transfer. After the ecological water transfer, the radial increment changed greatly with the distance to the river, i.e., it decreased significantly with increasing distance to the river (P = 0.007). The branch radial increment also differed by stream segment (P = 0.017), i.e., the closer to the head-water point (Daxihaizi Reservoir), the greater the branch radial increment. It was considered that the limited effect of the current ecological water transfer could scarcely change the continually deteriorating situation of the lower reaches of the Tarim River.

  17. Formulation of Sustained-Release Diltiazem Matrix Tablets Using ...

    African Journals Online (AJOL)

    Erah

    surface, their drug release behavior appears simple, but ... matrix material for the formulation of ... [Figure: drug release profiles of formulation F5 and reference formulations] ... Coviello T, Matricardi P, Marianecci C, Alhaique F.

  18. A maximal incremental effort alters tear osmolarity depending on the fitness level in military helicopter pilots.

    Science.gov (United States)

    Vera, Jesús; Jiménez, Raimundo; Madinabeitia, Iker; Masiulis, Nerijus; Cárdenas, David

    2017-10-01

    Fitness level modulates the physiological responses to exercise for a variety of indices. While intense bouts of exercise have been demonstrated to increase tear osmolarity (Tosm), it is not known whether fitness level can affect the Tosm response to acute exercise. This study aims to compare the effect of a maximal incremental test on Tosm between trained and untrained military helicopter pilots. Nineteen military helicopter pilots (ten trained and nine untrained) performed a maximal incremental test on a treadmill. A tear sample was collected before and after the physical effort to determine the exercise-induced changes in Tosm. The Bayesian statistical analysis demonstrated that Tosm significantly increased from 303.72 ± 6.76 to 310.56 ± 8.80 mmol/L after performance of the maximal incremental test. However, while the untrained group showed an acute Tosm rise (an increment of 12.33 mmol/L), the trained group maintained a stable Tosm over the physical effort (an increment of 1.45 mmol/L). There was a significant positive linear association between fat indices and Tosm changes (correlation coefficients [r] range: 0.77-0.89), whereas the Tosm changes displayed a negative relationship with cardiorespiratory capacity (VO2 max; r = -0.75) and performance parameters (r = -0.75 for velocity, and r = -0.67 for time to exhaustion). The findings from this study provide evidence that fitness level is a major determinant of the Tosm response to maximal incremental physical effort, showing a fairly linear association with several indices related to fitness level. A high fitness level seems to be beneficial in avoiding Tosm changes as a consequence of intense exercise. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Towards Reliable and Energy-Efficient Incremental Cooperative Communication for Wireless Body Area Networks.

    Science.gov (United States)

    Yousaf, Sidrah; Javaid, Nadeem; Qasim, Umar; Alrajeh, Nabil; Khan, Zahoor Ali; Ahmed, Mansoor

    2016-02-24

    In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with less PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of PER. Later on, the proposed three-stage incremental cooperation is implemented in a network layer protocol: enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. It is observed from the simulation results that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable with less PER and higher throughput than both of the counterpart protocols. However, InCo-CEStat has less throughput with a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption.

  20. A design of LED adaptive dimming lighting system based on incremental PID controller

    Science.gov (United States)

    He, Xiangyan; Xiao, Zexin; He, Shaojia

    2010-11-01

    As a new-generation energy-saving lighting source, LED is applied widely in various technology and industry fields, and the requirements on its adaptive lighting technology are increasingly rigorous, especially in automatic on-line detecting systems. In this paper, a closed-loop feedback LED adaptive dimming lighting system based on an incremental PID controller is designed, which consists of a MEGA16 chip as the Micro-controller Unit (MCU), the ambient light sensor BH1750 chip with an Inter-Integrated Circuit (I2C) interface, and a constant-current driving circuit. A given value of the light intensity required for the on-line detecting environment needs to be saved to a register of the MCU. The optical intensity, detected by the BH1750 chip in real time, is converted to a digital signal by the AD converter of the BH1750 chip, and then transmitted to the MEGA16 chip through the I2C serial bus. Since the variation law of light intensity in the on-line detecting environment is usually not easy to establish, an incremental Proportional-Integral-Differential (PID) algorithm is applied in this system. The control variable obtained by the incremental PID determines the duty cycle of the Pulse-Width Modulation (PWM). Consequently, the LED's forward current is adjusted by PWM, and the luminous intensity of the detection environment is stabilized by self-adaptation. The coefficients of the incremental PID were obtained experimentally. Compared with the traditional LED dimming system, it has the advantages of anti-interference, simple construction, fast response, and high stability owing to the use of the incremental PID algorithm and the BH1750 chip with the I2C serial bus. Therefore, it is suitable for adaptive on-line detecting applications.
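    The velocity-form update this abstract refers to can be sketched as follows. This is a minimal illustration only: the gains, the 300-lux set point and the first-order "illuminance plant" are invented for the example and are not from the paper.

```python
def make_incremental_pid(kp, ki, kd):
    """Return a step function mapping error -> increment of the control variable."""
    e_prev, e_prev2 = 0.0, 0.0

    def step(error):
        nonlocal e_prev, e_prev2
        # Velocity (incremental) form of PID:
        # du = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2})
        du = (kp * (error - e_prev)
              + ki * error
              + kd * (error - 2.0 * e_prev + e_prev2))
        e_prev2, e_prev = e_prev, error
        return du

    return step

# Toy closed loop: the controller increments a PWM duty cycle in [0, 1] that
# drives a first-order plant reaching 1000 lux at 100% duty.
pid = make_incremental_pid(kp=0.001, ki=0.0005, kd=0.0001)
target_lux, duty, lux = 300.0, 0.0, 0.0
for _ in range(300):
    duty = min(1.0, max(0.0, duty + pid(target_lux - lux)))
    lux += 0.1 * (1000.0 * duty - lux)   # plant: steady state is 1000 * duty
```

    Because the controller emits increments rather than absolute outputs, clamping the duty cycle gives a crude form of anti-windup for free, which is one reason the velocity form suits actuators like PWM drivers.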

  1. Endogenous-cue prospective memory involving incremental updating of working memory: an fMRI study.

    Science.gov (United States)

    Halahalli, Harsha N; John, John P; Lukose, Ammu; Jain, Sanjeev; Kutty, Bindu M

    2015-11-01

    Prospective memory paradigms are conventionally classified on the basis of event-, time-, or activity-based intention retrieval. In the vast majority of such paradigms, intention retrieval is provoked by some kind of external event. However, prospective memory retrieval cues that prompt intention retrieval in everyday life are commonly endogenous, i.e., linked to a specific imagined retrieval context. We describe herein a novel prospective memory paradigm wherein the endogenous cue is generated by incremental updating of working memory, and investigated the hemodynamic correlates of this task. Eighteen healthy adult volunteers underwent functional magnetic resonance imaging while they performed a prospective memory task where the delayed intention was triggered by an endogenous cue generated by incremental updating of working memory. Working memory and ongoing task control conditions were also administered. The 'endogenous-cue prospective memory condition' with incremental working memory updating was associated with maximum activations in the right rostral prefrontal cortex, and additional activations in the brain regions that constitute the bilateral fronto-parietal network, central and dorsal salience networks as well as cerebellum. In the working memory control condition, maximal activations were noted in the left dorsal anterior insula. Activation of the bilateral dorsal anterior insula, a component of the central salience network, was found to be unique to this 'endogenous-cue prospective memory task' in comparison to previously reported exogenous- and endogenous-cue prospective memory tasks without incremental working memory updating. Thus, the findings of the present study highlight the important role played by the dorsal anterior insula in incremental working memory updating that is integral to our endogenous-cue prospective memory task.

  2. Influence of increment thickness on dentin bond strength and light transmission of composite base materials.

    Science.gov (United States)

    Omran, Tarek A; Garoushi, Sufyan; Abdulmajeed, Aous A; Lassila, Lippo V; Vallittu, Pekka K

    2017-06-01

    Bulk-fill resin composites (BFCs) are gaining popularity in restorative dentistry due to reduced chair time and ease of application. This study aimed to evaluate the influence of increment thickness on dentin bond strength and light transmission of different BFCs and a new discontinuous fiber-reinforced composite. One hundred eighty extracted sound human molars were prepared for a shear bond strength (SBS) test. The teeth were divided into four groups (n = 45) according to the resin composite used: regular particulate filler resin composite: (1) G-ænial Anterior [GA] (control); bulk-fill resin composites: (2) Tetric EvoCeram Bulk Fill [TEBF] and (3) SDR; and discontinuous fiber-reinforced composite: (4) everX Posterior [EXP]. Each group was subdivided according to increment thickness (2, 4, and 6 mm). The irradiance power through the material of all groups/subgroups was quantified (MARC® Resin Calibrator; BlueLight Analytics Inc.). Data were analyzed using two-way ANOVA followed by Tukey's post hoc test. SBS and light irradiance decreased as the increment height increased (p < 0.05), regardless of the composite used. EXP presented the highest SBS in 2- and 4-mm-thick increments when compared to the other composites, although the differences were not statistically significant (p > 0.05). Light irradiance mean values differed significantly (p < 0.05) among the composites. The discontinuous fiber-reinforced composite showed the highest curing light transmission, which was reflected in improved bonding strength to the underlying dentin surface. Discontinuous fiber-reinforced composite can be applied safely in bulks of 4-mm increments, the same as other bulk-fill composites, although in 2-mm thickness the investigated composites showed better performance.

  3. A fast implementation of the incremental backprojection algorithms for parallel beam geometries

    International Nuclear Information System (INIS)

    Chen, C.M.; Wang, C.Y.; Cho, Z.H.

    1996-01-01

    Filtered-backprojection algorithms are the most widely used approaches for reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The Incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting interdependency (position and values) of adjacent pixels, the Incremental algorithm requires only O(N) and O(N^2) multiplications in contrast to O(N^2) and O(N^3) multiplications for the Shepp and Logan algorithm in two-dimensional (2-D) and three-dimensional (3-D) backprojections, respectively, for each view, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the Incremental algorithm in practice was not, however, as significant as expected. One of the main reasons is due to inevitably visiting pixels outside the beam in the searching flow scheme originally developed for the Incremental algorithm. To optimize implementation of the Incremental algorithm, an efficient scheme, namely, the coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead due to using the coded searching flow as the a priori information. The proposed coded searching flow scheme has been implemented on a Sun Sparc 10 and a Sun Sparc 20 workstation. The implementation results show that the proposed scheme is 1.45--2.0 times faster than the original searching flow scheme for most cases tested.
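    The pixel-interdependency idea behind incremental backprojection can be illustrated as below. This is a schematic sketch, not the authors' coded searching flow scheme: within one parallel-beam view at angle θ, the detector coordinate of horizontally adjacent pixels differs by the constant cos θ, so it can be updated by a single addition per pixel instead of recomputed with multiplications. Nearest-neighbour interpolation and the array sizes are simplifying assumptions.

```python
import math

def backproject_view(image, proj, theta, n):
    """Accumulate one filtered projection `proj` (length 2n) into an n-by-n image."""
    c, s = math.cos(theta), math.sin(theta)
    for y in range(n):
        # Detector coordinate of pixel (0, y); the offset n centres the detector.
        t = (0 - n / 2) * c + (y - n / 2) * s + n
        for x in range(n):
            image[y][x] += proj[int(t)]
            t += c            # incremental step: one addition per pixel

def backproject(projs, thetas, n):
    image = [[0.0] * n for _ in range(n)]
    for proj, theta in zip(projs, thetas):
        backproject_view(image, proj, theta, n)
    return image

# One view at angle 0 with a flat projection reproduces a flat image.
img = backproject([[1.0] * 8], [0.0], 4)
```

    The coded searching flow optimization described in the abstract goes further by precomputing which pixels fall inside each beam, so even the loop bounds avoid touching pixels outside the beam.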

  4. Formulation of lubricating grease using Beeswax thickener

    Science.gov (United States)

    Suhaila, N.; Japar, A.; Aizudin, M.; Aziz, A.; Najib Razali, Mohd

    2018-04-01

    Issues of environmental pollution have brought the industries to seek alternative green solutions for lubricating grease formulation. A significant challenge in producing modified grease is choosing a thickener that is an environmentally friendly material. The main purposes of the current research were to formulate lubricating grease using different types of base oils and to study the effect of the thickener on the formulated grease. Used oil and motor oil were used as the base oils for the grease preparation. Beeswax and Damar were used as thickener and additive. The grease was tested for its consistency, stability and oil bleeding. The prepared greases achieved a grease consistency of grade 2 and 3, except for the grease with unfiltered used oil. Grease formulated with used oil and synthetic oil tended to harden and lose its lubricating ability under high temperature compared to the motor oil grease. Grease modification using an environmentally friendly thickener was successfully accomplished, but the product is considered a low-temperature grease, as beeswax has a low melting point of 62°C-65°C.

  5. Bioequivalence assessment of two formulations of ibuprofen

    KAUST Repository

    Al-Talla, Zeyad

    2011-10-19

    Background: This study assessed the relative bioavailability of two formulations of ibuprofen. The first formulation was Doloraz, produced by Al-Razi Pharmaceutical Company, Amman, Jordan. The second formulation was Brufen, manufactured by Boots Company, Nottingham, UK. Methods and results: A prestudy validation of ibuprofen demonstrated long-term stability, freeze-thaw stability, precision, and accuracy. Twenty-four healthy volunteers were enrolled in this study. After overnight fasting, the two formulations (test and reference) of ibuprofen (100 mg ibuprofen/5 mL suspension) were administered as a single dose on two treatment days separated by a one-week washout period. After dosing, serial blood samples were drawn for a period of 14 hours. Serum harvested from the blood samples was analyzed for the presence of ibuprofen by high-pressure liquid chromatography with ultraviolet detection. Pharmacokinetic parameters were determined from serum concentrations for both formulations. The 90% confidence intervals of the ln-transformed test/reference treatment ratios for peak plasma concentration and area under the concentration-time curve (AUC) parameters were found to be within the predetermined acceptable interval of 80%-125% set by the US Food and Drug Administration. Conclusion: Analysis of variance for peak plasma concentrations and AUC parameters showed no significant difference between the two formulations and, therefore, Doloraz was considered bioequivalent to Brufen. © 2011 Al-Talla et al, publisher and licensee Dove Medical Press Ltd.
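    The acceptance criterion used here (the 90% CI of the geometric mean test/reference ratio must lie within 80%-125%) can be sketched as below. The AUC values are invented for illustration, and the paired-t computation is a simplification; a real average-bioequivalence analysis fits an ANOVA on the full crossover design.

```python
import math
from statistics import mean, stdev

def be_interval_90(test, ref, t_crit):
    """90% CI (as ratios) for the geometric mean test/reference ratio."""
    # Work on ln-transformed values, as in the abstract.
    d = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
    half = t_crit * stdev(d) / math.sqrt(len(d))
    return math.exp(mean(d) - half), math.exp(mean(d) + half)

# Hypothetical AUC values for 6 subjects; t_crit = t(0.95, df=5) = 2.015
# (one-sided 95% quantile, giving a two-sided 90% interval).
auc_test = [102.0, 95.0, 110.0, 88.0, 99.0, 105.0]
auc_ref = [100.0, 98.0, 104.0, 92.0, 101.0, 100.0]
lo, hi = be_interval_90(auc_test, auc_ref, t_crit=2.015)
bioequivalent = 0.80 <= lo and hi <= 1.25
```

    Working on the log scale is what turns the multiplicative 80%-125% bounds into a symmetric interval around zero.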

  6. Formulation and solution of the classical seashell problem

    International Nuclear Information System (INIS)

    Illert, C.

    1987-01-01

    Despite an extensive scholarly literature dating back to classical times, seashell geometries have hitherto resisted rigorous theoretical analysis, leaving applied scientists to adopt a directionless empirical approach toward classification. The voluminousness of recent paleontological literature demonstrates the importance of this problem to applied scientists, but in no way reflects corresponding conceptual or theoretical advances beyond the XIX century thinking which was so ably summarized by Sir D'Arcy Wentworth Thompson in 1917. However, this foundation paper for the newly emerging science of theoretical conchology unifies theoretical considerations for the first time, permitting a rigorous formulation and a complete solution of the problem of biological shell geometries. Shell coiling about the axis of symmetry can be deduced from first principles using energy considerations associated with incremental growth. The present paper shows that those shell apertures which are incurved (''cowrielike''), outflared (''stromblike'') or even backturned (''Opisthostomoidal'') are merely special cases of a much broader spectrum of ''allowable'' energy-efficient growth trajectories (tensile elastic clockspring spirals), many of which were widely used by Cretaceous ammonites. Energy considerations also dictate shell growth along the axis of symmetry, thus seashell spires can be understood in terms of certain special figures of revolution (Moebius elastic conoids), the better-known coeloconoidal and cyrtoconoidal shell spires being only two special cases arising from a whole class of topologically possible, energy efficient and biologically observed geometries. The ''wires'' and ''conoids'' of the present paper are instructive conceptual simplifications sufficient for present purposes. A second paper will later deal with generalized tubular surfaces in thre

  7. Cyclodextrins as excipients in tablet formulations.

    Science.gov (United States)

    Conceição, Jaime; Adeoye, Oluwatomide; Cabral-Marques, Helena Maria; Lobo, José Manuel Sousa

    2018-04-22

    This paper aims to provide a critical review of cyclodextrins as excipients in tablet formulations, highlighting: (i) the principal pharmaceutical applications of cyclodextrins; (ii) the most relevant technological aspects in pharmaceutical formulation development; and (iii) the actual regulatory status of cyclodextrins. Moreover, several illustrative examples are presented. Cyclodextrins can be used as complexing excipients in tablet formulations for low-dose drugs. By contrast, for medium-dose drugs and/or when the complexation efficiency is low, the methods to enhance the complexation efficiency play a key part in reducing the cyclodextrin quantity. In addition, these compounds are used as fillers, disintegrants, binders and multifunctional direct compression excipients of the tablets. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. STRATEGY FORMULATION PROCESS AND INNOVATION PERFORMANCE NEXUS

    Directory of Open Access Journals (Sweden)

    Chijioke Nwachukwu

    2018-03-01

    The purpose of this study is to examine the link between the strategy formulation process and innovation performance indicators in microfinance banks (MFBs) in Nigeria. 100 employees of leading microfinance banks were randomly selected for this study. 80 questionnaires were returned but only 76 were found usable for the analysis. Regression analysis was used to examine the nature of the relationships between the variables and for hypothesis testing. The authors used exploratory factor analysis and Cronbach's alpha to test the validity and reliability of the questionnaires. The results show that the strategy formulation process has a positive effect on process innovation performance, product innovation performance and marketing innovation performance. Thus, all three hypotheses tested were supported. The authors therefore conclude that a systematic strategy formulation process is necessary for firms to achieve and sustain process, product and marketing innovation performance. The study also offers suggestions for further research.
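    The reliability statistic mentioned above, Cronbach's alpha, can be computed directly from an item-score matrix; the scores below are invented for illustration.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for `scores`: one list of k item scores per respondent."""
    k = len(scores[0])

    def var(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    return k / (k - 1) * (1.0 - sum(item_vars) / total_var)
```

    Perfectly correlated items give alpha = 1; uncorrelated items drive it toward 0, which is why alpha is read as internal-consistency reliability of a questionnaire scale.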

  9. Formulating viscous hydrodynamics for large velocity gradients

    International Nuclear Information System (INIS)

    Pratt, Scott

    2008-01-01

    Viscous corrections to relativistic hydrodynamics, which are usually formulated for small velocity gradients, have recently been extended from Navier-Stokes formulations to a class of treatments based on Israel-Stewart equations. Israel-Stewart treatments, which treat the spatial components of the stress-energy tensor τ^ij as dynamical objects, introduce new parameters, such as the relaxation times describing nonequilibrium behavior of the elements τ^ij. By considering linear response theory and entropy constraints, we show how the additional parameters are related to fluctuations of τ^ij. Furthermore, the Israel-Stewart parameters are analyzed for their ability to provide stable and physical solutions for sound waves. Finally, it is shown how these parameters, which are naturally described by correlation functions in real time, might be constrained by lattice calculations, which are based on path-integral formulations in imaginary time.
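    The relaxation-time structure the abstract refers to can be written schematically (a simplified Israel-Stewart form with standard symbols; this is not taken verbatim from the paper):

        t_R ∂_t π^ij + π^ij = -η σ^ij,    σ^ij ≡ ∂^i u^j + ∂^j u^i - (2/3) δ^ij ∂_k u^k

    where π^ij denotes the dissipative part of τ^ij. Instead of being fixed instantaneously to its Navier-Stokes value -η σ^ij, the stress relaxes toward it on a timescale t_R, and t_R is the kind of new parameter the abstract relates to fluctuations of τ^ij.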

  10. RAACFDb: Rheumatoid arthritis ayurvedic classical formulations database.

    Science.gov (United States)

    Mohamed Thoufic Ali, A M; Agrawal, Aakash; Sajitha Lulu, S; Mohana Priya, A; Vino, S

    2017-02-02

    In the past years, the treatment of rheumatoid arthritis (RA) has undergone remarkable changes in all therapeutic modes. The present trend in clinical research is to determine and pick a new track for better treatment options for RA. Recent ethnopharmacological investigations revealed that traditional herbal remedies are the most preferred modality of complementary and alternative medicine (CAM). However, several ayurvedic modes of treatments and formulations for RA are not much studied and documented from the Indian traditional system of medicine. Therefore, this directed us to develop an integrated database, RAACFDb (acronym: Rheumatoid Arthritis Ayurvedic Classical Formulations Database), by consolidating data from the repository of Vedic Samhita - The Ayurveda, to make the available formulation information easy to retrieve. Literature data were gathered using several search engines and from ayurvedic practitioners for loading information into the database. In order to represent the collected information about classical ayurvedic formulations, an integrated database was constructed and implemented on a MySQL and PHP back-end. The database describes all the ayurvedic classical formulations for the treatment of rheumatoid arthritis. It includes composition, usage, plant parts used, active ingredients present in the composition and their structures. The prime objective is to locate ayurvedic formulations proven to be successful and highly effective among patients, with reduced side effects. The database (freely available at www.beta.vit.ac.in/raacfdb/index.html) hopefully enables easy access for clinical researchers and students to discover novel leads with reduced side effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Formulation and stability testing of photolabile drugs.

    Science.gov (United States)

    Tønnesen, H H

    2001-08-28

    Exposure of a drug to irradiation can influence the stability of the formulation, leading to changes in the physicochemical properties of the product. The influence of excipients and of frequently used stabilizers is often difficult to predict and, therefore, stability testing of the final preparation is important. The selection of a protective packaging must be based on knowledge about the wavelength causing the instability. Details on drug photoreactivity will also be helpful in order to minimize side-effects and/or optimize drug targeting by developing photoresponsive drug delivery systems. This review focuses on practical problems related to formulation and stability testing of photolabile drugs.

  12. Controlled-release tablet formulation of isoniazid.

    Science.gov (United States)

    Jain, N K; Kulkarni, K; Talwar, N

    1992-04-01

    Guar gum (GG) and karaya gum (KG), alone and in combination with hydroxypropyl methylcellulose (HPMC), were evaluated as release-retarding materials to formulate a controlled-release tablet dosage form of isoniazid (1). In vitro release of 1 from the tablets followed a non-Fickian release profile with rapid initial release. Urinary excretion studies in normal subjects showed steady-state levels of 1 for 13 h. In vitro and in vivo data correlated (r = 0.9794). The studies suggest the potential of GG and KG as release-retarding materials for formulating controlled-release tablet dosage forms of 1.

  13. An exact approach for aggregated formulations

    DEFF Research Database (Denmark)

    Gamst, Mette; Spoorendonk, Simon

    Aggregating formulations is a powerful approach for transforming problems into more tractable forms. Aggregated formulations can, though, have drawbacks: some information may get lost in the aggregation and, put in a branch-and-bound context, branching may become very difficult and even ... The paper includes general considerations on types of problems for which the method is of particular interest. Furthermore, we prove the correctness of the procedure and consider how to include extensions such as cutting planes and advanced branching strategies....

  14. Quaternionic formulation of the exact parity model

    Energy Technology Data Exchange (ETDEWEB)

    Brumby, S.P.; Foot, R.; Volkas, R.R.

    1996-02-28

    The exact parity model (EPM) is a simple extension of the standard model which reinstates parity invariance as an unbroken symmetry of nature. The mirror matter sector of the model can interact with ordinary matter through gauge boson mixing, Higgs boson mixing and, if neutrinos are massive, through neutrino mixing. The last effect has experimental support through the observed solar and atmospheric neutrino anomalies. In the paper it is shown that the exact parity model can be formulated in a quaternionic framework. This suggests that the idea of mirror matter and exact parity may have profound implications for the mathematical formulation of quantum theory. 13 refs.

  15. Quaternionic formulation of the exact parity model

    International Nuclear Information System (INIS)

    Brumby, S.P.; Foot, R.; Volkas, R.R.

    1996-01-01

    The exact parity model (EPM) is a simple extension of the standard model which reinstates parity invariance as an unbroken symmetry of nature. The mirror matter sector of the model can interact with ordinary matter through gauge boson mixing, Higgs boson mixing and, if neutrinos are massive, through neutrino mixing. The last effect has experimental support through the observed solar and atmospheric neutrino anomalies. In the paper it is shown that the exact parity model can be formulated in a quaternionic framework. This suggests that the idea of mirror matter and exact parity may have profound implications for the mathematical formulation of quantum theory. 13 refs

  16. Dynamic modeling of geometrically nonlinear electrostatically actuated microbeams (Corotational Finite Element formulation and analysis)

    Energy Technology Data Exchange (ETDEWEB)

    Borhan, H; Ahmadian, M T [Sharif University of Technology, Center of Excellence for Design, Robotics and Automation, School of Mechanical Engineering, PO Box 11365-9567, Tehran (Iran, Islamic Republic of)

    2006-04-01

    In this paper, a complete nonlinear finite element model for coupled-domain MEMS devices with electrostatic actuation and squeeze-film effect is developed. For this purpose, a corotational finite element formulation for the dynamic analysis of planar Euler beams is employed. In this method, the internal nodal forces due to deformation and intrinsic residual stresses, the inertial nodal forces, and the damping effect of the squeezed air film are systematically derived by consistent linearization of the fully geometrically nonlinear beam theory using d'Alembert and virtual work principles. An incremental-iterative method based on the Newmark direct integration procedure and the Newton-Raphson algorithm is used to solve the nonlinear dynamic equilibrium equations. Numerical examples are presented and compared with experimental findings, which show good agreement.
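    The incremental-iterative scheme named here (Newmark direct integration with Newton-Raphson iterations on the dynamic equilibrium residual) can be sketched on a single-DOF oscillator, with a cubic stiffness standing in for geometric nonlinearity. All parameter values below are invented for the example.

```python
def newmark_newton(m, c, k, k3, force, dt, steps, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark with Newton-Raphson per time step."""
    u, v, a = 0.0, 0.0, force(0.0) / m      # consistent initial acceleration
    history = [u]
    for n in range(1, steps + 1):
        f_ext = force(n * dt)
        u_new = u                            # predictor: previous displacement
        for _ in range(50):                  # Newton-Raphson iterations
            # Newmark relations express a_new, v_new in terms of u_new.
            a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                     - (0.5 / beta - 1.0) * a)
            v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
            # Dynamic equilibrium residual: m*a + c*v + f_int(u) - f_ext
            residual = m * a_new + c * v_new + k * u_new + k3 * u_new ** 3 - f_ext
            if abs(residual) < 1e-10:
                break
            # Consistent (effective) tangent stiffness w.r.t. u_new.
            k_tan = (m / (beta * dt ** 2) + gamma * c / (beta * dt)
                     + k + 3.0 * k3 * u_new ** 2)
            u_new -= residual / k_tan
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return history

# Step load on a damped, stiffening spring: the response should settle to the
# static equilibrium of k*u + k3*u**3 = 1, i.e. u ~ 0.7709.
hist = newmark_newton(m=1.0, c=0.8, k=1.0, k3=0.5,
                      force=lambda t: 1.0, dt=0.05, steps=2000)
```

    The effective tangent stiffness combines the material/geometric tangent with the inertia and damping contributions scaled by the Newmark coefficients, which is exactly the structure of the consistent linearization the abstract describes.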

  17. Potential stocks and increments of woody biomass in the European Union under different management and climate scenarios.

    Science.gov (United States)

    Kindermann, Georg E; Schörghuber, Stefan; Linkosalo, Tapio; Sanchez, Anabel; Rammer, Werner; Seidl, Rupert; Lexer, Manfred J

    2013-02-01

    Forests play an important role in the global carbon flow. They can store carbon and can also provide wood which can substitute for other materials. In the EU27 the standing biomass is steadily increasing. Increments and harvests seem to have reached a plateau between 2005 and 2010. One reason for this plateau is that the forests are getting older. High stand ages have the advantage that they typically show high carbon stocks, and the disadvantage that increment rates decrease. We investigated how biomass stocks, harvests and increments will develop under different climate scenarios and two management scenarios, one forcing forests to store high amounts of biomass and the other aiming for high increment rates and much harvested wood. A management which maximises standing biomass will raise stem wood carbon stocks from 30 tC/ha to 50 tC/ha until 2100. A management which maximises increments will lower the stock to 20 tC/ha until 2100. The estimates for the climate scenarios A1b, B1 and E1 differ, but the effect of the management target is much larger than that of the climate scenario. By maximising increments, the harvests are 0.4 tC/ha/year higher than under the management which maximises standing biomass. The increments until 2040 are close together, but around 2100 the increments when maximising standing biomass are approximately 50 % lower than those when maximising increments. Cold regions will benefit from the climate changes in these scenarios by showing higher increments. The results of this study suggest that forest management should maximise increments, not stocks, to be more efficient in the sense of climate change mitigation. This is true especially for regions which already have high carbon stocks in forests, which is the case in many regions of Europe.
During the time span 2010-2100 the forests of EU27 will absorb additional 1750 million tC if they are managed to maximise increments compared

  18. Formulation of Sodium Alginate Nanospheres Containing ...

    African Journals Online (AJOL)

    Purpose: The aim of this work was to formulate sodium alginate nanospheres of amphotericin B by controlled gellification method and to evaluate the role of the nanospheres as a “passive carrier” in targeted antifungal therapy. Methods: Sodium alginate nanospheres of amphotericin B were prepared by controlled ...

  19. [Optimization of formulations for dietetic pastry products].

    Science.gov (United States)

    Villarroel, M; Uquiche, E; Brito, G; Cancino, M

    2000-03-01

    Optimized formulations of dietetic pastry products, such as cake and sponge cake premixes, were developed using response surface methodology. Emulsifier agent (%) and baking time were the independent variables selected for cake, while emulsifier agent (%) and chlorinated flour (%) were the variables selected for sponge cake. Three different levels of each variable, summing up to thirteen experimental formulae per product, were assessed to optimize the variables that could influence the sensory characteristics of these dietetic products. The total sensory quality was determined for both dietetic products using the composite scoring test and a panel of 18 trained judges. Considering the contour graphic and economic aspects, the best combination of variables for the cake formulation was 2% emulsifier agent and 48 minutes of baking time. With respect to sponge cake, the best combination was 6% emulsifier agent and 48% chlorinated flour. Shelf-life studies showed that both dietetic formulations remained stable under storage conditions of 75 days at 30 degrees C. During this period, significant differences in sensory characteristics were not found (p > 0.05). Both dietetic pastry products had good acceptability, opening up marketing opportunities for new products with potential health benefits to consumers.

  20. Facile Colorimetric Determination of Duloxetine in Formulations ...

    African Journals Online (AJOL)

    ... the determination of duloxetine hydrochloride (DX). Methods: Ion-pair spectrophotometric method was employed for the determination of duloxetine hydrochloride (DX) in bulk and pharmaceutical formulations using acidic dye methyl orange (MO) as ion-pairing agent at pH 4 (phthalate buffer). The yellow ion-pair complex ...

  1. Engaged Problem Formulation in IS Research

    DEFF Research Database (Denmark)

    Nielsen, Peter Axel; Persson, John Stouby

    2016-01-01

    problems requires a more substantial engagement with the different stakeholders, especially when their problems are ill structured and situated in complex organizational settings. On this basis, we present an engaged approach to formulating IS problems with, not for, IS practitioners. We have come...

  2. Effects of Methionine Containing Paracetamol Formulation on ...

    African Journals Online (AJOL)

    Effects of Methionine Containing Paracetamol Formulation on Serum Vitamins and Trace Elements in Male Rats. AA Iyanda, JI Anetor, DP Oparinde, FAA Adeniyi. Abstract. Methionine is an effective antidote in the treatment of paracetamol-induced toxicity but at large doses it has been reported to induce or aggravate a ...

  3. State-Space Formulation for Circuit Analysis

    Science.gov (United States)

    Martinez-Marin, T.

    2010-01-01

    This paper presents a new state-space approach for temporal analysis of electrical circuits. The method systematically obtains the state-space formulation of nondegenerate linear networks without using concepts of topology. It employs nodal/mesh systematic analysis to reduce the number of undesired variables. This approach helps students to…

  4. Yang-Mills formulation of interacting strings

    International Nuclear Information System (INIS)

    Chan Hongmo; Tsou Sheungtsun

    1988-06-01

    A suggestion that the theory of interacting open bosonic string be reformulated as a generalised Yang-Mills theory is further elucidated. Moreover, a serious reservation regarding the ordering of operators in the earlier 'proof' of equivalence between the new and standard formulations is now removed. (author)

  5. Initial value formulation of higher derivative gravity

    International Nuclear Information System (INIS)

    Noakes, D.R.

    1983-01-01

    The initial value problem is considered for the conformally coupled scalar field and higher derivative gravity, by expressing the equations of each theory in harmonic coordinates. For each theory it is shown that the (vacuum) equations can take the form of a diagonal hyperbolic system with constraints on the initial data. Consequently these theories possess well-posed initial value formulations

  6. Release Properties of Paracetamol Granulations Formulated with ...

    African Journals Online (AJOL)

    Theobroma cacao gum, TCG was derived as a dry powder from fresh fruits of Theobroma cacao. Various granulations of paracetamol were prepared with TCG at the concentrations of 0.5 – 4% w/w. Similar formulations were prepared using sodium carboxymethyl cellulose, SCMC and acacia gums as standards. In each ...

  7. Probabilities in the general boundary formulation

    Energy Technology Data Exchange (ETDEWEB)

    Oeckl, Robert [Instituto de Matematicas, UNAM, Campus Morelia, C.P. 58190, Morelia, Michoacan (Mexico)

    2007-05-15

    We give an introductory account of the general boundary formulation of quantum theory. We refine its probability interpretation and emphasize a conceptual and historical perspective. We give motivations from quantum gravity and illustrate them with a scenario for describing gravitons in quantum gravity.

  8. On fictitious domain formulations for Maxwell's equations

    DEFF Research Database (Denmark)

    Dahmen, W.; Jensen, Torben Klint; Urban, K.

    2003-01-01

    We consider fictitious domain-Lagrange multiplier formulations for variational problems in the space H(curl: Omega) derived from Maxwell's equations. Boundary conditions and the divergence constraint are imposed weakly by using Lagrange multipliers. Both the time dependent and time harmonic formu...

  9. Necessity of rethinking oral pediatric formulations

    DEFF Research Database (Denmark)

    Bar-Shalom, Daniel

    2014-01-01

    by all patient groups, is needed, and an automated compounding concept is proposed. The finishing of the formulation is done at the dispensing pharmacy using an automated process. The individual components (pudding-like carrier, microencapsulated drug, and the dispensing robot and its software...

  10. Lagrangian formulation of classical BMT-theory

    International Nuclear Information System (INIS)

    Pupasov-Maksimov, Andrey; Deriglazov, Alexei; Guzman, Walberto

    2013-01-01

    Full text: The most popular classical theory of the electron was formulated by Bargmann, Michel and Telegdi (BMT) in 1959. The BMT equations give a classical relativistic description of a charged particle with spin and anomalous magnetic moment moving in a homogeneous electromagnetic field. This allows one to study the spin dynamics of polarized beams in uniform fields; in particular, the first experimental measurements of the muon anomalous magnetic moment were made using the change of helicity predicted by the BMT equations. Surprisingly, a systematic formulation and analysis of the BMT theory are absent from the literature. In the present work we partially fill this gap by deducing a Lagrangian formulation (variational problem) for the BMT equations. Various equivalent forms of the Lagrangian are discussed in detail. An advantage of the obtained classical model is that the Lagrangian action describes a relativistic spinning particle without Grassmann variables, for both the free and interacting cases. This also implies the possibility of canonical quantization. In the interacting case an arbitrary electromagnetic background may be considered, which generalizes the BMT theory from the case of homogeneous fields. The classical model has two local symmetries, which gives an interesting example of constrained classical dynamics. It is surprising that the case of vanishing anomalous magnetic moment is naturally highlighted in our construction. (author)
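For reference, the covariant spin-precession equation of the BMT theory discussed in this record can be written as follows (units with c = 1; this is a standard textbook form, not necessarily the authors' own notation, and sign/metric conventions vary between texts):

```latex
% Standard covariant BMT equation for the spin four-vector s^mu of a
% particle with charge e, mass m, gyromagnetic factor g, four-velocity u^mu:
\frac{ds^{\mu}}{d\tau} = \frac{e}{m}\left[
  \frac{g}{2}\,F^{\mu\nu}s_{\nu}
  + \left(\frac{g}{2}-1\right) u^{\mu}\!\left(s_{\lambda}F^{\lambda\nu}u_{\nu}\right)
\right]
```

It is supplemented by the Lorentz-force equation \(du^{\mu}/d\tau = (e/m)\,F^{\mu\nu}u_{\nu}\) and the constraints \(s_{\mu}u^{\mu} = 0\), \(s_{\mu}s^{\mu} = \text{const}\); the anomalous part is carried by the \((g/2 - 1)\) term, which is why the case of vanishing anomalous moment is special.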

  11. A parcel formulation for Hamiltonian layer models

    NARCIS (Netherlands)

    Bokhove, Onno; Oliver, M.

    Starting from the three-dimensional hydrostatic primitive equations, we derive Hamiltonian N-layer models with isentropic tropospheric and isentropic or isothermal stratospheric layers. Our construction employs a new parcel Hamiltonian formulation which describes the fluid as a continuum of

  12. Clinical pharmacology of novel anticancer drug formulations

    NARCIS (Netherlands)

    Stuurman, F.E.

    2013-01-01

    Studies outlined in this thesis describe the impact of drug formulations on pharmacology of anticancer drugs. It consists of four parts and starts with a review describing the mechanisms of low oral bioavailability of anti-cancer drugs and strategies for improvement of the bioavailability. The

  13. Comparative evaluation of organic formulations of Pseudomonas ...

    African Journals Online (AJOL)

    An experiment was conducted in the laboratory and farm of the Department of Biotechnology, Gauhati University, to explore the potentiality of various organic formulations of Pseudomonas fluorescens (Pf) and to manage bacterial wilt disease of brinjal (Solanum melongena L.) under local conditions. Different organic ...

  14. Formulation and evaluation of terbutaline sulphate and ...

    African Journals Online (AJOL)

    We report the use of low rugosity lactose, product of controlled crystallization of this carrier, in the formulation of terbutaline sulphate and beclomethasone dipropionate dry powder inhalers. The deposition patterns obtained with inhalation mixtures consisting of the modified lactose and each of the micronised drugs ...

  15. Development and Evaluation of Topical Gabapentin Formulations

    Directory of Open Access Journals (Sweden)

    Christopher J. Martin

    2017-08-01

    Full Text Available Topical delivery of gabapentin is desirable to treat peripheral neuropathic pain conditions whilst avoiding systemic side effects. To date, reports of topical gabapentin delivery in vitro have been variable and dependent on the skin model employed, primarily involving rodent and porcine models. In this study a variety of topical gabapentin formulations were investigated, including Carbopol® hydrogels containing various permeation enhancers, and a range of proprietary bases including a compounded Lipoderm® formulation; furthermore microneedle facilitated delivery was used as a positive control. Critically, permeation of gabapentin across a human epidermal membrane in vitro was assessed using Franz-type diffusion cells. Subsequently this data was contextualised within the wider scope of the literature. Although reports of topical gabapentin delivery have been shown to vary, largely dependent upon the skin model used, this study demonstrated that 6% (w/w) gabapentin 0.75% (w/w) Carbopol® hydrogels containing 5% (w/w) DMSO or 70% (w/w) ethanol and a compounded 10% (w/w) gabapentin Lipoderm® formulation were able to facilitate permeation of the molecule across human skin. Further pre-clinical and clinical studies are required to investigate the topical delivery performance and pharmacodynamic actions of prospective formulations.

  16. Development and Evaluation of Topical Gabapentin Formulations

    Science.gov (United States)

    Alcock, Natalie; Hiom, Sarah; Birchall, James C.

    2017-01-01

    Topical delivery of gabapentin is desirable to treat peripheral neuropathic pain conditions whilst avoiding systemic side effects. To date, reports of topical gabapentin delivery in vitro have been variable and dependent on the skin model employed, primarily involving rodent and porcine models. In this study a variety of topical gabapentin formulations were investigated, including Carbopol® hydrogels containing various permeation enhancers, and a range of proprietary bases including a compounded Lipoderm® formulation; furthermore microneedle facilitated delivery was used as a positive control. Critically, permeation of gabapentin across a human epidermal membrane in vitro was assessed using Franz-type diffusion cells. Subsequently this data was contextualised within the wider scope of the literature. Although reports of topical gabapentin delivery have been shown to vary, largely dependent upon the skin model used, this study demonstrated that 6% (w/w) gabapentin 0.75% (w/w) Carbopol® hydrogels containing 5% (w/w) DMSO or 70% (w/w) ethanol and a compounded 10% (w/w) gabapentin Lipoderm® formulation were able to facilitate permeation of the molecule across human skin. Further pre-clinical and clinical studies are required to investigate the topical delivery performance and pharmacodynamic actions of prospective formulations. PMID:28867811
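Franz-cell permeation studies like the one in these two records are typically reduced to a steady-state flux (slope of cumulative permeated amount per unit area versus time) and an apparent permeability coefficient. As a hedged sketch with wholly hypothetical numbers (not the study's data):

```python
# Illustrative only: estimating steady-state flux J and apparent permeability
# kp from Franz-cell cumulative permeation data. All values are hypothetical.
# J = slope of the linear (steady-state) region of Q(t) [ug/cm2] vs t [h];
# kp = J / donor concentration.

def steady_state_flux(times_h, cumulative_ug_cm2):
    """Least-squares slope of cumulative amount/area vs time -> flux (ug/cm2/h)."""
    n = len(times_h)
    st, sq = sum(times_h), sum(cumulative_ug_cm2)
    stt = sum(t * t for t in times_h)
    stq = sum(t * q for t, q in zip(times_h, cumulative_ug_cm2))
    return (n * stq - st * sq) / (n * stt - st * st)

# Hypothetical steady-state sampling points (h) and cumulative amounts (ug/cm2)
t = [4.0, 8.0, 12.0, 16.0, 20.0, 24.0]
q = [10.0, 30.0, 50.0, 70.0, 90.0, 110.0]

J = steady_state_flux(t, q)          # ug/cm2/h
c_donor_ug_ml = 60000.0              # assumed donor strength, e.g. ~6% = 60 mg/mL
kp_cm_h = J / c_donor_ug_ml          # apparent permeability coefficient (cm/h)
print(f"J = {J:.2f} ug/cm2/h, kp = {kp_cm_h:.2e} cm/h")
```

Real analyses would also discard the pre-steady-state lag region and report lag time from the intercept, which this sketch omits.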

  17. Engaged Problem Formulation in IS Research

    DEFF Research Database (Denmark)

    Nielsen, Peter Axel; Persson, John Stouby

    2016-01-01

    “Is this the problem?”: the question that haunts many information systems (IS) researchers when they pursue work relevant to both practice and research. Nevertheless, a deliberate answer to this question requires more than simply asking the involved IS practitioners. Deliberately formulating...

  18. Kit systems for granulated decontamination formulations

    Science.gov (United States)

    Tucker, Mark D.

    2010-07-06

    A decontamination formulation and method of making that neutralizes the adverse health effects of both chemical and biological compounds, especially chemical warfare (CW) and biological warfare (BW) agents, and toxic industrial chemicals. The formulation provides solubilizing compounds that serve to effectively render the chemical and biological compounds, particularly CW and BW compounds, susceptible to attack, and at least one reactive compound that serves to attack (and detoxify or kill) the compound. The formulation includes at least one solubilizing agent, a reactive compound, a sorbent additive, and water. A highly adsorbent sorbent additive (e.g., amorphous silica, sorbitol, mannitol, etc.) is used to "dry out" one or more liquid ingredients into a dry, free-flowing powder that has an extended shelf life, and is more convenient to handle and mix in the field. The formulation can be pre-mixed and pre-packaged as a multi-part kit system, where one or more of the parts are packaged in a powdered, granulated form for ease of handling and mixing in the field.

  19. Application of UV Imaging in Formulation Development.

    Science.gov (United States)

    Sun, Yu; Østergaard, Jesper

    2017-05-01

    Efficient drug delivery is dependent on the drug substance dissolving in the body fluids, being released from dosage forms and transported to the site of action. A fundamental understanding of the interplay between the physicochemical properties of the active compound and pharmaceutical excipients defining formulation behavior after exposure to the aqueous environments and pharmaceutical performance is critical in pharmaceutical development, manufacturing and quality control of drugs. UV imaging has been explored as a tool for qualitative and quantitative characterization of drug dissolution and release with the characteristic feature of providing real-time visualization of the solution phase drug transport in the vicinity of the formulation. Events occurring during drug dissolution and release, such as polymer swelling, drug precipitation/recrystallization, or solvent-mediated phase transitions related to the structural properties of the drug substance or formulation can be monitored. UV imaging is a non-intrusive and simple-to-operate analytical technique which holds potential for providing a mechanistic foundation for formulation development. This review aims to cover applications of UV imaging in early- and late-phase pharmaceutical development with a special focus on the relation between structural properties and performance. Potential areas of future advancement and application are also discussed.
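Quantitative UV imaging of the kind reviewed above ultimately maps pixel absorbances to local drug concentrations through the Beer-Lambert law. A minimal sketch, assuming a hypothetical molar absorptivity and imaging-cell path length (real instruments apply calibration and path-length corrections well beyond this simple inversion):

```python
# Hedged sketch: Beer-Lambert inversion c = A / (epsilon * l), the basic
# relation underlying quantitative UV imaging of dissolution.
# EPSILON and PATH_CM are hypothetical, not values from the review.

EPSILON = 1.2e4   # molar absorptivity, L/(mol*cm) -- assumed drug value
PATH_CM = 0.1     # optical path length of the imaging cell, cm (assumed)

def absorbance_to_conc(a):
    """Beer-Lambert inversion: absorbance -> concentration in mol/L."""
    return a / (EPSILON * PATH_CM)

# A hypothetical row of pixel absorbances moving away from a dissolving compact,
# giving the local concentration profile in the solution phase:
row = [0.60, 0.45, 0.30, 0.15, 0.05]
conc_profile = [absorbance_to_conc(a) for a in row]
print(["%.2e" % c for c in conc_profile])
```

Spatial concentration profiles like this, sampled over time, are what allow dissolution rates and events such as precipitation to be quantified from the image sequence.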

  20. Ductility, strength and hardness relation after prior incremental deformation (ratcheting) of austenitic steel

    International Nuclear Information System (INIS)

    Kussmaul, K.; Diem, H.K.; Wachter, O.

    1993-01-01

    Experimental investigations into the stress/strain behavior of the niobium-stabilized austenitic steel with the German designation X6 CrNiNb 18 10 showed that a limited, incrementally applied prior deformation reduces the total deformation capability only by the amount of the prior deformation. In particular, the small changes in the reduction of area indicate that the basically ductile deformation behavior is not altered by the type of prior loading. There is a correlation between the amount of deformation and the increase in hardness, and it is possible to correlate the changes in hardness with the material properties. In low cycle fatigue tests with alternating temperature, an incremental increase in total strain (ratcheting) was observed, depending on the strain range applied