WorldWideScience

Sample records for incremental tangent formulation

  1. Tangent bundle formulation of a charged gas

    CERN Document Server

    Sarbach, Olivier

    2013-01-01

    We discuss the relativistic kinetic theory for a simple, collisionless, charged gas propagating on an arbitrary curved spacetime geometry. Our general relativistic treatment is formulated on the tangent bundle of the spacetime manifold and takes advantage of its rich geometric structure. In particular, we point out the existence of a natural metric on the tangent bundle and illustrate its role for the development of the relativistic kinetic theory. This metric, combined with the electromagnetic field of the spacetime, yields an appropriate symplectic form on the tangent bundle. The Liouville vector field arises as the Hamiltonian vector field of a natural Hamiltonian. The latter also defines natural energy surfaces, called mass shells, which turn out to be smooth Lorentzian submanifolds. A simple, collisionless, charged gas is described by a distribution function which is defined on the mass shell and satisfies the Liouville equation. Suitable fibre integrals of the distribution function define observable fie...
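
    For concreteness, in adapted local coordinates $(x^\mu, p^\mu)$ on the tangent bundle, the Liouville equation obeyed by the distribution function of such a collisionless charged gas takes the familiar Vlasov form (a standard coordinate expression, not necessarily the notation used in the paper):

    \[
    \mathcal{L}f \;=\; p^{\mu}\,\frac{\partial f}{\partial x^{\mu}}
    \;+\;\left(q\,F^{\mu}{}_{\nu}\,p^{\nu}\;-\;\Gamma^{\mu}{}_{\alpha\beta}\,p^{\alpha}p^{\beta}\right)\frac{\partial f}{\partial p^{\mu}} \;=\; 0 ,
    \]

    where $\Gamma^{\mu}{}_{\alpha\beta}$ are the Christoffel symbols of the spacetime metric, $F^{\mu}{}_{\nu}$ is the electromagnetic field tensor, $q$ the particle charge, and $f$ is restricted to the mass shell $g_{\mu\nu}p^{\mu}p^{\nu} = -m^{2}$.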

  2. Degradation theories of concrete and development of a new deviatoric model in incremental tangent formulation: limit analysis applied to case of anchor bolts embedded in concrete; Theorie de degradation du beton et developpement d'un nouveau modele d'endommagement en formulation incrementale tangente: calcul a la rupture applique au cas des chevilles de fixation ancrees dans le beton

    Energy Technology Data Exchange (ETDEWEB)

    Ung Quoc, H

    2003-12-15

This research is carried out within the general framework of the study of concrete behaviour. Its objective is the development of a new behaviour model that satisfies the particular requirements of industrial use. After an analysis of different existing models, a first development concerned models based on the smeared crack theory. A new formulation of the theory made it possible to overcome the stress locking problem. However, the analysis showed that, in spite of this improvement, some limits inherent in this approach persist. An analysis of the physical mechanisms of concrete degradation was then carried out and led to the development of the new damage model MODEV. The general formulation of this model is based on thermodynamics and applies to heterogeneous and brittle materials. The MODEV model considers two damage mechanisms: extension and sliding. The model also considers that the relative tangential displacement between microcrack lips is responsible for the irreversibility of the strain. Thus, the rate of inelastic strain becomes a function of the damage and of the heterogeneity index of the material. The unilateral effect is taken into account as an elastic hardening or softening process, according to whether cracks re-close or reopen. The model is written within the framework of non-standard generalised materials in incremental tangent formulation and implemented in the general finite element code SYMPHONIE. The validation of the model was carried out on the basis of several tests taken from the literature. The second part of this research concerned the development of the CHEVILAB software. This simulation tool, based on the limit analysis approach, permits the evaluation of the ultimate load capacity of anchor bolts. The kinematic approach of limit analysis has been adapted to the problem of anchors while considering several specific failure mechanisms. This approach has then been validated by comparison with the
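
    As a point of reference for the phrase "incremental tangent formulation" used in this and other records of the listing, the rate form of such a constitutive law is discretized as a linear relation between stress and strain increments through a tangent operator evaluated at the current state (a generic sketch, not the specific MODEV equations):

    \[
    \Delta\boldsymbol{\sigma} \;=\; \mathbb{C}_{t}\bigl(d,\,\boldsymbol{\varepsilon}\bigr) : \Delta\boldsymbol{\varepsilon},
    \qquad
    \boldsymbol{\sigma}_{n+1} \;=\; \boldsymbol{\sigma}_{n} + \Delta\boldsymbol{\sigma},
    \]

    where $\mathbb{C}_{t}$ is the tangent stiffness consistent with the damage evolution between steps $n$ and $n+1$.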

  3. A multiaxial incremental fatigue damage formulation using nested damage surfaces

    Directory of Open Access Journals (Sweden)

    Marco Antonio Meggiolaro

    2016-07-01

Full Text Available Multiaxial fatigue damage calculations under non-proportional variable amplitude loadings still remain a quite challenging task in practical applications, in part because most fatigue models require cycle identification and counting to single out individual load events before quantifying the damage induced by them. Moreover, to account for the non-proportionality of the load path of each event, semi-empirical methods are required to calculate path-equivalent ranges, e.g. using a convex enclosure or the MOI (Moment Of Inertia) method. In this work, a novel Incremental Fatigue Damage methodology is introduced to continuously account for the accumulation of multiaxial fatigue damage under service loads, without requiring rainflow counters or path-equivalent range estimators. The proposed approach is not based on questionable Continuum Damage Mechanics concepts or on the integration of elastoplastic work. Instead, fatigue damage itself is continuously integrated, based on damage parameters adopted by traditional fatigue models well tested in engineering practice. A framework of nested damage surfaces is introduced, allowing the calculation of fatigue damage even for general 6D multiaxial load histories. The proposed approach is validated by non-proportional tension-torsion experiments on tubular 316L stainless steel specimens.

  4. Modelling large-deforming fluid-saturated porous media using an Eulerian incremental formulation

    CERN Document Server

    Rohan, Eduard

    2016-01-01

The paper deals with modelling fluid-saturated porous media subject to large deformation. An Eulerian incremental formulation is derived from the problem posed in the spatial configuration in terms of the equilibrium equation and mass conservation. Perturbation of the hyperelastic porous medium is described by the Biot model, which involves poroelastic coefficients and the permeability governing the Darcy flow. Using the material derivative with respect to a convection velocity field, we obtain the rate formulation which allows for linearization of the residuum function. For a given time discretization with backward finite difference approximation of the time derivatives, two incremental problems are obtained which constitute the predictor and corrector steps of the implicit time-integration scheme. A conforming mixed finite element approximation in space is used. Validation of the numerical model implemented in the SfePy code is reported for an isotropic medium with a hyperelastic solid phase. The propose...
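
    The Biot model mentioned above couples solid equilibrium with Darcy flow; in its standard quasi-static small-strain form it reads (a textbook statement of the Biot system, not the paper's exact Eulerian rate equations):

    \[
    \nabla\cdot\bigl(\mathbb{C}:\boldsymbol{e}(\mathbf{u}) - \alpha\,p\,\mathbf{I}\bigr) = \mathbf{0},
    \qquad
    \frac{\partial}{\partial t}\Bigl(\alpha\,\nabla\cdot\mathbf{u} + \frac{p}{M}\Bigr) - \nabla\cdot\bigl(\mathbf{K}\,\nabla p\bigr) = 0,
    \]

    where $\mathbf{u}$ is the solid displacement, $p$ the pore pressure, $\alpha$ the Biot coefficient, $M$ the Biot modulus, and $\mathbf{K}$ the permeability governing the Darcy flow.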

  5. A Single-Instance Incremental SAT Formulation of Proof- and Counterexample-Based Abstraction

    CERN Document Server

    Een, Niklas; Amla, Nina

    2010-01-01

    This paper presents an efficient, combined formulation of two widely used abstraction methods for bit-level verification: counterexample-based abstraction (CBA) and proof-based abstraction (PBA). Unlike previous work, this new method is formulated as a single, incremental SAT-problem, interleaving CBA and PBA to develop the abstraction in a bottom-up fashion. It is argued that the new method is simpler conceptually and implementation-wise than previous approaches. As an added bonus, proof-logging is not required for the PBA part, which allows for a wider set of SAT-solvers to be used.
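
    The single-instance incremental style described here can be mimicked with any assumption-based SAT interface: design clauses are loaded once, activation literals passed as assumptions decide which parts belong to the current abstraction, and CBA/PBA refinements only add clauses or change assumptions instead of restarting the solver. A minimal illustration of that idiom using the python-sat (pysat) package; the gates, clauses and variable numbering are hypothetical, and this is not the authors' tool, just a sketch of the mechanism:

```python
from pysat.solvers import Glucose3

# A toy CNF in which each "gate" is guarded by an activation literal
# (variables 101, 102).  A guarded clause (C or -a) is enforced only when a
# is passed as a positive assumption, so gates can be moved in and out of
# the abstraction without restarting the solver.
guarded_clauses = {
    101: [[1], [2]],        # hypothetical gate g1: forces x1 and x2
    102: [[-1, -2]],        # hypothetical gate g2: forbids x1 and x2 together
}

solver = Glucose3()
for act, clauses in guarded_clauses.items():
    for clause in clauses:
        solver.add_clause(clause + [-act])

active = [101]                                   # start with a small abstraction
while True:
    if solver.solve(assumptions=active):
        print("abstract counterexample:", solver.get_model())
        if len(active) == len(guarded_clauses):  # counterexample is concrete
            break
        active = sorted(guarded_clauses)         # CBA-style refinement: add gates
    else:
        core = solver.get_core() or []           # PBA-style: gates the proof used
        print("unsatisfiable; proof used gates:", core)
        break

solver.delete()
```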

  6. Complete Tangent Stiffness for eXtended Finite Element Method by including crack growth parameters

    DEFF Research Database (Denmark)

    Mougaard, J.F.; Poulsen, P.N.; Nielsen, L.O.

    2013-01-01

The eXtended Finite Element Method (XFEM) is a useful tool for modeling the growth of discrete cracks in structures made of concrete and other quasi‐brittle and brittle materials. However, in a standard application of XFEM, the tangent stiffness is not complete. This is a result of not including...... the crack geometry parameters, such as the crack length and the crack direction, directly in the virtual work formulation. For efficiency, it is essential to obtain a complete tangent stiffness. A new method is presented in this work to include, in an incremental form, the crack growth parameters on equal terms...

  7. A variational formulation for the incremental homogenization of elasto-plastic composites

    Science.gov (United States)

    Brassart, L.; Stainier, L.; Doghri, I.; Delannay, L.

    2011-12-01

This work addresses the micro-macro modeling of composites having elasto-plastic constituents. A new model is proposed to compute the effective stress-strain relation along arbitrary loading paths. The proposed model is based on an incremental variational principle (Ortiz, M., Stainier, L., 1999. The variational formulation of viscoplastic constitutive updates. Comput. Methods Appl. Mech. Eng. 171, 419-444) according to which the local stress-strain relation derives from a single incremental potential at each time step. The effective incremental potential of the composite is then estimated based on a linear comparison composite (LCC) with an effective behavior computed using available schemes in linear elasticity. Algorithmic elegance of the time-integration of J2 elasto-plasticity is exploited in order to define the LCC. In particular, the elastic predictor strain is used explicitly. The method yields a homogenized yield criterion and radial return equation for each phase, as well as a homogenized plastic flow rule. The predictive capabilities of the proposed method are assessed against reference full-field finite element results for several particle-reinforced composites.
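
    The radial-return structure mentioned above is the classical backward-Euler update for J2 plasticity; a minimal single-material sketch with linear isotropic hardening (plain numpy, illustrative material constants, not the homogenization scheme of the paper):

```python
import numpy as np

def radial_return_j2(eps, eps_p_old, alpha_old,
                     E=200e3, nu=0.3, sigma_y=250.0, H=1000.0):
    """One backward-Euler return-mapping step for small-strain J2 plasticity
    with linear isotropic hardening.  `eps` is the total strain tensor (3x3),
    `eps_p_old` the previous plastic strain, `alpha_old` the accumulated
    equivalent plastic strain.  Returns (stress, eps_p_new, alpha_new)."""
    mu = E / (2.0 * (1.0 + nu))            # shear modulus
    kappa = E / (3.0 * (1.0 - 2.0 * nu))   # bulk modulus
    I = np.eye(3)

    eps_e_trial = eps - eps_p_old                       # elastic trial strain
    vol = np.trace(eps_e_trial)
    dev = eps_e_trial - vol / 3.0 * I
    s_trial = 2.0 * mu * dev                            # trial deviatoric stress
    q_trial = np.sqrt(1.5) * np.linalg.norm(s_trial)    # von Mises equivalent stress

    f_trial = q_trial - (sigma_y + H * alpha_old)       # yield function
    if f_trial <= 0.0:                                  # elastic step
        return kappa * vol * I + s_trial, eps_p_old, alpha_old

    dgamma = f_trial / (3.0 * mu + H)                   # plastic multiplier
    n = 1.5 * s_trial / q_trial                         # associative flow direction
    s_new = s_trial - 2.0 * mu * dgamma * n             # radial return
    return (kappa * vol * I + s_new,
            eps_p_old + dgamma * n,
            alpha_old + dgamma)
```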

  8. Illumination by Tangent Lines

    CERN Document Server

    Horwitz, Alan

    2011-01-01

Let f be a differentiable function on the real line, and let $P \in G_{f}^{C}$ = the set of all points not on the graph of f. We say that the illumination index of P, denoted by $I_{f}(P)$, is k if there are k distinct tangents to the graph of f which pass through P. In section 2 we prove results about the illumination index of f with $f''(x) \geq 0$ on $\Re$. In particular, suppose that $y=L_1(x)$ and $y=L_2(x)$ are distinct oblique asymptotes of f and let $P=(s,t) \in G_{f}^{C}$. If max(L_1(s),L_2(s))

  9. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    Science.gov (United States)

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic programming, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programmings. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
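
    The local tangent space representations mentioned above are typically obtained by a local PCA of each point's neighbourhood; a small numpy sketch of that estimation step (the neighbourhood size k and intrinsic dimension d are illustrative choices, not values from the paper):

```python
import numpy as np

def local_tangent_spaces(X, k=10, d=2):
    """Estimate a d-dimensional tangent basis at every sample of X (n x D)
    by PCA of its k nearest neighbours.  Returns means (n x D) and
    orthonormal bases (n x D x d)."""
    n, D = X.shape
    # pairwise squared distances (fine for small n; use a KD-tree for large n)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, :k]          # k nearest neighbours (incl. self)

    means = np.empty((n, D))
    bases = np.empty((n, D, d))
    for i in range(n):
        Ni = X[nbrs[i]]                           # neighbourhood of point i
        mu = Ni.mean(axis=0)
        # principal directions of the centred neighbourhood = tangent basis
        _, _, Vt = np.linalg.svd(Ni - mu, full_matrices=False)
        means[i], bases[i] = mu, Vt[:d].T
    return means, bases

# toy usage: noisy circle in the plane, 1-D tangent spaces
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * np.random.randn(200, 2)
means, bases = local_tangent_spaces(X, k=8, d=1)
```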

  10. Teachers' Conceptions of Tangent Line

    Science.gov (United States)

    Paez Murillo, Rosa Elvira; Vivier, Laurent

    2013-01-01

    In order to study the conceptions, and their evolutions, of the tangent line to a curve an updating workshop which took place in Mexico was designed for upper secondary school teachers. This workshop was planned using the methodology of cooperative learning, scientific debate and auto reflection (ACODESA) and the conception-knowing-concept model…

  12. Atlantic NAD 83 SLA Baseline Tangents

    Data.gov (United States)

    Bureau of Ocean Energy Management, Department of the Interior — This data set contains baseline tangent lines in ArcGIS shapefile format for the BOEM Atlantic Region. Baseline tangent lines are typically bay or river closing...

  13. Loss tangent measurements on unirradiated alumina

    Energy Technology Data Exchange (ETDEWEB)

    Zinkle, S.J.; Goulding, R.H. [Oak Ridge National Lab., TN (United States)

    1996-04-01

Unirradiated room temperature loss tangent values for sapphire and several commercial grades of polycrystalline alumina are compiled for frequencies between 10^5 and 4x10^11 Hz. Sapphire exhibits significantly lower values for the loss tangent at frequencies up to 10^11 Hz. The loss tangents of 3 different grades of Wesgo alumina (AL300, AL995, AL998) and 2 different grades of Coors alumina (AD94, AD995) have typical values near ~10^-4 at a frequency of 10^8 Hz. On the other hand, the loss tangent of Vitox alumina exhibits a large loss peak, tan δ ≈ 5x10^-3, at this frequency.

  14. Data assimilation in a coupled physical-biogeochemical model of the California Current System using an incremental lognormal 4-dimensional variational approach: Part 1-Model formulation and biological data assimilation twin experiments

    Science.gov (United States)

    Song, Hajoon; Edwards, Christopher A.; Moore, Andrew M.; Fiechter, Jerome

    2016-10-01

    A quadratic formulation for an incremental lognormal 4-dimensional variational assimilation method (incremental L4DVar) is introduced for assimilation of biogeochemical observations into a 3-dimensional ocean circulation model. L4DVar assumes that errors in the model state are lognormally rather than Gaussian distributed, and implicitly ensures that state estimates are positive definite, making this approach attractive for biogeochemical variables. The method is made practical for a realistic implementation having a large state vector through linear assumptions that render the cost function quadratic and allow application of existing minimization techniques. A simple nutrient-phytoplankton-zooplankton-detritus (NPZD) model is coupled to the Regional Ocean Modeling System (ROMS) and configured for the California Current System. Quadratic incremental L4DVar is evaluated in a twin model framework in which biological fields only are in error and compared to G4DVar which assumes Gaussian distributed errors. Five-day assimilation cycles are used and statistics from four years of model integration analyzed. The quadratic incremental L4DVar results in smaller root-mean-squared errors and better statistical agreement with reference states than G4DVar while maintaining a positive state vector. The additional computational cost and implementation effort are trivial compared to the G4DVar system, making quadratic incremental L4DVar a practical and beneficial option for realistic biogeochemical state estimation in the ocean.
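
    For orientation, the inner loop of a generic incremental 4DVar scheme minimizes a quadratic cost function of the increment; in the lognormal variant described here the increment is taken in log-transformed state variables so that the analysis remains positive (a textbook form, not the exact operators of the ROMS/NPZD implementation):

    \[
    J(\delta\mathbf{x}) \;=\; \tfrac{1}{2}\,\delta\mathbf{x}^{\mathrm T}\mathbf{B}^{-1}\,\delta\mathbf{x}
    \;+\;\tfrac{1}{2}\sum_{i}\bigl(\mathbf{H}_i\mathbf{M}_i\,\delta\mathbf{x}-\mathbf{d}_i\bigr)^{\mathrm T}\mathbf{R}_i^{-1}\bigl(\mathbf{H}_i\mathbf{M}_i\,\delta\mathbf{x}-\mathbf{d}_i\bigr),
    \]

    where $\mathbf{B}$ and $\mathbf{R}_i$ are the background and observation error covariances, $\mathbf{M}_i$ and $\mathbf{H}_i$ the tangent linear model and observation operators, and $\mathbf{d}_i$ the innovations at observation time $i$.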

  15. Calculation for path-domain independent J integral with elasto-viscoplastic consistent tangent operator concept-based boundary element methods

    Science.gov (United States)

    Yong, Liu; Qichao, Hong; Lihua, Liang

    1999-05-01

This paper presents an elasto-viscoplastic consistent tangent operator (CTO) based boundary element formulation, and its application to the calculation of path-domain independent J integrals (an extension of the classical J integrals) in nonlinear crack analysis. When viscoplastic deformation occurs, the effective stresses around the crack tip in the nonlinear region are allowed to exceed the loading surface, and pure plastic theory is not suitable for this situation. The concept of consistency employed in the solution of the incremental viscoplastic problem plays a crucial role in preserving the quadratic asymptotic rate of convergence of iterative schemes based on Newton's method. Therefore, this paper investigates the viscoplastic crack problem, and presents an implicit viscoplastic algorithm using the CTO concept in a boundary element framework for path-domain independent J integrals. Applications are presented with two numerical examples for viscoplastic crack problems and J integrals.

  16. Examining Students' Generalizations of the Tangent Concept: A Theoretical Perspective

    Science.gov (United States)

    Çekmez, Erdem; Baki, Adnan

    2016-01-01

    The concept of a tangent is important in understanding many topics in mathematics and science. Earlier studies on students' understanding of the concept of a tangent have reported that they have various misunderstandings and experience difficulties in transferring their knowledge about the tangent line from Euclidean geometry into calculus. In…

  17. Tangent hyperbolic circular frequency diverse array radars

    Directory of Open Access Journals (Sweden)

    Sarah Saeed

    2016-03-01

Full Text Available Frequency diverse arrays (FDAs) with uniform frequency offset (UFO) have been in the research spotlight for the past few years. Not much attention has been devoted to non-uniform frequency offsets in FDAs. This study investigates the tangent hyperbolic (TH) function as a frequency offset selection scheme in circular FDAs (CFDAs). The investigation reveals a three-dimensional single-maximum beampattern, which promises to enhance system detection capability and signal-to-interference-plus-noise ratio. Furthermore, by utilising the versatility of the TH function, a highly configurable array system is achieved, where beampatterns of three different configurations of FDA can be generated just by adjusting a single function parameter. This study further examines the utility of the proposed TH-CFDA in some practical radar scenarios.
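
    One plausible way to realize a tangent hyperbolic offset schedule across array elements is sketched below; the parametrization is purely hypothetical and is not taken from the paper, it only illustrates how a single shape parameter can morph the offset profile:

```python
import numpy as np

def th_frequency_offsets(n_elems=16, f0=10e9, delta_f=3e3, alpha=0.5):
    """Hypothetical tangent-hyperbolic frequency-offset schedule for a circular
    frequency diverse array: element n transmits at f0 plus an offset shaped by
    tanh(alpha * n).  Sweeping alpha morphs the schedule from nearly uniform
    (small alpha) to strongly saturating (large alpha)."""
    n = np.arange(n_elems)
    return f0 + delta_f * np.tanh(alpha * n)

print(th_frequency_offsets(8))
```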

  18. Image denoising using local tangent space alignment

    Science.gov (United States)

    Feng, JianZhou; Song, Li; Huo, Xiaoming; Yang, XiaoKang; Zhang, Wenjun

    2010-07-01

We propose a novel image denoising approach, which is based on exploring an underlying (nonlinear) low-dimensional manifold. Using local tangent space alignment (LTSA), we 'learn' such a manifold, which approximates the image content effectively. The denoising is performed by minimizing a newly defined objective function, which is a sum of two terms: (a) the difference between the noisy image and the denoised image, (b) the distance from the image patch to the manifold. We extend the LTSA method from manifold learning to denoising. We introduce the local dimension concept that leads to adaptivity to different kinds of image patches, e.g. flat patches having lower dimension. We also plug in a basic denoising stage to estimate the local coordinates more accurately. It is found that the proposed method is competitive: its performance surpasses that of the K-SVD denoising method.

  19. Incremental Support Vector Learning for Ordinal Regression.

    Science.gov (United States)

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2015-07-01

    Support vector ordinal regression (SVOR) is a popular method to tackle ordinal regression problems. However, until now there were no effective algorithms proposed to address incremental SVOR learning due to the complicated formulations of SVOR. Recently, an interesting accurate on-line algorithm was proposed for training ν -support vector classification (ν-SVC), which can handle a quadratic formulation with a pair of equality constraints. In this paper, we first present a modified SVOR formulation based on a sum-of-margins strategy. The formulation has multiple constraints, and each constraint includes a mixture of an equality and an inequality. Then, we extend the accurate on-line ν-SVC algorithm to the modified formulation, and propose an effective incremental SVOR algorithm. The algorithm can handle a quadratic formulation with multiple constraints, where each constraint is constituted of an equality and an inequality. More importantly, it tackles the conflicts between the equality and inequality constraints. We also provide the finite convergence analysis for the algorithm. Numerical experiments on the several benchmark and real-world data sets show that the incremental algorithm can converge to the optimal solution in a finite number of steps, and is faster than the existing batch and incremental SVOR algorithms. Meanwhile, the modified formulation has better accuracy than the existing incremental SVOR algorithm, and is as accurate as the sum-of-margins based formulation of Shashua and Levin.

  20. Digital Offshore Cadastre (DOC) - Pacific83 - Baseline Tangent Lines

    Data.gov (United States)

    Bureau of Ocean Energy Management, Department of the Interior — This data set contains baseline tangent lines and bay closing lines in ESRI Arc/Info export and Arc/View shape file formats for the BOEM Pacific Region. Baseline...

  1. Tangent bundles of Hantzsche-Wendt manifolds

    Science.gov (United States)

    Gaşior, A.; Szczepański, A.

    2013-08-01

We formulate a condition for the existence of a SpinC-structure on an oriented flat manifold Mn with H2(Mn,R)=0. We prove that Mn has a SpinC-structure if and only if there exists a homomorphism ɛ:π1(Mn)→SpinC(n) such that λ∘ɛ=h, where h:π1(Mn)→SO(n) is a holonomy homomorphism and λ:SpinC(n)→SO(n) is the standard homomorphism. As an application we prove that cyclic Hantzsche-Wendt manifolds do not have a SpinC-structure.

  2. Pullback incremental attraction

    Directory of Open Access Journals (Sweden)

    Kloeden Peter E.

    2014-01-01

    Full Text Available A pullback incremental attraction, a nonautonomous version of incremental stability, is introduced for nonautonomous systems that may have unbounded limiting solutions. Its characterisation by a Lyapunov function is indicated

  3. Incremental Distance Transforms (IDT)

    NARCIS (Netherlands)

    Schouten, Theo E.; van den Broek, Egon; Erçil, A.; Çetin, M.; Boyer, K.; Lee, S.-W.

    2010-01-01

    A new generic scheme for incremental implementations of distance transforms (DT) is presented: Incremental Distance Transforms (IDT). This scheme is applied on the cityblock, Chamfer, and three recent exact Euclidean DT (E2DT). A benchmark shows that for all five DT, the incremental implementation r

  4. Microwave dielectric tangent losses in KDP and DKDP crystals

    Indian Academy of Sciences (India)

    Trilok Chandra Upadhyay; Birendra Singh Semwal

    2003-03-01

By adding cubic and quartic phonon anharmonic interactions to the pseudospin lattice coupled mode (PLCM) model for KDP-type crystals and using the double-time temperature dependent Green's function method, expressions for the soft mode frequency, dielectric constant and dielectric tangent loss are obtained. Using the model parameters given by Ganguli et al [9], the dielectric losses are calculated for KDP and DKDP crystals. In the microwave frequency range an increase in frequency (1–35 GHz) is followed by an increase in dielectric tangent loss (1–35) at 98 K and (1–15) × 10^-2 at 333 K for KDP and DKDP crystals respectively. The dielectric tangent loss decreases from 0.052 to 0.042 for KDP crystals with an increase in temperature from 130 to 170 K, and for DKDP crystals it decreases from 0.0166 to 0.0074 with an increase in temperature from 230 to 343 K in their paraelectric phases at 10 GHz. This shows Curie–Weiss behavior of the dielectric tangent loss.

  5. First Year Mathematics Undergraduates' Settled Images of Tangent Line

    Science.gov (United States)

    Biza, Irene; Zachariades, Theodossios

    2010-01-01

This study concerns 182 first year mathematics undergraduates' perspectives on the tangent line of a function graph in the light of a previous study on Year 12 pupils' perspectives. The aim was the investigation of tangency images that settle after the undergraduates' distancing from the notion for a few months and after their participation in…

  6. On sets without tangents and exterior sets of a conic

    CERN Document Server

    Van de Voorde, Geertrui

    2012-01-01

    A set without tangents in $\\PG(2,q)$ is a set of points S such that no line meets S in exactly one point. An exterior set of a conic $\\mathcal{C}$ is a set of points $\\E$ such that all secant lines of $\\E$ are external lines of $\\mathcal{C}$. In this paper, we first recall some known examples of sets without tangents and describe them in terms of determined directions of an affine pointset. We show that the smallest sets without tangents in $\\PG(2,5)$ are (up to projective equivalence) of two different types. We generalise the non-trivial type by giving an explicit construction of a set without tangents in $\\PG(2,q)$, $q=p^h$, $p>2$ prime, of size $q(q-1)/2-r(q+1)/2$, for all $0\\leq r\\leq (q-5)/2$. After that, a different description of the same set in $\\PG(2,5)$, using exterior sets of a conic, is given and we investigate in which ways a set of exterior points on an external line $L$ of a conic in $\\PG(2,q)$ can be extended with an extra point $Q$ to a larger exterior set of $\\mathcal{C}$. It turns out that ...

  7. THE TANGENT CONES ON CONSTRAINT QUALIFICATIONS IN OPTIMIZATION PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Huang Longguang

    2008-01-01

This article proposes a few tangent cones, which relate to the constraint qualifications of optimization problems. With the upper and lower directional derivatives of an objective function, the characteristics of cones on the constraint qualifications are presented. The interrelations among the constraint qualifications, a few cones involved, and level sets of upper and lower directional derivatives are derived.

  8. Giambelli-type formula for subbundles of the tangent bundle

    CERN Document Server

    Kazarian, M E

    1996-01-01

Let us consider a generic $n$-dimensional subbundle $\CV$ of the tangent bundle $TM$ on some given manifold $M$. Given $\CV$ one can define different degeneracy loci $\Sigma_{\bold r}(\CV)$, $\bold r=(r_1\leq r_2\leq r_3\leq \dots)$, consisting of the points $x \in M$ at which the dimension of the subspace $\CV^j(x)\subset TM(x)$ spanned by all length $\leq j$ commutators of vector fields tangent to $\CV$ at $x$ is less than or equal to $r_j$. We calculate 'explicitly' the cohomology classes dual to $\Sigma_{\bold r}(\CV)$ using determinantal formulas due to W. Fulton and the expression for the Chern classes of the associated bundle of free Lie algebras in terms of the Chern classes of $\CV$.

  9. Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment

    Institute of Scientific and Technical Information of China (English)

    张振跃; 查宏远

    2004-01-01

We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold is learned by constructing an approximation for the tangent space at each point, and those tangent spaces are then aligned to give the global coordinates of the data points with respect to the underlying manifold. We also present an error analysis of our algorithm showing that reconstruction errors can be quite small in some cases. We illustrate our algorithm using curves and surfaces both in 2D/3D Euclidean spaces and higher dimensional Euclidean spaces. We also address several theoretical and algorithmic issues for further research and improvements.

  10. Tangent lines, inflections, and vertices of closed curves

    CERN Document Server

    Ghomi, Mohammad

    2012-01-01

    We show that every smooth closed curve C immersed in Euclidean 3-space satisfies the sharp inequality 2(P+I)+V >5 which relates the numbers P of pairs of parallel tangent lines, I of inflections (or points of vanishing curvature), and V of vertices (or points of vanishing torsion) of C. We also show that 2(P'+I)+V >3, where P' is the number of pairs of concordant parallel tangent lines. The proofs, which employ curve shortening flow with surgery, are based on corresponding inequalities for the numbers of double points, singularities, and inflections of closed curves in the real projective plane and the sphere which intersect every closed geodesic. These findings extend some classical results in curve theory including works of Moebius, Fenchel, and Segre, which is also known as Arnold's "tennis ball theorem".

  11. Computerized tomography using a modified orthogonal tangent correction algorithm.

    Science.gov (United States)

    Hsia, T C; Smith, S C; Lantz, B M

    1976-10-01

A modified orthogonal tangent correction algorithm is presented for computerized tomography. The algorithm uses four X-ray scans, spaced 45 degrees apart, to reconstruct a transverse axial image. The reconstruction procedure is iterative, in which image matrix elements are corrected by alternately matching the two sets of orthogonal scan data. The algorithm has been applied to phantom data as well as to video-recorded fluoroscopic data.

  12. Experience on tangent delta norms adopted for repaired generator

    Energy Technology Data Exchange (ETDEWEB)

    Misra, N.N.; Sood, D.K. [National Thermal Power Corp. (India)

    2005-07-01

The repair techniques of generators are very crucial for avoiding prolonged forced outages. Crucial decisions based on sound knowledge and judgement become essential in many cases. The unit under discussion had failed on account of a flashover in the exciter end overhang windings. The failure resulted in damage to the stator bars as well as the generator core. The damaged end packets of the stator core were replaced at site. All winding bars were removed from the stator core and the damaged bars were replaced with new bars. The rest of the bars were subjected to tangent delta tests for reuse. Acceptance norms of 0.6% tip-up from 0.2 pu to 0.6 pu of rated stator voltage were adopted. Some of the bars outside the acceptable limits of tangent delta were shifted close to neutral so that the standard tan delta norms were met. This was felt necessary because the lead time for procurement of new bars was more than six months. The above-adopted norms for tangent delta will be of much use to operating utilities. The unit under discussion was of 67.5 MW, operating at 50 Hz, 0.85 pf lag, and had logged 66160.46 operating hours before failure. (author)

  13. Experience on tangent delta norms adopted for repaired generators

    Energy Technology Data Exchange (ETDEWEB)

    Misra, N.N.; Sood, D.K. [National Thermal Power Corp., (India)

    2005-07-01

Since repair measures and techniques for generators are critical in avoiding prolonged forced outages, repair decisions must be based on sound knowledge and judgement. This paper describes the failure of an electric generator unit due to flashover in the exciter end overhang windings. During inspection, early symptoms of slot partial discharge were observed along with degradation of the outer corona coatings of stator bars. The unit was commissioned in 1988, had a generating capacity of 67.5 MW operating at 50 Hz, 0.85 pf lag, and had logged 66160.46 operating hours before failure. The failure caused damage to the stator bars and to the generator core. The damaged end packets of the stator core were replaced at site. The winding bars were removed from the stator core and damaged bars were replaced. The remainder were subjected to tangent delta tests for reuse. Acceptance norms of 0.6 per cent tip-up from 0.2 pu to 0.6 pu of rated stator voltage were adopted. Some of the bars outside the acceptable limits of tangent delta were shifted close to neutral in order to meet the standard tan delta norms. These adopted norms for tangent delta will be of particular use for operating utilities. It was concluded that the most probable cause of failure was continuous degradation of the winding insulation due to high overhang vibration causing electrical and mechanical stresses. 2 tabs., 7 figs.

  14. Modeling tangent hyperbolic nanoliquid flow with heat and mass flux conditions

    Science.gov (United States)

    Hayat, T.; Ullah, I.; Alsaedi, A.; Ahmad, B.

    2017-03-01

This work predicts the hydromagnetic flow of a tangent hyperbolic nanofluid originated by a non-linear impermeable stretching surface. The considered nanofluid model takes into account the Brownian diffusion and thermophoresis characteristics. The incompressible liquid is electrically conducting in the presence of a non-uniformly applied magnetic field. The heat and mass transfer phenomena are subject to flux conditions. The mathematical formulation is developed by utilizing the boundary layer approach. A system of ordinary differential equations is obtained by employing adequate variables. Convergence of the obtained series solutions is checked and explicitly verified through tables and plots. Effects of numerous pertinent variables on the velocity, temperature and concentration fields are addressed. Computations for the surface drag coefficient, heat transfer rate and mass transfer rate are presented and inspected for the influence of the involved variables. Temperature is found to increase for a higher magnetic variable. Present and previous outcomes in the limiting sense are also compared.

  15. Incremental algorithms on lists

    NARCIS (Netherlands)

    Jeuring, J.T.

    1991-01-01

    Incremental computations can improve the performance of interactive programs such as spreadsheet programs, program development environments, text editors, etc. Incremental algorithms describe how to compute a required value depending on the input, after the input has been edited. By considering the

  16. Path planning using a tangent graph for mobile robots among polygonal and curved obstacles

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yun-Hui; Arimoto, Suguru (Univ. of Tokyo (Japan))

    1992-08-01

This article proposes a tangent graph for path planning of mobile robots among obstacles with a general boundary. The tangent graph is defined on the basis of the locally shortest path. It has the same data structure as the visibility graph, but its nodes represent common tangent points on obstacle boundaries, and its edges correspond to collision-free common tangents between the boundaries and convex boundary segments between the tangent points. The tangent graph requires O(K^2) memory, where K denotes the total number of convex segments of the obstacle boundaries. The tangent graph includes all locally shortest paths and is capable of coping with path planning not only among polygonal obstacles but also among curved obstacles.

  17. Plant Leaf Recognition through Local Discriminative Tangent Space Alignment

    Directory of Open Access Journals (Sweden)

    Chuanlei Zhang

    2016-01-01

Full Text Available Manifold learning based dimensionality reduction algorithms have received much attention in plant leaf recognition, as these algorithms can select a subset of effective and efficient discriminative features in the leaf images. In this paper, a dimensionality reduction method based on local discriminative tangent space alignment (LDTSA) is introduced for plant leaf recognition based on leaf images. The proposed method can embrace part optimization and whole alignment and encapsulate the geometric and discriminative information into a local patch. The experiments on two plant leaf databases, the ICL and Swedish plant leaf datasets, demonstrate the effectiveness and feasibility of the proposed method.

  18. Superconducting axisymmetric finite elements based on a gauged potential variational principle. Part 1: Formulation

    Science.gov (United States)

    Schuler, James J.; Felippa, Carlos A.

    1994-01-01

The present work is part of a research program for the numerical simulation of electromagnetic (EM) fields within conventional Ginzburg-Landau (GL) superconductors. The final goal of this research is to formulate, develop and validate finite element (FE) models that can accurately capture electromagnetic thermal and material phase changes in a superconductor. The formulations presented here are for a time-independent Ginzburg-Landau superconductor and are derived from a potential-based variational principle. We develop an appropriate variational formulation of time-independent superconductivity for the general three-dimensional case and specialize it to the one-dimensional case. Also developed are expressions for the material-dependent parameters alpha and beta of GL theory and their dependence upon the temperature T. The one-dimensional formulation is then discretized for finite element purposes and the first variation of these equations is obtained. The resultant Euler equations contain nonlinear terms in the primary variables. To solve these equations, an incremental-iterative solution method is used. Expressions for the internal force vector, external force vector, loading vector and tangent stiffness matrix are therefore developed for use with the solution procedure.
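
    The incremental-iterative procedure referred to above is, in structure, a load-stepped Newton-Raphson iteration on the discrete residual using the tangent stiffness; a generic sketch of such a driver (the residual and tangent callbacks are placeholders, not the superconductivity element formulation itself):

```python
import numpy as np

def incremental_newton(residual, tangent, u0, n_steps=10, tol=1e-8, max_iter=25):
    """Solve R(u, lam) = 0 by ramping the load factor lam from 0 to 1 in
    n_steps increments and performing Newton iterations with the tangent
    stiffness K = dR/du at each increment."""
    u = np.array(u0, dtype=float)
    for step in range(1, n_steps + 1):
        lam = step / n_steps                    # current load factor
        for _ in range(max_iter):
            r = residual(u, lam)
            if np.linalg.norm(r) < tol:
                break
            K = tangent(u, lam)                 # consistent tangent stiffness
            u -= np.linalg.solve(K, r)          # Newton correction
        else:
            raise RuntimeError(f"no convergence at load step {step}")
    return u

# toy usage on a 1-DOF cubic spring: R = k*u + c*u**3 - lam*F
k, c, F = 1.0, 0.1, 2.0
u = incremental_newton(lambda u, lam: k * u + c * u**3 - lam * F,
                       lambda u, lam: np.array([[k + 3 * c * u[0]**2]]),
                       u0=[0.0])
print(u)
```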

  19. Incremental Alignment Manifold Learning

    Institute of Scientific and Technical Information of China (English)

    Zhi Han; De-Yu Meng; Zong-Sen Xu; Nan-Nan Gu

    2011-01-01

A new manifold learning method, called the incremental alignment method (IAM), is proposed for nonlinear dimensionality reduction of high dimensional data with intrinsic low dimensionality. The main idea is to incrementally align low-dimensional coordinates of input data patch-by-patch to iteratively generate the representation of the entire dataset. The method consists of two major steps, the incremental step and the alignment step. The incremental step incrementally searches for the neighborhood patch to be aligned in the next step, and the alignment step iteratively aligns the low-dimensional coordinates of the neighborhood patch searched to generate the embeddings of the entire dataset. Compared with the existing manifold learning methods, the proposed method has advantages in several respects: high efficiency, easy out-of-sample extension, good metric preservation, and avoidance of the local minima issue. All these properties are supported by a series of experiments performed on synthetic and real-life datasets. In addition, the computational complexity of the proposed method is analyzed, and its efficiency is theoretically argued and experimentally demonstrated.

  20. Construction of the Tangent to a Cycloid Proposed by Wallis and Fermat

    OpenAIRE

    Loredana Biacino

    2017-01-01

In this paper some methods used in the 17th century for the construction of the tangent to a cycloid at a point are presented: the kinematical method employed by Roberval, the classical geometrical method used by Wallis, and Fermat's construction as a consequence of his tangent method. The Constructions of the Tangent to the Cycloid Proposed by Wallis and Fermat: in this paper various methods in use in the 17th century for the construction of the tangent to a cycloid at one of its points are...

  1. On the Tangent Space to the Universal Teichmuller Space

    CERN Document Server

    Nag, S

    1992-01-01

    We find a remarkably simple relationship between the following two models of the tangent space to the Universal Teichm\\"uller Space: (1) The real-analytic model consisting of Zygmund class vector fields on the unit circle; (2) The complex-analytic model comprising 1-parameter families of schlicht functions on the exterior of the unit disc which allow quasiconformal extension. Indeed, the Fourier coefficients of the vector field in (1) turn out to be essentially the same as (the first variations of) the corresponding power series coefficients in (2). These identities have many applications; in particular, to conformal welding, to the almost complex structure of Teichm\\"uller space, to study of the Weil-Petersson metric, to variational formulas for period matrices, etc. These utilities are explored.

  2. The Cretaceous superchron geodynamo: observations near the tangent cylinder.

    Science.gov (United States)

    Tarduno, John A; Cottrell, Rory D; Smirnov, Alexei V

    2002-10-29

    If relationships exist between the frequency of geomagnetic reversals and the morphology, secular variation, and intensity of Earth's magnetic field, they should be best expressed during superchrons, intervals tens of millions of years long lacking reversals. Here we report paleomagnetic and paleointensity data from lavas of the Cretaceous Normal Polarity Superchron that formed at high latitudes near the tangent cylinder that surrounds the solid inner core. The time-averaged field recorded by these lavas is remarkably strong and stable. When combined with global results available from lower latitudes, these data define a time-averaged field that is overwhelmingly dominated by the axial dipole (octupole components are insignificant). These observations suggest that the basic features of the geomagnetic field are intrinsically related. Superchrons may reflect times when the nature of core-mantle boundary heat flux allows the geodynamo to operate at peak efficiency.

  3. Modified Einstein and Finsler Like Theories on Tangent Lorentz Bundles

    CERN Document Server

    Stavrinos, Panayiotis; Vacaru, Sergiu I.

    2014-01-01

    We study modifications of general relativity, GR, with nonlinear dispersion relations which can be geometrized on tangent Lorentz bundles. Such modified gravity theories, MGTs, can be modeled by gravitational Lagrange density functionals $f(\\mathbf{R},\\mathbf{T},F)$ with generalized/ modified scalar curvature $\\mathbf{R}$, trace of matter field tensors $\\mathbf{T}$ and modified Finsler like generating function $F$. In particular, there are defined extensions of GR with extra dimensional "velocity/ momentum" coordinates. For four dimensional models, we prove that it is possible to decouple and integrate in very general forms the gravitational fields for $f(\\mathbf{R},\\mathbf{T},F)$--modified gravity using nonholonomic 2+2 splitting and nonholonomic Finsler like variables $F$. We study the modified motion and Newtonian limits of massive test particles on nonlinear geodesics approximated with effective extra forces orthogonal to the four-velocity. We compute the constraints on the magnitude of extra-acceleration...

  4. A Characterisation of Tangent Subplanes of PG(2,q^3)

    CERN Document Server

    Barwick, S G

    2012-01-01

    In: S.G. Barwick and W.A. Jackson. Sublines and subplanes of PG(2,q^3) in the Bruck--Bose representation in PG(6,q). Finite Fields Th. App. 18 (2012) 93--107., the authors determine the representation of order-q-subplanes and order-q-sublines of PG(2,q^3) in the Bruck-Bose representation in PG(6,q). In particular, they showed that an order-q-subplane of PG(2,q^3) corresponds to a certain ruled surface in PG(6,q). In this article we show that the converse holds, namely that any ruled surface satisfying the required properties corresponds to a tangent order-q-subplane of PG(2,q^3).

  5. Quantum information entropies for a squared tangent potential well

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Shishan [Information and Engineering College, DaLian University, 116622 (China); Sun, Guo-Hua, E-mail: sunghdb@yahoo.com [Centro Universitario Valle de Chalco, Universidad Autónoma del Estado de México, Valle de Chalco Solidaridad, Estado de México, 56615 (Mexico); Dong, Shi-Hai, E-mail: dongsh2@yahoo.com [Departamento de Física, Escuela Superior de Física y Matemáticas, Instituto Politécnico Nacional, Unidad Profesional Adolfo López Mateos, Edificio 9, México D.F. 07738 (Mexico); Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States); Draayer, J.P., E-mail: draayer@sura.org [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States)

    2014-01-10

The particle in a symmetrical squared tangent potential well is studied by examining its Shannon information entropy and standard deviations. The position and momentum information entropy densities ρ_s(x), ρ_s(p) and probability densities ρ(x), ρ(p) are illustrated for different potential ranges L and potential depths U. We present analytical position information entropies S_x for the lowest two states. We observe that the sum of position and momentum entropies S_x and S_p satisfies the Bialynicki-Birula–Mycielski (BBM) inequality. Some eigenstates exhibit entropy squeezing in position. The entropy squeezing in position is compensated by an increase in momentum entropy. We also note that S_x increases with the potential range L, while it decreases with the potential depth U. The variation of S_p is contrary to that of S_x.
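
    The position information entropy referred to above is the continuous Shannon entropy of the probability density, numerically just a quadrature of −ρ ln ρ; a short sketch with an illustrative ground-state-like density on a well of range L (not the squared-tangent eigenfunctions themselves):

```python
import numpy as np

def shannon_entropy(rho, x):
    """Continuous Shannon entropy S = -integral rho(x) ln rho(x) dx,
    evaluated by the trapezoid rule; rho must be normalized on x."""
    integrand = np.where(rho > 0, rho * np.log(rho), 0.0)
    dx = np.diff(x)
    return -np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dx)

# illustrative density: cos^2 ground state of an infinite well of width L
L = 2.0
x = np.linspace(-L / 2, L / 2, 2001)
rho = (2.0 / L) * np.cos(np.pi * x / L) ** 2    # normalized on [-L/2, L/2]
print(shannon_entropy(rho, x))                  # analytic value: ln(L) + ln(2) - 1
```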

  6. Incremental Similarity and Turbulence

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.; Hedevang, Emil; Schmiegel, Jürgen

    This paper discusses the mathematical representation of an empirically observed phenomenon, referred to as Incremental Similarity. We discuss this feature from the viewpoint of stochastic processes and present a variety of non-trivial examples, including those that are of relevance for turbulence...

  7. Incremental Bisimulation Abstraction Refinement

    DEFF Research Database (Denmark)

    Godskesen, Jens Christian; Song, Lei; Zhang, Lijun

    2013-01-01

    an abstraction refinement approach for the probabilistic computation tree logic (PCTL), which is based on incrementally computing a sequence of may- and must-quotient automata. These are induced by depth-bounded bisimulation equivalences of increasing depth. The approach is both sound and complete, since...

  8. Lightweight incremental application upgrade

    NARCIS (Netherlands)

    T. van der Storm (Tijs)

    2006-01-01

I present a lightweight approach to incremental application upgrade in the context of component-based software development. The approach can be used to efficiently implement an automated update feature in a platform and programming language agnostic way. A formal release model is present

  9. Incremental Gaussian Processes

    DEFF Research Database (Denmark)

    Quiñonero-Candela, Joaquin; Winther, Ole

    2002-01-01

    In this paper, we consider Tipping's relevance vector machine (RVM) and formalize an incremental training strategy as a variant of the expectation-maximization (EM) algorithm that we call subspace EM. Working with a subset of active basis functions, the sparsity of the RVM solution will ensure th...

  10. Increments to life and mortality tempo

    Directory of Open Access Journals (Sweden)

    Griffith Feeney

    2006-01-01

Full Text Available This paper introduces and develops the idea of "increments to life." Increments to life are roughly analogous to forces of mortality: they are quantities specified for each age and time by a mathematical function of two variables that may be used to describe, analyze and model changing length of life in populations. The rationale is three-fold. First, I wanted a general mathematical representation of Bongaarts' "life extension" pill (Bongaarts and Feeney 2003), allowing for continuous variation in age and time. This is accomplished in sections 3-5, to which sections 1-2 are preliminaries. It turned out to be a good deal more difficult than I expected, partly on account of the mathematics, but mostly because it requires thinking in very unaccustomed ways. Second, I wanted a means of assessing the robustness of the Bongaarts-Feeney mortality tempo adjustment formula (Bongaarts and Feeney 2003) against variations in increments to life by age. Section 6 shows how the increments to life mathematics accomplishes this with an application to the Swedish data used in Bongaarts and Feeney (2003). In this application, at least, the Bongaarts-Feeney adjustment is robust. Third, I hoped by formulating age-variable increments to life to avoid the slight awkwardness of working with conditional rather than unconditional survival functions. This third aim has not been accomplished, but this appears to be because it was unreasonable to begin with. While it is possible to conceptualize length of life as completely described by an age-varying increments to life function, this is not consistent with the Bongaarts-Feeney mortality tempo adjustment. What seems to be needed, rather, is a model that incorporates two fundamentally different kinds of changes in mortality and length of life, one based on the familiar force of mortality function, the other based on the increments to life function. Section 7 considers heuristically what such models might look like.

  11. Quantum spin transport through Aharonov-Bohm ring with a tangent magnetic field

    Institute of Scientific and Technical Information of China (English)

    Li Zhi-Jian

    2005-01-01

    Quantum spin transport in a mesoscopic Aharonov-Bohm ring with two leads subject to a magnetic field with circular configuration is investigated by means of one-dimensional quantum waveguide theory. Within the framework magnetic flux or by the tangent magnetic field. In particular, the spin flips can be induced by hopping the AB magnetic flux or the tangent field.

  12. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2005-01-01

    This volume is the first of two volumes containing the revised and completed notes lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald during the period March 9 – 22, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present first volume contains the following lectures: "Lévy Processes in Euclidean Spaces and Groups" by David Applebaum, "Locally Compact Quantum Groups" by Johan Kustermans, "Quantum Stochastic Analysis" by J. Martin Lindsay, and "Dilations, Cocycles and Product Systems" by B.V. Rajarama Bhat.

  13. Precursors of extreme increments

    CERN Document Server

    Hallerberg, S; Holstein, D; Kantz, H; Hallerberg, Sarah; Altmann, Eduardo G.; Holstein, Detlef; Kantz, Holger

    2006-01-01

    We investigate precursors and predictability of extreme events in time series, which consist in large increments within successive time steps. In order to understand the predictability of this class of extreme events, we study analytically the prediction of extreme increments in AR(1)-processes. The resulting strategies are then applied to predict sudden increases in wind speed recordings. In both cases we evaluate the success of predictions via creating receiver operator characteristics (ROC-plots). Surprisingly, we obtain better ROC-plots for completely uncorrelated Gaussian random numbers than for AR(1)-correlated data. Furthermore, we observe an increase of predictability with increasing event size. Both effects can be understood by using the likelihood ratio as a summary index for smooth ROC-curves.

  14. Efficient incremental relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2013-07-01

    We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify and forward and decode and forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.

  15. Shellsort with three increments

    CERN Document Server

    Janson, Svante

    2008-01-01

    A perturbation technique can be used to simplify and sharpen A. C. Yao's theorems about the behavior of shellsort with increments $(h,g,1)$. In particular, when $h=\\Theta(n^{7/15})$ and $g=\\Theta(h^{1/5})$, the average running time is $O(n^{23/15})$. The proof involves interesting properties of the inversions in random permutations that have been $h$-sorted and $g$-sorted.
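
    For reference, shellsort with a fixed three-pass increment sequence $(h,g,1)$ as analysed above is only a few lines; a plain Python version (the concrete values of h and g below are arbitrary, not Yao's or the authors' optimal choices):

```python
def shellsort(a, increments=(13, 4, 1)):
    """Shellsort using the given decreasing increment sequence; the final
    increment must be 1 so the last pass is a full insertion sort."""
    a = list(a)
    for h in increments:
        for i in range(h, len(a)):      # h-sort: insertion sort on each residue class
            x, j = a[i], i
            while j >= h and a[j - h] > x:
                a[j] = a[j - h]
                j -= h
            a[j] = x
    return a

print(shellsort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0]))
```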

  16. Unified treatment of microscopic boundary conditions and efficient algorithms for estimating tangent operators of the homogenized behavior in the computational homogenization method

    Science.gov (United States)

    Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic

    2017-03-01

This work provides a unified treatment of arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition, either by the direct constraint elimination or by the Lagrange multiplier elimination method. The macroscopic tangent operators are computed in an efficient way from a linear system with multiple right-hand sides, whose left-hand-side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of vectors on the right-hand side is equal to the number of macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often follows a direct factorization procedure, the computation of the macroscopic tangent operators is then performed using this factorized matrix at a reduced computational time.
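
    The multiple-right-hand-side computation described above amounts to factorizing the converged microscopic stiffness once and back-substituting one right-hand-side vector per macroscopic kinematic variable; a dense-algebra sketch with scipy (the matrix sizes and the condensation operator P are schematic stand-ins, not the actual FE data structures):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def macroscopic_tangent(K, F_macro, P):
    """K       : microscopic tangent stiffness at the converged RVE solution (n x n)
    F_macro : n x m sensitivities of the microscopic residual with respect to
              the m macroscopic kinematic variables (one right-hand side each)
    P       : m x n operator extracting the homogenized quantities.
    Returns the m x m macroscopic tangent operator."""
    lu_piv = lu_factor(K)              # factorize the converged stiffness once
    X = lu_solve(lu_piv, F_macro)      # one back-substitution per RHS column
    return P @ X                       # condense onto the macroscopic variables

# schematic usage with stand-in data of the right shapes
rng = np.random.default_rng(0)
n, m = 50, 6
K = rng.random((n, n)) + n * np.eye(n)     # well-conditioned stand-in stiffness
F_macro = rng.random((n, m))
P = rng.random((m, n))
print(macroscopic_tangent(K, F_macro, P).shape)   # (6, 6)
```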

  17. Formulation techniques for nanofluids.

    Science.gov (United States)

    Rivera-Solorio, Carlos I; Payán-Rodríguez, Luis A; García-Cuéllar, Alejandro J; Ramón-Raygoza, E D; L Cadena-de-la-Peña, Natalia; Medina-Carreón, David

    2013-11-01

Fluids with suspended nanoparticles, commonly known as nanofluids, may be formulated to improve the thermal performance of industrial heat transfer systems and applications. Nanofluids may show enhanced thermal and electrical properties such as thermal conductivity, viscosity, heat transfer coefficient, dielectric strength, etc. However, stability problems may arise, as nanoparticles usually have a tendency to agglomerate and sediment, degrading the enhancement of these properties. In this review, we discuss patents that report advances in the formulation of nanofluids, including: production methods, selection of components (nanoparticles, base fluid and surfactants), their chemical compositions and morphologies, and characterization techniques. Finally, current and future directions in the development of nanofluid formulation are discussed.

  18. Lateral Dynamics Of A Railway Truck On Flexible Tangent Track

    Science.gov (United States)

    Saha*, A. K.; Karmakar, R.; Bhattacharyya, R.

A railway vehicle becomes unstable beyond a critical speed. Assessment of the critical speed is important for safety and passenger comfort. A bond graph model of a railway truck-wheelset system on flexible tangent track has been created with eighteen degrees of freedom, considering six degrees of freedom for each wheelset and the truck unit, without any linearity approximation for the wheelsets. Kalker's linear creep theory has been used for the rail-wheel contact forces. The bond graph model of a single wheelset created earlier has been used for the front and rear wheelsets to model the truck-wheelset system. The model is created and simulated for a given set of nominal parameter values with a rigid track condition. Truck critical speeds and stability behaviour are studied through simulations. The critical speed of a truck is found to be higher than that of a wheelset at the same axle load and conicity for nominal primary suspension and wheelbase. Contrary to the variation of the critical speed of a single wheelset with increasing conicity, the critical speed of a truck decreases with increasing conicity.

  19. Tangent-Impulse Interception for a Hyperbolic Target

    Directory of Open Access Journals (Sweden)

    Dongzhe Wang

    2014-01-01

    Full Text Available The two-body interception problem with an upper-bounded tangent impulse for the interceptor on an elliptic parking orbit to collide with a nonmaneuvering target on a hyperbolic orbit is studied. Firstly, four special initial true anomalies whose velocity vectors are parallel to either of the lines of asymptotes for the target hyperbolic orbit are obtained by using Newton-Raphson method. For different impulse points, the solution-existence ranges of the target true anomaly for any conic transfer are discussed in detail. Then, the time-of-flight equation is solved by the secant method for a single-variable piecewise function about the target true anomaly. Considering the sphere of influence of the Earth and the upper bound on the fuel, all feasible solutions are obtained for different impulse points. Finally, a numerical example is provided to apply the proposed technique for all feasible solutions and the global minimum-time solution with initial coasting time.
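
    The time-of-flight equation above is solved with a secant iteration on a single variable. The sketch below is a generic secant solver of that kind applied to a hypothetical stand-in residual, not to the orbital equations of the paper.

    ```python
    import math

    def secant(f, x0, x1, tol=1e-10, max_iter=50):
        """Find a root of the scalar function f with the secant method."""
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            if f1 == f0:              # flat segment: cannot take another secant step
                break
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2 - x1) < tol:
                return x2
            x0, f0, x1, f1 = x1, f1, x2, f(x2)
        return x1

    # Hypothetical residual standing in for "interceptor time of flight minus target time
    # of flight", both expressed as functions of the target true anomaly.
    residual = lambda x: math.cos(x) - x
    print(secant(residual, 0.0, 1.0))   # ~0.7390851, the root of the stand-in residual
    ```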

  20. Incremental POP Learning

    Institute of Scientific and Technical Information of China (English)

    LIU Ben-yong

    2004-01-01

    In the recently proposed partial oblique projection (POP) learning, a function space is decomposed into two complementary subspaces, such that functions belonging to one of them can be optimally estimated. This paper shows that when the decomposition is performed so that this subspace becomes the largest, a special learning called SPOP learning is obtained and, correspondingly, an incremental learning is implemented whose result is exactly equal to that of batch learning including the novel data. The effectiveness of the method is illustrated by experimental results.

  1. Tangent-impulse transfer from elliptic orbit to an excess velocity vector

    Institute of Scientific and Technical Information of China (English)

    Zhang Gang; Zhang Xiangyu; Cao Xibin

    2014-01-01

    The two-body orbital transfer problem from an elliptic parking orbit to an excess velocity vector with a tangent impulse is studied. The direction of the impulse is constrained to be aligned with the velocity vector, so that speed changes are enough to nullify the relative velocity. First, if one tangent impulse is used, the transfer orbit is obtained by solving a single-variable function of the true anomaly of the initial orbit. For an initial circular orbit, the closed-form solution is derived. For an initial elliptic orbit, the discontinuous point is solved first, then the initial true anomaly is obtained by a numerical iterative approach; moreover, an alternative method is proposed to avoid the singularity. There is only one solution for the one-tangent-impulse escape trajectory. Then, based on the one-tangent-impulse solution, the minimum-energy multi-tangent-impulse escape trajectory is obtained by a numerical optimization algorithm, e.g., the genetic method. Finally, several examples are provided to validate the proposed method. The numerical results show that the minimum-energy multi-tangent-impulse escape trajectory is the same as the one-tangent-impulse trajectory.
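
    For the initial circular orbit mentioned above, the magnitude of the tangential impulse can be checked against the standard vis-viva relation for a hyperbolic escape. The sketch below uses that textbook relation, not the paper's derivation, and the numbers are purely illustrative.

    ```python
    import math

    MU_EARTH = 3.986004418e14   # Earth's gravitational parameter [m^3/s^2]

    def tangent_escape_dv(r, v_inf, mu=MU_EARTH):
        """Single tangential impulse from a circular orbit of radius r [m] that
        produces a hyperbolic excess speed v_inf [m/s]:
        dv = sqrt(v_inf^2 + 2*mu/r) - sqrt(mu/r)."""
        v_circ = math.sqrt(mu / r)                      # speed on the circular parking orbit
        v_needed = math.sqrt(v_inf**2 + 2.0 * mu / r)   # vis-viva speed on the escape hyperbola at r
        return v_needed - v_circ

    # Example: ~300 km altitude parking orbit, 3 km/s excess velocity.
    print(tangent_escape_dv(6_678_000.0, 3_000.0))      # ~3.6e3 m/s (illustrative)
    ```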

  2. Pursuit eye-movements in curve driving differentiate between future path and tangent point models.

    Directory of Open Access Journals (Sweden)

    Otto Lappi

    Full Text Available For nearly 20 years, looking at the tangent point on the road edge has been prominent in models of visual orientation in curve driving. It is the most common interpretation of the commonly observed pattern of car drivers looking through a bend, or at the apex of the curve. Indeed, in the visual science literature, visual orientation towards the inside of a bend has become known as "tangent point orientation". Yet, it remains to be empirically established whether it is the tangent point the drivers are looking at, or whether some other reference point on the road surface, or several reference points, are being targeted in addition to, or instead of, the tangent point. Recently discovered optokinetic pursuit eye-movements during curve driving can provide complementary evidence over and above traditional gaze-position measures. This paper presents the first detailed quantitative analysis of pursuit eye movements elicited by curvilinear optic flow in real driving. The data implicates the far zone beyond the tangent point as an important gaze target area during steady-state cornering. This is in line with the future path steering models, but difficult to reconcile with any pure tangent point steering model. We conclude that the tangent point steering models do not provide a general explanation of eye movement and steering during a curve driving sequence and cannot be considered uncritically as the default interpretation when the gaze position distribution is observed to be situated in the region of the curve apex.

  3. Tangent-impulse transfer from elliptic orbit to an excess velocity vector

    Directory of Open Access Journals (Sweden)

    Zhang Gang

    2014-06-01

    Full Text Available The two-body orbital transfer problem from an elliptic parking orbit to an excess velocity vector with the tangent impulse is studied. The direction of the impulse is constrained to be aligned with the velocity vector, then speed changes are enough to nullify the relative velocity. First, if one tangent impulse is used, the transfer orbit is obtained by solving a single-variable function about the true anomaly of the initial orbit. For the initial circular orbit, the closed-form solution is derived. For the initial elliptic orbit, the discontinuous point is solved, then the initial true anomaly is obtained by a numerical iterative approach; moreover, an alternative method is proposed to avoid the singularity. There is only one solution for one-tangent-impulse escape trajectory. Then, based on the one-tangent-impulse solution, the minimum-energy multi-tangent-impulse escape trajectory is obtained by a numerical optimization algorithm, e.g., the genetic method. Finally, several examples are provided to validate the proposed method. The numerical results show that the minimum-energy multi-tangent-impulse escape trajectory is the same as the one-tangent-impulse trajectory.

  4. Freezing increment in keratophakia.

    Science.gov (United States)

    Swinger, C A; Wisnicki, H J

    In homoplastic keratomileusis, keratophakia, and epikeratophakia, the corneal tissue that provides the final refractive lenticule undergoes a conformational change when frozen. Because corneal tissue is composed primarily of water, an assumed value of 9.08% (approximate volumic percentage expansion of water when frozen) is frequently used for the increase in thickness, or freezing increment, rather than measuring it directly. We evaluated 32 cases of clinical keratophakia and found the increase in thickness to average 37 +/- 21%. In this series of 32 cases, the percentage of patients with a greater than 4 D residual refractive error was 16%. If an assumed freezing increment of 9.08% had been used, the percentage would have been 28%, with two-thirds of these 28% manifesting a marked undercorrection. Because of a lack of studies documenting the behavior of corneal tissue following cryoprotection and freezing, it is suggested that measurements be taken during homoplastic surgery to minimize the potential for significant inaccuracy in obtaining the desired optic result.

  5. Loss tangent imaging: Theory and simulations of repulsive-mode tapping atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Proksch, Roger [Asylum Research, Santa Barbara, California 93117 (United States); Yablon, Dalia G. [ExxonMobil Research and Engineering, Annandale, New Jersey (United States)

    2012-02-13

    An expression for loss tangent measurement of a surface in amplitude modulation atomic force microscopy is derived using only the cantilever phase and the normalized cantilever amplitude. This provides a direct measurement of substrate compositional information that only requires tuning of the cantilever resonance to provide quantitative information. Furthermore, the loss tangent expression incorporates both the lost and stored energy into one term that represents a fundamental interpretation of the phase signal in amplitude modulation imaging. Numerical solutions of a cantilever tip interacting with a simple Voigt modeled surface agree with the derived loss tangent to within a few percent.

  6. Solution of D dimensional Dirac equation for hyperbolic tangent potential using NU method and its application in material properties

    Energy Technology Data Exchange (ETDEWEB)

    Suparmi, A., E-mail: soeparmi@staff.uns.ac.id; Cari, C., E-mail: cari@staff.uns.ac.id; Pratiwi, B. N., E-mail: namakubetanurpratiwi@gmail.com [Physics Department, Faculty of Mathematics and Science, Sebelas Maret University, Jl. Ir. Sutami 36A Kentingan Surakarta 57126 (Indonesia); Deta, U. A. [Physics Department, Faculty of Science and Mathematics Education and Teacher Training, Surabaya State University, Surabaya (Indonesia)

    2016-02-08

    The analytical solution of the D-dimensional Dirac equation for a hyperbolic tangent potential is investigated using the Nikiforov-Uvarov method. In the case of spin symmetry the D-dimensional Dirac equation reduces to the D-dimensional Schrodinger equation. The D-dimensional relativistic energy spectra are obtained from the D-dimensional relativistic energy eigenvalue equation by using MATLAB software. The corresponding D-dimensional radial wave functions are formulated in the form of generalized Jacobi polynomials. The thermodynamic properties of materials are generated from the non-relativistic energy eigenvalues in the classical limit. In the non-relativistic limit, the relativistic energy equation reduces to the non-relativistic energy. The thermal quantities of the system, the partition function and the specific heat, are expressed in terms of the error function and the imaginary error function, which are numerically calculated using MATLAB software.

  7. Incremental activity modeling in multiple disjoint cameras.

    Science.gov (United States)

    Loy, Chen Change; Xiang, Tao; Gong, Shaogang

    2012-09-01

    Activity modeling and unusual event detection in a network of cameras is challenging, particularly when the camera views are not overlapped. We show that it is possible to detect unusual events in multiple disjoint cameras as context-incoherent patterns through incremental learning of time delayed dependencies between distributed local activities observed within and across camera views. Specifically, we model multicamera activities using a Time Delayed Probabilistic Graphical Model (TD-PGM) with different nodes representing activities in different decomposed regions from different views and the directed links between nodes encoding their time delayed dependencies. To deal with visual context changes, we formulate a novel incremental learning method for modeling time delayed dependencies that change over time. We validate the effectiveness of the proposed approach using a synthetic data set and videos captured from a camera network installed at a busy underground station.

  8. [Incremental peritoneal dialysis - yes].

    Science.gov (United States)

    Neri, Loris

    2012-01-01

    The incremental modality at the start of peritoneal dialysis (Incr-DP) is implicit in the definition of adequacy, which is expressed as the sum of dialysis clearance and renal clearance. Theoretically, it is possible to demonstrate that with a glomerular filtration rate at the start of dialysis of 6 mL/min, the minimum Kt/V target of 1.70 indicated by the current guidelines is easily exceeded with both 2-exchange CAPD (incremental CAPD) and APD with 3 or 4 weekly sessions (Incr-APD), with a daytime icodextrin dwell. The GSDP (Peritoneal Dialysis Study Group) census data suggest that Incr-DP favors the choice of peritoneal dialysis. Although limited to a few studies with a relatively small number of patients, data show that Incr-CAPD is associated with a better quality of life, the achievement of Kt/V targets, and satisfactory ultrafiltration. The clearance of medium molecules is equivalent in Incr-DP and full-dose PD as it depends on the duration of the dwell and not on the number of exchanges. The maintenance of body weight, protein intake and peritoneal permeability may be explained by the lower glucose load with Incr-DP. The preservation of residual renal function is similar to that recorded with full-dose PD, while the peritonitis rate seems to be lower. The favorable results reported in the literature and the indications of the most recent guidelines about the importance of reducing exposure to glucose to a minimum and safeguarding the patient's quality of life, in our opinion, further justify the use of Incr-DP.

  9. Energy increment criteria for evaluation of train derailment

    Institute of Scientific and Technical Information of China (English)

    XIANG Jun; ZENG Qing-yuan

    2005-01-01

    The criteria for the evaluation of train derailment were studied. The evaluation criteria for wheel derailment commonly used worldwide were summarized and their main problems were pointed out. The mechanism of train derailment was expounded on the basis of the system dynamics stability concept, and energy increment criteria were proposed to evaluate train derailment. By applying the criteria, calculated results concerning 6 cases of freight train derailment on a tangent railway line and 6 cases of freight train derailment on bridges were obtained, all of which are in agreement with the practical situations. The safety, comfort and stability results concerning 3 cases of a freight train running on bridges were analyzed. In addition, running speed limits of 50 km/h and 60 km/h were proposed for freight trains on the Yanconggou and Donggou bridges of the Beijing-Tonghua railway line, respectively, and the running speed of freight trains on the Nanjing Yangtze River Bridge can reach 70 km/h.

  10. CMOS VLSI Hyperbolic Tangent Function & its Derivative Circuits for Neuron Implementation

    Directory of Open Access Journals (Sweden)

    Hussein CHIBLE,

    2013-10-01

    Full Text Available The hyperbolic tangent function and its derivative are key elements in analog signal processing and especially in analog VLSI implementations of the neurons of artificial neural networks. The main requirements for these types of circuits are small silicon area and low power consumption. The objective of this paper is to study and design a CMOS VLSI hyperbolic tangent function and its derivative circuit for neural network implementation. A circuit is designed and the results are presented.
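
    As a purely numerical reference for the two functions the analog cell is meant to realize (this is not a description of the circuit itself), the sketch below evaluates the hyperbolic tangent activation and its derivative 1 - tanh(x)^2.

    ```python
    import numpy as np

    def tanh_activation(x):
        """Neuron activation realized by the analog cell."""
        return np.tanh(x)

    def tanh_derivative(x):
        """Derivative needed for learning: d/dx tanh(x) = 1 - tanh(x)**2."""
        t = np.tanh(x)
        return 1.0 - t * t

    x = np.linspace(-3.0, 3.0, 7)
    print(tanh_activation(x))
    print(tanh_derivative(x))
    ```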

  11. Tangent Curve Function Description of Mechanical Behaviour of Bulk Oilseeds: A Review

    Directory of Open Access Journals (Sweden)

    Sigalingging R.

    2015-01-01

    Full Text Available The application of a tangent curve mathematical model for the description of the mechanical behaviour of selected bulk oilseeds, namely jatropha, sunflower, rape, garden pea, and common bean, in linear compression was reviewed. Based on the review analysis, the tangent curve function has been developed using MathCAD 14 software, which employs the Levenberg-Marquardt algorithm for data fitting, optimal for tangent curve approximation. Linear compression parameters including force (N), deformation (mm), energy (J), and/or volume energy (J m-3) can equally be determined by the tangent model. Additionally, the theoretical dependency between force and deformation characteristic curves can be defined by the force coefficient of mechanical behaviour (N) and the deformation coefficient of mechanical behaviour (mm-1) of the tangent model. In conclusion, the review results show that the tangent curve mathematical model, which is dependent on experimental boundary conditions, is potentially useful for the theoretical description of the mechanical properties and deformation characteristics of bulk oilseeds in axial compression.
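
    The fitting step referred to above can be illustrated with a generic tangent-type force-deformation model. The functional form F(x) = A*tan(B*x) and all numbers below are assumptions for demonstration only; curve_fit is used because its unconstrained mode applies the Levenberg-Marquardt algorithm.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def tangent_model(x, A, B):
        """Hypothetical tangent-type model: force [N] as a function of deformation x [mm]."""
        return A * np.tan(B * x)

    # Synthetic force-deformation data generated from the model plus noise.
    rng = np.random.default_rng(1)
    deformation = np.linspace(0.0, 8.0, 40)                  # mm
    force = tangent_model(deformation, 120.0, 0.15) + rng.normal(0.0, 5.0, deformation.size)

    # With no bounds, curve_fit uses the Levenberg-Marquardt algorithm ('lm').
    popt, _ = curve_fit(tangent_model, deformation, force, p0=(100.0, 0.1), method="lm")
    print("force coefficient A       =", popt[0])   # ~120 N
    print("deformation coefficient B =", popt[1])   # ~0.15 mm^-1
    ```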

  12. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of a database back-end and a visualizer front-end into one tightly coupled system. The main aim which we achieve is to reduce the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We...... also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO) which considers visible objects and enables incremental visualization along the observer movement...... visibility ranges and show that considering visibility ranges is crucial when considering incremental visible object extraction....

  13. Incremental Contingency Planning

    Science.gov (United States)

    Dearden, Richard; Meuleau, Nicolas; Ramakrishnan, Sailesh; Smith, David E.; Washington, Rich

    2003-01-01

    There has been considerable work in AI on planning under uncertainty. However, this work generally assumes an extremely simple model of action that does not consider continuous time and resources. These assumptions are not reasonable for a Mars rover, which must cope with uncertainty about the duration of tasks, the energy required, the data storage necessary, and its current position and orientation. In this paper, we outline an approach to generating contingency plans when the sources of uncertainty involve continuous quantities such as time and resources. The approach involves first constructing a "seed" plan, and then incrementally adding contingent branches to this plan in order to improve utility. The challenge is to figure out the best places to insert contingency branches. This requires an estimate of how much utility could be gained by building a contingent branch at any given place in the seed plan. Computing this utility exactly is intractable, but we outline an approximation method that back propagates utility distributions through a graph structure similar to that of a plan graph.

  14. Directed Incremental Symbolic Execution

    Science.gov (United States)

    Person, Suzette; Yang, Guowei; Rungta, Neha; Khurshid, Sarfraz

    2011-01-01

    The last few years have seen a resurgence of interest in the use of symbolic execution -- a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences, and characterize their effects on how the program executes has proved challenging in practice. In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiencies of static analysis techniques to compute program difference information with the precision of symbolic execution to explore program execution paths and generate path conditions affected by the differences. DiSE is a complementary technique to other reduction or bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves -- only the source code for two related program versions is required. A case-study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.

  15. An approach to probabilistic finite element analysis using a mixed-iterative formulation

    Science.gov (United States)

    Dias, J. B.; Nakazawa, S.

    1988-01-01

    An efficient algorithm for computing the response sensitivity of finite element problems based on a mixed-iterative formulation is proposed. This method does not involve explicit differentiation of the tangent stiffness array and can be used with formulations for which a consistent tangent stiffness is not readily available. The method has been successfully applied to probabilistic finite element analysis of problems using the proposed mixed formulation, and this exercise has provided valuable insights regarding the extension of the method to a more general class of problems to include material and geometric nonlinearities.
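
    As a much simpler stand-in for the idea of obtaining response sensitivities without differentiating the tangent stiffness array explicitly (and not the mixed-iterative algorithm itself), the sketch below estimates du/dtheta of a small hypothetical parametrized system by finite differences of repeated solves.

    ```python
    import numpy as np

    def solve_system(theta):
        """Hypothetical parametrized linear system K(theta) u = f standing in for an FE model."""
        K = np.array([[2.0 + theta, -1.0],
                      [-1.0,         2.0]])
        f = np.array([1.0, 0.0])
        return np.linalg.solve(K, f)

    def response_sensitivity(theta, h=1e-6):
        """Central-difference estimate of du/dtheta: no derivative of K is formed explicitly."""
        return (solve_system(theta + h) - solve_system(theta - h)) / (2.0 * h)

    print(response_sensitivity(1.0))
    ```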

  16. Incremental localized boundary-domain integro-differential equations of elastic damage mechanics for inhomogeneous body

    OpenAIRE

    Mikhailov, SE

    2006-01-01

    Copyright @ 2006 Tech Science Press A quasi-static mixed boundary value problem of elastic damage mechanics for a continuously inhomogeneous body is considered. Using the two-operator Green-Betti formula and the fundamental solution of an auxiliary homogeneous linear elasticity with frozen initial, secant or tangent elastic coefficients, a boundary-domain integro-differential formulation of the elasto-plastic problem with respect to the displacement rates and their gradients is derived. Usin...

  17. Backstepping design for incremental stability

    CERN Document Server

    Zamani, Majid

    2010-01-01

    Stability is arguably one of the core concepts upon which our understanding of dynamical and control systems has been built. The related notion of incremental stability, however, has received much less attention until recently, when it was successfully used as a tool for the analysis and design of intrinsic observers, output regulation of nonlinear systems, frequency estimators, synchronization of coupled identical dynamical systems, symbolic models for nonlinear control systems, and bio-molecular systems. However, most of the existing controller design techniques provide controllers enforcing stability rather than incremental stability. Hence, there is a growing need to extend existing methods or develop new ones for the purpose of designing incrementally stabilizing controllers. In this paper, we develop a backstepping design approach for incremental stability. The effectiveness of the proposed method is illustrated by synthesizing a controller rendering a synchronous generator incrementally stable.

  18. Composition Feature of the Element Tangent Stiffness Matrix of Geometrically Nonlinear 2D Frame Structures

    Directory of Open Access Journals (Sweden)

    Romanas Karkauskas

    2011-04-01

    Full Text Available The expressions of the finite element method tangent stiffness matrix of geometrically nonlinear structures are not fully presented in publications; usually only the small-displacement stiffness matrices are given. To solve various problems of structural analysis or design and to determine the mode of the real deflection of a structure, it is necessary to have a fully described analytical expression of the tangent matrix. This paper presents a technique of tangent stiffness matrix generation using the stationary conditions of the total potential energy of a discrete body, considering a geometrically nonlinear 2D frame element and taking account of inter-element interaction forces only. The derivative of the obtained internal force vector-function with respect to the nodal displacements is the tangent stiffness matrix. The analytical expressions, in terms of nodal displacements, of the matrices forming the 2D frame element tangent stiffness matrix are presented in the article. The suggested methodology has been checked by performing symbolic calculations in MATLAB. The analytical expression of the stiffness matrix has been obtained. Article in Lithuanian
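
    The identification of the tangent stiffness with the derivative of the internal force vector suggests a standard numerical check: a finite-difference Jacobian of that vector function. The sketch below uses a hypothetical internal force vector, not the 2D frame element expressions derived in the article.

    ```python
    import numpy as np

    def internal_forces(u):
        """Hypothetical geometrically nonlinear internal force vector q(u)."""
        return np.array([2.0 * u[0] + 0.5 * u[0]**2 - u[1],
                         -u[0] + 3.0 * u[1] + u[1]**3])

    def numerical_tangent(q, u, h=1e-7):
        """Finite-difference Jacobian dq/du, i.e. the tangent stiffness matrix at u."""
        n = u.size
        K = np.zeros((n, n))
        for j in range(n):
            du = np.zeros(n)
            du[j] = h
            K[:, j] = (q(u + du) - q(u - du)) / (2.0 * h)
        return K

    u = np.array([0.1, 0.2])
    print(numerical_tangent(internal_forces, u))
    ```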

  19. Nondestructive relative permittivity and loss tangent measurements using a split-cylinder resonator

    Science.gov (United States)

    Janezic, Michael Daniel

    To keep pace with the expanding wireless and electronics industries, manufacturers are developing innovative materials for improving system performance, and there is a critical need to accurately characterize the electrical properties of these new materials at microwave frequencies. To address this need, this thesis develops a nondestructive method for measuring the relative permittivity and loss tangent of dielectric substrates using a split-cylinder resonator. Three theoretical models for the split-cylinder resonator are derived using mode-matching, least-squares boundary residual, and Hankel-transform methods, from which one can calculate the relative permittivity and loss tangent of a dielectric substrate from measurements of the split-cylinder resonator's TE0np resonant frequency and quality factor. Each of these models has several advantages over previously published models. First, the accuracy of the relative permittivity measurement is increased because each model accurately models the fringing fields that extend beyond the cylindrical-cavity sections. Second, to increase the accuracy of the loss tangent measurement, each model accurately separates the conductive metal losses of the split-cylinder resonator from the dielectric losses of the substrate. Finally, in contrast to previous models for the split-cylinder resonator that use only the TE011 resonant mode, each of the new models include the higher-order TE0np resonant modes, thereby broadening the frequency range over which one can make relative permittivity and loss tangent measurements. In a comparison of the three models, the mode-matching method was found to be superior on the basis of measurement accuracy and computational speed. Relative permittivity and loss tangent measurements for several dielectric materials are performed using a split-cylinder resonator and are in good agreement with measurements made using a circular-cylindrical cavity, split-post resonator, and dielectric post resonator

  20. Errors in estimating volume increments of forest trees

    Directory of Open Access Journals (Sweden)

    Magnani F

    2014-02-01

    Full Text Available Errors in estimating volume increments of forest trees. Periodic tree and stand increments are often estimated retrospectively from measurements of diameter and height growth of standing trees, through the application of various simplifications of the general formula for volume increment rates. In particular, the Hellrigl method and its various formulations have been often suggested in Italy. Like other retrospective approaches, the Hellrigl method is affected by a systematic error, resulting from the assumption as a reference term of conditions at one of the extremes of the period considered. The magnitude of the error introduced by different formulations has been assessed in the present study through their application to mensurational and increment measurements from the detailed growth analysis of 107 Picea abies trees. Results are compared with those obtained with a new equation, which makes reference to the interval mid-point. The newly proposed method makes it possible to drastically reduce the error in the estimate of periodic tree increments, and especially its systematic component. This appears of particular relevance for stand- and national level applications.
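
    The bias described above can be illustrated with a simple numerical example (the Hellrigl formulations themselves are not reproduced here): the same absolute volume increment yields different relative increments depending on whether it is referenced to the start, the end, or the mid-point of the period.

    ```python
    # Stem volumes [m^3] at the two ends of the period (illustrative numbers).
    v_start, v_end = 0.80, 1.00

    abs_increment = v_end - v_start                           # periodic volume increment [m^3]
    rel_to_start = abs_increment / v_start                    # referenced to the initial state
    rel_to_end = abs_increment / v_end                        # referenced to the final state
    rel_to_mid = abs_increment / ((v_start + v_end) / 2.0)    # referenced to the mid-point

    print(f"relative increment vs. start:     {rel_to_start:.3f}")   # 0.250
    print(f"relative increment vs. end:       {rel_to_end:.3f}")     # 0.200
    print(f"relative increment vs. mid-point: {rel_to_mid:.3f}")     # 0.222
    ```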

  1. Powers of the space forms curvature operator and geodesics of the tangent bundle

    OpenAIRE

    Saharova, Yelena; Yampolsky, Alexander

    2005-01-01

    It is well-known that if a curve is a geodesic line of the tangent (sphere) bundle with Sasaki metric of a locally symmetric Riemannian manifold then the projected curve has all its geodesic curvatures constant. In this paper we consider the case of tangent (sphere) bundle over the real, complex and quaternionic space form and give a unified proof of the following property: all geodesic curvatures of projected curve are zero starting from k_3,k_6 and k_{10} for the real, complex and quaternio...

  2. The Mathematical Analysis for Peristaltic Flow of Hyperbolic Tangent Fluid in a Curved Channel

    Institute of Scientific and Technical Information of China (English)

    S.Nadeem; E.N.Maraj

    2013-01-01

    In the present paper, we have investigated the peristaltic flow of a hyperbolic tangent fluid in a curved channel. The governing equations of the hyperbolic tangent fluid model for a curved channel are derived including the effects of curvature. The highly nonlinear partial differential equations are simplified by using the wave frame transformation, long wavelength and low Reynolds number assumptions. The reduced nonlinear partial differential equation is solved analytically with the help of the homotopy perturbation method (HPM). The physical features of pertinent parameters have been discussed by plotting the graphs of pressure rise and stream functions.

  3. Tangent Resistance of Soil on Moldboard and the Mechanism of Resistance Reduction of Bionic Moldboard

    Institute of Scientific and Technical Information of China (English)

    Deng Shi-qiao; Ren Lu-quan; Liu Yan; Han Zhi-wu

    2005-01-01

    The tangent resistance on the soil-moldboard interface is an important component of the resistance to moving soil. We developed simplified mechanical models to analyze this resistance. We found that it is composed of two components, the frictional and adhesive resistances. These two components originate from the soil pores, which induce a capillary suction effect, and from the soil-moldboard contact area, which produces tangent adhesive resistance. The two components vary differently with soil moisture. Thus we predict that the resistance reduction of soil exerted on the non-smooth bionic moldboard is mainly due to the elimination of capillary suction and the reduction of the physical-chemical adsorption of soil.

  4. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling the direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  5. Algebraic formulation of higher gauge theory

    Science.gov (United States)

    Zucchini, Roberto

    2017-06-01

    In this paper, we present a purely algebraic formulation of higher gauge theory and gauged sigma models based on the abstract theory of graded commutative algebras and their morphisms. The formulation incorporates naturally Becchi-Rouet-Stora-Tyutin (BRST) symmetry and is also suitable for Alexandrov-Kontsevich-Schwarz-Zaboronsky (AKSZ) type constructions. It is also shown that for a full-fledged Batalin-Vilkovisky formulation including ghost degrees of freedom, higher gauge and gauged sigma model fields must be viewed as internal smooth functions on the shifted tangent bundle of a space-time manifold valued in a shifted L∞-algebroid encoding symmetry. The relationship to other formulations where the L∞-algebroid arises from a higher Lie groupoid by Lie differentiation is highlighted.

  6. Interaction of Tangent Conormal Waves for Higher-Order Nonlinear Strictly Hyperbolic Equations

    Institute of Scientific and Technical Information of China (English)

    尹会成; 仇庆久

    1994-01-01

    In this paper we deal with the interaction of three conormal waves for a class of third-order nonlinear strictly hyperbolic equations, in which two conormal waves are tangent. By the same argument, we may also discuss the similar problem for equation system of compressible fluid flow and obtain similar conclusions.

  7. Practical loss tangent imaging with amplitude-modulated atomic force microscopy

    Science.gov (United States)

    Proksch, Roger; Kocun, Marta; Hurley, Donna; Viani, Mario; Labuda, Aleks; Meinhold, Waiman; Bemis, Jason

    2016-04-01

    Amplitude-modulated (AM) atomic force microscopy (AFM), also known as tapping or AC mode, is a proven, reliable, and gentle imaging method with widespread applications. Previously, the contrast in AM-AFM has been difficult to quantify. AFM loss tangent imaging is a recently introduced technique that recasts AM mode phase imaging into a single term tan δ that includes both the dissipated and stored energy of the tip-sample interaction. It promises fast, versatile mapping of variations in near-surface viscoelastic properties. However, experiments to date have generally obtained values larger than expected for the viscoelastic loss tangent of materials. Here, we explore and discuss several practical considerations for AFM loss tangent imaging experiments. A frequent limitation to tapping in air is Brownian (thermal) motion of the cantilever. This fundamental noise source limits the accuracy of loss tangent estimation to approximately 0.01 phase transitions, even in the presence of such non-ideal interactions. These results help understand the limits and opportunities not only of this particular technique but also of AM mode with phase imaging in general.

  8. On Infinitesimal Conformal Transformations of the Tangent Bundles with the Synectic Lift of a Riemannian Metric

    Indian Academy of Sciences (India)

    Aydin Gezer

    2009-06-01

    The purpose of the present article is to investigate some relations between the Lie algebra of the infinitesimal fibre-preserving conformal transformations of the tangent bundle of a Riemannian manifold with respect to the synectic lift of the metric tensor and the Lie algebra of infinitesimal projective transformations of the Riemannian manifold itself.

  9. The isotropic-nematic phase transition of tangent hard-sphere chain fluids—Pure components

    NARCIS (Netherlands)

    Van Westen, T.; Oyarzun, B.; Vlugt, T.J.H.; Gross, J.

    2013-01-01

    An extension of Onsager's second virial theory is developed to describe the isotropic-nematic phase transition of tangent hard-sphere chain fluids. Flexibility is introduced by the rod-coil model. The effect of chain-flexibility on the second virial coefficient is described using an accurate, analyt

  10. Examining the Efficiency of Models Using Tangent Coordinates or Principal Component Scores in Allometry Studies.

    Science.gov (United States)

    Sigirli, Deniz; Ercan, Ilker

    2015-09-01

    Most studies in the medical and biological sciences are related to the examination of the geometrical properties of an organ or organism. Growth and allometry studies are important for investigating the effects of diseases and environmental factors on the structure of the organ or organism. Thus, statistical shape analysis has recently become more important in the medical and biological sciences. Shape is all the geometrical information that remains when location, scale and rotational effects are removed from an object. Allometry, which is a relationship between size and shape, plays an important role in the development of statistical shape analysis. The aim of the present study was to compare two different models for allometry which include tangent coordinates or principal component scores of tangent coordinates as dependent variables in multivariate regression analysis. The results of the simulation study showed that the model constructed by taking tangent coordinates as dependent variables is more appropriate than the model constructed by taking principal component scores of tangent coordinates as dependent variables, for all sample sizes.

  11. Scattering solutions of the Klein-Gordon equation for a step potential with hyperbolic tangent potential

    Science.gov (United States)

    Rojas, Clara

    2014-09-01

    We solve the Klein-Gordon equation for a step potential with hyperbolic tangent potential. The scattering solutions are derived in terms of hypergeometric functions. The reflection coefficient R and the transmission coefficient T are calculated, and we observe superradiance and transmission resonances.

  12. 75 FR 13614 - In the Matter of Talisman Enterprises, Inc., Tangent Solutions, Inc., Telepanel Systems, Inc...

    Science.gov (United States)

    2010-03-22

    ... From the Federal Register Online via the Government Publishing Office SECURITIES AND EXCHANGE COMMISSION In the Matter of Talisman Enterprises, Inc., Tangent Solutions, Inc., Telepanel Systems, Inc... there is a lack of current and accurate information concerning the securities of Talisman Enterprises...

  13. Fibonacci polynomials, generalized Stirling numbers, and Bernoulli, Genocchi and tangent numbers

    CERN Document Server

    Cigler, Johann

    2011-01-01

    We study matrices which transform the sequence of Fibonacci or Lucas polynomials with even index to those with odd index and vice versa. They turn out to be intimately related to generalized Stirling numbers and to Bernoulli, Genocchi and tangent numbers and give rise to various identities between these numbers. There is also a close connection with the Akiyama-Tanigawa algorithm.

  14. Enabling Incremental Query Re-Optimization

    Science.gov (United States)

    Liu, Mengmeng; Ives, Zachary G.; Loo, Boon Thau

    2017-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations. PMID:28659658

  15. Enabling Incremental Query Re-Optimization.

    Science.gov (United States)

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.

  16. A combined Preisach–Hyperbolic Tangent model for magnetic hysteresis of Terfenol-D

    Energy Technology Data Exchange (ETDEWEB)

    Talebian, Soheil [Department of Mechanical Engineering, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Hojjat, Yousef, E-mail: yhojjat@modares.ac.ir [Department of Mechanical Engineering, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Ghodsi, Mojtaba [Department of Mechanical and Industrial Engineering, Sultan Qaboos University, Muscat (Oman); Karafi, Mohammad Reza [Department of Mechanical Engineering, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Mirzamohammadi, Shahed [Department of Mechanical Engineering, Shahid Rajaee University, Tehran (Iran, Islamic Republic of)

    2015-12-15

    This study presents a new model using the combination of Preisach and Hyperbolic Tangent models, to predict the magnetic hysteresis of Terfenol-D at different frequencies. Initially, a proper experimental setup was fabricated and used to obtain different magnetic hysteresis curves of Terfenol-D; such as major, minor and reversal loops. Then, it was shown that the Hyperbolic Tangent model is precisely capable of modeling the magnetic hysteresis of the Terfenol-D for both rate-independent and rate-dependent cases. Empirical equations were proposed with respect to magnetic field frequency which can calculate the non-dimensional coefficients needed by the model. These empirical equations were validated at new frequencies of 100 Hz and 300 Hz. Finally, the new model was developed through the combination of Preisach and Hyperbolic Tangent models. In the combined model, analytical relations of the Hyperbolic Tangent model for the first order reversal loops determined the weighting function of the Preisach model. This model reduces the required experiments and errors due to numerical differentiations generally needed for characterization of the Preisach function. In addition, it can predict the rate-dependent hysteresis as well as rate-independent hysteresis. - Highlights: • Different hysteresis curves of Terfenol-D are experimentally obtained at 0–200 Hz. • A new model is presented using combination of Preisach and Hyperbolic Tangent models. • The model predicts both rate-independent and rate-dependent hystereses of Terfenol-D. • The analytical model reduces the numerical errors and number of required experiments.
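
    A tanh-based description of the two major-loop branches, in the spirit of hyperbolic tangent hysteresis models, can be sketched as below; the parametrization and all coefficients are illustrative assumptions and may differ from the combined Preisach-Hyperbolic Tangent model identified for Terfenol-D in the paper.

    ```python
    import numpy as np

    def ascending_branch(H, B_s=1.0, H_c=20.0, c=50.0):
        """Major-loop branch for increasing field H (saturation B_s, coercive field H_c, shape c)."""
        return B_s * np.tanh((H - H_c) / c)

    def descending_branch(H, B_s=1.0, H_c=20.0, c=50.0):
        """Major-loop branch for decreasing field H."""
        return B_s * np.tanh((H + H_c) / c)

    H = np.linspace(-200.0, 200.0, 5)
    print(ascending_branch(H))
    print(descending_branch(H))
    ```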

  17. 18 CFR 154.309 - Incremental expansions.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Incremental expansions... Changes § 154.309 Incremental expansions. (a) For every expansion for which incremental rates are charged... incremental facilities to be rolled-in to the pipeline's rates. For every expansion that has an at-risk...

  18. Incremental Observer Relative Data Extraction

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    2004-01-01

    -tree, is used to index visibility ranges of objects. We introduce a new operator for incremental Observer Relative data Extraction (iORDE). We propose the Volatile Access STructure (VAST), a lightweight main memory structure that is created on the fly and is maintained during visual data explorations. VAST...

  19. Incremental Trust in Grid Computing

    DEFF Research Database (Denmark)

    Brinkløv, Michael Hvalsøe; Sharp, Robin

    2007-01-01

    This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and ...... of Grid computing systems....

  20. Application of Incremental Sheet Forming

    Directory of Open Access Journals (Sweden)

    Karbowski Krzysztof

    2015-12-01

    Full Text Available This paper describes some manufacturing aspects and an example of application of the Incremental Sheet Forming (ISF) technology, which was used for the production of a craniofacial prosthesis. A brief description of the prosthesis design is presented as well. The main topic of the paper is a comparison of milling and ISF technologies for preparing the tools for prosthesis thermoforming.

  1. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of database back-end and visualizer front-end into a one tightly coupled system. The main aim which we achieve is to reduce the data pipeline from database to visualization by using incremental data extraction of visible objects in a fly-through scenarios. We...

  2. Evolution of cooperation driven by incremental learning

    Science.gov (United States)

    Li, Pei; Duan, Haibin

    2015-02-01

    It has been shown that the details of microscopic rules in structured populations can have a crucial impact on the ultimate outcome of evolutionary games. Alternative formulations of strategies and their revision processes, exploring how strategies are actually adopted and spread within the interaction network, therefore need to be studied. In the present work, we formulate the strategy update rule as an incremental learning process, wherein knowledge is refreshed according to one's own experience learned from the past (self-learning) and that gained from social interaction (social-learning). More precisely, we propose a continuous version of strategy update rules, by introducing the willingness to cooperate W, to better capture the flexibility of decision making behavior. Importantly, the newly gained knowledge, including self-learning and social learning, is weighted by the parameter ω, establishing a strategy update rule involving an innovative element. Moreover, we quantify the macroscopic features of the emerging patterns to inspect the underlying mechanisms of the evolutionary process using six cluster characteristics. In order to further support our results, we examine the time evolution course for these characteristics. Our results might provide insights for understanding cooperative behaviors and have several important implications for understanding how individuals adjust their strategies under real-life conditions.

  3. A Note on Some Metrics on Tangent Bundles and Unit Tangent Sphere Bundles

    Institute of Scientific and Technical Information of China (English)

    李兴校; 齐学荣

    2008-01-01

    In this paper we study a class of metrics with some compatible almost complex structures on the tangent bundle TM of a Riemannian manifold (M, g), which are parallel to those in [10]. These metrics generalize the classical Sasaki metric and Cheeger-Gromoll metric. We prove that the tangent bundle TM endowed with each pair of the above metrics and the corresponding almost complex structures is a locally conformal almost Kähler manifold. We also find that, when restricted to the unit tangent sphere bundle, these metrics and corresponding almost complex structures define new examples of contact metric structures.

  4. An algorithm for Path planning with polygon obstacles avoidance based on the virtual circle tangents

    Directory of Open Access Journals (Sweden)

    Zahraa Y. Ibrahim

    2016-12-01

    Full Text Available In this paper, a new algorithm called the virtual circle tangents is introduced for mobile robot navigation in an environment with polygonal obstacles. The algorithm relies on representing the polygonal obstacles by virtual circles, and then all the possible trajectories from source to target are constructed by computing the visible tangents between the robot and the virtual circle obstacles. A new method for searching the shortest path from source to target is suggested. Two simulation states are considered: an off-line state and an on-line state. The introduced method is compared with two other algorithms to study its performance.
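
    The basic geometric step behind the construction of candidate trajectories is finding the tangent points from the robot position to a virtual circle. The sketch below implements that standard construction; it is not the authors' code and omits the visibility tests between circles.

    ```python
    import math

    def tangent_points(px, py, cx, cy, r):
        """Return the two points where lines through (px, py) touch the circle with
        centre (cx, cy) and radius r."""
        dx, dy = px - cx, py - cy
        d = math.hypot(dx, dy)
        if d <= r:
            raise ValueError("point lies inside or on the circle; no tangent exists")
        base = math.atan2(dy, dx)      # direction of the point as seen from the centre
        half = math.acos(r / d)        # angle between that direction and each tangent point
        p1 = (cx + r * math.cos(base + half), cy + r * math.sin(base + half))
        p2 = (cx + r * math.cos(base - half), cy + r * math.sin(base - half))
        return p1, p2

    # Robot at (5, 0), virtual circle of radius 2 centred at the origin.
    print(tangent_points(5.0, 0.0, 0.0, 0.0, 2.0))   # ((0.8, 1.833...), (0.8, -1.833...))
    ```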

  5. Tangent unit-vector fields: Nonabelian homotopy invariants and the Dirichlet energy

    KAUST Repository

    Majumdar, Apala

    2009-10-01

    Let O be a closed geodesic polygon in S2. Maps from O into S2 are said to satisfy tangent boundary conditions if the edges of O are mapped into the geodesics which contain them. Taking O to be an octant of S2, we evaluate the infimum Dirichlet energy, E (H), for continuous tangent maps of arbitrary homotopy type H. The expression for E (H) involves a topological invariant - the spelling length - associated with the (nonabelian) fundamental group of the n-times punctured two-sphere, π1 (S2 - {s1, ..., sn}, *). These results have applications for the theoretical modelling of nematic liquid crystal devices. To cite this article: A. Majumdar et al., C. R. Acad. Sci. Paris, Ser. I 347 (2009). © 2009 Académie des sciences.

  6. Improved Generalization in Recurrent Neural Networks Using the Tangent Plane Algorithm

    Directory of Open Access Journals (Sweden)

    P May

    2014-01-01

    Full Text Available The tangent plane algorithm for real time recurrent learning (TPA-RTRL is an effective online training method for fully recurrent neural networks. TPA-RTRL uses the method of approaching tangent planes to accelerate the learning processes. Compared to the original gradient descent real time recurrent learning algorithm (GD-RTRL it is very fast and avoids problems like local minima of the search space. However, the TPA-RTRL algorithm actively encourages the formation of large weight values that can be harmful to generalization. This paper presents a new TPA-RTRL variant that encourages small weight values to decay to zero by using a weight elimination procedure built into the geometry of the algorithm. Experimental results show that the new algorithm gives good generalization over a range of network sizes whilst retaining the fast convergence speed of the TPA-RTRL algorithm.

  7. Effects of nanoparticles on the peristaltic motion of tangent hyperbolic fluid model in an annulus

    Directory of Open Access Journals (Sweden)

    S. Nadeem

    2015-12-01

    Full Text Available In the present article, effects of nanoparticles on the peristaltic flow of tangent hyperbolic fluid in an annulus are described. The two-dimensional equations of tangent hyperbolic fluid are solved by using the assumptions of low Reynolds number and long wavelength. Analytical solution is obtained with the help of homotopy perturbation and Adomian decomposition method for velocity, temperature and nanoparticles concentration. Solutions are discussed through graphs. Solutions for pressure rise, temperature, nanoparticles concentration, pressure gradient and streamlines are plotted for various emerging parameters. It is found that the temperature profile increases with increase in Brownian motion and thermophoresis parameter. It is also found that the size of the trapped bolus in triangular wave is smaller as compared to other waves. Further, the comparison of both analytical solutions is presented.

  8. Reductions of locally conformal symplectic structures and de Rham cohomology tangent to a foliation

    CERN Document Server

    Domitrz, Wojciech

    2008-01-01

    We propose a procedure of reduction of a locally conformal symplectic structure. This procedure can be applied to a wide class of submanifolds. There are no local obstructions for this procedure, but there are global obstructions. We find a necessary and sufficient condition for this reduction to hold in terms of a special kind of de Rham cohomology class (tangent to the characteristic foliation) of the Lee form.

  9. MHD flow of tangent hyperbolic fluid over a stretching cylinder: Using Keller box method

    Energy Technology Data Exchange (ETDEWEB)

    Malik, M.Y.; Salahuddin, T., E-mail: taimoor_salahuddin@yahoo.com; Hussain, Arif; Bilal, S.

    2015-12-01

    A numerical solution of MHD flow of tangent hyperbolic fluid model over a stretching cylinder is obtained in this paper. The governing boundary layer equation of tangent hyperbolic fluid is converted into an ordinary differential equation using similarity transformations, which is then solved numerically by applying the implicit finite difference Keller box method. The effects of various parameters on velocity profiles are analyzed and discussed in detail. The values of skin friction coefficient are tabulated and plotted in order to understand the flow behavior near the surface of the cylinder. For validity of the model a comparison of the present work with the literature has been made. - Highlights: • Non-Newtonian (tangent hyperbolic) fluid is taken by using boundary layer approximation. • MHD effects are assumed. • To solve the highly non-linear equations by numerical approach (Keller box Method). • Keller box method is one of the best computational methods capable of solving different engineering problems in fluid mechanics. • Keller box method is an implicit method and has truncation error of order h².

  10. The intrinsic geometry of the osculating structures that underlie the Heisenberg calculus (or Why the tangent space in sub-Riemannian geometry is a group)

    CERN Document Server

    van Erp, Erik

    2010-01-01

    We explore the geometry that underlies the osculating structures of the Heisenberg calculus. For a smooth manifold M with a distribution H in TM analysts have developed explicit (and rather complicated) coordinate formulas to define the nilpotent groups that are central to the calculus. Our aim is, specifically, to gain insight in the intrinsic structures that underlie these coordinate formulas. There are two key ideas. First, we construct a certain generalization of the notion of tangent vectors, called "parabolic arrows", involving a mix of first and second order derivatives. Parabolic arrows are the natural elements for the nilpotent groups of the osculating structure. Secondly, we formulate the natural notion of exponential map for the fiber bundle of parabolic arrows, and show that it explains the coordinate formulas of osculating structures. The result is a conceptual simplification and unification of the treatment of the Heisenberg calculus found in the analytic literature. As a bonus we obtain insight...

  11. Incremental multiple objective genetic algorithms.

    Science.gov (United States)

    Chen, Qian; Guan, Sheng-Uei

    2004-06-01

    This paper presents a new genetic algorithm approach to multiobjective optimization problems--incremental multiple objective genetic algorithms (IMOGA). Different from conventional MOGA methods, it takes each objective into consideration incrementally. The whole evolution is divided into as many phases as the number of objectives, and one more objective is considered in each phase. Each phase is composed of two stages. First, an independent population is evolved to optimize one specific objective. Second, the better-performing individuals from the single-objective population evolved in the above stage and the multiobjective population evolved in the last phase are joined together by the operation of integration. The resulting population then becomes an initial multiobjective population, to which a multiobjective evolution based on the incremented objective set is applied. The experiment results show that, in most problems, the performance of IMOGA is better than that of three other MOGAs, NSGA-II, SPEA, and PAES. IMOGA can find more solutions during the same time span, and the quality of solutions is better.

  12. Applicability of the mα-tangent Method to Estimate Plastic Limit Loads of Elbows and Branch Junctions

    Energy Technology Data Exchange (ETDEWEB)

    Gim, Jae-Min; Kim, Sang-Hyun; Bae, Kyung-Dong; Kim, Yun-Jae [Korea Univ., Seoul (Korea, Republic of); Kim, Jong-Sung [Sejong Univ., Seoul (Korea, Republic of)

    2017-06-15

    In this study, the limit loads calculated by the mα-tangent method, based on linear finite element analysis, are compared with the closed-form solutions proposed by various authors. The objects of the analysis are an elbow and a branch junction, which are representative structures of piping systems. The applicability of the mα-tangent method is investigated by applying it to cases with various geometries. Internal pressure and in-plane bending moment are considered. For elbows, the mα-tangent method is in good agreement with the existing solutions. However, the limit loads calculated by the mα-tangent method for branch junctions do not agree well with the existing solutions and show no clear tendency; the results are biased by the stress concentration at the geometric discontinuities.

  13. 14 CFR 1274.918 - Incremental funding.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Incremental funding. 1274.918 Section 1274... COMMERCIAL FIRMS Other Provisions and Special Conditions § 1274.918 Incremental funding. Incremental Funding... Agreement, as required, until it is fully funded. Any work beyond the funding limit will be at the...

  14. 14 CFR 1260.53 - Incremental funding.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Incremental funding. 1260.53 Section 1260.53 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION GRANTS AND COOPERATIVE AGREEMENTS General Special Conditions § 1260.53 Incremental funding. Incremental Funding October 2000...

  15. Nuevos enfoques en aprendizaje incremental

    OpenAIRE

    del Campo-Ávila, José

    2007-01-01

    Nowadays the volume of data generated in different domains is very large, to the point of being difficult even to store. Performing machine learning tasks on such amounts of information is driving the need for new algorithms. This thesis presents several contributions in the field of incremental learning, aimed fundamentally at improving it through algorithms based on concentration bounds and on multi-classifier systems.

  16. Incremental deformation: A literature review

    Directory of Open Access Journals (Sweden)

    Nasulea Daniel

    2017-01-01

    Full Text Available Nowadays customer requirements are permanently changing and, accordingly, the tendency in modern industry is to implement flexible manufacturing processes. In the last decades metal forming has gained the attention of researchers and considerable changes have occurred. Because, for small batches of parts, conventional metal forming processes are expensive and time-consuming in terms of design and manufacturing preparation, manufacturers and researchers have become interested in flexible processes. One of the most investigated flexible processes in metal forming is incremental sheet forming (ISF). ISF is an advanced flexible manufacturing process which allows complex 3D products to be manufactured without expensive dedicated tools. In most cases an ISF process needs only a simple tool, a fixing device for the sheet metal blank and a universal CNC machine. Using this process, axisymmetric parts can be manufactured, usually on a CNC lathe, as well as complex asymmetric parts using CNC milling machines, robots or dedicated equipment. This paper aims to present the current status of incremental sheet forming technologies in terms of process parameters and their influences, wall thickness distribution, springback effect, formability, surface quality and the current main research directions.

  17. A New Incremental Support Vector Machine Algorithm

    Directory of Open Access Journals (Sweden)

    Wenjuan Zhao

    2012-10-01

    Full Text Available The support vector machine is a popular method in machine learning. Incremental support vector machine algorithms are an ideal choice when facing large learning data sets. In this paper a new incremental support vector machine learning algorithm is proposed to improve the efficiency of large-scale data processing. The model of this incremental learning algorithm is similar to the standard support vector machine. The goal concept is updated by incremental learning, and each training procedure only includes the new training data, so the time complexity is independent of the whole training set. Compared with other incremental versions, the training speed of this approach is improved and the change of the hyperplane is reduced.
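
    The abstract does not give the update equations, so the snippet below is only a hedged analogue of the general idea: a linear SVM-style classifier trained incrementally so that each update touches only the newly arrived chunk of data. It uses scikit-learn's SGDClassifier with hinge loss, not the paper's algorithm, and the data are simulated.

```python
# Hedged analogue (not the paper's algorithm): chunk-by-chunk training of a linear
# SVM-style classifier, where each update only sees the new batch of data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="hinge", alpha=1e-4)      # hinge loss ~ linear SVM objective
classes = np.array([0, 1])

for chunk in range(10):                            # data arriving in 10 batches
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    # classes must be declared on the first call only
    clf.partial_fit(X, y, classes=classes if chunk == 0 else None)

X_test = rng.normal(size=(500, 5))
y_test = (X_test[:, 0] + 0.5 * X_test[:, 1] > 0).astype(int)
print("held-out accuracy:", clf.score(X_test, y_test))
```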

  18. Uniform B-Spline Curve Interpolation with Prescribed Tangent and Curvature Vectors.

    Science.gov (United States)

    Okaniwa, Shoichi; Nasri, Ahmad; Lin, Hongwei; Abbas, Abdulwahed; Kineri, Yuki; Maekawa, Takashi

    2012-09-01

    This paper presents a geometric algorithm for the generation of uniform cubic B-spline curves interpolating a sequence of data points under tangent and curvature vectors constraints. To satisfy these constraints, knot insertion is used to generate additional control points which are progressively repositioned using corresponding geometric rules. Compared to existing schemes, our approach is capable of handling plane as well as space curves, has local control, and avoids the solution of the typical linear system. The effectiveness of the proposed algorithm is illustrated through several comparative examples. Applications of the method in NC machining and shape design are also outlined.
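
    For readers who want to experiment with the underlying interpolation problem, the sketch below shows the conventional linear-system route that the paper's geometric, knot-insertion-based algorithm is designed to avoid: a quintic spline through planar data points with prescribed first- and second-derivative (tangent and curvature-related) vectors at the ends, built with SciPy. The data points and end vectors are made up for illustration.

```python
# Hedged illustration (not the paper's geometric scheme): spline interpolation with
# prescribed end tangent and second-derivative vectors via SciPy's standard constructor.
# A quintic spline is used so that two derivative conditions fit at each end.
import numpy as np
from scipy.interpolate import make_interp_spline

t = np.linspace(0.0, 1.0, 6)                                  # parameter values of the data points
pts = np.array([[0, 0], [1, 1], [2, 0], [3, -1], [4, 0], [5, 1]], dtype=float)

tan0, tan1 = np.array([1.0, 2.0]), np.array([1.0, 2.0])       # end tangent vectors (assumed)
cur0, cur1 = np.array([0.0, -1.0]), np.array([0.0, 1.0])      # end 2nd-derivative vectors (assumed)

spline = make_interp_spline(
    t, pts, k=5,
    bc_type=([(1, tan0), (2, cur0)], [(1, tan1), (2, cur1)]))

dense = spline(np.linspace(0, 1, 200))            # points on the interpolating curve
print(spline(0.0, nu=1), "should equal", tan0)    # check the prescribed end tangent
```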

  19. Analyses of Creepages and Their Sensitivities for a Single Wheelset Moving on a Tangent Track

    Institute of Scientific and Technical Information of China (English)

    Jin Xuesong; Zhang Weihua

    1996-01-01

    Creep forces depend greatly on the creepages in the contact area formed between wheel and rail. The creepages are completely determined by the state of a wheelset moving on a track. In this paper the contact state of a single rigid wheelset moving on a tangent rigid rail, the creepages and their sensitivities to some parameters of the contact geometry are analyzed by semi-analytical and numerical methods, respectively. Some important insights are provided for studies of the interactions between wheels and rails at high speed.

  20. Application of geometric dimensioning and tolerancing for sharp corner and tangent contact lens seats

    Science.gov (United States)

    Hopkins, C. L.; Burge, J. H.

    2011-10-01

    This paper outlines methods for dimensioning and tolerancing lens seats that mate with spherical lens surfaces. The two types of seats investigated are sharp corner and tangent contact. The goal is to be able to identify which seat dimensions influence lens tilt and displacement and develop a quantifiable way to assign tolerances to those dimensions to meet tilt and displacement requirements. After looking at individual seats, methods are then applied to multiple lenses with examples. All geometric dimensioning and tolerancing is according to ASME Y14.5M - 1994.

  1. Inspiration of induced magnetic field on nano hyperbolic tangent fluid in a curved channel

    Science.gov (United States)

    Nadeem, S.; Shahzadi, Iqra

    2016-01-01

    In this research, the peristaltic flow of a nano hyperbolic tangent fluid is investigated in a curved channel. The model used for the nanofluid includes the effects of thermophoresis and Brownian motion. The resulting equations are assembled in the wave frame of reference under the effects of curvature. The influence of the induced magnetic field is studied. The long wavelength and low Reynolds number assumptions are adopted. The travelling wave front of the peristaltic flow is chosen sinusoidal (extension/reduction). Analytical solutions are computed by the homotopy perturbation method. Results for the substantial quantities are explained with particular attention to rheological aspects.

  2. Inspiration of induced magnetic field on nano hyperbolic tangent fluid in a curved channel

    Directory of Open Access Journals (Sweden)

    S. Nadeem

    2016-01-01

    Full Text Available In this research, the peristaltic flow of a nano hyperbolic tangent fluid is investigated in a curved channel. The model used for the nanofluid includes the effects of thermophoresis and Brownian motion. The resulting equations are assembled in the wave frame of reference under the effects of curvature. The influence of the induced magnetic field is studied. The long wavelength and low Reynolds number assumptions are adopted. The travelling wave front of the peristaltic flow is chosen sinusoidal (extension/reduction). Analytical solutions are computed by the homotopy perturbation method. Results for the substantial quantities are explained with particular attention to rheological aspects.

  3. Elasto-viscoplastic consistent tangent operator concept-based implicit boundary element methods

    Institute of Scientific and Technical Information of China (English)

    刘勇; 梁利华; GlaucioH.Paulino

    2000-01-01

    An elasto-viscoplastic consistent tangent operator (CTO) concept-based implicit algorithm for nonlinear boundary element methods is presented. Both kinematic and isotropic strain hardening are considered. The elasto-viscoplastic radial return algorithm (RRA) and the elasto-viscoplastic CTO and its related scheme are developed. In addition, the limit cases (e.g. elastoplastic problem) of viscoplastic RRA and CTO are discussed. Finally, numerical examples, which are compared with the latest FEM results of Ibrahimbegovic et al. and ABAQUS results, are provided.

  4. Elasto-viscoplastic consistent tangent operator concept-based implicit boundary element methods

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    An elasto-viscoplastic consistent tangent operator (CTO) concept-based implicit algorithm for nonlinear boundary element methods is presented. Both kinematic and isotropic strain hardening are considered. The elasto-viscoplastic radial return algorithm (RRA) and the elasto-viscoplastic CTO and its related scheme are developed. In addition, the limit cases (e.g. elastoplastic problem) of viscoplastic RRA and CTO are discussed. Finally, numerical examples, which are compared with the latest FEM results of Ibrahimbegovic et al. and ABAQUS results, are provided.

  5. Magnetohydrodynamic peristaltic flow of a hyperbolic tangent fluid in a vertical asymmetric channel with heat transfer

    Institute of Scientific and Technical Information of China (English)

    Sohail Nadeem; Safia Akram

    2011-01-01

    In the present paper we discuss the magnetohydrodynamic (MHD) peristaltic flow of a hyperbolic tangent fluid model in a vertical asymmetric channel under a zero Reynolds number and long wavelength approximation. Exact solution of the temperature equation in the absence of dissipation term has been computed and the analytical expression for stream function and axial pressure gradient are established. The flow is analyzed in a wave frame of reference moving with the velocity of wave. The expression for pressure rise has been computed numerically. The physical features of pertinent parameters are analyzed by plotting graphs and discussed in detail.

  6. A Comparative Study of Failure with Incremental Forming

    Science.gov (United States)

    Wu, S. H.; Song, N. N.; Pires, F. M. Andrade

    2016-08-01

    Incremental forming (ISF) is an innovative flexible sheet metal forming process which can be used to manufacture complex shapes from various materials. Due to its flexibility, it has attracted more and more attention over recent decades. Localized deformation and shear through the thickness are essential characteristics of ISF. These lead to failure modes and formability that differ from those of the conventional stamping process. In this contribution, three continuum damage models (Lemaitre, Gurson, and extended GTN models) are formulated and fully coupled with finite element simulation in the commercial software ABAQUS to predict failure in incremental forming. A comparative investigation of these three damage models has been carried out to analyze both the deformation behavior and the failure mechanisms.

  7. Numerical Investigation of Influence of Tangent Pitch and Slanting Flow of Guide Vanes on the Axial Compressor Stage Parameters

    Directory of Open Access Journals (Sweden)

    D. V. Arkhipov

    2015-01-01

    Full Text Available Redistributing the flow in an axial stage by deforming the axis of the stator blades can create favorable conditions for raising the stage efficiency and improving the matching of the axial compressor elements, especially at off-design conditions. For this purpose, the impact of the axis deformation on the gas-dynamic stability margin and on the efficiency of an axial compressor has been investigated numerically. The influence of the guide vane (GV) axis was considered with unchanged rotor blades and different stator variants. The GV axis was bent along a circular arc within ±15% of the guide vane height in the circumferential direction and within ±10% of the guide vane height in the axial direction, in increments of ±2.5%. The object of the investigation was a numerical 3D model of a transonic axial compressor stage with the following basic parameters: a circumferential speed at the rotor blade tips of 345 m/s, a relative hub diameter of 0.7, and a discharge coefficient of 0.5. The stage was profiled according to the classical law Cu·r = const. The rotor and stator profiles of all investigated variants were identical at the same radii. Compared with the initial radial guide vane axis, bending the axis in the axial direction, downstream, substantially reduces the total pressure losses in the stator over the whole blade height. Bending the axis in the circumferential direction, against the rotation, reduces the total pressure losses especially in the hub and shroud regions, while the flow core remains unchanged. In the future, the effects of tangent pitch and slanting flow will be of interest for simultaneous bending in both directions, as well as for studying the influence of guide vane bending in a sector of stages and in a multi-stage compressor over a wide range of operating conditions.

  8. Property Differencing for Incremental Checking

    Science.gov (United States)

    Yang, Guowei; Khurshid, Sarfraz; Person, Suzette; Rungta, Neha

    2014-01-01

    This paper introduces iProperty, a novel approach that facilitates incremental checking of programs based on a property differencing technique. Specifically, iProperty aims to reduce the cost of checking properties as they are initially developed and as they co-evolve with the program. The key novelty of iProperty is to compute the differences between the new and old versions of expected properties to reduce the number and size of the properties that need to be checked during the initial development of the properties. Furthermore, property differencing is used in synergy with program behavior differencing techniques to optimize common regression scenarios, such as detecting regression errors or checking feature additions for conformance to new expected properties. Experimental results in the context of symbolic execution of Java programs annotated with properties written as assertions show the effectiveness of iProperty in utilizing change information to enable more efficient checking.
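
    A minimal sketch of the property-differencing idea follows (not the iProperty tool itself, which works on Java assertions and symbolic execution): given the property sets of the old and new program versions, only added or changed properties need to be checked. The property names and assertion strings are hypothetical.

```python
# Hypothetical sketch of property differencing: check only the properties that changed.
def property_diff(old_props: dict[str, str], new_props: dict[str, str]) -> set[str]:
    """Map method name -> assertion text; return names whose property must be (re)checked."""
    added = new_props.keys() - old_props.keys()
    changed = {m for m in new_props.keys() & old_props.keys()
               if new_props[m] != old_props[m]}
    return added | changed

old_version = {"withdraw": "balance >= 0", "deposit": "amount > 0"}
new_version = {"withdraw": "balance >= 0 && balance <= limit",   # strengthened
               "deposit":  "amount > 0",                         # unchanged
               "transfer": "from != to"}                         # newly added
print(property_diff(old_version, new_version))   # {'withdraw', 'transfer'}
```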

  9. Incremental Observer Relative Data Extraction

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    2004-01-01

    The visual exploration of large databases calls for a tight coupling of database and visualization systems. Current visualization systems typically fetch all the data and organize it in a scene tree that is then used to render the visible data. For immersive data explorations in a Cave or a Panorama, where an observer is immersed in the data space, this approach is far from optimal. A more scalable approach is to make the database system observer-aware and to restrict the communication between the database and visualization systems to the relevant data. In this paper VR-tree, an extension of the R-tree, is used to index visibility ranges of objects. We introduce a new operator for incremental Observer Relative data Extraction (iORDE). We propose the Volatile Access STructure (VAST), a lightweight main memory structure that is created on the fly and is maintained during visual data explorations. VAST...

  10. Comparison of the Tangent Linear Properties of Tracer Transport Schemes Applied to Geophysical Problems.

    Science.gov (United States)

    Kent, James; Holdaway, Daniel

    2015-01-01

    A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
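
    The nonlinearity introduced by limiters can be seen with a simple superposition test on one step of 1D advection, as in the hedged sketch below (a generic illustration, not GEOS-5 code): a first-order upwind update is exactly linear in the tracer field, whereas a minmod-limited second-order update is not.

```python
# Hedged illustration of limiter-induced nonlinearity: a linear scheme satisfies
# superposition exactly; a limited scheme does not.
import numpy as np

def upwind(q, c=0.5):
    """First-order upwind step on a periodic domain (linear in q)."""
    return q - c * (q - np.roll(q, 1))

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited(q, c=0.5):
    """MUSCL-type step with a minmod-limited slope (nonlinear because of the limiter)."""
    slope = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
    face = q + 0.5 * (1 - c) * slope            # upwind face value
    return q - c * (face - np.roll(face, 1))

rng = np.random.default_rng(1)
q = np.where(np.arange(64) < 32, 1.0, 0.0)      # step profile, activates the limiter

for scheme in (upwind, limited):
    d1, d2 = 0.1 * rng.normal(size=64), 0.1 * rng.normal(size=64)
    lhs = scheme(q + d1 + d2) - scheme(q)
    rhs = (scheme(q + d1) - scheme(q)) + (scheme(q + d2) - scheme(q))
    print(scheme.__name__, "superposition error:", np.max(np.abs(lhs - rhs)))
# The upwind error is at round-off level (linear scheme); the limited scheme's is not.
```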

  11. Changes in the tangent modulus of rabbit septal and auricular cartilage following electromechanical reshaping.

    Science.gov (United States)

    Lim, Amanda; Protsenko, Dmitry E; Wong, Brian J F

    2011-09-01

    Transforming decades-old methodology, electromechanical reshaping (EMR) may someday replace traditionally destructive surgical techniques with a less invasive means of cartilage reshaping for reconstructive and esthetic facial surgery. Electromechanical reshaping is accomplished by applying a voltage to a mechanically deformed cartilage specimen. While the capacity of the method for effective reshaping has been consistently shown, its effects on cartilage mechanical properties are not fully understood. To begin to explore the mechanical effect of EMR on cartilage, the tangent moduli of EMR-treated rabbit septal and auricular cartilage were calculated and compared to matched control values. Of the two main EMR parameters, voltage and application time, the former was varied from 2 to 8 V and the latter was held constant at 2 min for septal cartilage and 3 min for auricular cartilage. Flat platinum electrodes were used to apply the voltage, maintaining the flatness of the specimens for more precise mechanical testing through a uniaxial tension test at a constant strain rate of 0.01 mm/s. Above 2 V, both septal and auricular cartilage demonstrated a slight reduction in stiffness, quantified by the tangent modulus. A thermal effect was observed above 5 V, a newly identified EMR application threshold to avoid the dangers associated with thermoforming cartilage. Optimizing EMR application parameters and understanding the various side effects bridge the gap between EMR laboratory research and clinical use, and the knowledge acquired through this mechanical study may be one additional support for that bridge.

  12. Characterization of Tangent Cones of Noncollapsed Limits with Lower Ricci Bounds and Applications

    CERN Document Server

    Colding, Tobias Holck

    2011-01-01

    Consider a limit space $(M_\alpha,g_\alpha,p_\alpha)\stackrel{GH}{\rightarrow} (Y,d_Y,p)$, where the $M_\alpha^n$ have a lower Ricci curvature bound and are volume noncollapsed. The tangent cones of $Y$ at a point $p\in Y$ are known to be metric cones $C(X)$, however they need not be unique. Let $\bar\Omega_{Y,p}\subseteq\mathcal{M}_{GH}$ be the closed subset of compact metric spaces $X$ which arise as cross sections for the tangent cones of $Y$ at $p$. In this paper we study the properties of $\bar\Omega_{Y,p}$. In particular, we give necessary and sufficient conditions for an open smooth family $\Omega\equiv (X_s,g_s)$ of closed manifolds to satisfy $\bar\Omega =\bar\Omega_{Y,p}$ for {\it some} limit $Y$ and point $p\in Y$ as above, where $\bar\Omega$ is the closure of $\Omega$ in the set of metric spaces equipped with the Gromov-Hausdorff topology. We use this characterization to construct examples which exhibit fundamentally new behaviors. The first application is to construct limit spaces $(Y^n,d_Y,p)$ with $n\...

  13. Eight-Scale Image Contrast Enhancement Based on Adaptive Inverse Hyperbolic Tangent Algorithm

    Directory of Open Access Journals (Sweden)

    Cheng-Yi Yu

    2014-10-01

    Full Text Available The eight-scale parameter adjustment is a natural extension of the Adaptive Inverse Hyperbolic Tangent (AIHT) algorithm. It has long been known that the Human Vision System (HVS) heavily depends on detail and edges in the understanding and perception of scenes. The main goal of this study is to produce a contrast enhancement technique that recovers an image from blurring and darkness and at the same time improves visual quality. Eight-scale coefficient adjustment provides further local refinement of detail within the AIHT algorithm. The proposed Eight-Scale Adaptive Inverse Hyperbolic Tangent (8SAIHT) method uses sub-bands to calculate the local mean and local variance before the AIHT algorithm is applied. This study also shows that this approach is convenient and effective in the enhancement process for various types of images. The 8SAIHT method is also capable of adaptively enhancing the local contrast of the original image while simultaneously bringing out more object detail.
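
    The exact 8SAIHT transfer function and its eight-scale adjustment are not reproduced in the abstract, so the following is only a hedged sketch of a generic inverse-hyperbolic-tangent contrast stretch: pixels are mapped into (-1, 1), passed through atanh, and rescaled. The gain parameter and the min-max rescaling are assumptions for illustration, not the paper's formula.

```python
# Hedged sketch of an inverse-hyperbolic-tangent style contrast stretch
# (not the published 8SAIHT transfer function).
import numpy as np

def aiht_like_enhance(img, gain=0.9):
    """img: uint8 array. `gain` < 1 keeps the argument of atanh away from +/-1."""
    x = img.astype(np.float64) / 255.0            # normalize to [0, 1]
    x = gain * (2.0 * x - 1.0)                    # map into (-gain, gain)
    y = np.arctanh(x)                             # inverse hyperbolic tangent mapping
    y = (y - y.min()) / (y.max() - y.min() + 1e-12)
    return (255.0 * y).astype(np.uint8)

demo = np.clip(np.random.default_rng(0).normal(100, 20, (64, 64)), 0, 255).astype(np.uint8)
out = aiht_like_enhance(demo)
print(demo.std(), out.std())                      # the output histogram is spread out
```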

  14. Catalog of observed tangents to the spiral arms in the Milky Way galaxy

    CERN Document Server

    Vallee, Jacques P

    2014-01-01

    From the sun's location in the Galactic disk, one can use different arm tracers (CO, HII, thermal or ionized or relativistic electrons, masers, cold or hot dust, etc) to locate a tangent to each spiral arm in the disk of the Milky Way galaxy. We present a Master catalog of the astronomically observed tangents to the Galaxy's spiral arms, using different arm tracers from the literature. Some arm tracers can have slightly divergent results from several papers, so a mean is taken - see Appendix for CO, HII, and masers. The Master catalog of means currently consists of 63 mean tracer entries, spread over many arms (Carina, Crux-Centaurus, Norma, Perseus origin, near 3-kpc, Scutum, Sagittarius), stemming from 107 original arm tracer entries. Additionally, we updated and revised a previous statistical analysis of the angular offset and linear separation from the mid-arm, for each different mean arm tracer. Given enough arm tracers, and summing and averaging over all spiral arms, one could determine if ar...

  15. Numerical prediction of the incremental melting and solidification process

    Institute of Scientific and Technical Information of China (English)

    Jun Wang; Chengchang Jia; Sheng Yin

    2003-01-01

    A mathematical formulation is applied to represent the phenomena in the incremental melting and solidification process (IMSP), and the temperature and electromagnetic fields and the depth of the liquid steel phase are calculated by a finite difference technique using the control volume method. The results show that the predicted values are in good agreement with the observations. Based on the values calculated for different kinds of materials and different mold sizes, technological parameters of the IMS process, such as the power supply and the descending speed, can be determined.
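
    As a hedged illustration of the finite difference / control volume machinery mentioned above (one-dimensional transient conduction only; the coupled electromagnetic field and liquid-depth calculations of the paper are not reproduced), the sketch below advances an explicit energy balance over each control volume. The material properties and boundary temperatures are assumed placeholder values.

```python
# Hedged sketch: 1D transient conduction solved with an explicit control-volume balance.
import numpy as np

L, N = 0.1, 50                      # bar length [m], number of control volumes
dx = L / N
rho, cp, k = 7800.0, 680.0, 30.0    # assumed steel-like density, heat capacity, conductivity
alpha = k / (rho * cp)
dt = 0.25 * dx**2 / alpha           # conservative explicit time step
T = np.full(N, 300.0)               # initial temperature [K]
T_left, T_right = 1800.0, 400.0     # melt-side and mold-side face temperatures (assumed)

for _ in range(2000):
    q = np.empty(N + 1)                              # conductive flux at the N+1 faces
    q[0]  = -k * (T[0] - T_left) / (dx / 2)          # left boundary face (half-cell distance)
    q[-1] = -k * (T_right - T[-1]) / (dx / 2)        # right boundary face
    q[1:-1] = -k * (T[1:] - T[:-1]) / dx             # interior faces
    T += dt / (rho * cp * dx) * (q[:-1] - q[1:])     # energy balance per control volume

print("mid-bar temperature after %.1f s: %.1f K" % (2000 * dt, T[N // 2]))
```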

  16. Evolving Classifiers: Methods for Incremental Learning

    CERN Document Server

    Hulley, Greg

    2007-01-01

    The ability of a classifier to take on new information and classes by evolving the classifier without it having to be fully retrained is known as incremental learning. Incremental learning has been successfully applied to many classification problems, where the data is changing and is not all available at once. In this paper there is a comparison between Learn++, which is one of the most recent incremental learning algorithms, and the new proposed method of Incremental Learning Using Genetic Algorithm (ILUGA). Learn++ has shown good incremental learning capabilities on benchmark datasets on which the new ILUGA method has been tested. ILUGA has also shown good incremental learning ability using only a few classifiers and does not suffer from catastrophic forgetting. The results obtained for ILUGA on the Optical Character Recognition (OCR) and Wine datasets are good, with an overall accuracy of 93% and 94% respectively showing a 4% improvement over Learn++.MT for the difficult multi-class OCR dataset.

  17. Coordinate-invariant incremental Lyapunov functions

    CERN Document Server

    Zamani, Majid

    2011-01-01

    The notion of incremental stability was proposed by several researchers as a strong property of dynamical and control systems. In this type of stability, the focus is on the convergence of trajectories with respect to themselves, rather than with respect to an equilibrium point or a particular trajectory. Similarly to stability, Lyapunov functions play an important role in the study of incremental stability. In this paper, we propose coordinate-invariant notions of incremental Lyapunov function and provide the description of incremental stability in terms of existence of the proposed Lyapunov functions. Moreover, we develop a backstepping design approach providing a recursive way of constructing controllers as well as incremental Lyapunov functions. The effectiveness of our method is illustrated by synthesizing a controller rendering a single-machine infinite-bus electrical power system incrementally stable.

  18. Minimal Change and Bounded Incremental Parsing

    CERN Document Server

    Wiren, M

    1994-01-01

    Ideally, the time that an incremental algorithm uses to process a change should be a function of the size of the change rather than, say, the size of the entire current input. Based on a formalization of ``the set of things changed'' by an incremental modification, this paper investigates how and to what extent it is possible to give such a guarantee for a chart-based parsing framework and discusses the general utility of a minimality notion in incremental processing.

  19. Incremental Supervised Subspace Learning for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Subspace learning algorithms have been well studied in face recognition. Among them, linear discriminant analysis (LDA) is one of the most widely used supervised subspace learning methods. Due to the difficulty of designing an incremental solution of the eigendecomposition on the product of matrices, there is little work on computing LDA incrementally. To avoid this limitation, an incremental supervised subspace learning (ISSL) algorithm was proposed, which incrementally learns an adaptive subspace by optimizing the maximum margin criterion (MMC). With dynamically added face images, ISSL can effectively constrain the computational cost. The feasibility of the new algorithm has been successfully tested on different face data sets.

  20. An arc tangent function demodulation method of fiber-optic Fabry-Perot high-temperature pressure sensor

    Science.gov (United States)

    Ren, Qianyu; Li, Junhong; Hong, Yingping; Jia, Pinggang; Xiong, Jijun

    2017-09-01

    A new demodulation algorithm for the fiber-optic Fabry-Perot cavity length, based on the phase generated carrier (PGC) technique, is proposed in this paper; it can be applied in high-temperature pressure sensors. The new algorithm, based on the arc tangent function, operates on two orthogonal signals produced by the optical system and is implemented on a field-programmable gate array (FPGA) to overcome the range limit of the original PGC arc tangent demodulation algorithm. Simulation and analysis are also carried out. According to the analysis of demodulation speed and precision, the simulation with different numbers of sampling points, and the measurement results of the pressure sensor, the arc tangent demodulation method performs well: a 1 MHz single-sample processing rate and less than 1% error, showing its practical feasibility for demodulating the Fabry-Perot cavity length of the fiber-optic high-temperature pressure sensor.
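
    The core of the arc tangent step can be illustrated with a few lines of signal processing (a hedged sketch only; the optical front end and the FPGA implementation of the paper are not modeled): the two orthogonal components I proportional to cos(phi) and Q proportional to sin(phi) are combined with a four-quadrant arctangent and phase unwrapping, which removes the range limit of a plain arctangent of Q/I. The sampling rate and phase amplitude below are assumed values.

```python
# Hedged sketch of arc-tangent phase recovery from two orthogonal components.
import numpy as np

fs, T = 1.0e6, 0.01                                  # sample rate [Hz] and duration [s] (assumed)
t = np.arange(0.0, T, 1.0 / fs)
phi_true = 4.0 * np.sin(2 * np.pi * 300.0 * t)       # phase swing well beyond +/- pi
I, Q = np.cos(phi_true), np.sin(phi_true)            # idealized orthogonal PGC mixer outputs

phi_rec = np.unwrap(np.arctan2(Q, I))                # four-quadrant arctangent + unwrapping
print("max recovery error [rad]:", np.max(np.abs(phi_rec - phi_true)))
# For a Fabry-Perot cavity, the length change follows as dL = phi * lambda / (4 * pi).
```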

  1. Incremental Support Vector Machine Framework for Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuichi Motai

    2007-01-01

    Full Text Available Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of the least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.

  2. Recognition of Pitman shorthand text using tangent feature values at word level

    Indian Academy of Sciences (India)

    P Nagabhushan; S Murali

    2003-12-01

    Recognition of text recorded in Pitman shorthand language (PSL) is an interesting research problem. Automatic reading of PSL and generating equivalent English text is very challenging. The most important task involved here is the accurate recognition of Pitman stroke patterns, which constitute "text" in PSL. The paper describes automatic recognition of the strokes of the PSL at word level. A pen-down to pen-up sequence makes a stroke, which is a composition of primitives. The words are separated based on pen-down and pen-up points. The features that form a word (a stroke) are grouped first. Next, primitives and their sequence are identified and passed to a recognizer which identifies the word. A tangent-based vector through the contour of a stroke identifies the consonant primitives. Any other marks close to the stroke but not associated with the contour of a stroke represent the vowel markers.

  3. Radiative flow of a tangent hyperbolic fluid with convective conditions and chemical reaction

    Science.gov (United States)

    Hayat, Tasawar; Qayyum, Sajid; Ahmad, Bashir; Waqas, Muhammad

    2016-12-01

    The objective of present paper is to examine the thermal radiation effects in the two-dimensional mixed convection flow of a tangent hyperbolic fluid near a stagnation point. The analysis is performed in the presence of heat generation/absorption and chemical reaction. Convective boundary conditions for heat and mass transfer are employed. The resulting partial differential equations are reduced into nonlinear ordinary differential equations using appropriate transformations. Series solutions of momentum, energy and concentration equations are computed. The characteristics of various physical parameters on the distributions of velocity, temperature and concentration are analyzed graphically. Numerical values of skin friction coefficient, local Nusselt and Sherwood numbers are computed and examined. It is observed that larger values of thermal and concentration Biot numbers enhance the temperature and concentration distributions.

  4. Tangent Bifurcation of Band Edge Plane Waves, Dynamical Symmetry Breaking and Vibrational Localization

    CERN Document Server

    Flach, S

    1995-01-01

    We study tangent bifurcation of band edge plane waves in nonlinear Hamiltonian lattices. The lattice is translationally invariant. We argue for the breaking of permutational symmetry by the new bifurcated periodic orbits. The case of two coupled oscillators is considered as an example for the perturbation analysis, where the symmetry breaking can be traced using Poincare maps. Next we consider a lattice and derive the dependence of the bifurcation energy on the parameters of the Hamiltonian function in the limit of large system sizes. A necessary condition for the occurrence of the bifurcation is the repelling of the band edge plane wave's frequency from the linear spectrum with increasing energy. We conclude that the bifurcated orbits will consequently exponentially localize in the configurational space.

  5. ON THE INCREMENTS DISTRIBUTION OF STOCK PRICES

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In this paper, models of the increment distributions of stock prices are constructed with two approaches. The first approach is based on limit theorems for random summation. The second approach is based on the statistical analysis of the increment distribution of the logarithms of stock prices.
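
    The second approach can be illustrated in a few lines (a hedged sketch, with simulated prices standing in for market data): form the increments of the logarithms of the prices and inspect their empirical moments.

```python
# Hedged illustration: empirical moments of log-price increments on a simulated series.
import numpy as np

rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0005, 0.02, size=1000)))  # toy price path
log_increments = np.diff(np.log(prices))                                 # log-returns

mean, std = log_increments.mean(), log_increments.std(ddof=1)
excess_kurtosis = ((log_increments - mean)**4).mean() / log_increments.var()**2 - 3.0
print("mean %.5f  std %.5f  excess kurtosis %.2f" % (mean, std, excess_kurtosis))
```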

  6. Webpage Segments Classification with Incremental Knowledge Acquisition

    Science.gov (United States)

    Guo, Wei; Kim, Yang Sok; Kang, Byeong Ho

    This paper suggests an incremental information extraction method for social network analysis of web publications. For this purpose, we employed an incremental knowledge acquisition method, called MCRDR (Multiple Classification Ripple-Down Rules), to classify web page segments. Our experimental results show that our MCRDR-based web page segments classification system successfully supports easy acquisition and maintenance of information extraction rules.

  7. Turns and Increments: A Comparative Perspective

    Science.gov (United States)

    Luke, Kang-kwong; Thompson, Sandra A.; Ono, Tsuyoshi

    2012-01-01

    Recent years have seen a surge of interest in "increments" among students of conversational interaction. This article first outlines "incrementing" as an analytical problem (i.e., as turn constructional unit [TCU] extensions) by tracing its origins back to Sacks, Schegloff, and Jefferson's (1974) famous turn-taking article. Then, the article…

  8. Unmanned Maritime Systems Incremental Acquisition Approach

    Science.gov (United States)

    2016-12-01

    MBA professional report, Naval Postgraduate School, Monterey, California. ...explore and understand the issues involved in the DOD's acquisition process for Unmanned Maritime Systems (UMS) in order to recommend a new acquisition

  9. Characterization of standard embeddings between complex Grassmannians by means of varieties of minimal rational tangents

    Institute of Scientific and Technical Information of China (English)

    MOK Ngaiming

    2008-01-01

    In 1993, Tsai proved that a proper holomorphic mapping f : Ω→Ω' from an irreducible bounded symmetric domain Ω of rank ≥ 2 into a bounded symmetric domain Ω' is necessarily totally geodesic provided that r' := rank(Ω') ≤ rank(Ω) := r, proving a conjecture of the author's motivated by Hermitian metric rigidity. As a first step in the proof, Tsai showed that df preserves almost everywhere the set of tangent vectors of rank 1. Identifying bounded symmetric domains as open subsets of their compact duals by means of the Borel embedding, this means that the germ of f at a general point preserves the varieties of minimal rational tangents (VMRTs). In another completely different direction Hwang-Mok established, with very few exceptions, the Cartan-Fubini extension principle for germs of local biholomorphisms between Fano manifolds of Picard number 1, showing that the germ of map extends to a global biholomorphism provided that it preserves VMRTs. We propose to isolate the problem of characterization of special holomorphic embeddings between Fano manifolds of Picard number 1, especially in the case of classical manifolds such as rational homogeneous spaces of Picard number 1, by a non-equidimensional analogue of the Cartan-Fubini extension principle. As an illustration we show along this line that standard embeddings between complex Grassmann manifolds of rank ≤ 2 can be characterized by the VMRT-preserving property and a non-degeneracy condition, giving a new proof of a result of Neretin's which on the one hand paves the way for far-reaching generalizations to the context of rational homogeneous spaces and more generally Fano manifolds of Picard number 1, and on the other hand should be applicable to the study of proper holomorphic mappings between bounded domains carrying some form of geometric structures.

  10. Characterization of standard embeddings between complex Grassmannians by means of varieties of minimal rational tangents

    Institute of Scientific and Technical Information of China (English)

    MOK; Ngaiming

    2008-01-01

    In 1993, Tsai proved that a proper holomorphic mapping f : Ω→Ω' from an irreducible bounded symmetric domain Ω of rank ≥ 2 into a bounded symmetric domain Ω' is necessarily totally geodesic provided that r' := rank(Ω') ≤ rank(Ω) := r, proving a conjecture of the author's motivated by Hermitian metric rigidity. As a first step in the proof, Tsai showed that df preserves almost everywhere the set of tangent vectors of rank 1. Identifying bounded symmetric domains as open subsets of their compact duals by means of the Borel embedding, this means that the germ of f at a general point preserves the varieties of minimal rational tangents (VMRTs). In another completely different direction Hwang-Mok established, with very few exceptions, the Cartan-Fubini extension principle for germs of local biholomorphisms between Fano manifolds of Picard number 1, showing that the germ of map extends to a global biholomorphism provided that it preserves VMRTs. We propose to isolate the problem of characterization of special holomorphic embeddings between Fano manifolds of Picard number 1, especially in the case of classical manifolds such as rational homogeneous spaces of Picard number 1, by a non-equidimensional analogue of the Cartan-Fubini extension principle. As an illustration we show along this line that standard embeddings between complex Grassmann manifolds of rank ≤ 2 can be characterized by the VMRT-preserving property and a non-degeneracy condition, giving a new proof of a result of Neretin's which on the one hand paves the way for far-reaching generalizations to the context of rational homogeneous spaces and more generally Fano manifolds of Picard number 1, and on the other hand should be applicable to the study of proper holomorphic mappings between bounded domains carrying some form of geometric structures.

  11. CATALOG OF OBSERVED TANGENTS TO THE SPIRAL ARMS IN THE MILKY WAY GALAXY

    Energy Technology Data Exchange (ETDEWEB)

    Vallée, Jacques P., E-mail: jacques.vallee@nrc-cnrc.gc.ca [Herzberg Astrophysics, National Research Council Canada, National Science Infrastructure portfolio, 5071 West Saanich Road, Victoria, BC, V9E 2E7 (Canada)

    2014-11-01

    From the Sun's location in the Galactic disk, one can use different arm tracers (CO, H I, thermal or ionized or relativistic electrons, masers, cold and hot dust, etc.) to locate a tangent to each spiral arm in the disk of the Milky Way. We present a master catalog of the astronomically observed tangents to the Galaxy's spiral arms, using different arm tracers from the literature. Some arm tracers can have slightly divergent results from several papers, so a mean value is taken—see the Appendix for CO, H II, and masers. The catalog of means currently consists of 63 mean tracer entries, spread over many arms (Carina, Crux-Centaurus, Norma, Perseus origin, near 3 kpc, Scutum, Sagittarius), stemming from 107 original arm tracer entries. Additionally, we updated and revised a previous statistical analysis of the angular offset and linear separation from the mid-arm for each different mean arm tracer. Given enough arm tracers, and summing and averaging over all four spiral arms, one could determine if arm tracers have separate and parallel lanes in the Milky Way. This statistical analysis allows a cross-cut of a Galactic spiral arm to be made, confirming a recent discovery of a linear separation between arm tracers. Here, from the mid-arm's CO to the inner edge's hot dust, the arm halfwidth is about 340 pc; doubling would yield a full arm width of 680 pc. We briefly compare these observations with the predictions of many spiral arm theories, notably the density wave theory.

  12. Accurate Analysis Method on Tangent Stiffness Matrix for Space Beam Element

    Institute of Scientific and Technical Information of China (English)

    刘树堂

    2014-01-01

    In order to effectively conduct post-buckling analysis of space frames, a new accurate analysis method for the tangent stiffness matrix of the space beam element is proposed. Firstly, the incremental relation between the member-end forces and the member-end displacements of the beam element is established by the direct equilibrium method; then the derivative of the member-end forces with respect to the member-end displacements is determined according to matrix differentiation theory, and by setting the increment of the member-end displacements in the resulting expression to zero, the tangent stiffness matrix of the beam element is obtained. Post-buckling analyses of a six-storey and a twenty-storey space frame were carried out. The results show that the present tangent stiffness matrix of the space beam element has sufficient precision and can be effectively applied to the post-buckling analysis of large space frames.
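
    The definition used in the abstract, namely that the tangent stiffness matrix is the derivative of the member-end force vector with respect to the member-end displacements, can be checked numerically. The sketch below is a hedged illustration with a toy two-degree-of-freedom nonlinear internal force standing in for a beam element's end forces, and a finite-difference derivative standing in for the paper's closed-form matrix differentiation.

```python
# Hedged numerical sketch: tangent stiffness as the derivative of the internal force
# vector with respect to the displacements, approximated column by column.
import numpy as np

def internal_force(u):
    """Toy nonlinear internal force vector (placeholder for a beam element's end forces)."""
    k1, k2 = 1000.0, 50.0
    return np.array([k1 * u[0] + k2 * u[0]**3 + 10.0 * u[0] * u[1],
                     k1 * u[1] + k2 * u[1]**3 + 5.0 * u[0]**2])

def tangent_stiffness(force, u, h=1e-6):
    n = u.size
    K = np.empty((n, n))
    for j in range(n):
        du = np.zeros(n)
        du[j] = h
        K[:, j] = (force(u + du) - force(u - du)) / (2.0 * h)   # dF_i / du_j
    return K

u0 = np.array([0.01, -0.02])        # current member-end displacement state (assumed)
print(tangent_stiffness(internal_force, u0))
```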

  13. Power calculation of linear and angular incremental encoders

    Science.gov (United States)

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

    Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method of power calculation for linear and angular incremental photoelectric encoders, used to find the optimum parameters for their components, such as light emitters, photodetectors, linear and angular scales, and other optical components. It analyzes methods and devices that permit high resolutions on the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders the optical beam is usually formed by a condenser lens; as it passes through the measuring unit its intensity changes depending on the movement of the scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photodetector block for processing in the electronic block. The starting point of the power calculation is therefore the required value of the optical signal at the input of the photodetector block, which must be reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.
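
    A hedged back-of-the-envelope version of such a power calculation is shown below for the optical chain described above (source, condenser lens, scale/raster, photodetector); every number is an assumed placeholder, not data from the article.

```python
# Hedged power-budget sketch for an incremental photoelectric encoder's optical chain.
led_radiant_flux_mW   = 20.0    # optical power emitted by the source (assumed)
condenser_efficiency  = 0.60    # fraction collected and collimated by the condenser lens (assumed)
raster_transmission   = 0.25    # average transmission of the scale / scanning reticle (assumed)
detector_responsivity = 0.45    # photodiode responsivity, A/W (assumed)

power_at_detector_mW = led_radiant_flux_mW * condenser_efficiency * raster_transmission
photocurrent_uA = power_at_detector_mW * 1e-3 * detector_responsivity * 1e6
print(f"optical power at detector: {power_at_detector_mW:.2f} mW, "
      f"photocurrent: {photocurrent_uA:.1f} uA")
```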

  14. Screening of mucoadhesive vaginal gel formulations

    Directory of Open Access Journals (Sweden)

    Ana Ochoa Andrade

    2014-12-01

    Full Text Available Rational design of vaginal drug delivery formulations requires special attention to vehicle properties that optimize vaginal coating and retention. The aim of the present work was to perform a screening of mucoadhesive vaginal gels formulated with carbomer or carrageenan in binary combination with a second polymer (carbomer, guar or xanthan gum). The gels were characterised using in vitro adhesion, spreadability and leakage potential studies, as well as rheological measurements (stress and frequency sweep tests) and the effect of dilution with simulated vaginal fluid (SVF) on spreadability. Results were analysed using analysis of variance and multiple factor analysis. The combination of polymers enhanced the adhesion of both primary gelling agents, carbomer and carrageenan. From the rheological point of view all formulations presented a similar behaviour, prevalently elastic and characterised by loss tangent values well below 1. No correlation between rheological and adhesion behaviour was found. Carbomer and carrageenan gels containing the highest percentage of xanthan gum displayed good in vitro mucoadhesion and spreadability, minimal leakage potential and high resistance to dilution. The positive results obtained with carrageenan-xanthan gum-based gels can encourage the use of natural biocompatible adjuvants in the composition of vaginal products, a formulation field that is currently dominated by synthetic materials.
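
    For reference, the loss tangent quoted above is simply the ratio of the loss modulus to the storage modulus, tan(delta) = G''/G'; the short sketch below computes it for made-up frequency-sweep data, with values well below 1 indicating the predominantly elastic behaviour reported for the gels.

```python
# Hedged example: loss tangent from (made-up) oscillatory rheology data.
import numpy as np

omega     = np.array([0.1, 1.0, 10.0])        # angular frequency, rad/s
G_storage = np.array([820.0, 900.0, 980.0])   # storage modulus G', Pa (illustrative)
G_loss    = np.array([95.0, 110.0, 140.0])    # loss modulus G'', Pa (illustrative)

loss_tangent = G_loss / G_storage             # tan(delta) = G'' / G'
print(dict(zip(omega, np.round(loss_tangent, 3))))   # all well below 1 -> elastic gel
```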

  15. Dynamic Geometry Software and Tracing Tangents in the Context of the Mean Value Theorem: Technique and Theory Production

    Science.gov (United States)

    Martínez-Hernández, Cesar; Ulloa-Azpeitia, Ricardo

    2017-01-01

    Based on the theoretical elements of the instrumental approach to tool use known as Task-Technique-Theory (Artigue, 2002), this paper analyses and discusses the performance of graduate students enrolled in a Teacher Training program. The latter performance relates to tracing tangent lines to the curve of a quadratic function in Dynamic Geometry…

  16. Aspects regarding the Calculation of the Dielectric Loss Angle Tangent between the Windings of a Rated 40 MVA Transformer

    Directory of Open Access Journals (Sweden)

    Cristinel Popescu

    2015-09-01

    Full Text Available The paper aims to identify how to determine the dielectric loss angle tangent of power transformers in transformer stations. The authors carried out a case study on the dielectric formed between the high-voltage and medium-voltage windings of a 40 MVA rated transformer.

  17. New cases of integrable systems with dissipation on tangent bundles of two- and three-dimensional spheres

    Science.gov (United States)

    Shamolin, M. V.

    2016-12-01

    Integrability in elementary functions is demonstrated for some classes of dynamic systems on tangent bundles of two- and three-dimensional spheres. The force fields possess the so-called variable dissipation with a zero mean and generalize those considered earlier.

  18. When is the Tangent Sphere Bundle with Arbitrary Constant Radius Einstein

    Institute of Scientific and Technical Information of China (English)

    陈冬梅; 胡自胜

    2009-01-01

    The paper studies the tangent sphere bundle with arbitrary constant radius r and derives a necessary and sufficient condition for such a tangent sphere bundle to be Einstein.

  19. Incremental Discriminant Analysis in Tensor Space

    Science.gov (United States)

    Chang, Liu; Weidong, Zhao; Tao, Yan; Qiang, Pu; Xiaodan, Du

    2015-01-01

    To study incremental machine learning in tensor space, this paper proposes incremental tensor discriminant analysis. The algorithm employs a tensor representation to carry out discriminant analysis and combines it with incremental learning to alleviate the computational cost. This paper proves that the algorithm can be unified into the graph framework theoretically and analyzes the time and space complexity in detail. Experiments on facial image detection have shown that the algorithm not only achieves sound performance compared with other algorithms, but also noticeably reduces the computational burden. PMID:26339229

  20. Micromorphic continua: non-redundant formulations

    Science.gov (United States)

    Romano, Giovanni; Barretta, Raffaele; Diaco, Marina

    2016-11-01

    The kinematics of generalized continua is investigated and key points concerning the definition of the overall tangent strain measure are highlighted. It is shown that the classical measures adopted in the literature for micromorphic continua do not obey a constraint qualification requirement, which must be fulfilled for well-posedness in optimization theory, and are therefore termed redundant. Redundancy of continua with latent microstructure and of constrained Cosserat continua is also assessed. The simplest, non-redundant kinematic model of micromorphic continua is proposed by dropping the microcurvature field. The equilibrium conditions and the related variational linear elastostatic problem are formulated and briefly discussed. The simplest model involves a reduced number of state variables and of elastic constitutive coefficients, when compared with other models of micromorphic continua, while still being capable of enriching the Cauchy continuum model in a significant way.

  1. Development of Interactive Learning Media to Build Understanding of the Concept of Common Tangents of Two Circles

    Directory of Open Access Journals (Sweden)

    Vivin Nur Afidah

    2014-06-01

    Full Text Available Development of Interactive Learning Media to Build Understanding of the Concept of Common Tangents of Two Circles. Abstract: The interactive learning media developed here are designed according to the characteristics of computer-assisted learning / Computer Assisted Instruction (CAI) of the tutorial type. They are designed on the basis of cognitive load theory, namely minimizing extraneous cognitive load processing, managing intrinsic cognitive load processing, and fostering germane cognitive load processing. The research follows the 4D development model: define, design, develop, and disseminate. The conclusions are: (1) validation by experts and practitioners rates the developed interactive learning media as valid; (2) observation of teacher and student activity shows that the interactive learning media meet the practicality criteria; (3) quiz results show that most students attain at least a high level of understanding; (4) mastery tests on the teaching materials show that most students attain at least a high level of mastery; (5) the quiz and mastery-test results, together with the student questionnaire responses, show that the interactive learning media meet the effectiveness criteria. Understanding and mastery of the material on common tangents of two circles meet the minimum "high" category, and students' responses to the use of the interactive learning media were positive. Key Words: development, interactive learning media, tangents, circle

  2. Efficient Incremental Checkpointing of Java Programs

    DEFF Research Database (Denmark)

    Lawall, Julia Laetitia; Muller, Gilles

    2000-01-01

    This paper investigates the optimization of language-level checkpointing of Java programs. First, we describe how to systematically associate incremental checkpoints with Java classes. While being safe, the genericness of this solution induces substantial execution overhead. Second, to solve...

  3. VT Tax Increment Financing (TIF) Districts

    Data.gov (United States)

    Vermont Center for Geographic Information — Tax Increment Financing (TIF) Districts is established by a municipality around an area that requires public infrastructure to encourage public and private real...

  4. Separate and combined effects of gabapentin and Δ9-tetrahydrocannabinol in humans discriminating Δ9-tetrahydrocannabinol.

    Science.gov (United States)

    Lile, Joshua A; Wesley, Michael J; Kelly, Thomas H; Hays, Lon R

    2016-04-01

    The aim of the present study was to examine a potential mechanism of action of gabapentin to manage cannabis-use disorders by determining the interoceptive effects of gabapentin in cannabis users discriminating Δ-tetrahydrocannabinol (Δ-THC) using a pharmacologically selective drug-discrimination procedure. Eight cannabis users learned to discriminate 30 mg oral Δ-THC from placebo and then received gabapentin (600 and 1200 mg), Δ-THC (5, 15, and 30 mg), and placebo alone and in combination. Self-report, task performance, and physiological measures were also collected. Δ-THC served as a discriminative stimulus, produced positive subjective effects, elevated heart rate, and impaired psychomotor performance. Both doses of gabapentin substituted for the Δ-THC discriminative stimulus and engendered subjective and performance-impairing effects that overlapped with those of Δ-THC when administered alone. When administered concurrently, gabapentin shifted the discriminative-stimulus effects of Δ-THC leftward/upward, and combinations of Δ-THC and gabapentin generally produced larger effects on cannabinoid-sensitive outcomes relative to Δ-THC alone. These results suggest that one mechanism by which gabapentin might facilitate cannabis abstinence is by producing effects that overlap with those of cannabinoids.

  5. Large Deviations for Processes with Independent Increments.

    Science.gov (United States)

    1984-10-01

    generating function of the increments exists and thus the sample paths of such stochastic processes lie in the space of functions of bounded variation. The...BV[0,1], the space of functions of bounded variation, and the topology is that of weak*-convergence. Varadhan (1966) studied the LDP for similar...increments and no Gaussian component which are considered as elements of BV[0,1], the space of functions of bounded variation. The final section

  6. Initial versus tangent stiffness-based Rayleigh damping in inelastic time history seismic analyses

    CERN Document Server

    Jehel, Pierre; Ibrahimbegovic, Adnan

    2013-01-01

    In the inelastic time history analyses of structures in seismic motion, part of the seismic energy that is imparted to the structure is absorbed by the inelastic structural model, and Rayleigh damping is commonly used in practice as an additional energy dissipation source. It has been acknowledged that Rayleigh damping models lack physical consistency and that, in turn, they must be used carefully to avoid unintended consequences such as the appearance of artificial damping. There are concerns raised by the mass proportional part of Rayleigh damping, but they are not considered in this paper. As far as the stiffness proportional part of Rayleigh damping is concerned, either the initial structural stiffness or the updated tangent stiffness can be used. The objective of this paper is to provide a comprehensive comparison of these two types of Rayleigh damping models so that a practitioner (i) can objectively choose the type of Rayleigh damping model that best fits her/his needs and (ii) is provided with u...

  7. A H-Infinity Control for Path Tracking with Fuzzy Hyperbolic Tangent Model

    Directory of Open Access Journals (Sweden)

    Guangsi Shi

    2016-01-01

    Full Text Available To achieve the goal of a driverless underground mining truck, a fuzzy hyperbolic tangent model is established for path tracking of an underground articulated mining truck. Firstly, sample data are collected while a driver controls the articulated vehicle at a speed of 3 m/s, including both the lateral position deviation and the variation of the heading angle deviation. Then the weights are identified using an improved adaptive BP neural network model, with the adjustment rate of the error estimator derived by a robust Cauchy method. Finally, an H-infinity controller is designed to control the steering angle. The results of hardware-in-the-loop simulation show that the lateral position deviation, heading angle deviation, and steering angle of the vehicle can be controlled to within 0.024 m, 0.08 rad, and 0.21 rad, respectively. All the deviations are asymptotically stable, and the control error is less than 2%. The method is demonstrated to be effective and reliable for path tracking of underground vehicles.

  8. Three dimensional peristaltic flow of hyperbolic tangent fluid in non-uniform channel having flexible walls

    Directory of Open Access Journals (Sweden)

    M. Ali Abbas

    2016-03-01

    Full Text Available In the present analysis, the three-dimensional peristaltic flow of a hyperbolic tangent fluid in a non-uniform channel has been investigated. We have assumed that the pressure is uniform over the whole cross section and that inertial effects can be neglected. For this purpose we consider laminar flow under the long-wavelength (λ→∞) and creeping-flow (Re→0) approximations. The resulting highly nonlinear equations are solved with the help of the homotopy perturbation method. The influence of the various physical parameters of interest is demonstrated graphically for wall tension, mass characterization, the damping nature of the wall, wall rigidity, wall elastance, aspect ratio and the Weissenberg number. We found that the magnitude of the velocity is maximum in the centre of the channel whereas it is minimum near the walls. Streamlines are also drawn to discuss the trapping mechanism for all the physical parameters. A comparison between Newtonian and non-Newtonian fluids is also presented.

  9. Driver Gaze Behavior Is Different in Normal Curve Driving and when Looking at the Tangent Point.

    Directory of Open Access Journals (Sweden)

    Teemu Itkonen

    Full Text Available Several steering models in the visual science literature attempt to capture the visual strategies in curve driving. Some of them are based on steering points on the future path (FP), others on tangent points (TP). It is, however, challenging to differentiate between the models' predictions in real-world contexts. Analysis of optokinetic nystagmus (OKN) parameters is one useful measure, as the different strategies predict measurably different OKN patterns. Here, we directly test this prediction by asking drivers either (a) to "drive as they normally would" or (b) to "look at the TP". The design of the experiment is similar to a previous study by Kandil et al., but uses more sophisticated methods of eye-movement analysis. We find that the eye-movement patterns in the "normal" condition are indeed markedly different from the "tp" condition, and consistent with drivers looking at waypoints on the future path. This is the case for both overall fixation distribution, as well as the more informative fixation-by-fixation analysis of OKN. We find that the horizontal gaze speed during OKN corresponds well to the quantitative prediction of the future path models. The results also definitively rule out the alternative explanation that the OKN is produced by an involuntary reflex even while the driver is "trying" to look at the TP. The results are discussed in terms of the sequential organization of curve driving.

  10. Adaptive Inverse Hyperbolic Tangent Algorithm for Dynamic Contrast Adjustment in Displaying Scenes

    Directory of Open Access Journals (Sweden)

    Chein-I Chang

    2010-01-01

    Full Text Available Contrast has a great influence on the quality of an image in human visual perception. A poorly illuminated environment can significantly affect the contrast ratio, producing an unexpected image. This paper proposes an Adaptive Inverse Hyperbolic Tangent (AIHT algorithm to improve the display quality and contrast of a scene. Because digital cameras must maintain the shadow in a middle range of luminance that includes a main object such as a face, a gamma function is generally used for this purpose. However, this function has a severe weakness in that it decreases highlight contrast. To mitigate this problem, contrast enhancement algorithms have been designed to adjust contrast to tune human visual perception. The proposed AIHT determines the contrast levels of an original image as well as parameter space for different contrast types so that not only the original histogram shape features can be preserved, but also the contrast can be enhanced effectively. Experimental results show that the proposed algorithm is capable of enhancing the global contrast of the original image adaptively while extruding the details of objects simultaneously.

  11. Adaptive Inverse Hyperbolic Tangent Algorithm for Dynamic Contrast Adjustment in Displaying Scenes

    Directory of Open Access Journals (Sweden)

    Wang Chuin-Mu

    2010-01-01

    Full Text Available Abstract Contrast has a great influence on the quality of an image in human visual perception. A poorly illuminated environment can significantly affect the contrast ratio, producing an unexpected image. This paper proposes an Adaptive Inverse Hyperbolic Tangent (AIHT algorithm to improve the display quality and contrast of a scene. Because digital cameras must maintain the shadow in a middle range of luminance that includes a main object such as a face, a gamma function is generally used for this purpose. However, this function has a severe weakness in that it decreases highlight contrast. To mitigate this problem, contrast enhancement algorithms have been designed to adjust contrast to tune human visual perception. The proposed AIHT determines the contrast levels of an original image as well as parameter space for different contrast types so that not only the original histogram shape features can be preserved, but also the contrast can be enhanced effectively. Experimental results show that the proposed algorithm is capable of enhancing the global contrast of the original image adaptively while extruding the details of objects simultaneously.
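
    The core of such a mapping can be sketched in a few lines of Python. The version below pushes normalised intensities through an inverse hyperbolic tangent curve whose bias and gain are taken from simple image statistics; this adaptive parameter choice is only one plausible reading of the algorithm described above, not the published AIHT formula.

      import numpy as np

      def aiht_like_contrast(img, eps=1e-3):
          """Inverse-hyperbolic-tangent tone mapping with image-dependent parameters (illustrative)."""
          x = img.astype(np.float64) / 255.0                    # normalise to [0, 1]
          bias = x.mean()                                       # assumed adaptive parameter
          x = np.clip(2.0 * (x - bias), -1.0 + eps, 1.0 - eps)  # centre the data on the adaptive bias
          gain = np.arctanh(np.clip(np.abs(x).max(), eps, 1.0 - eps))
          y = np.arctanh(x) / gain                              # atanh curve, stretched to [-1, 1]
          return np.clip(0.5 * (y + 1.0) * 255.0, 0.0, 255.0).astype(np.uint8)

      # Example on a synthetic, poorly illuminated image.
      dark = (np.random.default_rng(0).random((64, 64)) * 80).astype(np.uint8)
      out = aiht_like_contrast(dark)
      print(dark.min(), dark.max(), out.min(), out.max())       # the dark input is stretched across most of 0-255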

  12. Singularities of plane complex curves and limits of Kähler metrics with cone singularities. I: Tangent Cones

    Directory of Open Access Journals (Sweden)

    Borbon Martin de

    2017-02-01

    Full Text Available The goal of this article is to provide a construction and classification, in the case of two complex dimensions, of the possible tangent cones at points of limit spaces of non-collapsed sequences of Kähler-Einstein metrics with cone singularities. The proofs and constructions are completely elementary, nevertheless they have an intrinsic beauty. In a few words; tangent cones correspond to spherical metrics with cone singularities in the projective line by means of the Kähler quotient construction with respect to the S1-action generated by the Reeb vector field, except in the irregular case ℂβ₁×ℂβ₂ with β₂/ β₁ ∉ Q.

  13. Assessing the tangent linear behaviour of common tracer transport schemes and their use in a linearised atmospheric general circulation model

    Directory of Open Access Journals (Sweden)

    Daniel Holdaway

    2015-09-01

    Full Text Available The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5. All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have non-linear behaviour. The piecewise parabolic method (PPM with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
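
    The kind of linearity check described above can be illustrated on a one-dimensional periodic advection problem. The upwind and minmod-limited schemes below and the simple scaling test are generic stand-ins, not the PPM variants or the GEOS-5 test harness.

      import numpy as np

      def upwind_step(q, c):
          """First-order upwind advection step on a periodic grid (Courant number 0 < c <= 1).
          The update is linear in q, so it passes the scaling test exactly."""
          return q - c * (q - np.roll(q, 1))

      def minmod_limited_step(q, c):
          """Second-order step with a minmod flux limiter; the limiter makes the update nonlinear in q."""
          back = q - np.roll(q, 1)
          fwd = np.roll(q, -1) - q
          slope = np.where(back * fwd > 0.0, np.sign(back) * np.minimum(np.abs(back), np.abs(fwd)), 0.0)
          flux = q + 0.5 * (1.0 - c) * slope            # flux leaving each cell to the right
          return q - c * (flux - np.roll(flux, 1))

      def linearity_defect(step, q, dq, c, eps):
          """Compare M(q + eps*dq) - M(q) against eps*(M(q + dq) - M(q)); zero for a linear scheme."""
          full = step(q + eps * dq, c) - step(q, c)
          scaled = eps * (step(q + dq, c) - step(q, c))
          return np.max(np.abs(full - scaled)) / np.max(np.abs(full))

      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      q = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)     # shock-like profile
      dq = 0.01 * np.sin(2.0 * np.pi * x)
      for eps in (1e-1, 1e-2, 1e-3):
          print(eps, linearity_defect(upwind_step, q, dq, 0.5, eps),
                linearity_defect(minmod_limited_step, q, dq, 0.5, eps))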

  14. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    Science.gov (United States)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.

  16. Tangent modulus in numerical integration of constitutive relations and its influence on convergence of N-R method

    Directory of Open Access Journals (Sweden)

    Poruba Z.

    2009-06-01

    Full Text Available For the numerical solution of elasto-plastic problems using the Newton-Raphson method in the global equilibrium equation, it is necessary to determine the tangent modulus at each integration point. To reach the quadratic convergence of the Newton-Raphson method it is convenient to use the so-called algorithmic tangent modulus, which is consistent with the integration scheme used. For simpler models, for example the Chaboche combined hardening model, it can be determined analytically. For more complex macroscopic models it is often necessary to use an approximation approach. This possibility is presented in this contribution for the radial return method on the Chaboche model. An example solved in the software Ansys corresponds to a line contact problem with the assumption of Coulomb friction. The study shows that the number of N-R iterations is higher when the continuum tangent modulus is used, and many times higher with the modified N-R method (initial stiffness method).
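
    The approximation approach mentioned above, i.e. estimating the algorithmic tangent modulus by perturbing the stress-update algorithm, can be sketched in one dimension as follows. The material data are illustrative, and in 1-D with linear isotropic hardening the algorithmic and continuum tangent moduli happen to coincide, so only the finite-difference estimate itself is demonstrated; for multiaxial Chaboche-type models the same perturbation idea applies but the two moduli differ.

      import numpy as np

      def radial_return_1d(eps_total, eps_p_n, alpha_n, E=200.0e3, H=2.0e3, sigy0=250.0):
          """One-dimensional radial-return stress update with linear isotropic hardening."""
          sig_trial = E * (eps_total - eps_p_n)
          f_trial = abs(sig_trial) - (sigy0 + H * alpha_n)
          if f_trial <= 0.0:                            # elastic step
              return sig_trial, eps_p_n, alpha_n
          dgamma = f_trial / (E + H)                    # plastic multiplier (closed form in 1-D)
          sig = sig_trial - E * dgamma * np.sign(sig_trial)
          return sig, eps_p_n + dgamma * np.sign(sig_trial), alpha_n + dgamma

      def algorithmic_tangent_fd(eps_total, eps_p_n, alpha_n, h=1.0e-8):
          """Forward-difference approximation of d(sigma)/d(epsilon) obtained by re-running the update."""
          s1, _, _ = radial_return_1d(eps_total + h, eps_p_n, alpha_n)
          s0, _, _ = radial_return_1d(eps_total, eps_p_n, alpha_n)
          return (s1 - s0) / h

      E, H = 200.0e3, 2.0e3
      D_fd = algorithmic_tangent_fd(0.005, 0.0, 0.0)    # strain well into the plastic range
      D_exact = E * H / (E + H)                         # analytical elastoplastic tangent in 1-D
      print(D_fd, D_exact)                              # the two agree to finite-difference accuracy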

  17. Internal Physical Features of a Land Surface Model Employing a Tangent Linear Model

    Science.gov (United States)

    Yang, Runhua; Cohn, Stephen E.; daSilva, Arlindo; Joiner, Joanna; Houser, Paul R.

    1997-01-01

    The Earth's land surface, including its biomass, is an integral part of the Earth's weather and climate system. Land surface heterogeneity, such as the type and amount of vegetative covering., has a profound effect on local weather variability and therefore on regional variations of the global climate. Surface conditions affect local weather and climate through a number of mechanisms. First, they determine the re-distribution of the net radiative energy received at the surface, through the atmosphere, from the sun. A certain fraction of this energy increases the surface ground temperature, another warms the near-surface atmosphere, and the rest evaporates surface water, which in turn creates clouds and causes precipitation. Second, they determine how much rainfall and snowmelt can be stored in the soil and how much instead runs off into waterways. Finally, surface conditions influence the near-surface concentration and distribution of greenhouse gases such as carbon dioxide. The processes through which these mechanisms interact with the atmosphere can be modeled mathematically, to within some degree of uncertainty, on the basis of underlying physical principles. Such a land surface model provides predictive capability for surface variables including ground temperature, surface humidity, and soil moisture and temperature. This information is important for agriculture and industry, as well as for addressing fundamental scientific questions concerning global and local climate change. In this study we apply a methodology known as tangent linear modeling to help us understand more deeply, the behavior of the Mosaic land surface model, a model that has been developed over the past several years at NASA/GSFC. This methodology allows us to examine, directly and quantitatively, the dependence of prediction errors in land surface variables upon different vegetation conditions. The work also highlights the importance of accurate soil moisture information. Although surface

  18. Growth increments in teeth of Diictodon (Therapsida

    Directory of Open Access Journals (Sweden)

    J. Francis Thackeray

    1991-09-01

    Full Text Available Growth increments circa 0.02 mm in width have been observed in sectioned tusks of Diictodon from the Late Permian lower Beaufort succession of the South African Karoo, dated between about 260 and 245 million years ago. Mean growth increments show a decline from relatively high values in the Tropidostoma/Endothiodon Assemblage Zone, to lower values in the Aulacephalodon/Cistecephalus zone, declining still further in the Dicynodon lacerticeps/Whaitsia zone at the end of the Permian. These changes coincide with gradual changes in carbon isotope ratios measured from Diictodon tooth apatite. It is suggested that the decline in growth increments is related to environmental changes associated with a decline in primary production which contributed to the decline in abundance and ultimate extinction of Diictodon.

  19. Incremental Passivity and Incremental Passivity-Based Output Regulation for Switched Discrete-Time Systems.

    Science.gov (United States)

    Jiao Li; Jun Zhao

    2017-05-01

    This paper investigates incremental passivity and output regulation for switched discrete-time systems. We develop the results in two parts. First of all, a concept of incremental passivity is proposed to describe the overall incremental passivity property of a switched discrete-time system in the absence of the classic incremental passivity property of the subsystems. A condition for incremental passivity is given. A certain negative output feedback is designed to produce asymptotic stability. Incremental passivity is shown to be preserved under feedback interconnection. The second part of this paper is concerned with an application of the incremental passivity theory to the output regulation problem for switched discrete-time systems. The key idea is to construct a switched internal model with incremental passivity, which closely links the solvability of the output regulation problem. A characteristic of the switched internal model is that it does not necessarily switch synchronously with the controlled plant, which greatly increases the freedom of design. Once such a switched internal model is established, the output regulation problem is then solved by construction of the feedback interconnection between the controlled plant and the switched internal model. The main usefulness of the strategy is to get rid of the solvability of the output regulation problem for the subsystems.

  20. Teraflop-scale Incremental Machine Learning

    CERN Document Server

    Özkural, Eray

    2011-01-01

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-using previous solutions, learning programming idioms and discovery of frequent subprograms. Experiments with two training sequences demonstrate that our approach to incremental learning is effective.

  1. Theory of Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Martins, P.A.F.; Bay, Niels; Skjødt, Martin

    2008-01-01

    This paper presents a closed-form theoretical analysis modelling the fundamentals of single point incremental forming and explaining the experimental and numerical results available in the literature for the past couple of years. The model is based on membrane analysis with bi-directional in-plane contact friction and is focused on the extreme modes of deformation that are likely to be found in single point incremental forming processes. The overall investigation is supported by experimental work performed by the authors and data retrieved from the literature.

  2. Concepts of incremental updating and versioning

    CSIR Research Space (South Africa)

    Cooper, Antony K

    2004-07-01

    Full Text Available BACKGROUND TO THE PROBLEM OF INCREMENTAL UPDATING AND VERSIONING: As well as a number of Commissions, the International Cartographic Association (ICA) had one Working Group, on Incremental Updating and Versioning, and this presentation describes some of the work undertaken recently by the Working Group (WG). The WG was voted to become a Commission by the General Assembly held at the 21st ICC in Durban, South Africa. The basic problem being addressed by the Commission is that a user compiles their data base...

  3. The balanced scorecard: an incremental approach model to health care management.

    Science.gov (United States)

    Pineno, Charles J

    2002-01-01

    The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decisionmakers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.

  4. Modified partially wide tangents technique in post-mastectomy radiotherapy for patients with left-sided breast cancer

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qian; CHEN Jia-yi; HU Wei-gang; GUO Xiao-mao

    2010-01-01

    Background The role of internal mammary nodes (IMN) irradiation for breast cancer patients after mastectomy remains controversial. This study aimed to compare different techniques for radiation of the chest wall (CW) and IMN post-mastectomy for left-breast cancer patients in terms of dose homogeneity within planning target volume (PTV) and dose to critical structures.Methods Thirty patients underwent CT simulation, while CW, IMN, left lung, heart and contralateral breast were contoured. Three three-dimensional conformal radiotherapy (3D-CRT) techniques, namely, standard tangents, partially wide tangents (PWT), and modified PWT techniques plus intensity modulated radiotherapy (IMRT) technique have been used to radiate CW and IMN. In addition to the target coverage and dose homogeneity, we also evaluated the dose to the critical structures including heart, left lung and contralateral breast.Results All three 3D-CRT techniques provided satisfactory coverage regarding total PTV. The PWT and the modified PWT gave better coverage of IMN PTV with V47.5 of (96.83±4.56)% and (95.19±3.90)% compared to standard tangents ((88.16±7.77)%), P <0.05. The standard tangents also contributed the biggest IMN VD105%, VD110%, VD115% and VD120%. The lowest mean dose of the heart was achieved by the modified PWT ((8.47±2.30) Gy), compared with PWT ((11.97±3.54)Gy) and standard tangents ((11.18±2.53) Gy). The mean dose of lung and contralateral breast with the modified PWT was significantly lower than those with PWT. Comparing IMRT with the modified PWT, both techniques provided satisfactory coverage. The conformity indexes (CI) with IMRT (CI1: 0.71±0.02; CI2: 0.64±0.02) were better than those with the modified PWT (CI1: 0.50±0.02; CI2: 0.45±0.02). The mean dose, V5, V10 and V5-10 of heart and left lung with the modified PWT were significantly lower than those with the IMRT. The mean dose and VD2% of contralateral breast with the modified PWT were not significantly different

  5. Incremental Integrity Checking: Limitations and Possibilities

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2005-01-01

    to query containment, we show that no procedure exists that always returns the best incremental test (aka simplification of integrity constraints), and this according to any reasonable criterion measuring the checking effort. In spite of this theoretical limitation, we develop an effective procedure...

  6. Incremental Pressing Technique in Explosive Charge

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A pressing technique has become available that might be useful for compressing granular explosives. If the height-diameter ratio of the charge is unfavorable, a high quality charge cannot be obtained with common single-action pressing. This paper presents an incremental pressing technique, which can produce a charge with higher overall density and more uniform density.

  7. Incremental Evaluation of Higher Order Attributes

    NARCIS (Netherlands)

    Bransen, Jeroen; Dijkstra, Atze; Swierstra, Doaitse

    2015-01-01

    Compilers, amongst other programs, often work with data that (slowly) changes over time. When the changes between subsequent runs of the compiler are small, one would hope the compiler to incrementally update its results, resulting in much lower running times. However, the manual construction of an

  8. Robust Background Subtraction with Incremental Eigen Models

    NARCIS (Netherlands)

    Gritti, T.

    2008-01-01

    In this report we first describe a background subtraction algorithm based on the use of eigen space decomposition. We then present a method to achieve incremental modelling, which allows for faster computation and savings in memory requirements. We discuss the performance of the algorithm and the issu

  9. The Cognitive Underpinnings of Incremental Rehearsal

    Science.gov (United States)

    Varma, Sashank; Schleisman, Katrina B.

    2014-01-01

    Incremental rehearsal (IR) is a flashcard technique that has been developed and evaluated by school psychologists. We discuss potential learning and memory effects from cognitive psychology that may explain the observed superiority of IR over other flashcard techniques. First, we propose that IR is a form of "spaced practice" that…

  10. Phylogeny Constructed by Using Diversity Increment

    Institute of Scientific and Technical Information of China (English)

    Shi Feng; Li Na-na; Li Yuan-xiang; Zhou Huai-bei

    2003-01-01

    A new approach based on the concept of the diversity increment is applied to reconstruct a phylogeny. The phylogeny of the Eutherian orders use concatenated H-stranded amino acid sequences, and the result is consistent with the commonly accepted one for the Eutherians.

  11. Value-Driven Incremental Development (Poster)

    Science.gov (United States)

    2014-10-27

    rework during development. Multi-dimensional Analysis What is the design implication of a release decision? Architecting for Incremental Assurance...measures are needed to make good release decisions? Selected FY14 Results • Improved rework analysis by making architectural dependency information...studies and surveys with organizations revealed architectural rework occurs in such context and can be managed by better quantification of technical

  12. Incremental pattern matching for regular expressions

    NARCIS (Netherlands)

    Jalali, Arash; Ghamarian, Amir Hossein; Rensink, Arend; Fish, Andrew; Lambers, Leen

    2012-01-01

    Graph pattern matching lies at the heart of any graph transformation-based system. Incremental pattern matching is one approach proposed for reducing the overall cost of pattern matching over successive transformations by preserving the matches that stay relevant after a rule application. An importan

  13. Embedded Incremental Feature Selection for Reinforcement Learning

    Science.gov (United States)

    2012-05-01

    Classical reinforcement learning techniques become impractical in domains with large complex state spaces. The size of a domain’s state space is...require all the provided features. In this paper we present a feature selection algorithm for reinforcement learning called Incremental Feature

  14. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan;

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic prog...

  15. Crystallization Formulation Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Crystallization Formulation Lab fills a critical need in the process development and optimization of current and new explosives and energetic formulations. The...

  16. Incrementally Exploiting Sentential Association for Email Classification

    Institute of Scientific and Technical Information of China (English)

    Li Qu; He Yu; Feng Jianlin; Feng Yucai

    2006-01-01

    A novel association-based algorithm EmailInClass is proposed for incremental Email classification. In view of the fact that the basic semantic unit in an Email is actually a sentence, and the words within the same sentence are typically more semantically related than the words that just appear in the same Email, EmailInClass views a sentence rather than an Email as a transaction. Extensive experiments conducted on benchmark corpora Enron reveal that the effectiveness of EmailInClass is superior to the non-incremental alternatives such as NaiveBayes and SAT-MOD. In addition, the classification rules generated by EmailInClass are human readable and revisable.

  17. A research on high-temperature permittivity and loss tangent of low-loss dielectric by resonant-cavity technique

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A resonant-cavity technique was introduced to measure the permittivity and loss tangent of low-loss dielectrics. The dielectric properties at 9-10 GHz are measured accurately at temperatures up to 800 ℃ by the resonant-cavity technique. The only electrical parameters that need to be measured are the quality factor (Q) and the resonant length (L) of the cavity, loaded and unloaded with the dielectric sample. Moreover, the error caused by the thermal expansion effect was resolved by error analysis and experimental calibration.

  18. Consequence of nanofluid on peristaltic transport of a hyperbolic tangent fluid model in the occurrence of apt (tending) magnetic field

    Science.gov (United States)

    Akram, Safia; Nadeem, S.

    2014-05-01

    In the current study, the influence of a nanofluid on the peristaltic transport of a hyperbolic tangent fluid model in the presence of an inclined magnetic field is discussed. The governing equations of the nanofluid are first modeled and then simplified under the lubrication approach. The coupled nonlinear equations for temperature and nanoparticle volume fraction are solved analytically using a homotopy perturbation technique. The analytical solutions for the stream function and pressure gradient are obtained using a perturbation technique. Graphical results are also presented to show the behavior of the various physical parameters.

  19. Incremental Centrality Algorithms for Dynamic Network Analysis

    Science.gov (United States)

    2013-08-01

    the incremental betweenness centrality algorithm. The total upper bound for the space these data structures consume is 111 ((3*|AffectedSinks...when p = 0.4. In [153], the author examines the ethnocentrism phenomenon which refers to the tendency to behave differently towards strangers based...amount of memory consumed , but, among these two factors, what makes the real difference in how much memory is needed is the number of nodes in a

  20. Laboratory setup for incremental forming at IPL

    DEFF Research Database (Denmark)

    Young, Dave; Andreasen, Jan Lasson

    In late 2003 the department enjoyed a visit from Professor Jack Jeswiet from Queen’s University Kingston, Ontario, Canada. Jack, as one of the pioneers in single point incremental forming, convinced us to start activities in this field at IPL, joining knowledge and facilities of the forming... for generating tool paths using available CAD/CAM facilities 3. Handing over knowledge from Queen’s to the laboratory staff of IPL.

  1. PROFITABILITY OF INCREMENTAL EXPENDITURE ON FIBRE PROMOTION

    OpenAIRE

    Hill, Debbie J.; Piggott, Roley R.; Griffith, Garry R.

    1996-01-01

    In this paper the impact of changes in wool promotion expenditure and changes in expenditure on the promotion of competing fibres are examined using an equilibrium displacement model. The emphasis is on examining impacts on producer profits net of promotion expenditure and on benefit-cost ratios measuring changes in producer surplus relative to changes in promotion expenditure. It was found, for example, that incremental expenditure on apparel wool promotion on the domestic market is unprofit...

  2. Molecular energies from an incremental fragmentation method

    Science.gov (United States)

    Meitei, Oinam Romesh; Heßelmann, Andreas

    2016-02-01

    The systematic molecular fragmentation method by Collins and Deev [J. Chem. Phys. 125, 104104 (2006)] has been used to calculate total energies and relative conformational energies for a number of small and extended molecular systems. In contrast to the original approach by Collins, we have tested the accuracy of the fragmentation method by utilising an incremental scheme in which the energies at the lowest level of the fragmentation are calculated at an accurate quantum chemistry level while lower-cost methods are used to correct the low-level energies through a high-level fragmentation. In this work, the fragment energies at the lowest level of fragmentation were calculated using the random-phase approximation (RPA) and two recently developed extensions to the RPA while the incremental corrections at higher levels of the fragmentation were calculated using standard density functional theory (DFT) methods. The complete incremental fragmentation method has been shown to reproduce the supermolecule results with very good accuracy, almost independently of the molecular type, size, or type of decomposition. The fragmentation method has also been used in conjunction with the DFT-SAPT (symmetry-adapted perturbation theory) method which enables a breakdown of the total nonbonding energy contributions into individual interaction energy terms. Finally, the potential problems of the method connected with the use of capping hydrogen atoms are analysed and two possible solutions are supplied.
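
    The two-level incremental idea can be written down compactly, as in the sketch below; e_high and e_low are placeholder callables (for example an RPA energy and a DFT energy), and the bookkeeping of overlapping fragment groups, capping hydrogens and higher fragmentation levels used in the actual systematic fragmentation method is not reproduced.

      def incremental_energy(fragments, full_system, e_high, e_low):
          """Two-level incremental fragmentation estimate (illustrative only)."""
          e_frag_high = sum(e_high(f) for f in fragments)   # expensive method on the small fragments
          e_frag_low = sum(e_low(f) for f in fragments)     # cheap method on the same fragments
          e_full_low = e_low(full_system)                   # cheap method on the full system
          # Correct the fragment-based high-level energy with the low-level fragmentation error.
          return e_frag_high + (e_full_low - e_frag_low)

      # Toy usage with fake energy functions (structures are stand-ins, energies count "atoms" only).
      print(incremental_energy([3, 4, 5], 12, e_high=lambda s: -1.05 * s, e_low=lambda s: -1.00 * s))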

  3. Mathematical analysis of the heart rate performance curve during incremental exercise testing.

    Science.gov (United States)

    Rosic, G; Pantovic, S; Niciforovic, J; Colovic, V; Rankovic, V; Obradovic, Z; Rosic, Mirko

    2011-03-01

    In this study we performed laboratory treadmill protocols of increasing load. Heart rate was continuously recorded and blood lactate concentration was measured for determination of the lactate threshold by means of the LTD-max and LT4.0 methods. Our results indicate that the shape of the heart rate performance curve (HRPC) during incremental testing depends on the applied exercise protocol (changes in the initial speed and in the running-speed increment per stage, with constant stage duration). Depending on the applied protocol, the HRPC can be described by a linear, polynomial (S-shaped), or exponential mathematical expression. We present a mathematical procedure for the estimation of heart rate threshold points at the level of LTD-max and LT4.0 by means of an exponential curve and its relative deflection from the initial trend line (the tangent to the exponential curve at the starting heart rate). The relative deflection of the exponential curve from the initial trend line at the level of LTD-max and/or LT4.0 can be defined based on the slope of the initial trend line. Using originally developed software that allows mathematical analysis of the heart rate-load relation, LTD-max and/or LT4.0 can be estimated without direct measurement of blood lactate concentration.
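
    A minimal version of the deflection criterion can be sketched as follows, assuming an exponential heart-rate model and an arbitrary 2% relative-deflection threshold; the paper calibrates this threshold against LTD-max and LT4.0, which is not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      def estimate_threshold_speed(speed, hr, delta=0.02):
          """First speed at which the fitted exponential HRPC deviates from its initial tangent line
          by more than `delta` (relative); returns None if the deflection never exceeds the threshold."""
          u = speed - speed[0]                                   # work relative to the first stage
          model = lambda u, a, b, c: a + b * np.exp(c * u)       # assumed exponential HR-load model
          (a, b, c), _ = curve_fit(model, u, hr, p0=(hr[0], 1.0, 0.2), maxfev=20000)
          slope0 = b * c                                         # derivative of the fit at the first stage
          tangent = (a + b) + slope0 * u                         # initial trend line
          rel = (model(u, a, b, c) - tangent) / tangent          # relative deflection
          above = np.where(rel > delta)[0]
          return speed[above[0]] if above.size else None

      v = np.arange(8.0, 17.0, 1.0)                              # km/h, synthetic incremental protocol
      hr = 120.0 + 3.0 * (v - 8.0) + 0.25 * (v - 8.0) ** 2       # synthetic, mildly accelerating HR
      print(estimate_threshold_speed(v, hr))                     # speed at which the HRPC leaves its trend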

  4. Variations in breast tangent radiotherapy: a survey of practice in New South Wales and the Australian Capital Territory

    Energy Technology Data Exchange (ETDEWEB)

    Veness, M.J.; Delaney, G.; Berry, M. [Liverpool Hospital, Liverpool, NSW (Australia). Department of Radiation Oncology

    1999-08-01

    The breast is a complex anatomical structure where achieving a homogeneous dose distribution with radiation treatment is difficult. Despite obvious similarities in the approach to such treatment (using tangents) there is variation in the process of simulation, planning and treatment between radiation oncologists. Previous Australasian studies in the treatment of lung cancer, prostate cancer and Hodgkin's disease highlighted considerable variation in many areas of treatment. As part of a multicentre breast phantom study involving 10 radiation oncology departments throughout New South Wales (NSW) and the Australian Capital Territory (ACT), a 22-question survey was distributed. The aim of the survey was to assess the extent of variation in the approach to the simulation, planning and treatment of early breast cancer using tangents. Responses from 10 different radiation oncology departments revealed variation in most areas of the survey. There is no reason to assume similar variations do not occur Australasia wide. Studies involving overseas radiation oncologists also reveal a wide variation in treating early breast cancer. The consequences of such variations remain unclear. Copyright (1999) Blackwell Science Pty Ltd 15 refs., 1 tab.

  5. ATTENUATION OF DIFFRACTED MULTIPLES WITH AN APEX-SHIFTED TANGENT-SQUARED RADON TRANSFORM IN IMAGE SPACE

    Directory of Open Access Journals (Sweden)

    Alvarez Gabriel

    2006-12-01

    Full Text Available In this paper, we propose a method to attenuate diffracted multiples with an apex-shifted tangent-squared Radon transform in angle domain common image gathers (ADCIGs). Usually, where diffracted multiples are a problem, the wave field propagation is complex and the moveout of primaries and multiples in data space is irregular. The method handles the complexity of the wave field propagation by wave-equation migration provided that migration velocities are reasonably accurate. As a result, the moveout of the multiples is well behaved in the ADCIGs. For 2D data, the apex-shifted tangent-squared Radon transform maps the 2D space image into a 3D space-cube model whose dimensions are depth, curvature and apex-shift distance.
    Well-corrected primaries map to or near the zero curvature plane and specularly-reflected multiples map to or near the zero apex-shift plane. Diffracted multiples map elsewhere in the cube according to their curvature and apex-shift distance. Thus, specularly reflected as well as diffracted multiples can be attenuated simultaneously. This approach is illustrated with a segment of a 2D seismic line over a large salt body in the Gulf of Mexico. It is shown that ignoring the apex shift compromises the attenuation of the diffracted multiples, whereas the approach proposed attenuates both the specularly-reflected and the diffracted multiples without compromising the primaries.
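
    A brute-force forward version of such a transform can be sketched as follows. The parameterisation z = z0 + q*tan(gamma - gamma0)**2, the nearest-neighbour interpolation and the explicit loops are simplifying assumptions for illustration, not the implementation used in the paper (which would also need the adjoint or an inversion to mute and reconstruct the data).

      import numpy as np

      def apex_shifted_tan2_radon(gather, z, gamma, q_axis, gamma0_axis):
          """Forward apex-shifted tangent-squared Radon transform of one angle gather.

          gather[z, gamma] is an angle-domain common image gather; the output m[z0, q, gamma0]
          sums the gather along the curves z = z0 + q*tan(gamma - gamma0)**2."""
          nz, dz = len(z), z[1] - z[0]
          model = np.zeros((nz, len(q_axis), len(gamma0_axis)))
          for iq, q in enumerate(q_axis):
              for ig0, g0 in enumerate(gamma0_axis):
                  shift = q * np.tan(gamma - g0) ** 2 / dz       # depth shift in samples, per angle
                  for ia, s in enumerate(shift):
                      iz = (np.arange(nz) + s).round().astype(int)
                      ok = (iz >= 0) & (iz < nz)
                      model[ok, iq, ig0] += gather[iz[ok], ia]
          return model

      # Synthetic gather containing a single curved (multiple-like) event with curvature q = 30.
      z = np.arange(0.0, 100.0)
      gamma = np.linspace(-0.6, 0.6, 21)
      gather = np.zeros((len(z), len(gamma)))
      for ia, g in enumerate(gamma):
          gather[int(round(60.0 + 30.0 * np.tan(g) ** 2)), ia] = 1.0
      m = apex_shifted_tan2_radon(gather, z, gamma, np.linspace(0.0, 60.0, 13), np.array([0.0]))
      print(np.unravel_index(np.argmax(m), m.shape))             # peak at z0 = 60, q index 6 (q = 30)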

  6. Systematic Luby Transform codes as incremental redundancy scheme

    CSIR Research Space (South Africa)

    Grobler, TL

    2011-09-01

    Full Text Available Systematic Luby Transform (fountain) codes are investigated as a possible incremental redundancy scheme for EDGE. The convolutional incremental redundancy scheme currently used by EDGE is replaced by the fountain approach. The results...
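
    The incremental-redundancy use of a systematic fountain code can be sketched as a symbol stream: the source block is sent first, after which the transmitter can keep producing parity symbols, each the XOR of a random subset of source symbols, until the receiver acknowledges decoding. The uniform degree choice below is a simplification (a real LT code draws degrees from a robust soliton distribution), and the peeling decoder is omitted.

      import random

      def systematic_lt_stream(source, n_extra, seed=0):
          """Yield the source symbols followed by n_extra fountain-coded redundancy symbols."""
          rng = random.Random(seed)
          k = len(source)
          for s in source:                          # systematic part: the source block itself
              yield s
          for _ in range(n_extra):                  # incremental redundancy, generated on demand
              degree = min(rng.choice([1, 2, 3, 4]), k)
              combo = 0
              for i in rng.sample(range(k), degree):
                  combo ^= source[i]
              yield combo

      data = [0x12, 0x34, 0x56, 0x78]
      print(list(systematic_lt_stream(data, n_extra=4)))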

  7. Evaluation of Flood Routing Techniques for Incremental Damage Assessment

    OpenAIRE

    Jayyousi, Enan Fakhri

    1994-01-01

    Incremental damage assessment is a tool used to assess the justification for expensive modifications of inadequate dams. The input data to incremental damage assessment are the output from the breach analysis and flood routing. For this reason, flood routing should be conducted carefully. Distorted results from the flood routing technique or unstable modeling of the problem will distort the results of an incremental damage assessment, because an error in the estimated incremental stage will c...

  8. 48 CFR 3452.232-71 - Incremental funding.

    Science.gov (United States)

    2010-10-01

    48 Federal Acquisition Regulations System, 3452.232-71 Incremental funding. As prescribed in 3452.771, insert the following provision in solicitations: Incremental Funding (AUG 1987) (a) Sufficient funds are not presently available to cover...

  9. IT Supporting Strategy Formulation

    NARCIS (Netherlands)

    Achterbergh, J.M.I.M.

    2005-01-01

    This overview approaches information and communication technology (ICT) for competitive intelligence from the perspective of strategy formulation. It provides an ICT architecture for supporting the knowledge processes producing relevant knowledge for strategy formulation. To determine what this arch

  11. Compressive tracking with incremental multivariate Gaussian distribution

    Science.gov (United States)

    Li, Dongdong; Wen, Gongjian; Zhu, Gao; Zeng, Qiaoling

    2016-09-01

    Various approaches have been proposed for robust visual tracking, among which compressive tracking (CT) yields promising performance. In CT, Haar-like features are efficiently extracted with a very sparse measurement matrix and modeled as an online updated naïve Bayes classifier to account for target appearance change. The naïve Bayes classifier ignores overlap between Haar-like features and assumes that Haar-like features are independently distributed, which leads to drift in complex scenarios. To address this problem, we present an extended CT algorithm, which assumes that all Haar-like features are correlated with each other and have a multivariate Gaussian distribution. The mean vector and covariance matrix of the multivariate normal distribution are incrementally updated with constant computational complexity to adapt to target appearance change. Each frame is associated with a temporal weight to expend less modeling power on old observations. Based on the temporal weight, an update scheme with a changing but convergent learning rate is derived with a rigorous mathematical proof. Compared with CT, our extended algorithm achieves a richer representation of target appearance. The incremental multivariate Gaussian distribution is integrated into the particle filter framework to achieve better tracking performance. Extensive experiments on the CVPR2013 tracking benchmark demonstrate that our proposed tracker achieves superior performance both qualitatively and quantitatively over several state-of-the-art trackers.
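
    The constant-complexity update of the mean vector and covariance matrix can be sketched with an exponential-forgetting rule; the fixed learning rate below stands in for the temporal weighting and convergent learning-rate scheme of the paper, whose derivation is not reproduced.

      import numpy as np

      class IncrementalGaussian:
          """Incrementally updated multivariate Gaussian appearance model (sketch)."""

          def __init__(self, dim, lam=0.05, eps=1e-6):
              self.mean = np.zeros(dim)
              self.cov = np.eye(dim)
              self.lam = lam                        # temporal weight: how fast old frames are forgotten
              self.eps = eps

          def update(self, x):
              d = x - self.mean
              self.mean += self.lam * d
              # Rank-one update with forgetting; eps keeps the covariance well conditioned.
              self.cov = (1.0 - self.lam) * self.cov + self.lam * np.outer(d, d) + self.eps * np.eye(len(x))

          def log_likelihood(self, x):
              d = x - self.mean
              _, logdet = np.linalg.slogdet(self.cov)
              return -0.5 * (d @ np.linalg.solve(self.cov, d) + logdet + len(x) * np.log(2.0 * np.pi))

      # Feed a stream of (synthetic) Haar-like feature vectors and query the model.
      model = IncrementalGaussian(dim=3)
      rng = np.random.default_rng(0)
      for _ in range(200):
          model.update(rng.normal(loc=1.0, scale=0.5, size=3))
      print(model.mean, model.log_likelihood(np.ones(3)))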

  12. ENERGY SYSTEM CONTRIBUTIONS DURING INCREMENTAL EXERCISE TEST

    Directory of Open Access Journals (Sweden)

    Rômulo Bertuzzi

    2013-09-01

    Full Text Available The main purpose of this study was to determine the relative contributions of the aerobic and glycolytic systems during an incremental exercise test (IET). Ten male recreational long-distance runners performed an IET consisting of three-minute incremental stages on a treadmill. The fractions of the contributions of the aerobic and glycolytic systems were calculated for each stage based on the oxygen uptake and the oxygen energy equivalents derived from blood lactate accumulation, respectively. Total metabolic demand (WTOTAL) was considered as the sum of these two energy systems. The aerobic (WAER) and glycolytic (WGLYCOL) system contributions were expressed as a percentage of WTOTAL. The results indicated that WAER (86-95%) was significantly higher than WGLYCOL (5-14%) throughout the IET (p < 0.05). In addition, there was no evidence of the sudden increase in WGLYCOL that has been previously reported to support the "anaerobic threshold" concept. These data suggest that aerobic metabolism is predominant throughout the IET and that the energy system contributions undergo a slow transition from low to high intensity.
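
    The stage-by-stage bookkeeping can be sketched as below, where the glycolytic term uses the commonly assumed equivalent of 3 ml of oxygen per kg of body mass per mmol/L of accumulated blood lactate; treating that equivalent as exact, and the numbers in the example, are assumptions.

      def energy_contributions(vo2_stage_litres, delta_lactate_mmol, body_mass_kg,
                               o2_lactate_equiv=3.0):
          """Fractional aerobic vs. glycolytic contribution for one incremental stage (sketch)."""
          w_aer = vo2_stage_litres                                                    # litres of O2
          w_glycol = delta_lactate_mmol * o2_lactate_equiv * body_mass_kg / 1000.0    # O2 equivalent
          w_total = w_aer + w_glycol
          return w_aer / w_total, w_glycol / w_total

      # Example: one 3-min stage, 9 L of O2 consumed, lactate rose by 1.2 mmol/L, 70 kg runner.
      print(energy_contributions(9.0, 1.2, 70.0))                  # the aerobic fraction dominates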

  13. Nonparametric regression with martingale increment errors

    CERN Document Server

    Delattre, Sylvain

    2010-01-01

    We consider the problem of adaptive estimation of the regression function in a framework where we replace ergodicity assumptions (such as independence or mixing) by another structural assumption on the model. Namely, we propose adaptive upper bounds for kernel estimators with data-driven bandwidth (Lepski's selection rule) in a regression model where the noise is an increment of martingale. It includes, as very particular cases, the usual i.i.d. regression and auto-regressive models. The cornerstone tool for this study is a new result for self-normalized martingales, called ``stability'', which is of independent interest. In a first part, we only use the martingale increment structure of the noise. We give an adaptive upper bound using a random rate, that involves the occupation time near the estimation point. Thanks to this approach, the theoretical study of the statistical procedure is disconnected from usual ergodicity properties like mixing. Then, in a second part, we make a link with the usual minimax th...

  14. Approach for Estimating Exposures and Incremental Health ...

    Science.gov (United States)

    Approach for Estimating Exposures and Incremental Health Effects from Lead During Renovation, Repair, and Painting Activities in Public and Commercial Buildings” (Technical Approach Document). Also available for public review and comment are two supplementary documents: the detailed appendices for the Technical Approach Document and a supplementary report entitled “Developing a Concentration-Response Function for Pb Exposure and Cardiovascular Disease-Related Mortality.” Together, these documents describes an analysis for estimating exposures and incremental health effects created by renovations of public and commercial buildings (P&CBs). This analysis could be used to identify and evaluate hazards from renovation, repair, and painting activities in P&CBs. A general overview of how this analysis can be used to inform EPA’s hazard finding is described in the Framework document that was previously made available for public comment (79 FR 31072; FRL9910-44). The analysis can be used in any proposed rulemaking to estimate the reduction in deleterious health effects that would result from any proposed regulatory requirements to mitigate exposure from P&CB renovation activities. The Technical Approach Document describes in detail how the analyses under this approach have been performed and presents the results – expected changes in blood lead levels and health effects due to lead exposure from renovation activities.

  15. Incremental Nonnegative Matrix Factorization for Face Recognition

    Directory of Open Access Journals (Sweden)

    Wen-Sheng Chen

    2008-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is high for large matrix decompositions. The other is that repetitive learning must be conducted whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
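
    A reduced sketch of the incremental use of NMF is given below: a basis W is learned once with the standard multiplicative updates, and newly arriving samples are then encoded against the fixed basis instead of re-running the full factorization. The published INMF additionally updates the basis blockwise under an extra constraint criterion, which is not reproduced here.

      import numpy as np

      def nmf(V, r, iters=200, eps=1e-9, seed=0):
          """Standard multiplicative-update NMF: V (m x n) is approximated by W (m x r) @ H (r x n)."""
          rng = np.random.default_rng(seed)
          m, n = V.shape
          W = rng.random((m, r)) + eps
          H = rng.random((r, n)) + eps
          for _ in range(iters):
              H *= (W.T @ V) / (W.T @ W @ H + eps)
              W *= (V @ H.T) / (W @ H @ H.T + eps)
          return W, H

      def encode_new_samples(W, V_new, iters=200, eps=1e-9, seed=0):
          """Coefficients for new samples against a fixed basis W (one common incremental strategy)."""
          rng = np.random.default_rng(seed)
          H_new = rng.random((W.shape[1], V_new.shape[1])) + eps
          for _ in range(iters):
              H_new *= (W.T @ V_new) / (W.T @ W @ H_new + eps)
          return H_new

      rng = np.random.default_rng(1)
      V = rng.random((100, 40))                             # e.g. 40 vectorised face images
      W, H = nmf(V, r=10)
      H_new = encode_new_samples(W, rng.random((100, 5)))   # 5 new faces, no full re-training
      print(H.shape, H_new.shape)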

  16. Incremental learning for automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Benz, Zachary O.; Basilico, Justin Derrick; Davis, Warren Leon,; Dixon, Kevin R.; Jones, Brian S.; Martin, Nathaniel; Wendt, Jeremy Daniel

    2013-12-01

    People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, require models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effect. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and only secondarily, if at all, concerned with issues such as speed, memory use, or ability to be incrementally updated. Thus, when new data arrives, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill suited for use in dynamic, timecritical, high-consequence decision making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.

  17. Torsion formulation of gravity

    Energy Technology Data Exchange (ETDEWEB)

    Lledo, M A; Sommovigo, L, E-mail: Maria.Lledo@ific.uv.e, E-mail: Luca.Sommovigo@mfn.unipmn.i [Departament de Fisica Teorica, Universitat de Valencia, and IFIC (Centro mixto CSIC-UVEG) C/Dr Moliner, 50, E-46100 Burjassot (Valencia) (Spain)

    2010-03-21

    We explain precisely what it means to have a connection with torsion as a solution of the Einstein equations. While locally the theory remains the same, the new formulation allows for topologies that would have been excluded in the standard formulation of gravity. In this formulation it is possible to couple arbitrary torsion to gauge fields without breaking the gauge invariance.

  18. Numerical Studies and Equipment Development for Single Point Incremental Forming

    Science.gov (United States)

    Marabuto, S. R.; Sena, J. I. V.; Afonso, D.; Martins, M. A. B. E.; Coelho, R. M.; Ferreira, J. A. F.; Valente, R. A. F.; de Sousa, R. J. Alves

    2011-05-01

    This paper summarizes the achievements obtained so far in the context of a research project carried out at the University of Aveiro, Portugal, from both numerical and experimental viewpoints concerning Single Point Incremental Forming (SPIF). On the experimental side, the general guidelines on the development of a new SPIF machine are detailed. The innovative features relate to the choice of a six-degrees-of-freedom, parallel kinematics machine with a high payload, to broaden the range of materials that can be tested and to allow for higher flexibility in tool-path generation. On the numerical side, preliminary results on the simulation of SPIF processes resorting to an innovative solid-shell finite element are presented. The final target is an accurate and fast simulation of SPIF processes by means of numerical methods. Accuracy is obtained through the use of a finite element accounting for three-dimensional stress and strain fields. The developed formulation allows for an unlimited number of integration points through its thickness direction, which promotes accuracy without loss of CPU efficiency. Preliminary results and designs are shown and discussions of the obtained solutions are provided in order to further improve the research framework.

  19. Orbit determination using incremental phase and TDOA of X-ray pulsar

    Institute of Scientific and Technical Information of China (English)

    Rong JIAO; Lu-ping XU‡; Hua ZHANG; Cong LI

    2016-01-01

    X-ray pulsars offer stable, periodic X-ray pulse sequences that can be used in spacecraft positioning systems. A method using X-ray pulsars to determine the initial orbit of a satellite is presented in this paper. This method requires only one detector to be carried on the satellite and assumes that the detector observes three pulsars in turn. To improve the performance, the use of the incremental phase over one observation duration is proposed, and the incremental phase is combined with the time difference of arrival (TDOA). Then, a weighted least squares (WLS) algorithm is formulated to calculate the initial orbit. Numerical simulations are performed to assess the proposed orbit determination method.
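
    The WLS step itself is standard, as sketched below; the design matrix, measurement vector and weight matrix in the example are synthetic placeholders for the linearised pulsar measurement model, the stacked TDOA and incremental-phase observations, and the inverse measurement covariance.

      import numpy as np

      def weighted_least_squares(A, b, W):
          """Solve min_x (Ax - b)^T W (Ax - b), i.e. x = (A^T W A)^{-1} A^T W b."""
          AtW = A.T @ W
          return np.linalg.solve(AtW @ A, AtW @ b)

      rng = np.random.default_rng(0)
      A = rng.standard_normal((30, 6))                 # 6 orbit-state corrections, 30 measurements
      x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0, -1.0])
      sigma = 0.1
      b = A @ x_true + sigma * rng.standard_normal(30)
      W = np.eye(30) / sigma**2                        # inverse measurement covariance
      print(weighted_least_squares(A, b, W))           # close to x_true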

  20. Electron-beam initiated polymerization of acrylate compositions 1 : FTIR monitoring of incremental irradiation

    CERN Document Server

    Patacz, C; Coqueret, X

    2000-01-01

    The electron-beam induced polymerization of some representative formulations including acrylate functional oligomers and diluents has been investigated by means of FTIR spectroscopy applied to films that were cured under a nitrogen flow. In order to gain a deeper insight into the reactivity of the polymerizable systems, the conversion-dose relationship was examined with emphasis on the following points : depth cure profile of the films, and the additivity of effects of incremental radiation doses on monomer conversion. It was shown to be possible to reproduce the actual polymerization profile from discontinuous measurements. This remarkable result is tentatively explained by the geometry of the samples causing limited thermal effects and by the minor influence of possible inhibition and post-polymerization that could influence each of the incremental transformations compared to a single large dose treatment. This method provides a fine tool for revealing differences in kinetic behavior between polymerizable m...

  1. A simple extension of contraction theory to study incremental stability properties

    DEFF Research Database (Denmark)

    Jouffroy, Jerome

    Contraction theory is a recent tool enabling the study of the stability of nonlinear system trajectories with respect to one another, and therefore belongs to the class of incremental stability methods. In this paper, we extend the original definition of contraction theory to incorporate in an explicit manner the control input of the considered system. Such an extension, called universal contraction, is quite analogous in spirit to the well-known Input-to-State Stability (ISS). It serves as a simple formulation of incremental ISS, external stability, and detectability in a differential setting. The hierarchical combination result of contraction theory is restated in this framework, and a differential small-gain theorem is derived from results already available in Lyapunov theory.

  2. Internal friction between fluid particles of MHD tangent hyperbolic fluid with heat generation: Using coefficients improved by Cash and Karp

    Science.gov (United States)

    Salahuddin, T.; Khan, Imad; Malik, M. Y.; Khan, Mair; Hussain, Arif; Awais, Muhammad

    2017-05-01

    The present work examines the internal resistance between fluid particles of tangent hyperbolic fluid flow due to a non-linear stretching sheet with heat generation. Using similarity transformations, the governing system of partial differential equations is transformed into a coupled non-linear ordinary differential system with variable coefficients. Unlike current analytical works on such flow problems in the literature, the main concern here is to compute the solution numerically using the Runge-Kutta-Fehlberg method with coefficients improved by Cash and Karp (Naseer et al., Alexandria Eng. J. 53, 747 (2014)). To capture the relevant physical features of the numerous mechanisms acting on the considered problem, the velocity profile and temperature field, together with the drag force and heat transfer rate, are reported in the current paper.
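
    For readers unfamiliar with the Cash-Karp variant of Runge-Kutta-Fehlberg mentioned above, the sketch below implements a single embedded step with the standard Cash-Karp tableau and returns the fifth-order solution together with an error estimate. It is a generic integrator step on a toy equation, not the solver configuration used in the paper.

      import numpy as np

      # Standard Cash-Karp coefficients (embedded 4(5) Runge-Kutta pair).
      A = [[], [1/5],
           [3/40, 9/40],
           [3/10, -9/10, 6/5],
           [-11/54, 5/2, -70/27, 35/27],
           [1631/55296, 175/512, 575/13824, 44275/110592, 253/4096]]
      C = [0, 1/5, 3/10, 3/5, 1, 7/8]
      B5 = [37/378, 0, 250/621, 125/594, 0, 512/1771]                  # 5th-order weights
      B4 = [2825/27648, 0, 18575/48384, 13525/55296, 277/14336, 1/4]   # 4th-order weights

      def cash_karp_step(f, t, y, h):
          """One embedded Cash-Karp step: returns (y5, error_estimate)."""
          k = []
          for i in range(6):
              yi = y + h * sum(a * ki for a, ki in zip(A[i], k))
              k.append(f(t + C[i] * h, yi))
          y5 = y + h * sum(b * ki for b, ki in zip(B5, k))
          y4 = y + h * sum(b * ki for b, ki in zip(B4, k))
          return y5, np.abs(y5 - y4)

      # Toy usage: dy/dt = -y, exact solution exp(-t).
      f = lambda t, y: -y
      y5, err = cash_karp_step(f, 0.0, np.array([1.0]), 0.1)
      print(y5, err)    # ~exp(-0.1), with a tiny error estimate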

  3. Theoretical Analysis of Shear Thinning Hyperbolic Tangent Fluid Model for Blood Flow in Curved Artery with Stenosis

    Directory of Open Access Journals (Sweden)

    Sohail Nadeem

    2016-01-01

    In this paper, we have considered the blood flow in a curved channel with abnormal development of stenosis in an axis-symmetric manner. The constitutive equations for an incompressible and steady non-Newtonian tangent hyperbolic fluid have been modeled under the mild stenosis assumption. A perturbation technique and the homotopy perturbation technique have been used to obtain analytical solutions for the wall shear stress, the resistance impedance to flow, the wall shear stress at the stenosis throat and the velocity profile. The obtained results have been discussed for different tapered arteries, i.e., diverging tapering, converging tapering and non-tapered arteries, with the help of different parameters of interest, and it is found that tapering dominates the curvature of the curved channel.

  4. Incremental Collaborative Filtering Considering Temporal Effects

    CERN Document Server

    Wang, Yongji; Wu, Hu; Wu, Jingzheng

    2012-01-01

    Recommender systems require their recommendation algorithms to be accurate and scalable and to handle very sparse training data that keep changing over time. Inspired by ant colony optimization, we propose a novel collaborative filtering scheme, Ant Collaborative Filtering, that enjoys the favorable characteristics mentioned above. With the mechanism of pheromone transmission between users and items, our method can pinpoint the most relevant users and items even in the face of the sparsity problem. By virtue of the evaporation of existing pheromone, we capture the evolution of user preference over time. Meanwhile, the computational complexity is comparatively small and the incremental update can be done online. We design three experiments on three typical recommender systems, namely movie recommendation, book recommendation and music recommendation, which cover both explicit and implicit rating data. The results show that the proposed algorithm is well suited for real-world recommendation scenarios which have a high...
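
    The pheromone bookkeeping described above can be illustrated with a very small sketch: a user-item pheromone matrix that is evaporated at every update and reinforced by new interactions, so that old preferences fade over time. The evaporation rate and deposit amount below are arbitrary placeholders and not the values used by the authors.

      import numpy as np

      class PheromoneCF:
          """Toy user-item pheromone table with evaporation (not the paper's full scheme)."""
          def __init__(self, n_users, n_items, evaporation=0.1, deposit=1.0):
              self.p = np.zeros((n_users, n_items))
              self.evaporation = evaporation
              self.deposit = deposit

          def update(self, interactions):
              """interactions: iterable of (user, item) pairs observed since the last update."""
              self.p *= (1.0 - self.evaporation)      # existing pheromone evaporates
              for u, i in interactions:
                  self.p[u, i] += self.deposit        # new interactions deposit pheromone

          def recommend(self, user, k=3):
              return np.argsort(-self.p[user])[:k]

      cf = PheromoneCF(n_users=4, n_items=5)
      cf.update([(0, 1), (0, 2), (1, 2)])
      cf.update([(0, 2)])                              # incremental, online update
      print(cf.recommend(0))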

  5. Generalization of Brownian Motion with Autoregressive Increments

    CERN Document Server

    Fendick, Kerry

    2011-01-01

    This paper introduces a generalization of Brownian motion with continuous sample paths and stationary, autoregressive increments. This process, which we call a Brownian ray with drift, is characterized by three parameters quantifying distinct effects of drift, volatility, and autoregressiveness. A Brownian ray with drift, conditioned on its state at the beginning of an interval, is another Brownian ray with drift over the interval, and its expected path over the interval is a ray with a slope that depends on the conditioned state. This paper shows how Brownian rays can be applied in finance for the analysis of queues or inventories and the valuation of options. We model a queue's net input process as a superposition of Brownian rays with drift and derive the transient distribution of the queue length conditional on past queue lengths and on past states of the individual Brownian rays comprising the superposition. The transient distributions of Regulated Brownian Motion and of the Regulated Brownian Bridge are...

  6. Disrupting incrementalism in health care innovation.

    Science.gov (United States)

    Soleimani, Farzad; Zenios, Stefanos

    2011-08-01

    To build enabling innovation frameworks for health care entrepreneurs to better identify, evaluate, and pursue entrepreneurial opportunities. Powerful frameworks have been developed to enable entrepreneurs and investors to identify which opportunity areas are worth pursuing and which start-up ideas have the potential to succeed. These frameworks, however, have not been clearly defined and interpreted for innovations in health care. Having a better understanding of the process of innovation in health care allows physician entrepreneurs to innovate more successfully. A review of academic literature was conducted. Concepts and frameworks related to technology innovation were analyzed. A new set of health care specific frameworks was developed. These frameworks were then applied to innovations in various health care subsectors. Health care entrepreneurs would greatly benefit from distinguishing between incremental and disruptive innovations. The US regulatory and reimbursement systems favor incrementalism, with a greater chance of success for established players. Small companies and individual groups, however, are more likely to thrive if they adopt a disruptive strategy. Disruption in health care occurs through various mechanisms as detailed in this article. While the main mechanism of disruption might vary across different health care subsectors, it is shown that disruptive innovations consistently require a component of contrarian interpretation to guarantee considerable payoff. If health care entrepreneurs choose to adopt an incrementalist approach, they need to build the risk of disruption into their models and also ascertain that they have a very strong intellectual property (IP) position to weather competition from established players. On the contrary, if they choose to pursue disruption in the market, albeit the competition will be less severe, they need to recognize that the regulatory and reimbursement hurdles are going to be very high. Thus, they would benefit

  7. Relaxed incremental variational approach for the modeling of damage-induced stress hysteresis in arterial walls.

    Science.gov (United States)

    Schmidt, Thomas; Balzani, Daniel

    2016-05-01

    In this paper, a three-dimensional relaxed incremental variational damage model is proposed, which enables the description of complex softening hysteresis as observed in supra-physiologically loaded arterial tissues, and which thereby avoids a loss of convexity of the underlying formulation. The proposed model extends the relaxed formulation of Balzani and Ortiz [2012. Relaxed incremental variational formulation for damage at large strains with application to fiber-reinforced materials and materials with truss-like microstructures. Int. J. Numer. Methods Eng. 92, 551-570], such that the typical stress-hysteresis observed in arterial tissues under cyclic loading can be described. This is mainly achieved by constructing a modified one-dimensional model accounting for cyclic loading in the individual fiber direction and numerically homogenizing the response taking into account a fiber orientation distribution function. A new solution strategy for the identification of the convexified stress potential is proposed based on an evolutionary algorithm which leads to an improved robustness compared to solely Newton-based optimization schemes. In order to enable an efficient adjustment of the new model to experimentally observed softening hysteresis, an adjustment scheme using a surrogate model is proposed. Therewith, the relaxed formulation is adjusted to experimental data in the supra-physiological domain of the media and adventitia of a human carotid artery. The performance of the model is then demonstrated in a finite element example of an overstretched artery. Although here three-dimensional thick-walled atherosclerotic arteries are considered, it is emphasized that the formulation can also directly be applied to thin-walled simulations of arteries using shell elements or other fiber-reinforced biomembranes.

  8. THIRD ORDER SHEAR DEFORMATION MODEL FOR LAMINATED SHELLS WITH FINITE ROTATIONS: FORMULATION AND CONSISTENT LINEARIZATION

    Institute of Scientific and Technical Information of China (English)

    Mohamed BALAH; Hamdan Naser AL-GHAMEDY

    2004-01-01

    The paper presents an approach for the formulation of general laminated shells based on a third order shear deformation theory. These shells undergo finite (unlimited in size) rotations and large overall motions but with small strains. A singularity-free parametrization of the rotation field is adopted. The constitutive equations, derived with respect to laminate curvilinear coordinates, are applicable to shell elements with an arbitrary number of orthotropic layers and where the material principal axes can vary from layer to layer. A careful consideration of the consistent linearization procedure pertinent to the proposed parametrization of finite rotations leads to symmetric tangent stiffness matrices. The matrix formulation adopted here makes it possible to implement the present formulation within the framework of the finite element method as a straightforward task.

  9. Relation between incremental lines and tensile strength of coronal dentin.

    Science.gov (United States)

    Inoue, Toshiko; Saito, Makoto; Yamamoto, Masato; Nishimura, Fumio; Miyazaki, Takashi

    2012-01-01

    In one aspect, this study examined the tensile strength of coronal dentin, as a function of the location of incremental lines, in two types of teeth: human molar versus bovine incisor. In another aspect, tensile strength in coronal dentin was examined with tensile loading in two different orientations to the incremental lines: parallel versus perpendicular. There were four experimental groups in this study: HPa, human molar dentin with tensile orientation parallel to the incremental lines; HPe, human molar dentin with tensile orientation perpendicular to the incremental lines; BPa, bovine incisor dentin with tensile orientation parallel to the incremental lines; BPe, bovine incisor dentin with tensile orientation perpendicular to the incremental lines. Tensile strengths of the parallel groups (HPa and BPa) were significantly higher (p < 0.05) than those of the perpendicular groups. An anisotropy effect, related to the orientation of the incremental lines, was thus confirmed in coronal dentin. However, there were no differences in the anisotropy effect between the two tooth types.

  10. Explosive Formulation Pilot Plant

    Data.gov (United States)

    Federal Laboratory Consortium — The Pilot Plant for Explosive Formulation supports the development of new explosives that are comprised of several components. This system is particularly beneficial...

  11. Impact of Incremental Sampling Methodology (ISM) on Metals Bioavailability

    Science.gov (United States)

    2016-05-01

    ERDC TR-16-4: Impact of Incremental Sampling Methodology (ISM) on Metals Bioavailability. Jay Clausen, Laura Levitt, Timothy Cary, Nancy Parker, and Sam Beal, U.S. Army Engineer Research and Development Center. This study assessed the impact of the incremental sampling methodology (ISM) on metals bioavailability.

  12. Review of Incremental Forming of Sheet Metal Components

    Directory of Open Access Journals (Sweden)

    Nimbalkar D.H

    2013-09-01

    Incremental sheet forming has demonstrated great potential to form complex three-dimensional parts without using component-specific tooling. The die-less nature of incremental forming provides a competitive alternative for economically and effectively fabricating low-volume functional sheet products. The process locally deforms sheet metal using a moving tool head, achieving higher forming limits than conventional sheet metal stamping processes. Incremental sheet metal forming has the potential to revolutionize sheet metal forming, making it accessible to all levels of manufacturing. This paper describes the current state of the art of incremental sheet metal forming.

  13. iBOA: The Incremental Bayesian Optimization Algorithm

    CERN Document Server

    Pelikan, Martin; Goldberg, David E

    2008-01-01

    This paper proposes the incremental Bayesian optimization algorithm (iBOA), which modifies standard BOA by removing the population of solutions and using incremental updates of the Bayesian network. iBOA is shown to be able to learn and exploit unrestricted Bayesian networks using incremental techniques for updating both the structure as well as the parameters of the probabilistic model. This represents an important step toward the design of competent incremental estimation of distribution algorithms that can solve difficult nearly decomposable problems scalably and reliably.

  14. Incremental Training for SVM-Based Classification with Keyword Adjusting

    Institute of Scientific and Technical Information of China (English)

    SUN Jin-wen; YANG Jian-wu; LU Bin; XIAO Jian-guo

    2004-01-01

    This paper analyzes the theory of incremental learning of the SVM (support vector machine) and points out a shortcoming of present research on SVM incremental learning: only the optimization of support vectors is considered. Based on the significance of keywords in training, a new incremental training method with keyword adjusting is proposed, which eliminates the difference between incremental learning and batch learning through the keyword adjusting. The experimental results show that the improved method outperforms the method without keyword adjusting and achieves the same precision as the batch method.
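
    A common baseline for incremental SVM training, against which schemes like the keyword-adjusting method above are compared, is to retrain at each step on the retained support vectors plus the new batch. The sketch below shows that baseline with scikit-learn on synthetic data; the keyword-weighting step of the paper is not reproduced.

      import numpy as np
      from sklearn.svm import SVC

      def incremental_svm(batches, labels, kernel="linear"):
          """Retrain at each step on (previous support vectors) + (new batch)."""
          d = batches[0].shape[1]
          X_keep, y_keep = np.empty((0, d)), np.empty(0)
          clf = None
          for X_new, y_new in zip(batches, labels):
              X_train = np.vstack([X_keep, X_new])
              y_train = np.concatenate([y_keep, y_new])
              clf = SVC(kernel=kernel).fit(X_train, y_train)
              X_keep, y_keep = X_train[clf.support_], y_train[clf.support_]
          return clf

      def make_batch(rng, n=50, d=10):
          X = np.vstack([rng.normal(size=(n, d)) - 1.0, rng.normal(size=(n, d)) + 1.0])
          y = np.concatenate([np.zeros(n), np.ones(n)])
          return X, y

      rng = np.random.default_rng(1)
      data = [make_batch(rng) for _ in range(4)]
      model = incremental_svm([X for X, _ in data], [y for _, y in data])
      print(model.n_support_)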

  15. Functional tolerance theory in incremental growth design

    Institute of Scientific and Technical Information of China (English)

    YANG Bo; YANG Tao; ZE Xiangbo

    2007-01-01

    The evolutionary tolerance design strategy and its characteristics are studied on the basis of automation technology in product structure design. To guarantee a successful transformation from the functional requirement to geometry constraints between parts, and finally to dimension constraints, a functional tolerance design theory for the process of product growth design is put forward. A mathematical model with a correlated sensitivity function between cost and tolerance is created, in which the design cost, the manufacturing cost, the usage cost, and the depreciation cost of the product are regarded as control constraints on the tolerance allocation. Considering these costs, a multifactor cost function expressing the quality loss of the product is applied in the model. In the mathematical model, the minimum cost is used as the objective function; a reasonable process capability index, the assembly function, and assembly quality are taken as the constraints; and the depreciation cost in the objective function is expressed through the discount rate, a term borrowed from economics. Thus, allocation of the dimension tolerance against function and cost over the whole lifetime of the product is realized. Finally, a design example is used to demonstrate the successful application of the proposed functional tolerance theory in the incremental growth design of a product.

  16. A hybrid incremental projection method for thermal-hydraulics applications

    Science.gov (United States)

    Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong

    2016-07-01

    A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.

  17. Formulations in first encounters

    NARCIS (Netherlands)

    A. Hak (Tony); F. de Boer (Fijgje)

    1994-01-01

    The paper describes and compares the use and function of the formulation--decision pair in three types of diagnostic interviewing. The investigatory type of interviewing, which typically occurs in the medical interview, is characterized by the absence of formulations. In the explora

  18. 26 CFR 1.41-8 - Alternative incremental credit.

    Science.gov (United States)

    2010-04-01

    26 CFR 1.41-8 (revised as of 2010-04-01): Alternative incremental credit. Credits Against Tax, § 1.41-8 Alternative incremental credit. (a) Determination of credit. At the election of the taxpayer, the credit determined under section 41(a)(1) equals the amount determined under...

  19. Efficient simulation and process mechanics of incremental sheet forming

    NARCIS (Netherlands)

    Hadoush, Ashraf Moh’d Hasan

    2010-01-01

    Single Point Incremental Forming (SPIF) is a displacement controlled process performed on a CNC machine. A clamped blank is incrementally deformed by the movement of a small-sized tool that follows a prescribed lengthy tool path. The strain achieved by the SPIF process is higher than the strain achi

  20. Efficient simulation and process mechanics of incremental sheet forming

    NARCIS (Netherlands)

    Hadoush, A.

    2010-01-01

    Single Point Incremental Forming (SPIF) is a displacement controlled process performed on a CNC machine. A clamped blank is incrementally deformed by the movement of a small-sized tool that follows a prescribed lengthy tool path. The strain achieved by the SPIF process is higher than the strain

  1. Model Reduction for Nonlinear Systems by Incremental Balanced Truncation

    NARCIS (Netherlands)

    Besselink, Bart; van de Wouw, Nathan; Scherpen, Jacquelien M. A.; Nijmeijer, Henk

    2014-01-01

    In this paper, the method of incremental balanced truncation is introduced as a tool for model reduction of nonlinear systems. Incremental balanced truncation provides an extension of balanced truncation for linear systems towards the nonlinear case and differs from existing nonlinear balancing tech

  2. Lifetime costs of lung transplantation : Estimation of incremental costs

    NARCIS (Netherlands)

    VanEnckevort, PJ; Koopmanschap, MA; Tenvergert, EM; VanderBij, W; Rutten, FFH

    1997-01-01

    Despite an expanding number of centres which provide lung transplantation, information about the incremental costs of lung transplantation is scarce. From 1991 until 1995, in The Netherlands a technology assessment was performed which provided information about the incremental costs of lung transpla

  4. Creating Helical Tool Paths for Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Skjødt, Martin; Hancock, Michael H.; Bay, Niels

    2007-01-01

    Single point incremental forming (SPIF) is a relatively new sheet forming process. A sheet is clamped in a rig and formed incrementally using a rotating single point tool in the form of a rod with a spherical end. The process is often performed on a CNC milling machine and the tool movement...

  6. Incrementality in naming and reading complex numerals : Evidence from eyetracking

    NARCIS (Netherlands)

    Korvorst, M.H.W.; Roelofs, A.P.A.; Levelt, W.J.M.

    2006-01-01

    Individuals speak incrementally when they interleave planning and articulation. Eyetracking, along with the measurement of speech onset latencies, can be used to gain more insight into the degree of incrementality adopted by speakers. In the current article, two eyetracking experiments are reported

  7. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    Science.gov (United States)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-04-01

    Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running state. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information from both labeled and unlabeled state samples, so that the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve the running state identification. The effectiveness of the proposed method is verified by a running state identification case for a gearbox, and the results confirm the improved accuracy of the running state identification.

  8. Effects of transverse magnetic field with variable thermal conductivity on tangent hyperbolic fluid with exponentially varying viscosity

    Directory of Open Access Journals (Sweden)

    T. Salahuddin

    2015-12-01

    The purpose of the present analysis is to examine the effects of temperature-dependent viscosity and thermal conductivity on MHD stagnation point flow over a stretching cylinder. The momentum and temperature equations are modeled using the tangent hyperbolic fluid, and the effect of viscous dissipation is also considered. The governing partial differential equations are transformed into ordinary differential equations by using similarity transformations, and the resulting ordinary differential equations are solved by using a shooting method. The physical behavior of the non-dimensional parameters for the momentum and temperature profiles is discussed through graphs. The numerical values of the skin friction coefficient and local Nusselt number are calculated in order to characterize the behavior of the fluid near the surface. A comparison with previous literature is carried out in order to check the accuracy of the present work. It is found that the velocity reduces with increasing power law index, Weissenberg number, Hartmann number and variable viscosity parameter. With increasing values of the curvature parameter, the velocity is found to increase. The variable thermal conductivity parameter and the Prandtl number show opposite behaviors for the temperature profile.
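
    Since the paper solves its similarity equations with a shooting method, the sketch below illustrates the technique on the classical Blasius boundary layer equation f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f'(infinity) = 1, which has the same structure of a boundary-value problem converted into initial-value problems. This is a generic illustration, not the tangent hyperbolic model of the paper.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      def blasius_rhs(eta, y):
          # y = [f, f', f'']
          f, fp, fpp = y
          return [fp, fpp, -0.5 * f * fpp]

      def shooting_residual(guess, eta_max=10.0):
          """Integrate with f''(0) = guess and return f'(eta_max) - 1."""
          sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, guess],
                          rtol=1e-8, atol=1e-10)
          return sol.y[1, -1] - 1.0

      # Find the f''(0) that satisfies the far-field condition f'(infinity) = 1.
      fpp0 = brentq(shooting_residual, 0.1, 1.0)
      print(f"f''(0) = {fpp0:.4f}")   # approximately 0.332 for the Blasius problem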

  9. Improvement of dielectric tunability and loss tangent of (Ba,Sr)TiO3 thin films with K doping

    Institute of Scientific and Technical Information of China (English)

    Zhang Wei-Jie; Dai Jian-Ming; Zhu Xue-Bin; Chang Qing; Liu Qiang-Chun; Sun Yu-Ping

    2012-01-01

    Ba0.6Sr0.4TiO3 thin films doped with K were deposited on Pt/Ti/SiO2/Si substrates by the chemical solution deposition method. The structure, surface morphology, and the dielectric and tunable properties of the Ba0.6Sr0.4TiO3 thin films have been studied in detail. The K content in Ba0.6Sr0.4TiO3 thin films has a strong influence on the material's properties, including the surface morphology and the dielectric and tunable properties. It was found that the Curie temperature of K-doped Ba0.6Sr0.4TiO3 films shifts to a higher value compared with that of undoped Ba0.6Sr0.4TiO3 thin films, which leads to a dielectric enhancement of K-doped Ba0.6Sr0.4TiO3 films at room temperature. At the optimized content of 0.02 mol, the dielectric loss tangent is reduced significantly from 0.057 to 0.020. Meanwhile, the tunability is enhanced from 26% to 48% at the measured frequency of 1 MHz, and the maximum value of the figure of merit is 23.8. This suggests that such films have potential applications in tunable devices.

  10. HUNTING PHENOMENON STUDY OF RAILWAY CONVENTIONAL TRUCK ON TANGENT TRACKS DUE TO CHANGE IN RAIL WHEEL GEOMETRY

    Directory of Open Access Journals (Sweden)

    KARIM H. ALI ABOOD

    2011-04-01

    A mathematical dynamic model of a conventional railway truck is presented with 12 degrees-of-freedom equations of motion. The dynamic system consists of a conventional truck attached to two single wheelsets, each equipped with lateral, longitudinal and vertical linear stiffness and damping in the primary and secondary suspensions. The model governs the lateral displacement, vertical displacement, roll and yaw angles of each wheelset and the lateral displacement, vertical displacement, roll and yaw angle of the conventional truck. Kalker's linear theory has been adopted to evaluate the creep forces introduced at the rail wheels due to rail-wheel contact. The equations of motion of the railway truck are solved using the fourth-order Runge-Kutta method, which requires the differential equations to be transformed into a set of first-order differential equations. The transformed state-space equations are simulated to represent the dynamic behavior and time solution of the conventional truck moving on tangent tracks. The influences of the geometric parameters of the rail wheel, such as the wheel conicity and the nominal rolling radius, on the dynamic stability of the system are investigated. It is concluded that the geometric parameters of the rail wheel have different effects on the hunting instability and on the change of the critical hunting velocity of the system. In addition, the critical hunting velocity of rail trucks is inversely proportional to the square root of the wheel conicity, while a higher critical hunting velocity is obtained by increasing the nominal rolling radius of the rail wheel.

  11. Free Convection Flow and Heat Transfer of Tangent Hyperbolic past a Vertical Porous Plate with Partial Slip

    Directory of Open Access Journals (Sweden)

    V. Ramachandra Prasad

    2016-01-01

    This article presents the nonlinear free convection boundary layer flow and heat transfer of an incompressible tangent hyperbolic non-Newtonian fluid from a vertical porous plate with velocity slip and thermal jump effects. The transformed conservation equations are solved numerically subject to physically appropriate boundary conditions using a second-order accurate implicit finite-difference Keller Box technique. The numerical code is validated against previous studies. The influence of a number of emerging non-dimensional parameters, namely the Weissenberg number (We), the power law index (n), velocity slip (Sf), thermal jump (ST), Prandtl number (Pr) and the dimensionless tangential coordinate, on velocity and temperature evolution in the boundary layer regime is examined in detail. Furthermore, the effects of these parameters on the surface heat transfer rate and local skin friction are also investigated. Validation against earlier Newtonian studies is presented and excellent correlation is achieved. It is found that velocity, skin friction and heat transfer rate (Nusselt number) increase with increasing Weissenberg number (We), whereas the temperature decreases. Increasing the power law index (n) enhances velocity and heat transfer rate but decreases temperature and skin friction. An increase in thermal jump (ST) is observed to decrease velocity, temperature, local skin friction and Nusselt number. Increasing velocity slip (Sf) is observed to increase velocity and heat transfer rate but to decrease temperature and local skin friction. An increasing Prandtl number (Pr) is found to decrease both velocity and temperature. The study is relevant to chemical materials processing applications.

  12. Efficiency of Oral Incremental Rehearsal versus Written Incremental Rehearsal on Students' Rate, Retention, and Generalization of Spelling Words

    Science.gov (United States)

    Garcia, Dru; Joseph, Laurice M.; Alber-Morgan, Sheila; Konrad, Moira

    2014-01-01

    The purpose of this study was to examine the efficiency of an oral incremental rehearsal procedure versus a written incremental rehearsal procedure on a sample of primary grade children's weekly spelling performance. Participants included five second graders and one first grader who were in need of help with their spelling according to their teachers. An…

  13. Incrementality in Planning of Speech During Speaking and Reading Aloud: Evidence from Eye-Tracking.

    Science.gov (United States)

    Ganushchak, Lesya Y; Chen, Yiya

    2016-01-01

    Speaking is an incremental process where planning and articulation interleave. While incrementality has been studied in reading and online speech production separately, it has not been directly compared within one investigation. This study set out to compare the extent of planning incrementality in online sentence formulation versus reading aloud, and how discourse context may constrain the planning scope of utterance preparation differently in these two modes of speech planning. Two eye-tracking experiments are reported: participants either described pictures of transitive events (Experiment 1) or read aloud the written descriptions of those events (Experiment 2). In both experiments, the information status of an object character was manipulated in the discourse preceding each picture or sentence. In the Literal condition, participants heard a story where the object character was literally mentioned (e.g., fly). In the No Mention condition, stories did not literally mention nor prime the object character depicted on the picture or written in the sentence. The target response was expected to have the same structure and content in all conditions (The frog catches the fly). During naming, the results showed shorter speech onset latencies in the Literal condition than in the No Mention condition. However, no significant differences in gaze durations were found. In contrast, during reading, there were no significant differences in speech onset latencies, but there were significantly longer gaze durations to the target picture/word in the Literal than in the No Mention condition. Our results show that planning is more incremental during reading than during naming and that discourse context can be helpful during speaking but may hinder reading aloud. Taken together, our results suggest that the online planning of a response is affected by both linguistic and non-linguistic factors.

  14. Assessment of strategy formulation

    DEFF Research Database (Denmark)

    Acur, Nuran; Englyst, Linda

    2006-01-01

    Purpose – Today, industrial firms need to cope with competitive challenges related to innovation, dynamic responses, knowledge sharing, etc. by means of effective and dynamic strategy formulation. In light of these challenges, the purpose of the paper is to present and evaluate an assessment tool for strategy formulation processes that ensures high quality in process and outcome. Design/methodology/approach – A literature review was conducted to identify success criteria for strategy formulation processes. Then, a simple questionnaire and assessment tool was developed and used to test the validity... Practical implications – The integration of three different strategy assessment approaches has been made to obtain a holistic, multi-perspective reflection on strategy formulation. Such reflection is assumed to enable managers to proactively evaluate the potential outcome and performance of their chosen...

  15. Stochastic sensitivity analysis of noise-induced order-chaos transitions in discrete-time systems with tangent and crisis bifurcations

    Science.gov (United States)

    Bashkirtseva, Irina; Ryashko, Lev

    2017-02-01

    We study noise-induced order-chaos transitions in discrete-time systems with tangent and crisis bifurcations. To study these transitions parametrically, we suggest a generalized mathematical technique using stochastic sensitivity functions and confidence domains for randomly forced equilibria, cycles, and chaotic attractors. This technique is demonstrated in detail for the simple one-dimensional stochastic system, in which points of crisis and tangent bifurcations are borders of the order window lying between two chaotic parametric zones. A stochastic phenomenon of the extension and shift of this window towards crisis bifurcation point, under increasing noise, is presented and analyzed. Shifts of borders of this order window are found as functions of the noise intensity. By our analytical approach based on stochastic sensitivity functions, we construct a parametric diagram of chaotic and regular regimes for the stochastically forced system.

  16. A Hyperbolic Tangent Adaptive PID + LQR Control Applied to a Step-Down Converter Using Poles Placement Design Implemented in FPGA

    Directory of Open Access Journals (Sweden)

    Marcelo Dias Pedroso

    2013-01-01

    This work presents an adaptive control that integrates two linear control strategies applied to a step-down converter: Proportional Integral Derivative (PID) and Linear Quadratic Regulator (LQR) control. Considering the converter open-loop transfer function and using the pole placement technique, the two controllers are designed so that the closed-loop system presents the same natural frequency at the operating point. With the pole placement design, the overshoot problems of the LQR controller are avoided. To achieve the best performance of each controller, a hyperbolic tangent weight function is applied, whose limits are defined based on the system error range. Simulation results of the proposed control schemes, obtained using the Altera DSP Builder software in a MATLAB/SIMULINK environment, are presented.
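
    The weighting idea above, blending two linear controllers through a hyperbolic tangent of the tracking error, can be sketched in a few lines. The PID and LQR control outputs and all gains below are arbitrary placeholders; only the tanh-based mixing is illustrated, and which controller dominates in which regime is a design choice rather than something stated in the abstract.

      import numpy as np

      def tanh_weight(error, scale=1.0):
          """Smooth weight in [0, 1): close to 1 for large |error|, close to 0 near the setpoint."""
          return np.tanh(abs(error) / scale)

      def blended_control(u_pid, u_lqr, error, scale=1.0):
          """Weight one controller toward large errors and the other toward steady state."""
          w = tanh_weight(error, scale)
          return w * u_pid + (1.0 - w) * u_lqr

      # Toy usage with placeholder controller outputs (here PID dominates at large error).
      for e in (2.0, 0.5, 0.05):
          u = blended_control(u_pid=1.2 * e, u_lqr=0.8 * e, error=e, scale=0.5)
          print(f"error={e:4.2f}  blended u={u:.3f}")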

  17. A three-dimensional nonlinear Timoshenko beam based on the core-congruential formulation

    Science.gov (United States)

    Crivelli, Luis A.; Felippa, Carlos A.

    1992-01-01

    A three-dimensional, geometrically nonlinear two-node Timoshenko beam element based on the total Lagrangian description is derived. The element behavior is assumed to be linear elastic, but no restrictions are placed on the magnitude of finite rotations. The resulting element has twelve degrees of freedom: six translational components and six rotational-vector components. The formulation uses the Green-Lagrange strains and second Piola-Kirchhoff stresses as energy-conjugate variables and accounts for the bending-stretching and bending-torsional coupling effects without special provisions. The core-congruential formulation (CCF) is used to derive the discrete equations in a staged manner. Core equations involving the internal force vector and tangent stiffness matrix are developed at the particle level. A sequence of matrix transformations carries these equations to beam cross-sections and finally to the element nodal degrees of freedom. The choice of finite rotation measure is made in the next-to-last transformation stage, and the choice of over-the-element interpolation in the last one. The tangent stiffness matrix is found to retain symmetry if the rotational vector is chosen to measure finite rotations. An extensive set of numerical examples is presented to test and validate the present element.

  18. Incremental Interpretation Applications, Theory and Relationship to Dynamic Semantics

    CERN Document Server

    Milward, D; Milward, David; Cooper, Robin

    1995-01-01

    Why should computers interpret language incrementally? In recent years psycholinguistic evidence for incremental interpretation has become more and more compelling, suggesting that humans perform semantic interpretation before constituent boundaries, possibly word by word. However, possible computational applications have received less attention. In this paper we consider various potential applications, in particular graphical interaction and dialogue. We then review the theoretical and computational tools available for mapping from fragments of sentences to fully scoped semantic representations. Finally, we tease apart the relationship between dynamic semantics and incremental interpretation.

  19. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA) based on an updated-SVD technique is proposed in this paper. The paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically, and derives the incremental learning procedures for adding a single sample and for adding multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA). At the same time, ITPCA also has lower time and space complexity.
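
    For comparison with the vector-based incremental PCA (IPCA) baseline mentioned above, the sketch below shows how such a baseline is typically run with scikit-learn's IncrementalPCA and partial_fit on mini-batches. The random array stands in for vectorized digit images; the tensor-based ITPCA update itself is not reproduced here.

      import numpy as np
      from sklearn.decomposition import IncrementalPCA

      rng = np.random.default_rng(0)
      # Stand-in for vectorized digit images: 1000 samples of dimension 64 (8x8 pixels).
      X = rng.normal(size=(1000, 64))

      ipca = IncrementalPCA(n_components=16)
      for batch in np.array_split(X, 10):   # feed the data in mini-batches
          ipca.partial_fit(batch)           # incremental update, no full-data pass

      Z = ipca.transform(X)                 # reduced features for a downstream classifier
      print(Z.shape, ipca.explained_variance_ratio_[:3])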

  20. Single point incremental forming of shape memory polymer foam

    Directory of Open Access Journals (Sweden)

    Mohammadi Amirahmad

    2015-01-01

    In this study, single point incremental forming of a shape memory polymer foam is investigated. The shape memory effect makes the foam an attractive material for the fabrication of recoverable components such as reusable dies for low-pressure forming processes. The mechanical properties of the polymer have been investigated by means of tensile testing at elevated temperature. The maximum achievable formability of this material in single point incremental forming has been characterized both at room and at elevated temperature. After an explorative study on simple benchmark cases, heat-assisted single point incremental forming has been used to study the possibility of manufacturing a recoverable die for custom-made orthopaedic shoe insoles.

  1. Formulations and nebulizer performance.

    Science.gov (United States)

    O'Riordan, Thomas G

    2002-11-01

    To deliver a drug by nebulization, the drug must first be dispersed in a liquid (usually aqueous) medium. After application of a dispersing force (either a jet of gas or ultrasonic waves), the drug particles are contained within the aerosol droplets, which are then inhaled. Some drugs readily dissolve in water, whereas others need a cosolvent such as ethanol or propylene glycol. Some drugs are delivered as suspensions, and the efficiency of nebulizers can be different for solutions and suspensions. Solutions are delivered more efficiently with most devices. In general, conventional ultrasonic nebulizers should not be used to aerosolize suspensions, because of low efficiency. Newer strategies to improve the delivery of non-water-soluble drugs include the use of liposomes and the milling of the drug into very small "nanoparticles." In addition to the active therapeutic ingredient and solvents, drug formulations may include buffers (the solubility of some medications is influenced by pH), stabilizers, and, in the case of multi-dose preparations, antibacterial agents. Though formulations are designed to optimize drug solubility and stability, changes in formulation can also affect inhaled mass, particle size, and treatment time, though the differences between nebulizer brands probably have a greater impact than differences in formulation. Ultrasonic and jet nebulizers may damage protein and other complex agents through heat or shear stress. Additives to multi-dose formulations, especially antimicrobial and chelating agents, may cause adverse events, so there is a trend towards single-use, preservative-free vials.

  2. Incremental Sampling Algorithms for Robust Propulsion Control Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Aurora Flight Sciences proposes to develop a system for robust engine control based on incremental sampling, specifically the Rapidly-exploring Random Tree (RRT)...

  3. 48 CFR 3432.771 - Provision for incremental funding.

    Science.gov (United States)

    2010-10-01

    48 CFR 3432.771 - Provision for incremental funding (Federal Acquisition Regulations System, Department of Education Acquisition Regulation, General Contracting Requirements, Contract Financing, Contract Funding). The contracting officer shall insert the provision in...

  4. An incremental clustering algorithm based on Mahalanobis distance

    Science.gov (United States)

    Aik, Lim Eng; Choon, Tan Wee

    2014-12-01

    The classical fuzzy c-means clustering algorithm is not well suited to clustering non-spherical or elliptically distributed datasets. This paper replaces the Euclidean distance of classical fuzzy c-means clustering with the Mahalanobis distance and applies the Mahalanobis distance to incremental learning because of its merits. A Mahalanobis-distance-based fuzzy incremental clustering learning algorithm is proposed. Experimental results show that the algorithm not only remedies the defect of the fuzzy c-means algorithm but also increases training accuracy.
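
    The core substitution described above is easy to illustrate: replace the Euclidean distance in the fuzzy c-means membership update with the Mahalanobis distance under a covariance estimate. The sketch below shows only that distance and membership computation on toy elliptical data, not the full incremental clustering algorithm of the paper.

      import numpy as np

      def mahalanobis(x, center, cov_inv):
          d = x - center
          return float(np.sqrt(d @ cov_inv @ d))

      def fcm_memberships(x, centers, cov_inv, m=2.0):
          """Fuzzy c-means membership of x in each cluster, using Mahalanobis distances."""
          d = np.array([mahalanobis(x, c, cov_inv) for c in centers])
          d = np.maximum(d, 1e-12)                       # avoid division by zero
          ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
          return 1.0 / ratio.sum(axis=1)                 # memberships sum to 1

      rng = np.random.default_rng(0)
      data = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])  # elliptical cloud
      cov_inv = np.linalg.inv(np.cov(data.T))
      centers = np.array([[0.0, 0.0], [4.0, 0.0]])
      print(fcm_memberships(data[0], centers, cov_inv))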

  5. Kinetic Modeling of Incremental Ambulatory Peritoneal Dialysis Exchanges.

    Science.gov (United States)

    Guest, Steven; Leypoldt, John K; Cassin, Michelle; Schreiber, Martin

    2017-01-01

    ♦ BACKGROUND: Incremental peritoneal dialysis (PD), the gradual introduction of dialysate exchanges at less than full-dose therapy, has been infrequently described in clinical reports. One concern with less than full-dose dialysis is whether urea clearance targets are achievable with an incremental regimen. In this report, we used a large database of PD patients, across all membrane transport types, and performed urea kinetic modeling determinations of possible incremental regimens for each membrane type. ♦ METHODS: Using a modified 3-pore model of peritoneal transport, various incremental manual continuous ambulatory PD (CAPD) exchanges employing glucose and/or icodextrin were evaluated. Peritoneal urea clearances from those simulations were added to residual kidney urea clearance for patients with various glomerular filtration rates (GFRs), and the total weekly urea clearance was then compared to the total weekly urea Kt/V target of 1.7. All 4 peritoneal membrane types were modeled. For each simulated prescription, net ultrafiltration and carbohydrate absorption were also calculated. ♦ RESULTS: Incremental CAPD regimens of 2 exchanges a day met adequacy targets if the GFR was 6 mL/min/1.73 m2 in all membrane types. For regimens employing 3 exchanges a day, Kt/V targets were achieved at GFR levels of 4 to 5 mL/min/1.73 m2 across high to low transporters, but higher-tonicity 2.5% glucose solutions or icodextrin were required in some regimens. ♦ CONCLUSIONS: This work demonstrates that with incremental CAPD regimens, urea kinetic targets are achievable in most new starts to PD with residual kidney function. Incremental PD may be a less intrusive, better accepted initial treatment regime and a cost-effective way to initiate chronic dialysis in the incident patient. The key role of intrinsic kidney function in incremental regimens is highlighted in this analysis and warrants conscientious monitoring.
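
    As a rough illustration of the adequacy arithmetic described above, the sketch below adds a residual renal contribution to a peritoneal weekly Kt/V and checks the total against the 1.7 target. The conversion of a residual urea clearance in mL/min to a weekly Kt/V via the urea distribution volume V is standard bookkeeping, but every number here is illustrative and not taken from the modeled regimens.

      # Toy weekly urea Kt/V bookkeeping for an incremental PD prescription.
      # V is estimated crudely as 55% of body weight; all values are placeholders.

      def renal_weekly_ktv(residual_clearance_ml_min, v_liters):
          weekly_liters = residual_clearance_ml_min * 60 * 24 * 7 / 1000.0
          return weekly_liters / v_liters

      def total_weekly_ktv(peritoneal_weekly_ktv, residual_clearance_ml_min, v_liters):
          return peritoneal_weekly_ktv + renal_weekly_ktv(residual_clearance_ml_min, v_liters)

      v = 0.55 * 70.0                      # urea distribution volume for a 70 kg patient, in liters
      ktv = total_weekly_ktv(peritoneal_weekly_ktv=1.0,
                             residual_clearance_ml_min=4.0, v_liters=v)
      print(f"total weekly Kt/V = {ktv:.2f}, target of 1.7 met: {ktv >= 1.7}")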

  6. Incremental dementia-related expenditures in a medicaid population.

    Science.gov (United States)

    Bharmal, Murtuza F; Dedhiya, Seema; Craig, Bruce A; Weiner, Michael; Rosenman, Marc; Sands, Laura P; Modi, Ankita; Doebbeling, Caroline; Thomas, Joseph

    2012-01-01

    With the growing number of older adults, understanding expenditures associated with treating medical conditions that are more prevalent among older adults is increasingly important. The objectives of this research were to estimate incremental medical encounters and incremental Medicaid expenditures associated with dementia among Indiana Medicaid recipients 40 years or older in 2004. A retrospective cohort design analyzing Indiana Medicaid administrative claims files was used. Individuals at least 40 years of age with Indiana Medicaid eligibility during 2004 were included. Patients with dementia were identified via diagnosis codes in claims files between July 2001 and December 2004. Adjusted annual incremental medical encounters and expenditures associated with dementia in 2004 were estimated using negative binomial regression and zero-inflated negative binomial regression models. A total of 18,950 individuals (13%) with dementia were identified from 145,684 who were 40 years or older. The unadjusted mean total annualized Medicaid expenditures for the cohort with dementia ($28,758) were significantly higher than the mean expenditures for the cohort without dementia ($14,609). After adjusting for covariates, Indiana Medicaid incurred annualized incremental expenditures of $9,829 per recipient with dementia. Much of the annual incremental expenditure associated with dementia was driven by the higher number of days in nursing homes and resulting nursing-home expenditures. Drug expenditures accounted for the second largest component of the incremental expenditures. On the basis of disease prevalence and per recipient annualized incremental expenditures, projected incremental annualized Indiana Medicaid spending associated with dementia for persons 40 or more years of age was $186 million. Dementia is associated with significant expenditures among Medicaid recipients. Disease management initiatives designed to reduce nursing-home use among recipients with dementia may

  7. Logistics Modernization Program Increment 2 (LMP Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Logistics Modernization Program Increment 2 (LMP Inc 2), Defense Acquisition Management. Abbreviations: U.S.C. - United States Code; USD(AT&L) - Under Secretary of Defense for Acquisition, Technology, & Logistics. Date assigned: July 17, 2014. Program name: Logistics Modernization Program Increment 2 (LMP Inc 2); DoD Component

  8. Incremental Code Clone Detection and Elimination for Erlang Programs

    OpenAIRE

    Li, Huiqing; Thompson, Simon

    2011-01-01

    A well-known bad code smell in refactoring and software maintenance is the existence of code clones, which are code fragments that are identical or similar to one another. This paper describes an approach to incrementally detecting 'similar' code based on the notion of least-general common abstraction, or anti-unification, as well as a framework for user-controlled incremental elimination of code clones within the context of Erlang programs. The clone detection algorithm proposed in this pape...
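
    The notion of a least-general common abstraction (anti-unification) that the paper builds on can be sketched over nested tuples standing in for small ASTs: identical subterms are kept, disagreeing subterm pairs are replaced by variables, and the same disagreeing pair always maps to the same variable. This is only an illustration of the idea, not the Erlang clone-detection algorithm itself.

      def anti_unify(t1, t2, subst=None):
          """Most specific common generalization of two terms.
          Disagreeing subterm pairs are replaced by variables '?0', '?1', ...;
          the same pair always maps to the same variable (needed for least generality)."""
          if subst is None:
              subst = {}
          if t1 == t2:
              return t1
          if (isinstance(t1, tuple) and isinstance(t2, tuple)
                  and len(t1) == len(t2) and t1[:1] == t2[:1]):
              # same operator and arity: recurse into the arguments
              return (t1[0],) + tuple(anti_unify(a, b, subst)
                                      for a, b in zip(t1[1:], t2[1:]))
          key = (t1, t2)
          if key not in subst:
              subst[key] = f"?{len(subst)}"
          return subst[key]

      # Two clone candidates: f(x) + 1 and f(y) + 1 share the pattern f(?0) + 1.
      expr1 = ("+", ("call", "f", "x"), 1)
      expr2 = ("+", ("call", "f", "y"), 1)
      print(anti_unify(expr1, expr2))   # ('+', ('call', 'f', '?0'), 1)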

  9. Towards an incremental maintenance of cyclic association rules

    CERN Document Server

    Ahmed, Eya ben

    2010-01-01

    Recently, cyclic association rules have been introduced in order to discover rules from items characterized by their regular variation over time. In real-life situations, temporal databases are often appended or updated. Rescanning the whole database every time is highly expensive, while existing incremental mining techniques can efficiently solve such a problem. In this paper, we propose an incremental algorithm for the maintenance of cyclic association rules. The experiments carried out on our proposal stress its efficiency and performance.

  10. Lubrication in tablet formulations.

    Science.gov (United States)

    Wang, Jennifer; Wen, Hong; Desai, Divyakant

    2010-05-01

    Theoretical aspects and practical considerations of lubrication in tablet compression are reviewed in this paper. Properties of the materials that are often used as lubricants, such as magnesium stearate, in tablet dosage form are summarized. The manufacturing process factors that may affect tablet lubrication are discussed. As important as the lubricants in tablet formulations are, their presence can cause some changes to the tablet physical and chemical properties. Furthermore, a detailed review is provided on the methodologies used to characterize lubrication process during tablet compression with relevant process analytical technologies. Finally, the Quality-by-Design considerations for tablet formulation and process development in terms of lubrication are discussed.

  11. Incremental short daily home hemodialysis: a case series.

    Science.gov (United States)

    Toth-Manikowski, Stephanie M; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-07-05

    Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. From 2011 to 2015, we initiated 5 incident hemodialysis patients on an incremental home hemodialysis regimen. The biochemical parameters of all patients remained stable on the incremental hemodialysis regimen and they consistently achieved standard Kt/Vurea targets. Of the two patients with follow-up >6 months, residual kidney function was preserved for ≥2 years. Importantly, the patients were able to transition to home hemodialysis without automatically requiring 5 sessions per week at the outset and gradually increased the number of treatments and/or dialysate volume as the residual kidney function declined. An incremental home hemodialysis regimen can be safely prescribed and may improve acceptability of home hemodialysis. Reducing hemodialysis frequency by even one treatment per week can reduce the number of fistula or graft cannulations or catheter connections by >100 per year, an important consideration for patient well-being, access longevity, and access-related infections. The incremental hemodialysis approach, supported by national guidelines, can be considered for all home hemodialysis patients with residual kidney function.

  12. Effects of Predictor Weighting Methods on Incremental Validity.

    Science.gov (United States)

    Sackett, Paul R; Dahlke, Jeffrey A; Shewach, Oren R; Kuncel, Nathan R

    2017-05-22

    It is common to add an additional predictor to a selection system with the goal of increasing criterion-related validity. Research on the incremental validity of a second predictor is generally based on forming a regression-weighted composite of the predictors. However, in practice predictors are commonly used in ways other than regression-weighted composites, and we examine the robustness of incremental validity findings to other ways of using predictors, namely, unit weighting and multiple hurdles. We show that there are settings in which the incremental value of a second predictor disappears, and can even produce lower validity than the first predictor alone, when these alternatives to regression weighting are used. First, we examine conditions under which unit weighting will negate gain in predictive power attainable via regression weights. Second, we revisit Schmidt and Hunter's (1998) summary of incremental validity of predictors over cognitive ability, evaluating whether the reported incremental value of a second predictor is different when predictors are unit weighted rather than regression weighted. Third, we analyze data reported in the published literature to discern the frequency with which unit weighting might affect conclusions about whether there is value in adding a second predictor to a first. Finally, we shift from unit weighting to multiple hurdle selection, examining conditions under which conclusions about incremental validity differ when regression weighting is replaced by multiple-hurdle selection. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
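
    The contrast between regression-weighted and unit-weighted composites discussed above can be computed directly from a predictor intercorrelation matrix and a vector of criterion validities using standard composite-validity formulas. The numbers below are made up for illustration and are not taken from Schmidt and Hunter (1998) or from this article; they merely show a case where unit weighting erases the incremental gain.

      import numpy as np

      def regression_weighted_R(Rxx, rxy):
          """Multiple correlation of the optimally (regression-) weighted composite."""
          return float(np.sqrt(rxy @ np.linalg.solve(Rxx, rxy)))

      def unit_weighted_r(Rxx, rxy):
          """Validity of the simple sum (unit-weighted) composite."""
          ones = np.ones(len(rxy))
          return float((ones @ rxy) / np.sqrt(ones @ Rxx @ ones))

      # Two predictors: a strong first predictor and a weaker, correlated second one.
      Rxx = np.array([[1.0, 0.5],
                      [0.5, 1.0]])
      rxy = np.array([0.50, 0.30])

      print("first predictor alone:        ", rxy[0])
      print("regression-weighted composite:", round(regression_weighted_R(Rxx, rxy), 3))
      print("unit-weighted composite:      ", round(unit_weighted_r(Rxx, rxy), 3))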

  13. Entity versus incremental theories predict older adults' memory performance.

    Science.gov (United States)

    Plaks, Jason E; Chasteen, Alison L

    2013-12-01

    The authors examined whether older adults' implicit theories regarding the modifiability of memory in particular (Studies 1 and 3) and abilities in general (Study 2) would predict memory performance. In Study 1, individual differences in older adults' endorsement of the "entity theory" (a belief that one's ability is fixed) or "incremental theory" (a belief that one's ability is malleable) of memory were measured using a version of the Implicit Theories Measure (Dweck, 1999). Memory performance was assessed with a free-recall task. Results indicated that the higher the endorsement of the incremental theory, the better the free recall. In Study 2, older and younger adults' theories were measured using a more general version of the Implicit Theories Measure that focused on the modifiability of abilities in general. Again, for older adults, the higher the incremental endorsement, the better the free recall. Moreover, as predicted, implicit theories did not predict younger adults' memory performance. In Study 3, participants read mock news articles reporting evidence in favor of either the entity or incremental theory. Those in the incremental condition outperformed those in the entity condition on reading span and free-recall tasks. These effects were mediated by pretask worry such that, for those in the entity condition, higher worry was associated with lower performance. Taken together, these studies suggest that variation in entity versus incremental endorsement represents a key predictor of older adults' memory performance. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  14. Let's move to spheres! Why a spherical coordinate system is rewarding when analyzing particle increment statistics

    Science.gov (United States)

    Most, Sebastian; Nowak, Wolfgang; Bijeljic, Branko

    2016-04-01

    For understanding non-Fickian transport in porous media, thorough understanding of pore-scale processes is required. When using particle methods as research instruments, we need a detailed understanding of the dependence and memory between subsequent increments in particle motion. We are especially interested in the dependence and memory of the spatial increments (size and direction) at consecutive time steps. Understanding the increment statistics is crucial for the upscaling that always becomes essential for transport simulations at larger scales. Upscaling means averaging over a (representative elementary) volume to save limited computational resources. However, this averaging means a loss of detail and therefore dispersion models should compensate for this loss. Formulating an appropriate dispersion model requires a detailed understanding of the dependencies and memory effects in the transport process. Particle-based simulations for transport in porous media are usually conducted and analyzed in a Cartesian coordinate system. We will show that, for understanding the process physically and representing the process statistically, it is more appropriate to switch to a spherical coordinate system that moves with each particle. Increment statistics in a Cartesian coordinate system usually reveal that a large displacement in the longitudinal direction triggers a large displacement in the transverse direction, as fast flow channels are not perfectly aligned with the Cartesian axis along the main flow direction. We can overcome this inherent link, typical of the Cartesian description, by using the absolute displacements together with the direction of particle movement, where the direction is given by the azimuth and elevation angles. This can be understood as a Lagrangian spherical process description. The root of the dependence of the transport process lies in the complex pore geometry. For some time now, high-resolution micro-CT scans of pore space geometry have become the ...
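
    A minimal sketch of the coordinate switch described above: Cartesian displacement increments are converted into the magnitude, azimuth and elevation of a Lagrangian spherical description. The array names and the random-walk sample data are hypothetical placeholders, not the authors' pore-scale data.

    ```python
    import numpy as np

    def increments_to_spherical(dx, dy, dz):
        """Map Cartesian particle increments to (magnitude, azimuth, elevation).

        azimuth   : angle in the x-y plane, measured from the +x axis
        elevation : angle above the x-y plane
        """
        r = np.sqrt(dx**2 + dy**2 + dz**2)
        azimuth = np.arctan2(dy, dx)
        # guard against zero-length increments when computing the elevation
        elevation = np.arcsin(np.divide(dz, r, out=np.zeros_like(r), where=r > 0))
        return r, azimuth, elevation

    # Hypothetical trajectory: particle positions at consecutive time steps, shape (T, 3).
    positions = np.cumsum(np.random.default_rng(0).normal(size=(1000, 3)), axis=0)
    d = np.diff(positions, axis=0)
    r, az, el = increments_to_spherical(d[:, 0], d[:, 1], d[:, 2])
    ```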

  15. The start of the Sagittarius spiral arm (Sagittarius origin) and the start of the Norma spiral arm (Norma origin) - model-computed and observed arm tangents at galactic longitudes -20 degrees < l < +23 degrees

    CERN Document Server

    Vallee, Jacques P

    2016-01-01

    Here we fitted a 4-arm spiral structure to the more accurate data on global arm pitch angle and arm longitude tangents, to get the start of each spiral arm near the Galactic nucleus. We find that the tangent to the 'start of the Sagittarius' spiral arm (arm middle) is at l = -17 degrees +/- 0.5 degree, while the tangent to the 'start of the Norma' spiral arm (arm middle) is at l = +20 degrees +/- 0.5 degree. Earlier, we published a compilation of observations and analysis of the tangent to each spiral arm tracer, from longitudes +23 degrees to +340 degrees; here we cover the arm tracers in the remaining longitudes +340 degrees (= -20 degrees) to +23 degrees. Our model arm tangents are confirmed by the recently observed maser data (at the arm's inner edge). Observed arm tracers in the inner Galaxy show an offset from the mid-arm; this was also found elsewhere in the Milky Way disk (Vallee 2014c). In addition, we collated the observed tangents to the so-called '3-kpc-arm' features; here they are found statist...
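
    For context, the geometric relation commonly used in such tangent analyses (not quoted from this record) links an arm tangent longitude to galactocentric radius; the value of R_0 below is an assumed round number.

    ```latex
    % A circular feature at galactocentric radius R, viewed from the Sun at
    % radius R_0, is seen tangentially at galactic longitude l with
    \sin |l| \;=\; \frac{R}{R_0},
    % so, taking R_0 \approx 8\,\mathrm{kpc}, a tangent at l = -17^\circ
    % corresponds to R \approx R_0 \sin 17^\circ \approx 2.3\,\mathrm{kpc}.
    ```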

  16. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Directory of Open Access Journals (Sweden)

    Ming Xue

    2014-02-01

    Full Text Available To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also retrained to adapt to target appearance variations in a timely manner. Qualitative and quantitative evaluations on challenging image sequences compared with state-of-the-art algorithms demonstrate that the proposed tracking algorithm achieves more favorable performance. We also illustrate its relay application in visual sensor networks.

  17. Liposomal paclitaxel formulations.

    Science.gov (United States)

    Koudelka, Stěpán; Turánek, Jaroslav

    2012-11-10

    Over the past three decades, taxanes have represented one of the most important new classes of drugs approved in oncology. Paclitaxel (PTX), the prototype of this class, is an anti-cancer drug approved for the treatment of breast and ovarian cancer. However, notwithstanding suitable premedication, present-day chemotherapy employing a commercial preparation of PTX (Taxol®) is associated with serious side effects and hypersensitivity reactions. Liposomes represent advanced and versatile delivery systems for drugs. Generally, both in vivo mouse tumor models and human clinical trials have demonstrated that liposomal PTX formulations significantly increase the maximum tolerated dose (MTD) of PTX compared with Taxol®. Liposomal PTX formulations are in various stages of clinical trials. LEP-ETU (NeoPharm) and EndoTAG®-1 (Medigene) have reached phase II clinical trials; Lipusu® (Luye Pharma Group) has already been commercialized. Present achievements in the preparation of various liposomal formulations of PTX, the development of targeted liposomal PTX systems and the progress in clinical testing of liposomal PTX are discussed in this review, which summarizes about 30 years of liposomal PTX development.

  18. Tangent map intermittency as an approximate analysis of intermittency in a high dimensional fully stochastic dynamical system: The Tangled Nature model

    Science.gov (United States)

    Diaz-Ruelas, Alvaro; Jeldtoft Jensen, Henrik; Piovani, Duccio; Robledo, Alberto

    2016-12-01

    It is well known that low-dimensional nonlinear deterministic maps close to a tangent bifurcation exhibit intermittency, and this circumstance has been exploited, e.g., by Procaccia and Schuster [Phys. Rev. A 28, 1210 (1983)], to develop a general theory of 1/f spectra. This suggests it is interesting to study the extent to which the behavior of a high-dimensional stochastic system can be described by such tangent maps. The Tangled Nature (TaNa) Model of evolutionary ecology is an ideal candidate for such a study, a significant model as it is capable of reproducing a broad range of the phenomenology of macroevolution and ecosystems. The TaNa model exhibits strong intermittency reminiscent of punctuated equilibrium and, like the fossil record of mass extinctions, the intermittency in the model is found to be non-stationary, a feature typical of many complex systems. We derive a mean-field version for the evolution of the likelihood function controlling the reproduction of species and find a local map close to tangency. Through our local approximation, this mean-field map is able to describe qualitatively only one episode of the intermittent dynamics of the full TaNa model. To complement this result, we construct a complete nonlinear dynamical system model consisting of successive tangent bifurcations that generates time evolution patterns resembling those of the full TaNa model on macroscopic scales. The switch from one tangent bifurcation to the next in the sequences produced in this model is stochastic in nature, based on criteria obtained from the local mean-field approximation, and capable of imitating the changing set of types of species and total population in the TaNa model. The model combines full deterministic dynamics with instantaneous parameter random jumps at stochastically drawn times. In spite of the limitations of our approach, which entails a drastic collapse of degrees of freedom, the description of a high-dimensional model system in terms of a low ...
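
    A generic, low-dimensional illustration of intermittency near a tangent bifurcation (not the TaNa mean-field map itself) is the logistic map just below the tangent bifurcation that opens its period-3 window at r = 1 + sqrt(8): long laminar phases shadow the incipient period-3 cycle and are interrupted by chaotic bursts (type-I intermittency). The laminar-phase threshold below is an arbitrary illustrative choice.

    ```python
    import numpy as np

    def logistic_orbit(r, x0=0.5, n=20000):
        """Iterate the logistic map x -> r * x * (1 - x)."""
        x = np.empty(n)
        x[0] = x0
        for i in range(1, n):
            x[i] = r * x[i - 1] * (1.0 - x[i - 1])
        return x

    # Just below the tangent bifurcation of the period-3 window (r_c = 1 + sqrt(8)),
    # the orbit alternates between long laminar stretches and short chaotic bursts.
    r_c = 1.0 + np.sqrt(8.0)
    orbit = logistic_orbit(r_c - 1e-4)

    # Crude laminar-phase detector: while the orbit shadows the nearly-born
    # period-3 cycle, x_{i+3} stays very close to x_i.
    laminar = np.abs(orbit[3:] - orbit[:-3]) < 1e-3
    print("fraction of time in laminar phases:", laminar.mean())
    ```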

  19. Mathematical Infinity and Medium Logic (Ⅰ)——Logical-mathematical Interpretation of Leibniz's Secant and Tangent Lines Problem in Medium Logic

    Institute of Scientific and Technical Information of China (English)

    ZHU Wu-jia; GONG Ning-sheng; DU Guo-ping

    2013-01-01

    From the perspective of potential infinity (poi) and actual infinity (aci), Ref. [4] has confirmed that poi and aci are in 'unmediated opposition' (P, -P) whether in ZFC or not; it has further been proved that the manners in which a variable infinitely approaches its limit also satisfy the law of intermediate exclusion. With these results as theoretical bases, this paper attempts to provide an accurate and strict logical-mathematical interpretation of the incompatibility of Leibniz's secant and tangent lines in the medium logic system from the perspective of logical mathematics.
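
    For reference, the classical secant-to-tangent passage that the record reinterprets in medium logic, written in standard limit form:

    ```latex
    % Slope of the secant through (x_0, f(x_0)) and (x_0 + h, f(x_0 + h)), h \neq 0:
    \frac{f(x_0 + h) - f(x_0)}{h},
    \qquad\text{and the tangent slope as its limit:}\qquad
    f'(x_0) \;=\; \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}.
    ```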

  20. Criterion and Incremental Validity of the Emotion Regulation Questionnaire

    Directory of Open Access Journals (Sweden)

    Christos A. Ioannidis

    2015-03-01

    Full Text Available Although research on emotion regulation (ER) is developing, little attention has been paid to the predictive power of ER strategies beyond established constructs. The present study examined the incremental validity of the Emotion Regulation Questionnaire (ERQ; Gross & John, 2003), which measures cognitive reappraisal and expressive suppression, over and above the Big Five personality traits. It also extended the evidence for the measure’s criterion validity to yet unexamined criteria. A university student sample (N = 203) completed the ERQ, a measure of the Big Five, and relevant cognitive and emotion-laden criteria. Cognitive reappraisal predicted positive affect beyond personality, as well as experiential flexibility and constructive self-assertion beyond personality and affect. Expressive suppression explained incremental variance in negative affect beyond personality and in experiential flexibility beyond personality and general affect. No incremental effects were found for worry, social anxiety, rumination, reflection, and preventing negative emotions. Implications for the construct validity and utility of the ERQ are discussed.

  1. Event-based incremental updating of spatio-temporal database

    Institute of Scientific and Technical Information of China (English)

    周晓光; 陈军; 蒋捷; 朱建军; 李志林

    2004-01-01

    Based on the relationship among geographic events, spatial changes and database operations, a new automatic (semi-automatic) incremental updating approach for spatio-temporal databases (STDB), named event-based incremental updating (E-BIU), is proposed in this paper. First, the relationship among events, spatial changes and database operations is analyzed; then a complete architecture for E-BIU implementation is designed, which includes an event queue, three managers and two sets of rules, and each component is presented in detail. The process of E-BIU for a master STDB is then described. An example of incremental updating of building data is given to illustrate the approach at the end. The result shows that E-BIU is an efficient automatic updating approach for master STDBs.
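
    A schematic sketch of the event-queue idea described in this record; the event names, rule table and database operations below are hypothetical placeholders, not the actual E-BIU rules or managers.

    ```python
    from collections import deque

    # Hypothetical mapping from geographic events to database operations;
    # in E-BIU this role is played by the sets of rules mentioned above.
    EVENT_RULES = {
        "building_demolished": [("delete_feature", "building")],
        "building_built":      [("insert_feature", "building")],
        "building_extended":   [("update_geometry", "building"),
                                ("update_history", "building")],
    }

    class IncrementalUpdater:
        """Consume geographic events from a queue and apply the derived
        operations to a (stub) spatio-temporal database."""

        def __init__(self, database):
            self.queue = deque()
            self.database = database  # any object exposing apply(op, layer, payload)

        def push_event(self, event_type, payload):
            self.queue.append((event_type, payload))

        def process(self):
            while self.queue:
                event_type, payload = self.queue.popleft()
                for op, layer in EVENT_RULES.get(event_type, []):
                    self.database.apply(op, layer, payload)

    class PrintDB:
        def apply(self, op, layer, payload):
            print(op, layer, payload)

    updater = IncrementalUpdater(PrintDB())
    updater.push_event("building_extended", {"id": 42, "new_footprint": None})
    updater.process()
    ```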

  2. Incremental Web Usage Mining Based on Active Ant Colony Clustering

    Institute of Scientific and Technical Information of China (English)

    SHEN Jie; LIN Ying; CHEN Zhimin

    2006-01-01

    To alleviate the scalability problem caused by increasing Web usage and changing user interests, this paper presents a novel Web usage mining algorithm: an incremental Web usage mining algorithm based on Active Ant Colony Clustering. Firstly, an active movement strategy for direction selection and speed, different from the strategy employed by other ant colony clustering algorithms, is proposed to construct an Active Ant Colony Clustering algorithm, which avoids the idle and "flying over the plane" movement phenomena and effectively improves the quality and speed of clustering on large datasets. Then a mechanism for decomposing clusters based on the above method is introduced to form new clusters when users' interests change. Empirical studies on a real Web dataset show that the active ant colony clustering algorithm has better performance than previous algorithms, and that the incremental approach based on the proposed mechanism can efficiently implement incremental Web usage mining.

  3. Martingales, nonstationary increments, and the efficient market hypothesis

    Science.gov (United States)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-06-01

    We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
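
    For reference, the martingale ('fair game') condition discussed in this record, and the uncorrelated-increments property it implies for non-overlapping increments:

    ```latex
    E\!\left[\,x(t+T) \mid x(s),\; s \le t\,\right] \;=\; x(t), \qquad T > 0,
    \qquad\Longrightarrow\qquad
    E\!\left[\,\bigl(x(t_2)-x(t_1)\bigr)\bigl(x(t_1)-x(t_0)\bigr)\,\right] \;=\; 0,
    \qquad t_0 < t_1 < t_2 .
    ```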

  4. Making context explicit for explanation and incremental knowledge acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Brezillon, P. [Univ. Paris (France)]

    1996-12-31

    Intelligent systems may be improved by making context explicit in problem solving. This is a lesson drawn from a study of the reasons why a number of knowledge-based systems (KBSs) failed. We discuss the value of making context explicit in explanation generation and incremental knowledge acquisition, two important aspects of intelligent systems that aim to cooperate with users. We show how context can be used to better explain and incrementally acquire knowledge. The advantages of using context in explanation and incremental knowledge acquisition are discussed through SEPIT, an expert system for supporting diagnosis and explanation through simulation of power plants. We point out how the limitations of such systems may be overcome by making context explicit.

  5. Transient global amnesia: a complication of incremental exercise testing.

    Science.gov (United States)

    Richardson, R S; Leek, B T; Wagner, P D; Kritchevsky, M

    1998-10-01

    Incremental exercise testing is routinely used for diagnosis, rehabilitation, health screening, and research. We report the case of a 71-yr-old patient with chronic obstructive pulmonary disease (COPD) who suffered an episode of transient global amnesia (TGA) several minutes after successfully completing an incremental exercise test on a cycle ergometer. TGA, which is known to be precipitated by physical or emotional stress in about one-third of cases, is a transient neurological disorder in which memory impairment is the prominent deficit. TGA has a benign course and requires no treatment although 24-h observation is recommended. Recognition of TGA as a potential complication of incremental graded exercise testing is important to both aid diagnosis of the amnesia and to spare a patient unnecessary evaluation.

  6. Single-point incremental forming and formability-failure diagrams

    DEFF Research Database (Denmark)

    Silva, M.B.; Skjødt, Martin; Atkins, A.G.

    2008-01-01

    In a recent work [1], the authors constructed a closed-form analytical model that is capable of dealing with the fundamentals of single point incremental forming and explaining the experimental and numerical results published in the literature over the past couple of years. The model is based...... of deformation that are commonly found in general single point incremental forming processes; and (ii) to investigate the formability limits of SPIF in terms of ductile damage mechanics and the question of whether necking does, or does not, precede fracture. Experimentation by the authors together with data...... retrieved from the literature confirms that the proposed theoretical framework is capable of successfully addressing the influence of the major parameters of the single point incremental forming process. It is demonstrated that neck formation is suppressed in SPIF, so that traditional forming limit diagrams...

  7. Criterion and incremental validity of the emotion regulation questionnaire.

    Science.gov (United States)

    Ioannidis, Christos A; Siegling, A B

    2015-01-01

    Although research on emotion regulation (ER) is developing, little attention has been paid to the predictive power of ER strategies beyond established constructs. The present study examined the incremental validity of the Emotion Regulation Questionnaire (ERQ; Gross and John, 2003), which measures cognitive reappraisal and expressive suppression, over and above the Big Five personality factors. It also extended the evidence for the measure's criterion validity to yet unexamined criteria. A university student sample (N = 203) completed the ERQ, a measure of the Big Five, and relevant cognitive and emotion-laden criteria. Cognitive reappraisal predicted positive affect beyond personality, as well as experiential flexibility and constructive self-assertion beyond personality and affect. Expressive suppression explained incremental variance in negative affect beyond personality and in experiential flexibility beyond personality and general affect. No incremental effects were found for worry, social anxiety, rumination, reflection, and preventing negative emotions. Implications for the construct validity and utility of the ERQ are discussed.

  8. INCREMENTAL MICRO-MECHANICAL MODEL OF PLAIN WOVEN FABRIC

    Institute of Scientific and Technical Information of China (English)

    Zhang Yitong; Hao Yongjiang; Li Cuiyu

    2004-01-01

    Warp and weft yarns of plain woven fabric define the principal material axes of the fabric. They are orthogonal in the original configuration but, in general, become obliquely crisscrossed in the deformed configuration. In this paper the expressions of the incremental components of the strain tensor are derived; the non-linear model of woven fabric is linearized physically while its geometric non-linearity is retained. The convenience of determining the total deformation is shown by the choice of the coordinate system of the principal axes of the material, with the convergence of the incremental methods illustrated by examples. This incremental model furnishes a basis for numerical simulations of fabric draping and wrinkling based on the micro-mechanical model of fabric.

  9. Increment entropy as a measure of complexity for time series

    CERN Document Server

    Liu, Xiaofeng; Xu, Ning; Xue, Jianru

    2015-01-01

    Entropy has been a common index to quantify the complexity of time series in a variety of fields. Here, we introduce increment entropy to measure the complexity of time series in which each increment is mapped into a word of two letters, one letter corresponding to direction and the other corresponding to magnitude. The Shannon entropy of the words is termed the increment entropy (IncrEn). Simulations on synthetic data and tests on epileptic EEG signals have demonstrated its ability to detect abrupt changes, whether energetic (e.g., spikes or bursts) or structural. The computation of IncrEn makes no assumptions about the time series, and it is applicable to arbitrary real-world data.

  10. Increment Entropy as a Measure of Complexity for Time Series

    Directory of Open Access Journals (Sweden)

    Xiaofeng Liu

    2016-01-01

    Full Text Available Entropy has been a common index to quantify the complexity of time series in a variety of fields. Here, we introduce an increment entropy to measure the complexity of time series in which each increment is mapped onto a word of two letters, one corresponding to the sign and the other corresponding to the magnitude. Increment entropy (IncrEn) is defined as the Shannon entropy of the words. Simulations on synthetic data and tests on epileptic electroencephalogram (EEG) signals demonstrate its ability to detect abrupt changes, whether energetic (e.g., spikes or bursts) or structural. The computation of IncrEn makes no assumptions about the time series, and it is applicable to arbitrary real-world data.
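
    A minimal sketch of the increment-entropy construction described in the two records above: each increment is coded by its sign and a quantized magnitude, and IncrEn is the Shannon entropy of the resulting two-letter words. The quantization rule (resolution parameter q, scaling by the increment standard deviation) is a plausible choice for illustration, not necessarily the authors' exact definition.

    ```python
    import math
    from collections import Counter

    import numpy as np

    def increment_entropy(x, q=4):
        """Increment entropy (IncrEn) of a 1-D time series.

        Each increment d_i = x[i+1] - x[i] is mapped to a word (s_i, m_i):
          s_i = sign of the increment,
          m_i = quantized magnitude, here min(q, floor(|d_i| / (std(d) / q))).
        IncrEn is the Shannon entropy of the empirical word distribution.
        """
        d = np.diff(np.asarray(x, dtype=float))
        scale = d.std() or 1.0
        signs = np.sign(d).astype(int)
        mags = np.minimum(q, np.floor(np.abs(d) / (scale / q))).astype(int)
        words = Counter(zip(signs, mags))
        n = len(d)
        return -sum((c / n) * math.log2(c / n) for c in words.values())

    # Example: a signal with an abrupt noisy burst yields a higher IncrEn
    # than a smooth sinusoid of the same length.
    t = np.linspace(0, 10, 1000)
    rng = np.random.default_rng(1)
    smooth = np.sin(t)
    bursty = np.sin(t) + np.where((t > 5) & (t < 5.5), rng.normal(0, 2, t.size), 0.0)
    print(increment_entropy(smooth), increment_entropy(bursty))
    ```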

  11. Incremental Hemodialysis: The University of California Irvine Experience.

    Science.gov (United States)

    Ghahremani-Ghajar, Mehrdad; Rojas-Bautista, Vanessa; Lau, Wei-Ling; Pahl, Madeleine; Hernandez, Miguel; Jin, Anna; Reddy, Uttam; Chou, Jason; Obi, Yoshitsugu; Kalantar-Zadeh, Kamyar; Rhee, Connie M

    2017-05-01

    Incremental hemodialysis has been examined as a viable hemodialysis regimen for selected end-stage renal disease (ESRD) patients. Preservation of residual kidney function (RKF) has been the driving impetus for this approach, given its benefits for the survival and quality of life of dialysis patients. While clinical practice guidelines recommend an incremental start of dialysis in peritoneal dialysis patients with substantial RKF, there remains little guidance with respect to incremental hemodialysis as an initial renal replacement therapy regimen. Indeed, several large population-based studies suggest that incremental twice-weekly vs. conventional thrice-weekly hemodialysis has a favorable impact on RKF trajectory and survival among patients with adequate renal urea clearance and/or urine output. In this report, we describe a case series of 13 ambulatory incident ESRD patients enrolled in a university-based center's Incremental Hemodialysis Program over the period of January 2015 to August 2016 and followed through December 2016. Among five patients who maintained a twice-weekly hemodialysis schedule vs. eight patients who transitioned to thrice-weekly hemodialysis, we describe and compare patients' longitudinal case-mix, laboratory, and dialysis treatment characteristics over time. The University of California Irvine Experience is the first systematically examined twice-weekly hemodialysis practice in North America. While future studies are needed to refine the optimal approaches and the ideal patient population for implementation of incremental hemodialysis, our case series serves as a first report of this innovative management strategy among incident ESRD patients with substantial RKF, and a template for implementation of this regimen. © 2017 Wiley Periodicals, Inc.

  12. Estimating incremental costs with skew: a cautionary note.

    Science.gov (United States)

    Polgreen, Linnea A; Brooks, John M

    2012-09-01

    Cost data in healthcare are often skewed across patients. Thus, researchers have used either a log transformation of the dependent variable or generalized linear models with log links. However, frequently these non-linear approaches produce non-linear incremental effects: the incremental effects differ at different levels of the covariates, and this can cause dramatic effects on predicted cost. The aim of this study was to demonstrate that when modelling skewed data, log link functions or log transformations are not necessary and have unintended effects. We simulated cost data using a linear model with a 'treatment', a covariate and a specified number of observations with excessive cost (skewed data). We also used actual data from a pain-relief intervention among hip-replacement patients. We then estimated cost models using various functional approaches suggested to handle skew and calculated the incremental cost of treatment at various levels of the covariate(s). All of these methods provide unbiased estimates of the incremental effect of treatment on costs at the mean level of the covariate. However, in some log-based models the implied incremental treatment cost doubled between extreme low and high values of the covariate in a manner inconsistent with the underlying linear model. Although specification checks are always needed, the potential for misleading incremental estimates resulting from log-based specifications is often ignored. In this era of cost containment and comparisons of treatment effectiveness it is vital that researchers and policymakers understand the limitation of the inferences that can be made using log-based models for patients whose characteristics differ from the sample mean.
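
    The mechanism behind the warning in this record can be stated compactly: under a linear model the incremental (treatment) effect on cost is constant, whereas under a log link (or a log-transformed outcome) it scales with the covariate level.

    ```latex
    E[C \mid T, x] \;=\; \beta_0 + \beta_1 T + \beta_2 x
      \;\;\Rightarrow\;\; \Delta(x) \;=\; \beta_1 ,
    \qquad
    E[C \mid T, x] \;=\; \exp\!\left(\gamma_0 + \gamma_1 T + \gamma_2 x\right)
      \;\;\Rightarrow\;\; \Delta(x) \;=\; e^{\gamma_0 + \gamma_2 x}\left(e^{\gamma_1} - 1\right),
    % so the implied incremental cost varies exponentially with x, which is how
    % log-based specifications can double the estimate between low and high
    % covariate values even when the data-generating model is linear.
    ```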

  13. Incremental Hospital Costs Associated With Comorbidities of Prematurity.

    Science.gov (United States)

    Black, Libby; Hulsey, Thomas; Lee, Kwan; Parks, Daniel C; Ebeling, Myla D

    2015-12-01

    Preterm birth (PTB), defined as birth at a gestational age (GA) of less than 37 weeks, is associated with increased hospital costs. Lower GA at birth is associated with a greater presence of neonatal comorbidities, further increasing costs. This study evaluated incremental costs associated with comorbidities of PTB following spontaneous labor at 24-36 weeks. Birth records from January 2001 to December 2010 at the Medical University of South Carolina were screened to identify infants born at GA 23-37 weeks after uncomplicated singleton pregnancies and surviving to discharge. Comorbidities of interest and incremental costs were analyzed with a partial least squares (PLS) regression model adjusted for comorbidities and GA. Incremental comorbidity-associated costs, as well as total costs, were estimated for infants of GA 24-36 weeks. A total of 4,292 delivery visit records were analyzed. Use of the PLS regression model eliminated issues of multicollinearity and allowed derivation of stable cost estimates. Incremental costs of comorbidities at a mean GA of 34 weeks ranged from $4,529 to $23,121, and exceeded $9,000 in 6 cases. Incremental costs ranged from a high of $41,161 for a GA 24-week infant with a comorbidity of retinopathy of prematurity requiring surgery (ROP4) to $3,683 for a GA 36-week infant with a comorbidity of convulsions. Incremental comorbidity costs are additive, so the costs for infants with multiple comorbidities could easily exceed the high of $41,161 seen with ROP4. The PLS regression model allowed derivation of stable cost estimates from multivariate and highly collinear data and can be used in future cost analyses. Using this data set, predicted costs of all comorbidities, as well as total costs, were negatively correlated with GA at birth.

  14. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors.

    Science.gov (United States)

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual's processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people's moral character is fixed (entity theorists) and individuals who hold the implicit belief that people's moral character is malleable (incremental theorists) make different choices when facing a moral decision. Relative to entity theorists, incremental theorists are less likely to make the fundamental attribution error (FAE), rarely base moral judgments on traits, and show more tolerance of immorality, which might reduce the threat to their self-image when they engage in immoral behaviors; we therefore posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2-4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  15. Lubrication study for Single Point Incremental Forming of Copper

    Science.gov (United States)

    Jawale, Kishore; Ferreira Duarte, José; Reis, Ana; Silva, M. B.

    2016-08-01

    In conventional machining and sheet metal forming processes, lubrication generally helps to increase the quality of the final product. A similar positive effect of lubrication is observed in single point incremental forming, namely on surface roughness. This study is focused on the investigation of the most appropriate lubricant for incremental forming of copper sheet. It involves selecting, from a range of lubricants, the one that provides the best surface finish. The influence of lubrication on other parameters, such as the maximum forming angle, the fracture strains and the deformed profile, is also studied for copper.

  16. A Parsimonious and Universal Description of Turbulent Velocity Increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, O.E.; Blæsild, P.; Schmiegel, J.

    This paper proposes a reformulation and extension of the concept of Extended Self-Similarity. In support of this new hypothesis, we discuss an analysis of the probability density function (pdf) of turbulent velocity increments based on the class of normal inverse Gaussian distributions. It allows...... for a parsimonious description of velocity increments that covers the whole range of amplitudes and all accessible scales from the finest resolution up to the integral scale. The analysis is performed for three different data sets obtained from a wind tunnel experiment, a free-jet experiment and an atmospheric...
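
    For reference, one common parameterization of the normal inverse Gaussian density used in such analyses (not quoted from this record):

    ```latex
    f(x;\alpha,\beta,\mu,\delta) \;=\;
      \frac{\alpha\,\delta\, K_1\!\left(\alpha\sqrt{\delta^2 + (x-\mu)^2}\right)}
           {\pi\,\sqrt{\delta^2 + (x-\mu)^2}}\;
      \exp\!\left(\delta\sqrt{\alpha^2-\beta^2} + \beta\,(x-\mu)\right),
    \qquad 0 \le |\beta| \le \alpha,\;\; \delta > 0,
    % where K_1 is the modified Bessel function of the second kind with index 1.
    ```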

  17. Unsupervised incremental online learning and prediction of musical audio signals

    DEFF Research Database (Denmark)

    Marxer, Richard; Purwins, Hendrik

    2016-01-01

    Guided by the idea that musical human-computer interaction may become more effective, intuitive, and creative when basing its computer part on cognitively more plausible learning principles, we employ unsupervised incremental online learning (i.e. clustering) to build a system that predicts...... the next event in a musical sequence, given as audio input. The flow of the system is as follows: 1) segmentation by onset detection, 2) timbre representation of each segment by Mel frequency cepstrum coefficients, 3) discretization by incremental clustering, yielding a tree of different sound classes (e...

  18. The Relevance of an Incremental Approach to Ideational Change

    DEFF Research Database (Denmark)

    Carstensen, Martin Bæk

    2012-01-01

    This rejoinder takes issue with two criticisms of my Political Studies article on incremental ideational change presented in a recent reply by Liam Stanley. It argues that by presenting my critique of historical and discursive institutionalism as focused on the lack of ‘realism’ in their standard...... of the financial crisis, I try to show the relevance of an ontology conducive to theories that capture and explain incremental ideational change. I also defend an eclectic approach to theory building that is focused less on policing the boundaries of different approaches and more on building consistent theoretical...

  19. An Incremental Rule Acquisition Algorithm Based on Rough Set

    Institute of Scientific and Technical Information of China (English)

    YU Hong; YANG Da-chun

    2005-01-01

    Rough set theory is a valid mathematical theory developed in recent years, which has the ability to deal with imprecise, uncertain and vague information. This paper presents a new incremental rule acquisition algorithm based on rough set theory. First, the relation of new instances to the original rule set is discussed. Then the change laws of attribute reduction and value reduction when a new instance is added are studied. Next, a new incremental learning algorithm for decision tables is presented within the framework of rough sets. Finally, the new algorithm and the classical algorithm are analyzed and compared theoretically and experimentally.

  20. Rapid Prototyping of wax foundry models in an incremental process

    Directory of Open Access Journals (Sweden)

    B. Kozik

    2011-04-01

    Full Text Available The paper presents an analysis of incremental methods of creating wax foundry models. There are two methods of Rapid Prototyping of wax models in an incremental process which are more and more often used in industrial practice and in scientific research. Applying Rapid Prototyping methods in the process of making casts allows for acceleration of work on preparing prototypes. It is especially important in the case of elements having complicated shapes. The time of making a wax model, depending on its size and the applied RP method, may vary from several to a few dozen hours.

  1. Incrementally objective implicit integration of hypoelastic-viscoplastic constitutive equations based on the mechanical threshold strength model

    Science.gov (United States)

    Mourad, Hashem M.; Bronkhorst, Curt A.; Addessio, Francis L.; Cady, Carl M.; Brown, Donald W.; Chen, Shuh Rong; Gray, George T.

    2014-05-01

    The present paper focuses on the development of a fully implicit, incrementally objective integration algorithm for a hypoelastic formulation of viscoplasticity, which employs the mechanical threshold strength model to compute the material's flow stress, taking into account its dependence on strain rate and temperature. Heat generation due to high-rate viscoplastic deformation is accounted for, assuming adiabatic conditions. The implementation of the algorithm is discussed, and its performance is assessed in the contexts of implicit and explicit dynamic finite element analysis, with the aid of example problems involving a wide range of loading rates. Computational results are compared to experimental data, showing very good agreement.
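
    For context, one widely used incrementally objective update is the Hughes-Winget scheme sketched below; the record's algorithm is a fully implicit variant built around the mechanical threshold strength flow stress, so this is shown only as the generic idea, not the paper's exact formulation.

    ```latex
    \Delta R \;=\; \left(I - \tfrac{1}{2}\,\Delta W\right)^{-1}
                   \left(I + \tfrac{1}{2}\,\Delta W\right),
    \qquad
    \sigma_{n+1} \;=\; \Delta R\,\sigma_n\,\Delta R^{\mathsf T}
                   \;+\; \Delta\sigma\!\left(\Delta\varepsilon,\,\dot{\varepsilon},\,T\right),
    % where \Delta W is the spin increment over the time step and \Delta\sigma is
    % the stress increment supplied by the rate- and temperature-dependent
    % constitutive update.
    ```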

  2. Formulations of entomopathogens as bioinsecticides

    Science.gov (United States)

    Developing a proper formulation is a necessary component for commercialization of entomopathogenic microbes as biological insecticides. The objective of this chapter is to present broad-ranging information about formulations to foster research toward developing commercial microbial-based insecticide...

  3. 48 CFR 52.237-6 - Incremental Payment by Contractor to Government.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Incremental Payment by... Provisions and Clauses 52.237-6 Incremental Payment by Contractor to Government. As prescribed in 37.304(c... to the contractor for increments of property, only upon receipt of those payments: Incremental...

  4. Systematic Equation Formulation

    DEFF Research Database (Denmark)

    Lindberg, Erik

    2007-01-01

    A tutorial giving a very simple introduction to the set-up of the equations used as a model for an electrical/electronic circuit. The aim is to find a method which is as simple and general as possible with respect to implementation in a computer program. The “Modified Nodal Approach” (MNA) and the “Controlled Source Approach” (CSA) for systematic equation formulation are investigated. It is suggested that the kernel of the PSpice program, which is based on MNA, be reprogrammed.
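
    As a minimal illustration of MNA stamping (a sketch of the general technique, not code from the tutorial), the snippet below assembles and solves the MNA system for a hypothetical two-node circuit: a 5 V source at node 1, 1 kOhm between nodes 1 and 2, and 2 kOhm from node 2 to ground.

    ```python
    import numpy as np

    # Unknowns: node voltages v1, v2 and the current through the voltage source.
    n_nodes, n_vsrc = 2, 1
    A = np.zeros((n_nodes + n_vsrc, n_nodes + n_vsrc))
    z = np.zeros(n_nodes + n_vsrc)

    def stamp_resistor(A, n1, n2, R):
        """Stamp a resistor between nodes n1 and n2 (node 0 is ground)."""
        g = 1.0 / R
        for node in (n1, n2):
            if node:
                A[node - 1, node - 1] += g
        if n1 and n2:
            A[n1 - 1, n2 - 1] -= g
            A[n2 - 1, n1 - 1] -= g

    def stamp_vsource(A, z, k, n_plus, n_minus, value):
        """Stamp the k-th independent voltage source (0-based index)."""
        row = n_nodes + k
        for node, sign in ((n_plus, 1.0), (n_minus, -1.0)):
            if node:
                A[node - 1, row] += sign
                A[row, node - 1] += sign
        z[row] = value

    stamp_resistor(A, 1, 2, 1e3)
    stamp_resistor(A, 2, 0, 2e3)
    stamp_vsource(A, z, 0, 1, 0, 5.0)

    v1, v2, i_src = np.linalg.solve(A, z)
    print("v1 = %.3f V, v2 = %.3f V, i(V1) = %.3f mA" % (v1, v2, 1e3 * i_src))
    ```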

  5. Introduction of the Lie group of Lorentz matrices in special relativity. Tangent boost along a worldline and its associated matrix in the Lie algebra. Applications

    CERN Document Server

    Langlois, Michel

    2014-01-01

    In order to generalize the relativistic notion of boost to the case of non-inertial particles and to general relativity, we come back to the definition of the Lie group of Lorentz matrices and its Lie algebra, and we study how this group acts on Minkowski space. We thus define the notion of a tangent boost along a worldline. This very general notion gives a useful tool both in special relativity (for non-inertial particles and/or non-rectilinear coordinates) and in general relativity. We also introduce a matrix of the Lie algebra which, together with the tangent boost, gives the whole dynamical description of the considered system (acceleration and Thomas rotation). After studying the properties of Lie algebra matrices and of their reduced forms, we show that the Lie group of special Lorentz matrices has four one-parameter subgroups. These tools lead us to introduce the Thomas rotation in a quite general way. At the end of the paper, we present some examples using these tools and we consider the case...
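
    For reference, the familiar special case that the record's tangent boost generalizes: a boost along x written as the exponential of a Lie-algebra generator, with rapidity phi.

    ```latex
    B_x(\phi) \;=\;
    \begin{pmatrix}
    \cosh\phi & \sinh\phi & 0 & 0\\
    \sinh\phi & \cosh\phi & 0 & 0\\
    0 & 0 & 1 & 0\\
    0 & 0 & 0 & 1
    \end{pmatrix}
    \;=\; \exp\!\left(\phi\,K_x\right),
    \qquad
    K_x \;=\;
    \begin{pmatrix}
    0 & 1 & 0 & 0\\
    1 & 0 & 0 & 0\\
    0 & 0 & 0 & 0\\
    0 & 0 & 0 & 0
    \end{pmatrix},
    \qquad \tanh\phi \;=\; \beta \;=\; v/c .
    ```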

  6. Predicting Robust Vocabulary Growth from Measures of Incremental Learning

    Science.gov (United States)

    Frishkoff, Gwen A.; Perfetti, Charles A.; Collins-Thompson, Kevyn

    2011-01-01

    We report a study of incremental learning of new word meanings over multiple episodes. A new method called MESA (Markov Estimation of Semantic Association) tracked this learning through the automated assessment of learner-generated definitions. The multiple word learning episodes varied in the strength of contextual constraint provided by…

  7. Failure mechanisms in single-point incremental forming of metals

    DEFF Research Database (Denmark)

    Silva, Maria B.; Nielsen, Peter Søe; Bay, Niels

    2011-01-01

    The last years saw the development of two different views on how failure develops in single-point incremental forming (SPIF). Today, researchers are split between those claiming that fracture is always preceded by necking and those considering that fracture occurs with suppression of necking. Each...

  8. Generation of Referring Expressions: Assessing the Incremental Algorithm

    Science.gov (United States)

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…

  9. Increment Cores How to Collect, Handle, and Use Them.

    Science.gov (United States)

    1979-01-01

    ... and the first is off center, a pith locator has been designed by Applequist (1958) for estimating the number of rings to pith. Grano (1963) has ... Sci. Tech. 7(1):34-44. Goodchild, R. 1963. An instrument for sharpening Swedish-type increment bores. Comm. For. Rev. 42(1):16-18. Grano, C. X. 1963 ...

  10. 76 FR 73475 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2011-11-29

    ..., 207, 208, 211, 212, 213a, 244; 245, 324; 335 RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY: U.S. Citizenship and Immigration Services, DHS. ACTION: Final... to enable U.S. Citizenship and Immigration Services (USCIS) to transform its business processes....

  11. 78 FR 22770 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2013-04-17

    ... SECURITY 8 CFR Parts 103 and 208 RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY: U.S. Citizenship and Immigration Services, Department of Homeland Security. ACTION... final rule to amend DHS regulations to enable U.S. Citizenship and Immigration Services (USCIS)...

  12. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated ...

  13. Incremental principal component pursuit for video background modeling

    Energy Technology Data Exchange (ETDEWEB)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.
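
    For context, the batch Principal Component Pursuit problem that the incremental algorithm approximates frame by frame (standard formulation, not quoted from this record): the observed frames M are split into a low-rank background L and a sparse foreground S.

    ```latex
    \min_{L,\,S}\; \lVert L \rVert_{*} \;+\; \lambda\,\lVert S \rVert_{1}
    \qquad \text{subject to} \qquad L + S = M ,
    % with \lVert\cdot\rVert_* the nuclear norm and \lVert\cdot\rVert_1 the
    % entrywise l1 norm; the incremental variant updates L and S one frame at a
    % time instead of re-solving this batch problem.
    ```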

  14. Respiratory ammonia output and blood ammonia concentration during incremental exercise

    NARCIS (Netherlands)

    Ament, W; Huizenga; Kort, E; van der Mark, TW; Grevink, RG; Verkerke, GJ

    1999-01-01

    The aim of this study was to investigate whether the increase of ammonia concentration and lactate concentration in blood was accompanied by an increased expiration of ammonia during graded exercise. Eleven healthy subjects performed an incremental cycle ergometer test. Blood ammonia, blood lactate

  15. Incremental Mining of the Schema of Semistructured Data

    Institute of Scientific and Technical Information of China (English)

    ZHOU Aoying; JIN Wen; ZHOU Shuigeng; QIAN Weining; TIAN Zenping

    2000-01-01

    Semistructured data are specified without any fixed and rigid schema, even though typically some implicit structure appears in the data. The huge number of on-line applications makes it important and imperative to mine the schema of semistructured data, both for the users (e.g., to gather useful information and facilitate querying) and for the systems (e.g., to optimize access). The critical problem is to discover the hidden structure in the semistructured data. Current methods for extracting Web data structure are either general, independent of the application background, or bound to some concrete environment such as HTML or XML, but both kinds face high costs and difficulty in keeping up with the frequent and complicated changes of Web data. In this paper, the problem of incremental mining of the schema of semistructured data after updates to the raw data is discussed. An algorithm for incrementally mining the schema of semistructured data is provided, and some experimental results are also given, which show that incremental mining for semistructured data is more efficient than non-incremental mining.

  16. Variations in serum magnesium and hormonal levels during incremental exercise.

    Science.gov (United States)

    Soria, Marisol; González-Haro, Carlos; Ansón, Miguel Angel; Iñigo, Carmen; Calvo, Maria Luisa; Escanero, Jesús Fernando

    2014-01-01

    In this study, we examined the relationship between plasma magnesium levels and hormonal variations during an incremental exercise test until exhaustion in 27 well-trained male endurance athletes. After a warm-up of 10 min at 2 W/kg, the test began at an initial workload of 2.5 W/kg and continued with increments of 0.5 W/kg every 10 min until exhaustion. Plasma magnesium, catecholamine, insulin, glucagon, parathyroid hormone (PTH), calcitonin, aldosterone and cortisol levels were determined at rest, at the end of each stage and three, five and seven minutes post-exercise. With the incremental exercise test, no variations in plasma magnesium levels were found, while plasma adrenaline, noradrenaline, PTH, glucagon and cortisol levels increased significantly. Over the course of the exercise, plasma levels of insulin decreased significantly, but those of calcitonin remained steady. During the recovery period, catecholamines and insulin returned to basal levels. These findings indicate that the magnesium status of euhydrated endurance athletes during incremental exercise testing may be the result of the interrelation between several hormonal variations.

  17. An Approach to Incremental Design of Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to incremental design of distributed embedded systems for hard real-time applications. We start from an already existing system running a set of applications and the design problem is to implement new functionality on this system. Thus, we propose mapping...

  18. Assessing the Incremental Algorithm: A Response to Krahmer et al.

    Science.gov (United States)

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    This response discusses the experiment reported in Krahmer et al.'s Letter to the Editor of "Cognitive Science". We observe that their results do not tell us whether the Incremental Algorithm is better or worse than its competitors, and we speculate about implications for reference in complex domains, and for learning from "normal" (i.e.,…

  19. The Incremental Validity of Positive Emotions in Predicting School Functioning

    Science.gov (United States)

    Lewis, Ashley D.; Huebner, E. Scott; Reschly, Amy L.; Valois, Robert F.

    2009-01-01

    Proponents of positive psychology have argued for more comprehensive assessments incorporating positive measures (e.g., student strengths) as well as negative measures (e.g., psychological symptoms). However, few variable-centered studies have addressed the incremental validity of positive assessment data. The authors investigated the incremental…

  20. Minimizing System Modification in an Incremental Design Approach

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing the system modification cost. We consider an incremental design process that starts from an already existing system running a set of applications. We...

  1. Effectiveness of Incremental Rehearsal When Implemented by a Paraprofessional

    Science.gov (United States)

    Petersen-Brown, Shawna; Panahon, Carlos J.; Schreiber, Cassandra M.

    2017-01-01

    A growing body of research has established incremental rehearsal (IR) as an effective intervention for teaching basic skills in various student populations. However, there have been no published studies to date in which interventionists have been school-based personnel rather than researchers. In this study, a paraprofessional implemented IR with…

  2. Against the Odds: Academic Underdogs Benefit from Incremental Theories

    Science.gov (United States)

    Davis, Jody L.; Burnette, Jeni L.; Allison, Scott T.; Stone, Heather

    2011-01-01

    An implicit theory of ability approach to motivation argues that students who believe traits to be malleable (incremental theorists), relative to those who believe traits to be fixed (entity theorists), cope more effectively when academic challenges arise. In the current work, we integrated the implicit theory literature with research on top dog…

  3. Factors for Radical Creativity, Incremental Creativity, and Routine, Noncreative Performance

    Science.gov (United States)

    Madjar, Nora; Greenberg, Ellen; Chen, Zheng

    2011-01-01

    This study extends theory and research by differentiating between routine, noncreative performance and 2 distinct types of creativity: radical and incremental. We also use a sensemaking perspective to examine the interplay of social and personal factors that may influence a person's engagement in a certain level of creative action versus routine,…

  4. Peaks beyond Phonology: Adolescence, Incrementation, and Language Change

    Science.gov (United States)

    Tagliamonte, Sali A.; D'Arcy, Alexandra

    2009-01-01

    What is the mechanism by which a linguistic change advances across successive generations of speakers? We explore this question by using the model of incrementation provided in Labov 2001 and analyzing six current changes in English. Extending Labov's focus on recent and vigorous phonological changes, we target ongoing morphosyntactic(-semantic)…

  5. Incremental concept learning with few training examples and hierarchical classification

    NARCIS (Netherlands)

    Bouma, H.; Eendebak, P.T.; Schutte, K.; Azzopardi, G.; Burghouts, G.J.

    2015-01-01

    Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible

  6. Bipower variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, José Manuel; Podolskij, Mark

    2009-01-01

    Convergence in probability and central limit laws of bipower variation for Gaussian processes with stationary increments and for integrals with respect to such processes are derived. The main tools of the proofs are some recent powerful techniques of Wiener/Itô/Malliavin calculus for establishing...
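
    For reference, the realized bipower variation of a process X sampled at times i/n, in its commonly used normalization (shown for context, not quoted from this record):

    ```latex
    \{X\}^{[1,1]}_{n} \;=\; \mu_1^{-2} \sum_{i=2}^{n}
        \bigl|X_{i/n} - X_{(i-1)/n}\bigr|\;\bigl|X_{(i-1)/n} - X_{(i-2)/n}\bigr|,
    \qquad \mu_1 \;=\; E\lvert Z\rvert \;=\; \sqrt{2/\pi}, \quad Z \sim N(0,1).
    ```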

  7. Incremental Closed-loop Identification of Linear Parameter Varying Systems

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, Klaus

    2011-01-01

    This paper deals with system identification for control of linear parameter varying systems. In practical applications, it is often important to be able to identify small plant changes in an incremental manner without shutting down the system and/or disconnecting the controller; unfortunately, cl...

  8. Revisiting the fundamentals of single point incremental forming by

    DEFF Research Database (Denmark)

    Silva, Beatriz; Skjødt, Martin; Martins, Paulo A.F.

    2008-01-01

    Knowledge of the physics behind the fracture of material at the transition between the inclined wall and the corner radius of the sheet is of great importance for understanding the fundamentals of single point incremental forming (SPIF). How the material fractures, what is the state of strain...

  9. Domestic Plant Productivity and Incremental Spillovers from Foreign Direct Investment

    NARCIS (Netherlands)

    C. Altomonte (Carlo); H.P.G. Pennings (Enrico)

    2009-01-01

    We develop a simple test to assess whether horizontal spillover effects from multinational to domestic firms are endogenous to the market structure generated by the incremental entry of the same multinationals. In particular, we analyze the performance of a panel of 10,650 firms operating ...

  10. Incremental concept learning with few training examples and hierarchical classification

    NARCIS (Netherlands)

    Bouma, H.; Eendebak, P.T.; Schutte, K.; Azzopardi, G.; Burghouts, G.J.

    2015-01-01

    Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible w

  11. Incremental validity of a measure of emotional intelligence.

    Science.gov (United States)

    Chapman, Benjamin P; Hayslip, Bert

    2005-10-01

    After the Schutte Self-Report Inventory of Emotional Intelligence (SSRI; Schutte et al., 1998) was found to predict college grade point average, subsequent emotional intelligence (EI)-college adjustment research has used inconsistent measures and widely varying criteria, resulting in confusion about the construct's predictive validity. In this study, we assessed the SSRI's incremental validity for a wide range of adjustment criteria, pitting it against a competing trait measure, the NEO Five-Factor Inventory (NEO-FFI; Costa & McCrae, 1992), and tests of fluid and crystallized intelligence. At a broad bandwidth, the SSRI total score significantly and uniquely predicted variance in UCLA Loneliness Scale, Revised (Russell, Peplau, & Cutrona, 1980) scores beyond NEO-FFI domain scores. Higher fidelity analyses using previously identified SSRI factors and NEO-FFI item clusters revealed that the SSRI's Optimism/Mood Regulation and Emotion Appraisal factors contributed unique variance to self-reported study habits and social stress, respectively. The potential moderation of incremental validity by gender did not reach significance due to loss of power from splitting the sample, and mediational analyses revealed the SSRI Optimism/Mood Regulation factor was both directly and indirectly related to various criteria. We discuss the small magnitude of incremental validity coefficients and the differential incremental validity of SSRI factor and total scores.

  12. Minimizing System Modification in an Incremental Design Approach

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian;

    2001-01-01

    In this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing the system modification cost. We consider an incremental design process that starts from an already existing system running a set of applications. ...

  13. Retroactive Operations: On "Increments" in Mandarin Chinese Conversations

    Science.gov (United States)

    Lim, Ni Eng

    2014-01-01

    Conversation Analysis (CA) has established repair (Schegloff, Jefferson & Sacks 1977; Schegloff 1979; Kitzinger 2013) as a conversational mechanism for managing contingencies of talk-in-interaction. In this dissertation, I look at a particular sort of "repair" termed TCU-continuations (otherwise known as increments in other…

  14. Incorporating "Unconscious Reanalysis" into an Incremental, Monotonic Parser

    CERN Document Server

    Sturt, P

    1995-01-01

    This paper describes an implementation based on a recent model in the psycholinguistic literature. We define a parsing operation which allows the reanalysis of dependencies within an incremental and monotonic processing architecture, and discuss search strategies for its application in a head-initial language (English) and a head-final language (Japanese).

  15. a model for incremental grounding in spoken dialogue systems

    NARCIS (Netherlands)

    Visser, Thomas; Traum, David; DeVault, David; op den Akker, Hendrikus J.A.

    2012-01-01

    Recent advances in incremental language processing for dialogue systems promise to enable more natural conversation between humans and computers. By analyzing the user's utterance while it is still in progress, systems can provide more human-like overlapping and backchannel responses to convey their

  16. Efficient Incremental Maintenance of Frequent Patterns with FP-Tree

    Institute of Scientific and Technical Information of China (English)

    Xiu-Li Ma; Yun-Hai Tong; Shi-Wei Tang; Dong-Qing Yang

    2004-01-01

    Mining frequent patterns has been studied extensively in the data mining area. However, little work has been done on mining patterns when the database has a constant influx of fresh data. In these dynamic scenarios, efficient maintenance of the discovered patterns is crucial. Most existing methods need to scan the entire database repeatedly, which is an obvious disadvantage. In this paper, an efficient incremental mining algorithm, Incremental-Mining (IM), is proposed for maintenance of the frequent patterns when new incremental data arrive. Based on the frequent pattern tree (FP-tree) structure, IM makes the most of the results from the previous mining process, and requires scanning the original data once at most. Furthermore, IM can directly identify the differential set of frequent patterns, which may be more informative to users. Moreover, IM can deal with changing thresholds as well as changing data, thus providing a full maintenance scheme. IM has been implemented, and the performance study shows it outperforms three other incremental algorithms: FUP, DB-tree and re-running frequent pattern growth (FP-growth).

  17. Object class hierarchy for an incremental hypertext editor

    Directory of Open Access Journals (Sweden)

    A. Colesnicov

    1995-02-01

    Full Text Available The design of an object class hierarchy is considered in the context of a hypertext editor implementation. The following basic classes were selected: the editor's coordinate system, the memory manager, the text buffer executing basic editing operations, the inherited hypertext buffer, the edit window, and the multi-window shell. Special hypertext editing features, support for incremental hypertext creation, and further generalizations are discussed.
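
    The classes listed in the abstract can be arranged roughly as follows. The sketch below is a hypothetical Python rendering of that hierarchy (the original editor predates Python and carries much richer behaviour); it is included only to make the inheritance relationships concrete.

        class CoordinateSystem:
            """Editor's coordinate system: maps (row, column) positions to buffer offsets."""

        class MemoryManager:
            """Allocates and recycles storage for buffer contents."""

        class TextBuffer:
            """Plain text buffer with the basic editing operations."""
            def __init__(self, memory: MemoryManager):
                self.memory = memory
                self.lines: list[str] = []

            def insert(self, row: int, text: str) -> None:
                self.lines.insert(row, text)

            def delete(self, row: int) -> None:
                del self.lines[row]

        class HypertextBuffer(TextBuffer):
            """Inherits basic editing and adds link handling for incremental hypertext creation."""
            def __init__(self, memory: MemoryManager):
                super().__init__(memory)
                self.links: dict[int, str] = {}   # row -> target node

            def add_link(self, row: int, target: str) -> None:
                self.links[row] = target

        class EditWindow:
            """Displays one (hyper)text buffer through a coordinate system."""
            def __init__(self, buffer: TextBuffer, coords: CoordinateSystem):
                self.buffer, self.coords = buffer, coords

        class MultiWindowShell:
            """Owns several edit windows, one per open buffer."""
            def __init__(self):
                self.windows: list[EditWindow] = []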

  18. Olaparib tablet formulation

    DEFF Research Database (Denmark)

    Plummer, Ruth; Swaisland, Helen; Leunen, Karin

    2015-01-01

    BACKGROUND: The oral PARP inhibitor olaparib has shown efficacy in patients with BRCA-mutated cancer. This Phase I, open-label, three-part study (Parts A-C) in patients with advanced solid tumours evaluated the effect of food on the pharmacokinetics (PK) of olaparib when administered in tablet...... formulation. METHODS: PK data were obtained in Part A using a two-treatment period crossover design; single-dose olaparib 300 mg (two 150 mg tablets) was administered in two prandial states: fasted and fed. In Part B, patients received olaparib tablets (300 mg bid) for 5 days under fasting conditions; in Part...... exposure to olaparib 300 mg tablets, although in the absence of an effect on the extent of olaparib absorption....

  19. Final Safety Analysis Report (FSAR) for Building 332, Increment III

    Energy Technology Data Exchange (ETDEWEB)

    Odell, B. N.; Toy, Jr., A. J.

    1977-08-31

    This Final Safety Analysis Report (FSAR) supplements the Preliminary Safety Analysis Report (PSAR), dated January 18, 1974, for Building 332, Increment III of the Plutonium Materials Engineering Facility located at the Lawrence Livermore Laboratory (LLL). The FSAR, in conjunction with the PSAR, shows that the completed increment provides facilities for safely conducting the operations as described. These documents satisfy the requirements of ERDA Manual Appendix 6101, Annex C, dated April 8, 1971. The format and content of this FSAR comply with the basic requirements of the letter of request from ERDA San to LLL, dated March 10, 1972. Included as appendices in support of the FSAR are the Building 332 Operational Safety Procedure and the LLL Disaster Control Plan.

  20. Ground Fault Line Selection with Improved Residual Flow Incremental Method

    Directory of Open Access Journals (Sweden)

    Wenhong Li

    2013-08-01

    Full Text Available To address the shortcomings of single-phase ground fault line selection in resonant grounded systems, such as the uncertainty introduced in the selection device by fast compensation with automatic compensation equipment, an arc suppression and residual flow incremental method is proposed to choose the earth fault line effectively. Firstly, when a single-phase ground fault occurs, the arc suppression coil parameters are adjusted to realize compensation and arc suppression. Then the arc suppression coil inductance values are modulated so that the zero-sequence current of the fault line changes; at the same time, the zero-sequence current value is detected and its change is captured to select the fault line. The simulation experiments prove that arc grounding overvoltage damage can be effectively reduced by arc suppression coil full compensation and that the fault line can be effectively selected by the arc suppression and residual flow increment method.
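
    A minimal sketch of the selection step described above: after the arc suppression coil inductance is modulated, the feeder whose zero-sequence current magnitude changes the most is flagged as the faulted line. The data shapes and the threshold are illustrative assumptions, not part of the cited method.

        import numpy as np

        def select_fault_line(i0_before, i0_after, rel_threshold=0.2):
            """Pick the feeder whose zero-sequence current magnitude changes most
            after the arc suppression coil inductance is modulated.

            i0_before, i0_after: per-feeder zero-sequence current magnitudes (A).
            Returns the feeder index, or None if no change exceeds the threshold
            (e.g. a bus fault or no fault at all).
            """
            i0_before = np.asarray(i0_before, dtype=float)
            i0_after = np.asarray(i0_after, dtype=float)
            delta = np.abs(i0_after - i0_before)
            candidate = int(np.argmax(delta))
            if delta[candidate] < rel_threshold * max(i0_before[candidate], 1e-9):
                return None
            return candidate

        # Example: feeder 2 responds strongly to the inductance modulation
        print(select_fault_line([1.1, 0.9, 1.0, 1.2], [1.1, 0.9, 2.6, 1.2]))  # -> 2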

  1. Electrical-assisted double side incremental forming and processes thereof

    Science.gov (United States)

    Roth, John; Cao, Jian

    2014-06-03

    A process for forming a sheet metal component using an electric current passing through the component is provided. The process can include providing a double side incremental forming machine, the machine operable to perform a plurality of double side incremental deformations on the sheet metal component and also apply an electric direct current to the sheet metal component during at least part of the forming. The direct current can be applied before or after the forming has started and/or be terminated before or after the forming has stopped. The direct current can be applied to any portion of the sheet metal. The electrical assistance can reduce the magnitude of force required to produce a given amount of deformation, increase the amount of deformation exhibited before failure and/or reduce any springback typically exhibited by the sheet metal component.

  2. A new incremental updating algorithm for association rules

    Institute of Scientific and Technical Information of China (English)

    WANG Zuo-cheng; XUE Li-xia

    2007-01-01

    Incremental data mining is an attractive goal for many kinds of mining in large databases or data warehouses. A new incremental updating algorithm, the rule growing algorithm (RGA), is presented for efficient maintenance of discovered association rules when new transaction data are added to a transaction database. The RGA algorithm makes use of previous association rules as seed rules. With RGA, whether or not the seed rules are strong can be confirmed without scanning the whole transaction database in most cases. If the distribution of items in the transaction database is not uniform, the inflexion of the robustness curve is reached very quickly, and RGA achieves great efficiency, saving a large amount of I/O time. Experiments validate the algorithm, and the test results show that the algorithm is efficient.

  3. Parameter Estimation for Generalized Brownian Motion with Autoregressive Increments

    CERN Document Server

    Fendick, Kerry

    2011-01-01

    This paper develops methods for estimating parameters for a generalization of Brownian motion with autoregressive increments called a Brownian ray with drift. We show that a superposition of Brownian rays with drift depends on three types of parameters - a drift coefficient, autoregressive coefficients, and volatility matrix elements - and we introduce methods for estimating each of these types of parameters using multidimensional time series data. We also cover parameter estimation in the contexts of two applications of Brownian rays in the financial sphere: queuing analysis and option valuation. For queuing analysis, we show how samples of queue lengths can be used to estimate the conditional expectation functions for the length of the queue and for increments in its net input and lost potential output. For option valuation, we show how the Black-Scholes-Merton formula depends on the price of the security on which the option is written through estimates not only of its volatility, but also of a coefficient ...
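
    As a toy analogue of the estimation problem described above (a scalar series, not the multidimensional Brownian-ray superposition of the paper), the sketch below fits a drift, a single autoregressive coefficient and an innovation standard deviation to the increments of a path by least squares.

        import numpy as np

        def estimate_ar1_increments(x):
            """Fit d_t = mu + phi * d_{t-1} + sigma * e_t to the increments d of x.

            Returns (mu, phi, sigma): drift, autoregressive coefficient of the
            increments, and innovation standard deviation.
            """
            d = np.diff(np.asarray(x, dtype=float))
            y, lagged = d[1:], d[:-1]
            X = np.column_stack([np.ones_like(lagged), lagged])
            (mu, phi), *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ np.array([mu, phi])
            sigma = resid.std(ddof=2)
            return mu, phi, sigma

        # Example: simulate a path with autocorrelated increments and recover the parameters
        rng = np.random.default_rng(0)
        true_mu, true_phi, true_sigma = 0.05, 0.6, 0.3
        d = np.zeros(5000)
        for t in range(1, d.size):
            d[t] = true_mu + true_phi * d[t - 1] + true_sigma * rng.standard_normal()
        x = np.concatenate([[0.0], np.cumsum(d)])
        print(estimate_ar1_increments(x))  # roughly (0.05, 0.6, 0.3)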

  4. Automobile sheet metal part production with incremental sheet forming

    Directory of Open Access Journals (Sweden)

    İsmail DURGUN

    2016-02-01

    Full Text Available Nowadays, the effects of global warming are increasing drastically, which leads to increased interest in energy efficiency and sustainable production methods. As a result of these adverse conditions, national and international project platforms, OEMs (Original Equipment Manufacturers) and SMEs (Small and Mid-size Manufacturers) perform many studies or improve existing methodologies in the scope of advanced manufacturing techniques. In this study, the advanced manufacturing and sustainable production method "Incremental Sheet Metal Forming (ISF)" was used for the sheet metal forming process. A vehicle fender was manufactured with or without a die by using different toolpath strategies and die sets. At the end of the study, the results were investigated under the influence of the method and parameters used. Keywords: Incremental sheet metal forming, metal forming

  5. Incremental learning by message passing in hierarchical temporal memory.

    Science.gov (United States)

    Rehn, Erik M; Maltoni, Davide

    2014-08-01

    Hierarchical temporal memory (HTM) is a biologically inspired framework that can be used to learn invariant representations of patterns in a wide range of applications. Classical HTM learning is mainly unsupervised, and once training is completed, the network structure is frozen, thus making further training (i.e., incremental learning) quite critical. In this letter, we develop a novel technique for HTM (incremental) supervised learning based on gradient descent error minimization. We prove that error backpropagation can be naturally and elegantly implemented through native HTM message passing based on belief propagation. Our experimental results demonstrate that a two-stage training approach composed of unsupervised pretraining and supervised refinement is very effective (both accurate and efficient). This is in line with recent findings on other deep architectures.

  6. The Analysis of Forming Forces in Single Point Incremental Forming

    Directory of Open Access Journals (Sweden)

    Koh Kyung Hee

    2016-01-01

    Full Text Available Incremental forming is a process to produce sheet metal parts quickly. Because there is no need for dedicated dies and molds, this process requires less cost and time. The purpose of this study is to investigate forming forces in single point incremental forming. A cone frustum of aluminum was produced and the forming forces were measured. A dynamometer was used to collect the forming forces and analyze them. These forces were compared with the cutting forces measured when machining experimental parts of the same geometrical shape. The forming forces in the Z direction are 40 times larger than the machining forces. The spindle and axes of a forming machine should therefore be designed to withstand the forming forces.

  7. Reynolds number scaling of velocity increments in isotropic turbulence

    Science.gov (United States)

    Iyer, Kartik P.; Sreenivasan, Katepalli R.; Yeung, P. K.

    2017-02-01

    Using the largest database of isotropic turbulence available to date, generated by the direct numerical simulation (DNS) of the Navier-Stokes equations on an 8192^3 periodic box, we show that the longitudinal and transverse velocity increments scale identically in the inertial range. By examining the DNS data at several Reynolds numbers, we infer that the contradictory results of the past on the inertial-range universality are artifacts of low Reynolds number and residual anisotropy. We further show that both longitudinal and transverse velocity increments scale on locally averaged dissipation rate, just as postulated by Kolmogorov's refined similarity hypothesis, and that, in isotropic turbulence, a single independent scaling adequately describes fluid turbulence in the inertial range.
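
    For reference, a LaTeX sketch of the standard definitions behind the abstract (textbook forms, not equations quoted from the paper): longitudinal and transverse velocity increments, the structure functions whose inertial-range scaling is compared, and Kolmogorov's refined similarity hypothesis relating increments to the locally averaged dissipation rate.

        \delta u_\parallel(r) = \left[\mathbf{u}(\mathbf{x}+\mathbf{r})-\mathbf{u}(\mathbf{x})\right]\cdot\hat{\mathbf{r}}, \qquad
        \delta u_\perp(r) = \left[\mathbf{u}(\mathbf{x}+\mathbf{r})-\mathbf{u}(\mathbf{x})\right]\cdot\hat{\mathbf{t}},
        \quad \hat{\mathbf{t}}\perp\hat{\mathbf{r}},

        S_n^{\parallel}(r) = \langle |\delta u_\parallel(r)|^{n}\rangle \sim r^{\zeta_n}, \qquad
        S_n^{\perp}(r) = \langle |\delta u_\perp(r)|^{n}\rangle \sim r^{\zeta_n^{\perp}},

        % refined similarity: increments scale on the dissipation averaged over a ball of radius r
        \varepsilon_r(\mathbf{x}) = \frac{3}{4\pi r^{3}}\int_{|\mathbf{s}|\le r}\varepsilon(\mathbf{x}+\mathbf{s})\,d\mathbf{s},
        \qquad
        \langle |\delta u(r)|^{n}\rangle \sim \langle \varepsilon_r^{\,n/3}\rangle\, r^{\,n/3}.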

  8. Mission Planning System Increment 5 (MPS Inc 5)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report for the Mission Planning System Increment 5 (MPS Inc 5), produced from the Defense Acquisition Management Information Retrieval (DAMIR) system. The remainder of the record consists of a table of contents and a list of common acronyms and abbreviations for MAIS programs (e.g., U.S.C. - United States Code; USD(AT&L) - Under Secretary of Defense for Acquisition, Technology, & Logistics).

  9. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error exponentially converges to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. Results are illustrated by application to a simple model of an underwater vehicle.
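
    The abstract notes that the observer gains are obtained from linear matrix inequalities. As a hedged stand-in for that computation (a standard Riccati-equation, Kalman-type design for an illustrative linear pair (A, C); it ignores the nonlinearity and the incremental quadratic constraint treated in the paper), the sketch below computes a gain L that makes A - LC stable.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Illustrative linear part of the plant (not taken from the cited work)
        A = np.array([[0.0, 1.0],
                      [-2.0, -0.5]])
        C = np.array([[1.0, 0.0]])

        # Riccati-based (Kalman-type) observer gain for the pair (A, C):
        # solve A P + P A' - P C' R^{-1} C P + Q = 0, then L = P C' R^{-1}.
        Q = np.eye(2)          # assumed process-noise weight
        R = np.array([[1.0]])  # assumed measurement-noise weight
        P = solve_continuous_are(A.T, C.T, Q, R)
        L = P @ C.T @ np.linalg.inv(R)

        # Observer: xhat_dot = A xhat + L (y - C xhat); check the error dynamics are stable.
        print("observer gain L =", L.ravel())
        print("eigenvalues of A - L C:", np.linalg.eigvals(A - L @ C))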

  10. Research on homogeneous deformation of electromagnetic incremental tube bulging

    OpenAIRE

    Cui, Xiaohui; Mo, Jianhua; Li, Jianjun

    2014-01-01

    The electromagnetic incremental forming (EMIF) method is used for the tube forming process. Suitable 2D FE models are designed to predict the forming process with a moving coil. In comparison with experimental values, the simulation method obtains accurate results. Then, the effects of factors such as the overlapping ratio of adjacent discharge positions, discharge voltage, forming sequence and die dimension on homogeneous tube deformation are discussed. The result demonstrates that it is feasib...

  11. AN INCREMENTAL UPDATING ALGORITHM FOR MINING ASSOCIATION RULES

    Institute of Scientific and Technical Information of China (English)

    Xu Baowen; Yi Tong; Wu Fangjun; Chen Zhenqiang

    2002-01-01

    In this letter, on the basis of the Frequent Pattern (FP) tree, a support function to update the FP-tree is introduced, and an Incremental FP (IFP) algorithm for mining association rules is proposed. The IFP algorithm considers not only adding new data to the database but also removing old data from it. Furthermore, it reduces the five update cases to three. The algorithm proposed in this letter avoids generating large numbers of candidate items and is highly efficient.

  12. Amphibious Combat Vehicle Acquisition: Marine Corps Adopts an Incremental Approach

    Science.gov (United States)

    2015-04-01

    The Marine Corps' adoption of an incremental approach has helped the program progress towards achieving the balance between customer needs and resources (e.g., technologies) that is sought in accordance with best practices. USMC plans to acquire 204 ACV 1.1s and anticipates achieving initial operational capability in fiscal year 2020, according to program officials.

  13. An Incremental Algorithm of Text Clustering Based on Semantic Sequences

    Institute of Scientific and Technical Information of China (English)

    FENG Zhonghui; SHEN Junyi; BAO Junpeng

    2006-01-01

    This paper proposes an incremental text clustering algorithm based on semantic sequences. Using the similarity relation of semantic sequences and calculating the cover of the similar semantic sequence sets, the candidate cluster with the minimum entropy overlap value is selected as a result cluster each time in this algorithm. The comparison of experimental results shows that the precision of the algorithm is higher than that of other algorithms under the same conditions, and this is especially obvious on long document sets.

  14. INTERMITTENT VERSUS CONTINUOUS INCREMENTAL FIELD TESTS: ARE MAXIMAL VARIABLES INTERCHANGEABLE?

    Directory of Open Access Journals (Sweden)

    Lorival J. Carminatti

    2013-03-01

    Full Text Available The aim of the present study was to compare physiological responses derived from an incremental progressive field test with a constant speed test, i.e. intermittent versus continuous protocol. Two progressive maximum tests (Carminatti's test (T-CAR) and the Vameval test (T-VAM)), characterized by increasing speed, were used. T-CAR is an intermittent incremental test, performed as shuttle runs, while T-VAM is a continuous incremental test performed on an athletic track. Eighteen physically active, healthy young subjects (21.9 ± 2.0 years; 76.5 ± 8.6 kg, 1.78 ± 0.08 m, 11.2 ± 5.4% body fat) volunteered for this study. Subjects performed four different maximum test sessions conducted in the field: two incremental tests and two time to exhaustion tests (TTE) at peak test velocities (PV). No significant differences were found for PV (T-CAR = 15.6 ± 1.2; T-VAM = 15.5 ± 1.3 km·h-1) and maximal HR (T-CAR = 195 ± 11; T-VAM = 194 ± 14 bpm). During TTE, there were no significant differences for HR (TTE T-CAR and TTE T-VAM = 192 ± 12 bpm). However, there was a significant difference in TTE (p = 0.04) (TTE T-CAR = 379 ± 84, TTE T-VAM = 338 ± 58 s) with a low correlation (r = 0.41). The blood lactate concentration measured at the end of the TTE tests showed no significant difference (TTE T-CAR = 13.2 ± 2.4 vs. TTE T-VAM = 12.9 ± 2.4 mmol·l-1). Based on the present findings, it is suggested that the maximal variables derived from T-CAR and T-VAM can be interchangeable in the design of training programs.

  15. Superiorization of incremental optimization algorithms for statistical tomographic image reconstruction

    Science.gov (United States)

    Helou, E. S.; Zibetti, M. V. W.; Miqueles, E. X.

    2017-04-01

    We propose the superiorization of incremental algorithms for tomographic image reconstruction. The resulting methods follow a better path on their way to the optimal solution of the maximum likelihood problem, in the sense that they are closer to the Pareto optimal curve than the non-superiorized techniques. A new scaled gradient iteration is proposed and three superiorization schemes are evaluated. Theoretical analysis of the methods as well as computational experiments with both synthetic and real data are provided.
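
    A generic, hedged sketch of the superiorization idea referred to above (not the authors' scaled gradient iteration): a basic feasibility-seeking step is interleaved with bounded, summable perturbations that steer the iterate toward lower values of a secondary criterion, here simply the squared norm. System size and step parameters are illustrative assumptions.

        import numpy as np

        def basic_step(x, a, b):
            """One feasibility-seeking sweep: sequential projections onto the
            hyperplanes a_i . x = b_i (Kaczmarz / ART-type step)."""
            for ai, bi in zip(a, b):
                x = x + (bi - ai @ x) / (ai @ ai) * ai
            return x

        def superiorized(a, b, phi_grad, n_iter=200, beta=1.0, decay=0.9):
            """Superiorized version of `basic_step`: before each sweep, perturb the
            iterate along -phi_grad with step sizes beta * decay**k (summable)."""
            x = np.zeros(a.shape[1])
            for k in range(n_iter):
                g = phi_grad(x)
                norm = np.linalg.norm(g)
                if norm > 0:
                    x = x - beta * decay**k * g / norm   # bounded perturbation
                x = basic_step(x, a, b)                  # unchanged basic algorithm
            return x

        # Example: an underdetermined consistent system; phi(x) = ||x||^2 steers the
        # superiorized iterates toward a smaller-norm solution.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((5, 20))
        b = A @ rng.standard_normal(20)
        x_sup = superiorized(A, b, phi_grad=lambda x: 2 * x)
        print("residual:", np.linalg.norm(A @ x_sup - b), " norm:", np.linalg.norm(x_sup))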

  16. Firms' skills as drivers of radical and incremental innovation

    OpenAIRE

    Doran, Justin; Ryan, Geraldine

    2014-01-01

    Using firm level data from the Irish Community Innovation Survey 2008–2010 we analyse the importance of eight skill sets for the innovation performance of firms. We distinguish between radical and incremental innovation. Our results suggest that there is substantial heterogeneity in the importance of skills for different types of innovation and that some skills are best sourced from outside the firm while others are best developed in-house.

  17. Incremental Learning with SVM for Multimodal Classification of Prostatic Adenocarcinoma

    OpenAIRE

    José Fernando García Molina; Lei Zheng; Metin Sertdemir; Dietmar J Dinter; Stefan Schönberg; Matthias Rädle

    2014-01-01

    Robust detection of prostatic cancer is a challenge due to the multitude of variants and their representation in MR images. We propose a pattern recognition system with an incremental learning ensemble algorithm using support vector machines (SVM) tackling this problem employing multimodal MR images and a texture-based information strategy. The proposed system integrates anatomic, texture, and functional features. The data set was preprocessed using B-Spline interpolation, bias field correcti...

  18. Color Transformations for the 2MASS Second Incremental Data Release

    CERN Document Server

    Carpenter, J M

    2001-01-01

    Transformation equations are presented to convert colors and magnitudes measured in the AAO, ARNICA, CIT, DENIS, ESO, LCO (Persson standards), MSSSO, SAAO, and UKIRT photometric systems to the photometric system inherent to the 2MASS Second Incremental Data Release. The transformations have been derived by comparing 2MASS photometry with published magnitudes and colors for stars observed in these systems. Transformation equations have also been derived indirectly for the Bessell & Brett (1988) and Koornneef (1983) homogenized photometric systems.

  19. Plutonium Immobilization Project Baseline Formulation

    Energy Technology Data Exchange (ETDEWEB)

    Ebbinghaus, B.

    1999-02-01

    A key milestone for the Immobilization Project (AOP Milestone 3.2a) in Fiscal Year 1998 (FY98) is the definition of the baseline composition or formulation for the plutonium ceramic form. The baseline formulation for the plutonium ceramic product must be finalized before the repository- and plant-related process specifications can be determined. The baseline formulation that is currently specified is given in Table 1.1. In addition to the baseline formulation specification, this report provides specifications for two alternative formulations, related compositional specifications (e.g., precursor compositions and mixing recipes), and other preliminary form and process specifications that are linked to the baseline formulation. The preliminary specifications, when finalized, are not expected to vary tremendously from the preliminary values given.

  20. Incremental validity of emotional intelligence ability in predicting academic achievement.

    Science.gov (United States)

    Lanciano, Tiziana; Curci, Antonietta

    2014-01-01

    We tested the incremental validity of an ability measure of emotional intelligence (EI) in predicting academic achievement in undergraduate students, controlling for cognitive abilities and personality traits. Academic achievement has been conceptualized in terms of the number of exams, grade point average, and study time taken to prepare for each exam. Additionally, gender differences were taken into account in these relationships. Participants filled in the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), the Raven's Advanced Progressive Matrices, the reduced version of the Eysenck Personality Questionnaire, and academic achievement measures. Results showed that EI abilities were positively related to academic achievement indices, such as the number of exams and grade point average; total EI ability and the Perceiving branch were negatively associated with the study time spent preparing for exams. Furthermore, EI ability adds a percentage of incremental variance with respect to cognitive ability and personality variables in explaining scholastic success. The magnitude of the associations between EI abilities and academic achievement measures was generally higher for men than for women. Jointly considered, the present findings support the incremental validity of the MSCEIT and provide positive indications of the importance of EI in students' academic development. The helpfulness of EI training in the context of academic institutions is discussed.
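
    The incremental-variance claim in this record is the usual hierarchical-regression delta-R^2 computation. The sketch below, on made-up data rather than the study's, shows that computation: fit a baseline model with cognitive-ability and personality predictors, add the EI score, and report the change in R^2.

        import numpy as np

        def r_squared(X, y):
            """R^2 of an ordinary least-squares fit with an intercept."""
            X1 = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            resid = y - X1 @ beta
            return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

        def incremental_validity(controls, predictor, y):
            """Delta R^2 of `predictor` over the `controls` block."""
            r2_base = r_squared(controls, y)
            r2_full = r_squared(np.column_stack([controls, predictor]), y)
            return r2_full - r2_base

        # Synthetic illustration: GPA driven by cognitive ability, a trait, and a small EI effect
        rng = np.random.default_rng(42)
        n = 300
        cognitive = rng.standard_normal(n)
        trait = rng.standard_normal(n)
        ei = rng.standard_normal(n)
        gpa = 0.5 * cognitive + 0.3 * trait + 0.15 * ei + rng.standard_normal(n)

        controls = np.column_stack([cognitive, trait])
        print("Delta R^2 for EI:", incremental_validity(controls, ei, gpa))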

  1. Early ventilation-heart rate breakpoint during incremental cycling exercise.

    Science.gov (United States)

    Gravier, G; Delliaux, S; Ba, A; Delpierre, S; Guieu, R; Jammes, Y

    2014-03-01

    Since previous observations have reported a transient hypoxia at the onset of incremental exercise, we investigated the existence of concomitant ventilatory and heart rate (HR) breakpoints. 33 subjects executed a maximal cycling exercise with averaging over successive 5-s periods of HR, ventilation, tidal volume (VT), mean inspiratory flow rate (VT/Ti), and end-tidal partial pressures of O2 (PETO2) and CO2. In 10 subjects, the transcutaneous partial pressure of O2 (PtcO2) was recorded and the venous blood lactic acid (LA) concentration measured. At the beginning of exercise, PETO2 decreased, reaching a nadir, then progressively increased until the exercise ended. PtcO2 varied in parallel. Whether or not a 0-W cycling period preceded the incremental exercise, the rate of changes in VE, VT, VT/Ti and HR significantly increased when the nadir PO2 was reached. The ventilatory/HR breakpoint was measured at 33±4% of VO2max, whereas the ventilatory threshold (VTh) was detected at 67±4% of VO2max and LA began to increase at 45 to 50% of VO2max. During incremental cycling exercise, we identified the existence of HR and ventilatory breakpoints in advance of both lactate and ventilatory thresholds, which coincided with modest hypoxia and hypercapnia. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Incremental Mining for Regular Frequent Patterns in Vertical Format

    Directory of Open Access Journals (Sweden)

    Vijay Kumar G

    2013-04-01

    Full Text Available In the real world, databases update continuously in several online applications such as supermarkets, network monitoring, web administration, stock markets, etc. Frequent pattern mining is a fundamental and essential area in data mining research. Not only the occurrence frequency of a pattern but also the occurrence behaviour of a pattern may be treated as important criteria to measure the interestingness of a pattern. A frequent pattern is said to be regular frequent if its occurrence behaviour is less than or equal to the user-given regularity threshold. In incremental transactional databases the occurrence frequency and the occurrence behaviour of a pattern change whenever a small set of new transactions is added to the database. It is undesirable to mine regular frequent patterns from scratch. This paper therefore proposes a new algorithm called RFPID (Regular Frequent Pattern mining in Incremental Databases) to mine regular frequent patterns in incremental transactional databases using a vertical data format, which requires only one database scan. The experimental results show our algorithm is efficient in both memory utilization and execution time.
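
    A rough sketch of the vertical-format bookkeeping described above (not the RFPID algorithm itself): each item keeps a tid-list, support is the list length, and regularity is the largest gap between consecutive occurrences, measured against the start and end of the database as well. A pattern is regular frequent when its support meets the minimum and its regularity does not exceed the user-given threshold. Names and data are illustrative.

        from collections import defaultdict

        def tid_lists(transactions):
            """Vertical representation: item -> sorted list of transaction ids."""
            tids = defaultdict(list)
            for tid, items in enumerate(transactions, start=1):
                for item in set(items):
                    tids[item].append(tid)
            return tids

        def regularity(tidlist, db_size):
            """Largest gap between consecutive occurrences, including the gaps
            before the first and after the last occurrence."""
            gaps = [tidlist[0]]
            gaps += [b - a for a, b in zip(tidlist, tidlist[1:])]
            gaps.append(db_size - tidlist[-1])
            return max(gaps)

        def regular_frequent_items(transactions, min_support, max_regularity):
            db_size = len(transactions)
            result = {}
            for item, tids in tid_lists(transactions).items():
                if len(tids) >= min_support and regularity(tids, db_size) <= max_regularity:
                    result[item] = (len(tids), regularity(tids, db_size))
            return result

        # When an increment arrives, only the tid-lists of its items change, so supports
        # and regularities can be refreshed without rescanning the old transactions.
        db = [["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"], ["a"]]
        print(regular_frequent_items(db, min_support=3, max_regularity=2))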

  3. Incremental learning for ν-Support Vector Regression.

    Science.gov (United States)

    Gu, Bin; Sheng, Victor S; Wang, Zhijie; Ho, Derek; Osman, Said; Li, Shuo

    2015-07-01

    The ν-Support Vector Regression (ν-SVR) is an effective regression learning algorithm, which has the advantage of using a parameter ν on controlling the number of support vectors and adjusting the width of the tube automatically. However, compared to ν-Support Vector Classification (ν-SVC) (Schölkopf et al., 2000), ν-SVR introduces an additional linear term into its objective function. Thus, directly applying the accurate on-line ν-SVC algorithm (AONSVM) to ν-SVR will not generate an effective initial solution. It is the main challenge to design an incremental ν-SVR learning algorithm. To overcome this challenge, we propose a special procedure called initial adjustments in this paper. This procedure adjusts the weights of ν-SVC based on the Karush-Kuhn-Tucker (KKT) conditions to prepare an initial solution for the incremental learning. Combining the initial adjustments with the two steps of AONSVM produces an exact and effective incremental ν-SVR learning algorithm (INSVR). Theoretical analysis has proven the existence of the three key inverse matrices, which are the cornerstones of the three steps of INSVR (including the initial adjustments), respectively. The experiments on benchmark datasets demonstrate that INSVR can avoid the infeasible updating paths as far as possible, and successfully converges to the optimal solution. The results also show that INSVR is faster than batch ν-SVR algorithms with both cold and warm starts. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. An Enhanced Visualization Process Model for Incremental Visualization.

    Science.gov (United States)

    Schulz, Hans-Jorg; Angelini, Marco; Santucci, Giuseppe; Schumann, Heidrun

    2016-07-01

    With today's technical possibilities, a stable visualization scenario can no longer be assumed as a matter of course, as underlying data and targeted display setup are much more in flux than in traditional scenarios. Incremental visualization approaches are a means to address this challenge, as they permit the user to interact with, steer, and change the visualization at intermediate time points and not just after it has been completed. In this paper, we put forward a model for incremental visualizations that is based on the established Data State Reference Model, but extends it in ways to also represent partitioned data and visualization operators to facilitate intermediate visualization updates. Partitioned data and operators can be used independently and in combination to strike tailored compromises between output quality, shown data quantity, and responsiveness, i.e., frame rates. We showcase the new expressive power of this model by discussing the opportunities and challenges of incremental visualization in general and its usage in a real world scenario in particular.

  5. Design and Performance Analysis of Incremental Networked Predictive Control Systems.

    Science.gov (United States)

    Pang, Zhong-Hua; Liu, Guo-Ping; Zhou, Donghua

    2016-06-01

    This paper is concerned with the design and performance analysis of networked control systems with network-induced delay, packet disorder, and packet dropout. Based on the incremental form of the plant input-output model and an incremental error feedback control strategy, an incremental networked predictive control (INPC) scheme is proposed to actively compensate for the round-trip time delay resulting from the above communication constraints. The output tracking performance and closed-loop stability of the resulting INPC system are considered for two cases: 1) plant-model match case and 2) plant-model mismatch case. For the former case, the INPC system can achieve the same output tracking performance and closed-loop stability as those of the corresponding local control system. For the latter case, a sufficient condition for the stability of the closed-loop INPC system is derived using the switched system theory. Furthermore, for both cases, the INPC system can achieve a zero steady-state output tracking error for step commands. Finally, both numerical simulations and practical experiments on an Internet-based servo motor system illustrate the effectiveness of the proposed method.

  6. Exploiting Homogeneity of Density in Incremental Hierarchical Clustering

    Directory of Open Access Journals (Sweden)

    Dwi H. Widiyantoro

    2006-11-01

    Full Text Available Hierarchical clustering is an important tool in many applications. As it involves a large data set that proliferates over time, reclustering the data set periodically is not an efficient process. Therefore, the ability to incorporate a new data set incrementally into an existing hierarchy becomes increasingly demanding. This article describes Homogen, a system that employs a new algorithm for generating a hierarchy of concepts and clusters incrementally from a stream of observations. The system aims to construct a hierarchy that satisfies the homogeneity and the monotonicity properties. Working in a bottom-up fashion, a new observation is placed in the hierarchy and a sequence of hierarchy restructuring processes is performed only in regions that have been affected by the presence of the new observation. Additionally, it combines multiple restructuring techniques that address different restructuring objectives to get a synergistic effect. The system has been tested on a variety of domains including structured and unstructured data sets. The experimental results reveal that the system is able to construct a concept hierarchy that is consistent regardless of the input data order and whose quality is comparable to the quality of those produced by non incremental clustering algorithms.

  7. A New Evolutionary-Incremental Framework for Feature Selection

    Directory of Open Access Journals (Sweden)

    Mohamad-Hoseyn Sigari

    2014-01-01

    Full Text Available Feature selection is an NP-hard problem from the viewpoint of algorithm design and it is one of the main open problems in pattern recognition. In this paper, we propose a new evolutionary-incremental framework for feature selection. The proposed framework can be applied to an ordinary evolutionary algorithm (EA) such as the genetic algorithm (GA) or invasive weed optimization (IWO). This framework proposes some generic modifications of ordinary EAs to be compatible with the variable length of solutions. In this framework, the solutions related to the primary generations have short length. Then, the length of solutions may be increased gradually through the generations. In addition, our evolutionary-incremental framework deploys two new operators called addition and deletion operators, which change the length of solutions randomly. For evaluation of the proposed framework, we use it for feature selection in a face recognition application. In this regard, we applied our feature selection method to a robust face recognition algorithm which is based on the extraction of Gabor coefficients. Experimental results show that our proposed evolutionary-incremental framework can efficiently select a small number of features from thousands of existing features. Comparison of the proposed method with previous methods shows that our framework is comprehensive, robust, and well-defined for applying many EAs to feature selection.
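
    A hedged toy version of the variable-length idea described above (not the authors' GA/IWO implementation): solutions are subsets of feature indices, the population starts with very short solutions, and addition and deletion operators change the solution length at random; the fitness function is a stand-in least-squares score on synthetic data.

        import random
        import numpy as np

        rng = np.random.default_rng(0)
        random.seed(0)

        # Synthetic data: 50 features, only features 0-4 carry signal about y
        X = rng.standard_normal((200, 50))
        y = X[:, :5] @ np.array([1.0, 0.8, 0.6, 0.4, 0.2]) + 0.5 * rng.standard_normal(200)

        def fitness(features):
            """Stand-in score: R^2 of a least-squares fit on the selected features,
            lightly penalized by subset size."""
            if not features:
                return -1.0
            Xs = np.column_stack([np.ones(len(y)), X[:, sorted(features)]])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            resid = y - Xs @ beta
            r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
            return r2 - 0.01 * len(features)

        def addition(sol):      # lengthen the solution by one random unused feature
            unused = set(range(X.shape[1])) - sol
            return sol | {random.choice(sorted(unused))} if unused else sol

        def deletion(sol):      # shorten the solution by dropping one random feature
            return sol - {random.choice(sorted(sol))} if len(sol) > 1 else sol

        # Evolutionary loop: primary generations use short solutions; the addition and
        # deletion operators then let the length grow or shrink over the generations.
        population = [{random.randrange(X.shape[1])} for _ in range(20)]
        for gen in range(40):
            population.sort(key=fitness, reverse=True)
            parents = population[:10]
            children = [addition(p) if random.random() < 0.7 else deletion(p) for p in parents]
            population = parents + children

        best = max(population, key=fitness)
        print("selected features:", sorted(best), " fitness:", round(fitness(best), 3))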

  8. Baseline LAW Glass Formulation Testing

    Energy Technology Data Exchange (ETDEWEB)

    Kruger, Albert A. [USDOE Office of River Protection, Richland, WA (United States); Mooers, Cavin [The Catholic University of America, Washington, DC (United States). Vitreous State Lab.; Bazemore, Gina [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Pegg, Ian L. [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Hight, Kenneth [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Lai, Shan Tao [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Buechele, Andrew [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Rielley, Elizabeth [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Gan, Hao [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Muller, Isabelle S. [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Cecil, Richard [The Catholic University of America, Washington, DC (United States). Vitreous State Lab

    2013-06-13

    The major objective of the baseline glass formulation work was to develop and select glass formulations that are compliant with contractual and processing requirements for each of the LAW waste streams. Other objectives of the work included preparation and characterization of glasses with respect to the properties of interest, optimization of sulfate loading in the glasses, evaluation of ability to achieve waste loading limits, testing to demonstrate compatibility of glass melts with melter materials of construction, development of glass formulations to support ILAW qualification activities, and identification of glass formulation issues with respect to contract specifications and processing requirements.

  9. Formulation and characterisation of tetracycline-containing bioadhesive polymer networks designed for the treatment of periodontal disease.

    Science.gov (United States)

    Jones, David S; Lawlor, Michelle S; Woolfson, A David

    2004-01-01

    This study described the drug release, rheological (dynamic and flow) and textural/mechanical properties of a series of formulations composed of 15% w/w polymethylvinylether-co-maleic anhydride (PMVE-MA), 0-9% w/w polyvinylpyrrolidone (PVP) and containing 1-5% w/w tetracycline hydrochloride, designed for the treatment of periodontal disease. All formulations exhibited pseudoplastic flow with minimal thixotropy. Increasing the concentration of PVP sequentially increased the zero-rate viscosity (derived from the Cross model) and the hardness and compressibility of the formulations (derived from texture profile analysis). These effects may be attributed to increased polymer entanglement and, in light of the observed synergy between the two polymers with respect to their textural and rheological properties, interaction between PVP and PMVE-MA. Increasing the concentration of PVP increased the storage and loss moduli yet decreased the loss tangent of all formulations, indicative of increased elastic behaviour. Synergy between the two polymers with respect to their viscoelastic properties was observed. Increased adhesiveness, associated with increased concentrations of PVP, was ascribed to the increasing bioadhesion and tack of the formulations. The effect of increasing drug concentration on the rheological and textural properties was dependent on PVP concentration. At lower concentrations (0, 3% w/w) no effect was observed whereas, in the presence of 9% w/w PVP, increasing drug concentration increased formulation elasticity, zero rate viscosity, hardness and compressibility. These observations were ascribed to the greater mass of suspended drug in formulations containing the highest concentration of PVP. Drug release from formulations containing 6 and 9% PVP (and 5% w/w drug) was prolonged and swelling/diffusion controlled. Based on the drug release, rheological and textural properties, it is suggested that the formulation containing 15% w/w PMVE-MA, 6% w/w PVP and

  10. Message formulation and structural assembly: Describing “easy” and “hard” events with preferred and dispreferred syntactic structures

    NARCIS (Netherlands)

    Velde, M. van de; Meyer, A.S.; Konopka, A.E.

    2014-01-01

    When formulating simple sentences to describe pictured events, speakers look at the referents they are describing in the order of mention. Accounts of incrementality in sentence production rely heavily on analyses of this gaze-speech link. To identify systematic sources of variability in message and

  11. Saltstone Clean Cap Formulation

    Energy Technology Data Exchange (ETDEWEB)

    Langton, C

    2005-04-22

    The current operation strategy for using Saltstone Vault 4 to receive 0.2 Ci/gallon salt solution waste involves pouring a clean grout layer over the radioactive grout prior to initiating pour into another cell. This will minimize the radiating surface area and reduce the dose rate at the vault and surrounding area. The Clean Cap will be used to shield about four feet of Saltstone poured into a Z-Area vault cell prior to moving to another cell. The minimum thickness of the Clean Cap layer will be determined by the cesium concentration and resulting dose levels and it is expected to be about one foot thick based on current calculations for 0.1 Ci Saltstone that is produced in the Saltstone process by stabilization of 0.2 Ci salt solution. This report documents experiments performed to identify a formulation for the Clean Cap. Thermal transient calculations, adiabatic temperature rise measurements, pour height, time between pour calculations and shielding calculations were beyond the scope and time limitations of this study. However, data required for shielding calculations (composition and specific gravity) are provided for shielding calculations. The approach used to design a Clean Cap formulation was to produce a slurry from the reference premix (10/45/45 weight percent cement/slag/fly ash) and domestic water that resembled as closely as possible the properties of the Saltstone slurry. In addition, options were investigated that may offer advantages such as less bleed water and less heat generation. The options with less bleed water required addition of dispersants. The options with lower heat contained more fly ash and less slag. A mix containing 10/45/45 weight percent cement/slag/fly ash with a water to premix ratio of 0.60 is recommended for the Clean Cap. Although this mix may generate more than 3 volume percent standing water (bleed water), it has rheological, mixing and flow properties that are similar to previously processed Saltstone. The recommended

  12. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    NARCIS (Netherlands)

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2006-01-01

    textabstractMany product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  13. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    NARCIS (Netherlands)

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2007-01-01

    textabstractMany product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  14. Operator Formulation of Classical Mechanics.

    Science.gov (United States)

    Cohn, Jack

    1980-01-01

    Discusses the construction of an operator formulation of classical mechanics which is directly concerned with wave packets in configuration space and is more similar to that of conventional quantum theory than other extant operator formulations of classical mechanics. (Author/HM)

  15. Formulation of supergravity without superspace

    CERN Document Server

    Ferrara, S

    1979-01-01

    Supergravity, the particle theory which unifies under a unique gauge principle the quantum-mechanical concept of spin and space-time geometry, is formulated in terms of quantities defined over Minkowski space-time. The relation between this formulation and the formulation which uses superspace, the space-time supplemented by spinning degrees of freedom, is also briefly discussed.

  16. Incremental Innovation and Competitive Pressure in the Presence of Discrete Innovation

    DEFF Research Database (Denmark)

    Ghosh, Arghya; Kato, Takao; Morita, Hodaka

    2017-01-01

    Technical progress consists of improvements made upon the existing technology (incremental innovation) and innovative activities aiming at entirely new technology (discrete innovation). Incremental innovation is often of limited relevance to the new technology invented by successful discrete...... innovation. Previous theoretical studies have indicated that higher competitive pressure measured by product substitutability increases incremental innovation. In contrast, we find that intensified competition can decrease incremental innovation. A firm's market share upon its failure in discrete innovation...

  17. Thickness control in a new flexible hybrid incremental sheet forming process

    OpenAIRE

    Zhang, H.; LU, B; Chen,J.; Feng,S.; Li, Z; Long, H.

    2017-01-01

    Incremental sheet forming is a cost-effective process for rapid manufacturing of sheet metal products. However, incremental sheet forming also has some limitations such as severe sheet thinning and long processing time. These limitations hamper the forming part quality and production efficiency, thus restricting the incremental sheet forming application in industrial practice. To overcome the problem of sheet thinning, a variety of processes, such as multi-step incremental sheet forming, have...

  18. Global Modeling and Data Assimilation. Volume 11; Documentation of the Tangent Linear and Adjoint Models of the Relaxed Arakawa-Schubert Moisture Parameterization of the NASA GEOS-1 GCM; 5.2

    Science.gov (United States)

    Suarez, Max J. (Editor); Yang, Wei-Yu; Todling, Ricardo; Navon, I. Michael

    1997-01-01

    A detailed description of the development of the tangent linear model (TLM) and its adjoint model of the Relaxed Arakawa-Schubert moisture parameterization package used in the NASA GEOS-1 C-Grid GCM (Version 5.2) is presented. The notational conventions used in the TLM and its adjoint codes are described in detail.

  19. A sequential tree approach for incremental sequential pattern mining

    Indian Academy of Sciences (India)

    RAJESH KUMAR BOGHEY; SHAILENDRA SINGH

    2016-12-01

    ‘‘Sequential pattern mining’’ is a prominent and significant method to explore the knowledge and innovation hidden in large databases. Common sequential pattern mining algorithms handle static databases. In practice, however, the database grows exponentially, which creates the need for mining algorithms that can cope with updates. Once the database is updated, the previous mining result becomes incorrect, and the entire mining process would have to be restarted for the newly updated sequential database. To avoid rescanning the entire database, incremental mining of sequential patterns is used. The previous approaches, systems, and techniques are a priori-based frameworks, whereas more advanced and sophisticated pattern mining techniques give the desired solution. We propose an algorithm called STISPM for incremental mining of sequential patterns using the sequence tree space structure. STISPM uses a depth-first approach along with backward tracking and a dynamic lookahead pruning strategy that removes infrequent and irregular patterns. The path from the root node to any leaf node depicts a sequential pattern in the database. The structural characteristics of the sequence tree make it convenient and appropriate for incremental sequential pattern mining. The sequence tree also stores all the sequential patterns with their counts and statistics, so whenever the support threshold is changed, our algorithm, using the frequent sequence tree as the storage structure, can find all the sequential patterns without mining the database once again.

  20. Metastable Pain-Attention Dynamics during Incremental Exhaustive Exercise

    Science.gov (United States)

    Slapšinskaitė, Agnė; Hristovski, Robert; Razon, Selen; Balagué, Natàlia; Tenenbaum, Gershon

    2017-01-01

    Background: Pain attracts attention on the bodily regions. Attentional allocation toward pain results from the neural communication across the brain-wide network “connectome” which consists of pain-attention related circuits. Connectome is intrinsically dynamic and spontaneously fluctuating on multiple time-scales. The present study delineates the pain-attention dynamics during incremental cycling performed until volitional exhaustion and investigates the potential presence of nested metastable dynamics. Method: Fifteen young and physically active adults completed a progressive incremental cycling test and reported their discomfort and pain on a body map every 15 s. Results: The analyses revealed that the number of body locations with perceived pain and discomfort increased throughout five temporal windows reaching an average of 4.26 ± 0.59 locations per participant. A total of 37 different locations were reported and marked as painful for all participants throughout the cycling task. Significant differences in entropy were observed between all temporal windows except the fourth and fifth windows. Transient dynamics of bodily locations with perceived discomfort and pain were spanned by three principal components. The metastable dynamics of the body pain locations groupings over time were discerned by three time scales: (1) the time scale of shifts (15 s); (2) the time scale of metastable configurations (100 s), and (3) the observational time scale (1000 s). Conclusion: The results of this study indicate that body locations perceived as painful increase throughout the incremental cycling task following a switching metastable and nested dynamics. These findings support the view that human brain is intrinsically organized into active, mutually interacting complex and nested functional networks, and that subjective experiences inherent in pain perception depict identical dynamical principles to the neural tissue in the brain. PMID:28111563

  1. Metastable Pain-Attention Dynamics during Incremental Exhaustive Exercise.

    Science.gov (United States)

    Slapšinskaitė, Agnė; Hristovski, Robert; Razon, Selen; Balagué, Natàlia; Tenenbaum, Gershon

    2016-01-01

    Background: Pain attracts attention on the bodily regions. Attentional allocation toward pain results from the neural communication across the brain-wide network "connectome" which consists of pain-attention related circuits. Connectome is intrinsically dynamic and spontaneously fluctuating on multiple time-scales. The present study delineates the pain-attention dynamics during incremental cycling performed until volitional exhaustion and investigates the potential presence of nested metastable dynamics. Method: Fifteen young and physically active adults completed a progressive incremental cycling test and reported their discomfort and pain on a body map every 15 s. Results: The analyses revealed that the number of body locations with perceived pain and discomfort increased throughout five temporal windows reaching an average of 4.26 ± 0.59 locations per participant. A total of 37 different locations were reported and marked as painful for all participants throughout the cycling task. Significant differences in entropy were observed between all temporal windows except the fourth and fifth windows. Transient dynamics of bodily locations with perceived discomfort and pain were spanned by three principal components. The metastable dynamics of the body pain locations groupings over time were discerned by three time scales: (1) the time scale of shifts (15 s); (2) the time scale of metastable configurations (100 s), and (3) the observational time scale (1000 s). Conclusion: The results of this study indicate that body locations perceived as painful increase throughout the incremental cycling task following a switching metastable and nested dynamics. These findings support the view that human brain is intrinsically organized into active, mutually interacting complex and nested functional networks, and that subjective experiences inherent in pain perception depict identical dynamical principles to the neural tissue in the brain.

  2. The associated random walk and martingales in random walks with stationary increments

    CERN Document Server

    Grey, D R

    2010-01-01

    We extend the notion of the associated random walk and the Wald martingale in random walks where the increments are independent and identically distributed to the more general case of stationary ergodic increments. Examples are given where the increments are Markovian or Gaussian, and an application in queueing is considered.
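
    For orientation, a LaTeX sketch of the classical i.i.d. objects that this record generalizes (standard definitions, not notation taken from the paper): the Wald martingale built from the moment generating function of the increments, and the associated random walk obtained by exponential tilting.

        S_n = X_1 + \cdots + X_n, \qquad \varphi(\theta) = \mathbb{E}\,e^{\theta X_1},

        M_n = \frac{e^{\theta S_n}}{\varphi(\theta)^n}
        \quad\text{is a martingale whenever } \varphi(\theta) < \infty,

        % associated random walk: increments distributed under the tilted law
        \tilde{P}(X_1 \in dx) = \frac{e^{\theta_0 x}}{\varphi(\theta_0)}\,P(X_1 \in dx),
        \qquad \theta_0 \neq 0 \text{ with } \varphi(\theta_0) = 1 .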

  3. Global Combat Support System - Army Increment 2 (GCSS-A Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report for the Global Combat Support System - Army Increment 2 (GCSS-A Inc 2), produced from the Defense Acquisition Management Information Retrieval (DAMIR) system. The program's two components, GCSS-Army and Product Lifecycle Management Plus, are managed as a single program; GCSS-A is being developed incrementally, and Increment 1 provides ... The remainder of the record consists of a list of common acronyms and abbreviations for MAIS programs (e.g., CAE - Component Acquisition Executive; CDD - Capability Development Document; CPD - Capability Production Document).

  4. 77 FR 9653 - Comment Sought on Potential Data for Connect America Fund Phase One Incremental Support

    Science.gov (United States)

    2012-02-17

    ... COMMISSION Comment Sought on Potential Data for Connect America Fund Phase One Incremental Support AGENCY... incremental support. DATES: Comments are due on or before March 19, 2012. ADDRESSES: Interested parties may... incremental support is designed to provide an immediate boost to broadband deployment in areas that are...

  5. 77 FR 25747 - Certain Incremental Dental Positioning Adjustment Appliances and Methods of Producing Same...

    Science.gov (United States)

    2012-05-01

    ... COMMISSION Certain Incremental Dental Positioning Adjustment Appliances and Methods of Producing Same; Notice... importation, and the sale within the United States after importation of certain incremental dental positioning... importing, offering for sale, and selling for importation in the United States incremental dental...

  6. How big are the increments of l^p-valued Gaussian processes?

    Institute of Scientific and Technical Information of China (English)

    林正炎

    1997-01-01

    Let {Y_k(·)} be a sequence of independent Gaussian processes with incremental variances σ_k^2(h). The large increments for the l^p-valued process Y(·) with bounded σ(p, h) are investigated. As an example, the large increments of the infinite-dimensional fractional Ornstein-Uhlenbeck process in l^p are given. The method can also be applied to certain processes with stationary increments.

  7. Nuclear cycler: An incremental approach to the deflection of asteroids

    Science.gov (United States)

    Vasile, Massimiliano; Thiry, Nicolas

    2016-04-01

    This paper introduces a novel deflection approach based on nuclear explosions: the nuclear cycler. The idea is to combine the effectiveness of nuclear explosions with the controllability and redundancy offered by slow push methods within an incremental deflection strategy. The paper will present an extended model for single nuclear stand-off explosions in the proximity of elongated ellipsoidal asteroids, and a family of natural formation orbits that allows the spacecraft to deploy multiple bombs while being shielded by the asteroid during the detonation.

  8. Incremental Entropy Relation as an Alternative to MaxEnt

    Directory of Open Access Journals (Sweden)

    Montse Casas

    2008-06-01

    Full Text Available We show that, to generate the statistical operator appropriate for a given system, and as an alternative to Jaynes' MaxEnt approach, which refers to the entropy S, one can use instead the increments ΔS in S. To such an effect, one uses the macroscopic thermodynamic relation that links ΔS to changes in (i) the internal energy E and (ii) the remaining M relevant extensive quantities A_i, i = 1, ..., M, that characterize the context one is working with.
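
    A hedged LaTeX rendering of the macroscopic relation the abstract refers to, written in a standard Gibbs-Jaynes form; the paper's precise notation and sign conventions may differ.

        \Delta S \;=\; \beta\,\Delta E \;+\; \sum_{i=1}^{M} \lambda_i\,\Delta A_i ,
        \qquad \beta = \frac{1}{k_B T},

        % the statistical operator constrained by E and the A_i has the usual Gibbs-Jaynes form
        \hat{\rho} \;=\; \frac{1}{Z}\,\exp\!\Big(-\beta \hat{H} - \sum_{i=1}^{M} \lambda_i \hat{A}_i\Big),
        \qquad Z = \mathrm{Tr}\,\exp\!\Big(-\beta \hat{H} - \sum_{i=1}^{M} \lambda_i \hat{A}_i\Big).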

  9. Fast Discovering Frequent Patterns for Incremental XML Queries

    Institute of Scientific and Technical Information of China (English)

    PENG Dun-lu; QIU Yang

    2004-01-01

    It is nontrivial to maintain such discovered frequent query patterns in a real XML-DBMS because the transaction database of queries may undergo frequent updates, and such updates may not only invalidate some existing frequent query patterns but also generate some new frequent query patterns. In this paper, two incremental updating algorithms, FUXQMiner and FUFXQMiner, are proposed for efficient maintenance of discovered frequent query patterns and generation of new frequent query patterns when new XML queries are added to the database. Experimental results from our implementation show that the proposed algorithms have good performance.

  10. AN INCREMENTAL UPDATING ALGORITHM FOR MINING ASSOCIATION RULES

    Institute of Scientific and Technical Information of China (English)

    Xu Baowen; Yi Tong; et al.

    2002-01-01

    In this letter, on the basis of the Frequent Pattern (FP) tree, a support function to update the FP-tree is introduced, and an Incremental FP (IFP) algorithm for mining association rules is proposed. The IFP algorithm considers not only adding new data to the database but also removing old data from it. Furthermore, it reduces the five update cases to three. The algorithm proposed in this letter avoids generating large numbers of candidate items and is highly efficient.

  11. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...... show that for this class of processes the optimal endowment and strategy can be expressed more explicitly. The corresponding formulas involve the moment resp. cumulant generating function of the underlying process and a Laplace- or Fourier-type representation of the contingent claim. An example...

  12. Testing single point incremental forming molds for thermoforming operations

    Science.gov (United States)

    Afonso, Daniel; de Sousa, Ricardo Alves; Torcato, Ricardo

    2016-10-01

    Low pressure polymer processing processes such as thermoforming or rotational molding use much simpler molds than high pressure processes like injection molding. However, despite the low forces involved in the process, mold manufacturing for these operations is still a very material, energy and time consuming operation. The goal of the research is to develop and validate a method for manufacturing plastically formed sheet metal molds by the single point incremental forming (SPIF) operation for thermoforming operations. Stewart platform based SPIF machines allow the forming of thick metal sheets, granting the required structural stiffness for the mold surface while keeping short manufacturing lead times and low thermal inertia.

  13. TRANSITIONAL SHELTER FOR DISASTER VICTIMS: BAMBOO CORE AND INCREMENTAL HOUSES

    Directory of Open Access Journals (Sweden)

    JULISTIONO Eunike Kristi

    2014-07-01

    Full Text Available Indonesia has experienced many catastrophic disasters since 2004. Tsunamis, earthquakes, floods and volcanic eruptions have caused devastating destruction of houses, land, belongings, and welfare. In the post-disaster recovery process, it is essential to provide transitional shelter, especially for low-income communities, while preparing the reconstruction of their permanent housing. This paper presents a bamboo incremental house as a transitional shelter for disaster victims in Jember. An empathic approach was taken in developing the house design, taking into consideration the disaster victims' needs, perceptions, and economic conditions, as well as the local materials, technology and financial support available.

  14. Incremental stress-strain law for graphite under multiaxial loadings

    Energy Technology Data Exchange (ETDEWEB)

    Tzung, F.

    1979-11-01

    An incremental stress-strain law describing the nonlinear, compressible and asymmetric behavior of graphite under tension and compression as well as complex loadings is derived from a dry-friction model in the theory of plasticity. The stress-strain relations are calibrated against longitudinal-lateral strain measurements on specimens under uniaxial tension-compression. Agreement with experimentally determined curves from biaxial loading experiments is shown. Finite element computations using the present model also agree with strain measurements from diametral compression and 4-point bend tests of graphite.
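
    For orientation, an incremental (tangent) stress-strain law of this general kind is usually written with a state-dependent tangent operator; the generic elastoplastic form below is a textbook expression, not the specific graphite law of this record:

    $$
    \mathrm{d}\boldsymbol{\sigma} = \mathbf{C}_t\,\mathrm{d}\boldsymbol{\varepsilon},
    \qquad
    \mathbf{C}_t = \mathbf{C}^{e} - \frac{\left(\mathbf{C}^{e}:\dfrac{\partial g}{\partial \boldsymbol{\sigma}}\right)\otimes\left(\dfrac{\partial f}{\partial \boldsymbol{\sigma}}:\mathbf{C}^{e}\right)}{h + \dfrac{\partial f}{\partial \boldsymbol{\sigma}}:\mathbf{C}^{e}:\dfrac{\partial g}{\partial \boldsymbol{\sigma}}},
    $$

    where $f$ is the yield function, $g$ the plastic potential and $h$ the hardening modulus.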

  15. Incremental View Computation Model for Object-Oriented Information

    Institute of Scientific and Technical Information of China (English)

    Guo Hai-ying; Zhong Ting-xiu

    2004-01-01

    We introduce a model to implement incremental update of views. The principle is that, unless a view is accessed, the modifications related to the view are not computed; the modification information is used only when views are updated. Modification information is embodied in the classes (including inheritance classes and nesting classes) that derive the view. We establish a modify list consisting of tuples (one tuple for each view related to the class) to implement view update. A method is used to keep views from being re-updated.
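
    As an illustration of deferred, access-triggered view maintenance with a modify list, a minimal sketch follows; the class, the method names and the simple select-style view are assumptions for the example, not the paper's object-oriented model.

```python
class LazyView:
    """Minimal sketch (not the paper's model): record modifications to the
    base data and fold them into the materialized view only on access."""

    def __init__(self, base_rows, predicate):
        self.predicate = predicate
        self.view = [r for r in base_rows if predicate(r)]
        self.modify_list = []            # pending (op, row) tuples

    def notify(self, op, row):
        # a base-data change is recorded; nothing is recomputed yet
        self.modify_list.append((op, row))

    def rows(self):
        # fold pending modifications into the view only when accessed
        for op, row in self.modify_list:
            if op == "insert" and self.predicate(row):
                self.view.append(row)
            elif op == "delete" and row in self.view:
                self.view.remove(row)
        self.modify_list.clear()
        return list(self.view)

v = LazyView([1, 5, 8], predicate=lambda x: x > 3)
v.notify("insert", 9)
v.notify("delete", 5)
print(v.rows())   # [8, 9]
```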

  16. Automating the Incremental Evolution of Controllers for Physical Robots

    DEFF Research Database (Denmark)

    Faina, Andres; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range...... constitute one such problem. Another lies in the embodiment of the evolutionary processes, which links to the first, but focuses on how evolution can act on real agents and occur independently from simulation, that is, going from being, as Eiben et al. put it, “the evolution of things, rather than just...

  17. Intermittent versus Continuous Incremental Field Tests: Are Maximal Variables Interchangeable?

    Science.gov (United States)

    Carminatti, Lorival J; Possamai, Carlos A P; de Moraes, Marcelo; da Silva, Juliano F; de Lucas, Ricardo D; Dittrich, Naiandra; Guglielmo, Luiz G A

    2013-01-01

    The aim of the present study was to compare physiological responses derived from an incremental progressive field test with a constant speed test, i.e. an intermittent versus a continuous protocol. Two progressive maximum tests characterized by increasing speed were used: Carminatti's test (T-CAR) and the Vameval test (T-VAM). T-CAR is an intermittent incremental test performed as shuttle runs, while T-VAM is a continuous incremental test performed on an athletic track. Eighteen physically active, healthy young subjects (21.9 ± 2.0 years; 76.5 ± 8.6 kg, 1.78 ± 0.08 m, 11.2 ± 5.4% body fat) volunteered for this study. Subjects performed four different maximum test sessions in the field: two incremental tests and two time-to-exhaustion tests (TTE) at the peak test velocities (PV). No significant differences were found for PV (T-CAR = 15.6 ± 1.2; T-VAM = 15.5 ± 1.3 km·h(-1)) or maximal HR (T-CAR = 195 ± 11; T-VAM = 194 ± 14 bpm). During the TTE tests, there were no significant differences in HR (192 ± 12 bpm for both TTE at T-CAR and at T-VAM speeds). However, there was a significant difference in TTE (p = 0.04) (TTE at T-CAR speed = 379 ± 84 s, TTE at T-VAM speed = 338 ± 58 s) with a low correlation (r = 0.41). The blood lactate concentration measured at the end of the TTE tests showed no significant difference (13.2 ± 2.4 vs. 12.9 ± 2.4 mmol·l(-1)). Based on the present findings, it is suggested that the maximal variables derived from T-CAR and T-VAM are interchangeable in the design of training programs. Key points: T-CAR is an intermittent shuttle run test that predicts the maximal aerobic speed with accuracy, hence test results could be interchangeable with those of continuous straight-line tests; T-CAR provides valid field data for evaluating aerobic fitness; in comparison with T-VAM, T-CAR may be a more favourable way to prescribe intermittent training using a shuttle-running protocol.

  18. Comparison of the incremental and hierarchical methods for crystalline neon.

    Science.gov (United States)

    Nolan, S J; Bygrave, P J; Allan, N L; Manby, F R

    2010-02-24

    We present a critical comparison of the incremental and hierarchical methods for the evaluation of the static cohesive energy of crystalline neon. Both of these schemes make it possible to apply the methods of molecular electronic structure theory to crystalline solids, offering a systematically improvable alternative to density functional theory. Results from both methods are compared with previous theoretical and experimental studies of solid neon and potential sources of error are discussed. We explore the similarities of the two methods and demonstrate how they may be used in tandem to study crystalline solids.
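
    For orientation, the incremental method referred to here is commonly presented as a many-body expansion of the energy over groups (atoms or localized orbitals); the generic form below is a standard way of writing it, not a formula quoted from this record:

    $$
    E = \sum_{i} \varepsilon_i + \sum_{i<j} \Delta\varepsilon_{ij} + \sum_{i<j<k} \Delta\varepsilon_{ijk} + \cdots,
    \qquad
    \Delta\varepsilon_{ij} = \varepsilon_{ij} - \varepsilon_i - \varepsilon_j,
    $$

    with higher-order increments defined analogously, so that the expansion can be truncated once the increments become negligible.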

  19. An Approach to Incremental Design of Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian;

    2001-01-01

    In this paper we present an approach to incremental design of distributed embedded systems for hard real-time applications. We start from an already existing system running a set of applications and the design problem is to implement new functionality on this system. Thus, we propose mapping...... strategies of functionality so that the already running functionality is not disturbed and there is a good chance that, later, new functionality can easily be mapped on the resulting system. The mapping and scheduling for hard real-time embedded systems are considered in the context of a realistic communication...

  20. Transferring the Incremental Capacity Analysis to Lithium-Sulfur Batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Kalogiannis, Theodoros; Purkayastha, Rajlakshmi

    2017-01-01

    In order to investigate the battery degradation and to estimate their health, various techniques can be applied. One of them, which is widely used for Lithium-ion batteries, is the incremental capacity analysis (ICA). In this work, we apply the ICA to Lithium-Sulfur batteries, which differ in many...... aspects from Lithium-ion batteries and possess unique behavior. One of the challenges of applying the ICA to Lithium-Sulfur batteries is the representation of the IC curves, as their voltage profiles are often non-monotonic, resulting in more complex IC curves. The ICA is at first applied to charge...

  1. Finite-element formulations for problems of large elastic-plastic deformation

    Science.gov (United States)

    Mcmeeking, R. M.; Rice, J. R.

    1975-01-01

    An Eulerian finite element formulation is presented for problems of large elastic-plastic flow. The method is based on Hill's variational principle for incremental deformations, and is ideally suited to isotropically hardening Prandtl-Reuss materials. Further, the formulation is given in a manner which allows any conventional finite element program, for 'small strain' elastic-plastic analysis, to be simply and rigorously adapted to problems involving arbitrary amounts of deformation and arbitrary levels of stress in comparison to plastic deformation moduli. The method is applied to a necking bifurcation analysis of a bar in plane-strain tension. The paper closes with a unified general formulation of finite element equations, both Lagrangian and Eulerian, for large deformations, with arbitrary choice of the conjugate stress and strain measures. Further, a discussion is given of other proposed formulations for elastic-plastic finite element analysis at large strain, and the inadequacies of some of these are commented upon.
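
    As a reminder of what an incremental (tangent) finite element formulation produces in practice, one load or time increment is typically reduced to a linearized system of the generic textbook form below (not a formula taken from this record):

    $$
    \mathbf{K}_T(\mathbf{u}_n)\,\Delta\mathbf{u} = \mathbf{F}^{\mathrm{ext}}_{n+1} - \mathbf{F}^{\mathrm{int}}(\mathbf{u}_n),
    \qquad
    \mathbf{K}_T = \int_{V} \mathbf{B}^{\mathsf{T}}\,\mathbf{C}_t\,\mathbf{B}\,\mathrm{d}V \;+\; \mathbf{K}_{\sigma},
    $$

    where $\mathbf{C}_t$ is the material tangent and $\mathbf{K}_{\sigma}$ the geometric (initial-stress) stiffness that becomes important at large strain.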

  2. Finite element formulations for problems of large elastic-plastic deformation

    Science.gov (United States)

    Mcmeeking, R. M.; Rice, J. R.

    1974-01-01

    An Eulerian finite element formulation is presented for problems of large elastic-plastic flow. The method is based on Hill's variational principle for incremental deformations, and is suited to isotropically hardening Prandtl-Reuss materials. The formulation is given in a manner which allows any conventional finite element program for "small strain" elastic-plastic analysis to be simply and rigorously adapted to problems involving arbitrary amounts of deformation and arbitrary levels of stress in comparison to plastic deformation moduli. The method is applied to a necking bifurcation analysis of a bar in plane-strain tension. A unified general formulation of finite element equations, both Lagrangian and Eulerian, for large deformations with arbitrary choice of the conjugate stress and strain measures is presented, and a discussion is given of other proposed formulations for elastic-plastic finite element analysis at large strain.

  3. Novel Formulations for Antimicrobial Peptides

    Directory of Open Access Journals (Sweden)

    Ana Maria Carmona-Ribeiro

    2014-10-01

    Full Text Available Peptides in general hold much promise as a major ingredient in novel supramolecular assemblies. They may become essential in vaccine design, antimicrobial chemotherapy, cancer immunotherapy, food preservation, organ transplants, the design of novel materials for dentistry, formulations against diabetes and other strategically important applications. This review discusses how novel formulations may improve the therapeutic index of antimicrobial peptides by protecting their activity and improving their bioavailability. The diversity of novel formulations using lipids, liposomes, nanoparticles, polymers, micelles, etc., within the limits of nanotechnology, may also provide novel applications going beyond antimicrobial chemotherapy.

  4. Numerical simulation of high speed incremental forming of aluminum alloy

    Science.gov (United States)

    Giuseppina, Ambrogio; Teresa, Citrea; Luigino, Filice; Francesco, Gagliardi

    2013-12-01

    In this study, an innovative process is analyzed with the aim of satisfying industrial requirements such as process flexibility, differentiation and customization of products, cost reduction, minimization of execution time and sustainable production. The attention is focused on the incremental forming process, nowadays used in fields such as rapid prototyping, the medical sector, the architectural industry, aerospace and marine applications, and the production of molds and dies. Incremental forming consists in deforming only a small region of the workpiece through a punch driven by an NC machine. SPIF is the variant of the process considered here, in which the punch produces local deformation without dies and molds; consequently, the final product geometry can be changed by the control of an actuator without requiring a set of different tools. The drawback of this process is its slowness. The aim of this study is to assess the feasibility of IF at high speeds. An experimental campaign was performed on a high-speed CNC lathe to test process feasibility and the influence on material formability, mainly for aluminum alloys. The first results show that the material presents the same performance as in conventional-speed IF and, in some cases, better behavior due to the temperature field. An accurate numerical simulation has been performed to investigate the material behavior during the high-speed process, substantially confirming the experimental evidence.

  5. A fast flexible docking method using an incremental construction algorithm.

    Science.gov (United States)

    Rarey, M; Kramer, B; Lengauer, T; Klebe, G

    1996-08-23

    We present an automatic method for docking organic ligands into protein binding sites. The method can be used in the design process of specific protein ligands. It combines an appropriate model of the physico-chemical properties of the docked molecules with efficient methods for sampling the conformational space of the ligand. If the ligand is flexible, it can adopt a large variety of different conformations. Each such minimum in conformational space presents a potential candidate for the conformation of the ligand in the complexed state. Our docking method samples the conformational space of the ligand on the basis of a discrete model and uses a tree-search technique for placing the ligand incrementally into the active site. For placing the first fragment of the ligand into the protein, we use hashing techniques adapted from computer vision. The incremental construction algorithm is based on a greedy strategy combined with efficient methods for overlap detection and for the search of new interactions. We present results on 19 complexes for which the binding geometry has been determined crystallographically. All considered ligands are docked in at most three minutes on a current workstation. The experimentally observed binding mode of the ligand is reproduced with 0.5 to 1.2 Å rms deviation. It is almost always found among the highest-ranking conformations computed.
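
    A toy sketch of the incremental-construction idea (a greedy tree search over fragment placements) is given below; it is not the published docking program, and the fragment names, candidate placements and scoring function are placeholders.

```python
import itertools

def incremental_construction(fragments, placements, score, beam_width=3):
    """Toy sketch of incremental ligand construction (not the published
    method): place the first fragment, then greedily extend partial
    placements one fragment at a time, keeping only the best candidates.

    fragments  -- list of fragment identifiers, in construction order
    placements -- dict: fragment -> list of candidate placements
    score      -- function(partial_placement_tuple) -> float (higher = better)
    """
    partial = [()]                                   # start with an empty placement
    for frag in fragments:
        candidates = [p + (choice,)                  # extend every kept partial placement
                      for p, choice in itertools.product(partial, placements[frag])]
        candidates.sort(key=score, reverse=True)     # greedy: rank by interaction score
        partial = candidates[:beam_width]            # prune to the beam width
    return partial[0] if partial else None

# usage with hypothetical placements scored by a toy function
placements = {"base": ["b1", "b2"], "tail": ["t1", "t2", "t3"]}
best = incremental_construction(
    ["base", "tail"], placements,
    score=lambda p: -len("".join(p)))                # placeholder scoring rule
print(best)
```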

  6. Transformational adaptation when incremental adaptations to climate change are insufficient.

    Science.gov (United States)

    Kates, Robert W; Travis, William R; Wilbanks, Thomas J

    2012-05-08

    All human-environment systems adapt to climate and its natural variation. Adaptation to human-induced change in climate has largely been envisioned as increments of these adaptations intended to avoid disruptions of systems at their current locations. In some places, for some systems, however, vulnerabilities and risks may be so sizeable that they require transformational rather than incremental adaptations. Three classes of transformational adaptations are those that are adopted at a much larger scale, that are truly new to a particular region or resource system, and that transform places and shift locations. We illustrate these with examples drawn from Africa, Europe, and North America. Two conditions set the stage for transformational adaptation to climate change: large vulnerability in certain regions, populations, or resource systems; and severe climate change that overwhelms even robust human use systems. However, anticipatory transformational adaptation may be difficult to implement because of uncertainties about climate change risks and adaptation benefits, the high costs of transformational actions, and institutional and behavioral tendencies that maintain existing resource systems and policies. Implementing transformational adaptation requires effort to initiate it and then to sustain the effort over time. In initiating transformational adaptation, focusing events and multiple stresses are important, combined with local leadership. In sustaining transformational adaptation, it seems likely that supportive social contexts and the availability of acceptable options and resources for actions are key enabling factors. Early steps would include incorporating transformational adaptation into risk management and initiating research to expand the menu of innovative transformational adaptations.

  7. Incremental Learning of Skill Collections based on Intrinsic Motivation

    Directory of Open Access Journals (Sweden)

    Jan Hendrik Metzen

    2013-07-01

    Full Text Available Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on it and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill-learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period.

  8. Optimal Curiosity-Driven Modular Incremental Slow Feature Analysis.

    Science.gov (United States)

    Kompella, Varun Raj; Luciw, Matthew; Stollenga, Marijn Frederik; Schmidhuber, Juergen

    2016-08-01

    Consider a self-motivated artificial agent who is exploring a complex environment. Part of the complexity is due to the raw high-dimensional sensory input streams, which the agent needs to make sense of. Such inputs can be compactly encoded through a variety of means; one of these is slow feature analysis (SFA). Slow features encode spatiotemporal regularities, which are information-rich explanatory factors (latent variables) underlying the high-dimensional input streams. In our previous work, we have shown how slow features can be learned incrementally, while the agent explores its world, and modularly, such that different sets of features are learned for different parts of the environment (since a single set of regularities does not explain everything). In what order should the agent explore the different parts of the environment? Following Schmidhuber's theory of artificial curiosity, the agent should always concentrate on the area where it can learn the easiest-to-learn set of features that it has not already learned. We formalize this learning problem and theoretically show that, using our model, called curiosity-driven modular incremental slow feature analysis, the agent on average will learn slow feature representations in order of increasing learning difficulty, under certain mild conditions. We provide experimental results to support the theoretical analysis.

  9. On exceedance times for some processes with dependent increments

    CERN Document Server

    Asmussen, Søren

    2012-01-01

    Let $\{Z_n\}_{n\ge 0}$ be a random walk with a negative drift and i.i.d. increments with heavy-tailed distribution and let $M=\sup_{n\ge 0}Z_n$ be its supremum. Asmussen & Klüppelberg (1996) considered the behavior of the random walk given that $M>x$, for $x$ large, and obtained a limit theorem, as $x\to\infty$, for the distribution of the quadruple that includes the time $\tau=\tau(x)$ to exceed level $x$, position $Z_{\tau}$ at this time, position $Z_{\tau-1}$ at the prior time, and the trajectory up to it (similar results were obtained for the Cramér-Lundberg insurance risk process). We obtain here several extensions of this result to various regenerative-type models and, in particular, to the case of a random walk with dependent increments. Particular attention is given to describing the limiting conditional behavior of $\tau$. The class of models includes Markov-modulated models as particular cases. We also study fluid models, the Björk-Grandell risk process, give examples where the or...

  10. Incremental Aerodynamic Coefficient Database for the USA2

    Science.gov (United States)

    Richardson, Annie Catherine

    2016-01-01

    In March through May of 2016, a wind tunnel test was conducted by the Aerosciences Branch (EV33) to visually study the unsteady aerodynamic behavior over multiple transition geometries for the Universal Stage Adapter 2 (USA2) in the MSFC Aerodynamic Research Facility's Trisonic Wind Tunnel (TWT). The purpose of the test was to make a qualitative comparison of the transonic flow field in order to provide a recommended minimum transition radius for manufacturing. Additionally, 6 Degree of Freedom force and moment data for each configuration tested was acquired in order to determine the geometric effects on the longitudinal aerodynamic coefficients (Normal Force, Axial Force, and Pitching Moment). In order to make a quantitative comparison of the aerodynamic effects of the USA2 transition geometry, the aerodynamic coefficient data collected during the test was parsed and incorporated into a database for each USA2 configuration tested. An incremental aerodynamic coefficient database was then developed using the generated databases for each USA2 geometry as a function of Mach number and angle of attack. The final USA2 coefficient increments will be applied to the aerodynamic coefficients of the baseline geometry to adjust the Space Launch System (SLS) integrated launch vehicle force and moment database based on the transition geometry of the USA2.
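
    A minimal sketch of how such an incremental coefficient database is typically applied, interpolating the increment at a given Mach number and angle of attack and adding it to a baseline coefficient, is shown below; the grids and increment values are hypothetical, not data from this test.

```python
import numpy as np

def apply_increments(baseline, increments, mach, alpha):
    """Sketch of applying an incremental aerodynamic coefficient database
    to a baseline coefficient: bilinearly interpolate the increment at
    (Mach, alpha) and add it. All grids and values here are hypothetical."""
    machs, alphas, table = increments          # table[i, j] for machs[i], alphas[j]
    i = np.clip(np.searchsorted(machs, mach) - 1, 0, len(machs) - 2)
    j = np.clip(np.searchsorted(alphas, alpha) - 1, 0, len(alphas) - 2)
    tm = (mach - machs[i]) / (machs[i + 1] - machs[i])
    ta = (alpha - alphas[j]) / (alphas[j + 1] - alphas[j])
    dC = (table[i, j] * (1 - tm) * (1 - ta) + table[i + 1, j] * tm * (1 - ta)
          + table[i, j + 1] * (1 - tm) * ta + table[i + 1, j + 1] * tm * ta)
    return baseline + dC

machs = np.array([0.8, 0.9, 1.1])
alphas = np.array([0.0, 2.0, 4.0])
dCN = np.zeros((3, 3))
dCN[1:, 1:] = 0.01                             # hypothetical normal-force increments
print(apply_increments(0.85, (machs, alphas, dCN), mach=0.95, alpha=3.0))
```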

  11. Incremental Knowledge Base Construction Using DeepDive

    Science.gov (United States)

    Shin, Jaeho; Wu, Sen; Wang, Feiran; De Sa, Christopher; Zhang, Ce; Ré, Christopher

    2016-01-01

    Populating a database with unstructured information is a long-standing problem in industry and research that encompasses problems of extraction, cleaning, and integration. Recent names used for this problem include dealing with dark data and knowledge base construction (KBC). In this work, we describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems, and we present techniques to make the KBC process more efficient. We observe that the KBC process is iterative, and we develop techniques to incrementally produce inference results for KBC systems. We propose two methods for incremental inference, based respectively on sampling and variational techniques. We also study the tradeoff space of these methods and develop a simple rule-based optimizer. DeepDive includes all of these contributions, and we evaluate DeepDive on five KBC systems, showing that it can speed up KBC inference tasks by up to two orders of magnitude with negligible impact on quality. PMID:27144081

  12. Acute Hematological Responses to a Maximal Incremental Treadmill Test

    Directory of Open Access Journals (Sweden)

    Filipe Dinato de Lima

    2017-03-01

    Full Text Available The present study aimed to analyze acute hematological responses in individuals undergoing a maximal incremental cardiopulmonary treadmill test without inclination. Twenty-three individuals were analyzed, 12 men and 11 women, with a mean age of 30.2 (± 8.4) years, mean weight of 68.1 (± 18.1) kg, mean height of 170.2 (± 9.8) cm and mean BMI of 23.2 (± 3.7) kg/m², all physically active with a minimum of 3.5 hours of exercise per week for at least 6 months. The subjects were submitted to a maximal incremental treadmill test, with venous blood collected for analysis before and immediately after completion of the test. The Wilcoxon test was used for the analysis of pre- and post-test variables, and p < 0.05 was adopted as the significance level. There was a significant increase in leukocyte count (69.23%; p = 0.005), lymphocytes (17.56%; p = 0.043), monocytes (85.41%; p = 0.012) and granulocytes (28.21%; p = 0.011). A significant increase was also observed in erythrocytes (3.42%; p = 0.042), hematocrit (5.39%; p = 0.038) and hemoglobin (5.58%; p = 0.013). It was concluded that performing a maximal treadmill running test can significantly raise blood levels of leukocytes and their respective sub-populations, as well as red blood cells and hemoglobin.

  13. Incremental Beliefs About Ability Ameliorate Self-Doubt Effects

    Directory of Open Access Journals (Sweden)

    Qin Zhao

    2015-12-01

    Full Text Available Past research has typically shown negative effects of self-doubt on performance and psychological well-being. We suggest that these self-doubt effects may largely be due to an underlying assumption that ability is innate and fixed. The present research investigated the main hypothesis that incremental beliefs about ability might ameliorate the negative effects of self-doubt. We examined our hypotheses using two lab tasks: a verbal reasoning task and an anagram task. Participants' self-doubt was measured, and beliefs about ability were measured after participants read articles advocating either incremental or entity theories of ability. American College Testing (ACT) scores were obtained to index actual ability level. Consistent with our hypothesis, for participants who believed ability was relatively fixed, higher self-doubt was associated with increased negative affect and lower task performance and engagement. In contrast, for participants who believed that ability was malleable, the negative self-doubt effects were ameliorated; self-doubt was even associated with better task performance. These effects were further moderated by participants' academic ability. The findings suggest that mind-sets about ability moderate self-doubt effects. Self-doubt may have negative effects only when it is interpreted as signaling that ability is immutably low.

  14. Incremental concept learning with few training examples and hierarchical classification

    Science.gov (United States)

    Bouma, Henri; Eendebak, Pieter T.; Schutte, Klamer; Azzopardi, George; Burghouts, Gertjan J.

    2015-10-01

    Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible with only a few training samples. Secondly, we show that novel objects can be added incrementally without retraining existing objects, which is important for fast interaction. Thirdly, we show that an unbalanced number of positive training samples leads to biased classifier scores that can be corrected by modifying weights. Fourthly, we show that the detector performance can deteriorate due to hard-negative mining for similar or closely related classes (e.g., for Barbie and dress, because the doll is wearing a dress). This can be solved by our hierarchical classification. We introduce a new dataset, which we call TOSO, and use it to demonstrate the effectiveness of the proposed method for the localization and recognition of multiple objects in images.

  15. Incremental learning of concept drift in nonstationary environments.

    Science.gov (United States)

    Elwell, Ryan; Polikar, Robi

    2011-10-01

    We introduce an ensemble-of-classifiers approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from environments that experience constant or variable rates of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, as do other members of the Learn++ family of algorithms, that is, without requiring access to previously seen data. Learn++.NSE trains one new classifier for each batch of data it receives, and combines these classifiers using a dynamically weighted majority vote. The novelty of the approach is in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and act accordingly to, changes in the underlying data distributions, as well as a possible recurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn++.NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release the data used in this paper.
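
    A simplified stand-in for this kind of dynamically weighted majority voting, with voting weights derived from time-discounted accuracies, is sketched below; it is not the published Learn++.NSE weighting rule, and all numbers are illustrative.

```python
import numpy as np

def weighted_majority_vote(predictions, accuracies, decay=0.5):
    """Sketch of ensemble voting with time-adjusted weights (a simplified
    stand-in for the Learn++.NSE weighting, not the published algorithm).

    predictions -- (n_classifiers, n_samples) array of class labels
    accuracies  -- (n_classifiers, n_batches) accuracy history, oldest first
    decay       -- how quickly old-batch accuracies are discounted
    """
    n_batches = np.shape(accuracies)[1]
    time_weights = decay ** np.arange(n_batches - 1, -1, -1)   # recent batches count more
    adj_acc = (np.asarray(accuracies) * time_weights).sum(axis=1) / time_weights.sum()
    eps = 1e-6
    adj_acc = np.clip(adj_acc, eps, 1 - eps)
    w = np.log(adj_acc / (1 - adj_acc))          # log-odds style voting weights
    w = np.maximum(w, 0.0)                       # ignore classifiers worse than chance
    preds = np.asarray(predictions)
    classes = np.unique(preds)
    scores = np.array([[w[preds[:, s] == c].sum() for c in classes]
                       for s in range(preds.shape[1])])
    return classes[scores.argmax(axis=1)]

preds = np.array([[0, 1, 1],      # classifier 1
                  [0, 0, 1],      # classifier 2
                  [1, 0, 1]])     # classifier 3
acc = np.array([[0.9, 0.8], [0.6, 0.7], [0.5, 0.4]])
print(weighted_majority_vote(preds, acc))        # [0 1 1]
```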

  16. Better Bounds for Incremental Frequency Allocation in Bipartite Graphs

    CERN Document Server

    Chrobak, Marek; Sgall, Jiří

    2011-01-01

    We study frequency allocation in wireless networks. A wireless network is modeled by an undirected graph, with vertices corresponding to cells. In each vertex we have a certain number of requests, and each of those requests must be assigned a different frequency. Edges represent conflicts between cells, meaning that frequencies in adjacent vertices must be different as well. The objective is to minimize the total number of used frequencies. The offline version of the problem is known to be NP-hard. In the incremental version, requests for frequencies arrive over time and the algorithm is required to assign a frequency to a request as soon as it arrives. Competitive incremental algorithms have been studied for several classes of graphs. For paths, the optimal (asymptotic) ratio is known to be 4/3, while for hexagonal-cell graphs it is between 1.5 and 1.9126. For k-colorable graphs, the ratio of (k+1)/2 can be achieved. In this paper, we prove nearly tight bounds on the asymptotic competitive ratio for bipartit...

  17. Incremental ECAP of thick continuous plates - machine and initial trials

    Science.gov (United States)

    Rosochowski, A.; Olejnik, L.

    2014-08-01

    Incremental ECAP (I-ECAP) can be used for severe plastic deformation (SPD) of continuous bars, plates and sheets. This paper describes the design, construction and preliminary trials of a prototype machine capable of processing thick continuous plates. To increase productivity, a two-turn I-ECAP is used, which is equivalent to route C in conventional one-turn ECAP. The machine has a reciprocating punch inclined at 45°, a clamp holding the plate in the die during deformation and a feeder incrementally feeding the plate when it is not being deformed; all these devices are driven by hydraulic actuators controlled by a PLC. The machine is capable of deforming materials at room temperature as well as at elevated temperatures. The die is heated with electric heaters, and the machine also has an integrated cooling system and a lubrication system. The material used for the initial trials was Al 1050 plate (10×50×1000) conversion coated with calcium aluminate and lubricated with dry soap. The process was carried out at room temperature using a 1.6 mm feeding stroke and a low cycle frequency of approximately 0.2 Hz. The UFG structure revealed by STEM after the first pass confirms process feasibility.

  18. Incremental support vector machines for fast reliable image recognition

    Energy Technology Data Exchange (ETDEWEB)

    Makili, L., E-mail: makili_le@yahoo.com [Instituto Superior Politécnico da Universidade Katyavala Bwila, Benguela (Angola); Vega, J. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Dormido-Canto, S. [Dpto. Informática y Automática – UNED, Madrid (Spain)

    2013-10-15

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ-II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system based on conformal predictors, using Support Vector Machines (SVM) as the underlying algorithm, has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ-II fusion device. Using such a conformal-predictor-based classifier is a computationally intensive task, since it implies training several SVM models to classify a single example, and performing this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to that obtained when standard SVM is used as the underlying algorithm, and there is a significant improvement in time efficiency.

  19. Incremental Scheduling Engines for Human Exploration of the Cosmos

    Science.gov (United States)

    Jaap, John; Phillips, Shaun

    2005-01-01

    As humankind embarks on longer space missions farther from home, the requirements and environments for scheduling the activities performed on these missions are changing. As we begin to prepare for these missions it is appropriate to evaluate the merits and applicability of the different types of scheduling engines. Scheduling engines temporally arrange tasks onto a timeline so that all constraints and objectives are met and resources are not overbooked. Scheduling engines used to schedule space missions fall into three general categories: batch, mixed-initiative, and incremental. This paper presents an assessment of the engine types, a discussion of the impact of human exploration of the moon and Mars on planning and scheduling, and the applicability of the different types of scheduling engines. This paper will pursue the hypothesis that incremental scheduling engines may have a place in the new environment; they have the potential to reduce cost, to improve the satisfaction of those who execute or benefit from a particular timeline (the customers), and to allow astronauts to plan their own tasks and those of their companion robots.

  20. Efficient incremental relaying for packet transmission over fading channels

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-07-01

    In this paper, we propose a novel relaying scheme for packet transmission over fading channels, which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying (EIR) scheme with both amplify-and-forward and decode-and-forward relaying. We compare the performance of the EIR scheme with the threshold-based incremental relaying (TIR) scheme. It is shown that the efficiency of the TIR scheme is better for lower values of the threshold; for higher values of the threshold, however, the TIR scheme is outperformed by the EIR scheme. In addition, three new threshold-based adaptive EIR schemes are devised to further improve the efficiency of the EIR scheme. We calculate the packet error rate and the efficiency of these new schemes to provide analytical insight. © 2014 IEEE.
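
    The core idea, relaying only when the destination reports that the direct transmission failed, can be illustrated with the toy Monte Carlo sketch below; the failure probabilities are hypothetical placeholders and the sketch does not reproduce the analysis of this record.

```python
import random

def simulate_incremental_relaying(n_packets=100000, p_direct=0.1, p_relayed=0.01):
    """Toy sketch of the incremental-relaying idea (not this record's EIR
    analysis): the relay transmits only when the destination feeds back
    that the direct packet failed, so extra channel uses are spent only
    when needed. Failure probabilities are hypothetical."""
    random.seed(1)
    errors, channel_uses = 0, 0
    for _ in range(n_packets):
        channel_uses += 1                      # direct transmission
        if random.random() < p_direct:         # destination feeds back a NACK
            channel_uses += 1                  # relay retransmits the packet
            if random.random() < p_relayed:
                errors += 1                    # packet lost even after relaying
    per = errors / n_packets
    efficiency = n_packets / channel_uses      # packets per channel use
    return per, efficiency

per, eff = simulate_incremental_relaying()
print(f"packet error rate ~ {per:.4f}, spectral efficiency ~ {eff:.3f}")
```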

  1. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    Science.gov (United States)

    2010-06-01

    Presentation excerpt (figures and tables not recoverable): costs to characterize ten decision units are quoted as $62,725, $39,500 and $18,900; the EVC Soil Stick was used at Fort Lewis to collect eight replicate samples (0-2.5 cm depth) of 100 increments each from the same decision unit; a table of Fort Lewis live-fire laboratory replicate NG results (mg/kg: replicates, mean, standard deviation, % RSD) appears in the original.

  2. 39 CFR 3050.23 - Documentation supporting incremental cost estimates in the Postal Service's section 3652 report.

    Science.gov (United States)

    2010-07-01

    39 CFR § 3050.23 (2010-07-01), Postal Regulatory Commission, Periodic Reporting: documentation supporting incremental cost estimates in the Postal Service's section 3652 report; ... the incremental cost model shall be reported. ...

  3. Incremental Optimization of Hub and Spoke Network for the Spokes’ Numbers and Flow

    Directory of Open Access Journals (Sweden)

    Yanfeng Wang

    2015-01-01

    Full Text Available The hub and spoke network problem is solved as part of a strategic decision-making process that may have a profound effect on the future of enterprises. Given an existing network structure, the number of spokes and the flow change over time because of different sources of uncertainty. Hence, the incremental optimization of the hub and spoke network problem is considered in this paper, and policy makers should adopt a series of strategies to cope with the change, such as setting up new hubs, adjusting the capacity level of original hubs, or closing some original hubs. The objective is to minimize the total cost, which includes the setup costs for the new hubs, the closure costs and the adjustment costs for the original hubs, as well as the flow routing costs. Two mixed-integer linear programming formulations are proposed and analyzed for this problem. A computational analysis is presented using China Deppon Logistics as an example, and we analyze the changes in the solutions driven by the number of spokes and the flow. The tests also allow an analysis of the effect of parameter variation on the network.

  4. DEFORMATION ANALYSIS OF SHEET METAL SINGLE-POINT INCREMENTAL FORMING BY FINITE ELEMENT METHOD SIMULATION

    Institute of Scientific and Technical Information of China (English)

    MA Linwei; MO Jianhua

    2008-01-01

    Single-point incremental forming (SPIF) is an innovative sheet metal forming method without dedicated dies, which belongs to rapid prototyping technology. In generalizing the SPIF of sheet metal, deformation analysis of the forming process becomes an important and useful method for the planning of shell products, the choice of material, the design of the forming process and the planning of the forming tool. Using solid brick elements, a finite element method (FEM) model of a truncated pyramid was established. Based on the theory of anisotropy and an assumed strain formulation, SPIF processes with different parameters were simulated. The comparison between the simulations and the experiments shows that the FEM model is feasible and effective. According to the simulated forming process, the deformation pattern of SPIF can be summarized as the combination of plane-stretching deformation and bending deformation. The study of the impact of process parameters on deformation shows that the interlayer spacing is a dominant factor: decreasing the interlayer spacing decreases the strain per step and improves the formability of the blank, whereas with larger interlayer spacing the plastic deformation zone and the forming force increase.

  5. Formulation of Complex Action Theory

    OpenAIRE

    Nagao, Keiichi; Nielsen, Holger Bech

    2011-01-01

    We formulate a complex action theory which includes operators of coordinate and momentum $\\hat{q}$ and $\\hat{p}$ being replaced with non-hermitian operators $\\hat{q}_{new}$ and $\\hat{p}_{new}$, and their eigenstates ${}_m

  6. Formulation optimization of arecoline patches.

    Science.gov (United States)

    Wu, Pao-Chu; Tsai, Pi-Ju; Lin, Shin-Chen; Huang, Yaw-Bin

    2014-01-01

    The response surface methodology (RSM) including polynomial equations has been used to design an optimal patch formulation with appropriate adhesion and flux. The patch formulations were composed of different polymers, including Eudragit RS 100 (ERS), Eudragit RL 100 (ERL) and polyvinylpyrrolidone K30 (PVP), plasticizers (PEG 400), and drug. In addition, using terpenes as enhancers could increase the flux of the drug. Menthol showed the highest enhancement effect on the flux of arecoline.

  7. Formulation Optimization of Arecoline Patches

    Directory of Open Access Journals (Sweden)

    Pao-Chu Wu

    2014-01-01

    Full Text Available The response surface methodology (RSM) including polynomial equations has been used to design an optimal patch formulation with appropriate adhesion and flux. The patch formulations were composed of different polymers, including Eudragit RS 100 (ERS), Eudragit RL 100 (ERL) and polyvinylpyrrolidone K30 (PVP), plasticizers (PEG 400), and drug. In addition, using terpenes as enhancers could increase the flux of the drug. Menthol showed the highest enhancement effect on the flux of arecoline.

  8. Niosomal Formulation Of Orlistat: Formulation And In-Vitro Evaluation

    Directory of Open Access Journals (Sweden)

    SAMYUKTHA RANI. B

    2011-06-01

    Full Text Available The purpose of the research was to prepare Orlistat niosomes from proniosomes to improve its poor and variable oral bioavailability. The non-ionic surfactant vesicles were prepared by the reverse-phase evaporation technique (slurry method). The slurry of cyclodextrin and Span 60 was dried to form a free-flowing powder in a rotary flash evaporator, which could be rehydrated by addition of buffer (0.5% NaCl with 3% SLS at pH 6.0). The lipid mixture consisted of cholesterol, Span 60 and the cyclodextrin carrier in molar ratios of 0.1:0.9:1 to 0.9:0.1:1, respectively. The niosomal formulations were evaluated for particle size, entrapment efficiency, in-vitro drug release, release kinetics, drug-excipient interactions and compatibility (FT-IR), surface morphology (SEM), stability, conductivity, sedimentation rate, pH, density and viscosity. The formulation OT9, which showed the highest entrapment efficiency of 44.09% and an in-vitro release of 94.59% at the end of 12 h, was found to be the best among all 9 formulations. The release was best fitted with Hixson kinetics, indicating that drug release may follow a diffusion mechanism. FT-IR data revealed that the drug and the excipients added in the formulation were compatible and that there were no interactions. SEM images of the niosomes at various magnifications revealed a mean size of 100 nm with a smooth surface. The niosome formulation showed appropriate stability for 90 days when stored at room temperature. Thus the niosomal formulation could be a promising delivery system for Orlistat with improved oral bioavailability, stability and sustained drug release.

  9. Bioavailability of cefuroxime axetil formulations.

    Science.gov (United States)

    Donn, K H; James, N C; Powell, J R

    1994-06-01

    Cefuroxime axetil tablets have proved effective for the treatment of a variety of community-acquired infections. A suspension formulation has been developed for use in children. Two studies have been conducted to determine if the cefuroxime axetil formulations are bioequivalent. In the initial randomized, two-period crossover study, 24 healthy men received 250-mg doses of suspension and tablet formulations of cefuroxime axetil every 12 h after eating for seven doses. Each treatment period was separated by 4 days. Comparisons of serum and urine pharmacokinetic parameters indicated that the suspension and tablet formulations of cefuroxime axetil are not bioequivalent. Following the initial bioequivalency study, 0.1 % sodium lauryl sulfate (SLS) was added to the suspension to assure the homogeneity of the granules during the manufacturing process. In the subsequent randomized, three-period crossover study, 24 healthy men received single 250-mg doses of three cefuroxime axetil formulations: suspension without SLS, suspension with SLS, and tablet. Again each treatment period was separated by 4 days. Pharmacokinetic analyses demonstrated that while the suspension with SLS and suspension without SLS are bioequivalent, bioequivalence between the suspension with SLS and the tablet was not observed. Thus, the addition of the SLS surfactant to the suspension did not alter the bioavailability of the formulation.

  10. Hyperbolic tangent digital filters applied to the smoothing of perturbations in Moroccan phosphate resistivity data

    Directory of Open Access Journals (Sweden)

    M. Amrani

    2008-06-01

    Full Text Available It is possible to design low-pass and band-pass filters by means of a combination of hyperbolic tangent functions in the frequency domain, using the scaling and shifting theorems of the Fourier transform. The corresponding filter functions in the time domain can be derived analytically from the frequency-domain expressions. Smoothness parameters control the slopes in the cutoff regions and allow the construction of relatively short filters while reducing the oscillations of the filter response in the time domain. Different smoothness parameters can be chosen for the low and high cutoff frequencies when designing band-pass filters. Following the scheme proposed in this article, the other filter types can be derived easily.
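
    A minimal sketch of such a tanh-shaped low-pass response, and of applying it to a noisy trace in the frequency domain, is given below; the cutoff and smoothness values are illustrative assumptions. A band-pass response can be formed analogously as the difference of two shifted low-pass responses.

```python
import numpy as np

# Sketch of a tanh-based low-pass frequency response and its time-domain
# filter via the inverse FFT. The cutoff and smoothness values are
# illustrative assumptions, not parameters from this record.
n = 512                                    # number of samples
dt = 1.0                                   # sampling interval
f = np.fft.fftfreq(n, dt)                  # frequency axis
fc, s = 0.1, 0.02                          # cutoff frequency and smoothness
H = 0.5 * (1.0 - np.tanh((np.abs(f) - fc) / s))   # smooth low-pass response
h = np.real(np.fft.ifft(H))                # time-domain filter coefficients

# apply the filter to a noisy signal by multiplication in the frequency domain
t = np.arange(n) * dt
signal = np.sin(2 * np.pi * 0.05 * t) + 0.3 * np.random.default_rng(0).normal(size=n)
smoothed = np.real(np.fft.ifft(np.fft.fft(signal) * H))
print(smoothed[:3])
```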

  11. Numerical study of flow and heat transfer of non-Newtonian Tangent Hyperbolic fluid from a sphere with Biot number effects

    Directory of Open Access Journals (Sweden)

    S. Abdul Gaffar

    2015-12-01

    Full Text Available In this article, we investigate the nonlinear steady boundary layer flow and heat transfer of an incompressible Tangent Hyperbolic fluid from a sphere. The transformed conservation equations are solved numerically subject to physically appropriate boundary conditions using the implicit finite-difference Keller Box technique. The numerical code is validated with previous studies. The influence of a number of emerging non-dimensional parameters, namely the Weissenberg number (We), power law index (n), Prandtl number (Pr), Biot number (γ) and dimensionless tangential coordinate (ξ), on velocity and temperature evolution in the boundary layer regime is examined in detail. Furthermore, the effects of these parameters on the heat transfer rate and skin friction are also investigated. Validation with earlier Newtonian studies is presented and excellent correlation is achieved. It is found that the velocity, skin friction and Nusselt number (heat transfer rate) decrease with increasing Weissenberg number (We), whereas the temperature increases. Increasing the power law index (n) increases the velocity and the Nusselt number but decreases the temperature and the skin friction. An increase in the Biot number (γ) is observed to increase the velocity, temperature, local skin friction and Nusselt number. The study is relevant to chemical materials processing applications.

  12. Compiler-Enhanced Incremental Checkpointing for OpenMP Applications

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Marques, D; Pingali, K; Rugina, R; McKee, S A

    2008-01-21

    As modern supercomputing systems reach the peta-flop performance range, they grow in both size and complexity. This makes them increasingly vulnerable to failures from a variety of causes. Checkpointing is a popular technique for tolerating such failures, enabling applications to periodically save their state and restart computation after a failure. Although a variety of automated system-level checkpointing solutions are currently available to HPC users, manual application-level checkpointing remains more popular due to its superior performance. This paper improves performance of automated checkpointing via a compiler analysis for incremental checkpointing. This analysis, which works with both sequential and OpenMP applications, reduces checkpoint sizes by as much as 80% and enables asynchronous checkpointing.
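
    The general idea of incremental checkpointing, writing only the parts of the state that have changed since the last checkpoint, can be illustrated with the hash-based sketch below; it is an application-level illustration, not the compiler analysis described in this record, and the file layout is an assumption for the example.

```python
import hashlib
import os
import pickle
import tempfile

def incremental_checkpoint(state, path, block_size=1 << 16):
    """Sketch of incremental checkpointing by content hashing: only blocks
    whose hash changed since the last checkpoint are rewritten, so the
    checkpoint volume shrinks when little of the state changes."""
    data = pickle.dumps(state)
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    old_hashes = []
    if os.path.exists(path + ".hashes"):
        with open(path + ".hashes", "rb") as fh:
            old_hashes = pickle.load(fh)
    written = 0
    for i, block in enumerate(blocks):
        digest = hashlib.sha256(block).hexdigest()
        if i >= len(old_hashes) or old_hashes[i] != digest:
            with open(f"{path}.block{i}", "wb") as fh:   # rewrite changed block only
                fh.write(block)
            written += 1
    with open(path + ".hashes", "wb") as fh:
        pickle.dump([hashlib.sha256(b).hexdigest() for b in blocks], fh)
    return written

ckpt = os.path.join(tempfile.gettempdir(), "ckpt")
state = {"iteration": 0, "grid": [0.0] * 100000}
print(incremental_checkpoint(state, ckpt))   # first checkpoint writes all blocks
state["iteration"] = 1
print(incremental_checkpoint(state, ckpt))   # only the changed block(s) are rewritten
```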

  13. Compiler-Enhanced Incremental Checkpointing for OpenMP Applications

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Marques, D; Pingali, K; McKee, S; Rugina, R

    2009-02-18

    As modern supercomputing systems reach the peta-flop performance range, they grow in both size and complexity. This makes them increasingly vulnerable to failures from a variety of causes. Checkpointing is a popular technique for tolerating such failures, enabling applications to periodically save their state and restart computation after a failure. Although a variety of automated system-level checkpointing solutions are currently available to HPC users, manual application-level checkpointing remains more popular due to its superior performance. This paper improves performance of automated checkpointing via a compiler analysis for incremental checkpointing. This analysis, which works with both sequential and OpenMP applications, significantly reduces checkpoint sizes and enables asynchronous checkpointing.

  14. Incremental Commute Time Distance and Applications in Anomaly Detection Systems

    CERN Document Server

    Khoa, Nguyen Lu Dang

    2011-01-01

    Commute Time Distance (CTD) is a random walk based metric on graphs. CTD has found widespread applications in many domains including personalized search, collaborative filtering and making search engines robust against manipulation. Our interest is inspired by the use of CTD as a metric for anomaly detection. It has been shown that CTD can be used to simultaneously identify both global and local anomalies. Here we propose an accurate and efficient approximation for computing the CTD in an incremental fashion in order to facilitate real-time applications. An online anomaly detection algorithm is designed where the CTD of each new arriving data point to any point in the current graph can be estimated in constant time ensuring a real-time response. Moreover, the proposed approach can also be applied in many other applications that utilize commute time distance.

  15. Incremental Feasibility Analysis of Adding a Motorcycle Engine Assembly Line at PT ABC

    Directory of Open Access Journals (Sweden)

    Jonny Jonny

    2013-06-01

    Full Text Available Manufacturing motorcycle units in Plant 1 of PT ABC requires an engine supply from Plant 2. This places a burden on the operating cost of Plant 1 and also causes opportunity losses due to delays in engine delivery. Based on this problem, the team proposes an additional engine assembly line in Plant 1 to eliminate the engine supply from Plant 2. The team analyzes this proposal using incremental analysis, comparing the before and after conditions to determine whether it is feasible; the result is an NPV of about IDR 967 billion and a payback period of under one year. Based on this result, the team recommends that the proposal be approved by management.
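
    For readers unfamiliar with incremental (differential) analysis, the sketch below computes the NPV and payback period of the incremental cash flows between two scenarios; the cash flow values and discount rate are purely illustrative and are not the figures of this study.

```python
def npv(rate, cash_flows):
    """Net present value of cash flows, one per period (t = 0, 1, 2, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """First period in which the cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical incremental cash flows (difference between the "with new
# line" and "without new line" scenarios); values are illustrative only.
incremental = [-50.0, 30.0, 35.0, 40.0]   # year-0 investment, then yearly savings
print(npv(0.10, incremental), payback_period(incremental))
```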

  16. Incremental forming of aluminium alloys in cryogenic environment

    Science.gov (United States)

    Vanhove, Hans; Mohammadi, Amirahmad; Duflou, Joost R.

    2016-10-01

    Incremental Sheet Forming processes suffer from stringent forming limits, restricting the range of producible geometries. Through in-process cooling of the sheet to cryogenic levels, this paper explores the potential of altering material properties to benefit the formability and residual hardness of different aluminium alloys. Global cooling of aluminium sheets with liquid nitrogen and dry ice allows temperatures of 78 K and 193 K, respectively, to be reached. Extended with experiments at room temperature (293 K), these tests form a basis for comparing surface quality, formability and residual hardness. As an aluminium alloy commonly used for its high strength-to-weight ratio, but suffering from limited formability compared to draw-quality steels, AA5083-H111 is of interest for cryogenic treatment. AA1050-H24 is included in the test campaign as a baseline for commercially pure aluminium.

  17. An incremental design of radial basis function networks.

    Science.gov (United States)

    Yu, Hao; Reiner, Philip D; Xie, Tiantian; Bartczak, Tomasz; Wilamowski, Bogdan M

    2014-10-01

    This paper proposes an offline algorithm for incrementally constructing and training radial basis function (RBF) networks. In each iteration of the error correction (ErrCor) algorithm, one RBF unit is added to fit and then eliminate the highest peak (or lowest valley) in the error surface. This process is repeated until a desired error level is reached. Experimental results on real world data sets show that the ErrCor algorithm designs very compact RBF networks compared with the other investigated algorithms. Several benchmark tests such as the duplicate patterns test and the two spiral problem were applied to show the robustness of the ErrCor algorithm. The proposed ErrCor algorithm generates very compact networks. This compactness leads to greatly reduced computation times of trained networks.
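
    A minimal sketch in the spirit of this error-correction construction, adding one Gaussian unit at the sample with the largest residual and refitting the output weights by least squares, is shown below; it is not the published ErrCor code (which also tunes the unit parameters), and the toy problem and settings are assumptions.

```python
import numpy as np

def errcor_fit(x, y, n_units=8, width=0.2):
    """Minimal sketch of error-correction RBF construction: repeatedly add
    one Gaussian unit centered at the sample with the largest residual
    (highest peak or lowest valley), then refit all output weights."""
    centers, Phi = [], np.ones((len(x), 1))              # start with a bias column
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]
    for _ in range(n_units):
        residual = y - Phi @ w
        centers.append(x[np.argmax(np.abs(residual))])   # largest remaining error
        Phi = np.column_stack(
            [np.ones(len(x))] +
            [np.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers])
        w = np.linalg.lstsq(Phi, y, rcond=None)[0]       # refit weights
    return centers, w

# usage on a 1-D toy regression problem
x = np.linspace(-1, 1, 200)
y = np.sin(3 * np.pi * x)
centers, w = errcor_fit(x, y)
pred = np.column_stack(
    [np.ones(len(x))] +
    [np.exp(-((x - c) ** 2) / (2 * 0.2 ** 2)) for c in centers]) @ w
print(f"RMSE after {len(centers)} units: {np.sqrt(np.mean((y - pred) ** 2)):.3f}")
```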

  18. A new insertion sequence for incremental Delaunay triangulation

    Institute of Scientific and Technical Information of China (English)

    Jian-Fei Liu; Jin-Hui Yan; S.H.Lo

    2013-01-01

    The incremental algorithm is one of the most popular procedures for constructing Delaunay triangulations (DTs). However, the point insertion sequence has a great impact on the amount of work needed for the construction of DTs. It affects the time for both point location and structure update, and hence the overall computational time of the triangulation algorithm. In this paper, a simple deterministic insertion sequence is proposed based on breadth-first search on a Kd-tree with some minor modifications for better performance. Using parent nodes as search hints, the proposed insertion sequence proves to be faster and more stable than the Hilbert curve order and the biased randomized insertion order (BRIO), especially for non-uniform point distributions, over a wide range of benchmark examples.

  19. A New Method for Incremental Testing of Finite State Machines

    Science.gov (United States)

    Pedrosa, Lehilton Lelis Chaves; Moura, Arnaldo Vieira

    2010-01-01

    The automatic generation of test cases is an important issue for conformance testing of several critical systems. We present a new method for the derivation of test suites when the specification is modeled as a combined Finite State Machine (FSM). A combined FSM is obtained conjoining previously tested submachines with newly added states. This new concept is used to describe a fault model suitable for incremental testing of new systems, or for retesting modified implementations. For this fault model, only the newly added or modified states need to be tested, thereby considerably reducing the size of the test suites. The new method is a generalization of the well-known W-method and the G-method, but is scalable, and so it can be used to test FSMs with an arbitrarily large number of states.

  20. Incremental dimension reduction of tensors with random index

    CERN Document Server

    Sandin, Fredrik; Sahlgren, Magnus

    2011-01-01

    We present an incremental, scalable and efficient dimension reduction technique for tensors that is based on sparse random linear coding. Data is stored in a compactified representation with fixed size, which makes memory requirements low and predictable. Component encoding and decoding are performed on-line without computationally expensive re-analysis of the data set. The range of tensor indices can be extended dynamically without modifying the component representation. This idea originates from a mathematical model of semantic memory and a method known as random indexing in natural language processing. We generalize the random-indexing algorithm to tensors and present signal-to-noise-ratio simulations for representations of vectors and matrices. We present also a mathematical analysis of the approximate orthogonality of high-dimensional ternary vectors, which is a property that underpins this and other similar random-coding approaches to dimension reduction. To further demonstrate the properties of random ...

  1. Incremental Maintenance of Quotient Cube Based on Galois Lattice

    Institute of Scientific and Technical Information of China (English)

    Cui-Ping Li; Kum-Hoe Tung; Shan Wang

    2004-01-01

    Data cube computation is a well-known expensive operation and has been studied extensively. It is often not feasible to compute a complete data cube due to the huge storage requirement. Recently proposed quotient cube addressed this fundamental issue through a partitioning method that groups cube cells into equivalent partitions. The effectiveness and efficiency of the quotient cube for cube compression and computation have been proved. However, as changes are made to the data sources, to maintain such a quotient cube is non-trivial since the equivalent classes in it must be split or merged. In this paper, incremental algorithms are designed to update existing quotient cube efficiently based on Galois lattice. Performance study shows that these algorithms are efficient and scalable for large databases.

  2. Approximately bisimilar symbolic models for incrementally stable switched systems

    CERN Document Server

    Girard, Antoine; Tabuada, Paulo

    2008-01-01

    Switched systems constitute an important modeling paradigm faithfully describing many engineering systems in which software interacts with the physical world. Despite considerable progress on stability and stabilization of switched systems, the constant evolution of technology demands that we make similar progress with respect to different, and perhaps more complex, objectives. This paper describes one particular approach to address these different objectives based on the construction of approximately equivalent (bisimilar) symbolic models for switched systems. The main contribution of this paper consists in showing that under standard assumptions ensuring incremental stability of a switched system (i.e. existence of a common Lyapunov function, or multiple Lyapunov functions with dwell time), it is possible to construct a finite symbolic model that is approximately bisimilar to the original switched system with a precision that can be chosen a priori. To support the computational merits of the proposed approa...

  3. Incremental Maintenance of Quotient Cube Based on Galois Lattice

    Institute of Scientific and Technical Information of China (English)

    Cui-Ping Li; Kum-Hoe Tung; Shan Wang

    2004-01-01

    Data cube computation is a well-known expensive operation and has been studied extensively. It is often not feasible to compute a complete data cube due to the huge storage requirement. Recently proposed quotient cube addressed this fundamental issue through a partitioning method that groups cube cells into equivalent partitions. The effectiveness and efficiency of the quotient cube for cube compression and computation have been proved. However, as changes are made to the data sources, to maintain such a quotient cube is non-trivial since the equivalent classes in it must be split or merged. In this paper, incremental algorithms are designed to update existing quotient cube efficiently based on Galois lattice. Performance study shows that these algorithms are efficient and scalable for large databases.

  4. Single Point Incremental Forming using a Dummy Sheet

    DEFF Research Database (Denmark)

    Skjødt, Martin; Silva, Beatriz; Bay, Niels

    2007-01-01

    A new version of single point incremental forming (SPIF) is presented. This version includes a dummy sheet on top of the work piece, thus forming two sheets instead of one. The dummy sheet, which is in contact with the rotating tool pin, is discarded after forming. The new set-up influences... the process and furthermore offers a number of new possibilities for solving some of the problems appearing in SPIF. Investigations of the influence of the dummy sheet on formability, wear, surface quality and bulging of planar sides are done by forming two test shapes: a hyperboloid and a truncated pyramid... The possible influence of friction between the two sheets is furthermore investigated. The results show that the use of a dummy sheet reduces wear of the work piece to almost zero, but also causes a decrease in formability. Bulging of the planar sides of the pyramid is reduced and surface roughness...

  5. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

    Time series datasets are a canonical form of high velocity Big Data, and often generated by pervasive sensors, such as found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
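
    A generic sketch of leader-style incremental clustering of time series with an affinity threshold, in the spirit of the approach described above; the affinity function below is only an assumed stand-in (negative RMSE), not the paper's affinity score, and the optimizations are omitted.

        import numpy as np

        def affinity(a, b):
            """Placeholder similarity between equal-length series (negative RMSE).
            The paper defines its own affinity score; this is only an assumed stand-in."""
            return -np.sqrt(np.mean((a - b) ** 2))

        class IncrementalClusterer:
            def __init__(self, threshold):
                self.threshold = threshold
                self.centroids, self.counts = [], []

            def add(self, series):
                series = np.asarray(series, dtype=float)
                if self.centroids:
                    scores = [affinity(series, c) for c in self.centroids]
                    best = int(np.argmax(scores))
                    if scores[best] >= self.threshold:
                        n = self.counts[best]
                        # running-mean centroid update, no re-clustering of old data
                        self.centroids[best] = (self.centroids[best] * n + series) / (n + 1)
                        self.counts[best] = n + 1
                        return best
                self.centroids.append(series)    # otherwise start a new cluster
                self.counts.append(1)
                return len(self.centroids) - 1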

  6. Distance-independent individual tree diameter-increment model for Thuya [Tetraclinis articulata (Vahl) Mast.] stands in Tunisia

    Directory of Open Access Journals (Sweden)

    T. Sghaier

    2013-12-01

    Full Text Available Aim of study: The aim of the work was to develop an individual tree diameter-increment model for Thuya (Tetraclinis articulata) in Tunisia. Area of study: The natural Tetraclinis articulata stands at Jbel Lattrech in north-eastern Tunisia. Material and methods: Data came from 200 trees located in 50 sample plots. The diameter at age t and the diameter increment for the last five years, obtained from cores taken at breast height, were measured for each tree. Four difference equations derived from the base functions of Richards, Lundqvist, Hossfeld IV and Weibull were tested using the age-independent formulations of the growth functions. Both numerical and graphical analyses were used to evaluate the performance of the candidate models. Main results: Based on the analysis, the age-independent difference equation derived from the Richards base function was selected. Two of the three parameters (growth rate and shape parameter) of the retained model were related to site quality, represented by a Growth Index, stand density and the basal area in larger trees divided by the diameter of the subject tree, expressing inter-tree competition. Research highlights: The proposed model can be useful for predicting the diameter growth of Tetraclinis articulata in Tunisia when age is not available or for trees growing in uneven-aged stands. Keywords: age-independent growth model; difference equations; Tetraclinis articulata; Tunisia.

  7. Social Groupwork. A Model for Goal Formulation.

    Science.gov (United States)

    Tompkins, Rosamond P.; Gallo, Frank T.

    1978-01-01

    A conceptual model for goal formulation in social groupwork, discussion of existing models and their limitations, and an attempt to formulate an encompassing groupwork model that facilitates goal formulation. (Author/PD)

  8. Four formulations of noncommutative quantum mechanics

    CERN Document Server

    Gouba, Laure

    2016-01-01

    Four formulations of noncommutative quantum mechanics are reviewed. These are the canonical, path-integral, Weyl-Wigner and systematic formulations. The four formulations are characterized by a deformed Heisenberg algebra but differ in mathematical and conceptual overview.

  9. Incremental learning with SVM for multimodal classification of prostatic adenocarcinoma.

    Science.gov (United States)

    García Molina, José Fernando; Zheng, Lei; Sertdemir, Metin; Dinter, Dietmar J; Schönberg, Stefan; Rädle, Matthias

    2014-01-01

    Robust detection of prostatic cancer is a challenge due to the multitude of variants and their representation in MR images. We propose a pattern recognition system with an incremental learning ensemble algorithm using support vector machines (SVM) tackling this problem employing multimodal MR images and a texture-based information strategy. The proposed system integrates anatomic, texture, and functional features. The data set was preprocessed using B-Spline interpolation, bias field correction and intensity standardization. First- and second-order angular independent statistical approaches and rotation invariant local phase quantization (RI-LPQ) were utilized to quantify texture information. An incremental learning ensemble SVM was implemented to suit working conditions in medical applications and to improve effectiveness and robustness of the system. The probability estimation of cancer structures was calculated using SVM and the corresponding optimization was carried out with a heuristic method together with a 3-fold cross-validation methodology. We achieved an average sensitivity of 0.844 ± 0.068 and a specificity of 0.780 ± 0.038, which yielded superior or similar performance to current state of the art using a total database of only 41 slices from twelve patients with histologically confirmed information, including cancerous, unhealthy non-cancerous and healthy prostate tissue. Our results show the feasibility of an ensemble SVM being able to learn additional information from new data while preserving previously acquired knowledge and preventing unlearning. The use of texture descriptors provides more salient discriminative patterns than the functional information used. Furthermore, the system improves selection of information, efficiency and robustness of the classification. The generated probability map enables radiologists to have a lower variability in diagnosis, decrease false negative rates and reduce the time to recognize and delineate structures in
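
    A minimal sketch of a batch-incremental SVM ensemble in the spirit of Learn++-style methods (illustrative only, not the authors' exact algorithm): each newly arriving batch trains an additional SVM member, and prediction averages the members' probability estimates, so earlier knowledge is retained rather than unlearned. It assumes every batch contains samples of all classes.

        import numpy as np
        from sklearn.svm import SVC

        class IncrementalSVMEnsemble:
            """Each call to partial_fit trains a new SVM on the new batch only;
            previously trained members are kept, so old knowledge is not discarded."""
            def __init__(self, **svc_kwargs):
                self.members = []
                self.svc_kwargs = svc_kwargs

            def partial_fit(self, X, y):
                clf = SVC(probability=True, **self.svc_kwargs).fit(X, y)
                self.members.append(clf)
                return self

            def predict_proba(self, X):
                # simple average of member probability maps; weighted schemes are possible
                return np.mean([m.predict_proba(X) for m in self.members], axis=0)

            def predict(self, X):
                proba = self.predict_proba(X)
                # assumes all members saw the same class set, so columns align
                return self.members[0].classes_[np.argmax(proba, axis=1)]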

  10. Multi-DOF Incremental Optical Encoder with Laser Wavelength Compensation

    Directory of Open Access Journals (Sweden)

    Cha'o-Kuang Chen

    2013-09-01

    Full Text Available This study used a reflective diffraction grating as the medium to develop a multi-DOF incremental optical encoder for a motion stage. The optical encoder can measure three angular displacements (roll, yaw and pitch) of the motion stage simultaneously, as well as the horizontal straightness and linear displacement, for a total of five DOF errors of the motion stage, by only using the positive and negative first-order diffracted light. The grating diffraction theory, Doppler effect, and optical interference technique were used. Two quadrant photodetectors were used to measure the changes in three-dimensional space of the diffraction direction of the diffracted light, in order to construct a multi-DOF incremental optical encoder. Considering the working stability of a laser diode and preventing the influence of the zeroth-order diffracted light returning to the laser diode, an additional optical isolation system was designed and a wavelength variation monitoring module was created. The compensation for the light source wavelength variation could be 0.001 nm. The multi-DOF verification results showed that the roll error is ±0.7/60 arcsec, the standard deviation is 0.025 arcsec; the yaw error is ±0.7/30 arcsec, the standard deviation is 0.05 arcsec; the pitch error is ±0.8/90 arcsec, the standard deviation is 0.18 arcsec; the horizontal straightness error is ±0.5/250 μm, the standard deviation is 0.05 μm; and the linear displacement error is ±1/20000 μm, the standard deviation is 12 nm.

  11. PROCESS SIMULATION AND QUALITY EVALUATION IN INCREMENTAL SHEET FORMING

    Directory of Open Access Journals (Sweden)

    Meftah Hrairi

    2011-12-01

    Full Text Available Single Point Incremental Forming (SPIF) is a promising sheet-metal-forming process that permits the manufacturing of small to medium-sized batches of complex parts at low cost. It allows metal forming to work in the critical 'necking-to-tearing' zone, which results in a strong thinning before failure if the process is well designed. Moreover, the process is complex due to the number of variables involved. Thus, it is not possible to consider that the process has been fully assessed; several remaining aspects need to be clarified. The objective of the present paper is to study some of these aspects, namely the phenomenon of wall thickness overstretch along the depth and the effect of the tool path on the distribution of the wall thickness, using finite element simulations. Abstract (translated from Malay): Single Point Incremental Forming (SPIF) is a sheet-metal-forming process that enables the manufacture of complex parts in small to medium batches at low cost. If the process is well designed, this method allows good metal forming to be achieved; otherwise the critical 'necking-to-tearing' zone causes excessive thinning that can damage the metal. Moreover, the process is rather complex because it involves several variables. Thus, even though the process has been assessed, several other aspects still need to be clarified. The objective of this paper is to study certain aspects, such as excessive wall-thickness stretching along the depth and the effect of the tool path (a series of coordinate positions defining the movement of the cutting tool during a machining operation) on the distribution of the wall thickness, using finite element simulation.

  12. Incremental learning with SVM for multimodal classification of prostatic adenocarcinoma.

    Directory of Open Access Journals (Sweden)

    José Fernando García Molina

    Full Text Available Robust detection of prostatic cancer is a challenge due to the multitude of variants and their representation in MR images. We propose a pattern recognition system with an incremental learning ensemble algorithm using support vector machines (SVM), tackling this problem employing multimodal MR images and a texture-based information strategy. The proposed system integrates anatomic, texture, and functional features. The data set was preprocessed using B-Spline interpolation, bias field correction and intensity standardization. First- and second-order angular independent statistical approaches and rotation invariant local phase quantization (RI-LPQ) were utilized to quantify texture information. An incremental learning ensemble SVM was implemented to suit working conditions in medical applications and to improve effectiveness and robustness of the system. The probability estimation of cancer structures was calculated using SVM and the corresponding optimization was carried out with a heuristic method together with a 3-fold cross-validation methodology. We achieved an average sensitivity of 0.844 ± 0.068 and a specificity of 0.780 ± 0.038, which yielded superior or similar performance to current state of the art using a total database of only 41 slices from twelve patients with histologically confirmed information, including cancerous, unhealthy non-cancerous and healthy prostate tissue. Our results show the feasibility of an ensemble SVM being able to learn additional information from new data while preserving previously acquired knowledge and preventing unlearning. The use of texture descriptors provides more salient discriminative patterns than the functional information used. Furthermore, the system improves selection of information, efficiency and robustness of the classification. The generated probability map enables radiologists to have a lower variability in diagnosis, decrease false negative rates and reduce the time to recognize and

  13. Tactile friction of topical formulations.

    Science.gov (United States)

    Skedung, L; Buraczewska-Norin, I; Dawood, N; Rutland, M W; Ringstad, L

    2016-02-01

    The tactile perception is essential for all types of topical formulations (cosmetic, pharmaceutical, medical device) and the possibility to predict the sensorial response by using instrumental methods instead of sensory testing would save time and cost at an early stage product development. Here, we report on an instrumental evaluation method using tactile friction measurements to estimate perceptual attributes of topical formulations. Friction was measured between an index finger and an artificial skin substrate after application of formulations using a force sensor. Both model formulations of liquid crystalline phase structures with significantly different tactile properties, as well as commercial pharmaceutical moisturizing creams being more tactile-similar, were investigated. Friction coefficients were calculated as the ratio of the friction force to the applied load. The structures of the model formulations and phase transitions as a result of water evaporation were identified using optical microscopy. The friction device could distinguish friction coefficients between the phase structures, as well as the commercial creams after spreading and absorption into the substrate. In addition, phase transitions resulting in alterations in the feel of the formulations could be detected. A correlation was established between skin hydration and friction coefficient, where hydrated skin gave rise to higher friction. Also a link between skin smoothening and finger friction was established for the commercial moisturizing creams, although further investigations are needed to analyse this and correlations with other sensorial attributes in more detail. The present investigation shows that tactile friction measurements have potential as an alternative or complement in the evaluation of perception of topical formulations. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Formulations of Amlodipine: A Review

    Directory of Open Access Journals (Sweden)

    Muhammad Ali Sheraz

    2016-01-01

    Full Text Available Amlodipine (AD) is a calcium channel blocker that is mainly used in the treatment of hypertension and angina. However, latest findings have revealed that its efficacy is not only limited to the treatment of cardiovascular diseases, as it has been shown to possess antioxidant activity and plays an important role in apoptosis. Therefore, it is also employed in the treatment of cerebrovascular stroke, neurodegenerative diseases, leukemia, breast cancer, and so forth, either alone or in combination with other drugs. AD is a photosensitive drug and requires protection from light. A number of workers have tried to formulate various conventional and nonconventional dosage forms of AD. This review highlights all the formulations that have been developed to achieve maximum stability with the desired therapeutic action for the delivery of AD, such as fast dissolving tablets, floating tablets, layered tablets, single-pill combinations, capsules, oral and transdermal films, suspensions, emulsions, mucoadhesive microspheres, gels, transdermal patches, and liposomal formulations.

  15. Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment

    Institute of Scientific and Technical Information of China (English)

    Zhenyue Zhang; Hongyuan Zha

    2004-01-01

    We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold is learned by constructing an approximation for the tangent space at each point, and those tangent spaces are then aligned to give the global coordinates of the data points with respect to the underlying manifold. We also present an error analysis of our algorithm showing that reconstruction errors can be quite small in some cases. We illustrate our algorithm using curves and surfaces both in 2D/3D Euclidean spaces and higher dimensional Euclidean spaces. We also address several theoretical and algorithmic issues for further research and improvements.
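
    A minimal sketch of the first stage of such a tangent-space method: estimating an orthonormal tangent basis at each sample from the SVD of its centred k-nearest-neighbour patch. The subsequent global alignment step, which assembles and solves an eigenproblem over the alignment matrix, is omitted; k and d are illustrative parameters.

        import numpy as np
        from scipy.spatial import cKDTree

        def local_tangent_bases(X, k=10, d=2):
            """For each row of the (N, D) array X, return a D x d orthonormal basis of
            the estimated tangent space, taken from the leading right singular vectors
            of the centred k-nearest-neighbour patch.  Illustrative sketch only."""
            tree = cKDTree(X)
            bases = []
            for x in X:
                _, idx = tree.query(x, k=k)
                patch = X[idx] - X[idx].mean(axis=0)   # centre the neighbourhood
                # rows of Vt are the principal directions of the patch
                _, _, Vt = np.linalg.svd(patch, full_matrices=False)
                bases.append(Vt[:d].T)                 # leading d directions span the tangent space
            return bases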

  16. Covariant Formulations of Superstring Theories.

    Science.gov (United States)

    Mikovic, Aleksandar Radomir

    1990-01-01

    Chapter 1 contains a brief introduction to the subject of string theory, and tries to motivate the study of superstrings and covariant formulations. Chapter 2 describes the Green-Schwarz formulation of the superstrings. The Hamiltonian and BRST structure of the theory is analysed in the case of the superparticle. Implications for the superstring case are discussed. Chapter 3 describes the Siegel's formulation of the superstring, which contains only the first class constraints. It is shown that the physical spectrum coincides with that of the Green-Schwarz formulation. In chapter 4 we analyse the BRST structure of the Siegel's formulation. We show that the BRST charge has the wrong cohomology, and propose a modification, called first ilk, which gives the right cohomology. We also propose another superparticle model, called second ilk, which has infinitely many coordinates and constraints. We construct the complete BRST charge for it, and show that it gives the correct cohomology. In chapter 5 we analyse the properties of the covariant vertex operators and the corresponding S-matrix elements by using the Siegel's formulation. We conclude that the knowledge of the ghosts is necessary, even at the tree level, in order to obtain the correct S-matrix. In chapter 6 we attempt to calculate the superstring loops, in a covariant gauge. We calculate the vacuum-to -vacuum amplitude, which is also the cosmological constant. We show that it vanishes to all loop orders, under the assumption that the free covariant gauge-fixed action exists. In chapter 7 we present our conclusions, and briefly discuss the random lattice approach to the string theory, as a possible way of resolving the problem of the covariant quantization and the nonperturbative definition of the superstrings.

  17. Estimates for the Tail Probability of the Supremum of a Random Walk with Independent Increments

    Institute of Scientific and Technical Information of China (English)

    Yang YANG; Kaiyong WANG

    2011-01-01

    The authors investigate the tail probability of the supremum of a random walk with independent increments and obtain some equivalent assertions in the case that the increments are independent and identically distributed random variables with O-subexponential integrated distributions. A uniform upper bound is derived for the distribution of the supremum of a random walk with independent but non-identically distributed increments, whose tail distributions are dominated by a common tail distribution with an O-subexponential integrated distribution.
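
    For orientation, the object under study is the tail of the all-time supremum of the random walk; in standard notation (ours, not quoted from the paper), with increments X_i:

        S_0 = 0, \qquad S_n = \sum_{i=1}^{n} X_i, \qquad M = \sup_{n \ge 0} S_n, \qquad \text{and the tail of interest is } \Pr(M > x) \text{ as } x \to \infty .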

  18. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    OpenAIRE

    Vermeulen, Patrick; Bosch, Frans; Volberda, Henk

    2006-01-01

    Many product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation. In this paper, we use an institutional perspective to investigate why established firms in the financial services industry struggle with their complex incremental product innovation efforts. We ar...

  19. EVALUATION OF DISTRIBUTION HISTOGRAMS FOR INCREMENT OF CHROMATICITY COORDINATES IN DISPLAY TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    I. O. Zharinov

    2016-05-01

    Full Text Available We consider the problem of evaluating the increment of chromaticity coordinates for an image displayed by indicating means (liquid crystal panels, etc.). The display device profile, specified by the weight matrix for the components of the primary colors, serves as the basic data for the quantitative calculation. The research results take the form of mathematical expressions that allow calculation of the increment values of the chromaticity coordinates of the image displayed on the indicating means, and of histograms of the increment distribution.

  20. Incremental peritoneal dialysis: a 10 year single-centre experience.

    Science.gov (United States)

    Sandrini, Massimo; Vizzardi, Valerio; Valerio, Francesca; Ravera, Sara; Manili, Luigi; Zubani, Roberto; Lucca, Bernardo J A; Cancarini, Giovanni

    2016-12-01

    Incremental dialysis consists in prescribing a dialysis dose aimed at maintaining total solute clearance (renal + dialysis) near the targets set by guidelines. Incremental peritoneal dialysis (incrPD) is defined as one or two dwell-times per day on CAPD, whereas standard peritoneal dialysis (stPD) consists of three to four dwell-times per day. Single-centre cohort study. Enrolment period: January 2002-December 2007; end of follow up (FU): December 2012. Incident patients with FU ≥6 months, initial residual renal function (RRF) 3-10 ml/min/1.73 sqm BSA, renal indication for PD. Median incrPD duration was 17 months (I-III Q: 10; 30). There were no statistically significant differences between 29 patients on incrPD and 76 on stPD regarding: clinical, demographic and anthropometric characteristics at the beginning of treatment, adequacy indices, peritonitis-free survival (peritonitis incidence: 1/135 patient-months in incrPD vs. 1/52 patient-months in stPD) and patient survival. During the first 6 months, RRF remained stable in incrPD (6.20 ± 2.02 vs. 6.08 ± 1.47 ml/min/1.73 sqm BSA; p = 0.792) whereas it decreased in stPD (4.48 ± 2.12 vs. 5.61 ± 1.49; p < 0.001). Patient survival was affected negatively by ischemic cardiopathy (HR: 4.269; p < 0.001), peripheral and cerebral vascular disease (HR: 2.842; p = 0.006) and cirrhosis (2.982; p = 0.032) and positively by urine output (0.392; p = 0.034). Hospitalization rates were significantly lower in incrPD (p = 0.021). Eight of 29 incrPD patients were transplanted before reaching full dose treatment. IncrPD is a safe modality to start PD; compared to stPD, it shows similar survival rates, significantly less hospitalization, a trend towards lower peritonitis incidence and slower reduction of renal function.

  1. Incremental triangulation by way of edge swapping and local optimization

    Science.gov (United States)

    Wiltberger, N. Lyn

    1994-01-01

    This document is intended to serve as an installation, usage, and basic theory guide for the two dimensional triangulation software 'HARLEY' written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user selected options. For example, local edge swapping criteria can be chosen which minimize the maximum interior angle (a MinMax triangulation) or which maximize the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criteria. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for solution adaptive grid generation) the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user to specify either the number of cells desired or to refine the mesh until all cell refinement measures satisfy a user specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest refinement criteria value. The code allows the user to: triangulate a cloud of prespecified points (sites), triangulate a set of prespecified interior points constrained by prespecified boundary curve(s), Steiner triangulate the interior/exterior of prespecified boundary curve
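
    A minimal sketch of the local edge-swap test behind the MaxMin (Delaunay) criterion: for the quadrilateral formed by two triangles sharing an edge, the shared edge is swapped when the opposite vertex lies inside the circumcircle, which is exactly the configuration in which swapping increases the minimum interior angle. Illustrative only; degenerate and boundary cases are ignored.

        import numpy as np

        def in_circumcircle(a, b, c, d):
            """True if point d lies strictly inside the circumcircle of triangle abc
            (a, b, c given in counter-clockwise order); standard incircle determinant."""
            m = np.array([[a[0]-d[0], a[1]-d[1], (a[0]-d[0])**2 + (a[1]-d[1])**2],
                          [b[0]-d[0], b[1]-d[1], (b[0]-d[0])**2 + (b[1]-d[1])**2],
                          [c[0]-d[0], c[1]-d[1], (c[0]-d[0])**2 + (c[1]-d[1])**2]])
            return np.linalg.det(m) > 0.0

        def should_swap(a, b, c, d):
            """Triangles (a, b, c) (counter-clockwise) and (a, c, d) share edge a-c.
            Swap a-c for b-d when the Delaunay / MaxMin criterion is violated."""
            return in_circumcircle(a, b, c, d)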

  2. On critical cases in limit theory for stationary increments Lévy driven moving averages

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Podolskij, Mark

    In this paper we present some limit theorems for power variation of stationary increments Lévy driven moving averages in the setting of critical regimes. In [5] the authors derived first and second order asymptotic results for k-th order increments of stationary increments Lévy driven moving averages. The limit theory heavily depends on the interplay between the given order of the increments, the considered power, the Blumenthal-Getoor index of the driving pure jump Lévy process L and the behavior of the kernel function g at 0. In this work we will study the critical cases, which were...

  3. The incremental impact of cardiac MRI on clinical decision-making

    Science.gov (United States)

    Stewart, Michael J; Richardson, James D; Child, Nicholas M; Maredia, Neil

    2016-01-01

    Objective: Despite a significant expansion in the use of cardiac MRI (CMR), there is inadequate evaluation of its incremental impact on clinical decision-making over and above other well-established modalities. We sought to determine the incremental utility of CMR in routine practice. Methods: 629 consecutive CMR studies referred by 44 clinicians from 9 institutions were evaluated. Pre-defined algorithms were used to determine the incremental influence on diagnostic thinking, influence on clinical management and thus the overall clinical utility. Studies were also subdivided and evaluated according to the indication for CMR. Results: CMR provided incremental information to the clinician in 85% of cases, with incremental influence on diagnostic thinking in 85% of cases and incremental impact on management in 42% of cases. The overall incremental utility of CMR exceeded 90% in 7 out of the 13 indications, whereas in settings such as the evaluation of unexplained ventricular arrhythmia or mild left ventricular systolic dysfunction, this was <50%. Conclusion: CMR was frequently able to inform and influence decision-making in routine clinical practice, even with analyses that accepted only incremental clinical information and excluded a redundant duplication of imaging. Significant variations in yield were noted according to the indication for CMR. These data support a wider integration of CMR services into cardiac imaging departments. Advances in knowledge: These data are the first to objectively evaluate the incremental value of a UK CMR service in clinical decision-making. Such data are essential when seeking justification for a CMR service. PMID:26493468

  4. The incremental impact of cardiac MRI on clinical decision-making.

    Science.gov (United States)

    Rajwani, Adil; Stewart, Michael J; Richardson, James D; Child, Nicholas M; Maredia, Neil

    2016-01-01

    Despite a significant expansion in the use of cardiac MRI (CMR), there is inadequate evaluation of its incremental impact on clinical decision-making over and above other well-established modalities. We sought to determine the incremental utility of CMR in routine practice. 629 consecutive CMR studies referred by 44 clinicians from 9 institutions were evaluated. Pre-defined algorithms were used to determine the incremental influence on diagnostic thinking, influence on clinical management and thus the overall clinical utility. Studies were also subdivided and evaluated according to the indication for CMR. CMR provided incremental information to the clinician in 85% of cases, with incremental influence on diagnostic thinking in 85% of cases and incremental impact on management in 42% of cases. The overall incremental utility of CMR exceeded 90% in 7 out of the 13 indications, whereas in settings such as the evaluation of unexplained ventricular arrhythmia or mild left ventricular systolic dysfunction, this was <50%. CMR was frequently able to inform and influence decision-making in routine clinical practice, even with analyses that accepted only incremental clinical information and excluded a redundant duplication of imaging. Significant variations in yield were noted according to the indication for CMR. These data support a wider integration of CMR services into cardiac imaging departments. These data are the first to objectively evaluate the incremental value of a UK CMR service in clinical decision-making. Such data are essential when seeking justification for a CMR service.

  5. Instream flow incremental methodology: microhabitat utilization criteria for Atlantic salmon in Newfoundland waters

    National Research Council Canada - National Science Library

    DeGraaf, D.A; Chaput, G.J

    1984-01-01

    ... with the PHABSIM computer model of the Instream Flow Incremental Methodology (IFIM). Potential study areas were identified through discussions among LGL, the Scientific Authority and DFO researchers...

  6. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparison of the velocity increments generated by the rigid body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into functions of the true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  7. Arc-tangent Transformation Algorithm for Active Impulsive Noise Control

    Institute of Scientific and Technical Information of China (English)

    Shao Jun; Zhou Yali; Zhang Qizhi

    2012-01-01

    In recent years, some effective algorithms for active impulsive noise control have been proposed, but these algorithms may not be stable due to the high, sharp peaks of impulsive noise. To overcome this shortcoming, a new algorithm based on minimizing the squared arc-tangent transformation of the error signal is proposed. The algorithm does not require threshold estimation or parameter selection based on prior knowledge of the impulsive noise; moreover, it is simple in structure and easy to implement. Simulation results show that the proposed algorithm can effectively attenuate impulsive noise and, compared with other algorithms, exhibits better convergence and stability.
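
    A minimal sketch of the weight update implied by minimizing J = arctan(e)^2 in a filtered-x LMS structure; this is one reading of the abstract, not the authors' code, and the variable names are illustrative.

        import numpy as np

        def arctan_fxlms_update(weights, x_filtered, error, mu):
            """One adaptive-filter update for the cost J = arctan(e)**2 (one reading of
            the abstract).  The gradient factor arctan(e)/(1+e**2) stays bounded for
            impulsive errors, unlike the plain LMS gradient term e."""
            grad = np.arctan(error) / (1.0 + error ** 2)
            # x_filtered: reference signal filtered by the secondary-path estimate, as in FxLMS
            return weights + mu * grad * x_filtered

    Because d(arctan e)/de = 1/(1 + e^2), the effective gradient saturates for large errors, which is what keeps the adaptation stable under impulsive noise.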

  8. Formulating Policy for Summer Schools.

    Science.gov (United States)

    Marriot, Helen

    1991-01-01

    Explores issues relating to the formulation of policy for summer programs for language learning, describing one university's experience with student demand, student motivation and progress, course timing and structure, academic staffing, administrative organization, student and staff evaluation, and funding. (three references) (CB)

  9. Hyperbolic Formulation of General Relativity

    CERN Document Server

    Abrahams, A M; Choquet-Bruhat, Y; York, J W; Abrahams, Andrew; Anderson, Arlen; Choquet-Bruhat, Yvonne; York, James W.

    1998-01-01

    Two geometrical well-posed hyperbolic formulations of general relativity are described. One admits any time-slicing which preserves a generalized harmonic condition. The other admits arbitrary time-slicings. Both systems have only the physical characteristic speeds of zero and the speed of light.

  10. Business Collaboration in Food Networks: Incremental Solution Development

    Directory of Open Access Journals (Sweden)

    Harald Sundmaeker

    2014-10-01

    Full Text Available The paper presents an approach for incremental solution development based on the use of the currently developed Internet-based FIspace business collaboration platform. The key element is the clear segmentation of infrastructures that are either internal or external to the collaborating business entity in the food network. On the one hand, the approach makes it possible to differentiate between specific centralised and decentralised ways of storing data and hosting IT-based functionalities, and it facilitates the selection of specific data-exchange protocols and data models. On the other hand, the supported solution design and subsequent development focus on reusable "software Apps" that can be used on their own and incorporate a clear added value for the business actors. It is outlined how to push the development and introduction of Apps that do not require basic changes to the existing infrastructure. The paper presents an example based on the development of a set of Apps for the exchange of product-quality-related information in food networks, specifically addressing fresh fruits and vegetables; it combines workflow support for data exchange from farm to retail with quality feedback information to facilitate business process improvement. Finally, the latest status of the FIspace platform development is outlined, together with key features and potential ways for real users and software developers to use the FIspace platform, which is initiated by science and industry.

  11. Comparative Study between Programming Systems for Incremental Sheet Forming Process

    Directory of Open Access Journals (Sweden)

    Moayedfar Majid

    2014-07-01

    Full Text Available Incremental Sheet Forming (ISF) is a method developed to form a desired surface feature on sheet metals in batch production series. Due to the lack of a dedicated programming system to execute, control and monitor the whole ISF process, researchers have tried to utilize programming systems designed for chip-making processes to suit ISF. In this work, experiments were conducted to find the suitability and quality of ISF parts produced by using manual CNC part programming. ISF was carried out on stainless steel sheets using Computer Numerical Control (CNC) milling machines. Prior to running the experiments, a ball-point shaped tool made of bronze alloy was fabricated due to its superior ability to reduce the amount of friction and improve the surface quality of the stainless steel sheet metal. The experiments also employed the method of forming in the negative direction with a blank mould and the tool, which helped to shape the desired part quickly. The programming was generated using the MasterCAM software for the CNC milling machine and edited before transferring to the machine. However, the programming for the machine was also written manually to show the differences in output data between software programming and manual programming. From the results, the best method of programming was found and a minimum amount of contact area between tool and sheet metal was achieved.

  12. Urdu to Punjabi Machine Translation: An Incremental Training Approach

    Directory of Open Access Journals (Sweden)

    Umrinderpal Singh

    2016-04-01

    Full Text Available The statistical machine translation approach is highly popular in the automatic translation research area and is a promising approach for yielding good accuracy. Efforts have been made to develop an Urdu to Punjabi statistical machine translation system. The system is based on an incremental training approach to train the statistical model. In place of a parallel sentence corpus, manually mapped phrases were used to train the model. In the preprocessing phase, various rules were used for the tokenization and segmentation processes. Along with these rules, a text classification system was implemented to classify the input text into predefined classes, and the decoder translates the given text according to the domain selected by the text classifier. The system uses a Hidden Markov Model (HMM) for the learning process and the Viterbi algorithm for decoding. Experiments and evaluation have shown that a simple statistical model like the HMM yields good accuracy for a closely related language pair like Urdu-Punjabi. The system achieved a 0.86 BLEU score and more than 85% accuracy in manual testing.
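
    A minimal, generic sketch of Viterbi decoding over an HMM using log-probabilities; the paper's phrase mapping and domain-specific decoding are not reproduced, and the data structures are assumptions.

        def viterbi(obs, states, log_start, log_trans, log_emit):
            """Most likely hidden-state sequence for an observation sequence.
            log_start[s], log_trans[prev][s] and log_emit[s][o] are log-probabilities."""
            V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
            back = []
            for o in obs[1:]:
                scores, pointers = {}, {}
                for s in states:
                    # best predecessor of state s at the previous time step
                    prev = max(states, key=lambda p: V[-1][p] + log_trans[p][s])
                    scores[s] = V[-1][prev] + log_trans[prev][s] + log_emit[s][o]
                    pointers[s] = prev
                V.append(scores)
                back.append(pointers)
            path = [max(states, key=lambda s: V[-1][s])]
            for pointers in reversed(back):      # follow back-pointers to recover the path
                path.append(pointers[path[-1]])
            return list(reversed(path))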

  13. Online Botnet Detection Based on Incremental Discrete Fourier Transform

    Directory of Open Access Journals (Sweden)

    Xiaocong Yu

    2010-05-01

    Full Text Available Botnet detection has attracted much attention since botnet attacks are becoming one of the most serious threats on the Internet. However, little work has considered online detection. In this paper, we propose a novel approach that can monitor botnet activities in an online way. We define the concept of "feature streams" to describe raw network traffic. If some feature streams show high similarities, the corresponding hosts will be regarded as suspected bots and added to the set of suspected bot hosts. After activity analysis, bot hosts are confirmed as soon as possible. We present a simple method that computes the average Euclidean distance for similarity measurement. To avoid huge calculations among feature streams, the classical Discrete Fourier Transform (DFT) technique is adopted. Then an incremental calculation of the DFT coefficients is introduced to obtain the optimal execution time. The experimental evaluations show that our approach can detect both centralized and distributed botnet activities successfully with high efficiency and a low false positive rate.
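
    A minimal sketch of the incremental (sliding-window) DFT recurrence that avoids recomputing the full transform when one sample leaves the window and one enters; this is the standard sliding-DFT update, shown only to illustrate the idea, not the paper's exact feature-stream computation.

        import numpy as np

        def sliding_dft_update(X, x_old, x_new, N):
            """Update all N DFT bins of a length-N window after dropping x_old and
            appending x_new:  X_k <- (X_k - x_old + x_new) * exp(j*2*pi*k/N)."""
            k = np.arange(N)
            return (X - x_old + x_new) * np.exp(2j * np.pi * k / N)

        # Usage idea: keep a deque of the last N samples; each new sample then costs
        # O(N) for all bins instead of the O(N log N) of a full FFT recomputation.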

  14. [Effect of menstrual cycle on cardiorespiratory system during incremental exercise].

    Science.gov (United States)

    Mesaki, N; Sasaki, J; Shoji, M; Iwasaki, H; Asano, K; Eda, M

    1986-01-01

    According to the results of questionnaires given to college athletes, they believe the follicular phase is better than the luteal phase for competitive sports. However, it is not clear whether there is a significant difference in athletic performance between the two phases of the menstrual cycle. The effects of the menstrual cycle on the cardiorespiratory system were investigated in exercising women who are top basketball players in Japan. They performed incremental exercise on a cycle ergometer. During the exercise, the ECG and heart rate (HR) were monitored. The expired air was sampled continuously and the expiratory gas volume per minute (VE), oxygen uptake (VO2), carbon dioxide output (VCO2), gas exchange ratio (R) and respiratory rate (Resp. R.) were measured. Blood samples were collected to measure the blood lactic acid concentration during the exercise. HR in the luteal phase was higher than in the follicular phase at rest and throughout the exercise. VE, R and Resp. R. at rest and during exercise showed a tendency towards higher levels in the follicular phase. The blood lactic acid concentration during exercise in the follicular phase showed a tendency to increase more rapidly than in the luteal phase. However, no statistically significant differences in the cardiorespiratory system were detected when the follicular and luteal phases were compared. These results did not indicate conclusively in which phase it is better for athletic women to take part in competitive sports.

  15. Incremental planning to control a blackboard-based problem solver

    Science.gov (United States)

    Durfee, E. H.; Lesser, V. R.

    1987-01-01

    To control problem solving activity, a planner must resolve uncertainty about which specific long-term goals (solutions) to pursue and about which sequences of actions will best achieve those goals. A planner is described that abstracts the problem solving state to recognize possible competing and compatible solutions and to roughly predict the importance and expense of developing these solutions. With this information, the planner plans sequences of problem solving activities that most efficiently resolve its uncertainty about which of the possible solutions to work toward. The planner only details actions for the near future because the results of these actions will influence how (and whether) a plan should be pursued. As problem solving proceeds, the planner adds new details to the plan incrementally, and monitors and repairs the plan to insure it achieves its goals whenever possible. Through experiments, researchers illustrate how these new mechanisms significantly improve problem solving decisions and reduce overall computation. They briefly discuss current research directions, including how these mechanisms can improve a problem solver's real-time response and can enhance cooperation in a distributed problem solving network.

  16. Numerical simulation and experimental investigation of incremental sheet forming process

    Institute of Scientific and Technical Information of China (English)

    HAN Fei; MO Jian-hua

    2008-01-01

    In order to investigate the process of incremental sheet forming (ISF) through both experimental and numerical approaches, a three-dimensional elasto-plastic finite element model (FEM) was developed to simulate the process and the simulated results were compared with those of experiment. The results of numerical simulations, such as the strain history and distribution, the stress state and distribution, sheet thickness distribution, etc, were discussed in details, and the influences of process parameters on these results were also analyzed. The simulated results of the radial strain and the thickness distribution are in good agreement with experimental results. The simulations reveal that the deformation is localized around the tool and constantly remains close to a plane strain state. With decreasing depth step, increasing tool diameter and wall inclination angle, the axial stress reduces, leading to less thinning and more homogeneous plastic strain and thickness distribution. During ISF, the plastic strain increases stepwise under the action of the tool. Each increase in plastic strain is accompanied by hydrostatic pressure, which explains why obtainable deformation using ISF exceeds the forming limits of conventional sheet forming.

  17. Process Parameters Optimization in Single Point Incremental Forming

    Science.gov (United States)

    Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh

    2016-04-01

    This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array selected on the basis of DOF. The tests have been carried out on vertical machining center (DMC70V); using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius, three levels of sheet thickness, step size, tool rotational speed, feed rate and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered process responses. The influential process parameters for the formability and surface roughness have been identified with the help of statistical tool (response table, main effect plot and ANOVA). The parameter that has the utmost influence on formability and surface roughness is lubrication. In the case of formability, lubrication followed by the tool rotational speed, feed rate, sheet thickness, step size and tool radius have the influence in descending order. Whereas in surface roughness, lubrication followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed have the influence in descending order. The predicted optimal values for the wall angle and surface roughness are found to be 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice and the value of wall angle and surface roughness were found to be 85.76° and 1.15 µm respectively.

  18. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab

    2017-08-22

    Frequent subgraph mining is a core graph operation used in many domains, such as graph data management and knowledge exploration, bioinformatics and security. Most existing techniques target static graphs. However, modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible, due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach for continuous frequent subgraph mining problem on a single large evolving graph. We adapt the notion of “fringe” to the graph context, that is the set of subgraphs on the border between frequent and infrequent subgraphs. IncGM+ maintains fringe subgraphs and exploits them to prune the search space. To boost the efficiency, we propose an efficient index structure to maintain selected embeddings with minimal memory overhead. These embeddings are utilized to avoid redundant expensive subgraph isomorphism operations. Moreover, the proposed system supports batch updates. Using large real-world graphs, we experimentally verify that IncGM+ outperforms existing methods by up to three orders of magnitude, scales to much larger graphs and consumes less memory.

  19. Towards a fully size-consistent method of increments

    CERN Document Server

    Fertitta, E; Paulus, B; Barcza, G; Legeza, Ö

    2016-01-01

    The method of increments (MoI) allows one to successfully calculate cohesive energies of bulk materials with high accuracy, but it encounters difficulties when calculating whole dissociation curves. The reason is that its standard formalism is based on a single Hartree-Fock (HF) configuration whose orbitals are localized and used for the many-body expansion. Therefore, in those situations where HF does not allow a size-consistent description of the dissociation, the MoI cannot yield proper results either. Herein we address the problem by employing a size-consistent multiconfigurational reference for the MoI formalism. This leads to a matrix equation where a coupling derived by the reference itself is employed. In principle, such approach allows one to evaluate approximate values for the ground as well as excited states energies. While the latter are accurate close to the avoided crossing only, the ground state results are very promising for the whole dissociation curve, as shown by the comparison with density...

  20. Incremental refinement of a multi-user-detection algorithm (II

    Directory of Open Access Journals (Sweden)

    M. Vollmer

    2003-01-01

    Full Text Available Multi-user detection is a technique proposed for mobile radio systems based on the CDMA principle, such as the upcoming UMTS. While offering an elegant solution to problems such as intra-cell interference, it demands very significant computational resources. In this paper, we present a high-level approach for reducing the required resources for performing multi-user detection in a 3GPP TDD multi-user system. This approach is based on a displacement representation of the parameters that describe the transmission system, and a generalized Schur algorithm that works on this representation. The Schur algorithm naturally leads to a highly parallel hardware implementation using CORDIC cells. It is shown that this hardware architecture can also be used to compute the initial displacement representation. It is very beneficial to introduce incremental refinement structures into the solution process, both at the algorithmic level and in the individual cells of the hardware architecture. We detail these approximations and present simulation results that confirm their effectiveness.

  1. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Directory of Open Access Journals (Sweden)

    Jerry Chun-Wei Lin

    2015-01-01

    Full Text Available Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the purchased items of a customer may carry other factors, such as profit or quantity. High-utility mining was designed to solve the limitations of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle a static database. Fewer studies handle dynamic high-utility mining with transaction insertion, which requires database rescans and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation based on the utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns.
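
    For reference, the utility measure that high-utility mining targets combines per-item profit with per-transaction quantity; a minimal sketch using the standard definition, with an assumed data layout and hypothetical example data.

        def itemset_utility(itemset, transactions, profit):
            """Utility of an itemset = sum, over transactions containing all its items,
            of quantity(item, transaction) * profit(item).
            transactions: list of {item: quantity} dicts; profit: {item: unit profit}."""
            total = 0
            for t in transactions:
                if all(item in t for item in itemset):
                    total += sum(t[item] * profit[item] for item in itemset)
            return total

        # Hypothetical data: utility of {'a', 'b'} = (2*5 + 1*2) + (1*5 + 4*2) = 25
        transactions = [{'a': 2, 'b': 1, 'c': 3}, {'a': 1, 'c': 2}, {'a': 1, 'b': 4}]
        profit = {'a': 5, 'b': 2, 'c': 1}
        print(itemset_utility({'a', 'b'}, transactions, profit))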

  2. Extending XNAT Platform with an Incremental Semantic Framework

    Directory of Open Access Journals (Sweden)

    Santiago Timón

    2017-08-01

    Full Text Available Informatics increases the yield from neuroscience due to improved data. Data sharing and accessibility enable joint efforts between different research groups, as well as replication studies, which are pivotal for progress in the field. Research data archiving solutions are evolving rapidly to address these necessities; however, distributed data integration is still difficult because of the need for explicit agreements on disparate data models. To address these problems, ontologies are widely used in biomedical research to obtain common vocabularies and logical descriptions, but their application may suffer from scalability issues, domain bias, and loss of low-level data access. With the aim of improving the application of semantic models in biobanking systems, an incremental semantic framework that takes advantage of the latest advances in biomedical ontologies and the XNAT platform is designed and implemented. We follow a layered architecture that allows the alignment of multi-domain biomedical ontologies to manage data at different levels of abstraction. To illustrate this approach, the development is integrated in the JPND (EU Joint Program for Neurodegenerative Disease) APGeM project, focused on finding early biomarkers for Alzheimer's and other dementia-related diseases.

  3. A novel instrument for generating angular increments of 1 nanoradian

    Science.gov (United States)

    Alcock, Simon G.; Bugnar, Alex; Nistea, Ioana; Sawhney, Kawal; Scott, Stewart; Hillman, Michael; Grindrod, Jamie; Johnson, Iain

    2015-12-01

    Accurate generation of small angles is of vital importance for calibrating angle-based metrology instruments used in a broad spectrum of industries including mechatronics, nano-positioning, and optic fabrication. We present a novel, piezo-driven, flexure device capable of reliably generating micro- and nanoradian angles. Unlike many such instruments, Diamond Light Source's nano-angle generator (Diamond-NANGO) does not rely on two separate actuators or rotation stages to provide coarse and fine motion. Instead, a single Physik Instrumente NEXLINE "PiezoWalk" actuator provides millimetres of travel with nanometre resolution. A cartwheel flexure efficiently converts displacement from the linear actuator into rotary motion with minimal parasitic errors. Rotation of the flexure is directly measured via a Magnescale "Laserscale" angle encoder. Closed-loop operation of the PiezoWalk actuator, using high-speed feedback from the angle encoder, ensures that the Diamond-NANGO's output drifts by only ˜0.3 nrad rms over ˜30 min. We show that the Diamond-NANGO can reliably move with unprecedented 1 nrad (˜57 ndeg) angular increments over a range of >7000 μrad. An autocollimator, interferometer, and capacitive displacement sensor are used to independently confirm the Diamond-NANGO's performance by simultaneously measuring the rotation of a reflective cube.

  4. Incremental Placement-Based Clock Network Minimization Methodology

    Institute of Scientific and Technical Information of China (English)

    ZHOU Qiang; CAI Yici; HUANG Liang; HONG Xianlong

    2008-01-01

    Power is the major challenge threatening the progress of very large scale integration (VLSI) technology development. In ultra-deep submicron VLSI designs, clock network size must be minimized to reduce power consumption, power supply noise, and the number of clock buffers which are vulnerable to process variations. Traditional design methodologies usually let the clock router independently undertake the clock network minimization. Since clock routing is based on register locations, register placement actually strongly influences the clock network size. This paper describes a clock network design methodology that optimizes register placement. For a given cell placement result, incremental modifications are performed based on the clock skew specifications by moving registers toward preferred locations that may reduce the clock network size. At the same time, the side-effects to logic cell placement, such as signal net wirelength and critical path delay, are controlled. Test results on benchmark circuits show that the methodology can considerably reduce clock network size with limited impact on signal net wirelength and critical path delay.

  5. Automated Dimension Determination for NMF-based Incremental Collaborative Filtering

    Directory of Open Access Journals (Sweden)

    Xiwei Wang

    2015-12-01

    Full Text Available The nonnegative matrix factorization (NMF) based collaborative filtering techniques have achieved great success in product recommendations. It is well known that in NMF, the dimensions of the factor matrices have to be determined in advance. Moreover, data is growing fast; thus in some cases, the dimensions need to be changed to reduce the approximation error. The recommender systems should be capable of incorporating new data in a timely manner without sacrificing prediction accuracy. In this paper, we propose an NMF based data update approach with automated dimension determination for collaborative filtering purposes. The approach can determine the dimensions of the factor matrices and update them automatically. It exploits a nearest neighborhood based clustering algorithm to cluster users and items according to their auxiliary information, and uses the clusters as the constraints in NMF. The dimensions of the factor matrices are associated with the cluster quantities. When new data becomes available, the incremental clustering algorithm determines whether to increase the number of clusters or merge the existing clusters. Experiments on three different datasets (MovieLens, Sushi, and LibimSeTi) were conducted to examine the proposed approach. The results show that our approach can update the data quickly and provide encouraging prediction accuracy.
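
    As a minimal sketch of the factorization step only (assuming NumPy and omitting the paper's clustering constraints and incremental update rules), a plain multiplicative-update NMF might look like this, with the rank k tied to the number of clusters found in the auxiliary information:

      import numpy as np

      def nmf(V, k, iters=200, seed=0, eps=1e-9):
          """Plain multiplicative-update NMF: V (users x items) ~= W @ H.
          In the approach above, k would be set to the cluster quantity."""
          rng = np.random.default_rng(seed)
          m, n = V.shape
          W = rng.random((m, k)) + eps
          H = rng.random((k, n)) + eps
          for _ in range(iters):
              H *= (W.T @ V) / (W.T @ W @ H + eps)   # update item factors
              W *= (V @ H.T) / (W @ H @ H.T + eps)   # update user factors
          return W, H

      # The predicted rating for user u and item i is then (W @ H)[u, i].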

  6. Characterization of potential impurities and degradation products in electronic cigarette formulations and aerosols.

    Science.gov (United States)

    Flora, Jason W; Meruva, Naren; Huang, Chorng B; Wilkinson, Celeste T; Ballentine, Regina; Smith, Donna C; Werley, Michael S; McKinney, Willie J

    2016-02-01

    E-cigarettes are gaining popularity in the U.S. as well as in other global markets. Currently, limited published analytical data characterizing e-cigarette formulations (e-liquids) and aerosols exist. While the FDA has not published a harmful and potentially harmful constituent (HPHC) list for e-cigarettes, the HPHC list for currently regulated tobacco products may be useful for analytically characterizing e-cigarette aerosols. For example, most e-cigarette formulations contain propylene glycol and glycerin, which may produce aldehydes when heated. In addition, nicotine-related chemicals have been previously reported as potential e-cigarette formulation impurities. This study determined e-liquid formulation impurities and potentially harmful chemicals in aerosols of select commercial MarkTen® e-cigarettes manufactured by NuMark LLC. The potential hazard of the identified formulation impurities and aerosol chemicals was also estimated. E-cigarettes were machine puffed (4-s duration, 55-mL volume, 30-s intervals) to battery exhaustion to maximize aerosol collection. Aerosols analyzed for carbonyls were collected in 20-puff increments to account for analyte instability. Tobacco-specific nitrosamines were measured at levels observed in pharmaceutical-grade nicotine. Nicotine-related impurities in the e-cigarette formulations were below the identification and qualification thresholds proposed in ICH Guideline Q3B(R2). Levels of potentially harmful chemicals detected in the aerosols were determined to be below published occupational exposure limits.

  7. Sandbox Modeling of the Fault-increment Pattern in Extensional Basins

    Institute of Scientific and Technical Information of China (English)

    Geng Changbo; Tong Hengmao; He Yudan; Wei Chunguang

    2007-01-01

    Three series of sandbox modeling experiments were performed to study the fault-increment pattern in extensional basins. Experimental results showed that the tectonic action mode of the boundaries and the shape of the major boundary faults control the formation and evolution of faults in extensional basins. In the process of extensional deformation, the increase in the number and length of faults was episodic, and every 'episode' experienced three periods: a strain-accumulation period, a quick fault-increment period and a strain-adjustment period. The more complex the shape of the boundary fault, the higher the strain increment each 'episode' experienced. Different extensional modes resulted in different fault-increment patterns. The horizontal detachment extensional mode has a 'linear' style of fault-increment pattern, the extensional mode controlled by a listric fault has a 'stepwise' style, and the extensional mode controlled by a ramp-flat boundary fault has a 'stepwise-linear' style. These fault-increment patterns could provide a theoretical method for fault interpretation and fracture prediction in extensional basins.

  8. Integrated Strategic Planning and Analysis Network Increment 4 (ISPAN Inc 4)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Integrated Strategic Planning and Analysis Network Increment 4 (ISPAN Inc 4). Program assigned: April 29, 2009. Acronyms: BY - Base Year; CAE - Component Acquisition Executive; CDD - Capability Development Document; CPD - Capability Production Document.

  9. Estimating the Optimum Number of Options per Item Using an Incremental Option Paradigm.

    Science.gov (United States)

    Trevisan, Michael S.; And Others

    1994-01-01

    The reliabilities of 2-, 3-, 4-, and 5-choice tests were compared through an incremental-option model on a test taken by 154 high school seniors. Creating the test forms incrementally more closely approximates actual test construction. The nonsignificant differences among the option choices support the three-option item. (SLD)

  10. Incremental soil sampling root water uptake, or be great through others

    Science.gov (United States)

    Ray Allmaras pursued several research topics related to residue and tillage. He looked for new tools to help explain soil responses to tillage, including disk permeameters and image analysis. The incremental sampler developed by Pikul and Allmaras allowed small-depth increment, volumetr...

  11. Incremental Beliefs of Ability, Achievement Emotions and Learning of Singapore Students

    Science.gov (United States)

    Luo, Wenshu; Lee, Kerry; Ng, Pak Tee; Ong, Joanne Xiao Wei

    2014-01-01

    This study investigated the relationships of students' incremental beliefs of math ability to their achievement emotions, classroom engagement and math achievement. A sample of 273 secondary students in Singapore was administered measures of incremental beliefs of math ability, math enjoyment, pride, boredom and anxiety, as well as math classroom…

  12. Using Incremental Rehearsal for Building Fluency of Sight Words for Children with a Learning Disability

    Science.gov (United States)

    Aldawish, Abeer

    2017-01-01

    Incremental Rehearsal (IR) is an effective, evidence-based intervention for teaching words that uses high repetition and a high ratio of unknown and known items. The purpose of the present research study was to evaluate the effectiveness of using Incremental Rehearsal to improve the fluency in reading sight words for three elementary students…

  13. Word Decoding Development in Incremental Phonics Instruction in a Transparent Orthography

    Science.gov (United States)

    Schaars, Moniek M.; Segers, Eliane; Verhoeven, Ludo

    2017-01-01

    The present longitudinal study aimed to investigate the development of word decoding skills during incremental phonics instruction in Dutch as a transparent orthography. A representative sample of 973 Dutch children in the first grade (M[subscript age] = 6;1, SD = 0;5) was exposed to incremental subsets of Dutch grapheme-phoneme correspondences…

  14. 26 CFR 1.41-8T - Alternative incremental credit (temporary).

    Science.gov (United States)

    2010-04-01

    26 CFR 1.41-8T - Alternative incremental credit (temporary). Title 26, Internal Revenue, Internal Revenue Service, Department of the Treasury; Income Taxes, Credits Against Tax (revised as of 2010-04-01). § 1.41-8T Alternative incremental credit (temporary). (a) For further...

  15. Dimension reduction for p53 protein recognition by using incremental partial least squares.

    Science.gov (United States)

    Zeng, Xue-Qiang; Li, Guo-Zheng

    2014-06-01

    As an important tumor suppressor protein, p53 is found mutated in many kinds of human cancer, and restoring active p53 would lead to tumor regression. In recent years, more and more data have been extracted from biophysical simulations, so that modelling mutant p53 transcriptional activity suffers from both a huge number of instances and a high feature dimension. Incremental feature extraction is effective for facilitating the analysis of such large-scale data. However, most current incremental feature extraction methods are not suitable for processing big data with high feature dimension. Partial Least Squares (PLS) has been demonstrated to be an effective dimension reduction technique for classification. In this paper, we design a highly efficient and powerful algorithm named Incremental Partial Least Squares (IPLS), which conducts a two-stage extraction process. In the first stage, the PLS target function is adapted to be incremental, updating the historical mean to extract the leading projection direction. In the second stage, the remaining projection directions are calculated through the equivalence between the PLS vectors and the Krylov sequence. We compare IPLS with state-of-the-art incremental feature extraction methods such as Incremental Principal Component Analysis, Incremental Maximum Margin Criterion and Incremental Inter-class Scatter on real p53 protein data. Empirical results show that IPLS performs better than the other methods in terms of balanced classification accuracy.
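
    The NumPy sketch below illustrates only the incremental flavour of this approach and is not the published IPLS algorithm: per-batch running sums are maintained so that the centred cross-covariance X^T y (the leading PLS direction) and the Krylov sequence spanning the remaining directions can be formed without revisiting earlier samples. All class and method names are hypothetical.

      import numpy as np

      class IncrementalPLS1Sketch:
          """Simplified incremental PLS for a single response y (illustration only).

          Running sums are updated batch by batch; the leading PLS direction is
          proportional to the centred cross-covariance X^T y, and the later
          directions span the Krylov sequence {c, A c, A^2 c, ...} with
          A = centred X^T X (the same subspace spanned by the PLS weights).
          """

          def __init__(self, n_features):
              self.n = 0
              self.sx = np.zeros(n_features)                 # sum of x
              self.sy = 0.0                                  # sum of y
              self.sxy = np.zeros(n_features)                # sum of y * x
              self.sxx = np.zeros((n_features, n_features))  # sum of outer(x, x)

          def partial_fit(self, X, y):
              X = np.asarray(X, dtype=float)
              y = np.asarray(y, dtype=float)
              self.n += len(y)
              self.sx += X.sum(axis=0)
              self.sy += y.sum()
              self.sxy += X.T @ y
              self.sxx += X.T @ X
              return self

          def directions(self, k=2):
              mx, my = self.sx / self.n, self.sy / self.n
              c = self.sxy - self.n * mx * my                # centred X^T y
              A = self.sxx - self.n * np.outer(mx, mx)       # centred X^T X
              basis, v = [], c.copy()
              for _ in range(k):                             # Gram-Schmidt on Krylov vectors
                  for b in basis:
                      v = v - (v @ b) * b
                  norm = np.linalg.norm(v)
                  if norm < 1e-12:
                      break
                  basis.append(v / norm)
                  v = A @ basis[-1]
              return np.array(basis)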

  16. Influence of Rotation Increments on Imaging Performance for a Rotatory Dual-Head PET System

    Directory of Open Access Journals (Sweden)

    Fanzhen Meng

    2017-01-01

    Full Text Available For a rotatory dual-head positron emission tomography (PET) system, how to determine the rotation increments is an open problem. In this study, we simulated the characteristics of a rotatory dual-head PET system. The influences of different rotation increments were compared and analyzed. Based on this simulation, the imaging performance of a prototype system was verified. A reconstruction flowchart was proposed based on a precalculated system response matrix (SRM). The SRM fixes the relationships between the voxels and the lines of response (LORs); therefore, we added an interpolation method into the flowchart. Five metrics, including spatial resolution, normalized mean squared error (NMSE), peak signal-to-noise ratio (PSNR), contrast-to-noise ratio (CNR), and structural similarity (SSIM), were applied to assess the reconstructed image quality. The results indicated that the 60° rotation increments with bilinear interpolation had advantages in resolution, PSNR, NMSE, and SSIM. In terms of CNR, the 90° rotation increments were better than the other increments. In addition, the reconstructed images for 90° rotation increments were also flatter than those for 60° increments. Therefore, both the 60° and 90° rotation increments could be used in real experiments, and which one to choose may depend on the application requirements.
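
    As a small illustration of the image-quality metrics listed above (common textbook forms, not necessarily the exact definitions used in the study), the NumPy snippets below compute NMSE, PSNR and CNR against a reference image; SSIM is available as skimage.metrics.structural_similarity.

      import numpy as np

      def nmse(img, ref):
          """Normalized mean squared error relative to a reference image."""
          return np.sum((img - ref) ** 2) / np.sum(ref ** 2)

      def psnr(img, ref):
          """Peak signal-to-noise ratio in dB, using the reference peak value."""
          mse = np.mean((img - ref) ** 2)
          return 10.0 * np.log10(ref.max() ** 2 / mse)

      def cnr(img, roi_mask, bg_mask):
          """Contrast-to-noise ratio between a region of interest and background."""
          return abs(img[roi_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()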

  17. Carry Select Adder Circuit with A Successively Incremented Carry Number Block

    OpenAIRE

    Deepak; Bal Krishan

    2014-01-01

    This paper reports a conditional carry select (CCS) adder circuit with a successively-incremented-carry-number block (SICNB) structure for low-voltage VLSI implementation. Owing to the SICNB structure, the new 16-bit SICNB CCS adder is 37% faster than the conventional conditional carry select adder, based on SPICE results.
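
    To make the carry-select idea concrete, the Python sketch below simulates a generic carry-select adder whose block widths grow along the carry chain; the block sizes (2, 2, 3, 4, 5 for 16 bits) are only a guess at what a successively-incremented-carry-number structure might look like, not the circuit reported in the paper.

      def ripple_add(a_bits, b_bits, carry_in):
          """Ripple-carry add two equal-length little-endian bit lists."""
          out, c = [], carry_in
          for a, b in zip(a_bits, b_bits):
              out.append(a ^ b ^ c)
              c = (a & b) | (c & (a ^ b))
          return out, c

      def carry_select_add(a_bits, b_bits, block_sizes=(2, 2, 3, 4, 5)):
          """Carry-select adder with successively wider blocks (toy model)."""
          assert sum(block_sizes) == len(a_bits) == len(b_bits)
          result, carry, pos = [], 0, 0
          for size in block_sizes:
              a_blk, b_blk = a_bits[pos:pos + size], b_bits[pos:pos + size]
              s0, c0 = ripple_add(a_blk, b_blk, 0)   # precomputed for carry-in = 0
              s1, c1 = ripple_add(a_blk, b_blk, 1)   # precomputed for carry-in = 1
              result += s1 if carry else s0          # mux selects on the real carry
              carry = c1 if carry else c0
              pos += size
          return result, carry

      # Example: 13 + 7 as 16-bit little-endian vectors -> 20.
      to_bits = lambda v: [(v >> i) & 1 for i in range(16)]
      bits, cout = carry_select_add(to_bits(13), to_bits(7))
      value = sum(b << i for i, b in enumerate(bits))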

  19. Incremental frequent tree-structured pattern mining from semi-structured data

    Institute of Scientific and Technical Information of China (English)

    Chen Enhong; Lin Le; Wu Gongqing; Wang Shu

    2005-01-01

    The paper studies the problem of incremental pattern mining from semi-structured data. When a new dataset is added to the original dataset, it is difficult for existing pattern mining algorithms to incrementally update the mined results. To solve this problem, an incremental pattern mining algorithm based on the rightmost expansion technique is proposed here; it improves mining performance by utilizing the original mining results and the information obtained in the previous mining process. To improve efficiency, the algorithm adopts a pruning technique that uses the frequent pattern expansion forest obtained in previous mining processes. Comparative experiments with different volumes of initial data, different volumes of incremental data and different minimum support thresholds demonstrate that the algorithm achieves a large improvement in efficiency compared with the non-incremental pattern mining algorithm.
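
    The Python sketch below captures only the incremental bookkeeping idea on flat itemsets, reusing previously accumulated counts when a new batch of transactions arrives; the paper's algorithm additionally works on tree-structured patterns with rightmost expansion and forest-based pruning, which are not reproduced here.

      from collections import Counter
      from itertools import combinations

      def update_frequent_patterns(new_transactions, min_support,
                                   counts=None, total=0, max_size=2):
          """Incrementally update pattern counts with a new batch (toy itemset
          version): previous counts and the running transaction total are
          reused, so only the new transactions have to be scanned."""
          counts = Counter() if counts is None else counts
          for t in new_transactions:
              items = sorted(set(t))
              for size in range(1, max_size + 1):
                  for pattern in combinations(items, size):
                      counts[pattern] += 1
          total += len(new_transactions)
          frequent = {p: c for p, c in counts.items() if c / total >= min_support}
          return frequent, counts, total

      # First the initial dataset, then an incremental batch reusing the counts.
      freq, counts, total = update_frequent_patterns([["a", "b"], ["a", "c"]], 0.5)
      freq, counts, total = update_frequent_patterns([["a", "b", "c"]], 0.5, counts, total)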

  20. NEW PRINCIPLES OF POWER AND ENERGY RATE OF INCREMENTAL RATE TYPE FOR GENERALIZED CONTINUUM FIELD THEORIES

    Institute of Scientific and Technical Information of China (English)

    DAI Tianmin (戴天民)

    2001-01-01

    The aim of this paper is to establish new principles of power and energy rate of incremental type in generalized continuum mechanics. By combining new principles of virtual velocity and virtual angular velocity, as well as of virtual stress and virtual couple stress, with cross terms of incremental rate type, a new principle of power and energy rate of incremental rate type with cross terms for micropolar continuum field theories is presented. From it, all corresponding equations of motion and boundary conditions, as well as power and energy rate equations of incremental rate type, are derived for micropolar and nonlocal micropolar continua with the help of generalized Piola's theorems, without any additional requirement. Complete results for micromorphic continua could be derived similarly. The results derived in the present paper are believed to be new. They could be used to establish corresponding finite element methods of incremental rate type for generalized continuum mechanics.
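
    For orientation only, the classical (non-incremental) principle of virtual power for a micropolar continuum, with stress sigma_ji, couple stress mu_ji, body force f_i, body couple l_i, surface traction t_i and surface couple m_i, can be written (up to sign and index conventions) as below; the new principles of the paper are incremental-rate extensions, with cross terms, of statements of this kind and are not reproduced here.

      \int_V \left[ \sigma_{ji}\left(\delta v_{i,j} - \epsilon_{jik}\,\delta\omega_k\right)
                    + \mu_{ji}\,\delta\omega_{i,j} \right] \mathrm{d}V
        = \int_V \rho \left( f_i\,\delta v_i + l_i\,\delta\omega_i \right) \mathrm{d}V
        + \int_{\partial V} \left( t_i\,\delta v_i + m_i\,\delta\omega_i \right) \mathrm{d}A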