Energy Technology Data Exchange (ETDEWEB)
Ung Quoc, H
2003-12-15
This research was carried out within the general framework of the study of concrete behaviour. Its objective is the development of a new behaviour model meeting the particular requirements of industrial use. After an analysis of different existing models, a first development concerned models based on the smeared crack theory. A new formulation of the theory made it possible to overcome the stress locking problem. However, the analysis showed that some limits inherent in this approach persisted in spite of this improvement. An analysis of the physical mechanisms of concrete degradation was then carried out and led to the development of the new damage model MODEV. The general formulation of this model is based on thermodynamics and is applied to heterogeneous, brittle materials. The MODEV model considers two damage mechanisms: extension and sliding. It also considers that the relative tangential displacement between microcrack lips is responsible for the irreversibility of strain; the inelastic strain rate thus becomes a function of the damage and of the heterogeneity index of the material. The unilateral effect is taken into account as an elastic hardening or softening process, depending on whether cracks close or reopen. The model is written within the framework of non-standard generalised materials in an incremental tangent formulation and implemented in the general finite element code SYMPHONIE. The model has been validated against several tests from the literature. The second part of this research concerned the development of the CHEVILAB software. This simulation tool, based on the limit analysis approach, permits the evaluation of the ultimate load capacity of anchor bolts. The kinematic approach of limit analysis has been adapted to the anchor problem by considering several specific failure mechanisms. This approach was then validated by comparison with the...
A variational formulation for the incremental homogenization of elasto-plastic composites
Brassart, L.; Stainier, L.; Doghri, I.; Delannay, L.
2011-12-01
This work addresses the micro-macro modeling of composites having elasto-plastic constituents. A new model is proposed to compute the effective stress-strain relation along arbitrary loading paths. The proposed model is based on an incremental variational principle (Ortiz, M., Stainier, L., 1999. The variational formulation of viscoplastic constitutive updates. Comput. Methods Appl. Mech. Eng. 171, 419-444) according to which the local stress-strain relation derives from a single incremental potential at each time step. The effective incremental potential of the composite is then estimated based on a linear comparison composite (LCC) with an effective behavior computed using available schemes in linear elasticity. Algorithmic elegance of the time-integration of J2 elasto-plasticity is exploited in order to define the LCC. In particular, the elastic predictor strain is used explicitly. The method yields a homogenized yield criterion and radial return equation for each phase, as well as a homogenized plastic flow rule. The predictive capabilities of the proposed method are assessed against reference full-field finite element results for several particle-reinforced composites.
Variational formulation for dissipative continua and an incremental J-integral
Rahaman, Md. Masiur; Dhas, Bensingh; Roy, D.; Reddy, J. N.
2018-01-01
Our aim is to rationally formulate a proper variational principle for dissipative (viscoplastic) solids in the presence of inertia forces. As a first step, a consistent linearization of the governing nonlinear partial differential equations (PDEs) is carried out. An additional set of complementary (adjoint) equations is then formed to recover an underlying variational structure for the augmented system of linearized balance laws. This makes it possible to introduce an incremental Lagrangian such that the linearized PDEs, including the complementary equations, become the Euler-Lagrange equations. Continuous groups of symmetries of the linearized PDEs are computed and an analysis is undertaken to identify the variational groups of symmetries of the linearized dissipative system. Application of Noether's theorem leads to the conservation laws (conserved currents) of motion corresponding to the variational symmetries. As a specific outcome, we exploit translational symmetries of the functional in the material space and recover, via Noether's theorem, an incremental J-integral for viscoplastic solids in the presence of inertia forces. Numerical demonstrations are provided through a two-dimensional plane strain numerical simulation of a compact tension specimen of annealed mild steel under dynamic loading.
Complete Tangent Stiffness for eXtended Finite Element Method by including crack growth parameters
DEFF Research Database (Denmark)
Mougaard, J.F.; Poulsen, P.N.; Nielsen, L.O.
2013-01-01
The eXtended Finite Element Method (XFEM) is a useful tool for modeling the growth of discrete cracks in structures made of concrete and other quasi-brittle and brittle materials. However, in a standard application of XFEM, the tangent stiffness is not complete. This is a result of not including the crack geometry parameters, such as the crack length and the crack direction, directly in the virtual work formulation. For efficiency, it is essential to obtain a complete tangent stiffness. A new method is presented in this work to include, in incremental form, the crack growth parameters on equal terms with the degrees of freedom in the FEM equations. The complete tangential stiffness matrix is based on the virtual work together with the constitutive conditions at the crack tip. By introducing the crack growth parameters as direct unknowns, both the equilibrium equations and the crack tip criterion can be handled...
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
Teachers' Conceptions of Tangent Line
Paez Murillo, Rosa Elvira; Vivier, Laurent
2013-01-01
In order to study the conceptions, and their evolutions, of the tangent line to a curve an updating workshop which took place in Mexico was designed for upper secondary school teachers. This workshop was planned using the methodology of cooperative learning, scientific debate and auto reflection (ACODESA) and the conception-knowing-concept model…
Einstein metrics on tangent bundles of spheres
Energy Technology Data Exchange (ETDEWEB)
Dancer, Andrew S [Jesus College, Oxford University, Oxford OX1 3DW (United Kingdom); Strachan, Ian A B [Department of Mathematics, University of Hull, Hull HU6 7RX (United Kingdom)
2002-09-21
We give an elementary treatment of the existence of complete Kaehler-Einstein metrics with nonpositive Einstein constant and underlying manifold diffeomorphic to the tangent bundle of the (n+1)-sphere.
Sharp inequalities for tangent function with applications.
Lv, Hui-Lin; Yang, Zhen-Hang; Luo, Tian-Qi; Zheng, Shen-Zhou
2017-01-01
In the article, we present new bounds for the function $e^{t\cot(t)-1}$ on the interval $(0, \pi/2)$ and find sharp estimates for the sine integral and the Catalan constant based on a new monotonicity criterion for the quotient of power series, which refine the Redheffer and Becker-Stark type inequalities for the tangent function.
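The Becker-Stark inequality that these results refine can be checked numerically. A minimal sketch of its classical form, 8/(π² − 4x²) < tan(x)/x < π²/(π² − 4x²) on (0, π/2) (the sampling grid is an arbitrary choice for illustration):

```python
import math

def becker_stark_bounds(x):
    """Classical Becker-Stark bounds for tan(x)/x on (0, pi/2)."""
    lower = 8.0 / (math.pi**2 - 4.0 * x**2)
    upper = math.pi**2 / (math.pi**2 - 4.0 * x**2)
    return lower, upper

# Sample the open interval (0, pi/2); the double inequality holds strictly.
for i in range(1, 100):
    x = i * (math.pi / 2.0) / 100.0
    lo, up = becker_stark_bounds(x)
    assert lo < math.tan(x) / x < up
```

The lower bound is asymptotically sharp as x approaches π/2, which is why refinements of the type discussed in the paper are of interest.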
On stability of Kummer surfaces' tangent bundle
International Nuclear Information System (INIS)
Bozhkov, Y.D.
1988-10-01
In this paper we propose an explicit approximation of the Kaehler-Einstein-Calabi-Yau metric on the Kummer surfaces, which are manifolds of type K3. It is constructed by gluing 16 pieces of the Eguchi-Hanson metric and 16 pieces of the Euclidean metric. Two estimates on its curvature are proved. Then we prove an estimate on the first eigenvalue of a covariant differential operator of second order. This enables us to apply Taubes' iteration procedure to show that there exists an anti-self-dual connection on the considered Kummer surface. In fact, it is a Hermitian-Einstein connection, from which we conclude that the cotangent bundle of Kummer surfaces is stable and therefore their tangent bundle is stable too. (author). 40 refs
Sharp inequalities for tangent function with applications
Directory of Open Access Journals (Sweden)
Hui-Lin Lv
2017-05-01
In the article, we present new bounds for the function $e^{t\cot(t)-1}$ on the interval $(0, \pi/2)$ and find sharp estimates for the sine integral and the Catalan constant based on a new monotonicity criterion for the quotient of power series, which refine the Redheffer and Becker-Stark type inequalities for the tangent function.
The differential geometry of higher order jets and tangent bundles
International Nuclear Information System (INIS)
De Leon, M.; Rodrigues, P.R.
1985-01-01
This chapter is devoted to the study of the basic geometrical notions required for the development of the main object of the text. Some facts about jet theory are reviewed. A particular case of jet manifolds is considered: the tangent bundle of higher order. It is shown that this jet bundle possesses in a canonical way a certain kind of geometric structure, the so-called almost tangent structure of higher order, which is a generalization of the almost tangent geometry of the tangent bundle. Another important fact examined is the extension of the notion of 'spray' to higher order tangent bundles. (Auth.)
Directory of Open Access Journals (Sweden)
T. Salahuddin
An analysis is carried out to study the influence of heat generation/absorption on a tangent hyperbolic nanofluid near the stagnation point over a stretching cylinder. In this study the developed model of a tangent hyperbolic nanofluid in boundary layer flow with Brownian motion and thermophoresis effects is discussed. The governing partial differential equations for continuity, momentum, temperature and concentration are reduced to ordinary differential form and then solved numerically using the shooting method. The results indicate that the addition of nanoparticles into the tangent hyperbolic fluid yields an increment in the skin friction coefficient and the heat transfer rate at the surface. A comparison of the present results with previously published literature is provided and shows good agreement. It is noticed that the velocity profile reduces with increasing Weissenberg number λ and power-law index n. The skin friction coefficient, local Nusselt number and local Sherwood number increase for large values of the stretching ratio parameter A. Keywords: Stagnation point flow, Tangent hyperbolic nanofluid, Stretching cylinder, Heat generation/absorption, Boundary layer, Shooting method
Tangent hyperbolic circular frequency diverse array radars
Directory of Open Access Journals (Sweden)
Sarah Saeed
2016-03-01
Frequency diverse arrays (FDAs) with uniform frequency offset (UFO) have been in the spotlight of research for the past few years, while little attention has been devoted to non-uniform frequency offsets. This study investigates the tangent hyperbolic (TH) function as a frequency offset selection scheme in circular FDAs (CFDAs). The investigation reveals a three-dimensional single-maximum beampattern, which promises to enhance system detection capability and signal-to-interference-plus-noise ratio. Furthermore, by utilising the versatility of the TH function, a highly configurable array system is achieved, in which the beampatterns of three different FDA configurations can be generated just by adjusting a single function parameter. This study further examines the utility of the proposed TH-CFDA in some practical radar scenarios.
Tangent Lines without Derivatives for Quadratic and Cubic Equations
Carroll, William J.
2009-01-01
In the quadratic equation y = ax² + bx + c, the equation y = bx + c is identified as the equation of the line tangent to the parabola at its y-intercept. This is extended to give a convenient method of graphing tangent lines at any point on the graph of a quadratic or a cubic equation. (Contains 5 figures.)
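The identity described above is easy to verify without calculus notation: at the y-intercept the parabola's slope is b and its value is c. A minimal sketch (the coefficients are arbitrary illustrative values):

```python
# For y = a*x**2 + b*x + c, the tangent at the y-intercept (x = 0) is y = b*x + c:
# the slope there is 2*a*0 + b = b, and both curves pass through (0, c).
def parabola(a, b, c, x):
    return a * x * x + b * x + c

def tangent_at_y_intercept(b, c, x):
    return b * x + c

a, b, c = 2.0, -3.0, 1.0
h = 1e-6
# Numerical (central-difference) slope of the parabola at x = 0 equals b.
slope = (parabola(a, b, c, h) - parabola(a, b, c, -h)) / (2 * h)
assert abs(slope - b) < 1e-6
# Both curves agree at the point of tangency (0, c).
assert parabola(a, b, c, 0.0) == tangent_at_y_intercept(b, c, 0.0)
```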
The Tangent Conoids Family Which Depends on the Ruled Surface
Özyılmaz, E.
1998-01-01
In this study, a new congruence [A••] has been defined by putting a tangent right conoid on each line of a ruled surface (A1(s)) of a line congruence [A]. Then, by considering a special case of the congruence [A••] defined in the previous part, the concepts of tangent congruence and drall, and the relation among the Blaschke vectors of the Blaschke trihedrons having the common line Ao, have been examined for this special case. At the end of this study, the concept of tangent congruence for some spe...
Toward more accurate loss tangent measurements in reentrant cavities
Energy Technology Data Exchange (ETDEWEB)
Moyer, R. D.
1980-05-01
Karpova described an absolute method for measuring the dielectric properties of a solid in a coaxial reentrant cavity. The cavity resonance equation yields very accurate results for dielectric constants; however, only approximate expressions were presented for the loss tangent. This report presents more exact expressions for that quantity and summarizes some experimental results.
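The quantity at issue is the standard loss tangent, the ratio of the imaginary (loss) part of the complex permittivity to its real (storage) part. A minimal sketch (the permittivity value below is an illustrative low-loss number, not taken from the report):

```python
def loss_tangent(eps_complex):
    """Loss tangent tan(delta) = eps''/eps' for permittivity eps = eps' - j*eps''."""
    return -eps_complex.imag / eps_complex.real

eps = 2.1 - 0.0004j        # illustrative value only, roughly a low-loss polymer
tan_delta = loss_tangent(eps)
assert abs(tan_delta - 0.0004 / 2.1) < 1e-15
```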
Recognition of Pitman shorthand text using tangent feature values at ...
Indian Academy of Sciences (India)
Keywords. Pitman shorthand; pitman word (stroke); consonant primitives; vowel markers; tangents; contour traversal; thinning. 1. Introduction. Recognition of Pitman shorthand is a challenging pattern recognition problem because of the variety of pattern shapes involved with it (Chen & Lee 1992; Leedham & Nair 1992;.
Adaptive order search and tangent-weighted trade-off for motion estimation in H.264
Directory of Open Access Journals (Sweden)
Srinivas Bachu
2018-04-01
Motion estimation and compensation play a major role in video compression by reducing the temporal redundancy of the input videos. A variety of block search patterns have been developed for matching blocks with reduced computational complexity, without affecting the visual quality. In this paper, block motion estimation is achieved by integrating the square and hexagonal search patterns with adaptive order. The proposed algorithm is called AOSH (Adaptive Order Square Hexagonal) Search, and it finds the best matching block with a reduced number of search points. The search function is formulated here as a trade-off criterion: a tangent-weighted function is newly developed to evaluate the matching point. The proposed AOSH search algorithm and the tangent-weighted trade-off criterion are applied to the block estimation process to enhance the visual quality and the compression performance. The proposed method is validated using three videos, namely football, garden and tennis. The quantitative performance of the proposed and existing methods is analysed using the Structural Similarity Index (SSIM) and the Peak Signal to Noise Ratio (PSNR). The results show that the proposed method offers better visual quality than the existing methods. Keywords: Block motion estimation, Square search, Hexagon search, H.264, Video coding
Tangent: Automatic Differentiation Using Source Code Transformation in Python
van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan
2017-01-01
Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
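Tangent itself rewrites Python source code; its actual API is not reproduced here. As a sketch of what the derivative of a numeric function computes, here is a minimal forward-mode AD with dual numbers, an alternative AD strategy to source code transformation:

```python
# Minimal forward-mode AD via dual numbers: each value carries its derivative.
# This is NOT Tangent's source-transformation machinery, only a concept sketch.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def grad(f):
    """Return a function computing df/dx at a scalar x (seed derivative = 1)."""
    return lambda x: f(Dual(x, 1.0)).dot

f = lambda x: 3 * x * x + 2 * x + 1
assert grad(f)(2.0) == 14.0   # derivative 6*x + 2 evaluated at x = 2
```

Source code transformation instead generates a new Python function implementing the same derivative, which makes the generated code inspectable and debuggable.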
Experience on tangent delta norms adopted for repaired generator
International Nuclear Information System (INIS)
Misra, N.N.; Sood, D.K.
2005-01-01
Repair techniques for generators are crucial for avoiding prolonged forced outages, and crucial decisions based on sound knowledge and judgement become essential in many cases. The unit under discussion had failed on account of a flashover in the exciter-end overhang windings. The failure resulted in damage to the stator bars as well as to the generator core. The damaged end packets of the stator core were replaced at site. All winding bars were removed from the stator core and the damaged bars were replaced with new bars. The remaining bars were given tangent delta tests to assess their suitability for reuse. An acceptance norm of 0.6% tip-up from 0.2 pu to 0.6 pu of rated stator voltage was adopted. Some of the bars outside the acceptable tangent delta limits were shifted close to the neutral so that the standard tan delta norms were met. This was felt necessary because the lead time for procurement of new bars was more than six months. The norms adopted for tangent delta will be of much use to operating utilities. The unit under discussion was rated 67.5 MW, operating at 50 Hz and 0.85 pf lag, and had logged 66160.46 operating hours before failure. (author)
Plant Leaf Recognition through Local Discriminative Tangent Space Alignment
Directory of Open Access Journals (Sweden)
Chuanlei Zhang
2016-01-01
Manifold-learning-based dimensionality reduction algorithms have received much attention in plant leaf recognition, as they can select a subset of effective and efficient discriminative features from leaf images. In this paper, a dimensionality reduction method based on local discriminative tangent space alignment (LDTSA) is introduced for plant leaf recognition from leaf images. The proposed method combines part optimization with whole alignment and encapsulates the geometric and discriminative information into a local patch. Experiments on two plant leaf databases, the ICL and Swedish leaf datasets, demonstrate the effectiveness and feasibility of the proposed method.
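The local tangent space estimation underlying methods like LDTSA can be sketched with plain local PCA: take a point's nearest neighbours, centre them, and read the tangent directions off an SVD. A minimal sketch (the noisy circle data and parameters are illustrative assumptions, not the paper's leaf experiments):

```python
import numpy as np

def local_tangent_space(points, idx, k=8, dim=1):
    """Estimate the tangent space at points[idx] from the SVD (local PCA)
    of its k nearest neighbours, centred on their mean."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(d)[:k]]
    centred = nbrs - nbrs.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[:dim]               # rows span the estimated tangent space

# Noisy circle in the plane: the tangent at (1, 0) should be ~(0, 1).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)] + 0.001 * rng.standard_normal((200, 2))
basis = local_tangent_space(pts, idx=0)[0]
true_tangent = np.array([0.0, 1.0])
assert abs(abs(basis @ true_tangent) - 1.0) < 0.01
```

LDTSA additionally aligns these local patches into one global embedding and injects class-discriminative information, which the sketch above does not attempt.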
Measuring the loss tangent of polymer materials with atomic force microscopy based methods
International Nuclear Information System (INIS)
Yablon, Dalia G; Grabowski, Jean; Chakraborty, Ishita
2014-01-01
Atomic force microscopy (AFM) quantitatively maps viscoelastic parameters of polymers on the nanoscale by several methods. The loss tangent, the ratio between dissipated and stored energy, was measured on a blend of thermoplastic polymer materials by a dynamic contact method, contact resonance, and by a recently developed loss tangent measurement by amplitude modulation AFM. Contact resonance measurements were performed both with dual AC resonance tracking and band excitation (BE), allowing for a reference-free measurement of the loss tangent. Amplitude modulation AFM was performed where a recent interpretation of the phase signal under certain operating conditions allows for the loss tangent to be calculated. The loss tangent measurements were compared with values expected from time–temperature superposed frequency-dependent dynamical mechanical curves of materials and reveal that the loss tangents determined from the BE contact resonance method provide the most accurate values. (paper)
Quantum information entropies for a squared tangent potential well
Energy Technology Data Exchange (ETDEWEB)
Dong, Shishan [Information and Engineering College, DaLian University, 116622 (China); Sun, Guo-Hua, E-mail: sunghdb@yahoo.com [Centro Universitario Valle de Chalco, Universidad Autónoma del Estado de México, Valle de Chalco Solidaridad, Estado de México, 56615 (Mexico); Dong, Shi-Hai, E-mail: dongsh2@yahoo.com [Departamento de Física, Escuela Superior de Física y Matemáticas, Instituto Politécnico Nacional, Unidad Profesional Adolfo López Mateos, Edificio 9, México D.F. 07738 (Mexico); Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States); Draayer, J.P., E-mail: draayer@sura.org [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States)
2014-01-10
The particle in a symmetrical squared tangent potential well is studied by examining its Shannon information entropy and standard deviations. The position and momentum information entropy densities ρ_s(x), ρ_s(p) and probability densities ρ(x), ρ(p) are illustrated for different potential ranges L and potential depths U. We present analytical position information entropies S_x for the lowest two states. We observe that the Bialynicki-Birula-Mycielski (BBM) inequality for the sum of the position and momentum entropies S_x and S_p is satisfied. Some eigenstates exhibit entropy squeezing in position; this squeezing is compensated by an increase in the momentum entropy. We also note that S_x increases with the potential range L, while it decreases with the potential depth U. The variation of S_p is contrary to that of S_x.
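The BBM bound referred to above is S_x + S_p ≥ 1 + ln π in one dimension (ħ = 1, natural logarithms). The squared tangent well eigenstates are not reproduced here; as a hedged illustration, a Gaussian wave packet, the state that saturates the bound, can be checked numerically:

```python
import numpy as np

# Shannon entropies of a minimum-uncertainty Gaussian wave packet, which
# saturates the Bialynicki-Birula--Mycielski bound S_x + S_p >= 1 + ln(pi).
sigma = 0.7                          # arbitrary position-space width
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]
rho_x = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

sigma_p = 1.0 / (2 * sigma)          # minimum-uncertainty momentum width
rho_p = np.exp(-x**2 / (2 * sigma_p**2)) / np.sqrt(2 * np.pi * sigma_p**2)

def entropy(rho, step):
    """S = -integral of rho*ln(rho), via a Riemann sum on a fine grid."""
    return float(-np.sum(rho * np.log(rho + 1e-300)) * step)

s_x, s_p = entropy(rho_x, dx), entropy(rho_p, dx)
assert s_x + s_p >= 1 + np.log(np.pi) - 1e-6   # BBM bound (saturated here)
```

Squeezing S_x below its Gaussian value forces S_p up by at least as much, which is the compensation mechanism the abstract describes.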
Modified Einstein and Finsler Like Theories on Tangent Lorentz Bundles
Stavrinos, Panayiotis; Vacaru, Sergiu I.
2014-01-01
We study modifications of general relativity, GR, with nonlinear dispersion relations which can be geometrized on tangent Lorentz bundles. Such modified gravity theories, MGTs, can be modeled by gravitational Lagrange density functionals $f(\\mathbf{R},\\mathbf{T},F)$ with generalized/ modified scalar curvature $\\mathbf{R}$, trace of matter field tensors $\\mathbf{T}$ and modified Finsler like generating function $F$. In particular, there are defined extensions of GR with extra dimensional "velocity/ momentum" coordinates. For four dimensional models, we prove that it is possible to decouple and integrate in very general forms the gravitational fields for $f(\\mathbf{R},\\mathbf{T},F)$--modified gravity using nonholonomic 2+2 splitting and nonholonomic Finsler like variables $F$. We study the modified motion and Newtonian limits of massive test particles on nonlinear geodesics approximated with effective extra forces orthogonal to the four-velocity. We compute the constraints on the magnitude of extra-acceleration...
Construction of the Tangent to a Cycloid Proposed by Wallis and Fermat
Directory of Open Access Journals (Sweden)
Loredana Biacino
2017-02-01
In this paper some methods used in the XVII century for the construction of the tangent to a cycloid at a point are presented: the kinematic method employed by Roberval, the classical geometric method used by Wallis, and Fermat's construction as a consequence of his method of tangents. Keywords: Cycloid, Tangent to a curve, Kinematic method of tangents, Fermat's method of tangents.
Incremental Risk Vulnerability
Günter Franke; Richard C. Stapleton; Marti G. Subrahmanyam
2005-01-01
We present a necessary and sufficient condition on an agent's utility function for a simple mean preserving spread in an independent background risk to increase the agent's risk aversion (incremental risk vulnerability). Gollier and Pratt (1996) have shown that declining and convex risk aversion as well as standard risk aversion are sufficient for risk vulnerability. We show that these conditions are also sufficient for incremental risk vulnerability. In addition, we present sufficient condit...
Mosca, Alan; Magoulas, George D
2017-01-01
This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...
Mechanisms of Microwave Loss Tangent in High Performance Dielectric Materials
Liu, Lingtao
The mechanism of loss in high performance microwave dielectrics with complex perovskite structure, including Ba(Zn1/3Ta2/3)O3, Ba(Cd1/3Ta2/3)O3, ZrTiO4-ZnNb2O6, Ba(Zn1/3Nb2/3)O3, and BaTi4O9-BaZn2Ti4O11, has been investigated. We studied materials synthesized in our own lab and from commercial vendors, and correlated the measured loss tangent to the optical, structural, and electrical properties of the material. To accurately and quantitatively determine the microwave loss and Electron Paramagnetic Resonance (EPR) spectra as a function of temperature and magnetic field, we developed parallel plate resonator (PPR) and dielectric resonator (DR) techniques. Our studies found a marked increase in the loss at low temperatures in materials containing transition metals with unpaired d-electrons, a result of resonant spin excitations in isolated atoms (light doping) or exchange-coupled clusters (moderate to high doping); a mechanism that differs from the usual suspects. The loss tangent can be drastically reduced by applying static magnetic fields. Our measurements also show that this mechanism contributes significantly to room temperature loss, but does not dominate. In order to study the electronic structure of these materials, we grew single crystal thin film dielectrics for spectroscopic studies, including angle-resolved photoemission spectroscopy (ARPES) experiments. We synthesized stoichiometric Ba(Cd1/3Ta2/3)O3 [BCT] (100) dielectric thin films on MgO (100) substrates using pulsed laser deposition. Over 99% of the BCT film was found to be epitaxial when grown with an elevated substrate temperature of 635 °C, an elevated oxygen pressure of 53 Pa and a Cd-enriched BCT target with a 1 mol BCT : 1.5 mol CdO composition. Analysis of ultraviolet optical absorption results indicates that BCT has a bandgap of 4.9 eV.
Tangent-Impulse Interception for a Hyperbolic Target
Directory of Open Access Journals (Sweden)
Dongzhe Wang
2014-01-01
The two-body interception problem is studied for an interceptor on an elliptic parking orbit using an upper-bounded tangent impulse to collide with a nonmaneuvering target on a hyperbolic orbit. First, four special initial true anomalies whose velocity vectors are parallel to either of the asymptotes of the target hyperbolic orbit are obtained by the Newton-Raphson method. For different impulse points, the solution-existence ranges of the target true anomaly for any conic transfer are discussed in detail. Then, the time-of-flight equation is solved by the secant method for a single-variable piecewise function of the target true anomaly. Considering the sphere of influence of the Earth and the upper bound on the fuel, all feasible solutions are obtained for different impulse points. Finally, a numerical example is provided to apply the proposed technique for all feasible solutions and the global minimum-time solution with initial coasting time.
Tangent-impulse transfer from elliptic orbit to an excess velocity vector
Directory of Open Access Journals (Sweden)
Zhang Gang
2014-06-01
The two-body orbital transfer problem from an elliptic parking orbit to an excess velocity vector with a tangent impulse is studied. The direction of the impulse is constrained to be aligned with the velocity vector, so speed changes alone are enough to nullify the relative velocity. First, if one tangent impulse is used, the transfer orbit is obtained by solving a single-variable function of the true anomaly of the initial orbit. For an initial circular orbit, the closed-form solution is derived. For an initial elliptic orbit, the discontinuous point is solved, and the initial true anomaly is then obtained by a numerical iterative approach; moreover, an alternative method is proposed to avoid the singularity. There is only one solution for the one-tangent-impulse escape trajectory. Then, based on the one-tangent-impulse solution, the minimum-energy multi-tangent-impulse escape trajectory is obtained by a numerical optimization algorithm, e.g., a genetic algorithm. Finally, several examples are provided to validate the proposed method. The numerical results show that the minimum-energy multi-tangent-impulse escape trajectory coincides with the one-tangent-impulse trajectory.
Directory of Open Access Journals (Sweden)
M. Ridolfi
2014-12-01
We review the main factors driving the calculation of the tangent height of spaceborne limb measurements: the ray-tracing method, the refractive index model and the assumed atmosphere. We find that commonly used ray-tracing and refraction models are very accurate, at least in the mid-infrared. The factor with the largest effect on the tangent height calculation is the assumed atmosphere: using a climatological model in place of the real atmosphere may cause tangent height errors of up to ±200 m. Depending on the adopted retrieval scheme, these errors may have a significant impact on the derived profiles.
Incremental Gaussian Processes
DEFF Research Database (Denmark)
Quiñonero-Candela, Joaquin; Winther, Ole
2002-01-01
In this paper, we consider Tipping's relevance vector machine (RVM) and formalize an incremental training strategy as a variant of the expectation-maximization (EM) algorithm that we call subspace EM. Working with a subset of active basis functions, the sparsity of the RVM solution will ensure...
Incremental Similarity and Turbulence
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole E.; Hedevang, Emil; Schmiegel, Jürgen
This paper discusses the mathematical representation of an empirically observed phenomenon, referred to as Incremental Similarity. We discuss this feature from the viewpoint of stochastic processes and present a variety of non-trivial examples, including those that are of relevance for turbulence...
Energy Technology Data Exchange (ETDEWEB)
Suparmi, A., E-mail: soeparmi@staff.uns.ac.id; Cari, C., E-mail: cari@staff.uns.ac.id; Pratiwi, B. N., E-mail: namakubetanurpratiwi@gmail.com [Physics Department, Faculty of Mathematics and Science, Sebelas Maret University, Jl. Ir. Sutami 36A Kentingan Surakarta 57126 (Indonesia); Deta, U. A. [Physics Department, Faculty of Science and Mathematics Education and Teacher Training, Surabaya State University, Surabaya (Indonesia)
2016-02-08
The analytical solution of the D-dimensional Dirac equation for the hyperbolic tangent potential is investigated using the Nikiforov-Uvarov method. In the case of spin symmetry, the D-dimensional Dirac equation reduces to the D-dimensional Schrödinger equation. The D-dimensional relativistic energy spectra are obtained from the D-dimensional relativistic energy eigenvalue equation using MATLAB. The corresponding D-dimensional radial wave functions are formulated in the form of generalized Jacobi polynomials. The thermodynamic properties of materials are generated from the non-relativistic energy eigenvalues in the classical limit. In the non-relativistic limit, the relativistic energy equation reduces to the non-relativistic energy. The thermal quantities of the system, the partition function and the specific heat, are expressed in terms of the error function and the imaginary error function, which are calculated numerically using MATLAB.
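The thermal quantities mentioned here can be illustrated generically: given any finite set of non-relativistic energy eigenvalues, the partition function and the specific heat follow from standard statistical mechanics. The sketch below uses a two-level toy spectrum as an assumption, not the Dirac-equation eigenvalues of the paper:

```python
import math

def partition_function(energies, beta):
    """Z(beta) = sum_n exp(-beta * E_n) over a finite set of levels."""
    return sum(math.exp(-beta * e) for e in energies)

def specific_heat(energies, beta, k_b=1.0, h=1e-4):
    """C = k_B * beta^2 * d^2 ln Z / d beta^2, via central differences."""
    lnz = lambda b: math.log(partition_function(energies, b))
    d2 = (lnz(beta + h) - 2.0 * lnz(beta) + lnz(beta - h)) / h**2
    return k_b * beta**2 * d2

# Toy two-level system with gap 1: C has the Schottky form
# k_B * (beta/2)^2 / cosh(beta/2)^2, which the numerics should match.
levels = [0.0, 1.0]
c_numeric = specific_heat(levels, beta=1.0)
c_exact = (0.5**2) / math.cosh(0.5)**2
```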
MLS/Aura L1 Orbit/Attitude and Tangent Point Geolocation Data V004
National Aeronautics and Space Administration — ML1OA is the EOS Aura Microwave Limb Sounder (MLS) product containing the level 1 orbit attitude and tangent point geolocation data. The current version is 4.2. Data...
MLS/Aura L1 Orbit/Attitude and Tangent Point Geolocation Data V002
National Aeronautics and Space Administration — ML1OA is the EOS Aura Microwave Limb Sounder (MLS) product containing the level 1 orbit attitude and tangent point geolocation data. The current version is 2.3. Data...
Quantum independent increment processes
Franz, Uwe
2005-01-01
This volume is the first of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald during the period March 9 – 22, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present first volume contains the following lectures: "Lévy Processes in Euclidean Spaces and Groups" by David Applebaum, "Locally Compact Quantum Groups" by Johan Kustermans, "Quantum Stochastic Analysis" by J. Martin Lindsay, and "Dilations, Cocycles and Product Systems" by B.V. Rajarama Bhat.
Quantum independent increment processes
Franz, Uwe
2006-01-01
This is the second of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald in March, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present second volume contains the following lectures: "Random Walks on Finite Quantum Groups" by Uwe Franz and Rolf Gohm, "Quantum Markov Processes and Applications in Physics" by Burkhard Kümmerer, "Classical and Free Infinite Divisibility and Lévy Processes" by Ole E. Barndorff-Nielsen and Steen Thorbjornsen, and "Lévy Processes on Quantum Groups and Dual Groups" by Uwe Franz.
Efficient incremental relaying
Fareed, Muhammad Mehboob
2013-07-01
We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers from deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify-and-forward and decode-and-forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.
Directory of Open Access Journals (Sweden)
Romanas Karkauskas
2011-04-01
Full Text Available Analytical expressions for the finite element tangent stiffness matrix of geometrically nonlinear structures are not fully presented in publications; usually only the small-displacement stiffness matrices are given. To solve various problems of structural analysis or design, and to determine the real deflected shape of a structure, a fully described analytical expression of the tangent matrix is necessary. This paper presents a technique for generating the tangent stiffness matrix from the stationarity conditions of the total potential energy of a discrete body, considering a geometrically nonlinear 2D frame element and taking into account inter-element interaction forces only. The derivative of the internal force vector-function with respect to the nodal displacements is the tangent stiffness matrix. The analytical expressions, in terms of nodal displacements, of the matrices forming the 2D frame element tangent stiffness matrix are presented in the article. The suggested methodology has been verified by symbolic computations in MATLAB, and the analytical expression of the stiffness matrix has been obtained. Article in Lithuanian
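The defining relation here, tangent stiffness = derivative of the internal force vector with respect to nodal displacements, is commonly checked by numerical differentiation. The sketch below does this for an invented two-DOF nonlinear internal force, not the paper's 2D frame element:

```python
import numpy as np

def internal_force(u):
    """Toy nonlinear internal force vector; it stands in for a frame
    element's force (the paper derives the analytic expressions)."""
    u1, u2 = u
    return np.array([
        10.0 * u1 + 2.0 * u1**3 + u1 * u2,   # geometric-type coupling terms
        8.0 * u2 + 0.5 * u1**2,
    ])

def tangent_stiffness_fd(f, u, h=1e-6):
    """K_t[i, j] = dF_i/du_j by central differences -- a standard check
    against an analytically derived tangent stiffness matrix."""
    n = len(u)
    k = np.zeros((n, n))
    for j in range(n):
        du = np.zeros(n)
        du[j] = h
        k[:, j] = (f(u + du) - f(u - du)) / (2.0 * h)
    return k

u0 = np.array([0.1, 0.2])
K = tangent_stiffness_fd(internal_force, u0)
# Hand-differentiated tangent of the toy force at u0, for comparison:
K_exact = np.array([[10.0 + 6.0 * 0.1**2 + 0.2, 0.1],
                    [0.1,                        8.0]])
```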
Korkmaz, Alper; Hepson, Ozlem Ersoy
2018-01-01
In this study, a hyperbolic tangent (tanh) ansatz solution is investigated for the conformable time fractional Zakharov-Kuznetsov equation (fZKE) in 3D space. Transformation of the fZKE to an ODE by a compatible wave transformation is the first step of the methodology. It is assumed that a solution exists in the form of a positive integer power of the hyperbolic tangent function. Determining the power of the predicted solution is followed by some algebra to find the relations among the other parameters in the solution. The final step is transforming the solution back into the original variables.
International Nuclear Information System (INIS)
Caratu, G.; Marmo, G.; Simoni, A.; Vitale, B.; Zaccaria, F.
1976-01-01
The notion of a dynamical system in classical mechanics is built starting from the trajectories on the configuration space. Classes of equivalent vector fields on tangent and cotangent bundles are obtained. This leads to a detailed re-examination of the meaning of the Hamiltonian and Lagrangian functions.
Sigirli, Deniz; Ercan, Ilker
2015-09-01
Most of the studies in medical and biological sciences are related to the examination of the geometrical properties of an organ or organism. Growth and allometry studies are important for investigating the effects of diseases and environmental factors on the structure of the organ or organism. Thus, statistical shape analysis has recently become more important in the medical and biological sciences. Shape is all the geometrical information that remains when location, scale and rotational effects are removed from an object. Allometry, which is a relationship between size and shape, plays an important role in the development of statistical shape analysis. The aim of the present study was to compare two different models for allometry which include tangent coordinates and principal component scores of tangent coordinates as dependent variables in multivariate regression analysis. The results of the simulation study showed that the model constructed by taking tangent coordinates as dependent variables is more appropriate than the model constructed by taking principal component scores of tangent coordinates as dependent variables, for all sample sizes.
Biza, Irene
2011-01-01
In this paper I report a lengthy episode from a teaching experiment in which 15 Year-12 Greek students negotiated their definitions of tangent line to a function graph. The experiment was designed for the purpose of introducing students to the notion of derivative and to the general case of tangent to a function graph. Its design was based on…
[Tangent sign - a reliable predictor of risk for tendon re-rupture in rotator cuff repair].
Smíd, P; Hart, R; Puskeiler, M
2014-01-01
Repair techniques for rotator cuff injury are currently well advanced. However, the risk of re-rupture, particularly when severe damage to the tendons has been repaired, is still high. The causes of failure can be the extent of injury, a repair done on a highly degenerated tendon with diminished viability, or ischaemic damage to the tendon tissue resulting from suture material. The aim of the study was to ascertain the reliability of the tangent sign, a commonly used indicator of the degree of supraspinatus muscle atrophy, in the prediction of risk for tendon re-rupture in the post-operative period. In 2011, torn rotator cuff tendons were repaired by the method of double-row suture in 37 patients. The surgery was done by an open technique using the deltoid-splitting approach. A pre-operative magnetic resonance image (MRI) of the shoulder was obtained in all patients and each was assessed by a competent independent radiology specialist with a focus on the extent of the tendon lesion and the tangent sign. At 2-year follow-up, the results of repeated MRI were evaluated with regard to the state of the repaired tendons and, if a re-tear was found, its relation to the original suture and its extent in the sagittal plane were determined. The pre- and post-operative MRI findings were compared to find out how the presence or absence of a tangent sign before surgery relates to the incidence of supraspinatus tendon re-rupture at 2 years after surgery. The results were statistically analysed using Student's t-test and the Chi-square test. Of the 37 shoulders, in the pre-operative period, a tangent sign was identified in 21 (56.8%). The average size of a rotator cuff tear was 29.3 mm for the whole group. For the shoulders with no tangent sign, the average value was 21.8 mm; for those with a tangent sign present, it was 39.6 mm. At 2-year follow-up, no tear was found in the patients in whom preoperative MRI showed no tangent sign, while re-tears were recorded in 18 of 21
Tangent unit-vector fields: Nonabelian homotopy invariants and the Dirichlet energy
Majumdar, Apala
2009-10-01
Let O be a closed geodesic polygon in S2. Maps from O into S2 are said to satisfy tangent boundary conditions if the edges of O are mapped into the geodesics which contain them. Taking O to be an octant of S2, we evaluate the infimum Dirichlet energy, E (H), for continuous tangent maps of arbitrary homotopy type H. The expression for E (H) involves a topological invariant - the spelling length - associated with the (nonabelian) fundamental group of the n-times punctured two-sphere, π1 (S2 - {s1, ..., sn}, *). These results have applications for the theoretical modelling of nematic liquid crystal devices. To cite this article: A. Majumdar et al., C. R. Acad. Sci. Paris, Ser. I 347 (2009). © 2009 Académie des sciences.
Blocking sets of tangent and external lines to a hyperbolic quadric in ...
Indian Academy of Sciences (India)
Abstract. Let H be a fixed hyperbolic quadric in the three-dimensional projective space PG(3,q), where q is a power of 2. Let E (respectively, T) denote the set of all lines of PG(3,q) which are external (respectively, tangent) to H. We characterize the minimum size blocking sets of PG(3,q) with respect to each of the line sets T ...
On kinematical minimum principles for rates and increments in plasticity
International Nuclear Information System (INIS)
Zouain, N.
1984-01-01
The optimization approach for elastoplastic analysis is discussed, showing that some minimum principles related to numerical methods can be derived by means of duality and penalization procedures. Three minimum principles for velocity and plastic multiplier rate fields are presented in the framework of perfect plasticity. The first one is the classical Greenberg formulation. The second one, due to Capurso, is developed here with a different motivation, and modified by penalization of constraints so as to arrive at a third principle for rates. The counterparts of these optimization formulations in terms of discrete increments of displacements and plastic multipliers are discussed. The third of these minimum principles for finite increments is recognized to be closely related to Maier's formulation of holonomic plasticity. (Author) [pt
MHD flow of tangent hyperbolic fluid over a stretching cylinder: Using Keller box method
Energy Technology Data Exchange (ETDEWEB)
Malik, M.Y.; Salahuddin, T., E-mail: taimoor_salahuddin@yahoo.com; Hussain, Arif; Bilal, S.
2015-12-01
A numerical solution of the MHD flow of a tangent hyperbolic fluid model over a stretching cylinder is obtained in this paper. The governing boundary layer equation of the tangent hyperbolic fluid is converted into an ordinary differential equation using similarity transformations, which is then solved numerically by applying the implicit finite difference Keller box method. The effects of various parameters on the velocity profiles are analyzed and discussed in detail. The values of the skin friction coefficient are tabulated and plotted in order to understand the flow behavior near the surface of the cylinder. For validity of the model a comparison of the present work with the literature has been made. - Highlights: • A non-Newtonian (tangent hyperbolic) fluid is considered using the boundary layer approximation. • MHD effects are assumed. • The highly non-linear equations are solved numerically by the Keller box method. • The Keller box method is one of the best computational methods capable of solving different engineering problems in fluid mechanics. • The Keller box method is an implicit method and has truncation error of order h².
Algebraic formulation of higher gauge theory
Zucchini, Roberto
2017-06-01
In this paper, we present a purely algebraic formulation of higher gauge theory and gauged sigma models based on the abstract theory of graded commutative algebras and their morphisms. The formulation incorporates naturally Becchi-Rouet-Stora-Tyutin (BRST) symmetry and is also suitable for Alexandrov-Kontsevich-Schwarz-Zaboronsky (AKSZ) type constructions. It is also shown that for a full-fledged Batalin-Vilkovisky formulation including ghost degrees of freedom, higher gauge and gauged sigma model fields must be viewed as internal smooth functions on the shifted tangent bundle of a space-time manifold valued in a shifted L∞-algebroid encoding symmetry. The relationship to other formulations where the L∞-algebroid arises from a higher Lie groupoid by Lie differentiation is highlighted.
Enabling Incremental Query Re-Optimization.
Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau
2016-01-01
As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
Incremental Visualizer for Visible Objects
DEFF Research Database (Denmark)
Bukauskas, Linas; Bøhlen, Michael Hanspeter
This paper discusses the integration of a database back-end and a visualizer front-end into one tightly coupled system. The main aim, which we achieve, is to reduce the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We...
Incremental data compression -extended abstract-
Jeuring, J.T.
1992-01-01
Data may be compressed using textual substitution. Textual substitution identifies repeated substrings and replaces some or all substrings by pointers to another copy. We construct an incremental algorithm for a specific textual substitution method: coding a text with respect to a dictionary. With
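Coding a text with respect to a dictionary can be sketched as follows. This shows only the static substitution step with an invented toy dictionary; the paper's contribution is making such coding incremental as the text changes:

```python
def dict_compress(text, dictionary):
    """Code `text` against a fixed dictionary: greedily replace the
    longest matching dictionary entry by a pointer (its index) and
    emit unmatched characters as literals."""
    # Try longer entries first so the greedy match is maximal.
    entries = sorted(enumerate(dictionary), key=lambda kv: -len(kv[1]))
    out, i = [], 0
    while i < len(text):
        for idx, word in entries:
            if text.startswith(word, i):
                out.append(("ptr", idx))
                i += len(word)
                break
        else:
            out.append(("lit", text[i]))
            i += 1
    return out

def dict_decompress(code, dictionary):
    """Invert dict_compress: expand pointers, copy literals."""
    return "".join(dictionary[v] if tag == "ptr" else v for tag, v in code)

dictionary = ["ab", "abc", "ba"]
code = dict_compress("abcabba", dictionary)
round_trip = dict_decompress(code, dictionary)
```

Here "abcabba" codes to three pointers ("abc", "ab", "ba") and decompresses back exactly.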
Ullah, Zakir; Zaman, Gul
2017-11-01
In this paper, we studied the MHD two-dimensional flow of an incompressible tangent hyperbolic fluid and heat transfer towards a stretching sheet with velocity and thermal slip. Lie group analysis is used to develop new similarity transformations; using these similarity transformations, the governing nonlinear partial differential equations are reduced to a system of coupled nonlinear ordinary differential equations. The obtained system is solved numerically by applying the shooting method. Effects of pertinent parameters on the velocity and temperature profiles, skin friction and local Nusselt number are graphically presented and discussed. A comparison between the present and previous results is shown in special cases.
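The shooting method used here reduces a boundary value problem to repeated initial value problems plus a root search on the unknown initial slope. The sketch below applies it to a simple linear model BVP (y'' = -y, y(0) = 0, y(1) = 1), not the coupled boundary-layer equations of the paper:

```python
import math

def rk4(f, y0, t0, t1, n=1000):
    """Integrate y' = f(t, y) (y a list of components) with classic RK4."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Model BVP: y'' = -y with y(0) = 0, y(1) = 1; exact slope y'(0) = 1/sin(1).
def ode(t, y):            # state y = [y, y']
    return [y[1], -y[0]]

def shoot(slope):
    """End-point mismatch y(1) - 1 for a guessed initial slope."""
    return rk4(ode, [0.0, slope], 0.0, 1.0)[0] - 1.0

# Bisection on the initial slope: the "shooting" iteration.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
slope = 0.5 * (lo + hi)
```

The recovered slope converges to 1/sin(1) ≈ 1.1884; for the fluid problem the same loop would act on the reduced similarity ODEs instead.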
Directory of Open Access Journals (Sweden)
Zakir Ullah
2017-11-01
Full Text Available In this paper, we studied the MHD two-dimensional flow of an incompressible tangent hyperbolic fluid and heat transfer towards a stretching sheet with velocity and thermal slip. Lie group analysis is used to develop new similarity transformations; using these similarity transformations, the governing nonlinear partial differential equations are reduced to a system of coupled nonlinear ordinary differential equations. The obtained system is solved numerically by applying the shooting method. Effects of pertinent parameters on the velocity and temperature profiles, skin friction and local Nusselt number are graphically presented and discussed. A comparison between the present and previous results is shown in special cases. Keywords: Applied mathematics, Mechanics
Maximal violation of Bell's inequalities for algebras of observables in tangent spacetime regions
International Nuclear Information System (INIS)
Summers, S.J.; Werner, R.
1988-01-01
We continue our study of Bell's inequalities and quantum field theory. It is shown in considerably broader generality than in our previous work that algebras of local observables corresponding to complementary wedge regions maximally violate Bell's inequality in all normal states. Pairs of commuting von Neumann algebras that maximally violate Bell's inequalities in all normal states are characterized. Algebras of local observables corresponding to tangent double cones are shown to maximally violate Bell's inequalities in all normal states in dilatation-invariant theories, in free quantum field models, and in a class of interacting models. Further, it is proven that such algebras are not split in any theory with an ultraviolet scaling limit
Study on Remote Monitoring System of Crossing and Spanning Tangent Tower
Chen, Da-bing; Zhang, Nai-long; Zhang, Meng-ge; Wang, Ze-hua; Zhang, Yan
2017-05-01
In order to grasp the vibration state of overhead transmission lines and ensure the operational security of the transmission line, a remote monitoring system for crossing and spanning tangent towers was studied. By use of this system, the displacement, velocity and acceleration of the tower, together with local weather data, are collected automatically and displayed on a computer at the remote monitoring centre through a wireless network, realizing real-time collection and transmission of vibration signals. The application results show that the system is excellent in reliability and accuracy. The system can be used for remote monitoring of transmission towers of UHV power transmission lines and in large spanning areas.
Evolution of cooperation driven by incremental learning
Li, Pei; Duan, Haibin
2015-02-01
It has been shown that the details of microscopic rules in structured populations can have a crucial impact on the ultimate outcome in evolutionary games. So alternative formulations of strategies and their revision processes exploring how strategies are actually adopted and spread within the interaction network need to be studied. In the present work, we formulate the strategy update rule as an incremental learning process, wherein knowledge is refreshed according to one's own experience learned from the past (self-learning) and that gained from social interaction (social-learning). More precisely, we propose a continuous version of strategy update rules, by introducing the willingness to cooperate W, to better capture the flexibility of decision making behavior. Importantly, the newly gained knowledge including self-learning and social learning is weighted by the parameter ω, establishing a strategy update rule involving innovative element. Moreover, we quantify the macroscopic features of the emerging patterns to inspect the underlying mechanisms of the evolutionary process using six cluster characteristics. In order to further support our results, we examine the time evolution course for these characteristics. Our results might provide insights for understanding cooperative behaviors and have several important implications for understanding how individuals adjust their strategies under real-life conditions.
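One incremental update of the willingness to cooperate W, blending self-learning and social-learning with the weight ω, might be sketched as below. The specific blend and learning rate are illustrative assumptions, not the paper's equations:

```python
def update_willingness(w_old, self_signal, social_signal, omega, rate=0.5):
    """One incremental update of an agent's willingness to cooperate W.

    New knowledge blends the agent's own payoff experience (self_signal)
    with what it observed from neighbours (social_signal), weighted by
    omega as in the abstract; the exact blend and the learning rate here
    are illustrative choices.
    """
    new_knowledge = (1.0 - omega) * self_signal + omega * social_signal
    w = (1.0 - rate) * w_old + rate * new_knowledge
    return min(1.0, max(0.0, w))   # keep W a probability-like weight in [0, 1]

# Pure social learning (omega = 1) pulls W toward the neighbours' signal.
w = 0.2
for _ in range(50):
    w = update_willingness(w, self_signal=0.0, social_signal=1.0, omega=1.0)
```

With ω = 1 the update is a contraction toward the social signal, so W converges to 1; intermediate ω values trade off the two knowledge sources.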
Standard test method for Young's modulus, tangent modulus, and chord modulus
American Society for Testing and Materials. Philadelphia
2004-01-01
1.1 This test method covers the determination of Young's modulus, tangent modulus, and chord modulus of structural materials. This test method is limited to materials in which and to temperatures and stresses at which creep is negligible compared to the strain produced immediately upon loading and to elastic behavior. 1.2 Because of experimental problems associated with the establishment of the origin of the stress-strain curve described in 8.1, the determination of the initial tangent modulus (that is, the slope of the stress-strain curve at the origin) and the secant modulus are outside the scope of this test method. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory require...
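The distinction between the moduli can be illustrated on synthetic stress-strain data: the tangent modulus is the local slope of the curve at a point, the chord modulus the slope between two specified points. The curve and material constants below are invented for illustration, not ASTM data:

```python
import numpy as np

# Synthetic stress-strain curve sigma = E*eps - a*eps^2 (mild softening),
# standing in for measured data; E and a are arbitrary assumptions.
E, a = 200.0e9, 5.0e12
eps = np.linspace(0.0, 0.01, 1001)
sigma = E * eps - a * eps**2

# Tangent modulus: local slope d(sigma)/d(eps) at a chosen strain.
tangent = np.gradient(sigma, eps)
t_at = tangent[500]                 # tangent modulus at eps = 0.005

# Chord modulus: slope of the straight line between two specified points.
i, j = 200, 800                     # eps = 0.002 and eps = 0.008
chord = (sigma[j] - sigma[i]) / (eps[j] - eps[i])
```

For this quadratic curve both evaluate to E - a(ε₁ + ε₂) = 150 GPa at the chosen points; Young's modulus would instead be the slope fitted over the initial linear portion.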
Kent, James; Holdaway, Daniel
2015-01-01
A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
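The nonlinearity introduced by limiters can be demonstrated with a superposition test: a linear scheme satisfies f(q1 + q2) = f(q1) + f(q2), while a limited scheme does not, which is exactly what complicates building a tangent linear model. The sketch below compares first-order upwind with a simple minmod-limited scheme; both are illustrative stand-ins, not the GEOS-5 schemes:

```python
import numpy as np

def upwind_step(q, c=0.5):
    """One step of first-order upwind advection (linear in q), periodic grid."""
    return q - c * (q - np.roll(q, 1))

def minmod(a, b):
    """Minmod limiter: zero at extrema, otherwise the smaller slope."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(q, c=0.5):
    """One step of a MUSCL-type scheme with a minmod limiter (nonlinear in q)."""
    slope = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
    face = q + 0.5 * (1.0 - c) * slope      # upwind-biased face reconstruction
    return q - c * (face - np.roll(face, 1))

rng = np.random.default_rng(0)
q1, q2 = rng.random(64), rng.random(64)

# Superposition test: exact (to round-off) for the linear scheme,
# violated by the limiter.
lin_err = np.max(np.abs(upwind_step(q1 + q2) - (upwind_step(q1) + upwind_step(q2))))
non_err = np.max(np.abs(limited_step(q1 + q2) - (limited_step(q1) + limited_step(q2))))
```

A tangent linear code for the limited scheme must therefore either differentiate through (or around) the limiter, or, as tested here for GEOS-5, swap in a linear scheme such as a third-order one.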
Hossain, Mokarram; Steinmann, Paul
2013-06-01
Rubber-like materials can deform largely and nonlinearly upon loading, and they return to the initial configuration when the load is removed. Such rubber elasticity is achieved due to very flexible long-chain molecules and a three-dimensional network structure that is formed via cross-linking or entanglements between molecules. Over the years, to model the mechanical behavior of such randomly oriented microstructures, several phenomenological and micromechanically motivated network models for nearly incompressible hyperelastic polymeric materials have been proposed in the literature. To implement these models for polymeric material (undoubtedly with widespread engineering applications) in the finite element framework for solving a boundary value problem, one would require two important ingredients, i.e., the stress tensor and the consistent fourth-order tangent operator, where the latter is the result of linearization of the former. In our previous work, 14 such material models are reviewed by deriving the accurate stress tensors and tangent operators from a group of phenomenological and micromechanical models at large deformations. The current contribution will supplement some further important models that were not included in the previous work. For comparison of all selected models in reproducing the well-known Treloar data, the analytical expressions for the three homogeneous deformation modes, i.e., uniaxial tension, equibiaxial tension, and pure shear, have been derived and the performances of the models are analyzed.
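The stress/tangent pairing described here can be shown in the simplest possible setting: the uniaxial Cauchy stress of an incompressible neo-Hookean solid and its exact linearization, checked against finite differences. This 1D scalar sketch is a common sanity check for hand-derived tangents, not one of the reviewed models' full fourth-order tensor forms:

```python
MU = 1.0  # shear modulus (arbitrary units, illustrative)

def cauchy_stress(lam):
    """Uniaxial Cauchy stress of an incompressible neo-Hookean solid:
    sigma = mu * (lam^2 - 1/lam), the 1D analogue of the stress tensor."""
    return MU * (lam**2 - 1.0 / lam)

def tangent_analytic(lam):
    """Consistent tangent: exact linearization d(sigma)/d(lambda)."""
    return MU * (2.0 * lam + 1.0 / lam**2)

def tangent_numeric(lam, h=1e-6):
    """Finite-difference check of the linearization -- the standard way to
    validate a hand-derived tangent operator before using it in FE code."""
    return (cauchy_stress(lam + h) - cauchy_stress(lam - h)) / (2.0 * h)

lam = 1.3
err = abs(tangent_analytic(lam) - tangent_numeric(lam))
```

In a real FE implementation the same agreement must hold between the fourth-order tangent tensor and a perturbation of the full stress tensor.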
Stagnation point flow of hyperbolic tangent fluid with Soret-Dufour effects
Directory of Open Access Journals (Sweden)
Tasawar Hayat
Full Text Available Combined effects of Soret (thermal-diffusion and Dufour (diffusion-thermo in MHD stagnation point flow of tangent hyperbolic fluid by a stretching sheet are discussed in the present article. The laws of conservation of mass, momentum, energy and concentration are employed to develop the mathematical model of physical phenomenon. Suitable transformations lead to convert the nonlinear partial differential equations into the ordinary differential equations. The series solutions of boundary layer equations along with boundary conditions are obtained. Convergence of the developed series solutions is discussed via plots and numerical values. The behaviors of different physical parameters on the velocity, temperature and concentration fields are plotted and analyzed. Numerical values of skin friction coefficient, local Nusselt and Sherwood numbers are computed and analyzed. It is found that Dufour and Soret numbers result in the enhancement of temperature and concentration distributions, respectively. Furthermore a comparison is presented with the previous published results in a limiting way to justify the present solutions. Keywords: Magnetohydrodynamics (MHD, Stagnation point flow, Tangent hyperbolic fluid, Soret-Dufour effects
Robot training through incremental learning
Karlsen, Robert E.; Hunt, Shawn; Witus, Gary
2011-05-01
The real world is too complex and variable to directly program an autonomous ground robot's control system to respond to the inputs from its environmental sensors such as LIDAR and video. The need for learning incrementally, discarding prior data, is important because of the vast amount of data that can be generated by these sensors. This is crucial because the system needs to generate and update its internal models in real-time. There should be little difference between the training and execution phases; the system should be continually learning, or engaged in "life-long learning". This paper explores research into incremental learning systems such as nearest neighbor, Bayesian classifiers, and fuzzy c-means clustering.
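An incremental nearest-neighbour learner that discards prior data once a bounded buffer is full might look like the sketch below; the capacity-bounded buffer and the toy 2D samples are illustrative assumptions, not the paper's system:

```python
from collections import deque
import math

class IncrementalNN:
    """1-nearest-neighbour classifier that learns one sample at a time and
    keeps only the most recent `capacity` samples, so old data is discarded
    as required for life-long learning on streaming sensor data."""

    def __init__(self, capacity=100):
        self.buffer = deque(maxlen=capacity)   # (features, label) pairs

    def learn(self, x, label):
        # Training and execution are interleaved: learning is just an append.
        self.buffer.append((tuple(x), label))

    def predict(self, x):
        nearest = min(self.buffer, key=lambda s: math.dist(s[0], x))
        return nearest[1]

model = IncrementalNN(capacity=4)
for x, y in [((0, 0), "left"), ((0, 1), "left"),
             ((5, 5), "right"), ((5, 4), "right")]:
    model.learn(x, y)
pred = model.predict((4.5, 4.5))
```

Once the buffer is full, each new sample evicts the oldest one, keeping memory bounded no matter how long the robot runs.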
Laboratory measurement of dielectric constant and loss tangent of Indian rock samples
Directory of Open Access Journals (Sweden)
MAHANDRA P. SINGH
1980-06-01
Full Text Available The data on dielectric properties of Indian rock samples are very sparsely reported in the literature. Keeping this in view, we have carried out laboratory measurements of the dielectric constant and loss tangents of some samples of sandstone, quartzite, limestone, marble, dolerite and basalt. An LCR bridge and a Q-meter have been employed for measurements in the frequency range 10 - 10' Hz. The results have been presented in the form of the variation of these parameters with frequency, which shows characteristic features. Observations on the effect of the moisture content of these samples on the dielectric properties have also been reported. Further, correlation of petrographic studies of the rock samples with the measured variations of dielectric properties has been attempted.
Incremental Trust in Grid Computing
DEFF Research Database (Denmark)
Brinkløv, Michael Hvalsøe; Sharp, Robin
2007-01-01
This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and reputation (discrete combination and combination based on fuzzy logic) are considered. The various combinations of these methods are evaluated from the point of view of their ability to respond to changes in behaviour and the ease with which suitable parameters for the algorithms can be found in the context of Grid computing systems.
Incremental deformation: A literature review
Directory of Open Access Journals (Sweden)
Nasulea Daniel
2017-01-01
Full Text Available Nowadays customer requirements are permanently changing, and accordingly the tendency in modern industry is to implement flexible manufacturing processes. In recent decades, metal forming has gained the attention of researchers and considerable changes have occurred. Because, for a small number of parts, the conventional metal forming processes are expensive and time-consuming in terms of design and manufacturing preparation, manufacturers and researchers have become interested in flexible processes. One of the most investigated flexible processes in metal forming is incremental sheet forming (ISF). ISF is an advanced flexible manufacturing process which allows complex 3D products to be manufactured without expensive dedicated tools. In most cases an ISF process requires only the following: a simple tool, a fixing device for the sheet metal blank and a universal CNC machine. Using this process, axisymmetric parts can be manufactured, usually on a CNC lathe, but also complex asymmetric parts using CNC milling machines, robots or dedicated equipment. This paper aims to present the current status of incremental sheet forming technologies in terms of process parameters and their influences, wall thickness distribution, springback effect, formability, surface quality and the current main research directions.
Directory of Open Access Journals (Sweden)
Eugenia Kalnay
2012-10-01
Full Text Available We introduce a new formulation of the ensemble forecast sensitivity developed by Liu and Kalnay, with a small correction from Li et al. The new formulation, like the original one, is tested on the simple Lorenz 40-variable model. We find that, except for short-range forecasts, the use of localization in the analysis, necessary in the ensemble Kalman filter (EnKF) when the number of ensemble members is much smaller than the model's degrees of freedom, has a negative impact on the accuracy of the sensitivity. This is because the impact of an observation during the analysis (i.e. the analysis increment associated with the observation) is transported by the flow during the integration, and this is ignored when the ensemble sensitivity uses a fixed localization. To address this problem, we introduce two approaches that could be adapted to evolve the localization during the estimation of forecast sensitivity to the observations. The first one estimates the non-linear evolution of the initial localization but is computationally expensive. The second one moves the localization with a constant estimation of the group velocity. Both methods succeed in improving the ensemble estimations for longer forecasts. Overall, the adjoint and ensemble forecast impact estimations give similarly accurate results for short-range forecasts, except that the new formulation gives an estimation of the fraction of observations that improve the forecast closer to that obtained by data denial (Observing System Experiments). For longer-range forecasts, they both deteriorate for different reasons. The adjoint sensitivity becomes noisy due to the forecast non-linearities not captured in the linear tangent model and the adjoint. The ensemble sensitivity becomes less accurate due to the use of a fixed localization, a problem that could be ameliorated with an evolving adaptive localization. Advantages of the new formulation include it being simpler than the original formulation and
Directory of Open Access Journals (Sweden)
Cristinel Popescu
2015-09-01
Full Text Available The paper aims to identify how to determine the dielectric loss angle tangent of electric transformers in transformer stations. The authors conducted a case study on the dielectric formed between the high-voltage and medium-voltage windings of a transformer rated at 40 MVA.
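For context, the dissipation factor tan δ that such a case study measures can be computed from a simple parallel R-C model of the insulation. The numerical values below are hypothetical, not measurements from the 40 MVA transformer:

```python
import math

def loss_tangent_parallel_rc(r_ohm, c_farad, freq_hz):
    """Dielectric dissipation factor tan(delta) for a parallel R-C model
    of the insulation: tan(delta) = I_R / I_C = 1 / (omega * R * C)."""
    omega = 2 * math.pi * freq_hz
    return 1.0 / (omega * r_ohm * c_farad)

# Hypothetical insulation values at power frequency:
tan_delta = loss_tangent_parallel_rc(r_ohm=2.0e9, c_farad=3.0e-9, freq_hz=50.0)
delta_deg = math.degrees(math.atan(tan_delta))  # the loss angle itself
```

A rising tan δ over time is the usual indicator of insulation ageing or moisture ingress.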
Jaubert, Jean-Noël; Privat, Romain
2014-01-01
The double-tangent construction of coexisting phases is an elegant approach to visualize all the multiphase binary systems that satisfy the equality of chemical potentials and to select the stable state. In this paper, we show how to perform the double-tangent construction of coexisting phases for binary systems modeled with the gamma-phi…
Incremental Observer Relative Data Extraction
DEFF Research Database (Denmark)
Bukauskas, Linas; Bøhlen, Michael Hanspeter
2004-01-01
The visual exploration of large databases calls for a tight coupling of database and visualization systems. Current visualization systems typically fetch all the data and organize it in a scene tree that is then used to render the visible data. For immersive data explorations in a Cave...... or a Panorama, where an observer is in data space, this approach is far from optimal. A more scalable approach is to make the database system observer-aware and to restrict the communication between the database and visualization systems to the relevant data. In this paper VR-tree, an extension of the R......-tree, is used to index visibility ranges of objects. We introduce a new operator for incremental Observer Relative data Extraction (iORDE). We propose the Volatile Access STructure (VAST), a lightweight main memory structure that is created on the fly and is maintained during visual data explorations. VAST...
Local electron tomography using angular variations of surface tangents: Stomo version 2
Petersen, T. C.; Ringer, S. P.
2012-03-01
In a recent publication, we investigated the prospect of measuring the outer three-dimensional (3D) shapes of nano-scale atom probe specimens from tilt-series of images collected in the transmission electron microscope. For this purpose alone, an algorithm and simplified reconstruction theory were developed to circumvent issues that arise in commercial "back-projection" computations in this context. In our approach, we give up the difficult task of computing the complete 3D continuum structure and instead seek only the 3D morphology of internal and external scattering interfaces. These interfaces can be described as embedded 2D surfaces projected onto each image in a tilt series. Curves and other features in the images are interpreted as inscribed sets of tangent lines, which intersect the scattering interfaces at unknown locations along the direction of the incident electron beam. Smooth angular variations of the tangent line abscissa are used to compute the surface tangent intersections and hence the 3D morphology as a "point cloud". We have published the explicit details of our alternative algorithm along with the source code entitled "stomo_version_1". For this work, we have further modified the code to efficiently handle rectangular image sets, perform much faster tangent-line "edge detection" and smoother tilt-axis image alignment using simple bi-linear interpolation. We have also adapted the algorithm to detect tangent lines as "ridges", based upon 2nd order partial derivatives of the image intensity, the magnitude and orientation of which are described by a Hessian matrix. Ridges are more appropriate descriptors for tangent-line curves in phase contrast images outlined by Fresnel fringes or absorption contrast data from fine-scale objects. Improved accuracy, efficiency and speed of "stomo_version_2" are demonstrated in this paper using both high resolution electron tomography data of a nano-sized atom probe tip and simulated absorption-contrast images
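The Hessian-based ridge measure described above can be illustrated with a minimal NumPy sketch: a finite-difference Hessian and closed-form eigenvalues of the 2x2 matrix per pixel. This is a simplified stand-in for the detector in stomo_version_2, not its actual code:

```python
import numpy as np

def hessian_ridges(image):
    """Per-pixel ridge measure from the 2x2 Hessian of image intensity.
    A bright ridge has a strongly negative second derivative across it,
    so the most negative Hessian eigenvalue serves as the ridge measure;
    its eigenvector gives the across-ridge orientation."""
    gy, gx = np.gradient(image.astype(float))     # first derivatives
    gyy, gyx = np.gradient(gy)                    # second derivatives
    gxy, gxx = np.gradient(gx)
    # Closed-form eigenvalues of [[gxx, gxy], [gxy, gyy]].
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    lam_min = tr / 2 - disc                       # across-ridge curvature
    return np.maximum(-lam_min, 0.0)

# A horizontal bright line should produce a strong response along it.
img = np.zeros((32, 32))
img[16, :] = 1.0
resp = hessian_ridges(img)
```

Thresholding `resp` (and suppressing non-maxima along the eigenvector direction) yields the tangent-line curves used for the surface reconstruction.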
On excursion increments in heartbeat dynamics
International Nuclear Information System (INIS)
Guzmán-Vargas, L.; Reyes-Ramírez, I.; Hernández-Pérez, R.
2013-01-01
We study correlation properties of excursion increments of heartbeat time series from healthy subjects and heart failure patients. We construct the excursion time based on the original heartbeat time series, representing the time employed by the walker to return to the local mean value. Next, detrended fluctuation analysis and the fractal dimension method are applied to the magnitude and sign of the increments between successive excursion times for the two groups. Our results show that for both groups the magnitude series of excursion increments display long-range correlations with similar correlation exponents, indicating that large (small) increments (decrements) are more likely to be followed by large (small) increments (decrements). For the sign sequences of both groups, we find that increments are short-range anti-correlated, which is noticeable under heart failure conditions
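The magnitude-and-sign decomposition applied above can be sketched in a few lines; the excursion durations below are toy values, not heartbeat data:

```python
import numpy as np

def magnitude_and_sign_series(excursion_times):
    """Decompose increments of an excursion-time series into the magnitude
    and sign sequences that are then analysed separately (e.g. with DFA
    and the fractal dimension method)."""
    increments = np.diff(np.asarray(excursion_times, dtype=float))
    return np.abs(increments), np.sign(increments)

# Toy excursion durations (hypothetical, in beats):
tau = [3, 5, 4, 4, 9, 2]
mag, sgn = magnitude_and_sign_series(tau)
# mag -> [2, 1, 0, 5, 7]; sgn -> [1, -1, 0, 1, -1]
```

Long-range correlation in `mag` and anti-correlation in `sgn` are then quantified by their respective scaling exponents.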
International Nuclear Information System (INIS)
Fong, Andrew; Bromley, Regina; Beat, Mardi; Vien, Din; Dineley, Jude; Morgan, Graeme
2009-01-01
Full text: Prior to introducing intensity modulated radiotherapy (IMRT) for whole breast radiotherapy (WBRT) into our department we undertook a comparison of the dose parameters of several IMRT techniques and standard wedged tangents (SWT). Our aim was to improve the dose distribution to the breast and to decrease the dose to organs at risk (OAR): heart, lung and contralateral breast (Contra Br). Treatment plans for 20 women (10 right-sided and 10 left-sided) previously treated with SWT for WBRT were used to compare (a) SWT; (b) electronic compensators IMRT (E-IMRT); (c) tangential beam IMRT (T-IMRT); (d) coplanar multi-field IMRT (CP-IMRT); and (e) non-coplanar multi-field IMRT (NCP-IMRT). Plans for the breast were compared for (i) dose homogeneity (DH); (ii) conformity index (CI); (iii) mean dose; (iv) maximum dose; (v) minimum dose; and dose to OAR were calculated for (vi) heart; (vii) lung and (viii) Contra Br. Compared with SWT, all plans except CP-IMRT gave improvement in at least two of the seven parameters evaluated. T-IMRT and NCP-IMRT resulted in significant improvement in all parameters except DH and both gave significant reduction in doses to OAR. As initial evaluation suggests that NCP-IMRT is likely to be too time-consuming to introduce on a large scale, T-IMRT is the preferred technique for WBRT in our department.
Directory of Open Access Journals (Sweden)
M. Ali Abbas
2016-03-01
Full Text Available In the present analysis, three-dimensional peristaltic flow of a hyperbolic tangent fluid in a non-uniform channel has been investigated. We have considered that the pressure is uniform over the whole cross section and the inertial effects have been neglected. For this purpose we consider laminar flow under the long wavelength (λ→∞) and creeping flow (Re→0) approximations. The resulting highly nonlinear equations are solved with the help of the homotopy perturbation method. The influence of various physical parameters of interest is demonstrated graphically for wall tension, mass characterization, damping nature of the wall, wall rigidity, wall elastance, aspect ratio and the Weissenberg number. In the present investigation we found that the magnitude of the velocity is maximum in the center of the channel whereas it is minimum near the walls. Streamlines are also drawn to discuss the trapping mechanism for all the physical parameters. A comparison has also been presented between Newtonian and non-Newtonian fluids.
Building Program Models Incrementally from Informal Descriptions.
1979-10-01
AD-A086 50, Stanford Univ., CA, Dept. of Computer Science (F/G 9/2). Report SCI.ICS.U.79.2: Building Program Models Incrementally from Informal Descriptions, by Brian P. McCune, technical report, October 1979. Research sponsored by the Defense Advanced Research Projects Agency.
Kuruvilla, K.; Kuruvilla, Anju
2010-01-01
Writing a "Diagnostic Formulation" is a skill expected of candidates in the post-graduate examinations in psychiatry in most universities in India. However, there is ambiguity regarding what the term means and how it should be written. This article is an attempt to provide some guidelines on this topic.
Directory of Open Access Journals (Sweden)
Borbon Martin de
2017-02-01
Full Text Available The goal of this article is to provide a construction and classification, in the case of two complex dimensions, of the possible tangent cones at points of limit spaces of non-collapsed sequences of Kähler-Einstein metrics with cone singularities. The proofs and constructions are completely elementary, nevertheless they have an intrinsic beauty. In a few words; tangent cones correspond to spherical metrics with cone singularities in the projective line by means of the Kähler quotient construction with respect to the S¹-action generated by the Reeb vector field, except in the irregular case ℂβ₁×ℂβ₂ with β₂/β₁ ∉ ℚ.
Automatic incrementalization of Prolog based static analyses
DEFF Research Database (Denmark)
Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan
2007-01-01
Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically incrementalizing a broad range of static analyses.
Directory of Open Access Journals (Sweden)
Poruba Z.
2009-06-01
Full Text Available For the numerical solution of elasto-plastic problems using the Newton-Raphson method in the global equilibrium equation, it is necessary to determine the tangent modulus at each integration point. To obtain the quadratic convergence of the Newton-Raphson method it is convenient to use the so-called algorithmic tangent modulus, which is consistent with the integration scheme used. For simpler models, for example the Chaboche combined hardening model, it is possible to determine it analytically. For more robust macroscopic models it is in many cases necessary to use an approximation approach. This possibility is presented in this contribution for the radial return method on the Chaboche model. An example solved in the software Ansys corresponds to a line contact problem with the assumption of Coulomb friction. The study finally shows that the number of iterations of the N-R method is higher when the continuum tangent modulus is used, and many times higher with the modified N-R (initial stiffness) method.
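To illustrate an algorithmic tangent modulus consistent with the integration scheme, here is a radial-return step for the simplest possible case, 1D plasticity with linear isotropic hardening. This is a deliberate simplification of the Chaboche model discussed above, and all material constants are made up:

```python
import numpy as np

def radial_return_1d(eps_new, eps_p, alpha, E=200e3, H=10e3, sy0=250.0):
    """One-dimensional radial return with linear isotropic hardening.
    Returns stress, updated plastic strain and hardening variable, and
    the algorithmic (consistent) tangent modulus. For linear hardening
    the consistent tangent has the closed form E*H/(E+H); the payoff of
    the consistent linearization appears with nonlinear hardening."""
    sig_tr = E * (eps_new - eps_p)              # elastic predictor
    f = abs(sig_tr) - (sy0 + H * alpha)         # trial yield function
    if f <= 0.0:
        return sig_tr, eps_p, alpha, E          # elastic step: tangent = E
    dgamma = f / (E + H)                        # plastic corrector
    s = np.sign(sig_tr)
    sig = sig_tr - E * dgamma * s               # return to the yield surface
    return sig, eps_p + dgamma * s, alpha + dgamma, E * H / (E + H)

sig, ep, a, ct = radial_return_1d(eps_new=0.002, eps_p=0.0, alpha=0.0)
```

In a Newton-Raphson loop, `ct` is assembled into the stiffness matrix in place of the continuum tangent, restoring quadratic convergence.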
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
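The kind of nonlinearity that flux limiters introduce can be checked directly with a superposition test. The schemes below are generic textbook examples (first-order upwind and a minmod-limited second-order scheme), not the GEOS-5 implementations:

```python
import numpy as np

def upwind_step(u, c=0.5):
    """First-order upwind update (linear in u), periodic domain, CFL c > 0."""
    return u - c * (u - np.roll(u, 1))

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(u, c=0.5):
    """Second-order update with a minmod slope limiter (nonlinear in u)."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    face = u + 0.5 * (1 - c) * slope            # reconstructed upwind face value
    return u - c * (face - np.roll(face, 1))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(64), rng.standard_normal(64)

# Superposition: L(a + b) should equal L(a) + L(b) for a linear scheme.
lin_err = np.max(np.abs(upwind_step(a + b) - (upwind_step(a) + upwind_step(b))))
nl_err = np.max(np.abs(limited_step(a + b) - (limited_step(a) + limited_step(b))))
```

The upwind scheme satisfies superposition to rounding error, while the limited scheme fails it by a large margin, which is exactly why such schemes misbehave when used inside a tangent linear or adjoint model.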
Incremental Support Vector Machine Framework for Visual Sensor Networks
Directory of Open Access Journals (Sweden)
Yuichi Motai
2007-01-01
Full Text Available Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of the least squares SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.
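One generic way to make LS-SVM training incremental, in the spirit of the framework above (though not necessarily the authors' exact scheme), is to grow the inverse of the LS-SVM dual system matrix by bordering when a new sample arrives, instead of refactorizing from scratch:

```python
import numpy as np

def rbf(x, z, s=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * s * s))

def lssvm_matrix(X, gamma=10.0):
    """LS-SVM dual system matrix [[0, 1^T], [1, K + I/gamma]]; solving it
    against [0; y] gives the bias b and the support values alpha."""
    n = len(X)
    K = np.array([[rbf(X[i], X[j]) for j in range(n)] for i in range(n)])
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    return A

def grow_inverse(Ainv, col, d):
    """Given A^-1, return the inverse of [[A, col], [col^T, d]] via the
    Schur complement: an O(n^2) update instead of an O(n^3) refactorization."""
    u = Ainv @ col
    s = d - col @ u                      # scalar Schur complement
    n = Ainv.shape[0]
    B = np.empty((n + 1, n + 1))
    B[:n, :n] = Ainv + np.outer(u, u) / s
    B[:n, n] = B[n, :n] = -u / s
    B[n, n] = 1.0 / s
    return B

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 2))
A_small = lssvm_matrix(X[:4])
A_big = lssvm_matrix(X)              # same system with the 5th sample appended
B = grow_inverse(np.linalg.inv(A_small), A_big[:-1, -1], A_big[-1, -1])
```

The updated inverse `B` matches a direct inversion of the enlarged system, so each incoming sample costs one matrix-vector product rather than a full solve.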
Kent, James; Holdaway, Daniel
2015-04-01
Data assimilation is one of the most common inverse problems encountered in geophysical models. One of the leading techniques used for data assimilation in numerical weather prediction is four dimensional variational data assimilation (4DVAR). In 4DVAR the tangent linear and adjoint versions of the nonlinear model are used to perform a minimization with time dependent observations. In order for the minimization to perform well requires a certain degree of linearity in both the underlying equations and numerical methods used to solve them. Advection is central to the underlying equations used for numerical weather prediction, as well as many other geophysical models. From the advection of momentum, temperature and moisture to passive tracers such as smoke from wildfires, accurate transport is paramount. Over recent decades much effort has been directed toward the development of positive definite, non-oscillatory, mass conserving advection schemes. These schemes are capable of giving excellent representation of transport, but by definition introduce nonlinearity into equations that are otherwise quite linear. One such example is the flux limited piecewise parabolic method (PPM) used in NASA's Goddard Earth Observing System version 5 (GEOS-5), which can perform very poorly when linearized. With a view to an optimal representation of transport in the linear versions of atmospheric models and 4DVAR we analyse the performance of a number of different linear and nonlinear advection schemes. The schemes are analysed using a one dimensional case study, a passive tracer in GEOS-5 experiment and using the full linearized version of GEOS-5. Using the three studies it is shown that higher order linear schemes provide the best representation of the transport of perturbations and sensitivities. In certain situations the nonlinear schemes give the best performance but are subject to issues. It is also shown that many of the desirable properties of the nonlinear schemes are
An efficient CT-simulation procedure for breast treatment using tangent beams
International Nuclear Information System (INIS)
Lu, H.-M.; Cheng Pan; Lee, Chin; Svensson, Goran; Harris, Jay
1997-01-01
Purpose: Breast treatment planning using CT-simulation provides a number of advantages, but presents several unique problems. One concern is the ability to evaluate coverage of the external target volume, since CT scanners cannot provide field light projections on skin. Another is whether treatment portals can be marked on a patient as in a regular simulation, so that the usual level of setup accuracy can be achieved without additional effort at the treatment unit. Finally, the planning procedure must be performed with efficiency, so that it could be used routinely for most or all patients. To address these issues, we report our CT-simulation procedure for breast treatment using tangent beams, which includes, all in one session, CT data acquisition, field determination, external target volume evaluation, and the marking of treatment portals on the patient. Methods and Materials: A General Electric CT scanner and a virtual simulation software package 'Advantage-Sim' were used to perform the simulations. We have developed two tools to assist the simulation process. One is a digitization system consisting of multimedia software interacting with a sonic digitizer that can capture the coordinates of a point in space with a standard deviation of 1.4 mm. Given the planned beam geometry, the system can establish a virtual beam projection on the patient's body in real space, so that for any digitized point on the skin, its position relative to the field can be calculated and shown in a beam's eye view display. With convenient audio and visual signals, the system allows one to see if any skin area of concern is included in the field with sufficient margin, or to rapidly locate field borders or marking points by cruising the digitizer probe on patient skin. The accuracy of the system has been studied by using a breast phantom. The other is a breast planning software tool which augments the virtual simulation software to speed up the generation of tangent beam pairs
Power calculation of linear and angular incremental encoders
Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.
2016-04-01
Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method for the power calculation of linear and angular incremental photoelectric encoders, to find the optimum parameters for their components, such as light emitters, photodetectors, linear and angular scales, optical components, etc. It analyzes methods and devices that permit high resolutions in the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam, usually formed by a condenser lens, passes through the measuring unit and changes its value depending on the movement of the scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photodetector block for processing in the electronic block. The starting point for the power calculation is therefore the required value of the optical signal at the input of the photodetector block that can be reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.
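Downstream of the photodetector block, such encoders typically produce two phase-shifted square-wave channels that are turned into position increments by quadrature decoding. The sketch below is a generic 4x-counting decoder, not the power-calculation method of the article:

```python
# Transition table for an A/B quadrature signal: +1 for one step forward,
# -1 for one step back. Illegal double transitions fall through to 0 here;
# a real decoder would flag them as errors.
QUAD = {
    (0, 0): {(0, 1): +1, (1, 0): -1},
    (0, 1): {(1, 1): +1, (0, 0): -1},
    (1, 1): {(1, 0): +1, (0, 1): -1},
    (1, 0): {(0, 0): +1, (1, 1): -1},
}

def decode(samples):
    """Accumulate signed increments from a sequence of (A, B) samples."""
    pos = 0
    for prev, cur in zip(samples, samples[1:]):
        pos += QUAD[prev].get(cur, 0)
    return pos

fwd = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]  # one full signal cycle forward
# decode(fwd) counts 4 increments per cycle, i.e. 4x interpolation
```

Counting all four edges per cycle is what quadruples the effective resolution of the scale pitch.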
Screening of mucoadhesive vaginal gel formulations
Directory of Open Access Journals (Sweden)
Ana Ochoa Andrade
2014-12-01
Full Text Available Rational design of vaginal drug delivery formulations requires special attention to vehicle properties that optimize vaginal coating and retention. The aim of the present work was to perform a screening of mucoadhesive vaginal gels formulated with carbomer or carrageenan in binary combination with a second polymer (carbomer, guar or xanthan gum). The gels were characterised using in vitro adhesion, spreadability and leakage potential studies, as well as rheological measurements (stress and frequency sweep tests) and the effect of dilution with simulated vaginal fluid (SVF) on spreadability. Results were analysed using analysis of variance and multiple factor analysis. The combination of polymers enhanced adhesion of both primary gelling agents, carbomer and carrageenan. From the rheological point of view all formulations presented a similar behaviour, prevalently elastic and characterised by loss tangent values well below 1. No correlation between rheological and adhesion behaviour was found. Carbomer and carrageenan gels containing the highest percentage of xanthan gum displayed good in vitro mucoadhesion and spreadability, minimal leakage potential and high resistance to dilution. The positive results obtained with carrageenan-xanthan gum-based gels can encourage the use of natural biocompatible adjuvants in the composition of vaginal products, a formulation field that is currently under the synthetic domain.
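The loss tangent criterion used above is simply the ratio of loss to storage modulus from the oscillatory sweeps. The moduli below are hypothetical illustrative values, not measurements from these gels:

```python
def loss_tangent(g_prime, g_double_prime):
    """tan(delta) = G'' / G'; values well below 1 indicate the prevalently
    elastic (gel-like) behaviour reported above."""
    return g_double_prime / g_prime

# Hypothetical moduli (Pa) in the linear viscoelastic region:
tan_delta = loss_tangent(g_prime=120.0, g_double_prime=18.0)
elastic_dominant = tan_delta < 1.0
```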
Efficient Incremental Checkpointing of Java Programs
DEFF Research Database (Denmark)
Lawall, Julia Laetitia; Muller, Gilles
2000-01-01
This paper investigates the optimization of language-level checkpointing of Java programs. First, we describe how to systematically associate incremental checkpoints with Java classes. While being safe, the genericness of this solution induces substantial execution overhead. Second, to solve...
VT Tax Increment Financing (TIF) Districts
Vermont Center for Geographic Information — Tax Increment Financing (TIF) Districts is established by a municipality around an area that requires public infrastructure to encourage public and private real...
Jin, Bonan; Xu, Xiaosu; Zhang, Tao
2018-03-04
Finding the position of a radiative source based on time-difference-of-arrival (TDOA) measurements from spatially separated receivers has been widely applied in sonar, radar, mobile communications and sensor networks. For the nonlinear model in the process of positioning, Taylor series and other novel methods have been proposed. The idea of cone constraint provides a new way of solving this problem. However, these approaches do not always perform well and deviate from the Cramer-Rao Lower Bound (CRLB) when the source lies at the array edge, the measurement noise is large, or the initial position estimate is biased. This paper presents a weighted-least-squares (WLS) algorithm with a cone tangent plane constraint for hyperbolic positioning. The method adds the range between the source and the reference sensor as an extra dimension, establishing a space-range frame. Unlike other cone formulations, this paper sets the reference sensor as the apex and finds the optimal source estimate on the cone. WLS is used to obtain the optimal result from the measurement plane equations, a vertical constraint and a cone constraint. The cone constraint equation is linearized by a tangent plane. The method iterates, updating the tangent plane that approximates the true value on the cone. The proposed algorithm was simulated and verified under various conditions of different source positions and noise levels, and several state-of-the-art algorithms were compared in these simulations. The results show that the algorithm is accurate and robust in poor external environments.
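The space-range idea above, treating the source-to-reference range as an extra unknown, already admits a simple linear least-squares sketch. This omits the paper's cone tangent-plane iteration and weighting, and the geometry below is made up:

```python
import numpy as np

def tdoa_ls(sensors, d):
    """Linear least-squares TDOA fix in the space-range frame: with the
    reference sensor at the origin, the unknowns are (x, y, r0), where r0
    is the source-to-reference range (the added dimension). Expanding
    |p - s_i|^2 = (r0 + d_i)^2 and using |p|^2 = r0^2 gives, per sensor,
        2 s_i . p + 2 d_i r0 = |s_i|^2 - d_i^2,
    which is linear in (p, r0).
    sensors: (m, 2) non-reference sensor positions; d: (m,) measured
    range differences r_i - r0."""
    A = np.column_stack([2 * sensors, 2 * d])
    b = np.sum(sensors ** 2, axis=1) - d ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:2], sol[2]

# Noiseless synthetic check: reference at the origin, three other sensors.
src = np.array([3.0, 4.0])                       # true source, so r0 = 5
S = np.array([[10.0, 0.0], [0.0, 10.0], [8.0, 8.0]])
d = np.linalg.norm(S - src, axis=1) - np.linalg.norm(src)
est, r0_est = tdoa_ls(S, d)
```

With noise, the residuals become correlated and the cone constraint is violated, which is what the WLS and tangent-plane machinery of the paper addresses.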
Energy Technology Data Exchange (ETDEWEB)
Afsar, M.N.; Chi, H. [Tufts Univ., Medford, MA (United States)
1994-07-01
Single crystal high resistivity (11,000 ohm-cm) boron-doped silicon was found to exhibit the lowest absorption loss at room temperature (25 C) in the entire millimeter wave region. At 140 GHz its loss tangent value is as low as 40 microradians. The study of the dielectric properties of silicon as a function of resistivity reveals that the low-frequency free-carrier absorption present in all silicon (and other semiconductors) vanishes with increasing resistivity. It is then possible to use such silicon in substrate applications in microwave integrated circuitry. The unique broadband dispersive Fourier transform spectroscopic technique was utilized for these measurements.
Directory of Open Access Journals (Sweden)
Alvarez Gabriel
2006-12-01
Full Text Available In this paper, we propose a method to attenuate diffracted multiples with an apex-shifted tangent-squared Radon transform in angle domain common image gathers (ADCIGs). Usually, where diffracted multiples are a problem, the wave-field propagation is complex and the moveout of primaries and multiples in data space is irregular. The method handles the complexity of the wave-field propagation by wave-equation migration, provided that migration velocities are reasonably accurate. As a result, the moveout of the multiples is well behaved in the ADCIGs. For 2D data, the apex-shifted tangent-squared Radon transform maps the 2D space image into a 3D space-cube model whose dimensions are depth, curvature and apex-shift distance.
Well-corrected primaries map to or near the zero curvature plane and specularly-reflected multiples map to or near the zero apex-shift plane. Diffracted multiples map elsewhere in the cube according to their curvature and apex-shift distance. Thus, specularly reflected as well as diffracted multiples can be attenuated simultaneously. This approach is illustrated with a segment of a 2D seismic line over a large salt body in the Gulf of Mexico. It is shown that ignoring the apex shift compromises the attenuation of the diffracted multiples, whereas the approach proposed attenuates both the specularly-reflected and the diffracted multiples without compromising the primaries.
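The mapping into the depth/curvature/apex-shift cube can be sketched as an adjoint (stacking) transform. The parameterization below, summing along curves t(x)² = τ² + q·(x − x_apex)², is an assumption for illustration and may differ from the paper's exact operator:

```python
import numpy as np

def apex_shifted_radon(data, t, x, taus, qs, apexes):
    """Adjoint (stacking) sketch of an apex-shifted tangent-squared Radon
    transform: for each (tau, q, x_apex), data are summed along the curve
    t(x)^2 = tau^2 + q * (x - x_apex)^2, filling the 3D model cube whose
    axes are intercept (depth), curvature and apex-shift distance."""
    dt = t[1] - t[0]
    cube = np.zeros((len(taus), len(qs), len(apexes)))
    cols = np.arange(len(x))
    for i, tau in enumerate(taus):
        for j, q in enumerate(qs):
            for k, xa in enumerate(apexes):
                idx = np.round((np.sqrt(tau**2 + q * (x - xa)**2) - t[0]) / dt).astype(int)
                ok = (idx >= 0) & (idx < len(t))
                cube[i, j, k] = data[idx[ok], cols[ok]].sum()
    return cube

# Synthetic gather containing a single apex-shifted event at
# tau = 1.0, q = 0.5, apex = 0.2 (all hypothetical values).
t = np.linspace(0.0, 2.0, 101)
x = np.linspace(-1.0, 1.0, 21)
data = np.zeros((len(t), len(x)))
ev = np.round(np.sqrt(1.0 + 0.5 * (x - 0.2)**2) / (t[1] - t[0])).astype(int)
data[ev, np.arange(len(x))] = 1.0
cube = apex_shifted_radon(data, t, x,
                          taus=[0.8, 1.0, 1.2], qs=[0.25, 0.5], apexes=[-0.2, 0.2])
peak = np.unravel_index(np.argmax(cube), cube.shape)
```

Muting away from the zero apex-shift plane (and the zero curvature plane) before an inverse transform is what separates the diffracted multiples from specular events.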
Liu, Haofei; Sun, Wei
2016-01-01
In this study, we evaluated computational efficiency of finite element (FE) simulations when a numerical approximation method was used to obtain the tangent moduli. A fiber-reinforced hyperelastic material model for nearly incompressible soft tissues was implemented for 3D solid elements using both the approximation method and the closed-form analytical method, and validated by comparing the components of the tangent modulus tensor (also referred to as the material Jacobian) between the two methods. The computational efficiency of the approximation method was evaluated with different perturbation parameters and approximation schemes, and quantified by the number of iteration steps and CPU time required to complete these simulations. From the simulation results, it can be seen that the overall accuracy of the approximation method is improved by adopting the central difference approximation scheme compared to the forward Euler approximation scheme. For small-scale simulations with about 10,000 DOFs, the approximation schemes could reduce the CPU time substantially compared to the closed-form solution, due to the fact that fewer calculation steps are needed at each integration point. However, for a large-scale simulation with about 300,000 DOFs, the advantages of the approximation schemes diminish because the factorization of the stiffness matrix will dominate the solution time. Overall, as it is material model independent, the approximation method simplifies the FE implementation of a complex constitutive model with comparable accuracy and computational efficiency to the closed-form solution, which makes it attractive in FE simulations with complex material models.
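The forward versus central difference comparison above can be reproduced for a toy 1D "material" with a known analytical tangent. The constitutive law and constants below are hypothetical, not the fiber-reinforced model of the study:

```python
import numpy as np

def stress_1d(strain):
    """Hypothetical nonlinear 1D stress-strain law, used only to
    illustrate numerical approximation of the tangent modulus."""
    return 50.0 * np.tanh(3.0 * strain) + 10.0 * strain

def tangent_forward(eps, h=1e-6):
    """Forward Euler approximation of d(sigma)/d(eps): O(h) error."""
    return (stress_1d(eps + h) - stress_1d(eps)) / h

def tangent_central(eps, h=1e-6):
    """Central difference approximation: O(h^2) error for one extra
    stress evaluation per perturbed component."""
    return (stress_1d(eps + h) - stress_1d(eps - h)) / (2 * h)

eps = 0.1
exact = 150.0 / np.cosh(0.3) ** 2 + 10.0   # analytical tangent at eps
err_fwd = abs(tangent_forward(eps) - exact)
err_cen = abs(tangent_central(eps) - exact)
# the central scheme is markedly more accurate, as reported above
```

In a full FE implementation the same perturbations are applied component-wise to the deformation gradient to fill in the material Jacobian tensor.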
International Nuclear Information System (INIS)
Veness, M.J.; Delaney, G.; Berry, M.
1999-01-01
The breast is a complex anatomical structure where achieving a homogeneous dose distribution with radiation treatment is difficult. Despite obvious similarities in the approach to such treatment (using tangents) there is variation in the process of simulation, planning and treatment between radiation oncologists. Previous Australasian studies in the treatment of lung cancer, prostate cancer and Hodgkin's disease highlighted considerable variation in many areas of treatment. As part of a multicentre breast phantom study involving 10 radiation oncology departments throughout New South Wales (NSW) and the Australian Capital Territory (ACT), a 22-question survey was distributed. The aim of the survey was to assess the extent of variation in the approach to the simulation, planning and treatment of early breast cancer using tangents. Responses from 10 different radiation oncology departments revealed variation in most areas of the survey. There is no reason to assume similar variations do not occur Australasia wide. Studies involving overseas radiation oncologists also reveal a wide variation in treating early breast cancer. The consequences of such variations remain unclear. Copyright (1999) Blackwell Science Pty Ltd
International Nuclear Information System (INIS)
Krasin, Matthew; McCall, Anne; King, Stephanie; Olson, Mary; Emami, Bahman
2000-01-01
Purpose: A thorough dose-volume analysis of a standard tangential radiation technique has not been published. We evaluated the adequacy of a tangential radiation technique in delivering dose to the breast and regional lymphatics, as well as dose delivered to underlying critical structures. Methods and Materials: Treatment plans of 25 consecutive women with breast cancer undergoing lumpectomy and adjuvant breast radiotherapy were studied. Patients underwent two-dimensional (2D) treatment planning followed by treatment with standard breast tangents. These 2D plans were reconstructed without modification on our three-dimensional treatment planning system and analyzed with regard to dose-volume parameters. Results: Adequate coverage of the breast (defined as 95% of the target receiving at least 95% of the prescribed dose) was achieved in 16 of 25 patients, with all patients having at least 85% of the breast volume treated to 95% of the prescribed dose. Only 1 patient (4%) had adequate coverage of the Level I axilla, and no patient had adequate coverage of the Level II axilla, Level III axilla, or the internal mammary lymph nodes. Conclusion: Three-dimensional treatment planning is superior in quantification of the dose received by the breast, regional lymphatics, and critical structures. The standard breast tangent technique delivers an adequate dose to the breast but does not therapeutically treat the regional lymph nodes in the majority of patients. If coverage of the axilla or internal mammary lymph nodes is desired, alternate beam arrangements or treatment fields will be necessary
Automatic incrementalization of Prolog based static analyses
DEFF Research Database (Denmark)
Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan
2007-01-01
Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic...... programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically...
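The dependency-tracking idea behind incremental tabled evaluation can be caricatured in a few lines. The sketch below is a minimal Python analogue (not the paper's Prolog machinery): a memo table records which base facts each derived result used, so that updating a fact invalidates only the dependent entries instead of re-running the whole analysis.

```python
# Minimal analogue of incremental tabled evaluation: memoize derived results
# together with the base facts they depend on, and invalidate selectively.

class IncrementalAnalysis:
    def __init__(self, facts):
        self.facts = dict(facts)   # base facts: name -> value
        self.memo = {}             # query name -> cached result
        self.deps = {}             # fact name -> queries that used it

    def query(self, name, compute):
        """compute(facts) returns (result, names_of_facts_used)."""
        if name not in self.memo:
            result, used = compute(self.facts)
            self.memo[name] = result
            for f in used:
                self.deps.setdefault(f, set()).add(name)
        return self.memo[name]

    def update_fact(self, fact, value):
        self.facts[fact] = value
        # Drop only the memo entries that depended on the changed fact.
        for q in self.deps.pop(fact, set()):
            self.memo.pop(q, None)

a = IncrementalAnalysis({"x": 2, "y": 3})
total = a.query("sum", lambda F: (F["x"] + F["y"], ["x", "y"]))  # computed once
a.update_fact("x", 10)   # invalidates only queries that used "x"
```

Real tabled evaluation also propagates changes through rules, not just facts; this sketch shows only the memo-table bookkeeping.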
Incremental Integrity Checking: Limitations and Possibilities
DEFF Research Database (Denmark)
Christiansen, Henning; Martinenghi, Davide
2005-01-01
Integrity checking is an essential means for the preservation of the intended semantics of a deductive database. Incrementality is the only feasible approach to checking and can be obtained with respect to given update patterns by exploiting query optimization techniques. By reducing the problem...... to query containment, we show that no procedure exists that always returns the best incremental test (aka simplification of integrity constraints), and this according to any reasonable criterion measuring the checking effort. In spite of this theoretical limitation, we develop an effective procedure...
Theory of Single Point Incremental Forming
DEFF Research Database (Denmark)
Martins, P.A.F.; Bay, Niels; Skjødt, Martin
2008-01-01
This paper presents a closed-form theoretical analysis modelling the fundamentals of single point incremental forming and explaining the experimental and numerical results available in the literature for the past couple of years. The model is based on membrane analysis with bi-directional in-plan...
Incremental Integrity Checking: Limitations and Possibilities
DEFF Research Database (Denmark)
Christiansen, Henning; Martinenghi, Davide
2005-01-01
Integrity checking is an essential means for the preservation of the intended semantics of a deductive database. Incrementality is the only feasible approach to checking and can be obtained with respect to given update patterns by exploiting query optimization techniques. By reducing the problem...
The Cognitive Underpinnings of Incremental Rehearsal
Varma, Sashank; Schleisman, Katrina B.
2014-01-01
Incremental rehearsal (IR) is a flashcard technique that has been developed and evaluated by school psychologists. We discuss potential learning and memory effects from cognitive psychology that may explain the observed superiority of IR over other flashcard techniques. First, we propose that IR is a form of "spaced practice" that…
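The "spaced practice" structure of incremental rehearsal is easy to make concrete. The sketch below is a hypothetical drill-sequence generator, not a published IR protocol: the unknown item recurs with an ever-longer run of known items between presentations, which produces the expanding spacing described above.

```python
# Hypothetical incremental-rehearsal sequence: the unknown flashcard repeats
# with a growing run of known cards interleaved before each repetition.

def incremental_rehearsal(unknown, knowns):
    sequence = []
    for i in range(1, len(knowns) + 1):
        sequence.append(unknown)
        sequence.extend(knowns[:i])   # ever-longer spacing before the next repeat
    return sequence

drill = incremental_rehearsal("7x8", ["2x3", "4x5", "6x6"])
# -> ['7x8', '2x3', '7x8', '2x3', '4x5', '7x8', '2x3', '4x5', '6x6']
```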
Design of methodology for incremental compiler construction
Directory of Open Access Journals (Sweden)
Pavel Haluza
2011-01-01
Full Text Available The paper deals with the possibilities of incremental compiler construction. It covers compiler construction both for languages with a fixed set of lexical units and for languages with a variable set of lexical units. The proposed methodology for incremental compiler construction is based on the known algorithms for standard compiler construction and is derived for both groups of languages. The group of languages with a fixed set of lexical units comprises languages in which each lexical unit has a constant meaning, e.g., common programming languages. For this group the paper addresses the problem of incremental semantic analysis, which is based on incremental parsing. In the group of languages with a variable set of lexical units (e.g., the professional typographic system TeX), the meaning of any character in the input file can be changed arbitrarily at any time during processing. The change takes effect immediately, and its validity is either explicitly limited or extends to the end of the input. For this group the paper addresses the case in which macros temporarily change the category of arbitrary characters.
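The "variable set of lexical units" case can be illustrated with a toy lexer whose character-category table is mutable mid-stream, in the spirit of TeX's category codes. Everything here is illustrative (the directive name, the category labels, the single-table design); it is not TeX's actual catcode mechanism.

```python
# Toy lexer with a mutable character-category table: a directive encountered
# during lexing re-categorises '%' immediately, changing how the rest of the
# input is tokenised. Categories and the "\comment" directive are invented.

def lex(text, categories):
    tokens = []
    i = 0
    while i < len(text):
        ch = text[i]
        cat = categories.get(ch, "letter")
        if cat == "escape" and text[i + 1:i + 8] == "comment":
            categories["%"] = "comment"   # takes effect immediately
            i += 8                        # skip "\comment"
            continue
        if cat == "comment":
            break                         # rest of the input is ignored
        tokens.append((cat, ch))
        i += 1
    return tokens

# '%' is an ordinary letter until the \comment directive changes its category.
cats = {"\\": "escape"}
toks = lex("a%b\\comment c%d", cats)
```

The second '%' terminates lexing while the first one was tokenised as a letter, which is exactly the incremental-analysis difficulty the paper describes: earlier tokenisation decisions cannot be reused once a category changes.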
The balanced scorecard: an incremental approach model to health care management.
Pineno, Charles J
2002-01-01
The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decisionmakers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.
Directory of Open Access Journals (Sweden)
Koh Kim Jie
2017-01-01
Full Text Available Quadratic damping nonlinearity is challenging for displacement-based structural dynamics problems because the problem is nonlinear in the time derivative of the primitive variable. For such nonlinearity, the formulation of the tangent stiffness matrix is not clearly laid out in the literature. Consequently, ambiguity about the kinematics update arises when implementing the time-integration iterative algorithm. In the present work, an Euler-Bernoulli beam vibration problem with quadratic damping nonlinearity is addressed; the main source of quadratic damping nonlinearity is drag force estimation, which is generally valid only for slender structures. In the Newton-Raphson formulation, the tangent stiffness components associated with the quadratic damping nonlinearity require velocity input for their evaluation. For this reason, two mathematically equivalent algorithm structures with different kinematics arrangements are tested. Both algorithm structures result in the same accuracy and convergence characteristics of the solution.
Salahuddin, T.; Khan, Imad; Malik, M. Y.; Khan, Mair; Hussain, Arif; Awais, Muhammad
2017-05-01
The present work examines the internal resistance between fluid particles of tangent hyperbolic fluid flow due to a non-linear stretching sheet with heat generation. Using similarity transformations, the governing system of partial differential equations is transformed into a coupled non-linear ordinary differential system with variable coefficients. Unlike current analytical treatments of such flow problems in the literature, the main concern here is to work out the solution numerically using Runge-Kutta-Fehlberg coefficients as improved by Cash and Karp (Naseer et al., Alexandria Eng. J. 53, 747 (2014)). To determine the relevant physical features of the numerous mechanisms acting on the problem under consideration, it suffices to have the velocity profile and temperature field, together with the drag force and heat transfer rate, all as given in the current paper.
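Boundary-layer similarity systems of this kind are typically solved by a shooting method: guess the missing initial slope, integrate the ODE, and adjust the guess until the far boundary condition is met. The sketch below shows that idea on a deliberately simple stand-in BVP (y'' = -y with y(0) = 0, y(pi/2) = 1, whose exact missing slope is y'(0) = 1), using plain RK4 and bisection rather than the paper's Cash-Karp-improved Runge-Kutta-Fehlberg scheme.

```python
import math

# Shooting method on a stand-in BVP (not the tangent hyperbolic system):
# integrate y'' = -y from guessed y'(0), bisect on the endpoint error.

def rk4(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h/2, [y[i] + h/2*k1[i] for i in range(2)])
    k3 = f(t + h/2, [y[i] + h/2*k2[i] for i in range(2)])
    k4 = f(t + h, [y[i] + h*k3[i] for i in range(2)])
    return [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

def endpoint(slope, n=200):
    f = lambda t, y: [y[1], -y[0]]        # y'' = -y as a first-order system
    y, t, h = [0.0, slope], 0.0, (math.pi/2)/n
    for _ in range(n):
        y = rk4(f, y, t, h)
        t += h
    return y[0]                           # value of y at t = pi/2

lo, hi = 0.0, 2.0                          # bracket for the unknown slope
for _ in range(60):
    mid = (lo + hi)/2
    if endpoint(mid) < 1.0:
        lo = mid
    else:
        hi = mid
# mid now approximates the exact missing slope y'(0) = 1.
```

For the actual coupled momentum/energy system, the state vector is longer and two slopes must be shot simultaneously, but the structure is the same.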
Hussain, Arif; Malik, M. Y.; Salahuddin, T.; Rubab, A.; Khan, Mair
The present analysis concentrates on the thermo-physical aspects of MHD tangent hyperbolic fluid flow over a non-linear stretching sheet with viscous dissipation and convective boundary conditions. Mathematical modelling yields non-linear partial differential equations. The governing equations are transformed into corresponding coupled ordinary differential equations using local similarity variables. The resulting boundary-layer ordinary differential equations are solved with the aid of both the homotopy analysis method and the shooting method. The effects of the governing flow parameters on velocity and temperature are visualized both qualitatively and quantitatively. The wall friction factor and local Nusselt number are computed to analyze the behaviour in the vicinity of the stretching sheet. A comparison between the analytically and numerically computed values of the wall friction factor and local Nusselt number is presented. The two sets of results are in excellent agreement, and this favorable comparison lends confidence to the computed results.
Zhang, Yizhuo; Xu, Guanghua; Wang, Jing; Liang, Lin
2010-01-01
Epileptic seizure features always include the morphology and spatial distribution of nonlinear waveforms in the electroencephalographic (EEG) signals. In this study, we propose a novel incremental learning scheme based on nonlinear dimensionality reduction for automatic patient-specific seizure onset detection. The method allows for identification of seizure onset times in long-term EEG signals acquired from epileptic patients. Firstly, a nonlinear dimensionality reduction (NDR) method called local tangent space alignment (LTSA) is used to reduce the dimensionality of available initial feature sets extracted with continuous wavelet transform (CWT). One-dimensional manifold which reflects the intrinsic dynamics of seizure onset is obtained. For each patient, IEEG recordings containing one seizure onset is sufficient to train the initial one-dimensional manifold. Secondly, an unsupervised incremental learning scheme is proposed to update the initial manifold when the unlabelled EEG segments flow in sequentially. The incremental learning scheme can cluster the new coming samples into the trained patterns (containing or not containing seizure onsets). Intracranial EEG recordings from 21 patients with duration of 193.8h and 82 seizures are used for the evaluation of the method. Average sensitivity of 98.8%, average uninteresting false positive rate of 0.24/h, average interesting false positives rate of 0.25/h, and average detection delay of 10.8s are obtained. Our method offers simple, accurate training with less human intervening and can be well used in off-line seizure detection. The unsupervised incremental learning scheme has the potential in identifying novel IEEG classes (different onset patterns) within the data. Copyright © 2010 Elsevier Ltd. All rights reserved.
Evolving effective incremental SAT solvers with GP
Bader, Mohamed; Poli, R.
2008-01-01
Hyper-Heuristics can simply be defined as heuristics for choosing other heuristics: a way of combining existing heuristics to generate new ones. Within such a Hyper-Heuristic framework, effective incremental (Inc*) solvers for SAT are evolved. We test the evolved heuristics (IncHH) against other known local search heuristics on a variety of benchmark SAT problems.
Sustained mahogany (Swietenia macrophylla) plantation heartwood increment.
Frank H. Wadsworth; Edgardo. Gonzalez
2008-01-01
In a search for an increment-based rotation for plantation mahogany(Swietenia macrophylla King), heartwood volume per tree was regressed on DBH (trunk diameter outside bark at 1.4 m above the ground) and merchantable height measurements. We updated a previous study [Wadsworth, F.H., González González, E., Figuera Colón, J.C., Lugo P...
Crystallization Formulation Lab
Federal Laboratory Consortium — The Crystallization Formulation Lab fills a critical need in the process development and optimization of current and new explosives and energetic formulations. The...
Systematic Luby Transform codes as incremental redundancy scheme
CSIR Research Space (South Africa)
Grobler, TL
2011-09-01
Full Text Available Systematic Luby Transform (fountain) codes are investigated as a possible incremental redundancy scheme for EDGE. The convolutional incremental redundancy scheme currently used by EDGE is replaced by the fountain approach. The results...
Incremental product development : four essays on activities, resources, and actors
Olsen, Nina Veflen
2006-01-01
Most innovations are incremental, and incremental innovations play an important role for the firm. In spite of that, traditional NPD studies most often emphasize moderate to highly innovative product development projects. In this dissertation the overall objective is to increase our understanding of incremental innovation. The dissertation is organized around four essays that emphasize different aspects of incremental innovation. NPD in hotels, retailers and food manufact...
Incremental Nonnegative Matrix Factorization for Face Recognition
Directory of Open Access Journals (Sweden)
Wen-Sheng Chen
2008-01-01
Full Text Available Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is expensive for large matrix decomposition. The other is that repetitive learning must be conducted whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
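For readers unfamiliar with the baseline, the sketch below shows plain Lee-Seung multiplicative-update NMF, the standard scheme on which incremental variants build; the paper's INMF constraint criterion and block strategy are not reproduced here, and the tiny matrix stands in for a face-image data matrix.

```python
import numpy as np

# Baseline NMF via Lee-Seung multiplicative updates: V ~= W @ H with all
# factors nonnegative. Incremental schemes avoid rerunning this from scratch
# when new samples arrive; only the batch baseline is sketched here.

def nmf(V, r, iters=500, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H

# A tiny rank-2 nonnegative matrix stands in for a face-image data matrix.
V = np.array([[1.0, 0.0, 2.0],
              [2.0, 0.0, 4.0],
              [0.0, 3.0, 0.0]])
W, H = nmf(V, r=2)
# W @ H approximates V, with W and H entrywise nonnegative.
```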
Incremental Scheduling Engines: Cost Savings through Automation
Jaap, John; Phillips, Shaun
2005-01-01
As humankind embarks on longer space missions farther from home, the requirements and environments for scheduling the activities performed on these missions are changing. As we begin to prepare for these missions, it is appropriate to evaluate the merits and applicability of the different types of scheduling engines. Scheduling engines temporally arrange tasks onto a timeline so that all constraints and objectives are met and resources are not over-booked. Scheduling engines used to schedule space missions fall into three general categories: batch, mixed-initiative, and incremental. This paper presents an assessment of the engine types, a discussion of the impact of human exploration of the moon and Mars on planning and scheduling, and the applicability of the different types of scheduling engines. This paper pursues the hypothesis that incremental scheduling engines may have a place in the new environment; they have the potential to reduce cost, to improve the satisfaction of those who execute or benefit from a particular timeline (the customers), and to allow astronauts to plan their own tasks and those of their companion robots.
ENERGY SYSTEM CONTRIBUTIONS DURING INCREMENTAL EXERCISE TEST
Directory of Open Access Journals (Sweden)
Rômulo Bertuzzi
2013-09-01
Full Text Available The main purpose of this study was to determine the relative contributions of the aerobic and glycolytic systems during an incremental exercise test (IET). Ten male recreational long-distance runners performed an IET consisting of three-minute incremental stages on a treadmill. The fractional contributions of the aerobic and glycolytic systems were calculated for each stage based on the oxygen uptake and the oxygen energy equivalents derived from blood lactate accumulation, respectively. Total metabolic demand (WTOTAL) was considered as the sum of these two energy systems. The aerobic (WAER) and glycolytic (WGLYCOL) system contributions were expressed as percentages of WTOTAL. The results indicated that WAER (86-95%) was significantly higher than WGLYCOL (5-14%) throughout the IET (p < 0.05). In addition, there was no evidence of the sudden increase in WGLYCOL that has previously been reported to support the "anaerobic threshold" concept. These data suggest that aerobic metabolism is predominant throughout the IET and that the energy system contributions undergo a slow transition from low to high intensity.
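The per-stage bookkeeping described above reduces to simple arithmetic once the two energy quantities are in hand. The sketch below illustrates it with invented figures (the 180/20 kJ values are placeholders, not the study's data); only the percentage computation mirrors the abstract.

```python
# Per-stage energy-system shares: aerobic work from oxygen uptake, glycolytic
# work from the lactate energy equivalent; shares are percentages of the total
# metabolic demand WTOTAL = WAER + WGLYCOL. Figures below are invented.

def energy_shares(w_aerobic_kj, w_glycolytic_kj):
    total = w_aerobic_kj + w_glycolytic_kj
    return (100 * w_aerobic_kj / total, 100 * w_glycolytic_kj / total)

aer, gly = energy_shares(w_aerobic_kj=180.0, w_glycolytic_kj=20.0)
# -> aerobic 90%, glycolytic 10%, inside the 86-95% / 5-14% bands reported.
```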
Incremental learning for automated knowledge capture
Energy Technology Data Exchange (ETDEWEB)
Benz, Zachary O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Basilico, Justin Derrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dixon, Kevin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wendt, Jeremy Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2013-12-01
People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, require models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effect. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and only secondarily, if at all, concerned with issues such as speed, memory use, or ability to be incrementally updated. Thus, when new data arrives, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill-suited for use in dynamic, time-critical, high-consequence decision making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.
[Incremental cost effectiveness of multifocal cataract surgery].
Pagel, N; Dick, H B; Krummenauer, F
2007-02-01
Supplementation of cataract patients with multifocal intraocular lenses involves an additional financial investment when compared to the corresponding monofocal supplementation, which usually is not funded by German health care insurers. In the context of recent resource allocation discussions, however, the cost effectiveness of multifocal cataract surgery could become an important rationale. Therefore an evidence-based estimation of its cost effectiveness was carried out. Three independent meta-analyses were implemented to estimate the gain in uncorrected near visual acuity and best corrected visual acuity (vision lines) as well as the predictability (fraction of patients without need for reading aids) of multifocal supplementation. Study reports published between 1995 and 2004 (English or German language) were screened for appropriate key words. Meta effects in visual gain and predictability were estimated by means and standard deviations of the reported effect measures. Cost data were estimated by German DRG rates and individual lens costs; the cost effectiveness of multifocal cataract surgery was then computed in terms of its marginal cost effectiveness ratio (MCER) for each clinical benefit endpoint; the incremental costs of multifocal versus monofocal cataract surgery were further estimated by means of their respective incremental cost effectiveness ratio (ICER). An independent meta-analysis estimated the complication profiles to be expected after monofocal and multifocal cataract surgery in order to evaluate expectable complication-associated additional costs of both procedures; the marginal and incremental cost effectiveness estimates were adjusted accordingly. A sensitivity analysis comprised cost variations of +/- 10 % and utility variations alongside the meta effect estimate's 95 % confidence intervals. Total direct costs from the health care insurer's perspective were estimated 3363 euro, associated with a visual meta benefit in best corrected visual
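The marginal and incremental cost-effectiveness ratios used above are simple ratios of cost and benefit differences. The sketch below shows the arithmetic with largely invented figures: only the 3363 euro multifocal cost comes from the abstract; the monofocal cost and the vision-line gains are illustrative placeholders.

```python
# Incremental cost-effectiveness ratio (ICER): extra cost of the new option
# per extra unit of clinical benefit versus the comparator. All figures
# except the 3363 euro multifocal cost are invented placeholders.

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost per additional unit of benefit (e.g. per vision line)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Multifocal at 3363 euro gaining 2.5 vision lines vs a hypothetical
# monofocal comparator at 2900 euro gaining 1.5 lines:
ratio = icer(3363.0, 2.5, 2900.0, 1.5)   # euro per extra vision line
```

Sensitivity analysis as described in the abstract amounts to re-evaluating this ratio over cost variations of +/- 10% and over the confidence interval of the benefit estimate.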
International Nuclear Information System (INIS)
Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua
2017-01-01
Extraction of sensitive features is a challenging but key task in data-driven machinery running-state identification. Aimed at solving this problem, a method for machinery running-state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running state. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension-reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running-state identification. The effectiveness of the proposed method is verified by a running-state identification case in a gearbox, and the results confirm the improved accuracy of the running-state identification. (paper)
Incremental nonlinear dimensionality reduction by manifold learning.
Law, Martin H C; Jain, Anil K
2006-03-01
Understanding the structure of multidimensional patterns, especially in unsupervised cases, is of fundamental importance in data mining, pattern recognition, and machine learning. Several algorithms have been proposed to analyze the structure of high-dimensional data based on the notion of manifold learning. These algorithms have been used to extract the intrinsic characteristics of different types of high-dimensional data by performing nonlinear dimensionality reduction. Most of these algorithms operate in a "batch" mode and cannot be efficiently applied when data are collected sequentially. In this paper, we describe an incremental version of ISOMAP, one of the key manifold learning algorithms. Our experiments on synthetic data as well as real world images demonstrate that our modified algorithm can maintain an accurate low-dimensional representation of the data in an efficient manner.
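Manifold-learning algorithms like ISOMAP start from a neighbourhood graph, and the incremental variant's first job is to update that graph locally when a sample arrives rather than rebuilding it. The sketch below shows only that graph-maintenance step in plain Python (the paper's subsequent geodesic-distance and embedding updates are omitted): a new point triggers k-NN recomputation only for points it could plausibly displace.

```python
import math

# Incremental k-NN graph maintenance: on arrival of a new point, recompute
# neighbour lists only for points whose current worst neighbour is farther
# than the newcomer. The geodesic/embedding update of incremental ISOMAP
# is not reproduced here.

def knn(points, i, k):
    dists = sorted((math.dist(points[i], points[j]), j)
                   for j in range(len(points)) if j != i)
    return [j for _, j in dists[:k]]

def add_point(points, neighbours, p, k=2):
    points.append(p)
    new = len(points) - 1
    neighbours[new] = knn(points, new, k)
    for i in range(new):
        worst = max(math.dist(points[i], points[j]) for j in neighbours[i])
        if math.dist(points[i], p) < worst:
            neighbours[i] = knn(points, i, k)   # newcomer enters this list

points = [(0.0, 0.0), (1.0, 0.0), (4.0, 0.0)]
neighbours = {i: knn(points, i, 2) for i in range(3)}
add_point(points, neighbours, (0.5, 0.0))
```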
Efficient Incremental Checkpointing of Java Programs
DEFF Research Database (Denmark)
Lawall, Julia Laetitia; Muller, Gilles
2000-01-01
This paper investigates the optimization of language-level checkpointing of Java programs. First, we describe how to systematically associate incremental checkpoints with Java classes. While being safe, the genericness of this solution induces substantial execution overhead. Second, to solve...... the dilemma of genericness versus performance, we use automatic program specialization to transform the generic checkpointing methods into highly optimized ones. Specialization exploits two kinds of information: (i) structural properties about the program classes, (ii) knowledge of unmodified data structures...... in specific program phases. The latter information allows us to generate phase-specific checkpointing methods. We evaluate our approach on two benchmarks, a realistic application which consists of a program analysis engine, and a synthetic program which can serve as a metric. Specialization gives a speedup...
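The core of incremental checkpointing is saving only what changed since the last checkpoint. The sketch below is a language-level Python caricature of that idea (the paper works on Java classes and uses program specialization to optimize the generated methods, none of which is reproduced here): each object carries a dirty flag, and a checkpoint collects only dirty state.

```python
# Incremental checkpointing sketch: objects track a dirty flag via attribute
# interception, and checkpoint() stores only objects modified since the
# previous checkpoint.

class Checkpointable:
    def __init__(self, **state):
        self.__dict__["_state"] = dict(state)
        self.__dict__["_dirty"] = True     # everything is new at first

    def __setattr__(self, name, value):
        self._state[name] = value
        self.__dict__["_dirty"] = True

    def __getattr__(self, name):
        return self._state[name]

def checkpoint(objects):
    """Return a delta holding only dirty objects' state, then clear flags."""
    delta = {id(o): dict(o._state) for o in objects if o._dirty}
    for o in objects:
        o.__dict__["_dirty"] = False
    return delta

a, b = Checkpointable(x=1), Checkpointable(y=2)
first = checkpoint([a, b])      # initial full snapshot: both objects appear
a.x = 10
second = checkpoint([a, b])     # incremental delta: only 'a' appears
```

The phase-specific methods described in the abstract correspond to replacing the generic dirty-flag test with specialized code that already knows which fields a given phase can modify.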
IT Supporting Strategy Formulation
Achterbergh, J.M.I.M.; Khosrow-Pour, M.
2005-01-01
This overview approaches information and communication technology (ICT) for competitive intelligence from the perspective of strategy formulation. It provides an ICT architecture for supporting the knowledge processes producing relevant knowledge for strategy formulation. To determine what this
Zeiger, Matthew D.
2014-01-01
Although complex, inconsistent and fickle, the time-averaged flow over a stationary slender forebody is generally well-understood. However, the nature of unsteady, time-varying flows over slender forebodies - whether due to the natural unsteadiness or forced maneuvering - is not well-understood. This body of work documents three experimental investigations into the unsteadiness of the flow over a 3.5 caliber tangent-ogive cylinder at high angles of incidence. The goal of the investigations...
Zhang, Zhi Jie; Fu, Siu Ngor
2013-01-01
Characterization of the elastic properties of a tendon could enhance the diagnosis and treatment of tendon injuries. The purpose of this study was to examine the correlation between the shear elastic modulus on the patellar tendon captured from a Supersonic Shear Imaging (SSI) and the tangent traction modulus computed from a Material testing system (MTS) on 8 fresh patellar pig tendons (Experiment I). Test-retest reliability of the shear elastic modulus captured from the SSI was established in Experiment II on 22 patellar tendons of 11 healthy human subjects using the SSI. Spearman Correlation coefficients for the shear elastic modulus and tangent traction modulus ranged from 0.82 to 1.00 (all p<0.05) on the 8 tendons. The intra and inter-operator reliabilities were 0.98 (95% CI: 0.93-0.99) and 0.97 (95% CI: 0.93-0.98) respectively. The results from this study demonstrate that the shear elastic modulus of the patellar tendon measured by the SSI is related to the tangent traction modulus quantified by the MTS. The SSI shows good intra and inter-operator repeatability. Therefore, the present study shows that SSI can be used to assess elastic properties of a tendon.
International Nuclear Information System (INIS)
Tse, T.L.J; Bromley, R.; Booth, J.; Gray, A.
2011-01-01
Full text: Objective This study aims to evaluate the accuracy of calculated dose with the Eclipse analytical anisotropic algorithm (AAA) for the contralateral breast (CB) in left-sided breast radiotherapy for dual-arc VMAT and standard wedged tangent (SWT) techniques. Methods and materials Internal and surface CB doses were measured with EBT2 film in an anthropomorphic phantom mounted with C-cup and D-cup breasts. The measured point dose was approximated by averaging doses over the 4 x 4 mm2 central region of each 2 x 2 cm2 piece of film. The dose in the target region of the breast was also measured. The measured results were compared to AAA calculations with calculation grids of 1, 2.5 and 5 mm. Results In SWT plans, the average ratios of calculation to measurement for internal doses were 0.63 ± 0.081 and 0.51 ± 0.28 in the medial and lateral aspects, respectively. Corresponding ratios for surface doses were 0.88 ± 0.22 and 0.38 ± 0.38. In VMAT plans, however, the calculation accuracies showed little dependence on the measurement locations; the ratios were 0.78 ± 0.11 and 0.81 ± 0.085 for internal and surface doses. In general, finer calculation resolutions did not necessarily improve the estimates of internal doses. For surface doses, using a smaller grid size of 1 mm could improve the calculation accuracies on the medial but not the lateral aspects of the CB. Conclusion In all plans, AAA had a tendency to underestimate both internal and surface CB doses. Overall, it produces more accurate results in VMAT than SWT plans.
Performance Evaluation of Incremental K-means Clustering Algorithm
Chakraborty, Sanjay; Nagwani, N. K.
2014-01-01
The incremental K-means clustering algorithm was proposed and analysed in [Chakraborty and Nagwani, 2011]. It is an innovative approach applicable in periodically incremental environments that deal with bulk updates. In this paper a performance evaluation is carried out for this incremental K-means clustering algorithm using an air pollution database. The paper also describes a comparison of the performance evaluations between the existing K-means clustering and i...
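The incremental flavour of K-means can be sketched in a few lines: instead of reclustering the whole data set when new records arrive, each new point is assigned to its nearest centroid, which is then nudged using a running-mean update. This is a generic sketch of the incremental idea, not the specific algorithm of the cited paper.

```python
import math

# Incremental centroid update: assign the new point to the nearest centroid
# and move that centroid by the exact running-mean correction
#   c_new = c_old + (x - c_old) / n

def assign_incremental(centroids, counts, point):
    j = min(range(len(centroids)),
            key=lambda i: math.dist(centroids[i], point))
    counts[j] += 1
    centroids[j] = tuple(c + (x - c) / counts[j]
                         for c, x in zip(centroids[j], point))
    return j

centroids = [(0.0, 0.0), (10.0, 10.0)]
counts = [1, 1]
assign_incremental(centroids, counts, (2.0, 0.0))
# centroid 0 moves to (1.0, 0.0); centroid 1 is untouched
```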
String graph construction using incremental hashing.
Ben-Bassat, Ilan; Chor, Benny
2014-12-15
New sequencing technologies generate ever larger amounts of short-read data at decreasing cost. De novo sequence assembly is the problem of combining these reads back into the original genome sequence, without relying on a reference genome. This presents algorithmic and computational challenges, especially for long and repetitive genome sequences. Most existing approaches to the assembly problem operate in the framework of de Bruijn graphs. Yet, a number of recent works use the paradigm of the string graph, with a variety of methods for storing and processing suffixes and prefixes, such as suffix arrays, the Burrows-Wheeler transform or the FM-index. Our work is motivated by a search for new approaches to constructing the string graph, using alternative yet simple data structures and algorithmic concepts. We introduce a novel hash-based method for constructing the string graph. We use incremental hashing, specifically a modification of the Karp-Rabin fingerprint, together with Bloom filters. Using these probabilistic methods may create false-positive and false-negative edges during the algorithm's execution, but these are all detected and corrected. The advantages of the proposed approach over existing methods are its simplicity and the incorporation of established probabilistic techniques in the context of de novo genome sequencing. Our preliminary implementation compares favorably with the first string graph construction of Simpson and Durbin (2010) (but not with subsequent improvements). Further research and optimizations will hopefully enable the algorithm to be incorporated, with noticeable performance improvement, into state-of-the-art string graph-based assemblers. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
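The incremental-hashing primitive at the heart of the method is a Karp-Rabin-style rolling fingerprint: sliding a length-k window by one character is O(1) work instead of rehashing the whole window. The sketch below shows the textbook version (the paper uses its own modification of the fingerprint, paired with Bloom filters and explicit verification of candidate overlaps, none of which is shown).

```python
# Karp-Rabin-style rolling hash: sliding a length-k window costs O(1),
# because the outgoing character's contribution is subtracted and the
# incoming character appended.

BASE, MOD = 256, (1 << 61) - 1

def hash_of(s):
    h = 0
    for ch in s:
        h = (h * BASE + ord(ch)) % MOD
    return h

def roll(h, out_ch, in_ch, pow_k):
    """Slide a length-k window: drop out_ch, append in_ch."""
    return ((h - ord(out_ch) * pow_k) * BASE + ord(in_ch)) % MOD

s, k = "ACGTACGT", 4
pow_k = pow(BASE, k - 1, MOD)   # BASE^(k-1) mod MOD, for the outgoing char
h = hash_of(s[:k])
for i in range(k, len(s)):
    h = roll(h, s[i - k], s[i], pow_k)
    # Invariant: h equals the direct hash of the current window s[i-k+1:i+1].
```

Because the fingerprint is modular, distinct windows can collide; that is exactly why the paper must detect and correct false-positive edges.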
Multidimensional incremental parsing for universal source coding.
Bae, Soo Hyun; Juang, Biing-Hwang
2008-10-01
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes, maximum decimation matching, hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, an underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended into the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The property of the hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than the Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images by the one with the local minimax distortion have a good perceptual fidelity among other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
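The one-dimensional algorithm being generalised above is the classic Lempel-Ziv incremental parse: each phrase is the longest match already in the dictionary extended by one symbol, after which the dictionary is augmented with the new phrase. The sketch below is the familiar LZ78 form of that idea, not the multidimensional MDIP scheme itself (decimation matching and the hierarchical structure are not shown).

```python
# LZ78 incremental parsing: emit (index of longest known prefix, next symbol)
# and add the extended phrase to the dictionary at each coding epoch.

def lz78_parse(s):
    dictionary = {"": 0}
    phrases = []
    current = ""
    for ch in s:
        if current + ch in dictionary:
            current += ch                       # keep extending the match
        else:
            phrases.append((dictionary[current], ch))
            dictionary[current + ch] = len(dictionary)
            current = ""
    if current:
        phrases.append((dictionary[current], ""))  # trailing partial phrase
    return phrases

parsed = lz78_parse("ababab")
# -> [(0, 'a'), (0, 'b'), (1, 'b'), (3, '')]
```

MDIP replaces the "longest match" step with maximum decimation matching over m-dimensional patches and appends m patches per epoch, but the parse-then-augment loop has the same shape.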
Aquaculture project formulation
National Research Council Canada - National Science Library
Insull, David; Nash, Colin E
1990-01-01
.... The first part of the document contains a broad introduction to project formulation, describing the integration of aquaculture projects within development plans, the organization and management...
Explosive Formulation Pilot Plant
Federal Laboratory Consortium — The Pilot Plant for Explosive Formulation supports the development of new explosives that are comprised of several components. This system is particularly beneficial...
Directory of Open Access Journals (Sweden)
Cristiano Mauro Assis Gomes
2012-01-01
Full Text Available This paper investigates the incremental validity of learning approach in predicting academic achievement beyond intelligence. Participants were 684 students, from the sixth grade to the third year of high school, at a private school in Belo Horizonte, Minas Gerais, Brazil. Intelligence is measured by fluid-intelligence marker items of the Higher-Order Cognitive Factors Kit. Learning approach is measured by the Learning Approaches Scale. Academic achievement is measured by annual grades in Mathematics, Portuguese (the native language), Geography, and History. Three hypotheses about the relations among intelligence, learning approach, and academic achievement are tested through structural equation modeling. The direct-relation model was the most adequate to the data and showed good fit. Intelligence and learning approach show direct effects on academic achievement. Learning approach has incremental validity, independent of intelligence, in explaining individual differences in academic achievement.
The Time Course of Incremental Word Processing during Chinese Reading
Zhou, Junyi; Ma, Guojie; Li, Xingshan; Taft, Marcus
2018-01-01
In the current study, we report two eye movement experiments investigating how Chinese readers process incremental words during reading. These are words where some of the component characters constitute another word (an embedded word). In two experiments, eye movements were monitored while the participants read sentences with incremental words…
Public Key Infrastructure Increment 2 (PKI Inc 2)
2016-03-01
2016 Major Automated Information System Annual Report. Program Name: Public Key Infrastructure Increment 2 (PKI Inc 2). Date Assigned: July 1, 2015. Public Key Infrastructure (PKI) is a critical enabling technology for Information Assurance (IA) services to support seamless secure information flows
Validation of the periodicity of growth increment deposition in ...
African Journals Online (AJOL)
Validation of the periodicity of growth increment deposition in otoliths from the larval and early juvenile stages of two cyprinids from the Orange–Vaal river ... Linear regression models were fitted to the known age post-fertilisation and the age estimated using increment counts to test the correspondence between the two for ...
Incrementality in naming and reading complex numerals : Evidence from eyetracking
Korvorst, M.H.W.; Roelofs, A.P.A.; Levelt, W.J.M.
2006-01-01
Individuals speak incrementally when they interleave planning and articulation. Eyetracking, along with the measurement of speech onset latencies, can be used to gain more insight into the degree of incrementality adopted by speakers. In the current article, two eyetracking experiments are reported
Incremental Seismic Rehabilitation of School Buildings (K-12).
Krimgold, Frederick; Hattis, David; Green, Melvyn
Asserting that the strategy of incremental seismic rehabilitation makes it possible for schools to get started now on improving earthquake safety, this manual provides school administrators with the information necessary to assess the seismic vulnerability of their buildings and to implement a program of incremental seismic rehabilitation for…
Creating Helical Tool Paths for Single Point Incremental Forming
DEFF Research Database (Denmark)
Skjødt, Martin; Hancock, Michael H.; Bay, Niels
2007-01-01
Single point incremental forming (SPIF) is a relatively new sheet forming process. A sheet is clamped in a rig and formed incrementally using a rotating single point tool in the form of a rod with a spherical end. The process is often performed on a CNC milling machine and the tool movement...
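To make the tool-movement idea in the abstract above concrete, the following is an illustrative sketch (not the authors' algorithm) of a helical tool path for forming a cone by SPIF: the tool descends continuously while circling rather than stepping down between discrete z-level contours. All parameters (blank radius, wall angle, pitch) are hypothetical.

```python
import numpy as np

# Helical (spiral) tool path for a cone formed by single point incremental
# forming. The tool descends by `pitch` per revolution; the radius shrinks
# with depth according to the wall angle.
def helical_path(r0=40.0, wall_angle_deg=45.0, pitch=0.5, depth=10.0, pts_per_rev=90):
    n_rev = depth / pitch                                 # revolutions needed
    t = np.linspace(0.0, n_rev * 2 * np.pi, int(n_rev * pts_per_rev) + 1)
    z = -pitch * t / (2 * np.pi)                          # continuous descent
    r = r0 + z / np.tan(np.radians(wall_angle_deg))       # cone wall
    x, y = r * np.cos(t), r * np.sin(t)
    return np.column_stack([x, y, z])

path = helical_path()
```

Such a path can be exported directly as linear moves for a CNC milling machine; the continuous descent avoids the mark left at the stepdown point of contour-by-contour strategies.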
On Incremental Condition Estimators in the 2-norm
Czech Academy of Sciences Publication Activity Database
Duintjer Tebbens, Jurjen; Tůma, Miroslav
2014-01-01
Vol. 35, No. 1 (2014), pp. 174-197, ISSN 0895-4798. R&D Projects: GA ČR GA13-06684S. Institutional support: RVO:67985807. Keywords: condition number estimation; matrix inverses; incremental condition estimator; incremental norm estimator. Subject RIV: BA - General Mathematics. Impact factor: 1.590, year: 2014
Lifetime costs of lung transplantation : Estimation of incremental costs
VanEnckevort, PJ; Koopmanschap, MA; Tenvergert, EM; VanderBij, W; Rutten, FFH
1997-01-01
Despite an expanding number of centres providing lung transplantation, information about its incremental costs is scarce. From 1991 to 1995, a technology assessment was performed in the Netherlands which provided information about the incremental costs of lung
Power-law confusion: You say incremental, I say differential
Colwell, Joshua E.
1993-01-01
Power-law distributions are commonly used to describe the frequency of occurrences of crater diameters, stellar masses, ring particle sizes, planetesimal sizes, and meteoroid masses to name a few. The distributions are simple, and this simplicity has led to a number of misstatements in the literature about the kind of power-law that is being used: differential, cumulative, or incremental. Although differential and cumulative power-laws are mathematically trivial, it is a hybrid incremental distribution that is often used and the relationship between the incremental distribution and the differential or cumulative distributions is not trivial. In many cases the slope of an incremental power-law will be nearly identical to the slope of the cumulative power-law of the same distribution, not the differential slope. The discussion that follows argues for a consistent usage of these terms and against the oft-made implicit claim that incremental and differential distributions are indistinguishable.
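The slope relation described above is easy to verify numerically. As a sketch, take a cumulative power-law N(>D) = D^-2 (differential index 3) and bin it incrementally with logarithmically spaced bin edges, as is common for crater size-frequency statistics:

```python
import numpy as np

# Cumulative power-law N(>D) = D**-2, binned into log-spaced incremental bins.
edges = np.logspace(0, 3, 31)            # bin edges D_i with constant ratio
N_cum = edges ** -2.0                    # cumulative distribution N(>D)
inc_counts = N_cum[:-1] - N_cum[1:]      # counts per incremental bin

# Log-log slopes of the incremental and cumulative distributions.
slope_inc = np.polyfit(np.log(edges[:-1]), np.log(inc_counts), 1)[0]
slope_cum = np.polyfit(np.log(edges), np.log(N_cum), 1)[0]
# Both slopes are -2: the incremental slope matches the cumulative slope,
# not the differential slope of -3.
```

With constant-ratio bins, the count in [D, kD] is proportional to D^-2 times a constant, so the incremental slope exactly equals the cumulative one, which is the confusion the abstract warns about.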
An incremental approach to automated protein localisation
Directory of Open Access Journals (Sweden)
Kummert Franz
2008-10-01
Full Text Available Abstract Background The subcellular localisation of proteins in intact living cells is an important means for gaining information about protein functions. Even dynamic processes can be captured, which can barely be predicted based on amino acid sequences. Besides increasing our knowledge about intracellular processes, this information facilitates the development of innovative therapies and new diagnostic methods. In order to perform such a localisation, the proteins under analysis are usually fused with a fluorescent protein, so that they can be observed by means of a fluorescence microscope and analysed. In recent years, several automated methods have been proposed for performing such analyses. Here, two different types of approaches can be distinguished: techniques which enable the recognition of a fixed set of protein locations and methods that identify new ones. To our knowledge, a combination of both approaches – i.e. a technique which enables supervised learning using a known set of protein locations and is able to identify and incorporate new protein locations afterwards – has not been presented yet. Furthermore, associated problems, e.g. the recognition of cells to be analysed, have usually been neglected. Results We introduce a novel approach to automated protein localisation in living cells. In contrast to well-known techniques, the protein localisation technique presented in this article aims at combining the two types of approaches described above: after an automatic identification of unknown protein locations, a potential user is enabled to incorporate them into the pre-trained system. An incremental neural network allows the classification of a fixed set of protein locations as well as the detection, clustering and incorporation of additional patterns that occur during an experiment. Here, the proposed technique achieves promising results with respect to both tasks. In addition, the protein localisation procedure has been adapted
Gou, Kaiyu; Gan, Chaoqin; Zhang, Xiaoyu; Zhang, Yuchao
2018-03-01
An optical time-and-wavelength-division-multiplexing metro-access network (TWDM-MAN) is proposed and demonstrated in this paper. By reusing the tangent-ring optical distribution network and designing a distributed control mechanism, ONUs that need to communicate with each other can be flexibly interconnected to make up three kinds of reconfigurable networks. Owing to the natural advantage of ring topology for protection, three-level comprehensive protection covering both feeder and distribution fibers is also achieved. In addition, a distributed wavelength allocation (DWA) scheme is designed to support efficient parallel upstream transmission. Analyses of capacity, congestion and transmission simulation show that the network achieves good performance.
Diaz-Ruelas, Alvaro; Jeldtoft Jensen, Henrik; Piovani, Duccio; Robledo, Alberto
2016-12-01
It is well known that low-dimensional nonlinear deterministic maps close to a tangent bifurcation exhibit intermittency and this circumstance has been exploited, e.g., by Procaccia and Schuster [Phys. Rev. A 28, 1210 (1983)], to develop a general theory of 1/f spectra. This suggests it is interesting to study the extent to which the behavior of a high-dimensional stochastic system can be described by such tangent maps. The Tangled Nature (TaNa) Model of evolutionary ecology is an ideal candidate for such a study, a significant model as it is capable of reproducing a broad range of the phenomenology of macroevolution and ecosystems. The TaNa model exhibits strong intermittency reminiscent of punctuated equilibrium and, like the fossil record of mass extinction, the intermittency in the model is found to be non-stationary, a feature typical of many complex systems. We derive a mean-field version for the evolution of the likelihood function controlling the reproduction of species and find a local map close to tangency. This mean-field map, by our own local approximation, is able to describe qualitatively only one episode of the intermittent dynamics of the full TaNa model. To complement this result, we construct a complete nonlinear dynamical system model consisting of successive tangent bifurcations that generates time evolution patterns resembling those of the full TaNa model in macroscopic scales. The switch from one tangent bifurcation to the next in the sequences produced in this model is stochastic in nature, based on criteria obtained from the local mean-field approximation, and capable of imitating the changing set of types of species and total population in the TaNa model. The model combines full deterministic dynamics with instantaneous parameter random jumps at stochastically drawn times. In spite of the limitations of our approach, which entails a drastic collapse of degrees of freedom, the description of a high-dimensional model system in terms of a low
Audits of radiopharmaceutical formulations
International Nuclear Information System (INIS)
Castronovo, F.P. Jr.
1992-01-01
A procedure for auditing radiopharmaceutical formulations is described. To meet FDA guidelines regarding the quality of radiopharmaceuticals, institutional radioactive drug research committees perform audits when such drugs are formulated away from an institutional pharmacy. All principal investigators who formulate drugs outside institutional pharmacies must pass these audits before they can obtain a radiopharmaceutical investigation permit. The audit team meets with the individual who performs the formulation at the site of drug preparation to verify that drug formulations meet identity, strength, quality, and purity standards; are uniform and reproducible; and are sterile and pyrogen free. This team must contain an expert knowledgeable in the preparation of radioactive drugs; a radiopharmacist is the most qualified person for this role. Problems that have been identified by audits include lack of sterility and apyrogenicity testing, formulations that are open to the laboratory environment, failure to use pharmaceutical-grade chemicals, inadequate quality control methods or records, inadequate training of the person preparing the drug, and improper unit dose preparation. Investigational radiopharmaceutical formulations, including nonradiolabeled drugs, must be audited before they are administered to humans. A properly trained pharmacist should be a member of the audit team
Height increment of understorey Norway spruces under different tree canopies
Directory of Open Access Journals (Sweden)
Olavi Laiho
2014-02-01
Full Text Available Background Stands having advance regeneration of spruce are logical places to start continuous cover forestry (CCF) in fertile and mesic boreal forests. However, the development of advance regeneration is poorly known. Methods This study used regression analysis to model the height increment of the spruce understorey as a function of seedling height, site characteristics and canopy structure. Results An admixture of pine and birch in the main canopy improves the height increment of the understorey. When the stand basal area is 20 m2 ha-1, height increment is twice as fast under pine and birch canopies as compared to spruce. Height increment of understorey spruce increases with increasing seedling height. Between-stand and within-stand residual variation in the height increment of understorey spruces is high. The increment of the fastest-growing 1/6 of seedlings is at least 50% greater than the average. Conclusions The results of this study help forest managers to regulate the density and species composition of the stand so as to obtain sufficient height development of the understorey. In pure and almost pure spruce stands, the stand basal area should be low for good height increment of the understorey.
Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition
Directory of Open Access Journals (Sweden)
Chang Liu
2014-01-01
Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA based on updated-SVD technique algorithm is proposed in this paper. This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedure to add single sample and multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that ITPCA has achieved better recognition performance than that of vector-based principal component analysis (PCA, incremental principal component analysis (IPCA, and multilinear principal component analysis (MPCA algorithms. At the same time, ITPCA also has lower time and space complexity.
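The incremental-learning idea in the abstract above can be illustrated in its simplest vector form. The sketch below is NOT the paper's updated-SVD ITPCA; it is a minimal covariance-update scheme in which the running mean and scatter matrix are refreshed batch by batch, so principal components can be recomputed without revisiting earlier samples.

```python
import numpy as np

# Minimal incremental PCA via running mean and scatter-matrix updates.
class IncrementalCovPCA:
    def __init__(self, dim):
        self.n = 0
        self.sum = np.zeros(dim)
        self.sq = np.zeros((dim, dim))   # running sum of outer products

    def partial_fit(self, batch):
        self.n += len(batch)
        self.sum += batch.sum(axis=0)
        self.sq += batch.T @ batch

    def components(self, k):
        mean = self.sum / self.n
        cov = self.sq / self.n - np.outer(mean, mean)
        vals, vecs = np.linalg.eigh(cov)  # ascending eigenvalue order
        return vecs[:, ::-1][:, :k]       # top-k principal directions

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
ipca = IncrementalCovPCA(4)
for batch in np.array_split(X, 10):       # stream the data in 10 batches
    ipca.partial_fit(batch)
W = ipca.components(2)
```

The streamed result matches a batch eigendecomposition of the full covariance; updated-SVD methods like the paper's avoid forming the scatter matrix explicitly, which matters for high-dimensional tensor data.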
An Incremental Support Vector Machine based Speech Activity Detection Algorithm.
Xianbo, Xiao; Guangshu, Hu
2005-01-01
Traditional voice activity detection algorithms are mostly threshold-based or statistical-model-based. All of those methods lack the ability to react quickly to variations in the environment. This paper describes an incremental SVM (Support Vector Machine) method for speech activity detection. The proposed incremental procedure makes it adaptive to environmental variation, and the special construction of the incremental training data set effectively reduces the computational cost. Experimental results demonstrated higher end-point detection accuracy. Further work will focus on further decreasing the computational cost and incorporating multi-class SVM classifiers.
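The paper's exact incremental procedure is not reproduced here, but the general idea of updating an SVM-style classifier as new frames arrive can be sketched with stochastic sub-gradient descent on the hinge loss (Pegasos-style, bias omitted so the boundary passes through the origin). The data and features below are hypothetical stand-ins for speech frames.

```python
import numpy as np

# Online linear SVM training via stochastic sub-gradient descent on the
# hinge loss (Pegasos-style). The model adapts with each incoming sample
# instead of being retrained from scratch.
def online_svm(stream, dim, lam=0.01):
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):   # labels y in {-1, +1}
        eta = 1.0 / (lam * t)                      # decaying step size
        if y * (w @ x) < 1.0:                      # margin violation
            w = (1 - eta * lam) * w + eta * y * x
        else:
            w = (1 - eta * lam) * w                # regularization shrink only
    return w

rng = np.random.default_rng(1)
# Two synthetic clusters, labelled by a linear rule through the origin.
X = rng.normal(size=(400, 2)) + np.where(rng.random(400) < 0.5, 2.0, -2.0)[:, None]
y = np.sign(X[:, 0] + X[:, 1])
samples = list(zip(X, y))
w = online_svm(samples * 3, dim=2)                 # a few passes over the stream
acc = np.mean(np.sign(X @ w) == y)
```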
Incremental Sampling Algorithms for Robust Propulsion Control, Phase I
National Aeronautics and Space Administration — Aurora Flight Sciences proposes to develop a system for robust engine control based on incremental sampling, specifically Rapidly-Expanding Random Tree (RRT)...
MRI: Modular reasoning about interference in incremental programming
Oliveira, Bruno C. D. S; Schrijvers, Tom; Cook, William R
2012-01-01
Incremental Programming (IP) is a programming style in which new program components are defined as increments of other components. Examples of IP mechanisms include: Object-oriented programming (OOP) inheritance, aspect-oriented programming (AOP) advice and feature-oriented programming (FOP). A characteristic of IP mechanisms is that, while individual components can be independently defined, the composition of components makes those components become tightly coupled, sh...
On the instability increments of a stationary pinch
International Nuclear Information System (INIS)
Bud'ko, A.B.
1989-01-01
The stability of a stationary pinch against helical modes is studied numerically. It is shown that when the plasma pressure decreases rapidly enough toward the pinch boundary, for example for an isothermal diffusion pinch with a Gaussian density distribution, the m=0 instabilities grow fastest. Instability increments are calculated. A simple analytical expression for the maximum growth increment of the sausage instability for self-similar Gaussian profiles is obtained.
Entity versus incremental theories predict older adults' memory performance.
Plaks, Jason E; Chasteen, Alison L
2013-12-01
The authors examined whether older adults' implicit theories regarding the modifiability of memory in particular (Studies 1 and 3) and abilities in general (Study 2) would predict memory performance. In Study 1, individual differences in older adults' endorsement of the "entity theory" (a belief that one's ability is fixed) or "incremental theory" (a belief that one's ability is malleable) of memory were measured using a version of the Implicit Theories Measure (Dweck, 1999). Memory performance was assessed with a free-recall task. Results indicated that the higher the endorsement of the incremental theory, the better the free recall. In Study 2, older and younger adults' theories were measured using a more general version of the Implicit Theories Measure that focused on the modifiability of abilities in general. Again, for older adults, the higher the incremental endorsement, the better the free recall. Moreover, as predicted, implicit theories did not predict younger adults' memory performance. In Study 3, participants read mock news articles reporting evidence in favor of either the entity or incremental theory. Those in the incremental condition outperformed those in the entity condition on reading span and free-recall tasks. These effects were mediated by pretask worry such that, for those in the entity condition, higher worry was associated with lower performance. Taken together, these studies suggest that variation in entity versus incremental endorsement represents a key predictor of older adults' memory performance. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Assessment of strategy formulation
DEFF Research Database (Denmark)
Acur, Nuran; Englyst, Linda
2006-01-01
approaches to strategy assessment, namely the goal-centred, comparative and improvement approaches, as found in the literature. Furthermore, it encompasses three phases of strategy formulation processes: strategic thinking, strategic planning and embedding of strategy. The tool reflects that the different......, but cases and managerial perceptions indicate that the need for accurate and detailed plans might be overrated in the literature, as implementation relies heavily on continuous improvement and empowerment. Concerning embedding, key aspects relate both to the goal-centred and improvement approaches, while...... for strategy formulation processes that ensures high quality in process and outcome. Design/methodology/approach – A literature review was conducted to identify success criteria for strategy formulation processes. Then, a simple questionnaire and assessment tool was developed and used to test the validity...
Nonlinear finite element formulation for analyzing shape memory alloy cylindrical panels
International Nuclear Information System (INIS)
Mirzaeifar, R; Shakeri, M; Sadighi, M
2009-01-01
In this paper, a general incremental displacement based finite element formulation capable of modeling material nonlinearities based on first-order shear deformation theory (FSDT) is developed for cylindrical shape memory alloy (SMA) shells. The Boyd–Lagoudas phenomenological model with polynomial hardening in conjunction with 3D incremental convex cutting plane explicit algorithm is implemented for preparing the SMA constitutive model in the finite element formulation. Several numerical examples are presented for demonstrating the performance of the proposed formulation in stress, deflection and phase transformation analysis of pseudoelastic behavior of shape memory cylindrical panels with various boundary conditions. Also, it is shown that the presented formulation can be implemented for studying plates and beams with rectangular cross section
Lubrication in tablet formulations.
Wang, Jennifer; Wen, Hong; Desai, Divyakant
2010-05-01
Theoretical aspects and practical considerations of lubrication in tablet compression are reviewed in this paper. Properties of the materials that are often used as lubricants, such as magnesium stearate, in tablet dosage form are summarized. The manufacturing process factors that may affect tablet lubrication are discussed. As important as the lubricants in tablet formulations are, their presence can cause some changes to the tablet physical and chemical properties. Furthermore, a detailed review is provided on the methodologies used to characterize lubrication process during tablet compression with relevant process analytical technologies. Finally, the Quality-by-Design considerations for tablet formulation and process development in terms of lubrication are discussed.
International Nuclear Information System (INIS)
Rius, J.; Sane, J.; Miravitlles, C.; Gies, H.; Marler, B.; Oberhagemann, U.
1995-01-01
The viability of solving the structure type of zeolitic and layered materials by applying multisolution direct methods to low-resolution (∼2.2 Å) powder diffraction data is shown. The phases are refined with the tangent formula derived from Patterson-function arguments, and the correct phase sets are discriminated with the conventional figures of merit. The two test examples presented are (a) the already known tetragonal zeolite ZSM-11 (space group I4̄m2) at 2.3 Å resolution and (b) the hitherto unknown layer silicate RUB-15 (Ibam) at 2.2 Å resolution. In both cases, the tetrahedral Si units appear as resolved peaks in the Fourier maps computed with the phases of the highest-ranked direct-methods solutions. (orig.)
Three routes forward for biofuels: Incremental, leapfrog, and transitional
International Nuclear Information System (INIS)
Morrison, Geoff M.; Witcover, Julie; Parker, Nathan C.; Fulton, Lew
2016-01-01
This paper examines three technology routes for lowering the carbon intensity of biofuels: (1) a leapfrog route that focuses on major technological breakthroughs in lignocellulosic pathways at new, stand-alone biorefineries; (2) an incremental route in which improvements are made to existing U.S. corn ethanol and soybean biodiesel biorefineries; and (3) a transitional route in which biotechnology firms gain experience growing, handling, or chemically converting lignocellulosic biomass in a lower-risk fashion than leapfrog biorefineries by leveraging existing capital stock. We find the incremental route is likely to involve the largest production volumes and greenhouse gas benefits until at least the mid-2020s, but transitional and leapfrog biofuels together have far greater long-term potential. We estimate that the Renewable Fuel Standard, California's Low Carbon Fuel Standard, and federal tax credits provided an incentive of roughly $1.5–2.5 per gallon of leapfrog biofuel between 2012 and 2015, but that regulatory elements in these policies mostly incentivize lower-risk incremental investments. Adjustments in policy may be necessary to bring a greater focus on transitional technologies that provide targeted learning and cost reduction opportunities for leapfrog biofuels. - Highlights: • Three technological pathways are compared that lower carbon intensity of biofuels. • Incremental changes lead to faster greenhouse gas reductions. • Leapfrog changes lead to greatest long-term potential. • Two main biofuel policies (RFS and LCFS) are largely incremental in nature. • Transitional biofuels offer medium-risk, medium reward pathway.
Directory of Open Access Journals (Sweden)
Wubshet Ibrahim
Full Text Available This article presents the effect of thermal radiation on magnetohydrodynamic flow of a tangent hyperbolic fluid with nanoparticles past a stretching sheet with second-order slip and a convective boundary condition. A condition of zero normal flux of nanoparticles at the wall is used for the concentration boundary condition, a topic that has yet to be studied extensively. The solution for velocity, temperature and nanoparticle concentration is governed by the power-law index (n), the Weissenberg number (We), the Biot number (Bi), the Prandtl number (Pr), the velocity slip parameters (δ and γ), the Lewis number (Le), the Brownian motion parameter (Nb) and the thermophoresis parameter (Nt). A similarity transformation is used to transform the governing nonlinear boundary-value problem into coupled higher-order nonlinear ordinary differential equations. The resulting equations were solved numerically using the MATLAB function bvp4c for different values of the emerging parameters. Numerical results are presented through graphs and tables for velocity, temperature, concentration, the skin friction coefficient and the local Nusselt number. The results indicate that the skin friction coefficient Cf decreases as the Weissenberg number We and the slip parameters γ and δ increase, and rises as the power-law index n increases. The local Nusselt number -θ′(0) decreases as the slip parameters γ and δ, the radiation parameter Nr, the Weissenberg number We, the thermophoresis parameter Nt and the power-law index n increase. However, the local Nusselt number increases as the Biot number Bi increases. Keywords: Tangent hyperbolic fluid, Second order slip flow, MHD, Convective boundary condition, Radiation effect, Passive control of nanoparticles
Sadowski, P.; Kowalczyk-Gajewska, K.; Stupkiewicz, S.
2017-09-01
A consistent algorithmic treatment of the incremental Mori-Tanaka (MT) model for elasto-plastic composites is proposed. The aim is to develop a computationally efficient and robust micromechanical constitutive model suitable for large-scale finite-element computations. The resulting overall computational scheme is a doubly-nested iteration-subiteration scheme. The Newton method is used to solve the nonlinear equations at each level involved. Exact linearization is thus performed at each level so that a quadratic convergence rate can be achieved. To this end, the automatic differentiation (AD) technique is used, and the corresponding AD-based formulation is provided. Excellent overall performance of the present MT scheme in three-dimensional finite-element computations is illustrated.
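The key mechanism in the abstract above, exact linearization via automatic differentiation inside Newton iterations, can be illustrated on a scalar toy problem. The sketch below uses forward-mode dual numbers to obtain the exact derivative in one evaluation pass; it is not the Mori-Tanaka scheme itself, only a demonstration of why AD-based Newton recovers the quadratic convergence rate.

```python
# Forward-mode AD with dual numbers: each value carries its derivative,
# so f(Dual(x, 1)) returns both f(x) and f'(x) exactly.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    def __sub__(self, other):
        return Dual(self.val - other.val, self.der - other.der)

def newton(f, x0, iters=6):
    x, residuals = x0, []
    for _ in range(iters):
        fx = f(Dual(x, 1.0))          # value and exact derivative in one pass
        x = x - fx.val / fx.der        # Newton update with exact linearization
        residuals.append(abs(fx.val))
    return x, residuals

# Solve x*x - 2 = 0 from x0 = 1; the residual shrinks quadratically.
root, res = newton(lambda x: x * x - Dual(2.0), 1.0)
```

The same principle scales to the nested Newton loops of the MT scheme: AD supplies the consistent tangent at every level, so no hand-coded Jacobian is needed.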
Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking
Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua
2014-01-01
To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining both positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also trained to timely adapt the target appearance variation. Qualitative and quantitative evaluations on challenging image sequences compared with state-of-the-art algorithms demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252
Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking
Directory of Open Access Journals (Sweden)
Ming Xue
2014-02-01
Full Text Available To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT is presented. In our framework, a discriminative dictionary combining both positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV function is proposed. As the dictionary evolves, the models are also trained to timely adapt the target appearance variation. Qualitative and quantitative evaluations on challenging image sequences compared with state-of-the-art algorithms demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks.
A heuristic approach to incremental and reactive scheduling
Odubiyi, Jide B.; Zoch, David R.
1989-01-01
A heuristic approach to incremental and reactive scheduling is described. Incremental scheduling is the process of modifying an existing schedule if the initial schedule does not meet its stated initial goals. Reactive scheduling occurs in near real-time in response to changes in available resources or the occurrence of targets of opportunity. Only minor changes are made during both incremental and reactive scheduling because a goal of re-scheduling procedures is to impact the schedule minimally. The described heuristic search techniques, employed by the Request Oriented Scheduling Engine (ROSE), a prototype generic scheduler, efficiently approximate the cost of reaching a goal from a given state and provide effective mechanisms for controlling the search.
Increment definitions for scale-dependent analysis of stochastic data.
Waechter, Matthias; Kouzmitchev, Alexei; Peinke, Joachim
2004-11-01
It is common for scale-dependent analysis of stochastic data to use the increment Delta(t,r) = xi(t+r) - xi(t) of a data set xi(t) as a stochastic measure, where r denotes the scale. For joint statistics of Delta(t,r) and Delta(t,r') the question of how to nest the increments on different scales r, r' is investigated. Here we show that in some cases spurious correlations between scales can be introduced by the common left-justified definition. The consequences for a Markov process are discussed. These spurious correlations can be avoided by an appropriate nesting of increments. We demonstrate this effect for different data sets and show how it can be detected and quantified. The problem also allows us to propose a unique method to distinguish between experimental data generated by a noise-like or a Langevin-like random-walk process, respectively.
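The nesting effect can be made concrete numerically: for a process with independent increments, two increments correlate exactly insofar as their time intervals overlap, so how increments on scales r and r' are anchored decides whether a correlation between scales appears at all. A minimal sketch, assuming a plain Gaussian random walk (variable names are illustrative, not from the paper):

```python
import random

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

random.seed(0)
# a plain random walk: increments over disjoint intervals are independent
xi = [0.0]
for _ in range(20000):
    xi.append(xi[-1] + random.gauss(0.0, 1.0))

r, r2 = 16, 64                       # two scales, r < r2
ts = range(r, len(xi) - r2)
# nested (shared left endpoint t): the small increment lies inside the large one
small_nested = [xi[t + r] - xi[t] for t in ts]
big = [xi[t + r2] - xi[t] for t in ts]
# non-overlapping: small increment taken left of t, large one to the right of t
small_disjoint = [xi[t] - xi[t - r] for t in ts]

c_nested = corr(small_nested, big)      # ~ sqrt(r/r2) = 0.5: nesting couples the scales
c_disjoint = corr(small_disjoint, big)  # ~ 0 for an independent-increment process
```

The nested pair is correlated purely because the intervals overlap; for data with genuine scale-to-scale dependence, this geometric contribution must be separated from the physical one, which is the paper's point.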
Making context explicit for explanation and incremental knowledge acquisition
Energy Technology Data Exchange (ETDEWEB)
Brezillon, P. [Univ. Paris (France)
1996-12-31
Intelligent systems may be improved by making context explicit in problem solving. This is a lesson drawn from a study of the reasons why a number of knowledge-based systems (KBSs) failed. We discuss the value of making context explicit in explanation generation and incremental knowledge acquisition, two important aspects of intelligent systems that aim to cooperate with users. We show how context can be used to better explain and to incrementally acquire knowledge. The advantages of using context in explanation and incremental knowledge acquisition are discussed through SEPIT, an expert system for supporting diagnosis and explanation through simulation of power plants. We point out how the limitations of such systems may be overcome by making context explicit.
Springback law study and application in incremental bending process
Zhang, Feifei; He, Kai; Dang, Xiaobing; Du, Ruxu
2018-02-01
An incremental bending process has been proposed for manufacturing complex and thick ship-hull plates. The accuracy and efficiency of this novel process depend mainly on the loading path, so the unavoidable springback behavior should be considered when determining the loading path. In this paper, the numerical simulation method is first verified against the corresponding experiment; the springback law during the incremental bending process is then investigated based on numerical simulation; and finally a loading path based on the springback law and the minimum energy method is derived for a specific target shape. Comparison between the curve designed from the springback law and the new simulation results verifies that the springback law obtained by numerical simulation is reliable, so this study provides a new perspective for further research on the incremental bending process.
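The core of springback compensation in a loading path can be sketched as a fixed-point iteration: over-bend until the unloaded shape matches the target. This is a generic illustration under an assumed linear springback law (the 85% retention factor and function names are hypothetical, not taken from the paper):

```python
def compensate(target, unload, tol=1e-9, max_iter=100):
    """Fixed-point correction of the loading path: keep over-bending until the
    shape after springback (`unload`) matches the target deflection."""
    y_loaded = target
    for _ in range(max_iter):
        err = target - unload(y_loaded)
        if abs(err) < tol:
            break
        y_loaded += err        # add the residual error to the loading path
    return y_loaded

# hypothetical springback law: the plate keeps 85% of the loaded deflection
retained = 0.85
y_loaded = compensate(10.0, lambda y: retained * y)
final_shape = retained * y_loaded   # deflection after springback, ~ the 10.0 target
```

With a retention factor k, the iteration converges geometrically with rate (1 - k), so a handful of iterations suffice; in practice `unload` would be the FEM springback prediction rather than a scalar law.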
Motion-Induced Blindness Using Increments and Decrements of Luminance
Directory of Open Access Journals (Sweden)
Stine Wm Wren
2017-10-01
Full Text Available Motion-induced blindness describes the disappearance of stationary elements of a scene when other, perhaps non-overlapping, elements of the scene are in motion. We measured the effects of increment (200.0 cd/m2) and decrement targets (15.0 cd/m2) and masks presented on a grey background (108.0 cd/m2), tapping into putative ON- and OFF-channels, on the rate of target disappearance psychophysically. We presented two-frame motion, which has coherent motion energy, and dynamic Glass patterns and dynamic anti-Glass patterns, which do not have coherent motion energy. Using the method of constant stimuli, participants viewed stimuli of varying durations (3.1 s, 4.6 s, 7.0 s, 11 s, or 16 s) in a given trial and then indicated whether or not the targets vanished during that trial. Psychometric function midpoints were used to define absolute threshold mask duration for the disappearance of the target. 95% confidence intervals for threshold disappearance times were estimated using a bootstrap technique for each of the participants across two experiments. Decrement masks were more effective than increment masks with increment targets. Increment targets were easier to mask than decrement targets. Distinct mask pattern types had no effect, suggesting that perceived coherence contributes to the effectiveness of the mask. The ON/OFF dichotomy clearly carries its influence to the level of perceived motion coherence. Further, the asymmetry in the effects of increment and decrement masks on increment and decrement targets might lead one to speculate that they reflect the ‘importance’ of detecting decrements in the environment.
Rapid Prototyping of wax foundry models in an incremental process
Directory of Open Access Journals (Sweden)
B. Kozik
2011-04-01
Full Text Available The paper presents an analysis of incremental methods of creating wax foundry models. There are two methods of Rapid Prototyping of wax models in an incremental process which are more and more often used in industrial practice and in scientific research. Applying Rapid Prototyping methods in the process of making casts allows for acceleration of work on preparing prototypes. It is especially important in case of elements having complicated shapes. The time of making a wax model, depending on the size and the applied RP method, may vary from several to a few dozen hours.
Incremental Learning of Skill Collections based on Intrinsic Motivation
Jan Hendrik Metzen; Frank eKirchner; Frank eKirchner
2013-01-01
Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on it and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-i...
First incremental buy for Increment 2 of the Space Transportation System (STS)
1989-01-01
Thiokol manufactured and delivered 9 flight motors to KSC on schedule. All test flights were successful. All spent SRMs were recovered. Design, development, manufacture, and delivery of required transportation, handling, and checkout equipment to MSFC and to KSC were completed on schedule. All items of data required by DPD 400 were prepared and delivered as directed. In the system requirements and analysis area, the point of departure from Buy 1 to the operational phase was developed in significant detail with a complete set of transition documentation available. The documentation prepared during the Buy 1 program was maintained and updated where required. The following flight support activities should be continued through other production programs: as-built materials usage tracking on all flight hardware; mass properties reporting for all flight hardware until sample size is large enough to verify that the weight limit requirements were met; ballistic predictions and postflight performance assessments for all production flights; and recovered SRM hardware inspection and anomaly identification. In the safety, reliability, and quality assurance area, activities accomplished were assurance oriented in nature and specifically formulated to prevent problems and hardware failures. The flight program to date has adequately demonstrated the success of this assurance approach. The attention focused on details of design, analysis, manufacture, and inspection to assure the production of high-quality hardware has resulted in the absence of flight failures. The few anomalies which did occur were evaluated, design or manufacturing changes incorporated, and corrective actions taken to preclude recurrence.
Bipower variation for Gaussian processes with stationary increments
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Corcuera, José Manuel; Podolskij, Mark
2009-01-01
Convergence in probability and central limit laws of bipower variation for Gaussian processes with stationary increments and for integrals with respect to such processes are derived. The main tools of the proofs are some recent powerful techniques of Wiener/Itô/Malliavin calculus for establishing...
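The bipower variation statistic itself is simple to compute. As an illustrative sketch for the special case of Brownian motion (an instance of a Gaussian process with stationary increments; the jump demonstration is an editor's illustration of why bipower variation is used, not a claim from the record):

```python
import math
import random

def realized_variance(x):
    # sum of squared increments
    return sum((b - a) ** 2 for a, b in zip(x, x[1:]))

def bipower_variation(x):
    # sum of products of absolute values of adjacent increments,
    # normalized by mu1^2 where mu1 = E|N(0,1)|
    d = [b - a for a, b in zip(x, x[1:])]
    mu1 = math.sqrt(2 / math.pi)
    return mu1 ** -2 * sum(abs(p) * abs(q) for p, q in zip(d, d[1:]))

random.seed(1)
n = 100_000
dt = 1.0 / n
x = [0.0]
for _ in range(n):
    x.append(x[-1] + random.gauss(0.0, math.sqrt(dt)))  # Brownian path, sigma = 1

xj = list(x)
for i in range(n // 2, n + 1):        # add a single jump of size 1
    xj[i] += 1.0

rv, bv = realized_variance(x), bipower_variation(x)        # both ~ 1
rv_j, bv_j = realized_variance(xj), bipower_variation(xj)  # RV ~ 2, BV still ~ 1
```

Both statistics converge to the integrated variance (here 1) on the continuous path, but the single jump inflates the realized variance by roughly the squared jump size while leaving bipower variation nearly unchanged, since the jump enters only through two vanishing cross products.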
76 FR 73475 - Immigration Benefits Business Transformation, Increment I; Correction
2011-11-29
... [CIS No. 2481-09; Docket No. USCIS-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY: U.S. Citizenship and Immigration Services, DHS. ACTION: Final... to enable U.S. Citizenship and Immigration Services (USCIS) to transform its business processes. The...
Next Generation Diagnostic System (NGDS) Increment 1 Early Fielding Report
2017-06-07
... upgraded the NGDS laptop in order to utilize the Windows 10 operating system required by the Department of Defense. Additional developmental and cybersecurity testing... Director, Operational Test and Evaluation, Next Generation Diagnostic System (NGDS) Increment 1 Early Fielding Report, June 2017.
Single-point incremental forming and formability-failure diagrams
DEFF Research Database (Denmark)
Silva, M.B.; Skjødt, Martin; Atkins, A.G.
2008-01-01
In a recent work [1], the authors constructed a closed-form analytical model that is capable of dealing with the fundamentals of single point incremental forming and explaining the experimental and numerical results published in the literature over the past couple of years. The model is based on ...
Incremental concept learning with few training examples and hierarchical classification
Bouma, Henri; Eendebak, Pieter T.; Schutte, Klamer; Azzopardi, George; Burghouts, Gertjan J.
2015-01-01
Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible
Factors for Radical Creativity, Incremental Creativity, and Routine, Noncreative Performance
Madjar, Nora; Greenberg, Ellen; Chen, Zheng
2011-01-01
This study extends theory and research by differentiating between routine, noncreative performance and 2 distinct types of creativity: radical and incremental. We also use a sensemaking perspective to examine the interplay of social and personal factors that may influence a person's engagement in a certain level of creative action versus routine,…
Geometry of finite deformations and time-incremental analysis
Czech Academy of Sciences Publication Activity Database
Fiala, Zdeněk
2016-01-01
Roč. 81, May (2016), s. 230-244 ISSN 0020-7462 Institutional support: RVO:68378297 Keywords : solid mechanics * finite deformations * time-incremental analysis * Lagrangian system * evolution equation of Lie type Subject RIV: BE - Theoretical Physics Impact factor: 2.074, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020746216000330
Respiratory ammonia output and blood ammonia concentration during incremental exercise
Ament, W; Huizenga; Kort, E; van der Mark, TW; Grevink, RG; Verkerke, GJ
The aim of this study was to investigate whether the increase of ammonia concentration and lactate concentration in blood was accompanied by an increased expiration of ammonia during graded exercise. Eleven healthy subjects performed an incremental cycle ergometer test. Blood ammonia, blood lactate
Predicting Robust Vocabulary Growth from Measures of Incremental Learning
Frishkoff, Gwen A.; Perfetti, Charles A.; Collins-Thompson, Kevyn
2011-01-01
We report a study of incremental learning of new word meanings over multiple episodes. A new method called MESA (Markov Estimation of Semantic Association) tracked this learning through the automated assessment of learner-generated definitions. The multiple word learning episodes varied in the strength of contextual constraint provided by…
Simultaneous versus incremental learning of multiple skills by modular robots
Rossi, C.; Eiben, A.E.
2014-01-01
This paper is concerned with the problem of learning multiple skills by modular robots. The main question we address is whether it is better to learn multiple skills simultaneously (all-at-once) or incrementally (one-by-one). We conduct an experimental study with modular robots of various
Gust Disturbance Alleviation with Incremental Nonlinear Dynamic Inversion
Smeur, E.J.J.; de Croon, G.C.H.E.; Chu, Q.P.
2016-01-01
Micro Aerial Vehicles (MAVs) are limited in their operation outdoors near obstacles by their ability to withstand wind gusts. Currently widespread position control methods such as Proportional Integral Derivative control do not perform well under the influence of gusts. Incremental Nonlinear Dynamic
Incremental cryptography and security of public hash functions ...
African Journals Online (AJOL)
An investigation of incremental algorithms for crytographic functions was initiated. The problem, for collision-free hashing, is to design a scheme for which there exists an efficient “update” algorithm: this algorithm is given the hash function H, the hash h = H(M) of message M and the “replacement request” (j, m), and outputs ...
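The "replacement request" interface can be sketched with a toy randomize-then-combine construction: hash each (position, block) pair and combine with XOR, so replacing one block is constant-time. This is an editor's illustration only; a plain XOR combiner is known to be cryptographically weak, and real incremental collision-free schemes use stronger combining operations:

```python
import hashlib

def _h(index, block):
    # per-block randomizer: hash of (position, content)
    d = hashlib.sha256(index.to_bytes(8, "big") + block).digest()
    return int.from_bytes(d, "big")

def full_hash(blocks):
    # combine all per-block hashes (XOR combiner: illustrative, not secure)
    acc = 0
    for j, m in enumerate(blocks):
        acc ^= _h(j, m)
    return acc

def update_hash(old_hash, j, old_block, new_block):
    """Apply replacement request (j, m): O(1) work, no full rehash."""
    return old_hash ^ _h(j, old_block) ^ _h(j, new_block)

msg = [b"alpha", b"bravo", b"charlie"]
h0 = full_hash(msg)
h1 = update_hash(h0, 1, b"bravo", b"BRAVO")   # replace block 1 incrementally
msg[1] = b"BRAVO"
assert h1 == full_hash(msg)   # incremental update agrees with rehashing
```

The efficiency point is the one the abstract makes: the update cost depends only on the replaced block, not on the message length.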
Predicting Robust Vocabulary Growth from Measures of Incremental Learning
Frishkoff, G.A.; Perfetti, C.A.; Collins-Thompson, K.
2011-01-01
We report a study of incremental learning of new word meanings over multiple episodes. A new method called MESA (Markov Estimation of Semantic Association) tracked this learning through the automated assessment of learner-generated definitions. The multiple word learning episodes varied in the
Flexible Aircraft Gust Load Alleviation with Incremental Nonlinear Dynamic Inversion
Wang, X.; van Kampen, E.; Chu, Q.; De Breuker, R.
2018-01-01
In this paper, an Incremental Nonlinear Dynamic Inversion (INDI) controller is
developed for the flexible aircraft gust load alleviation (GLA) problem. First, a flexible aircraft model captures both inertia and aerodynamic coupling effects between flight dynamics and structural vibration
Revisiting the fundamentals of single point incremental forming by
DEFF Research Database (Denmark)
Silva, Beatriz; Skjødt, Martin; Martins, Paulo A.F.
2008-01-01
Knowledge of the physics behind the fracture of material at the transition between the inclined wall and the corner radius of the sheet is of great importance for understanding the fundamentals of single point incremental forming (SPIF). How the material fractures, what is the state of strain and...
The Incremental Validity of Positive Emotions in Predicting School Functioning
Lewis, Ashley D.; Huebner, E. Scott; Reschly, Amy L.; Valois, Robert F.
2009-01-01
Proponents of positive psychology have argued for more comprehensive assessments incorporating positive measures (e.g., student strengths) as well as negative measures (e.g., psychological symptoms). However, few variable-centered studies have addressed the incremental validity of positive assessment data. The authors investigated the incremental…
Some theoretical aspects of capacity increment in gaseous diffusion
International Nuclear Information System (INIS)
Coates, J.H.; Guais, J.C.; Lamorlette, G.
1975-01-01
Facing the sharply growing demand for enrichment services, the problem of implementing new capacities must be addressed within an optimized scheme spread out over time. In this paper the alternative solutions are studied first for a single increment decision, and then within an optimum schedule. The limits of the analysis are discussed [fr]
Incremental validity of a measure of emotional intelligence.
Chapman, Benjamin P; Hayslip, Bert
2005-10-01
After the Schutte Self-Report Inventory of Emotional Intelligence (SSRI; Schutte et al., 1998) was found to predict college grade point average, subsequent emotional intelligence (EI)-college adjustment research has used inconsistent measures and widely varying criteria, resulting in confusion about the construct's predictive validity. In this study, we assessed the SSRI's incremental validity for a wide range of adjustment criteria, pitting it against a competing trait measure, the NEO Five-Factor Inventory (NEO-FFI; Costa & McCrae, 1992), and tests of fluid and crystallized intelligence. At a broad bandwidth, the SSRI total score significantly and uniquely predicted variance beyond NEO-FFI domain scores in the UCLA Loneliness Scale, Revised (Russell, Peplau, & Cutrona, 1980) scores. Higher fidelity analyses using previously identified SSRI factors and NEO-FFI item clusters revealed that the SSRI's Optimism/Mood Regulation and Emotion Appraisal factors contributed unique variance to self-reported study habits and social stress, respectively. The potential moderation of incremental validity by gender did not reach significance due to loss of power from splitting the sample, and mediational analyses revealed the SSRI Optimism/Mood Regulation factor was both directly and indirectly related to various criteria. We discuss the small magnitude of incremental validity coefficients and the differential incremental validity of SSRI factor and total scores.
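Incremental validity is typically quantified as the gain in R-squared when the new measure is added to a hierarchical regression after the baseline predictors. A minimal sketch on synthetic data (the variable names, effect sizes, and two-predictor baseline are hypothetical, standing in for the NEO-FFI domains and EI score of the study):

```python
import random

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept (normal equations)."""
    n = len(X)
    A = [[1.0] + list(row) for row in X]         # prepend intercept column
    k = len(A[0])
    # augmented normal-equations matrix [A^T A | A^T y]
    M = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(k)]
         + [sum(A[i][r] * y[i] for i in range(n))] for r in range(k)]
    for c in range(k):                           # Gaussian elimination, pivoting
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for j in range(c, k + 1):
                M[r][j] -= f * M[c][j]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (M[r][k] - sum(M[r][j] * b[j] for j in range(r + 1, k))) / M[r][r]
    yhat = [sum(bj * aj for bj, aj in zip(b, A[i])) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

random.seed(2)
n = 500
neuro = [random.gauss(0, 1) for _ in range(n)]
extra = [random.gauss(0, 1) for _ in range(n)]
ei = [random.gauss(0, 1) for _ in range(n)]      # hypothetical EI scale score
crit = [0.5 * e - 0.4 * nu + 0.3 * s + random.gauss(0, 1)
        for nu, e, s in zip(neuro, extra, ei)]   # criterion with a small EI effect

r2_base = r_squared(list(zip(neuro, extra)), crit)
r2_full = r_squared(list(zip(neuro, extra, ei)), crit)
delta_r2 = r2_full - r2_base      # incremental validity of the added score
```

The "small magnitude of incremental validity coefficients" the authors discuss corresponds to a small but nonzero `delta_r2` of this kind.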
A sequential tree approach for incremental sequential pattern mining
Indian Academy of Sciences (India)
Data mining; STISPM; sequential tree; incremental mining; backward tracking. Abstract. ''Sequential pattern mining'' is a prominent and significant method to explore the knowledge and innovation from the large database. Common sequential pattern mining algorithms handle static databases. Pragmatically, looking into the ...
Cascaded incremental nonlinear dynamic inversion for MAV disturbance rejection
Smeur, E.J.J.; de Croon, G.C.H.E.; Chu, Q.
2018-01-01
This paper presents the cascaded integration of Incremental Nonlinear Dynamic Inversion (INDI) for attitude control and INDI for position control of micro air vehicles. Significant improvements over a traditional Proportional Integral Derivative (PID) controller are demonstrated in an experiment
Playing by the rules? Analysing incremental urban developments
Karnenbeek, van Lilian; Janssen-Jansen, Leonie
2018-01-01
Current urban developments are often considered outdated and static, and the argument follows that they should become more adaptive. In this paper, we argue that existing urban developments are already adaptive and incremental. Given this flexibility in urban development, understanding changes in the
A model for incremental grounding in spoken dialogue systems
Visser, Thomas; Traum, David; DeVault, David; op den Akker, Hendrikus J.A.
2012-01-01
Recent advances in incremental language processing for dialogue systems promise to enable more natural conversation between humans and computers. By analyzing the user's utterance while it is still in progress, systems can provide more human-like overlapping and backchannel responses to convey their
A model for incremental grounding in spoken dialogue systems
Visser, Thomas; Traum, David; DeVault, David; op den Akker, Hendrikus J.A.
2014-01-01
We present a computational model of incremental grounding, including state updates and action selection. The model is inspired by corpus-based examples of overlapping utterances of several sorts, including backchannels and completions. The model has also been partially implemented within a virtual
A syntactic language model based on incremental CCG parsing
Hassan, H.; Sima'an, K.; Way, A.
2008-01-01
Syntactically-enriched language models (parsers) constitute a promising component in applications such as machine translation and speech-recognition. To maintain a useful level of accuracy, existing parsers are non-incremental and must span a combinatorially growing space of possible structures as
Average-case analysis of incremental topological ordering
DEFF Research Database (Denmark)
Ajwani, Deepak; Friedrich, Tobias
2010-01-01
Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated...
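An incremental topological ordering algorithm repairs the order locally on each edge insertion instead of re-sorting the whole DAG. A sketch in the style of the Pearce-Kelly algorithm (an editor's illustration of the problem setting; the record does not specify this particular algorithm):

```python
def make_dag(n):
    return {"adj": [set() for _ in range(n)],    # outgoing edges
            "radj": [set() for _ in range(n)],   # incoming edges
            "pos": list(range(n))}               # pos[v]: position in the order

def _reachable(start, nbrs, pos, lo, hi):
    # nodes reachable from `start` staying inside the position window [lo, hi]
    seen, stack = {start}, [start]
    while stack:
        w = stack.pop()
        for x in nbrs[w]:
            if lo <= pos[x] <= hi and x not in seen:
                seen.add(x)
                stack.append(x)
    return seen

def add_edge(g, u, v):
    """Insert u -> v, repairing the topological order while touching only the
    nodes whose positions lie between pos[v] and pos[u]."""
    adj, radj, pos = g["adj"], g["radj"], g["pos"]
    lb, ub = pos[v], pos[u]
    if lb <= ub:                                  # order violated: local repair
        fwd = _reachable(v, adj, pos, lb, ub)     # v and its descendants in window
        bwd = _reachable(u, radj, pos, lb, ub)    # u and its ancestors in window
        if fwd & bwd:
            raise ValueError("edge u->v would create a cycle")
        slots = sorted(pos[w] for w in fwd | bwd)
        order = sorted(bwd, key=pos.__getitem__) + sorted(fwd, key=pos.__getitem__)
        for w, s in zip(order, slots):            # ancestors first, then descendants
            pos[w] = s
    adj[u].add(v)
    radj[v].add(u)
```

Insertions that already respect the order cost O(1); only order-violating insertions trigger the bounded search and reassignment, which is the behavior the average-case analysis in the paper is about.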
Against the Odds: Academic Underdogs Benefit from Incremental Theories
Davis, Jody L.; Burnette, Jeni L.; Allison, Scott T.; Stone, Heather
2011-01-01
An implicit theory of ability approach to motivation argues that students who believe traits to be malleable (incremental theorists), relative to those who believe traits to be fixed (entity theorists), cope more effectively when academic challenges arise. In the current work, we integrated the implicit theory literature with research on top dog…
Systematic Equation Formulation
DEFF Research Database (Denmark)
Lindberg, Erik
2007-01-01
A tutorial giving a very simple introduction to the set-up of the equations used as a model for an electrical/electronic circuit. The aim is to find a method which is as simple and general as possible with respect to implementation in a computer program. The “Modified Nodal Approach”, MNA, and the “Controlled Source Approach”, CSA, for systematic equation formulation are investigated. It is suggested that the kernel of the PSpice program based on MNA is reprogrammed.
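The MNA set-up can be shown on the smallest useful circuit: a 10 V source driving a two-resistor divider. The unknowns are the node voltages plus the source branch current; resistors stamp conductances into the matrix and the voltage source adds one extra row and column. A minimal sketch (the specific circuit values are illustrative, not from the tutorial):

```python
def solve(M, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(M)
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# MNA: 10 V source at node 1, R1 = 1 kOhm from node 1 to node 2,
# R2 = 1 kOhm from node 2 to ground; unknowns are [v1, v2, i_source]
R1 = R2 = 1000.0
A = [[ 1 / R1,        -1 / R1, 1.0],   # KCL at node 1 (incl. source current)
     [-1 / R1, 1 / R1 + 1 / R2, 0.0],  # KCL at node 2
     [    1.0,             0.0, 0.0]]  # branch equation: v1 = 10
rhs = [0.0, 0.0, 10.0]
v1, v2, i_src = solve(A, rhs)
# v1 = 10 V, v2 = 5 V (divider), i_src = -5 mA through the source
```

Each element contributes a fixed "stamp" to the matrix, which is what makes MNA easy to automate in a simulator kernel.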
Development of corotational formulated FEM for application to 30m class large deployable reflector
International Nuclear Information System (INIS)
Ozawa, Satoru; Fujiwara, Yuuichi; Tsujihata, Akio
2010-01-01
JAXA, Japan Aerospace Exploration Agency, is now developing a corotational formulated finite element analysis method and its software 'Origami/ETS' for the development of 30m class large deployable reflectors. For the reason that the deployable reflector is composed of beams, cables and mesh, this analysis method is generalized for finite elements with multiple nodes, which are commonly used in linear finite element analyses. The large displacement and rotation are taken into account by the corotational formulation. The tangent stiffness matrix for finite elements with multiple nodes is obtained as follows; the geometric stiffness matrix of two node elements is derived by taking variation of the element's corotational matrix from the virtual work of finite elements with large displacement; similarly the geometric stiffness matrix for three node elements is derived; as the extension of two and three node element theories, the geometric stiffness matrix for multiple node elements is derived; with the geometric stiffness matrix for multiple node elements, the tangent stiffness matrix is obtained. The analysis method is applied for the deployment analysis and static structural analysis of the 30m class large deployable reflector. In the deployment analysis, it is confirmed that this method stably analyzes the deployment motion from the deployment configuration to the stowed configuration of the reflector. In the static analysis, it is confirmed that the mesh structure is analyzed successfully. The 30m class large deployable reflector is now still being developed and is about to undergo several tests with its prototypes. This analysis method will be used in the tests and verifications of the reflector.
Drug delivery and formulations.
Breitkreutz, Jörg; Boos, Joachim
2011-01-01
Paediatric drug delivery is a major challenge in drug development. Because of the heterogeneous nature of the patient group, ranging from newborns to adolescents, there is a need to use appropriate excipients, drug dosage forms and delivery devices for different age groups. So far, there is a lack of suitable and safe drug formulations for children, especially for the very young and seriously ill patients. The new EU legislation will enforce paediatric clinical trials and drug development. Current advances in paediatric drug delivery include interesting new concepts such as fast-dissolving drug formulations, including orodispersible tablets and oral thin strips (buccal wafers), and multiparticulate dosage forms based on mini-tabletting or pelletization technologies. Parenteral administration is likely to remain the first choice for children in the neonatal period and for emergency cases. Alternative routes of administration include transdermal, pulmonary and nasal drug delivery systems. A few products are already available on the market, but others still need further investigations and clinical proof of concept.
The Incremental Hybrid Natural Element Method for Elastoplasticity Problems
Directory of Open Access Journals (Sweden)
Yongqi Ma
2014-01-01
Full Text Available An incremental hybrid natural element method (HNEM) is proposed to solve two-dimensional elasto-plastic problems in this paper. The corresponding formulae of this method are obtained by consolidating the hybrid stress element and the incremental Hellinger-Reissner variational principle into the NEM. Using this method, the stress and displacement variables at each node can be directly obtained after the stress and displacement interpolation functions are properly constructed. Numerical examples are given to show the advantages of the proposed algorithm of the HNEM, and the solutions for the elasto-plastic problems are better than those of the NEM. In addition, the performance of the proposed algorithm is better than the stress recovery method using moving least squares interpolation.
Numerical and experimental microscale analysis of the incremental forming process
Szyndler, Joanna; Delannay, Laurent; Muszka, Krzysztof; Madej, Lukasz
2017-10-01
Development of the 2D concurrent multiscale numerical model of the novel incremental forming (IF) process is the main aim of the paper. The IF process is used to obtain light and durable integral parts, especially useful in the aerospace or automotive industries. Particular attention in the present work is put on numerical investigation of material behavior at both macro and micro scale levels. A Finite Element Method (FEM) supported by the Digital Material Representation (DMR) concept is used during the investigation. Also, the Crystal Plasticity (CP) theory is applied to describe material flow at the grain level. Examples of obtained results from both the macro and micro scales are presented in the form of strain distributions, grain shapes and pole figures at different process stages. Moreover, Electron Backscatter Diffraction (EBSD) analysis is used to obtain detailed information regarding material morphology changes during the incremental forming for comparison purposes.
Thermomechanical simulations and experimental validation for high speed incremental forming
Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia
2016-10-01
Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to reproduce the material behavior during the high speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material presents the same performance as in conventional speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process, substantially confirming the experimental evidence.
Lackner, J. R.; Lobovits, D. N.
1978-01-01
Visual-target pointing experiments were performed on 24 adult volunteers in order to compare the relative effectiveness of incremental (stepwise) and single-step exposure conditions on adaptation to visual rearrangement. The differences between the preexposure and postexposure scores served as an index of the adaptation elicited during the exposure period. It is found that both single-step and stepwise exposure to visual rearrangement elicit compensatory changes in sensorimotor coordination. However, stepwise exposure, when compared to single-step exposure in terms of the average magnitude of visual displacement over the exposure period, clearly enhances the rate of adaptation. It seems possible that the enhancement of adaptation to unusual patterns of sensory stimulation produced by incremental exposure reflects a general principle of sensorimotor function.
Will Incremental Hemodialysis Preserve Residual Function and Improve Patient Survival?
Davenport, Andrew
2015-01-01
The progressive loss of residual renal function in peritoneal dialysis patients is associated with increased mortality. It has been suggested that incremental dialysis may help preserve residual renal function and improve patient survival. Residual renal function depends upon both patient related and dialysis associated factors. Maintaining patients in an over-hydrated state may be associated with better preservation of residual renal function but any benefit comes with a significant risk of cardiovascular consequences. Notably, it is only observational studies that have reported an association between dialysis patient survival and residual renal function; causality has not been established for dialysis patient survival. The tenuous connections between residual renal function and outcomes and between incremental hemodialysis and residual renal function should temper our enthusiasm for interventions in this area. PMID:25385441
Creating Helical Tool Paths for Single Point Incremental Forming
DEFF Research Database (Denmark)
Skjødt, Martin; Hancock, Michael H.; Bay, Niels
2007-01-01
Single point incremental forming (SPIF) is a relatively new sheet forming process. A sheet is clamped in a rig and formed incrementally using a rotating single point tool in the form of a rod with a spherical end. The process is often performed on a CNC milling machine and the tool movement... is programmed using CAM software intended for surface milling. Often the function called profile milling or contour milling is applied. Using this milling function the tool only has a continuous feed rate in two directions X and Y, which is the plane of the undeformed sheet. The feed in the vertical Z direction... from the profile milling code and converts them into a helical tool path with continuous feed in all three directions. Using the helical tool path the scarring is removed, the part is otherwise unchanged and a major disadvantage of using milling software for SPIF is removed. The solution...
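The contour-to-helix conversion can be sketched geometrically: take the stack of constant-Z contours that profile milling produces and ramp Z linearly along each contour toward the next level, so the plunge between levels (and the scar it leaves) disappears. An illustrative sketch on circular contours for a cone frustum (the geometry and step values are hypothetical, not from the paper):

```python
import math

def contour(radius, z, n=90):
    # closed circular contour at constant depth z (profile-milling style)
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n), z) for k in range(n)]

def to_helical(contours):
    """Ramp Z continuously along each contour instead of stepping between levels."""
    path = []
    for i, pts in enumerate(contours):
        z0 = pts[0][2]
        z1 = contours[i + 1][0][2] if i + 1 < len(contours) else z0
        n = len(pts)
        for k, (x, y, _) in enumerate(pts):
            path.append((x, y, z0 + (z1 - z0) * k / n))
    return path

# cone frustum: radius shrinks as depth increases, 0.5 mm step per contour
levels = [contour(30.0 - 2.0 * i, -0.5 * i) for i in range(10)]
helix = to_helical(levels)
# Z now decreases monotonically: no stepwise plunge between levels
dz = [helix[j + 1][2] - helix[j][2] for j in range(len(helix) - 1)]
```

The XY coordinates are untouched, so the formed part is unchanged except for the removal of the step marks, which matches the behavior the abstract describes.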
An online incremental orthogonal component analysis method for dimensionality reduction.
Zhu, Tao; Xu, Ye; Shen, Furao; Zhao, Jinxi
2017-01-01
In this paper, we introduce a fast linear dimensionality reduction method named incremental orthogonal component analysis (IOCA). IOCA is designed to automatically extract desired orthogonal components (OCs) in an online environment. The OCs and the low-dimensional representations of original data are obtained with only one pass through the entire dataset. Without solving a matrix eigenproblem or a matrix inversion problem, IOCA learns incrementally from a continuous data stream with low computational cost. By proposing an adaptive threshold policy, IOCA is able to automatically determine the dimension of the feature subspace. Meanwhile, the quality of the learned OCs is guaranteed. The analysis and experiments demonstrate that IOCA is simple, but efficient and effective. Copyright © 2016 Elsevier Ltd. All rights reserved.
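The one-pass, threshold-driven idea can be sketched as follows: code each incoming sample against the current orthogonal components, and if the residual is large, adopt its (automatically orthogonal) direction as a new component. This is an illustrative sketch with a fixed threshold standing in for the paper's adaptive policy; the function names and data are hypothetical:

```python
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return dot(a, a) ** 0.5

def learn_ocs(stream, threshold):
    """One-pass sketch: project each sample on the current orthogonal components
    (OCs); when the residual is large, its direction becomes a new unit-norm OC."""
    ocs, codes = [], []
    for x in stream:
        code = [dot(x, c) for c in ocs]
        resid = [xi - sum(cj * c[i] for cj, c in zip(code, ocs))
                 for i, xi in enumerate(x)]
        r = norm(resid)
        if r > threshold:              # a fixed threshold stands in for the
            ocs.append([ri / r for ri in resid])   # adaptive policy of the paper
            code.append(r)
        codes.append(code)
    return ocs, codes

random.seed(3)
u = [0.5, 0.5, 0.5, 0.5]               # data live in a 2-D subspace of R^4
w = [0.5, -0.5, 0.5, -0.5]
stream = []
for _ in range(200):
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    stream.append([a * ui + b * wi + random.gauss(0, 0.001)
                   for ui, wi in zip(u, w)])

ocs, codes = learn_ocs(stream, threshold=0.5)
# two components are found, and they are orthonormal by construction
```

Because each new component is a normalized projection residual, the component set stays exactly orthonormal without any eigendecomposition, which is the property the abstract emphasizes.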
The Analysis of Forming Forces in Single Point Incremental Forming
Directory of Open Access Journals (Sweden)
Koh Kyung Hee
2016-01-01
Full Text Available Incremental forming is a process to produce sheet metal parts quickly. Because there is no need for dedicated dies and molds, the process costs less and takes less time. The purpose of this study is to investigate the forming forces in single point incremental forming. Forming forces are measured while producing an aluminum cone frustum. A dynamometer is used to collect the forming forces and analyze them. These forces are compared with the cutting forces arising when machining the same geometrical shapes. The forming forces in the Z direction are 40 times larger than the machining forces. The spindle and axes of a forming machine should therefore be designed to withstand the forming forces.
Incremental Knowledge Base Construction Using DeepDive.
Shin, Jaeho; Wu, Sen; Wang, Feiran; De Sa, Christopher; Zhang, Ce; Ré, Christopher
2015-07-01
Populating a database with unstructured information is a long-standing problem in industry and research that encompasses problems of extraction, cleaning, and integration. Recent names used for this problem include dealing with dark data and knowledge base construction (KBC). In this work, we describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems, and we present techniques to make the KBC process more efficient. We observe that the KBC process is iterative, and we develop techniques to incrementally produce inference results for KBC systems. We propose two methods for incremental inference, based respectively on sampling and variational techniques. We also study the tradeoff space of these methods and develop a simple rule-based optimizer. DeepDive includes all of these contributions, and we evaluate DeepDive on five KBC systems, showing that it can speed up KBC inference tasks by up to two orders of magnitude with negligible impact on quality.
Final Safety Analysis Report (FSAR) for Building 332, Increment III
Energy Technology Data Exchange (ETDEWEB)
Odell, B. N.; Toy, Jr., A. J.
1977-08-31
This Final Safety Analysis Report (FSAR) supplements the Preliminary Safety Analysis Report (PSAR), dated January 18, 1974, for Building 332, Increment III of the Plutonium Materials Engineering Facility located at the Lawrence Livermore Laboratory (LLL). The FSAR, in conjunction with the PSAR, shows that the completed increment provides facilities for safely conducting the operations as described. These documents satisfy the requirements of ERDA Manual Appendix 6101, Annex C, dated April 8, 1971. The format and content of this FSAR complies with the basic requirements of the letter of request from ERDA San to LLL, dated March 10, 1972. Included as appendices in support of the FSAR are the Building 332 Operational Safety Procedure and the LLL Disaster Control Plan.
Table incremental slow injection CE-CT in lung cancer
International Nuclear Information System (INIS)
Yoshida, Shoji; Maeda, Tomoho; Morita, Masaru
1988-01-01
The purpose of this study is to evaluate tumor enhancement in lung cancer using a table incremental study with slow injection of contrast media. Early serial 8-slice images during the slow injection (1.5 ml/sec) of contrast media were obtained. Following the early images, delayed images of the same 8 slices were taken 2 minutes later. Characteristic enhancement patterns of the primary cancer and metastatic mediastinal lymph nodes were recognized in this study. Enhancement of the primary lesion was classified into 4 patterns: irregular geographic, heterogeneous, homogeneous, and rim-enhanced. In mediastinal metastatic lymphadenopathy, three enhancement patterns were obtained: heterogeneous, homogeneous, and ring-enhanced. Some characteristic enhancement patterns corresponding to the histopathological findings of the lung cancer were obtained. Using this incremental slow injection CE-CT, precise information about the relationship between the lung cancer and adjacent mediastinal structures, as well as clear staining patterns of the tumor and mediastinal lymph nodes, was obtained. (author)
Automobile sheet metal part production with incremental sheet forming
Directory of Open Access Journals (Sweden)
İsmail DURGUN
2016-02-01
Full Text Available Nowadays, the effect of global warming is increasing drastically, which leads to increased interest in energy efficiency and sustainable production methods. As a result of these adverse conditions, national and international project platforms, OEMs (Original Equipment Manufacturers) and SMEs (Small and Mid-size Manufacturers) perform many studies or improve existing methodologies within the scope of advanced manufacturing techniques. In this study, the advanced and sustainable production method "Incremental Sheet Metal Forming (ISF)" was used for the sheet metal forming process. A vehicle fender was manufactured with and without a die by using different toolpath strategies and die sets. At the end of the study, the results were investigated with respect to the method and parameters used. Keywords: Incremental sheet metal forming, Metal forming
Energy Technology Data Exchange (ETDEWEB)
Churcher, P.L. (Petroleum Recovery Inst., (Calgary, AB Canada)); Majid, A.H. (Inst. of Sedimentary and Petroleum Geology, Calgary, AB (Canada))
1989-06-01
Devonian reservoirs in the Tangent area of Alberta have been found to display many features similar to some Ordovician reservoirs in the Michigan Basin, including structural control on dolomitization, seal mechanism, source type, and diagenetic sequence. These two reservoirs are compared and contrasted to demonstrate that these plays may have had a similar origin. Both reservoirs are hosted by texturally and lithologically similar limestone, and both are overlain by silty, organic-rich shales that may be the source rocks for the hydrocarbons. In the reservoirs of both basins, structure plays a major role in localizing the development of the secondary porosity and permeability; productive dolomitized zones are restricted to an area within 400-500 m on either side of faults. Detailed structure contour data have revealed horst-graben systems in both areas. The most intriguing feature is the similarity between the type and sequence of diagenetic mineralization. The breccias and cavities in both reservoirs are lined or filled with dolomite crystals, which frequently have ferroan dolomite rims. This diagenetic event was followed by sulphide mineralization and the precipitation of late-stage anhydrite and calcite. Hydrocarbon migration likely preceded the precipitation of these minerals. The integration of information from both basins provides insight into the origin of this type of play and provides valuable clues necessary for exploration and exploitation, particularly in the Wabamun Formation of the Peace River Arch area in Alberta. 12 refs., 6 figs.
Qayyum, Sajid; Hayat, Tasawar; Shehzad, Sabir Ali; Alsaedi, Ahmed
2018-03-01
This article concentrates on the magnetohydrodynamic (MHD) stagnation point flow of a tangent hyperbolic nanofluid in the presence of buoyancy forces. The flow is caused by a stretching surface. Characteristics of heat transfer are examined under the influence of thermal radiation and heat generation/absorption. Newtonian conditions for heat and mass transfer are employed. The nanofluid model includes Brownian motion and thermophoresis. The governing nonlinear partial differential system of the problem is transformed into a system of nonlinear ordinary differential equations through appropriate variables. The impact of the embedded parameters on the velocity, temperature and nanoparticle concentration fields is presented graphically. Numerical computations are made to obtain the values of the skin friction coefficient and the local Nusselt and Sherwood numbers. It is concluded that the velocity field is enhanced by the mixed convection parameter, while the reverse is observed for the power-law index. The Brownian motion parameter has opposite effects on the temperature and the heat transfer rate. Moreover, the solutal conjugate parameter affects the concentration and the local Sherwood number in a similar way.
Directory of Open Access Journals (Sweden)
M. Amrani
2008-06-01
Full Text Available It is possible to design low-pass and band-pass filters by means of a combination of hyperbolic tangent functions in the frequency domain, using the scaling and shifting theorems of the Fourier transform. The corresponding time-domain filter functions can be derived analytically from the frequency-domain expressions. Smoothness parameters control the slopes in the cut-off regions and allow the construction of relatively short filters while reducing the oscillations of the filter response in the time domain. Different smoothness parameters can be chosen for the low and high cut-off frequencies when designing band-pass filters. Following the scheme proposed in this article, the other filter types can easily be derived.
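The abstract above describes frequency-domain filter responses built from combinations of hyperbolic tangents. A sketch of one common construction (the exact combination used in the article is not given in the abstract; function names and the parameterization are assumptions):

```python
import math

def tanh_lowpass(f, fc, s):
    """Smooth low-pass response: ~1 in the pass band |f| < fc, ~0 beyond.
    s controls the steepness of the roll-off around the cut-off frequency fc."""
    return 0.5 * (math.tanh((f + fc) / s) - math.tanh((f - fc) / s))

def tanh_bandpass(f, f_lo, f_hi, s_lo, s_hi):
    """Band-pass as a difference of two low-pass responses, with separate
    smoothness parameters for the low and high cut-offs."""
    return tanh_lowpass(f, f_hi, s_hi) - tanh_lowpass(f, f_lo, s_lo)
```

Larger `s` gives gentler slopes (shorter time-domain filters, per the abstract); choosing `s_lo != s_hi` realizes the different smoothness parameters mentioned for band-pass design.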
Predicting Robust Vocabulary Growth from Measures of Incremental Learning
Frishkoff, Gwen A.; Perfetti, Charles A.; Collins-Thompson, Kevyn
2011-01-01
We report a study of incremental learning of new word meanings over multiple episodes. A new method called MESA (Markov Estimation of Semantic Association) tracked this learning through the automated assessment of learner-generated definitions. The multiple word learning episodes varied in the strength of contextual constraint provided by sentences, in the consistency of this constraint, and in the spacing of sentences provided for each trained word. Effects of reading skill were also examined...
Nonparametric causal effects based on incremental propensity score interventions
Kennedy, Edward H.
2017-01-01
Most work in causal inference considers deterministic interventions that set each unit's treatment to some fixed value. However, under positivity violations these interventions can lead to non-identification, inefficiency, and effects with little practical relevance. Further, corresponding effects in longitudinal studies are highly sensitive to the curse of dimensionality, resulting in widespread use of unrealistic parametric models. We propose a novel solution to these problems: incremental ...
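Incremental interventions of the kind this abstract proposes replace a deterministic treatment assignment with a multiplicative shift of each unit's odds of treatment. Assuming the standard odds-multiplication form (the shift factor is written `delta` here), the shifted propensity score is:

```python
def incremental_propensity(pi, delta):
    """Shifted propensity score: multiply the odds of treatment by delta.
    pi is the observational propensity P(A=1 | X=x); delta > 0."""
    return delta * pi / (delta * pi + 1.0 - pi)
```

For `delta = 1` the score is unchanged; `delta > 1` nudges every unit toward treatment without ever forcing the propensity to 0 or 1, which is how such interventions sidestep positivity violations.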
Diagnosis of small hepatocellular carcinoma by incremental dynamic CT
International Nuclear Information System (INIS)
Uchida, Masafumi; Kumabe, Tsutomu; Edamitsu, Osamu
1993-01-01
Thirty cases of pathologically confirmed small hepatocellular carcinoma were examined by Incremental Dynamic CT (ICT). ICT scanned the whole liver with single-breath-hold technique; therefore, effective early contrast enhancement could be obtained for diagnosis. Among the 30 tumors, 26 were detected. The detection rate was 87%. A high detection rate was obtained in tumors more than 20 mm in diameter. Twenty-two of 26 tumors could be diagnosed correctly. ICT examination was useful for detection of small hepatocellular carcinoma. (author)
Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality
Acikmese, Ahmet Behcet; Corless, Martin
2004-01-01
We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error exponentially converges to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. The results are illustrated by application to a simple model of an underwater.
Retroactive Operations: On 'increments' in Mandarin Chinese conversations
Lim, Ni Eng
2014-01-01
Conversation Analysis (CA) has established repair (Schegloff, Jefferson & Sacks 1977; Schegloff 1979; Kitzinger 2013) as a conversational mechanism for managing contingencies of talk-in-interaction. In this dissertation, I look at a particular sort of 'repair' termed TCU-continuations (otherwise known as increments in other literature) in Mandarin Chinese (henceforth "Chinese"), broadly defined as speakers producing further talk after a possibly complete utterance, which is fashioned not as a...
Incremental validity of emotional intelligence ability in predicting academic achievement.
Lanciano, Tiziana; Curci, Antonietta
2014-01-01
We tested the incremental validity of an ability measure of emotional intelligence (EI) in predicting academic achievement in undergraduate students, controlling for cognitive abilities and personality traits. Academic achievement was conceptualized in terms of the number of exams, grade point average, and study time taken to prepare for each exam. Additionally, gender differences were taken into account in these relationships. Participants filled in the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), Raven's Advanced Progressive Matrices, the reduced version of the Eysenck Personality Questionnaire, and academic achievement measures. Results showed that EI abilities were positively related to academic achievement indices, such as the number of exams and grade point average; total EI ability and the Perceiving branch were negatively associated with the study time spent preparing for exams. Furthermore, EI ability adds a percentage of incremental variance with respect to cognitive ability and personality variables in explaining scholastic success. The magnitude of the associations between EI abilities and academic achievement measures was generally higher for men than for women. Jointly considered, the present findings support the incremental validity of the MSCEIT and provide positive indications of the importance of EI in students' academic development. The helpfulness of EI training in the context of academic institutions is discussed.
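Incremental validity is typically quantified as the gain in explained variance (delta R-squared) when the new predictor enters a hierarchical regression after the control variables. The study's actual analysis involved several controls, but for the two-predictor case the gain has a closed form in terms of pairwise correlations, sketched here:

```python
import math

def pearson_r(x, y):
    """Pearson correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def incremental_r2(y, baseline, new):
    """Delta R^2 from adding `new` to a model already containing `baseline`."""
    r_y1 = pearson_r(y, baseline)
    r_y2 = pearson_r(y, new)
    r_12 = pearson_r(baseline, new)
    # R^2 of the two-predictor model, standard closed form
    r2_full = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    return r2_full - r_y1**2
```

With uncorrelated predictors (`r_12 = 0`) the gain reduces to the squared correlation of the new predictor with the outcome.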
Health level seven interoperability strategy: big data, incrementally structured.
Dolin, R H; Rogers, B; Jaffe, C
2015-01-01
Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby data structured incrementally gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral work flows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data and ultimately a greater flow of structured data vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.
Information bottleneck based incremental fuzzy clustering for large biomedical data.
Liu, Yongli; Wan, Xing
2016-08-01
Incremental fuzzy clustering combines the advantages of fuzzy clustering and incremental clustering, and is therefore important for classifying large collections of biomedical literature. Conventional algorithms, suffering from data sparsity and high dimensionality, often fail to produce reasonable results and may even assign all the objects to a single cluster. In this paper, we propose two incremental algorithms based on the information bottleneck, Single-Pass fuzzy c-means (spFCM-IB) and Online fuzzy c-means (oFCM-IB). These two algorithms modify the conventional algorithms by considering different weights for each centroid and object and by scoring mutual information loss to measure the distance between centroids and objects. spFCM-IB and oFCM-IB are used to group a collection of biomedical text abstracts from the Medline database. Experimental results show that the clustering performance of our approaches is better than that of such prominent counterparts as spFCM, spHFCM, oFCM and oHFCM in terms of accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
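The single-pass scheme the spFCM family uses can be illustrated without the paper's information-bottleneck distance: each chunk is clustered together with the weighted centroids summarizing all earlier chunks. A minimal sketch on 1-D data with Euclidean distance (not the paper's mutual-information loss; two clusters, deterministic initialization, names illustrative):

```python
def weighted_fcm(points, weights, centroids, m=2.0, iters=30):
    """Weighted fuzzy c-means on 1-D points (m is the fuzzifier)."""
    for _ in range(iters):
        u = []  # membership of each point in each cluster
        for x in points:
            d = [abs(x - c) for c in centroids]
            if min(d) == 0.0:  # point coincides with a centroid
                u.append([1.0 if di == 0.0 else 0.0 for di in d])
            else:
                u.append([1.0 / sum((di / dk) ** (2.0 / (m - 1.0)) for dk in d)
                          for di in d])
        new_c = []
        for j in range(len(centroids)):
            num = sum(w * (uu[j] ** m) * x for x, w, uu in zip(points, weights, u))
            den = sum(w * (uu[j] ** m) for w, uu in zip(weights, u))
            new_c.append(num / den if den > 0 else centroids[j])
        centroids = new_c
    # membership mass per cluster, carried forward as the weight of its centroid
    mass = [sum(w * uu[j] for w, uu in zip(weights, u)) for j in range(len(centroids))]
    return centroids, mass

def single_pass_fcm(chunks):
    """Process data chunk by chunk, summarizing history as weighted centroids."""
    centroids, mass = None, None
    for chunk in chunks:
        if centroids is None:
            centroids = [min(chunk), max(chunk)]  # deterministic init, 2 clusters
            pts, wts = list(chunk), [1.0] * len(chunk)
        else:
            pts = list(chunk) + centroids
            wts = [1.0] * len(chunk) + mass
        centroids, mass = weighted_fcm(pts, wts, centroids)
    return sorted(centroids)
```

The carried-forward `mass` is what lets early data keep influencing later chunks without storing the data itself; spFCM-IB additionally reweights centroids and objects via information loss.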
Design and Performance Analysis of Incremental Networked Predictive Control Systems.
Pang, Zhong-Hua; Liu, Guo-Ping; Zhou, Donghua
2016-06-01
This paper is concerned with the design and performance analysis of networked control systems with network-induced delay, packet disorder, and packet dropout. Based on the incremental form of the plant input-output model and an incremental error feedback control strategy, an incremental networked predictive control (INPC) scheme is proposed to actively compensate for the round-trip time delay resulting from the above communication constraints. The output tracking performance and closed-loop stability of the resulting INPC system are considered for two cases: 1) plant-model match case and 2) plant-model mismatch case. For the former case, the INPC system can achieve the same output tracking performance and closed-loop stability as those of the corresponding local control system. For the latter case, a sufficient condition for the stability of the closed-loop INPC system is derived using the switched system theory. Furthermore, for both cases, the INPC system can achieve a zero steady-state output tracking error for step commands. Finally, both numerical simulations and practical experiments on an Internet-based servo motor system illustrate the effectiveness of the proposed method.
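The paper's INPC scheme, including its network-delay predictor, cannot be reconstructed from the abstract, but the underlying "incremental form" idea, computing control increments rather than absolute control values, can be sketched as a generic velocity-form PI loop (gains and the test plant below are illustrative, not from the paper):

```python
class IncrementalPI:
    """Velocity-form PI controller: each step outputs an increment du that is
    accumulated into the applied input u, so step commands are tracked with
    zero steady-state error without a separate integrator state."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.prev_error = 0.0
        self.u = 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        # increment: proportional action on the error change, integral on the error
        du = self.kp * (e - self.prev_error) + self.ki * self.dt * e
        self.prev_error = e
        self.u += du
        return self.u
```

In a networked setting the increment form is attractive because a lost or reordered packet perturbs only one increment, and the predictor can generate a sequence of future increments to bridge the round-trip delay.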
Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs
Jamour, Fuad Tarek
2017-10-17
Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCentral is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCentral scales to large graphs. We demonstrate with real datasets that the serial implementation of iCentral is up to 3.7 times faster than existing serial methods. Our parallel implementation, which scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.
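iCentral's decomposition and incremental update are beyond a snippet, but the non-incremental baseline it improves on is the classic Brandes algorithm, shown here for unweighted, undirected graphs; incremental methods like iCentral avoid re-running this from scratch after each edge update:

```python
from collections import deque

def brandes_betweenness(adj):
    """Exact betweenness centrality for an unweighted, undirected graph.
    adj: dict mapping node -> list of neighbours. O(V*E) time, linear space."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # back-propagate pair dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # each undirected pair was counted from both endpoints
    return {v: b / 2 for v, b in bc.items()}
```

On an edge update, iCentral recomputes such dependencies only for sources inside the affected biconnected component rather than for every node `s`.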
Context-dependent incremental timing cells in the primate hippocampus.
Sakon, John J; Naya, Yuji; Wirth, Sylvia; Suzuki, Wendy A
2014-12-23
We examined timing-related signals in primate hippocampal cells as animals performed an object-place (OP) associative learning task. We found hippocampal cells with firing rates that incrementally increased or decreased across the memory delay interval of the task, which we refer to as incremental timing cells (ITCs). Three distinct categories of ITCs were identified. Agnostic ITCs did not distinguish between different trial types. The remaining two categories of cells signaled time and trial context together: One category of cells tracked time depending on the behavioral action required for a correct response (i.e., early vs. late release), whereas the other category of cells tracked time only for those trials cued with a specific OP combination. The context-sensitive ITCs were observed more often during sessions where behavioral learning was observed and exhibited reduced incremental firing on incorrect trials. Thus, single primate hippocampal cells signal information about trial timing, which can be linked with trial type/context in a learning-dependent manner.
Accelerated Optimization in the PDE Framework: Formulations for the Active Contour Case
Yezzi, Anthony
2017-11-27
Following the seminal work of Nesterov, accelerated optimization methods have been used to powerfully boost the performance of first-order, gradient-based parameter estimation in scenarios where second-order optimization strategies are either inapplicable or impractical. Not only does accelerated gradient descent converge considerably faster than traditional gradient descent, but it also performs a more robust local search of the parameter space by initially overshooting and then oscillating back as it settles into a final configuration, thereby selecting only local minimizers with a basin of attraction large enough to contain the initial overshoot. This behavior has made accelerated and stochastic gradient search methods particularly popular within the machine learning community. In their recent PNAS 2016 paper, Wibisono, Wilson, and Jordan demonstrate how a broad class of accelerated schemes can be cast in a variational framework formulated around the Bregman divergence, leading to continuum-limit ODEs. We show how their formulation may be further extended to infinite-dimensional manifolds (starting here with the geometric space of curves and surfaces) by substituting the Bregman divergence with inner products on the tangent space and explicitly introducing a distributed mass model which evolves in conjunction with the object of interest during the optimization process. The co-evolving mass model, which is introduced purely for the sake of endowing the optimization with helpful dynamics, also links the resulting class of accelerated PDE based optimization schemes to fluid dynamical formulations of optimal mass transport.
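The infinite-dimensional PDE extension cannot be captured in a snippet, but the finite-dimensional accelerated scheme the abstract builds on is standard Nesterov acceleration, sketched here with the classic (k-1)/(k+2) momentum weighting (step size and iteration count are illustrative):

```python
def nesterov_minimize(grad, x0, lr=0.1, steps=500):
    """Nesterov accelerated gradient descent on a list-valued parameter.
    grad: function returning the gradient at a point; lr: step size."""
    x = list(x0)
    x_prev = list(x0)
    for k in range(1, steps + 1):
        mu = (k - 1) / (k + 2)          # classic momentum schedule
        # look-ahead (extrapolated) point: this is what distinguishes
        # Nesterov's method from plain heavy-ball momentum
        y = [xi + mu * (xi - pi) for xi, pi in zip(x, x_prev)]
        g = grad(y)
        x_prev, x = x, [yi - lr * gi for yi, gi in zip(y, g)]
    return x
```

The characteristic overshoot-and-oscillate behavior the abstract describes is visible if the iterates are logged: the trajectory passes the minimizer and rings back as the momentum term decays.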
Baseline LAW Glass Formulation Testing
Energy Technology Data Exchange (ETDEWEB)
Kruger, Albert A. [USDOE Office of River Protection, Richland, WA (United States); Mooers, Cavin [The Catholic University of America, Washington, DC (United States). Vitreous State Lab.; Bazemore, Gina [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Pegg, Ian L. [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Hight, Kenneth [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Lai, Shan Tao [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Buechele, Andrew [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Rielley, Elizabeth [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Gan, Hao [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Muller, Isabelle S. [The Catholic University of America, Washington, DC (United States). Vitreous State Lab; Cecil, Richard [The Catholic University of America, Washington, DC (United States). Vitreous State Lab
2013-06-13
The major objective of the baseline glass formulation work was to develop and select glass formulations that are compliant with contractual and processing requirements for each of the LAW waste streams. Other objectives of the work included preparation and characterization of glasses with respect to the properties of interest, optimization of sulfate loading in the glasses, evaluation of ability to achieve waste loading limits, testing to demonstrate compatibility of glass melts with melter materials of construction, development of glass formulations to support ILAW qualification activities, and identification of glass formulation issues with respect to contract specifications and processing requirements.
Product Quality Modelling Based on Incremental Support Vector Machine
International Nuclear Information System (INIS)
Wang, J; Zhang, W; Qin, B; Shi, W
2012-01-01
Incremental support vector machine (ISVM) is a new learning method developed in recent years on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant data; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from the margin vectors to the final decision hyperplane is calculated to evaluate the importance of the margin vectors, and margin vectors are removed when their distance exceeds the specified value; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but can also preserve the important samples. The MISVM has been tested on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve the prediction accuracy and the training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
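The pruning step described here, dropping margin vectors whose distance to the decision hyperplane exceeds a cutoff before the incremental update, can be sketched as follows (the hyperplane parameters `w`, `b` are assumed already trained; names are illustrative):

```python
import math

def hyperplane_distance(w, b, x):
    """Euclidean distance from point x to the hyperplane w.x + b = 0."""
    return abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / math.sqrt(sum(wi * wi for wi in w))

def prune_margin_vectors(w, b, margin_vectors, d_max):
    """Keep only margin vectors close enough to the decision boundary;
    distant ones are treated as noise or redundant and removed before
    the incremental SVM update."""
    return [x for x in margin_vectors if hyperplane_distance(w, b, x) <= d_max]
```

The retained vectors are then merged with the existing support vectors for retraining, which is the MISVM update step the abstract outlines.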
Incremental Closed-loop Identification of Linear Parameter Varying Systems
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, Klaus
2011-01-01
This paper deals with system identification for control of linear parameter varying systems. In practical applications, it is often important to be able to identify small plant changes in an incremental manner without shutting down the system and/or disconnecting the controller; unfortunately, closed-loop system identification is more difficult than open-loop identification. In this paper we prove that the so-called Hansen Scheme, a technique known from linear time-invariant systems theory for transforming closed-loop system identification problems into open-loop-like problems, can be extended to accommodate linear parameter varying systems as well.
Minimizing System Modification in an Incremental Design Approach
DEFF Research Database (Denmark)
Pop, Paul; Eles, Petru; Pop, Traian
2001-01-01
In this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing the system modification cost. We consider an incremental design process that starts from an already existing system running a set of applications. We are interested in implementing new functionality so that the already running applications are disturbed as little as possible and there is a good chance that, later, new functionality can easily be added to the resulting system. The mapping and scheduling problems are considered in the context of a realistic...
An Approach to Incremental Design of Distributed Embedded Systems
DEFF Research Database (Denmark)
Pop, Paul; Eles, Petru; Pop, Traian
2001-01-01
In this paper we present an approach to incremental design of distributed embedded systems for hard real-time applications. We start from an already existing system running a set of applications, and the design problem is to implement new functionality on this system. Thus, we propose mapping strategies for functionality so that the already running functionality is not disturbed and there is a good chance that, later, new functionality can easily be mapped onto the resulting system. The mapping and scheduling for hard real-time embedded systems are considered in the context of a realistic communication...
Spectral evolution with incremental nanocoating of long period fiber gratings
Del Villar, Ignacio; Corres, Jesus M.; Achaerandio, Miguel; Arregui, Francisco J.; Matias, Ignacio R.
2006-12-01
The incremental deposition of a thin overlay on the cladding of a long-period fiber grating (LPFG) induces important resonance wavelength shifts in the transmission spectrum. The phenomenon is proved theoretically with a vectorial method based on hybrid modes and coupled mode theory, and experimentally with an electrostatic self-assembly monolayer process. The phenomenon repeats periodically at specific overlay thickness values, with the particularity that the shape of the resonance wavelength shift depends on the thickness of the overlay. The main applications are the design of wide optical filters and multiparameter sensing devices.
Resolution enhancement of terahertz imaging by incremental Wiener filtering
Hong, Zhi; Xiao, Wenhua; Chen, Haibin
2009-07-01
A two dimensional raster scanning terahertz imaging system based on a continuous backward-wave oscillator (BWO) terahertz source is built up. To improve the spatial resolution of the system, a scanning step smaller than the focused spot size of the terahertz beam is used to get the transmission image. After pre-processing of the image with wavelet filtering, and using incremental Wiener filtering for image restoration, a terahertz image with resolution higher than the diffraction limit of the system can be obtained. Our imaging experiment shows that this method can both enhance spatial resolution and suppress imaging noise effectively.
Testing single point incremental forming molds for thermoforming operations
Afonso, Daniel; de Sousa, Ricardo Alves; Torcato, Ricardo
2016-10-01
Low pressure polymer processing processes such as thermoforming or rotational molding use much simpler molds than high pressure processes like injection molding. However, despite the low forces involved in the process, mold manufacturing for these operations is still a very material, energy and time consuming operation. The goal of this research is to develop and validate a method for manufacturing plastically formed sheet metal molds by a single point incremental forming (SPIF) operation for thermoforming. Stewart platform based SPIF machines allow the forming of thick metal sheets, granting the required structural stiffness for the mold surface while keeping the short manufacturing lead time and low thermal inertia.
Saltstone Clean Cap Formulation
Energy Technology Data Exchange (ETDEWEB)
Langton, C
2005-04-22
The current operation strategy for using Saltstone Vault 4 to receive 0.2 Ci/gallon salt solution waste involves pouring a clean grout layer over the radioactive grout prior to initiating pour into another cell. This will minimize the radiating surface area and reduce the dose rate at the vault and surrounding area. The Clean Cap will be used to shield about four feet of Saltstone poured into a Z-Area vault cell prior to moving to another cell. The minimum thickness of the Clean Cap layer will be determined by the cesium concentration and resulting dose levels and it is expected to be about one foot thick based on current calculations for 0.1 Ci Saltstone that is produced in the Saltstone process by stabilization of 0.2 Ci salt solution. This report documents experiments performed to identify a formulation for the Clean Cap. Thermal transient calculations, adiabatic temperature rise measurements, pour height, time between pour calculations and shielding calculations were beyond the scope and time limitations of this study. However, data required for shielding calculations (composition and specific gravity) are provided for shielding calculations. The approach used to design a Clean Cap formulation was to produce a slurry from the reference premix (10/45/45 weight percent cement/slag/fly ash) and domestic water that resembled as closely as possible the properties of the Saltstone slurry. In addition, options were investigated that may offer advantages such as less bleed water and less heat generation. The options with less bleed water required addition of dispersants. The options with lower heat contained more fly ash and less slag. A mix containing 10/45/45 weight percent cement/slag/fly ash with a water to premix ratio of 0.60 is recommended for the Clean Cap. Although this mix may generate more than 3 volume percent standing water (bleed water), it has rheological, mixing and flow properties that are similar to previously processed Saltstone. The recommended
Saltstone Clean Cap Formulation
International Nuclear Information System (INIS)
Langton, C
2005-01-01
The current operation strategy for using Saltstone Vault 4 to receive 0.2 Ci/gallon salt solution waste involves pouring a clean grout layer over the radioactive grout prior to initiating pour into another cell. This will minimize the radiating surface area and reduce the dose rate at the vault and surrounding area. The Clean Cap will be used to shield about four feet of Saltstone poured into a Z-Area vault cell prior to moving to another cell. The minimum thickness of the Clean Cap layer will be determined by the cesium concentration and resulting dose levels and it is expected to be about one foot thick based on current calculations for 0.1 Ci Saltstone that is produced in the Saltstone process by stabilization of 0.2 Ci salt solution. This report documents experiments performed to identify a formulation for the Clean Cap. Thermal transient calculations, adiabatic temperature rise measurements, pour height, time between pour calculations and shielding calculations were beyond the scope and time limitations of this study. However, data required for shielding calculations (composition and specific gravity) are provided for shielding calculations. The approach used to design a Clean Cap formulation was to produce a slurry from the reference premix (10/45/45 weight percent cement/slag/fly ash) and domestic water that resembled as closely as possible the properties of the Saltstone slurry. In addition, options were investigated that may offer advantages such as less bleed water and less heat generation. The options with less bleed water required addition of dispersants. The options with lower heat contained more fly ash and less slag. A mix containing 10/45/45 weight percent cement/slag/fly ash with a water to premix ratio of 0.60 is recommended for the Clean Cap. Although this mix may generate more than 3 volume percent standing water (bleed water), it has rheological, mixing and flow properties that are similar to previously processed Saltstone. The recommended
Incremental Beliefs About Ability Ameliorate Self-Doubt Effects
Directory of Open Access Journals (Sweden)
Qin Zhao
2015-12-01
Full Text Available Past research has typically shown negative effects of self-doubt on performance and psychological well-being. We suggest that these self-doubt effects may largely be due to an underlying assumption that ability is innate and fixed. The present research investigated the main hypothesis that incremental beliefs about ability might ameliorate negative effects of self-doubt. We examined our hypotheses using two lab tasks: verbal reasoning and anagram tasks. Participants' self-doubt was measured, and beliefs about ability were measured after participants read articles advocating either incremental or entity theories of ability. American College Testing (ACT) scores were obtained to index actual ability level. Consistent with our hypothesis, for participants who believed ability was relatively fixed, higher self-doubt was associated with increased negative affect and lower task performance and engagement. In contrast, for participants who believed that ability was malleable, negative self-doubt effects were ameliorated; self-doubt was even associated with better task performance. These effects were further moderated by participants' academic ability. These findings suggest that mind-sets about ability moderate self-doubt effects. Self-doubt may have negative effects only when it is interpreted as signaling that ability is immutably low.
Incremental Scheduling Engines for Human Exploration of the Cosmos
Jaap, John; Phillips, Shaun
2005-01-01
As humankind embarks on longer space missions farther from home, the requirements and environments for scheduling the activities performed on these missions are changing. As we begin to prepare for these missions it is appropriate to evaluate the merits and applicability of the different types of scheduling engines. Scheduling engines temporally arrange tasks onto a timeline so that all constraints and objectives are met and resources are not overbooked. Scheduling engines used to schedule space missions fall into three general categories: batch, mixed-initiative, and incremental. This paper presents an assessment of the engine types, a discussion of the impact of human exploration of the moon and Mars on planning and scheduling, and the applicability of the different types of scheduling engines. This paper will pursue the hypothesis that incremental scheduling engines may have a place in the new environment; they have the potential to reduce cost, to improve the satisfaction of those who execute or benefit from a particular timeline (the customers), and to allow astronauts to plan their own tasks and those of their companion robots.
Communication: Phase incremented echo train acquisition in NMR spectroscopy.
Baltisberger, Jay H; Walder, Brennan J; Keeler, Eric G; Kaseman, Derrick C; Sanders, Kevin J; Grandinetti, Philip J
2012-06-07
We present an improved and general approach for implementing echo train acquisition (ETA) in magnetic resonance spectroscopy, particularly where the conventional approach of Carr-Purcell-Meiboom-Gill (CPMG) acquisition would produce numerous artifacts. Generally, adding ETA to any N-dimensional experiment creates an N + 1 dimensional experiment, with an additional dimension associated with the echo count, n, or an evolution time that is an integer multiple of the spacing between echo maxima. Here we present a modified approach, called phase incremented echo train acquisition (PIETA), where the phase of the mixing pulse and every other refocusing pulse, φ(P), is incremented as a single variable, creating an additional phase dimension in what becomes an N + 2 dimensional experiment. A Fourier transform with respect to the PIETA phase, φ(P), converts the φ(P) dimension into a Δp dimension where desired signals can be easily separated from undesired coherence transfer pathway signals, thereby avoiding cumbersome or intractable phase cycling schemes where the receiver phase must follow a master equation. This simple modification eliminates numerous artifacts present in NMR experiments employing CPMG acquisition and allows "single-scan" measurements of transverse relaxation and J-couplings. Additionally, unlike CPMG, we show how PIETA can be appended to experiments with phase modulated signals after the mixing pulse.
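The pathway-separation step lends itself to a short numerical sketch: a pathway whose coherence order changes by Δp acquires phase exp(i Δp φP) as the PIETA phase is incremented, and an FFT over the phase dimension sorts pathways into bins indexed by Δp. The pathway amplitudes below are invented for illustration.

```python
import numpy as np

# Each coherence-transfer pathway with order change dp picks up phase
# exp(1j * dp * phi_P) when the mixing-pulse phase phi_P is incremented.
n_phases = 8
phi_P = 2 * np.pi * np.arange(n_phases) / n_phases
pathways = {-1: 1.0, 2: 0.5}          # dp -> amplitude (hypothetical)
signal = sum(a * np.exp(1j * dp * phi_P) for dp, a in pathways.items())

# Fourier transform with respect to the PIETA phase: bin k collects the
# pathways with dp congruent to k modulo n_phases.
spectrum = np.fft.fft(signal) / n_phases
```

With the FFT sign convention used by numpy, the dp = -1 pathway lands in bin n_phases - 1 and the dp = +2 pathway in bin 2, so desired and undesired pathways separate without receiver-phase cycling.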
Incremental concept learning with few training examples and hierarchical classification
Bouma, Henri; Eendebak, Pieter T.; Schutte, Klamer; Azzopardi, George; Burghouts, Gertjan J.
2015-10-01
Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible with only a few training samples. Secondly, we show that novel objects can be added incrementally without retraining existing objects, which is important for fast interaction. Thirdly, we show that an unbalanced number of positive training samples leads to biased classifier scores that can be corrected by modifying weights. Fourthly, we show that the detector performance can deteriorate due to hard-negative mining for similar or closely related classes (e.g., for Barbie and dress, because the doll is wearing a dress). This can be solved by our hierarchical classification. We introduce a new dataset, which we call TOSO, and use it to demonstrate the effectiveness of the proposed method for the localization and recognition of multiple objects in images.
Efficient incremental relaying for packet transmission over fading channels
Fareed, Muhammad Mehboob
2014-07-01
In this paper, we propose a novel relaying scheme for packet transmission over fading channels, which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying (EIR) scheme with both amplify-and-forward and decode-and-forward relaying. We compare the performance of the EIR scheme with the threshold-based incremental relaying (TIR) scheme. It is shown that the efficiency of the TIR scheme is better for lower values of the threshold; for higher values of the threshold, however, the TIR scheme is outperformed by the EIR. In addition, three new threshold-based adaptive EIR schemes are devised to further improve the efficiency of the EIR scheme. We calculate the packet error rate and the efficiency of these new schemes to provide analytical insight. © 2014 IEEE.
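The core trade-off of incremental relaying can be illustrated with a tiny Monte Carlo sketch, assuming ideal one-bit feedback and fixed, invented per-hop loss probabilities (the paper derives these analytically for fading channels):

```python
import random

def simulate_eir(n_packets=20000, p_direct=0.1, p_relay=0.05, seed=1):
    """Monte Carlo sketch of incremental relaying: the relay only
    retransmits when the destination signals that the direct link
    failed, so the average channel uses per packet stay close to 1."""
    rng = random.Random(seed)
    errors, channel_uses = 0, 0
    for _ in range(n_packets):
        channel_uses += 1                  # direct transmission
        if rng.random() < p_direct:        # direct link failed
            channel_uses += 1              # relay retransmits
            if rng.random() < p_relay:     # relayed copy also lost
                errors += 1
    return errors / n_packets, channel_uses / n_packets

per, uses = simulate_eir()
```

With these numbers the packet error rate drops from 10% to roughly 0.5% while spending only about 1.1 channel uses per packet, which is the spectral-efficiency gain over always-relay cooperation.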
Incremental parameter estimation of kinetic metabolic network models
Directory of Open Access Journals (Sweden)
Jia Gengjie
2012-11-01
Full Text Available Abstract Background An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equations (ODE). Most of the existing estimation methods involve finding the global minimum of data fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e. not all parameters can be uniquely identified). Results In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. Particularly, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates exceeds that of metabolites (chemical species. Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is also presented. Conclusions The proposed incremental estimation method is able to tackle the issue of the lack of complete parameter identifiability and to significantly reduce the computational effort in estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.
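The two-stage idea, estimate time-slopes first and then fit the kinetic parameters to the slopes instead of repeatedly integrating the ODE, can be shown on a deliberately tiny one-parameter toy model (not one of the paper's GMA systems):

```python
import numpy as np

# Toy model dC/dt = -k * C. Incremental estimation:
#   step 1: estimate the time-slopes dC/dt from the concentration profile;
#   step 2: fit k by linear least squares against those slopes, avoiding
#           ODE integration inside the optimizer.
t = np.linspace(0.0, 2.0, 21)
k_true = 1.3
C = np.exp(-k_true * t)                        # noise-free synthetic data

slopes = np.gradient(C, t)                     # step 1: slope estimates
k_hat = -np.sum(slopes * C) / np.sum(C * C)    # step 2: least squares
```

The decoupling is what reduces the computational cost: each subproblem is a small (here closed-form) regression rather than a global search over the full parameter space.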
Incremental learning of skill collections based on intrinsic motivation
Metzen, Jan H.; Kirchner, Frank
2013-01-01
Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on him and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period. PMID:23898265
Incremental Learning of Skill Collections based on Intrinsic Motivation
Directory of Open Access Journals (Sweden)
Jan Hendrik Metzen
2013-07-01
Full Text Available Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on it and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period.
Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking
Zhang, Chuxu
2017-08-02
Predicting the fast-rising young researchers (the Academic Rising Stars) in the future provides useful guidance to the research community, e.g., offering competitive candidates to university for young faculty hiring as they are expected to have success academic careers. In this work, given a set of young researchers who have published the first first-author paper recently, we solve the problem of how to effectively predict the top k% researchers who achieve the highest citation increment in Δt years. We explore a series of factors that can drive an author to be fast-rising and design a novel pairwise citation increment ranking (PCIR) method that leverages those factors to predict the academic rising stars. Experimental results on the large ArnetMiner dataset with over 1.7 million authors demonstrate the effectiveness of PCIR. Specifically, it outperforms all given benchmark methods, with over 8% average improvement. Further analysis demonstrates that temporal features are the best indicators for rising stars prediction, while venue features are less relevant.
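The pairwise-ranking idea behind PCIR can be sketched generically: learn weights w so that sign(w · (x_i - x_j)) predicts which of two authors gains more citations, then rank authors by w · x. The three features and the linear "citation gain" below are synthetic stand-ins, not the paper's author, venue, or temporal features.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                  # hypothetical author features
gain = X @ np.array([2.0, -1.0, 0.5])         # toy citation increments

# Build all ordered author pairs with difference vectors and labels.
pairs = [(i, j) for i in range(50) for j in range(50) if i != j]
D = np.array([X[i] - X[j] for i, j in pairs])
y = np.array([1.0 if gain[i] > gain[j] else -1.0 for i, j in pairs])

# Gradient descent on the pairwise logistic loss.
w = np.zeros(3)
for _ in range(200):
    margins = np.clip(y * (D @ w), -30.0, 30.0)
    grad = -(D.T @ (y / (1.0 + np.exp(margins)))) / len(y)
    w -= 0.5 * grad

ranking = np.argsort(-(X @ w))                # predicted rising-star order
```

Ranking by pairwise comparisons rather than regressing the raw increment makes the model insensitive to the skewed scale of citation counts, which is the usual motivation for pairwise formulations.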
Failure mechanisms in single-point incremental forming of metals
DEFF Research Database (Denmark)
Silva, Maria B.; Nielsen, Peter Søe; Bay, Niels
2011-01-01
The last years saw the development of two different views on how failure develops in single-point incremental forming (SPIF). Today, researchers are split between those claiming that fracture is always preceded by necking and those considering that fracture occurs with suppression of necking. Each view is assessed through independent determination of formability limits by necking and fracture, using tensile and hydraulic bulge tests in conjunction with SPIF of benchmark shapes under laboratory conditions. The resulting unified view conciliates the aforementioned explanations of the role of necking in fracture and is consistent with the experimental observations that have been reported in the past years. The work is performed on aluminium AA1050-H111 sheets and addresses formability limits and the development of fracture.
Incremental implicit learning of bundles of statistical patterns.
Qian, Ting; Jaeger, T Florian; Aslin, Richard N
2016-12-01
Forming an accurate representation of a task environment often takes place incrementally as the information relevant to learning the representation only unfolds over time. This incremental nature of learning poses an important problem: it is usually unclear whether a sequence of stimuli consists of only a single pattern, or multiple patterns that are spliced together. In the former case, the learner can directly use each observed stimulus to continuously revise its representation of the task environment. In the latter case, however, the learner must first parse the sequence of stimuli into different bundles, so as to not conflate the multiple patterns. We created a video-game statistical learning paradigm and investigated (1) whether learners without prior knowledge of the existence of multiple "stimulus bundles" - subsequences of stimuli that define locally coherent statistical patterns - could detect their presence in the input and (2) whether learners are capable of constructing a rich representation that encodes the various statistical patterns associated with bundles. By comparing human learning behavior to the predictions of three computational models, we find evidence that learners can handle both tasks successfully. In addition, we discuss the underlying reasons for why the learning of stimulus bundles occurs even when such behavior may seem irrational. Copyright © 2016 Elsevier B.V. All rights reserved.
Incremental support vector machines for fast reliable image recognition
Energy Technology Data Exchange (ETDEWEB)
Makili, L., E-mail: makili_le@yahoo.com [Instituto Superior Politécnico da Universidade Katyavala Bwila, Benguela (Angola); Vega, J. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Dormido-Canto, S. [Dpto. Informática y Automática – UNED, Madrid (Spain)
2013-10-15
Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ-II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system, based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm, has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ-II fusion device. Using such a conformal-predictor-based classifier is a computationally intensive task, since it implies training several SVM models to classify a single example, and performing this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to that obtained with standard SVM as the underlying algorithm, and there is a significant improvement in time efficiency.
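The conformal-prediction wrapper itself is compact. The sketch below uses a nearest-neighbour nonconformity score as a stand-in for the paper's SVM-based score (the wrapper is the same either way): an example is "strange" for a tentative label when it lies far from that class and close to another.

```python
import numpy as np

def conformal_pvalue(train_x, train_y, x_new, label):
    """Transductive conformal p-value. Nonconformity score: distance to
    the nearest same-class point divided by distance to the nearest
    other-class point (1-NN stand-in for an SVM-based score)."""
    xs = np.vstack([train_x, x_new])
    ys = np.append(train_y, label)
    scores = []
    for i in range(len(xs)):
        d = np.linalg.norm(xs - xs[i], axis=1)
        d[i] = np.inf                      # exclude self-distance
        same = d[ys == ys[i]].min()
        diff = d[ys != ys[i]].min()
        scores.append(same / diff)
    scores = np.asarray(scores)
    # p-value: fraction of examples at least as strange as the new one
    return (scores >= scores[-1]).mean()

train_x = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.2, 5.1]])
train_y = np.array([0, 0, 1, 1])
p_class0 = conformal_pvalue(train_x, train_y, np.array([0.1, 0.0]), 0)
```

The computational burden the abstract mentions is visible here: every candidate label requires re-scoring the whole augmented set, which is why incremental (rather than from-scratch) retraining of the underlying model pays off.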
Incremental Aerodynamic Coefficient Database for the USA2
Richardson, Annie Catherine
2016-01-01
In March through May of 2016, a wind tunnel test was conducted by the Aerosciences Branch (EV33) to visually study the unsteady aerodynamic behavior over multiple transition geometries for the Universal Stage Adapter 2 (USA2) in the MSFC Aerodynamic Research Facility's Trisonic Wind Tunnel (TWT). The purpose of the test was to make a qualitative comparison of the transonic flow field in order to provide a recommended minimum transition radius for manufacturing. Additionally, 6 Degree of Freedom force and moment data for each configuration tested was acquired in order to determine the geometric effects on the longitudinal aerodynamic coefficients (Normal Force, Axial Force, and Pitching Moment). In order to make a quantitative comparison of the aerodynamic effects of the USA2 transition geometry, the aerodynamic coefficient data collected during the test was parsed and incorporated into a database for each USA2 configuration tested. An incremental aerodynamic coefficient database was then developed using the generated databases for each USA2 geometry as a function of Mach number and angle of attack. The final USA2 coefficient increments will be applied to the aerodynamic coefficients of the baseline geometry to adjust the Space Launch System (SLS) integrated launch vehicle force and moment database based on the transition geometry of the USA2.
Incremental learning of concept drift in nonstationary environments.
Elwell, Ryan; Polikar, Robi
2011-10-01
We introduce an ensemble-of-classifiers approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from environments that experience constant or variable rates of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, as do other members of the Learn++ family of algorithms, that is, without requiring access to previously seen data. Learn++.NSE trains one new classifier for each batch of data it receives, and combines these classifiers using a dynamically weighted majority voting. The novelty of the approach is in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and react accordingly to, changes in the underlying data distributions, as well as to a possible reoccurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn++.NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release our data used in this paper. © 2011 IEEE.
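The dynamically weighted vote can be sketched in a few lines. This is a simplification: a plain exponential decay stands in for the paper's sigmoid time-adjustment, and the error histories are invented.

```python
import numpy as np

def vote_weights(error_history, decay=0.5):
    """One vote weight per classifier: average its per-batch errors with
    exponentially decaying weights (recent batches count more), then set
    the weight to log(1/beta) so low recent error means a strong vote."""
    weights = []
    for errors in error_history:                 # one error list per classifier
        t = np.arange(len(errors))
        omega = decay ** (len(errors) - 1 - t)   # decaying toward the past
        omega /= omega.sum()
        beta = np.clip((omega * np.asarray(errors)).sum(), 1e-3, 0.999)
        weights.append(np.log(1.0 / beta))
    return np.asarray(weights)

def ensemble_predict(votes, weights):
    """votes: (n_classifiers,) array of class labels for one sample."""
    classes = np.unique(votes)
    totals = [weights[votes == c].sum() for c in classes]
    return classes[int(np.argmax(totals))]
```

Because weights depend on recent accuracy rather than age, a classifier trained on an old distribution regains influence if that distribution reoccurs, which is how cyclical drift is handled.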
Incremental first pass technique to measure left ventricular ejection fraction
International Nuclear Information System (INIS)
Kocak, R.; Gulliford, P.; Hoggard, C.; Critchley, M.
1980-01-01
An incremental first pass technique was devised to assess the acute effects of any drug on left ventricular ejection fraction (LVEF), with or without a physiological stress. In particular, the effects of the vasodilator isosorbide dinitrate on LVEF before and after exercise were studied in 11 patients who had suffered cardiac failure. This was achieved by recording the passage of 99mTc pertechnetate through the heart at each stage of the study using a gamma camera computer system. Consistent values for four consecutive first pass measurements without exercise or drug in normal subjects illustrated the reproducibility of the technique. There was no significant difference between LVEF values obtained at rest and exercise before or after oral isosorbide dinitrate, with the exception of one patient with gross mitral regurgitation. The advantages of the incremental first pass technique are that the patient need not be in sinus rhythm, the effects of physiological intervention may be studied, and tests may be repeated at various intervals during long-term follow-up of patients. A disadvantage of the method is the limitation on the number of sequential measurements which can be carried out, due to the amount of radioactivity injected. (U.K.)
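The ejection-fraction computation underlying a counts-based first pass measurement is a one-liner once background is subtracted; the count values below are illustrative, not patient data from the study.

```python
# LVEF from background-corrected end-diastolic (ED) and end-systolic
# (ES) counts in the left-ventricular region of interest.
def lvef(ed_counts, es_counts, background):
    ed = ed_counts - background
    es = es_counts - background
    return (ed - es) / ed

# Illustrative numbers: 12000 ED counts, 6000 ES counts, 2000 background.
fraction = lvef(12000, 6000, 2000)
```

Because the measure is built from counts rather than geometric assumptions, it does not require the patient to be in sinus rhythm, consistent with the advantage noted above.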
Optimal Output of Distributed Generation Based On Complex Power Increment
Wu, D.; Bao, H.
2017-12-01
In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy sources, represented by wind and photovoltaic power generation, have been widely adopted. This new generation capacity connects to the distribution network in the form of distributed generation and is consumed by local load. However, as the scale of distributed generation connected to the network increases, the optimization of its power output becomes more and more prominent and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment, whose essence is the analysis of the power grid under steady-state power flow. After analyzing the results we can obtain the complex scaling function equation between the power supplies. The coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of the variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.
Incremental Optimization of Hub and Spoke Network for the Spokes’ Numbers and Flow
Directory of Open Access Journals (Sweden)
Yanfeng Wang
2015-01-01
Full Text Available The hub and spoke network problem is solved as part of a strategic decision-making process which may have a profound effect on the future of enterprises. Given an existing network structure, the number of spokes and the flow change over time because of different sources of uncertainty. Hence, the incremental optimization of the hub and spoke network problem is considered in this paper: policy makers should adopt a series of strategies to cope with the change, such as setting up new hubs, adjusting the capacity level of original hubs, or closing some original hubs. The objective is to minimize the total cost, which includes the setup costs for the new hubs, the closure and adjustment costs for the original hubs, and the flow routing costs. Two mixed-integer linear programming formulations are proposed and analyzed for this problem. A computational analysis is presented using China Deppon Logistics as an example, and we analyze the changes in the solutions driven by the number of spokes and the flow. The tests also allow an analysis of the effect of parameter variation on the network.
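The cost structure of the incremental redesign can be illustrated on a toy instance small enough to enumerate, rather than the paper's MILP formulations: new hubs incur setup costs, dropped original hubs incur closure costs, and each spoke is routed to its nearest open hub. All coordinates and cost figures below are invented.

```python
from itertools import combinations

spokes = [(0, 0), (1, 5), (6, 1), (7, 6), (3, 3)]
candidates = [(1, 1), (6, 5), (3, 4)]     # possible hub locations
original = {(1, 1)}                       # hub set before the change
setup, closure = 4.0, 1.5                 # fixed costs (illustrative)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def total_cost(hubs):
    # setup for newly opened hubs + closure for dropped original hubs
    fixed = (setup * len(set(hubs) - original)
             + closure * len(original - set(hubs)))
    # stand-in routing cost: each spoke served by its nearest open hub
    routing = sum(min(dist(s, h) for h in hubs) for s in spokes)
    return fixed + routing

best = min((hubs for r in range(1, len(candidates) + 1)
            for hubs in combinations(candidates, r)), key=total_cost)
```

Enumeration is exponential in the number of candidate hubs, which is exactly why the paper resorts to mixed-integer linear programming for realistically sized networks.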
New and incremental FDA black box warnings from 2008 to 2015.
Solotke, Michael T; Dhruva, Sanket S; Downing, Nicholas S; Shah, Nilay D; Ross, Joseph S
2018-02-01
The boxed warning (also known as 'black box warning [BBW]') is one of the strongest drug safety actions that the U.S. Food & Drug Administration (FDA) can implement, and often warns of serious risks. The objective of this study was to comprehensively characterize BBWs issued for drugs after FDA approval. We identified all post-marketing BBWs from January 2008 through June 2015 listed on FDA's MedWatch and Drug Safety Communications websites. We used each drug's prescribing information to classify its BBW as new, major update to a preexisting BBW, or minor update. We then characterized these BBWs with respect to pre-specified BBW-specific and drug-specific features. There were 111 BBWs issued to drugs on the US market, of which 29% (n = 32) were new BBWs, 32% (n = 35) were major updates, and 40% (n = 44) were minor updates. New BBWs and major updates were most commonly issued for death (51%) and cardiovascular risk (27%). The new BBWs and major updates impacted 200 drug formulations over the study period, of which 64% were expected to be used chronically and 58% had available alternatives without a BBW. New BBWs and incremental updates to existing BBWs are frequently added to drug labels after regulatory approval.
Incremental Innovation and Competitive Pressure in the Presence of Discrete Innovation
DEFF Research Database (Denmark)
Ghosh, Arghya; Kato, Takao; Morita, Hodaka
2017-01-01
Technical progress consists of improvements made upon the existing technology (incremental innovation) and innovative activities aiming at entirely new technology (discrete innovation). Incremental innovation is often of limited relevance to the new technology invented by successful discrete innovation. Previous theoretical studies have indicated that higher competitive pressure, measured by product substitutability, increases incremental innovation. In contrast, we find that intensified competition can decrease incremental innovation. A firm's market share upon its failure in discrete innovation decreases as competition intensifies. This effect decreases firms' incentives for incremental innovation because the innovation outcome can be applied to a smaller number of units.
Incremental Volumetric Remapping Method: Analysis and Error Evaluation
International Nuclear Information System (INIS)
Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.
2007-01-01
In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes typically used in sheet metal forming simulation, is evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping method (IVR), which was implemented in the in-house code DD3TRIM. The IVR method is founded on the premise that the state variables at all points associated with the Gauss volume of a given element are equal to the state variable quantities at the corresponding Gauss point. Hence, in a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of each donor Gauss volume that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is performed incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables using the finite element shape functions or moving least squares interpolants. The performance of the three remapping strategies is addressed with two tests. The first remapping test was taken from the literature; it consists of successively remapping a rotating symmetrical mesh, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular-element-shape target mesh from a given regular-element-shape donor mesh and then performing the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low and with a stable evolution along the number of remapping procedures when compared with the
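The weighted-average transfer described in the abstract can be sketched as follows. This is a minimal illustration of the averaging step only; the incremental computation of the intersecting volumes is the method's harder part and is not reproduced, and all names are hypothetical:

```python
def remap_to_gauss_volume(donor_overlaps):
    """Transfer a state variable to one target Gauss volume as a weighted
    average of donor values; the weight of each donor is the share of the
    target Gauss volume it intersects.

    donor_overlaps: list of (intersect_volume, donor_state_value) pairs.
    """
    total = sum(vol for vol, _ in donor_overlaps)
    if total <= 0.0:
        raise ValueError("target Gauss volume has no donor overlap")
    return sum(vol * val for vol, val in donor_overlaps) / total

# toy example: three donor Gauss volumes intersect the target volume
state = remap_to_gauss_volume([(0.5, 10.0), (0.3, 20.0), (0.2, 40.0)])  # -> 19.0
```

The weighting is conservative in the sense that a donor contributes in proportion to how much of the target volume it actually covers.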
Hamainza, Busiku; Sikaala, Chadwick H; Moonga, Hawela B; Chanda, Javan; Chinula, Dingani; Mwenda, Mulenga; Kamuliwo, Mulakwa; Bennett, Adam; Seyoum, Aklilu; Killeen, Gerry F
2016-02-18
Long-lasting, insecticidal nets (LLINs) and indoor residual spraying (IRS) are the most widely accepted and applied malaria vector control methods. However, evidence that incremental impact is achieved when they are combined remains limited and inconsistent. Fourteen population clusters of approximately 1000 residents each in Zambia's Luangwa and Nyimba districts, which had high pre-existing usage rates (81.7 %) of pyrethroid-impregnated LLINs, were quasi-randomly assigned to receive IRS with either of two pyrethroids, namely deltamethrin [wettable granules (WG)] and lambdacyhalothrin [capsule suspension (CS)], with an emulsifiable concentrate (EC) or CS formulation of the organophosphate pirimiphos methyl (PM), or with no supplementary vector control measure. Diagnostic positivity of patients tested for malaria by community health workers in these clusters was surveyed longitudinally over pre- and post-treatment periods spanning 29 months, over which the treatments were allocated and re-allocated in advance of three sequential rainy seasons. Supplementation of LLINs with PM CS offered the greatest initial level of protection against malaria in the first 3 months of application (incremental protective efficacy (IPE) [95 % confidence interval (CI)] = 0.63 [CI 0.57, 0.69], P pyrethroid formulation provided protection beyond 3 months after spraying, but the protection provided by both PM formulations persisted undiminished for longer periods: 6 months for CS and 12 months for EC. The CS formulation of PM provided greater protection than the combined pyrethroid IRS formulations throughout its effective life (IPE [95 % CI] = 0.79 [0.75, 0.83] over 6 months). The EC formulation of PM provided incremental protection for the first 3 months (IPE [95 % CI] = 0.23 [0.15, 0.31]) that was approximately equivalent to the two pyrethroid formulations (lambdacyhalothrin, IPE [95 % CI] = 0.31 [0.10, 0.47] and deltamethrin, IPE [95 % CI] = 0.19 [-0.01, 0.35]) but the additional
Novel Formulations for Antimicrobial Peptides
Directory of Open Access Journals (Sweden)
Ana Maria Carmona-Ribeiro
2014-10-01
Full Text Available Peptides in general hold much promise as a major ingredient in novel supramolecular assemblies. They may become essential in vaccine design, antimicrobial chemotherapy, cancer immunotherapy, food preservation, organ transplants, design of novel materials for dentistry, formulations against diabetes, and other important strategic applications. This review discusses how novel formulations may improve the therapeutic index of antimicrobial peptides by protecting their activity and improving their bioavailability. The diversity of novel formulations using lipids, liposomes, nanoparticles, polymers, micelles, etc., within the limits of nanotechnology may also provide novel applications going beyond antimicrobial chemotherapy.
Improving process performance in Incremental Sheet Forming (ISF)
International Nuclear Information System (INIS)
Ambrogio, G.; Filice, L.; Manco, G. L.
2011-01-01
Incremental Sheet Forming (ISF) is a relatively new process in which a sheet clamped along its borders is progressively deformed by a hemispherical tool. The tool motion is CNC controlled and its path is designed using a CAD-CAM approach, with the aim of reproducing the final shape contour as in surface milling. The absence of a dedicated setup and the related high flexibility are the main strengths of the process and the reason why several researchers have focused their attention on ISF. On the other hand, the slowness of the process is its most relevant drawback, limiting wider industrial application. In this paper, a first attempt to overcome this limitation is presented, considering a significant speed increase with respect to the values currently used.
Scalable Prediction of Energy Consumption using Incremental Time Series Clustering
Energy Technology Data Exchange (ETDEWEB)
Simmhan, Yogesh; Noor, Muhammad Usman
2013-10-09
Time series datasets are a canonical form of high velocity Big Data, and often generated by pervasive sensors, such as found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
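The incremental clustering step described in the abstract can be sketched generically as follows. This is not the authors' algorithm: the `affinity` function here is a simple inverse-distance proxy standing in for the paper's novel affinity score, and the threshold is an illustrative placeholder.

```python
import math

def affinity(a, b):
    # similarity in (0, 1]; a stand-in for the paper's affinity score
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + d)

def incremental_cluster(series_stream, threshold=0.5):
    """Assign each incoming time series to the most similar cluster,
    or open a new cluster when no centroid is similar enough."""
    centroids, counts, labels = [], [], []
    for s in series_stream:
        if centroids:
            best = max(range(len(centroids)),
                       key=lambda i: affinity(s, centroids[i]))
            if affinity(s, centroids[best]) >= threshold:
                n = counts[best]
                # incremental running-mean update of the centroid
                centroids[best] = [(c * n + x) / (n + 1)
                                   for c, x in zip(centroids[best], s)]
                counts[best] = n + 1
                labels.append(best)
                continue
        centroids.append(list(s))
        counts.append(1)
        labels.append(len(centroids) - 1)
    return labels, centroids

labels, cents = incremental_cluster([[1, 1], [1.1, 0.9], [10, 10]], threshold=0.3)
```

Because each series is folded into a running centroid, the pass over the stream is single-shot and the per-item cost stays polynomial in the number of clusters.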
ARIES: Acquisition of Requirements and Incremental Evolution of Specifications
Roberts, Nancy A.
1993-01-01
This paper describes a requirements/specification environment specifically designed for large-scale software systems. This environment is called ARIES (Acquisition of Requirements and Incremental Evolution of Specifications). ARIES provides assistance to requirements analysts for developing operational specifications of systems. This development begins with the acquisition of informal system requirements. The requirements are then formalized and gradually elaborated (transformed) into formal and complete specifications. ARIES provides guidance to the user in validating formal requirements by translating them into natural language representations and graphical diagrams. ARIES also provides ways of analyzing the specification to ensure that it is correct, e.g., testing the specification against a running simulation of the system to be built. Another important ARIES feature, especially when developing large systems, is the sharing and reuse of requirements knowledge. This leads to much less duplication of effort. ARIES combines all of its features in a single environment that makes the process of capturing a formal specification quicker and easier.
Incremental and developmental perspectives for general-purpose learning systems
Directory of Open Access Journals (Sweden)
Fernando Martínez-Plumed
2017-02-01
Full Text Available The stupefying success of Artificial Intelligence (AI) for specific problems, from recommender systems to self-driving cars, has not yet been matched by similar progress in general AI systems coping with a variety of different problems. This dissertation deals with the long-standing problem of creating more general AI systems, through the analysis of their development and the evaluation of their cognitive abilities. It presents a declarative general-purpose learning system and a developmental and lifelong approach for knowledge acquisition, consolidation and forgetting. It also analyses the use of more ability-oriented evaluation techniques for AI evaluation and provides further insight into the concepts of development and incremental learning in AI systems.
Transferring the Incremental Capacity Analysis to Lithium-Sulfur Batteries
DEFF Research Database (Denmark)
Knap, Vaclav; Kalogiannis, Theodoros; Purkayastha, Rajlakshmi
2017-01-01
In order to investigate battery degradation and to estimate battery state of health, various techniques can be applied. One of them, widely used for Lithium-ion batteries, is incremental capacity analysis (ICA). In this work, we apply the ICA to Lithium-Sulfur batteries, which differ in many...... aspects from Lithium-ion batteries and possess unique behavior. One of the challenges of applying the ICA to Lithium-Sulfur batteries is the representation of the IC curves, as their voltage profiles are often non-monotonic, resulting in more complex IC curves. The ICA is at first applied to charge...... and discharge processes at various temperature levels and afterward the technique is applied to a cell undergoing cycling degradation. It is shown that the ageing processes are trackable from the IC curves and it opens a possibility for their utilization for state-of-health estimation....
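An IC curve is the differential capacity dQ/dV plotted against voltage. A minimal sketch of the computation on a synthetic, monotonic charge curve follows; real Li-S voltage profiles are non-monotonic, which is exactly the representation difficulty the abstract notes, and the numbers are illustrative:

```python
import numpy as np

def incremental_capacity(voltage, capacity):
    """IC curve dQ/dV from a charge curve (assumes monotonic voltage;
    non-monotonic Li-S profiles need segmenting before this step)."""
    return np.gradient(capacity, voltage)

# synthetic charge curve: capacity grows linearly with voltage
v = np.linspace(3.0, 4.2, 13)      # V
q = 2.5 * (v - 3.0)                # Ah
dqdv = incremental_capacity(v, q)  # constant 2.5 Ah/V for this toy curve
```

Peaks in a real dQ/dV curve correspond to plateaus in the voltage profile, and tracking how those peaks shrink or shift over cycling is what makes ICA usable for state-of-health estimation.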
Single Point Incremental Forming using a Dummy Sheet
DEFF Research Database (Denmark)
Skjødt, Martin; Silva, Beatriz; Bay, Niels
2007-01-01
A new version of single point incremental forming (SPIF) is presented. This version includes a dummy sheet on top of the work piece, thus forming two sheets instead of one. The dummy sheet, which is in contact with the rotating tool pin, is discarded after forming. The new set-up influences...... the process and furthermore offers a number of new possibilities for solving some of the problems appearing in SPIF. The influence of the dummy sheet on formability, wear, surface quality and bulging of the planar sides is investigated by forming two test shapes: a hyperboloid and a truncated pyramid....... The possible influence of friction between the two sheets is furthermore investigated. The results show that the use of a dummy sheet reduces wear of the work piece to almost zero, but also causes a decrease in formability. Bulging of the planar sides of the pyramid is reduced and surface roughness...
Compiler-Enhanced Incremental Checkpointing for OpenMP Applications
Energy Technology Data Exchange (ETDEWEB)
Bronevetsky, G; Marques, D; Pingali, K; Rugina, R; McKee, S A
2008-01-21
As modern supercomputing systems reach the peta-flop performance range, they grow in both size and complexity. This makes them increasingly vulnerable to failures from a variety of causes. Checkpointing is a popular technique for tolerating such failures, enabling applications to periodically save their state and restart computation after a failure. Although a variety of automated system-level checkpointing solutions are currently available to HPC users, manual application-level checkpointing remains more popular due to its superior performance. This paper improves performance of automated checkpointing via a compiler analysis for incremental checkpointing. This analysis, which works with both sequential and OpenMP applications, reduces checkpoint sizes by as much as 80% and enables asynchronous checkpointing.
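As a rough illustration of the incremental idea (saving only state that has changed since the last checkpoint), here is a runtime sketch using content hashes. The paper instead uses a static compiler analysis to prove which memory regions are unmodified between checkpoints, so this is an analogy, and all names are hypothetical:

```python
import hashlib

class IncrementalCheckpointer:
    """Save only the memory blocks that changed since the last checkpoint.
    A compiler analysis (as in the paper) decides statically which regions
    can change; this runtime sketch detects changes via content hashes."""

    def __init__(self):
        self._hashes = {}

    def checkpoint(self, blocks):
        """blocks: dict of block-id -> bytes. Returns the delta to persist."""
        delta = {}
        for bid, data in blocks.items():
            h = hashlib.sha256(data).hexdigest()
            if self._hashes.get(bid) != h:
                delta[bid] = data          # new or modified: include it
                self._hashes[bid] = h
        return delta

cp = IncrementalCheckpointer()
first = cp.checkpoint({0: b"aaaa", 1: b"bbbb"})   # full checkpoint
second = cp.checkpoint({0: b"aaaa", 1: b"bbcc"})  # only block 1 changed
```

Restoring then means replaying the last full checkpoint plus subsequent deltas; the size reduction comes entirely from how much of the state is actually touched between checkpoints.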
Automating the Incremental Evolution of Controllers for Physical Robots
DEFF Research Database (Denmark)
Faina, Andres; Jacobsen, Lars Toft; Risi, Sebastian
2017-01-01
Evolutionary robotics is challenged with some key problems that must be solved, or at least mitigated extensively, before it can fulfill some of its promises to deliver highly autonomous and adaptive robots. The reality gap and the ability to transfer phenotypes from simulation to reality...... the evolution of digital objects.…” The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration...... of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range...
Incremental peritoneal dialysis: a 10 year single-centre experience.
Sandrini, Massimo; Vizzardi, Valerio; Valerio, Francesca; Ravera, Sara; Manili, Luigi; Zubani, Roberto; Lucca, Bernardo J A; Cancarini, Giovanni
2016-12-01
Incremental dialysis consists of prescribing a dialysis dose aimed at maintaining total solute clearance (renal + dialysis) near the targets set by guidelines. Incremental peritoneal dialysis (incrPD) is defined as one or two dwell-times per day on CAPD, whereas standard peritoneal dialysis (stPD) consists of three or four dwell-times per day. Single-centre cohort study. Enrollment period: January 2002-December 2007; end of follow-up (FU): December 2012. Incident patients with FU ≥6 months, initial residual renal function (RRF) 3-10 ml/min/1.73 sqm BSA, and a renal indication for PD were included. Median incrPD duration was 17 months (I-III Q: 10; 30). There were no statistically significant differences between 29 patients on incrPD and 76 on stPD regarding: clinical, demographic and anthropometric characteristics at the beginning of treatment, adequacy indices, peritonitis-free survival (peritonitis incidence: 1/135 months-patients in incrPD vs. 1/52 months-patients in stPD) and patient survival. During the first 6 months, RRF remained stable in incrPD (6.20 ± 2.02 vs. 6.08 ± 1.47 ml/min/1.73 sqm BSA; p = 0.792) whereas it decreased in stPD (4.48 ± 2.12 vs. 5.61 ± 1.49; p peritonitis incidence and slower reduction of renal function.
Directory of Open Access Journals (Sweden)
T. Sghaier
2013-12-01
Full Text Available Aim of study: The aim of the work was to develop an individual tree diameter-increment model for Thuya (Tetraclinis articulata) in Tunisia. Area of study: The natural Tetraclinis articulata stands at Jbel Lattrech in north-eastern Tunisia. Material and methods: Data came from 200 trees located in 50 sample plots. The diameter at age t and the diameter increment for the last five years, obtained from cores taken at breast height, were measured for each tree. Four difference equations derived from the base functions of Richards, Lundqvist, Hossfeld IV and Weibull were tested using the age-independent formulations of the growth functions. Both numerical and graphical analyses were used to evaluate the performance of the candidate models. Main results: Based on the analysis, the age-independent difference equation derived from the Richards base function was selected. Two of the three parameters (growth rate and shape parameter) of the retained model were related to site quality, represented by a Growth Index, stand density and the basal area in larger trees divided by the diameter of the subject tree expressing inter-tree competition. Research highlights: The proposed model can be useful for predicting the diameter growth of Tetraclinis articulata in Tunisia when age is not available or for trees growing in uneven-aged stands. Keywords: Age-independent growth model; difference equations; Tetraclinis articulata; Tunisia.
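The age-independent difference form of a Richards function can be sketched as follows. Starting from D(t) = A·(1 − e^(−kt))^m, the unknown age is eliminated by solving for e^(−kt) from the current diameter, so a future diameter is predicted from the current one alone. The parameter values below are illustrative placeholders, not the fitted Thuya values:

```python
import math

def richards_increment(d1, dt, A=60.0, k=0.04, m=2.0):
    """Age-independent difference form of the Richards function
    D(t) = A * (1 - exp(-k*t))**m: predicts the diameter dt years
    ahead from the current diameter d1, without knowing tree age.
    A, k, m are illustrative, not fitted to Tetraclinis data."""
    base = 1.0 - (d1 / A) ** (1.0 / m)     # equals exp(-k * t1) at the unknown age t1
    return A * (1.0 - base * math.exp(-k * dt)) ** m

d_now = 20.0                                # cm, age unknown
d_in_5 = richards_increment(d_now, 5.0)     # projected diameter after 5 years
```

This is exactly why such forms suit uneven-aged stands: the prediction needs only the measured diameter and the projection interval.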
Tactile friction of topical formulations.
Skedung, L; Buraczewska-Norin, I; Dawood, N; Rutland, M W; Ringstad, L
2016-02-01
The tactile perception is essential for all types of topical formulations (cosmetic, pharmaceutical, medical device), and the possibility to predict the sensorial response by instrumental methods instead of sensory testing would save time and cost in early-stage product development. Here, we report on an instrumental evaluation method using tactile friction measurements to estimate perceptual attributes of topical formulations. Friction was measured between an index finger and an artificial skin substrate after application of formulations, using a force sensor. Both model formulations of liquid crystalline phase structures with significantly different tactile properties, and commercial pharmaceutical moisturizing creams that are more tactile-similar, were investigated. Friction coefficients were calculated as the ratio of the friction force to the applied load. The structures of the model formulations and the phase transitions resulting from water evaporation were identified using optical microscopy. The friction device could distinguish the friction coefficients of the phase structures, as well as of the commercial creams, after spreading and absorption into the substrate. In addition, phase transitions resulting in alterations in the feel of the formulations could be detected. A correlation was established between skin hydration and friction coefficient, where hydrated skin gave rise to higher friction. A link between skin smoothening and finger friction was also established for the commercial moisturizing creams, although further investigations are needed to analyse this, and correlations with other sensorial attributes, in more detail. The present investigation shows that tactile friction measurements have potential as an alternative or complement in the evaluation of the perception of topical formulations. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Global Combat Support System - Army Increment 2 (GCSS-A Inc 2)
2016-03-01
2016 Major Automated Information System Annual Report: Global Combat Support System - Army Increment 2 (GCSS-A Inc 2). DoD Component: Army. ...conducted on GCSS-Army Increment 2 by OSD Cost Assessment and Program Evaluation in advance of MS B. Certification of Business Case Alignment...
Abuse-deterrent Opioid Formulations.
Litman, Ronald S; Pagán, Olivia H; Cicero, Theodore J
2017-12-18
Abuse-deterrent opioid formulations have been suggested as one way to decrease the abuse, addiction, and overdose of orally prescribed opioids. Ten oral opioid formulations have received abuse-deterrent labeling by the U.S. Food and Drug Administration (FDA). Their properties consist of physical and/or chemical means by which the pills resist manipulation and create a barrier to unintended administration, such as chewing, nasal snorting, smoking, and intravenous injection. In this review, we describe the mechanisms of abuse-deterrent technology, the types of premarketing studies required for FDA approval, the pharmacology of the currently approved abuse-deterrent opioid formulations, and the evidence for and against their influence on opioid abuse. We conclude that there is currently insufficient evidence to indicate that the availability of abuse-deterrent opioid formulations has altered the trajectory of opioid overdose and addiction; however, postmarketing studies are in their infancy, and novel deterrent formulations are continually being developed and submitted for marketing approval.
The relations of growth and increment in thrace oak forests
Directory of Open Access Journals (Sweden)
Gafura Aylak Özdemir
2016-01-01
Full Text Available In this study, increment and growth relationships of the oak forests of Thrace at different ages, densities and site indexes were examined. For this purpose, a double-entry tree volume table, a site quality table and a density-dependent yield table were created with the help of data obtained from 101 sample plots. The density-dependent yield table was programmed using the VBA macro feature of MS Excel 2010. Thus, the yield table can be generated in a computer environment according to the desired age, density and site index. Trends of stand volume and volume elements from the density-dependent yield table for the oak forests of Thrace according to age under different site conditions and densities are presented comparatively. Values obtained from the density-dependent yield table were compared with the values of the yield tables generated by Eraslan (1954) and Eraslan-Evcimen (1967) for oak forests. The same comparison was also made with the values of the yield table generated by Carus (1998) for beech forests, a broad-leaved species.
Incremental Sampling Methodology: Applications for Background Screening Assessments.
Pooler, Penelope S; Goodrum, Philip E; Crumbling, Deana; Stuchal, Leah D; Roberts, Stephen M
2018-01-01
This article presents the findings from a numerical simulation study that was conducted to evaluate the performance of alternative statistical analysis methods for background screening assessments when data sets are generated with incremental sampling methods (ISMs). A wide range of background and site conditions are represented in order to test different ISM sampling designs. Both hypothesis tests and upper tolerance limit (UTL) screening methods were implemented following U.S. Environmental Protection Agency (USEPA) guidance for specifying error rates. The simulations show that hypothesis testing using two-sample t-tests can meet standard performance criteria under a wide range of conditions, even with relatively small sample sizes. Key factors that affect the performance include unequal population variances and small absolute differences in population means. UTL methods are generally not recommended due to conceptual limitations in the technique when applied to ISM data sets from single decision units and due to insufficient power given standard statistical sample sizes from ISM. © 2017 Society for Risk Analysis.
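A minimal sketch of the two-sample t screen evaluated in the study, with pooled variance and illustrative data (the study's simulations additionally fix error rates and sample sizes per USEPA guidance, which is not reproduced here):

```python
from statistics import mean, variance

def two_sample_t(site, background):
    """Pooled-variance two-sample t statistic for comparing site ISM
    results against background ISM results (illustrative only)."""
    n1, n2 = len(site), len(background)
    # pooled sample variance across the two groups
    sp2 = ((n1 - 1) * variance(site) + (n2 - 1) * variance(background)) / (n1 + n2 - 2)
    return (mean(site) - mean(background)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

# hypothetical ISM replicate concentrations (mg/kg)
background = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3]
clean_site = [1.1, 0.9, 1.2, 1.0, 1.1, 0.8]   # comparable to background
hot_site   = [2.9, 3.1, 3.4, 2.8, 3.0, 3.2]   # clearly elevated

t_clean = two_sample_t(clean_site, background)
t_hot = two_sample_t(hot_site, background)
```

The statistic is compared against a Student-t critical value at the chosen error rate; the simulation finding is that even small numbers of ISM replicates can give this test acceptable power when group variances are similar.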
Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation
Roberts, Seán G.
2018-01-01
This paper discusses the maximum robustness approach for studying cases of adaptation in language. We live in an age where we have more data on more languages than ever before, and more data to link it with from other domains. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. However, there is not much discussion of the overall approach to research in this area. There are outstanding questions about how to formalize theories, what the criteria are for directing research and how to integrate results from different methods into a clear assessment of a hypothesis. This paper addresses some of those issues by suggesting an approach which is causal, incremental and robust. It illustrates the approach with reference to a recent claim that dry environments select against the use of precise contrasts in pitch. Study 1 replicates a previous analysis of the link between humidity and lexical tone with an alternative dataset and finds that it is not robust. Study 2 performs an analysis with a continuous measure of tone and finds no significant correlation. Study 3 addresses a more recent analysis of the link between humidity and vowel use and finds that it is robust, though the effect size is small and the robustness of the measurement of vowel use is low. Methodological robustness of the general theory is addressed by suggesting additional approaches including iterated learning, a historical case study, corpus studies, and studying individual speech. PMID:29515487
A novel instrument for generating angular increments of 1 nanoradian
Alcock, Simon G.; Bugnar, Alex; Nistea, Ioana; Sawhney, Kawal; Scott, Stewart; Hillman, Michael; Grindrod, Jamie; Johnson, Iain
2015-12-01
Accurate generation of small angles is of vital importance for calibrating angle-based metrology instruments used in a broad spectrum of industries including mechatronics, nano-positioning, and optic fabrication. We present a novel, piezo-driven, flexure device capable of reliably generating micro- and nanoradian angles. Unlike many such instruments, Diamond Light Source's nano-angle generator (Diamond-NANGO) does not rely on two separate actuators or rotation stages to provide coarse and fine motion. Instead, a single Physik Instrumente NEXLINE "PiezoWalk" actuator provides millimetres of travel with nanometre resolution. A cartwheel flexure efficiently converts displacement from the linear actuator into rotary motion with minimal parasitic errors. Rotation of the flexure is directly measured via a Magnescale "Laserscale" angle encoder. Closed-loop operation of the PiezoWalk actuator, using high-speed feedback from the angle encoder, ensures that the Diamond-NANGO's output drifts by only ~0.3 nrad rms over ~30 min. We show that the Diamond-NANGO can reliably move with unprecedented 1 nrad (~57 ndeg) angular increments over a range of >7000 μrad. An autocollimator, interferometer, and capacitive displacement sensor are used to independently confirm the Diamond-NANGO's performance by simultaneously measuring the rotation of a reflective cube.
Incremental cost of PACS in a medical intensive care unit
Langlotz, Curtis P.; Cleff, Bridget; Even-Shoshan, Orit; Bozzo, Mary T.; Redfern, Regina O.; Brikman, Inna; Seshadri, Sridhar B.; Horii, Steven C.; Kundel, Harold L.
1995-05-01
Our purpose is to determine the incremental costs (or savings) due to the introduction of picture archiving and communication systems (PACS) and computed radiology (CR) in a medical intensive care unit (MICU). Our economic analysis consists of three measurement methods. The first method is an assessment of the direct costs to the radiology department, implemented in a spreadsheet model. The second method consists of a series of brief observational studies to measure potential changes in personnel costs that might not be reflected in administrative claims. The third method (results not reported here) is a multivariate modeling technique which estimates the independent effect of PACS/CR on the cost of care (estimated from administrative claims data), while controlling for clinical case-mix variables. Our direct cost model shows no cost savings to the radiology department after the introduction of PACS in the medical intensive care unit. Savings in film supplies and film library personnel are offset by increases in capital equipment costs and PACS operation personnel. The results of observational studies to date demonstrate significant savings in clinician film-search time, but no significant change in technologist time or lost films. Our model suggests that direct radiology costs will increase after the limited introduction of PACS/CR in the MICU. Our observational studies show a small but significant effect on clinician film search time by the introduction of PACS/CR in the MICU, but no significant effect on other variables. The projected costs of a hospital-wide PACS are currently under study.
Numerical Simulation of Incremental Sheet Forming by Simplified Approach
Delamézière, A.; Yu, Y.; Robert, C.; Ayed, L. Ben; Nouari, M.; Batoz, J. L.
2011-01-01
The Incremental Sheet Forming (ISF) process can transform a flat metal sheet into a complex 3D part using a hemispherical tool. The final geometry of the product is obtained by the relative movement between this tool and the blank. The main advantage of the process is that the cost of the tool is very low compared to deep drawing with rigid tools. The main disadvantage is the very low velocity of the tool and thus the large amount of time needed to form the part. Classical contact algorithms give good agreement with experimental results, but are time consuming. A Simplified Approach to the contact management between the tool and the blank in ISF is presented here. The general principle of this approach is to impose the displacement of the nodes in contact with the tool at a given position. On a benchmark part, the CPU time of the present Simplified Approach is significantly reduced compared with a classical simulation performed with Abaqus implicit.
Business Collaboration in Food Networks: Incremental Solution Development
Directory of Open Access Journals (Sweden)
Harald Sundmaeker
2014-10-01
Full Text Available The paper presents an approach for incremental solution development based on the usage of the currently developed Internet-based FIspace business collaboration platform. The key element is the clear segmentation of infrastructures that are either internal or external to the collaborating business entity in the food network. On the one hand, the approach enables differentiation between specific centralised as well as decentralised ways of data storage and hosting of IT-based functionalities, and facilitates the selection of specific data-exchange protocols and data models. On the other hand, the supported solution design and subsequent development focus on reusable "software Apps" that can be used on their own and incorporate a clear added value for the business actors. It is outlined how to push the development and introduction of Apps that do not require basic changes to the existing infrastructure. The paper presents an example based on the development of a set of Apps for the exchange of product quality related information in food networks, specifically addressing fresh fruits and vegetables. It combines workflow support for data exchange from farm to retail with quality feedback information to facilitate business process improvement. Finally, the latest status of the FIspace platform development is outlined, including key features and potential ways for real users and software developers to use the FIspace platform, which is initiated by science and industry.
Distribution of incremental static stress caused by earthquakes
Directory of Open Access Journals (Sweden)
Y. Y. Kagan
1994-01-01
Full Text Available Theoretical calculations, simulations and measurements of rotation of earthquake focal mechanisms suggest that the stress in earthquake focal zones follows the Cauchy distribution, which is one of the stable probability distributions (with the value of the exponent α equal to 1). We review the properties of the stable distributions and show that the Cauchy distribution is expected to approximate the stress caused by earthquakes occurring over geologically long intervals of a fault zone development. However, the stress caused by recent earthquakes recorded in instrumental catalogues should follow symmetric stable distributions with the value of α significantly less than one. This is explained by a fractal distribution of earthquake hypocentres: the dimension of a hypocentre set, δ, is close to zero for short-term earthquake catalogues and asymptotically approaches 2¼ for long time intervals. We use the Harvard catalogue of seismic moment tensor solutions to investigate the distribution of incremental static stress caused by earthquakes. The stress measured in the focal zone of each event is approximated by stable distributions. In agreement with theoretical considerations, the exponent value of the distribution approaches zero as the time span of an earthquake catalogue (ΔT) decreases. For large stress values α increases. We surmise that this is caused by the increase of δ for small inter-earthquake distances due to location errors.
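The defining α = 1 stability property, that an average of Cauchy variables is again Cauchy with the same scale (so averaging does not concentrate the distribution, unlike the Gaussian case), can be illustrated numerically. This is a generic sketch of the distributional property, not of the seismic analysis itself:

```python
import math
import random

def cauchy_sample(rng, scale=1.0):
    # inverse-CDF sampling: scale * tan(pi*(U - 1/2)) is standard Cauchy
    return scale * math.tan(math.pi * (rng.random() - 0.5))

def iqr(xs):
    """Interquartile range; 2.0 for a standard Cauchy distribution."""
    s = sorted(xs)
    return s[3 * len(s) // 4] - s[len(s) // 4]

rng = random.Random(0)
singles = [cauchy_sample(rng) for _ in range(2000)]
# stability at alpha = 1: the mean of 100 Cauchy draws is Cauchy with
# the SAME scale, so its spread matches that of a single draw
means = [sum(cauchy_sample(rng) for _ in range(100)) / 100
         for _ in range(2000)]
```

Both `singles` and `means` have sample IQRs near 2, whereas for Gaussian data the IQR of 100-sample means would shrink by a factor of 10; this heavy-tailed, non-averaging behaviour is what makes stable distributions the natural model for superposed stress increments.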
Automated Dimension Determination for NMF-based Incremental Collaborative Filtering
Directory of Open Access Journals (Sweden)
Xiwei Wang
2015-12-01
Full Text Available The nonnegative matrix factorization (NMF) based collaborative filtering techniques have achieved great success in product recommendations. It is well known that in NMF, the dimensions of the factor matrices have to be determined in advance. Moreover, data is growing fast; thus in some cases, the dimensions need to be changed to reduce the approximation error. The recommender systems should be capable of updating new data in a timely manner without sacrificing the prediction accuracy. In this paper, we propose an NMF-based data update approach with automated dimension determination for collaborative filtering purposes. The approach can determine the dimensions of the factor matrices and update them automatically. It exploits the nearest-neighborhood-based clustering algorithm to cluster users and items according to their auxiliary information, and uses the clusters as the constraints in NMF. The dimensions of the factor matrices are associated with the cluster quantities. When new data becomes available, the incremental clustering algorithm determines whether to increase the number of clusters or merge the existing clusters. Experiments on three different datasets (MovieLens, Sushi, and LibimSeTi) were conducted to examine the proposed approach. The results show that our approach can update the data quickly and provide encouraging prediction accuracy.
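The factorization at the heart of any NMF-based recommender can be sketched with the classic Lee-Seung multiplicative updates; the paper's cluster-constrained, automatically-dimensioned variant is not reproduced here, and the rating matrix below is invented:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Factor V ~ W @ H with nonnegative W (m x k) and H (k x n) using
    Lee-Seung multiplicative updates for the Frobenius-norm objective."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)  # update item factors
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)  # update user factors
    return W, H

# Tiny user-item rating matrix; in the paper's setting the inner
# dimension k would instead be tied to the number of user/item clusters.
V = np.array([[5., 4., 0., 1.],
              [4., 5., 1., 0.],
              [0., 1., 5., 4.],
              [1., 0., 4., 5.]])
W, H = nmf(V, k=2)
print(round(float(np.linalg.norm(V - W @ H)), 2))  # small residual
```

Choosing k (here hard-coded to 2) is exactly the dimension-determination problem the paper automates via clustering.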
Incremental Dynamic Analysis of Koyna Dam under Repeated Ground Motions
Zainab Nik Azizan, Nik; Majid, Taksiah A.; Nazri, Fadzli Mohamed; Maity, Damodar; Abdullah, Junaidah
2018-03-01
This paper presents an incremental dynamic analysis (IDA) of a concrete gravity dam under single and repeated earthquake loadings to identify the limit state of the dam. Seven ground motions with horizontal and vertical components, based on real repeated earthquakes worldwide, were considered as seismic input in the nonlinear dynamic analysis. All ground motions were converted to response spectra and scaled to the developed elastic response spectrum so that the ground motion characteristics match the soil type; the scaling depends on the fundamental period, T1, of the dam. The Koyna dam was selected as a case study, assuming no sliding and a rigid foundation. IDA curves for the Koyna dam were developed for single and repeated ground motions and the performance level of the dam identified. The IDA curves for repeated ground motions are stiffer than those for single ground motions. The ultimate-state displacement of 45.59 mm for a single event decreased to 39.33 mm under repeated events, a reduction of about 14%. This shows that the performance level of the dam under seismic loading depends on the ground motion pattern.
Incremental Frequent Subgraph Mining on Large Evolving Graphs
Abdelhamid, Ehab
2017-08-22
Frequent subgraph mining is a core graph operation used in many domains, such as graph data management, knowledge exploration, bioinformatics and security. Most existing techniques target static graphs. However, modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach to the continuous frequent subgraph mining problem on a single large evolving graph. We adapt the notion of “fringe” to the graph context, that is, the set of subgraphs on the border between frequent and infrequent subgraphs. IncGM+ maintains fringe subgraphs and exploits them to prune the search space. To boost efficiency, we propose an index structure that maintains selected embeddings with minimal memory overhead. These embeddings are utilized to avoid redundant, expensive subgraph isomorphism operations. Moreover, the proposed system supports batch updates. Using large real-world graphs, we experimentally verify that IncGM+ outperforms existing methods by up to three orders of magnitude, scales to much larger graphs and consumes less memory.
Validation of daily increments periodicity in otoliths of spotted gar
Snow, Richard A.; Long, James M.; Frenette, Bryan D.
2017-01-01
Accurate age and growth information is essential for successful management of fish populations and for understanding early life history. We validated daily increment deposition, including the timing of first ring formation, for spotted gar (Lepisosteus oculatus) through 127 days post hatch. Fry were produced from hatchery-spawned specimens, and up to 10 individuals per week were sacrificed and their otoliths (sagitta, lapillus, and asteriscus) removed for daily age estimation. Daily age estimates for all three otolith pairs were significantly related to known age. The strongest relationships existed for measurements from the sagitta (r2 = 0.98) and the lapillus (r2 = 0.99), with the asteriscus (r2 = 0.95) the lowest. All age prediction models resulted in a slope near unity, indicating that ring deposition occurred approximately daily. Initiation of ring formation varied among otolith types, with deposition beginning at 3, 7, and 9 days for the sagitta, lapillus, and asteriscus, respectively. Results of this study suggest that otoliths are useful for estimating the daily age of spotted gar juveniles; these data may be used to back-calculate hatch dates, estimate early growth rates, and correlate with environmental factors that influence spawning in wild populations. This early life history information will be valuable for better understanding the ecology of this species.
An incremental anomaly detection model for virtual machines
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, Cloud platforms with large-scale virtual machines are prone to performance anomalies due to their high dynamics and resource-sharing characteristics, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic nature of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245
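A weighted Euclidean distance inside a SOM's best-matching-unit search can be sketched as follows; the abstract does not specify how IISOM derives its weights, so the map size, feature weights and sample below are assumptions for illustration:

```python
import numpy as np

def weighted_bmu(som, x, w):
    """Best-matching unit under a weighted Euclidean distance:
    features with a larger weight w[j] dominate the match."""
    d = np.sqrt(((som - x) ** 2 * w).sum(axis=-1))
    return np.unravel_index(np.argmin(d), d.shape)

rng = np.random.default_rng(1)
som = rng.random((10, 10, 4))        # 10x10 map of 4-feature neurons
x = np.array([0.9, 0.1, 0.5, 0.5])   # one virtual-machine sample
w = np.array([4.0, 4.0, 1.0, 1.0])   # emphasise the first two metrics
i, j = weighted_bmu(som, x, w)
print(i, j)  # grid coordinates of the winning neuron
```

In an anomaly detector of this kind, a sample whose distance to its BMU exceeds a threshold would be flagged as anomalous.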
Automating the Incremental Evolution of Controllers for Physical Robots.
Faíña, Andrés; Jacobsen, Lars Toft; Risi, Sebastian
2017-01-01
Evolutionary robotics is challenged with some key problems that must be solved, or at least mitigated extensively, before it can fulfill some of its promises to deliver highly autonomous and adaptive robots. The reality gap and the ability to transfer phenotypes from simulation to reality constitute one such problem. Another lies in the embodiment of the evolutionary processes, which links to the first, but focuses on how evolution can act on real agents and occur independently from simulation, that is, going from being, as Eiben, Kernbach, & Haasdijk [2012, p. 261] put it, "the evolution of things, rather than just the evolution of digital objects.…" The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range of problems amenable to embodied evolution.
Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation.
Roberts, Seán G
2018-01-01
This paper discusses the maximum robustness approach for studying cases of adaptation in language. We live in an age where we have more data on more languages than ever before, and more data to link it with from other domains. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. However, there is not much discussion of the overall approach to research in this area. There are outstanding questions about how to formalize theories, what the criteria are for directing research and how to integrate results from different methods into a clear assessment of a hypothesis. This paper addresses some of those issues by suggesting an approach which is causal, incremental and robust. It illustrates the approach with reference to a recent claim that dry environments select against the use of precise contrasts in pitch. Study 1 replicates a previous analysis of the link between humidity and lexical tone with an alternative dataset and finds that it is not robust. Study 2 performs an analysis with a continuous measure of tone and finds no significant correlation. Study 3 addresses a more recent analysis of the link between humidity and vowel use and finds that it is robust, though the effect size is small and the robustness of the measurement of vowel use is low. Methodological robustness of the general theory is addressed by suggesting additional approaches including iterated learning, a historical case study, corpus studies, and studying individual speech.
Tey Kok Soon,
2013-06-01
This paper proposes an improved incremental conductance method to track the Maximum Power Point (MPP) of a PV panel under fast-changing solar irradiation. When the solar irradiation level increases, the conventional incremental conductance method is confused and responds incorrectly. The proposed method responds correctly and shows no steady-state oscillation compared to the conventional method. A Matlab simulation is carried out for both the improved and conventional incremental conductance methods under fast-changing solar irradiation levels. The simulation results show that the system is able to track the MPP faster than the conventional method.
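The conventional rule the abstract improves upon can be sketched as follows. At the MPP, dP/dV = 0, which gives dI/dV = -I/V. The PV current model and step size below are invented for illustration, and the paper's improved variant for fast irradiation changes is not reproduced:

```python
def inc_cond_step(v, i, v_prev, i_prev, step=0.05):
    """One step of the conventional incremental-conductance MPPT rule:
    compare the incremental conductance dI/dV with -I/V."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        return v if di == 0 else (v + step if di > 0 else v - step)
    if di / dv > -i / v:
        return v + step      # left of the MPP: raise operating voltage
    if di / dv < -i / v:
        return v - step      # right of the MPP: lower it
    return v                 # dI/dV == -I/V: at the MPP

# Toy PV current model (invented): i(v) = 6 - 0.05 v^2, so the power
# p = 6v - 0.05 v^3 peaks at v = sqrt(40), about 6.32 V.
current = lambda v: 6 - 0.05 * v ** 2
v_prev, v = 5.0, 5.05
i_prev = current(v_prev)
for _ in range(100):
    i = current(v)
    v, v_prev, i_prev = inc_cond_step(v, i, v_prev, i_prev), v, i
print(round(v, 2))  # settles near the analytic MPP voltage of ~6.32
```

Under a sudden irradiation step the measured di is dominated by the irradiation change rather than the voltage step, which is exactly how the conventional rule gets "confused".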
Formulations of Amlodipine: A Review
Directory of Open Access Journals (Sweden)
Muhammad Ali Sheraz
2016-01-01
Full Text Available Amlodipine (AD) is a calcium channel blocker that is mainly used in the treatment of hypertension and angina. However, recent findings have revealed that its efficacy is not limited to the treatment of cardiovascular diseases, as it has been shown to possess antioxidant activity and to play an important role in apoptosis. Therefore, it is also employed in the treatment of cerebrovascular stroke, neurodegenerative diseases, leukemia, breast cancer, and so forth, either alone or in combination with other drugs. AD is a photosensitive drug and requires protection from light. A number of workers have tried to formulate various conventional and nonconventional dosage forms of AD. This review highlights all the formulations that have been developed to achieve maximum stability with the desired therapeutic action for the delivery of AD, such as fast dissolving tablets, floating tablets, layered tablets, single-pill combinations, capsules, oral and transdermal films, suspensions, emulsions, mucoadhesive microspheres, gels, transdermal patches, and liposomal formulations.
Hamiltonian formulation of the supermembrane
International Nuclear Information System (INIS)
Bergshoeff, E.; Sezgin, E.; Tanii, Y.
1987-06-01
The Hamiltonian formulation of the supermembrane theory in eleven dimensions is given. The covariant split of the first and second class constraints is exhibited, and their Dirac brackets are computed. Gauge conditions are imposed in such a way that the reparametrizations of the membrane with divergence free 2-vectors are unfixed. (author). 10 refs
Curcumin nanodisks: formulation and characterization
Ghosh, Mistuni; Singh, Amareshwar T. K.; Xu, Wenwei; Sulchek, Todd; Gordon, Leo I.; Ryan, Robert O.
2010-01-01
Nanodisks (ND) are nanoscale, disk-shaped phospholipid bilayers whose edge is stabilized by apolipoproteins. In the present study, ND were formulated with the bioactive polyphenol, curcumin, at a 6:1 phospholipid:curcumin molar ratio. Atomic force microscopy revealed that curcumin-ND are particles with diameters
Alternate formulations of classical electrodynamics
International Nuclear Information System (INIS)
Beil, R.G.
1975-01-01
The Lorentz--Dirac, Wheeler--Feynman, and Synge formulations of classical electrodynamics are compared with regard to their equations of motion for charged particles and their treatment of radiation. It is found that the less familiar Synge theory offers a viable alternate to the other two, since it is theoretically consistent and predicts results not at variance with experiment
Rogers, Gregory M.; Reinecke, Mark A.; Curry, John F.
2005-01-01
For the Treatment for Adolescents With Depression Study (TADS), a cognitive-behavioral therapy (CBT) manual was developed with the aim of balancing standardization and flexibility. In this article, we describe the manual's case formulation procedures, which served as one major mechanism of flexibility in TADS CBT. We first describe the essential…
Covariant Formulation of Hooke's Law.
Gron, O.
1981-01-01
Introducing a four-vector strain and a four-force stress, Hooke's law is written as a four-vector equation. This formulation is shown to clarify seemingly paradoxical results in connection with uniformly accelerated motion, and rotational motion with angular acceleration. (Author/JN)
On Hamiltonian formulation of cosmologies
Indian Academy of Sciences (India)
This opens up the way to the usual technique of quantization. Elbaz et al [4] have applied this method to the Hamiltonian formulation of FRW cosmological equations. This note presents a generalization of this approach to a variety of cosmologies. A general Schrödinger wave equation has been derived and exact solutions ...
Directory of Open Access Journals (Sweden)
NI KADEK EROSI UNDAHARTA
2008-10-01
Full Text Available Dysoxylum parasiticum (Osbeck) Kosterm. belongs to the family Meliaceae. In Bali the plant is called majegau; it has a special character for the Balinese and advantages in their religion. In Bali it is better known as the divine tree (kayu dewa), which can be used for purified buildings. The objective of this research was to measure the mean annual increment of height, diameter and volume of D. parasiticum. Recognizing the characteristics of this plant is an important step in obtaining information about the growth of majegau, estimating the harvest age, and supporting conservation efforts to maintain its continuity. The results showed the highest height increment in garden bed XIVA and the highest diameter increment in garden bed XVIIIA. The height increment is more optimal in the area with high light intensity (bed XIVA) and the diameter increment is more optimal in the low light intensity area (bed XVIIIA). The height increment contributes more to the volume increment: the model showed that height and volume increment have a higher adjusted R2 value than diameter and volume (0.645 and 0.132, respectively).
Influence of Rotation Increments on Imaging Performance for a Rotatory Dual-Head PET System
Directory of Open Access Journals (Sweden)
Fanzhen Meng
2017-01-01
Full Text Available For a rotatory dual-head positron emission tomography (PET) system, how to determine the rotation increments is an open problem. In this study, we simulated the characteristics of a rotatory dual-head PET system. The influences of different rotation increments were compared and analyzed. Based on this simulation, the imaging performance of a prototype system was verified. A reconstruction flowchart was proposed based on a precalculated system response matrix (SRM). The SRM fixes the relationships between the voxels and lines of response (LORs); therefore, we added the interpolation method into the flowchart. Five metrics, including spatial resolution, normalized mean squared error (NMSE), peak signal-to-noise ratio (PSNR), contrast-to-noise ratio (CNR), and structural similarity (SSIM), were applied to assess the reconstructed image quality. The results indicated that the 60° rotation increments with bilinear interpolation had advantages in resolution, PSNR, NMSE, and SSIM. In terms of CNR, the 90° rotation increments were better than the other increments. In addition, the reconstructed images for 90° rotation increments were also flatter than those for 60° increments. Therefore, both the 60° and 90° rotation increments could be used in real experiments, and which one to choose may depend on the application requirements.
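Two of the listed image-quality metrics are easy to make concrete. The definitions below are the standard ones (the paper may normalize differently), and the phantom image is invented:

```python
import numpy as np

def nmse(ref, img):
    """Normalized mean squared error relative to the reference image."""
    return ((img - ref) ** 2).sum() / (ref ** 2).sum()

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = ((img - ref) ** 2).mean()
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Toy phantom vs. a noisy "reconstruction" (standing in for images
# reconstructed at different rotation increments).
rng = np.random.default_rng(0)
ref = np.zeros((64, 64))
ref[24:40, 24:40] = 1.0
rec = ref + rng.normal(0, 0.05, ref.shape)
print(round(float(nmse(ref, rec)), 3), round(float(psnr(ref, rec)), 1))
```

Lower NMSE and higher PSNR both indicate a reconstruction closer to the reference, which is how the 60° and 90° schemes would be ranked.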
Word decoding development in incremental phonics instruction in a transparent orthography
Schaars, M.M.H.; Segers, P.C.J.; Verhoeven, L.T.W.
2017-01-01
The present longitudinal study aimed to investigate the development of word decoding skills during incremental phonics instruction in Dutch as a transparent orthography. A representative sample of 973 Dutch children in the first grade (M age = 6;1, SD = 0;5) was exposed to incremental subsets of
Lactate and ammonia concentration in blood and sweat during incremental cycle ergometer exercise
Ament, W; Huizenga; Mook, GA; Gips, CH; Verkerke, GJ
It is known that the concentrations of ammonia and lactate in blood increase during incremental exercise. Sweat also contains lactate and ammonia. The aim of the present study was to investigate the physiological response of lactate and ammonia in plasma and sweat during a stepwise incremental cycle
Vlaanderen, K.; Brinkkemper, S.; van de Weerd, I.
2012-01-01
Incremental software process improvement deals with the challenges of step-wise process improvement in a time where resources are scarce and many organizations are struggling with the challenges of effective management of software products. Effective knowledge sharing and incremental approaches are
Vujanovic, Anka A; Bonn-Miller, Marcel O; Bernstein, Amit; McKee, Laura G; Zvolensky, Michael J
2010-01-01
The present investigation examined the incremental predictive validity of mindfulness skills, as measured by the Kentucky Inventory of Mindfulness Skills (KIMS), in relation to multiple facets of emotional dysregulation, as indexed by the Difficulties in Emotion Regulation Scale (DERS), above and beyond variance explained by negative affectivity, anxiety sensitivity, and distress tolerance. Participants were a nonclinical community sample of 193 young adults (106 women, 87 men; M(age) = 23.91 years). The KIMS Accepting without Judgment subscale was incrementally negatively predictive of all facets of emotional dysregulation, as measured by the DERS. Furthermore, KIMS Acting with Awareness was incrementally negatively related to difficulties engaging in goal-directed behavior. Additionally, both observing and describing mindfulness skills were incrementally negatively related to lack of emotional awareness, and describing skills also were incrementally negatively related to lack of emotional clarity. Findings are discussed in relation to advancing scientific understanding of emotional dysregulation from a mindfulness skills-based framework.
Directory of Open Access Journals (Sweden)
Jonathan Fletcher
2017-10-01
Full Text Available This study uses the Bayesian approach to examine the incremental contribution of stock characteristics to the investment opportunity set in U.K. stock returns. The paper finds that the size, book-to-market (BM) ratio, and momentum characteristics all make a significant incremental contribution to the investment opportunity set when there is unrestricted short selling. However, no-short-selling constraints eliminate the incremental contribution of the size and BM characteristics, but not the momentum characteristic. The use of additional stock characteristics such as stock issues, accruals, profitability, and asset growth leads to a significant incremental contribution beyond the size, BM, and momentum characteristics when there is unrestricted short selling, but no-short-selling constraints largely eliminate the incremental contribution of the additional characteristics.
Neuromuscular responses to incremental caffeine doses: performance and side effects.
Pallarés, Jesús G; Fernández-Elías, Valentín E; Ortega, Juan F; Muñoz, Gloria; Muñoz-Guerra, Jesús; Mora-Rodríguez, Ricardo
2013-11-01
The purpose of this study was to determine the oral dose of caffeine needed to increase muscle force and power output during all-out single multijoint movements. Thirteen resistance-trained men underwent a battery of muscle strength and power tests in a randomized, double-blind, crossover design, under four different conditions: (a) placebo ingestion (PLAC) or caffeine ingestion at doses of (b) 3 mg · kg(-1) body weight (CAFF 3mg), (c) 6 mg · kg(-1) (CAFF 6mg), and (d) 9 mg · kg(-1) (CAFF 9mg). The muscle strength and power tests consisted of the measurement of bar displacement velocity and muscle power output during free-weight full-squat (SQ) and bench press (BP) exercises against four incremental loads (25%, 50%, 75%, and 90% of one-repetition maximum [1RM]). Cycling peak power output was measured using a 4-s inertial load test. Caffeine side effects were evaluated at the end of each trial and 24 h later. Mean propulsive velocity at light loads (25%-50% 1RM) increased significantly above PLAC for all caffeine doses (5.4%-8.5%, P = 0.039-0.003). At the medium load (75% 1RM), CAFF 3mg did not improve SQ or BP muscle power or BP velocity. CAFF 9mg was needed to enhance BP velocity and SQ power at the heaviest load (90% 1RM) and cycling peak power output (6.8%-11.7%, P = 0.03-0.05). The CAFF 9mg trial drastically increased the frequency of the adverse side effects (15%-62%). The ergogenic dose of caffeine required to enhance neuromuscular performance during a single all-out contraction depends on the magnitude of the load used. A dose of 3 mg · kg(-1) is enough to improve high-velocity muscle actions against low loads, whereas a higher caffeine dose (9 mg · kg(-1)) is necessary against high loads, despite the appearance of adverse side effects.
Langevin formulation of quantum dynamics
International Nuclear Information System (INIS)
Roncadelli, M.
1989-03-01
We first show that nonrelativistic quantum mechanics formulated at imaginary ℏ can formally be viewed as the Fokker-Planck description of a frictionless Brownian motion, which occurs (in general) in an absorbing medium. We next offer a new formulation of quantum mechanics, which is basically the Langevin treatment of this Brownian motion. Explicitly, we derive a noise-average representation for the transition probability W(X'',t''|X',t') in terms of the solutions to a Langevin equation with Gaussian white noise. Upon analytic continuation back to real ℏ, W(X'',t''|X',t') becomes the propagator of the original Schroedinger equation. Our approach allows for a straightforward application of the mathematical techniques of classical stochastic processes to quantum dynamical problems. Moreover, computer simulations of quantum mechanical systems can be carried out by using numerical programs based on the Langevin dynamics. (author). 19 refs, 1 tab
Do otolith increments allow correct inferences about age and growth of coral reef fishes?
Booth, D. J.
2014-03-01
Otolith increment structure is widely used to estimate age and growth of marine fishes. Here, I test the accuracy of long-term otolith increment analysis of the lemon damselfish Pomacentrus moluccensis in describing age and growth characteristics. I compare the number of putative annual otolith increments (as a proxy for actual age) and the widths of these increments (as proxies for somatic growth) with actual tagged-fish length data, based on a 6-year dataset, the longest time course for a coral reef fish. Estimated age from otoliths corresponded closely with actual age in all cases, confirming annual increment formation. However, otolith increment widths were poor proxies for actual growth in length (linear regression r2 = 0.44-0.90, n = 6 fish) and were clearly of limited value in estimating annual growth. Up to 60% of the annual growth variation was missed using otolith increments, suggesting that long-term back-calculations of otolith growth characteristics of reef fish populations should be interpreted with caution.
Herrera, María; Segura, Álvaro; Sánchez, Adriana; Sánchez, Andrés; Vargas, Mariángela; Villalta, Mauren; Harrison, Robert A; Gutiérrez, José María; León, Guillermo
2017-07-01
EchiTAb + ICP is a pan-African antivenom used for the treatment of snakebite envenomation in rural sub-Saharan African communities, where the cold chain can be difficult to maintain. To develop a formulation of EchiTAb + ICP that can be distributed and stored without refrigeration, we submitted three different formulations of EchiTAb + ICP: control (i.e. liquid antivenom formulated without stabilizer), liquid antivenom stabilized with sorbitol, and freeze-dried antivenom formulated with sucrose, to an accelerated stability study (i.e. 38 ± 2 °C and 75% relative humidity for 6 months). We analyzed changes in color, residual humidity, reconstitution time (for freeze-dried preparation), pH, osmolality, total protein concentration, antibody monomers content, turbidity, bacterial endotoxins, and pre-clinical neutralizing efficacy of the lethal effect of Echis ocellatus venom at 0, 3 and 6 months. In the control formulation, instability was evidenced by the development of a yellow coloration and an increment in aggregation and turbidity, without change in its neutralizing activity. The sorbitol-stabilized formulation did not develop marked aggregation or turbidity, but instability was evidenced by the development of yellow coloration and a drop in the neutralizing potency. The freeze-dried formulation maintained its neutralizing potency and did not show marked signs of instability, thus indicating that freeze-drying could confer EchiTAb + ICP with improved thermal stability required for distribution and storage at room temperature in sub-Saharan Africa. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the validity of the incremental approach to estimate the impact of cities on air quality
Thunis, Philippe
2018-01-01
The question of to what extent cities are the sources of their own air pollution is not only theoretical, as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimating the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5 these two assumptions are far from being fulfilled for many large or medium-sized cities. For such cities, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the value of comparing modelled and measured increments to improve confidence in the model results.
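The decomposition described in the abstract can be written out with toy numbers (all invented); `spillover` and `background_gap` correspond to the two assumptions in the text:

```python
# Toy concentration bookkeeping for the incremental approach.
c_urban_full = 18.0   # concentration at the urban background site
c_rural_full = 11.0   # concentration at the rural background site
c_urban_zero = 9.0    # urban site with city emissions switched off (model)
c_rural_zero = 10.0   # rural site with city emissions switched off (model)

increment = c_urban_full - c_rural_full   # what is measured
impact = c_urban_full - c_urban_zero      # what is actually wanted

# The gap between them decomposes into the two assumption terms:
spillover = c_rural_full - c_rural_zero       # city influence at rural site
background_gap = c_rural_zero - c_urban_zero  # unequal zero-emission levels
assert impact == increment + spillover + background_gap
print(increment, impact)  # the increment understates the impact here
```

Only when both `spillover` and `background_gap` vanish does the measured urban increment equal the true city impact.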
International Nuclear Information System (INIS)
Rezende, M.A.; Guerrini, I.A.; Ferraz, E.S.B.
1990-01-01
Annual increments in volume, mass and energy of Eucalyptus grandis at thirteen years of age were determined, taking into account measurements of the specific gravity and calorific value of the wood. It was observed that the calorific value of the wood decreases slightly, while the specific gravity increases significantly with age. The so-called culmination age for the annual volume increment was determined to be around the fourth year of growth, while for the annual mass and energy increment it was around the eighth year. These results show that a tree at a particular age may not have significant growth in volume, yet may in mass and energy. (author)
Formulation and Characterization of Sustained Release Floating ...
African Journals Online (AJOL)
Purpose: To formulate sustained release gastroretentive microballoons of metformin hydrochloride with the objective of improving its bioavailability. Methods: Microballoons of metformin hydrochloride were formulated by solvent evaporation and diffusion method using varying mixtures of hydroxypropyl methylcellulose ...
Perfume formulation: words and chats.
Ellena, Céline
2008-06-01
What does it mean to create fragrances with materials from chemistry and/or from nature? How are they used to display their characteristic differences, their own personality? Is it easier to create with synthetic raw materials or with essential oils? This review explains why a perfume formulation corresponds in fact to a conversation, an interplay between synthetic and natural perfumery materials. A synthetic raw material carries a single piece of information, and usually is very linear. Its smell is uniform, clear, and faithful. Natural raw materials, on the contrary, provide a strong, complex and generous image. While a synthetic material can be seen as a single word, a natural one such as rose oil could be compared to chatting: cold, warm, sticky, heavy, transparent, pepper, green, metallic, smooth, watery, fruity... full of information. Yet, if a very small amount of the natural material is used, nothing happens, and the fragrance will not change. However, if a large amount is used, the rose oil will swallow up everything else: the fragrance will smell of nothing but rose! To formulate a perfume is not to create a culinary recipe by simply dosing the ingredients in well-balanced amounts. To formulate rather means to flexibly knit materials together with a lively stitch, meeting or repelling each other, building a pleasant form which is neither fixed, nor solid, nor rigid. A fragrance has an overall structure, which ranges from a clear sound, made up of stable, unique, and linear items, to a background chat, comfortable and reassuring. But that does not, of course, mean that there is only one way of creating a fragrance!
Formulation of soy oil products
Directory of Open Access Journals (Sweden)
Woerfel, John B.
1995-12-01
Full Text Available The paper comments on different formulations of soy oil products such as salad and cooking oils, margarine, shortenings, commercial shortenings, frying shortenings, and fluid shortenings. Hydrogenation and its influence on the final products is also included.
The paper presents different formulations based on soybean oil, such as salad and cooking oils, margarine, shortenings, commercial shortenings, frying shortenings and fluid shortenings. It also refers to the hydrogenation process and its effects on the final products.
Sieberling, S.; Chu, Q.P.; Mulder, J.A.
2010-01-01
This paper presents a flight control strategy based on nonlinear dynamic inversion. The approach presented, called incremental nonlinear dynamic inversion, uses properties of general mechanical systems and nonlinear dynamic inversion by feeding back angular accelerations. Theoretically, feedback of
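The incremental law at the heart of this approach can be sketched in its standard textbook form (the paper's exact formulation may differ; the symbols below are the usual ones, not taken from the abstract). With angular-rate dynamics $\dot{\omega} = f(x) + G(x)\,u$, a measured angular acceleration $\dot{\omega}_0$ obtained at the current control $u_0$, and a desired (virtual) acceleration $\nu$, the control is updated incrementally:

```latex
u \;=\; u_0 \;+\; G(x)^{-1}\bigl(\nu - \dot{\omega}_0\bigr)
```

Because only the increment depends on the model, feedback of measured angular accelerations replaces much of the plant model that classical dynamic inversion would require.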
MUNIX and incremental stimulation MUNE in ALS patients and control subjects
DEFF Research Database (Denmark)
Furtula, Jasna; Johnsen, Birger; Christensen, Peter Broegger
2013-01-01
This study compares the new Motor Unit Number Estimation (MUNE) technique, MUNIX, with the more common incremental stimulation MUNE (IS-MUNE) with respect to reproducibility in healthy subjects and as a potential biomarker of disease progression in patients with ALS....
Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality
Acikmese, Ahmet Behcet; Martin, Corless
2004-01-01
We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error converges exponentially to zero.
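The incremental quadratic constraint referred to here can be sketched as follows (a common form in the observer literature; the exact multiplier class used in the paper is an assumption). The nonlinearity $\phi$ satisfies, for all $t$ and all pairs $x_1, x_2$,

```latex
\begin{pmatrix} x_1 - x_2 \\ \phi(t, x_1) - \phi(t, x_2) \end{pmatrix}^{\!\top}
M
\begin{pmatrix} x_1 - x_2 \\ \phi(t, x_1) - \phi(t, x_2) \end{pmatrix} \;\ge\; 0
```

for every symmetric multiplier matrix $M$ in a prescribed set $\mathcal{M}$. Observer gains are then obtained from matrix inequalities involving these multipliers.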
Incremental Evolution of a 10/250 NLV into a 20/450 NMSLV, Phase I
National Aeronautics and Space Administration — The technical innovation proposed here is the continued functional evolution and concept refinement of an incremental series of test vehicles that will ultimately...
Formulation, Preparation, and Characterization of Polyurethane Foams
Pinto, Moises L.
2010-01-01
Preparation of laboratory-scale polyurethane foams is described with formulations that are easy to implement in experiments for undergraduate students. Particular attention is given to formulation aspects that are based on the main chemical reactions occurring in polyurethane production. This allows students to develop alternative formulations to…
Performance Evaluation of Abrasive Grinding Wheel Formulated ...
African Journals Online (AJOL)
This paper presents a study on the formulation and manufacture of an abrasive grinding wheel using locally formulated silicon carbide abrasive grains. Six local raw material substitutes were identified through a pilot study and, with the initial mix of the identified materials, a systematic search for an optimal formulation of silicon ...
Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A)
2016-03-01
2016 Major Automated Information System Annual Report. Program Name: Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A). DoD Component: Air Force. Responsible Office: Program ... present, plan, source, mobilize, deploy, account for, sustain, redeploy and reconstitute forces to conduct National Command Authority authorized ...
Incremental Identification of Reaction and Mass-Transfer Kinetics Using the Concept of Extents
Bhatt, Nirav; Amrhein, Michael; Bonvin, Dominique
2011-01-01
This paper proposes a variation of the incremental approach to identify reaction and mass-transfer kinetics (rate expressions and the corresponding rate parameters) from concentration measurements for both homogeneous and gas-liquid reaction systems. This incremental approach proceeds in two steps: (i) computation of the extents of reaction and mass transfer from concentration measurements without explicit knowledge of the reaction and mass-transfer rate expressions, and (ii) estimation of ...
Application of incremental algorithms to CT image reconstruction for sparse-view, noisy data
DEFF Research Database (Denmark)
Rose, Sean; Andersen, Martin Skovgaard; Sidky, Emil Y.
2014-01-01
This conference contribution adapts an incremental framework for solving optimization problems of interest for sparse-view CT. From the incremental framework, two algorithms are derived: one that combines a damped form of the algebraic reconstruction technique (ART) with a total-variation (TV) projection, and one that employs a modified damped ART, accounting for a weighted-quadratic data fidelity term, combined with TV projection. The algorithms are demonstrated on simulated, noisy, sparse-view CT data.
Incremental Reconstruction of Urban Environments by Edge-Points Delaunay Triangulation
Romanoni, Andrea; Matteucci, Matteo
2016-01-01
Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power represents a limited resource and, a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of urban landscape, we ...
2010-04-01
Section 982.102 (Housing and Urban Development): Budget authority for renewal of expiring consolidated ACC funding increments. (a) Applicability. This section applies to the renewal of consolidated ACC funding increments in the program (as described in...
Don C. Bragg
2002-01-01
This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...
Split-increment technique: an alternative approach for large cervical composite resin restorations.
Hassan, Khamis A; Khier, Salwa E
2007-02-01
This article proposes and describes the split-increment technique as an alternative for the placement of composite resin in large cervical carious lesions which extend onto the root surface. Two flat, 1.5 mm thick composite resin increments were used to restore these cervical carious lesions. Prior to light-curing, two diagonal cuts were made in each increment in order to split it into four triangular-shaped flat portions. The first increment was applied to cover the entire axial wall and portions of the four surrounding walls. The second increment was applied to fill the cavity completely, covering the first one and the rest of the four surrounding walls as well as sealing all cavity margins. This technique reduces the C-factor and the generated shrinkage stresses by directing the shrinking composite resin during curing towards the free, unbonded areas created by the two diagonal cuts. The proposed technique would also produce a more natural-looking restoration by inserting flat dentin and enamel increments of composite resin of a uniform thickness, which closely resembles the arrangement of natural tooth structure.
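As a rough illustration of the C-factor argument above: the configuration factor is the ratio of bonded to free (unbonded) surface area, so adding free surface by splitting an increment lowers it. The area values below are hypothetical, chosen only to show the direction of the effect:

```python
def c_factor(bonded_area: float, free_area: float) -> float:
    """Configuration factor: ratio of bonded to free (unbonded) surface area."""
    return bonded_area / free_area

# Hypothetical areas (mm^2) for one composite increment in a cervical cavity.
# The two diagonal cuts add free surface without changing the bonded surface,
# which lowers the C-factor and hence the polymerization shrinkage stress.
intact = c_factor(bonded_area=30.0, free_area=10.0)  # C = 3.0
split = c_factor(bonded_area=30.0, free_area=18.0)   # C ≈ 1.67
assert split < intact
```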
Chen, C L Philip; Liu, Zhulin
2018-01-01
Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structures and their learning suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers. Moreover, a complete retraining process is required if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense through "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process when the network needs to be expanded. Two incremental learning algorithms are given, one for the increment of the feature nodes (or filters in a deep structure) and one for the increment of the enhancement nodes. The designed model and algorithms are very versatile for rapid model selection. In addition, another incremental learning algorithm is developed for the case in which an already-modeled system encounters new incoming inputs; the system can then be remodeled incrementally without retraining from the beginning. A satisfactory model reduction using singular value decomposition is also conducted to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition dataset demonstrate the effectiveness of the proposed BLS.
A Rapidly-Incremented Tethered-Swimming Test for Defining Domain-Specific Training Zones
Directory of Open Access Journals (Sweden)
Pessôa Filho Dalton M.
2017-06-01
Full Text Available The purpose of this study was to investigate whether a tethered-swimming incremental test comprising small increases in resistive force applied every 60 seconds could delineate the isocapnic region during rapidly-incremented exercise. Sixteen competitive swimmers (male, n = 11; female, n = 5) performed: (a) a test to determine the highest force during 30 seconds of all-out tethered swimming (Favg) and the ΔF, which represented the difference between Favg and the force required to maintain body alignment (Fbase), and (b) an incremental test beginning with 60 seconds of tethered swimming against a load that exceeded Fbase by 30% of ΔF, followed by increments of 5% of ΔF every 60 seconds. This incremental test was continued until the limit of tolerance, with pulmonary gas exchange (rates of oxygen uptake and carbon dioxide production) and ventilatory (rate of minute ventilation) data collected breath by breath. These data were subsequently analyzed to determine whether two breakpoints defining the isocapnic region (i.e., gas exchange threshold and respiratory compensation point) were present. We also determined the peak rate of O2 uptake and exercise economy during the incremental test. The gas exchange threshold and respiratory compensation point were observed for each test such that the associated metabolic rates, which bound the heavy-intensity domain during constant-work-rate exercise, could be determined. Significant correlations (Spearman's) were observed for exercise economy along with (a) peak rate of oxygen uptake (ρ = .562; p < 0.025), and (b) metabolic rate at gas exchange threshold (ρ = −.759; p < 0.005). A rapidly-incremented tethered-swimming test allows for determination of the metabolic rates that define zones for domain-specific constant-work-rate training.
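The staged loading protocol described in the abstract is simple to compute: start at Fbase plus 30% of ΔF, then add 5% of ΔF per 60-second stage. The force values below are hypothetical, only the percentages come from the study:

```python
def tethered_loads(f_base: float, f_avg: float, n_stages: int) -> list:
    """Stage loads (N) for the incremental tethered-swimming test:
    start at F_base + 30% of dF, add 5% of dF per 60-s stage,
    where dF = F_avg - F_base."""
    d_f = f_avg - f_base
    return [f_base + (0.30 + 0.05 * k) * d_f for k in range(n_stages)]

# Hypothetical forces in newtons for one swimmer.
loads = tethered_loads(f_base=20.0, f_avg=120.0, n_stages=5)
# stages: 50, 55, 60, 65, 70 N
```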
Stem analysis program (GOAP for evaluating of increment and growth data at individual tree
Directory of Open Access Journals (Sweden)
Gafura Aylak Özdemir
2016-07-01
Full Text Available Stem analysis is a method for evaluating in detail the increment and growth data of an individual tree over past periods, and it is widely used in various forestry disciplines. The raw data of a stem analysis consist of annual-ring counts and measurements performed on cross-sections taken from an individual tree by the section method. Evaluating these raw data takes considerable time. Thus, a computer program was developed in this study to perform stem analysis quickly and efficiently. The software, which evaluates the raw stem-analysis data both numerically and graphically, was programmed as a macro using the Visual Basic for Applications feature of MS Excel 2013, currently the most widely used spreadsheet program. In the software, the height-growth model is formed from two different approaches, and individual tree volume (by the section method), cross-sectional area, increments of diameter, height and volume, volume increment percent, and stem form factor at breast height are calculated for the desired period lengths. The calculated values are given as tables. The development of diameter, height, volume, the increments of these variables, volume increment percent, and stem form factor at breast height according to periodic age are given as charts. A stem model showing the development of the diameter, height and shape of the individual tree over past periods can also be produced by the software as a chart.
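The periodic increment quantities mentioned above can be sketched with standard forestry formulas. Pressler's increment percent is assumed here for the volume increment percent; the program's actual formulas are not stated in the abstract, and the volumes are hypothetical:

```python
def periodic_annual_increment(v1: float, v2: float, years: int) -> float:
    """Mean annual volume increment over an n-year period (m^3/yr)."""
    return (v2 - v1) / years

def pressler_increment_percent(v1: float, v2: float, years: int) -> float:
    """Pressler's volume increment percent: 200/n * (V2 - V1) / (V2 + V1)."""
    return 200.0 / years * (v2 - v1) / (v2 + v1)

# Hypothetical stem volumes (m^3) measured ten years apart.
pai = periodic_annual_increment(0.80, 1.00, 10)   # 0.02 m^3/yr
pct = pressler_increment_percent(0.80, 1.00, 10)  # ~2.22 %/yr
```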
Multiple excitation of supports - Part 1. Formulation
International Nuclear Information System (INIS)
Galeao, A.C.N.R.; Barbosa, H.J.C.
1980-12-01
The formulation and solution of a simple, specific problem of support movement are presented. The formulation is then extended to the general case of infinitesimal elasticity, where approximate solutions are obtained by a variational formulation with spatial discretization by the finite element method. Finally, the usual numerical techniques for treating the resulting system of ordinary differential equations are discussed: direct integration, modal superposition, and spectral response. (E.G.) [pt
Formulation of disperse systems science and technology
Tadros, Tharwat F
2014-01-01
This book presents comprehensively the science and technology behind the formulation of disperse systems like emulsions, suspensions, foams and others. Starting with a general introduction, the book covers a broad range of topics like the role of different classes of surfactants, stability of disperse systems, formulation of different dispersions, evaluation of formulations and many more. Many examples are included, too. Written by the experienced author and editor Tharwart Tadros, this book is indispensable for every scientist working in the field.
Spray granulation for drug formulation.
Loh, Zhi Hui; Er, Dawn Z L; Chan, Lai Wah; Liew, Celine V; Heng, Paul W S
2011-12-01
Granulation is a key unit process in the production of pharmaceutical solid dosage forms and involves the agglomeration of fine particles with the aid of a binding agent. Fluidized bed granulation, a classic example of spray granulation, is a technique of particle agglomeration brought about by the spray addition of the binding liquid onto a stationary bed of powder particles that is transformed to a fluid-like state by the passage of air through it. The basic working principles, equipment set-up, advantages and challenges of fluidized bed granulation are introduced in this review. This is followed by an overview of the formulation and process-related variables affecting granulation performance. Technological advances, particularly in the application of process analytical tools, in the field of fluidized bed granulation research are also discussed. Fluidized bed granulation is a popular technique for pharmaceutical production, as it is a highly economical and efficient one-pot process. The research and development of process analytical technologies (PAT) has allowed greater process understanding and control to be achieved, even for the lesser known fluidized bed techniques, such as bottom spray and fluidized hot melt granulation. In view of its consistent mixing, as well as continuous and concurrent wetting and drying occurring throughout processing, fluidized bed granulation shows great potential for continuous production although more research is required to fully implement, validate and integrate the PAT tools in a production line.
Policy formulation of public acceptance
International Nuclear Information System (INIS)
Kasai, Akihiro
1978-01-01
Since 1970, a new policy for public acceptance of the siting of electric power generation plants has been formulated and applied. Planning and enforcement conducted by local public organizations for the local economic build-up associated with plant siting, together with the adjustment of the requirements of fisheries, are the two main specific features of this new policy. The background of this new public acceptance policy, its history, and the actual problems concerning compensation for the siting of power generation plants are reviewed. One new proposal, recommended to MITI by the Policy and Science Laboratory in 1977, is explained. It is based on promoting the siting of power generation plants through public participation, taking the redevelopment of regional societies as its basis. In this proposal, the problems concerning the industrial structures of farm villages, fishing villages, and areas of commerce and industry should be systematized and explained from the viewpoints of outside impact, the characteristics of local areas, and the siting problems. Finally, the siting process and its effectiveness should be put in order. (Nakai, Y.)
Slag-based saltstone formulations
International Nuclear Information System (INIS)
Langton, C.A.
1987-01-01
Approximately 400 × 10⁶ liters of low-level alkaline salt solution will be treated at the Savannah River Plant (SRP) Defense Waste Processing Facility (DWPF) prior to disposal in concrete vaults at SRP. Treatment involves removal of Cs⁺ and Sr²⁺ followed by solidification and stabilization of potential contaminants in saltstone, a hydrated ceramic waste form. Chromium, technetium, and nitrate releases from saltstone can be significantly reduced by substituting hydraulic blast furnace slag for portland cement in the formulation designs. Slag-based mixes are also compatible with Class F fly ash used in saltstone as a functional extender to control heat of hydration and reduce permeability. A monolithic waste form is produced by the hydration of the slag and fly ash. Soluble ion release (NO₃⁻) is controlled by the saltstone microstructure. Chromium and technetium are less leachable from slag mixes compared to cement-based waste forms because these species are chemically reduced to a lower valence state by ferrous iron in the slag and precipitated as relatively insoluble phases, such as Cr(OH)₃ and TcO₂. 5 refs., 4 figs., 4 tabs
Pediatric drug development: formulation considerations.
Ali, Areeg Anwer; Charoo, Naseem Ahmad; Abdallah, Daud Baraka
2014-10-01
Absence of safe, effective and appropriate treatment is one of the main causes of high mortality and morbidity rates among the pediatric group. This review provides an overview of pharmacokinetic differences between pediatric and adult population and their implications in pharmaceutical development. Different pediatric dosage forms, their merits and demerits are discussed. Food and Drug Administration Act of 1997 and the Best Pharmaceuticals for Children Act 2002 added 6 months patent extension and exclusivity incentives to pharmaceutical companies for evaluation of medicinal products in children. Prescription Drug User Fee Act and Food and Drug Administration Amendments Act of 2007 made it mandatory for pharmaceutical companies to perform pediatric clinical studies on new drug products. Drug development program should include additional clinical bridge studies to evaluate differences in pharmacokinetics and pharmacodynamics of drugs in adult and child populations. Additionally, pharmaceutical development should consider ease of administration, palatability, appropriate excipients, stability and therapeutic equivalency of pediatric dosage forms. Pediatric population is diverse with individual preferences and demand for custom made dosage formulations. Practically it is not feasible to have different pharmaceutical dosage forms for each group. Hence, an appropriate dosage form that can be administered across pediatric population is warranted.
Ethical leadership: meta-analytic evidence of criterion-related and incremental validity.
Ng, Thomas W H; Feldman, Daniel C
2015-05-01
This study examines the criterion-related and incremental validity of ethical leadership (EL) with meta-analytic data. Across 101 samples published over the last 15 years (N = 29,620), we observed that EL demonstrated acceptable criterion-related validity with variables that tap followers' job attitudes, job performance, and evaluations of their leaders. Further, followers' trust in the leader mediated the relationships of EL with job attitudes and performance. In terms of incremental validity, we found that EL significantly, albeit weakly in some cases, predicted task performance, citizenship behavior, and counterproductive work behavior, even after controlling for the effects of such variables as transformational leadership, use of contingent rewards, management by exception, interactional fairness, and destructive leadership. The article concludes with a discussion of ways to strengthen the incremental validity of EL. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Osawa, Takuya; Arimitsu, Takuma; Takahashi, Hideyuki
2017-10-01
The present study was performed to determine the impact of hypoxia on working muscle oxygenation during incremental running, and to compare tissue oxygenation between the thigh and calf muscles. Nine distance runners and triathletes performed incremental running tests to exhaustion under normoxic and hypoxic conditions (fraction of inspired oxygen = 0.15). Peak pulmonary oxygen uptake ([Formula: see text]) and tissue oxygen saturation (StO 2 ) were measured simultaneously in both the vastus lateralis and medial gastrocnemius. Hypoxia significantly decreased peak running speed and [Formula: see text] (p muscles was significantly decreased under hypoxic compared with normoxic conditions at all running speeds (p calf under hypoxic conditions, and that the effects of hypoxia on tissue oxygenation differ between these two muscles during incremental running.
EFFECT OF COST INCREMENT DISTRIBUTION PATTERNS ON THE PERFORMANCE OF JIT SUPPLY CHAIN
Directory of Open Access Journals (Sweden)
Ayu Bidiawati J.R
2008-01-01
Full Text Available Cost is an important consideration in supply chain (SC) optimisation, due to the emphasis placed on cost reduction in order to maximise profit. Some researchers use cost as one of their performance measures, and others propose ways of calculating cost accurately. As a product moves across the SC, its cost also increases. This paper studied the effect of cost increment distribution patterns on the performance of a JIT supply chain. In particular, it is necessary to know whether inventory allocation across the SC needs to be modified to accommodate different cost increment distribution patterns. It was found that the funnel is still the best card distribution pattern for a JIT-SC regardless of the cost increment distribution pattern used.
Directory of Open Access Journals (Sweden)
Tarun K. Sen
2011-11-01
Full Text Available Innovation in information technology is a primary driver for growth in developed economies. Research indicates that countries go through three stages in the adoption of innovation strategies: buying innovation through global trade, incremental innovation from other countries by enhancing efficiency, and, at the most developed stage, radically innovating independently for competitive advantage. The first two stages of innovation maturity depend more on cross-border trade than the third stage. In this paper, we find that IT professionals in an emerging economy such as India believe in radical innovation over incremental innovation (adaptation) as a growth strategy, even though competitive advantage may rest in adaptation. The results of the study report the preference for innovation strategies among IT professionals in India and its implications for other rapidly growing emerging economies.
Directory of Open Access Journals (Sweden)
Economou Elias
2015-01-01
Full Text Available In double increment Simultaneous Lightness Contrast two equiluminant squares rest on darker backgrounds that differ in luminance. Research on such displays has produced conflicting results as to whether an illusion is observed. Anchoring theory of lightness predicts no illusion with double increment displays. Here we test the hypothesis that an illusion can be predicted if the framework containing the target and the darkest background carries more weight in lightness computations. We tested two displays, one with a general white and one with a general black background. An illusion was obtained only in the first display. These results suggest that an illusion can occur with double increment displays when the global value of the targets differs from their local value, as these conditions allow different local framework weighting to affect the targets’ final lightness. We propose that this parameter be added to the Anchoring Theory.
Brisson, Marc; van de Velde, Nicolas; Franco, Eduardo L; Drolet, Mélanie; Boily, Marie-Claude
2011-08-01
Our aim was to examine the potential incremental impact of vaccinating boys against human papillomavirus (HPV) on vaccine-type infection in females and males, using an individual-based HPV transmission-dynamic model. Under base assumptions (vaccine efficacy = 99%, duration of protection = 20 years, coverage = 70%), vaccinating 12-year-old boys, in addition to girls, resulted in an incremental reduction in HPV-16/18 (HPV-6/11) incidence over 70 years of 16% (3%) in females and 23% (4%) in males. The benefit of vaccinating boys decreased with improved vaccination coverage in girls. Given the important predicted herd immunity impact of vaccinating girls under moderate to high vaccine coverage, the potential incremental gains of vaccinating boys are limited.
Predicting success of methotrexate treatment by pretreatment HCG level and 24-hour HCG increment.
Levin, Gabriel; Saleh, Narjes A; Haj-Yahya, Rani; Matan, Liat S; Avi, Benshushan
2018-04-01
To evaluate β-human chorionic gonadotropin (β-HCG) level and its 24-hour increment as predictors of successful methotrexate treatment for ectopic pregnancy. Data were retrospectively reviewed from women with ectopic pregnancy who were treated with single-dose methotrexate (50 mg/m²) at a university hospital in Jerusalem, Israel, between January 1, 2000, and June 30, 2015. Serum β-HCG before treatment and its percentage increment in the 24 hours before treatment were compared between the treatment success and failure groups. Sixty-nine women were included in the study. Single-dose methotrexate treatment was successful for 44 (63.8%) women. Both the mean β-HCG level and its 24-hour increment were lower for women with successful treatment than for those with failed treatment (1224 IU/L vs 2362 IU/L, P=0.018; and 13.5% vs 29.6%, P=0.009, respectively). Receiver operating characteristic curve analysis yielded cutoff values of 1600 IU/L and a 14% increment, with positive predictive values of 75% and 82%, respectively, for treatment success. β-HCG level and its 24-hour increment were independent predictors of treatment outcome by logistic regression (both P < 0.05). A β-HCG increment of less than 14% in the 24 hours before single-dose methotrexate and a serum β-HCG of less than 1600 IU/L were found to be good predictors of treatment success. © 2017 International Federation of Gynecology and Obstetrics.
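As a sketch only (not a validated clinical tool), the two reported cutoffs can be combined into a simple decision rule. The threshold values come from the abstract; the function name and everything else is illustrative:

```python
def predict_mtx_success(hcg_iu_l: float, increment_24h_pct: float) -> bool:
    """Rule of thumb from the reported cutoffs: single-dose methotrexate
    success is likely when pretreatment beta-HCG < 1600 IU/L and the
    24-hour beta-HCG increment < 14%."""
    return hcg_iu_l < 1600.0 and increment_24h_pct < 14.0

# Illustrative cases: below both cutoffs vs. above both cutoffs.
assert predict_mtx_success(1200.0, 10.0) is True
assert predict_mtx_success(2400.0, 30.0) is False
```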
Conjugate descent formulation of backpropagation error in ...
African Journals Online (AJOL)
The supervised learning process is posed as an unconstrained optimization problem with the error function as the objective function. In this case, an optimal value of the increment in the weights is obtained by considering only up to second-order derivatives of the error function. The resulting expression for the optimal weight ...
BMI and BMI SDS in childhood: annual increments and conditional change
Brannsether-Ellingsen, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Juliusson, Petur Benedikt
2016-01-01
Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of body mass index (BMI) and standardised BMI (BMI SDS) in childhood and to explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m²) and BMI SDS are summarised by...
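A minimal sketch of the quantities involved, assuming the standard conditional-change formulation (the second z-score expressed relative to what the first predicts, with r the age-specific correlation between the two measurements); the function names and the numbers are illustrative, not taken from the study:

```python
import math

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index, kg/m^2."""
    return weight_kg / height_m ** 2

def annual_bmi_increment(bmi_now: float, bmi_year_ago: float) -> float:
    """Plain 1-year BMI increment."""
    return bmi_now - bmi_year_ago

def conditional_change(sds_prev: float, sds_now: float, r: float) -> float:
    """Conditional change in BMI SDS: the observed second z-score relative
    to the value predicted from the first (standard conditional-gain formula)."""
    return (sds_now - r * sds_prev) / math.sqrt(1.0 - r * r)

# Hypothetical child measured one year apart.
inc = annual_bmi_increment(bmi(23.0, 1.16), bmi(20.0, 1.10))  # ~0.56 kg/m^2
cc = conditional_change(0.0, 1.0, 0.6)                        # 1.25
```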
A power-driven increment borer for sampling high-density tropical wood
Stefan Krottenthaler; Philipp Pitsch; G. Helle; Giuliano Maselli Locosselli; Gregório Ceccantini; Jan Altman; Miroslav Svoboda; Jiri Dolezal; Gerhard Schleser; Dieter Anhuf
2015-01-01
High-density hardwood trees with large diameters have been found to damage manually operated increment borers, thus limiting their use in the tropics. Therefore, we herein report a new, low-cost gasoline-powered sampling system for high-density tropical hardwood trees with large diameters. This system provides increment cores 15 mm in diameter and up to 1.35 m in length, allowing minimally invasive sampling of tropical hardwood tree species, which, up to the present, could not be collected by...
A program for the numerical control of a pulse increment system
Energy Technology Data Exchange (ETDEWEB)
Gray, D.C.
1963-08-21
This report will describe the important features of the development of magnetic tapes for the numerical control of a pulse-increment system consisting of a modified Gorton lathe and its associated control unit developed by L. E. Foley of Equipment Development Service, Engineering Services, General Electric Co., Schenectady, N.Y. Included is a description of CUPID (Control and Utilization of Pulse Increment Devices), a FORTRAN program for the design of these tapes on the IBM 7090 computer, and instructions for its operation.
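A pulse-increment controller consumes a stream of unit steps rather than absolute coordinates, so a tape-preparation program must decompose each programmed move into pulses. The sketch below shows one conventional way to do this for a straight-line move (Bresenham-style interpolation, an assumption for illustration; CUPID's actual algorithm is not described in the abstract):

```python
def pulse_increments(dx: int, dy: int) -> list:
    """Emit unit pulses (step_x, step_y) whose cumulative sum reaches (dx, dy),
    approximating the straight line from (0, 0) -- the kind of pulse stream a
    tape for a pulse-increment NC lathe would encode. Assumes dx >= dy >= 0."""
    pulses, err = [], 2 * dy - dx
    for _ in range(dx):
        if err > 0:
            pulses.append((1, 1))       # pulse both axes this step
            err += 2 * (dy - dx)
        else:
            pulses.append((1, 0))       # pulse the dominant axis only
            err += 2 * dy
    return pulses

steps = pulse_increments(5, 2)
assert sum(x for x, _ in steps) == 5
assert sum(y for _, y in steps) == 2
```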
Djakow, Eugen; Springer, Robert; Homberg, Werner; Piper, Mark; Tran, Julian; Zibart, Alexander; Kenig, Eugeny
2017-10-01
Electrohydraulic Forming (EHF) processes permit the production of complex, sharp-edged geometries even when high-strength materials are used. Unfortunately, the forming zone is often limited as compared to other sheet metal forming processes. The use of a special industrial-robot-based tool setup and an incremental process strategy could provide a promising solution for this problem. This paper describes such an innovative approach using an electrohydraulic incremental forming machine, which can be employed to manufacture large, multifunctional and complex part geometries in steel, aluminium, magnesium and reinforced plastics that are employed in lightweight constructions or heating elements.
Maximal power output during incremental exercise by resistance and endurance trained athletes.
Sakthivelavan, D S; Sumathilatha, S
2010-01-01
This study was aimed at comparing the maximal power output of resistance trained and endurance trained athletes during incremental exercise. Thirty male athletes who received resistance training (Group I) and thirty male athletes of similar age who received endurance training (Group II) for a period of more than 1 year were chosen for the study. Physical parameters were measured and exercise stress testing was done on a cycle ergometer with a portable gas analyzing system. The maximal progressive incremental cycle ergometer power output at peak exercise and carbon dioxide production at VO2max were measured. The differences were highly significant (P < …); such measurements can provide biofeedback and perk up the athlete's performance.
The period adding and incrementing bifurcations: from rotation theory to applications
DEFF Research Database (Denmark)
Granados, Albert; Alseda, Lluis; Krupa, Maciej
2017-01-01
for maps on the circle. In the second scenario, symbolic sequences are obtained by consecutive attachment of a given symbolic block and the periods of periodic orbits are incremented by a constant term. This is called the period incrementing bifurcation, and its proof relies on results for maps… on the interval. We also discuss the expanding cases, as some of the partial results found in the literature also hold when these maps lose contractiveness. The higher-dimensional case is also discussed by means of quasi-contractions. We also provide applied examples in control theory, power electronics…
Glass Ceramic Formulation Data Package
Energy Technology Data Exchange (ETDEWEB)
Crum, Jarrod V.; Rodriguez, Carmen P.; McCloy, John S.; Vienna, John D.; Chung, Chul-Woo
2012-06-17
A glass ceramic waste form is being developed for treatment of secondary waste streams generated by aqueous reprocessing of commercial used nuclear fuel (Crum et al. 2012b). The waste stream contains a mixture of transition metals, alkali, alkaline earths, and lanthanides, several of which exceed the solubility limits of a single phase borosilicate glass (Crum et al. 2009; Caurant et al. 2007). A multi-phase glass ceramic waste form allows incorporation of insoluble components of the waste by designed crystallization into durable heat tolerant phases. The glass ceramic formulation and processing targets the formation of the following three stable crystalline phases: (1) powellite (XMoO4) where X can be (Ca, Sr, Ba, and/or Ln), (2) oxyapatite Yx,Z(10-x)Si6O26 where Y is alkaline earth, Z is Ln, and (3) lanthanide borosilicate (Ln5BSi2O13). These three phases incorporate the waste components that are above the solubility limit of a single-phase borosilicate glass. The glass ceramic is designed to be a single phase melt, just like a borosilicate glass, and then crystallize upon slow cooling to form the targeted phases. The slow cooling schedule is based on the centerline cooling profile of a 2 foot diameter canister such as the Hanford High-Level Waste canister. Up to this point, crucible testing has been used for glass ceramic development, with cold crucible induction melter (CCIM) targeted as the ultimate processing technology for the waste form. Idaho National Laboratory (INL) will conduct a scaled CCIM test in FY2012 with a glass ceramic to demonstrate the processing behavior. This Data Package documents the laboratory studies of the glass ceramic composition to support the CCIM test. Pacific Northwest National Laboratory (PNNL) measured melt viscosity, electrical conductivity, and crystallization behavior upon cooling to identify a processing window (temperature range) for melter operation and cooling profiles necessary to crystallize the targeted phases in the
Chemicals-Based Formulation Design: Virtual Experimentations
DEFF Research Database (Denmark)
Conte, Elisa; Gani, Rafiqul
2011-01-01
This paper presents a systematic procedure for virtual experimentations related to the design of liquid formulated products. All the experiments that need to be performed when designing a liquid formulated product (lotion), such as ingredients selection and testing, solubility tests, property...... measurements, can now be performed through the virtual Product-Process Design laboratory [[1], [2] and [3
Colorimetric analysis of hexachlorophene in topical formulations.
French, W N; Matsui, F; Smith, S J; Wood, R J
1975-01-01
The commonly used 4-aminoantipyrine dye formation procedure for hexachlorophene analysis in topical formulations was modified to overcome interference due to other components. Bar soaps and nonemulsion formulations are analyzed directly, employing a chloroform back-extraction stage of the dye prior to quantitation. Hexachlorophene in emulsions and liquid soaps is determined using a TLC separation prior to dye formation.
40 CFR 152.43 - Alternate formulations.
2010-07-01
40 CFR 152.43, Protection of Environment — ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), PESTICIDE PROGRAMS, PESTICIDE REGISTRATION AND CLASSIFICATION PROCEDURES, Registration Procedures, § 152.43 Alternate formulations.
Formulation and characterization of modified release tablets ...
African Journals Online (AJOL)
Formulation with 35% polymer content exhibited zero order release profile and it released 35% of the drug in first hr, later on, controlled drug release was observed upto the 12th hour. Formulations with PVAc to Na-CMC ratio 20:80 exhibited zero order release pattern at levels of studied concentrations, which suggested that ...
A New Resistance Formulation for Carbon Nanotubes
Directory of Open Access Journals (Sweden)
Ji-Huan He
2008-01-01
Full Text Available A new resistance formulation for carbon nanotubes is suggested using fractal approach. The new formulation is also valid for other nonmetal conductors including nerve fibers, conductive polymers, and molecular wires. Our theoretical prediction agrees well with experimental observation.
Diego Puga; Daniel Trefler
2009-01-01
Increasingly, a small number of low-wage countries such as China, India and Mexico are involved in incremental innovation. That is, they are responsible for resolving production-line bugs and suggesting product improvements. We provide evidence of this new phenomenon and develop a model in which there is a transition from old-style product-cycle trade to trade involving incremental innovation in low-wage countries. The model explains why levels of involvement in incremental innovation vary across ...
BMI and BMI SDS in childhood: annual increments and conditional change.
Brannsether, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Júlíusson, Pétur Benedikt
2017-02-01
Background Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods The distributions of 1-year increments of BMI (kg/m 2 ) and BMI SDS are summarised by percentiles. Differences according to sex, age, height, weight, initial BMI and weight status on the BMI and BMI SDS increments were assessed with multiple linear regression. Conditional change in BMI SDS was based on the correlation between annual BMI measurements converted to SDS. Results BMI increments depended significantly on sex, height, weight and initial BMI. Changes in BMI SDS depended significantly only on the initial BMI SDS. The distribution of conditional change in BMI SDS using a two-correlation model was close to normal (mean = 0.11, SD = 1.02, n = 1167), with 3.2% (2.3-4.4%) of the observations below -2 SD and 2.8% (2.0-4.0%) above +2 SD. Conclusion Conditional change in BMI SDS can be used to detect unexpected large changes in BMI SDS. Although this method requires the use of a computer, it may be clinically useful to detect aberrant weight development.
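The conditional-change approach summarised above reduces to a standard formula when a single correlation r between consecutive annual BMI SDS values is used (the abstract's two-correlation model is a refinement of this). A minimal sketch under that single-correlation assumption; the function name is ours, not from the paper:

```python
import math

def conditional_change(z1, z2, r):
    """Conditional change in BMI SDS: how unusual the follow-up SDS (z2)
    is given the baseline SDS (z1), where r is the correlation between
    annual BMI SDS measurements. Values beyond +/-2 flag unexpectedly
    large weight change. Single-correlation form (an assumption here)."""
    return (z2 - r * z1) / math.sqrt(1.0 - r ** 2)

# A child moving from +1 SDS to +2 SDS in one year, with r = 0.9:
print(round(conditional_change(1.0, 2.0, 0.9), 2))  # 2.52 -> notably large
```

A change that looks moderate on the raw SDS scale (+1.0) becomes conspicuous once conditioned on the high year-to-year correlation, which is the clinical point of the method.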
Even Highly Correlated Measures Can Add Incrementally to Predicting Recidivism among Sex Offenders
Babchishin, Kelly M.; Hanson, R. Karl; Helmus, Leslie
2012-01-01
Criterion-referenced measures, such as those used in the assessment of crime and violence, prioritize predictive accuracy (discrimination) at the expense of construct validity. In this article, we compared the discrimination and incremental validity of three commonly used criterion-referenced measures for sex offenders (Rapid Risk Assessment for…
Fan, Weiqiao; Zhang, Li-Fang; Watkins, David
2010-01-01
The study examined the incremental validity of thinking styles in predicting academic achievement after controlling for personality and achievement motivation in the hypermedia-based learning environment. Seventy-two Chinese college students from Shanghai, the People's Republic of China, took part in this instructional experiment. The…
DEFF Research Database (Denmark)
Basse-O'Connor, Andreas; Rosiński, Jan
2013-01-01
We characterize the finite variation property for stationary increment mixed moving averages driven by infinitely divisible random measures. Such processes include fractional and moving average processes driven by Levy processes, and also their mixtures. We establish two types of zero-one laws...
Lead 210 and moss-increment dating of two Finnish Sphagnum hummocks
International Nuclear Information System (INIS)
El-Daoushy, F.
1982-01-01
A comparison is presented of 210Pb dating data with moss-increment dates of selected peat material from Finland. The measurements of 210Pb were carried out by determining the granddaughter product 210Po by means of isotope dilution. The ages in 210Pb yr were calculated using the constant initial concentration and the constant rate of supply models. (U.K.)
40 CFR Table 3 to Subpart Ggg of... - Generic Compliance Schedule and Increments of Progress a
2010-07-01
Table 3 to Subpart GGG of Part 62, Protection of Environment — Generic Compliance Schedule and Increments of Progress. Table 3 of subpart GGG applies to landfills with design capacities ≥ 2.5 … and NMOC emissions ≥ 50 Mg/yr.
Directory of Open Access Journals (Sweden)
B.P.C. Smirmaul
2011-01-01
Full Text Available The aim of this study was to analyze the effects of music on physiological and psychophysiological responses, as well as on the maximum power output attained during an incremental test. A sample of 10 healthy individuals (20.8 ± 1.4 years, 77.0 ± 12.0 kg, 179.2 ± 6.3 cm) participated in this study. The electromyographic activity (Rectus Femoris (RF) and Vastus Lateralis (VL) muscles), heart rate (HR), rating of perceived exertion (RPE), ratings of perceived time (RPT) and the maximum power output attained (PMax) were recorded during music (WM) and without-music (WTM) conditions. The individuals completed four ramp-like maximal incremental tests (MIT) on a cycle simulator with an initial load of 100 W and increments of 10 W·min-1. The mean values of PMax under the WTM (260.5 ± 27.7 W) and WM (263.2 ± 17.2 W) conditions were not statistically different. The comparison of the rates of increase of the values expressed as root-mean-square (RMS) and median frequency (MF) for both muscles (RF and VL) also showed no statistical difference, nor did HR, RPE and RPT. It is concluded that the use of electronic music during an incremental test to exhaustion had no effect on the analyzed variables for the investigated group.
Compositional Temporal Analysis Model for Incremental Hard Real-Time System Design
Hausmans, J.P.H.M.; Geuns, S.J.; Wiggers, M.H.; Bekooij, Marco Jan Gerrit
2012-01-01
The incremental design and analysis of parallel hard real-time stream processing applications is hampered by the lack of an intuitive compositional temporal analysis model that supports arbitrary cyclic dependencies between tasks. This paper introduces a temporal analysis model for hard real-time
The Interpersonal Measure of Psychopathy: Construct and Incremental Validity in Male Prisoners
Zolondek, Stacey; Lilienfeld, Scott O.; Patrick, Christopher J.; Fowler, Katherine A.
2006-01-01
The authors examined the construct and incremental validity of the Interpersonal Measure of Psychopathy (IM-P), a relatively new instrument designed to detect interpersonal behaviors associated with psychopathy. Observers of videotaped Psychopathy Checklist-Revised (PCL-R) interviews rated male prisoners (N = 93) on the IM-P. The IM-P correlated…
Sieberling, S.; Chu, Q.P.; Mulder, J.A.
2010-01-01
This paper presents a flight control strategy based on nonlinear dynamic inversion. The approach presented, called incremental nonlinear dynamic inversion, uses properties of general mechanical systems and nonlinear dynamic inversion by feeding back angular accelerations. Theoretically, feedback of angular accelerations eliminates sensitivity to model mismatch, greatly increasing the robust performance of the system compared with conventional nonlinear dynamic inversion. However, angular acce...
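The core of the incremental scheme described above is that the controller commands an increment on the previous control input, driven by the gap between commanded and measured angular acceleration, rather than inverting the full vehicle model. A single-axis scalar sketch (the function name and the constant control-effectiveness parameter are our illustrative assumptions, not from the paper):

```python
def indi_increment(u_prev, omega_dot_meas, omega_dot_cmd, control_effectiveness):
    """One incremental nonlinear dynamic inversion (INDI) step for a single
    rotational axis: only the control INCREMENT is computed, from the
    difference between desired and measured angular acceleration.
    control_effectiveness approximates d(omega_dot)/du around the current
    operating point (assumed known here)."""
    return u_prev + (omega_dot_cmd - omega_dot_meas) / control_effectiveness

# Vehicle currently accelerates at 0.2 rad/s^2, 1.0 rad/s^2 is commanded,
# effectiveness 4.0 (rad/s^2 per unit deflection):
u = indi_increment(u_prev=0.05, omega_dot_meas=0.2, omega_dot_cmd=1.0,
                   control_effectiveness=4.0)
print(u)  # 0.25
```

Because the model enters only through the local control effectiveness, errors elsewhere in the dynamics are fed back via the measured acceleration, which is the robustness argument the abstract makes.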
A power-driven increment borer for sampling high-density tropical wood
Czech Academy of Sciences Publication Activity Database
Krottenthaler, S.; Pitsch, P.; Helle, G.; Locosselli, G. M.; Ceccantini, G.; Altman, Jan; Svoboda, M.; Doležal, Jiří; Schleser, G.; Anhuf, D.
2015-01-01
Roč. 36, November (2015), s. 40-44 ISSN 1125-7865 R&D Projects: GA ČR GAP504/12/1952; GA ČR(CZ) GA14-12262S Institutional support: RVO:67985939 Keywords : tropical dendrochronology * tree sampling methods * increment cores Subject RIV: EF - Botanics Impact factor: 2.107, year: 2015
Zheng, Lingyun; Silliman, S.E.
2000-01-01
A modification of previously published solutions regarding the spatial variation of hydraulic heads is discussed, whereby the semivariogram of increments of head residuals (termed head residual increments, HRIs) is related to the variance and integral scale of the transmissivity field. A first-order solution is developed for the case of a transmissivity field which is isotropic and whose second-order behavior can be characterized by an exponential covariance structure. The estimates of the variance σY² and the integral scale λ of the log transmissivity field are then obtained by fitting a theoretical semivariogram for the HRI to its sample semivariogram. This approach is applied to head data sampled from a series of two-dimensional, simulated aquifers with isotropic, exponential covariance structures and varying degrees of heterogeneity (σY² = 0.25, 0.5, 1.0, 2.0, and 5.0). The results show that this method provided reliable estimates of both λ and σY² in aquifers with values of σY² up to 2.0, but the errors in those estimates were higher for σY² equal to 5.0. It is also demonstrated through numerical experiments and theoretical arguments that the head residual increments will provide a sample semivariogram with a lower variance than will the use of the head residuals without calculation of increments.
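The sample semivariogram that the HRI method fits against its theoretical counterpart has a simple method-of-moments form. A minimal one-dimensional sketch (the paper works with two-dimensional fields and an exponential model; function name and the equal-spacing assumption are ours):

```python
def sample_semivariogram(values, max_lag):
    """Sample semivariogram gamma(h) = 0.5 * mean((v[i+h] - v[i])**2)
    for equally spaced data. Applied to head residual increments (HRIs),
    this is the quantity fitted to a theoretical semivariogram to
    estimate the log-transmissivity variance and integral scale."""
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [(values[i + h] - values[i]) ** 2
                 for i in range(len(values) - h)]
        gamma[h] = 0.5 * sum(diffs) / len(diffs)
    return gamma

# Alternating series: adjacent points differ by 1, lag-2 points are equal.
g = sample_semivariogram([0.0, 1.0, 0.0, 1.0, 0.0, 1.0], max_lag=2)
print(g[1], g[2])  # 0.5 0.0
```

In practice the HRIs themselves (first differences of the head residuals) would be passed in, which is what lowers the variance of the sample semivariogram relative to using raw residuals.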
A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments
Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco
2016-04-01
We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random field, and explore them on one- and two-dimensional synthetic test cases.
Comparision of Perturb and Observer and Incremental Conductance MPPT Based Solar Tracking System
Nilam Rajendra Deshmukh
2015-01-01
This paper presents a detailed analysis of the two most well-known hill-climbing maximum power point tracking (MPPT) algorithms: perturb-and-observe (P&O) and incremental conductance (INC). The purpose of the analysis is to clarify some common misconceptions in the literature regarding these two trackers, thereby helping the selection process of MPPT algorithms.
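The hill-climbing logic shared by both trackers is compact enough to sketch. Below is one P&O iteration in its textbook form (INC differs by testing the condition dI/dV = -I/V at the maximum power point instead of comparing successive powers); the function name and fixed step size are illustrative assumptions:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.1):
    """One perturb-and-observe (P&O) MPPT iteration: keep perturbing the
    operating voltage in the direction that increased power; reverse the
    perturbation direction if power dropped. Returns the next voltage
    command. Fixed step size is a simplifying assumption."""
    if p - p_prev >= 0:
        direction = 1 if v - v_prev >= 0 else -1   # power rose: keep going
    else:
        direction = -1 if v - v_prev >= 0 else 1   # power fell: reverse
    return v + direction * step

# Power rose after a voltage increase, so the tracker keeps climbing:
print(perturb_and_observe(v=10.1, p=50.5, v_prev=10.0, p_prev=50.0))  # 10.2
```

This fixed-step form also exhibits the well-known steady-state oscillation around the maximum power point, one of the behaviors such comparative analyses examine.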
Fabian C.C. Uzoh
2001-01-01
A height increment equation was used to determine the effects of site quality and competing herbaceous vegetation on the development of ponderosa pine seedlings (Pinus ponderosa var. scopulorum Engelm.). Study areas were established in 36 plantations across northwest and west-central Montana on Champion International Corporation's timberland (...
Periodic Annual Diameter Increment After Overstory Removal in Mixed Conifer Stands
Fabian C.C. Uzoh; K. Leroy Dolph; John R. Anstead
1998-01-01
Diameter growth rates of understory trees were measured for periods both before and after overstory removal on six study areas in northern California. All the species responded with increased diameter growth after adjusting to their new environments. Linear regression equations that predict periodic annual increment of the diameters of the residual trees after...
Joint Space Operations Center (JSpOC) Mission System Increment 3 (JMS Inc 3)
2016-03-01
Component Command (JFCC) Space, to make rapid, responsive decisions for the protection of space assets from proliferating threats (adversary as well as … orbiting debris). JMS Increment-1 provided the foundational infrastructure, service-oriented architecture, and user-defined operational picture. JMS …
Gradient nanostructured surface of a Cu plate processed by incremental frictional sliding
DEFF Research Database (Denmark)
Hong, Chuanshi; Huang, Xiaoxu; Hansen, Niels
2015-01-01
The flat surface of a Cu plate was processed by incremental frictional sliding at liquid nitrogen temperature. The surface treatment results in a hardened gradient surface layer as thick as 1 mm in the Cu plate, which contains a nanostructured layer on the top with a boundary spacing of the order...
Antonietti, Alessandro; Balconi, Michela
2010-06-01
The step-by-step, incremental nature of analogical reasoning can be questioned, since analogy making appears to be an insight-like process. This alternative view of analogical thinking can be integrated into Speed's model, even though the alleged role played by dopaminergic subcortical circuits needs further supporting evidence.
Directory of Open Access Journals (Sweden)
Yanhui Wang
2018-01-01
Full Text Available Detecting and extracting the change types of spatial area objects can track area objects' spatiotemporal change patterns and provide a change-backtracking mechanism for incrementally updating spatial datasets. To address the problems of highly complex detection methods, a high redundancy rate of detection factors, and the low degree of automation in the incremental update process, we consider the change process of area objects in an integrated way and propose a hierarchical matching method to detect the nine types of changes of area objects, while minimizing the complexity of the algorithm and the redundancy rate of detection factors. We illustrate in detail the identification, extraction, and database entry of change types, and how we achieve a close connection and organic coupling of incremental information extraction and object type-of-change detection so as to characterize the whole change process. The experimental results show that this method can successfully detect incremental information about area objects in practical applications, with overall accuracy above 90%, which is much higher than that of the existing weighted matching method, making it feasible and applicable. It helps establish the correspondence between new-version and old-version objects, and facilitates linked update processing and quality control of spatial data.
International Nuclear Information System (INIS)
Mabit, L.; Toloza, A.; Meusburger, K.; Alewell, C.; Iurian, A-R.; Owens, P.N.
2014-01-01
Soil and sediment related research for terrestrial agri-environmental assessments requires accurate depth-incremental sampling to perform detailed analysis of physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow collecting soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”
Directory of Open Access Journals (Sweden)
Andrew I. Chin
2017-09-01
Discussion: More than 50% of incident HD patients with RKF have adequate kidney urea clearance to be considered for 2-times weekly HD. When additionally ultrafiltration volume and blood pressure stability are taken into account, more than one-fourth of the total cohort could optimally start HD in an incremental fashion.
DEFF Research Database (Denmark)
Wang, Ting; Guan, Sheng-Uei; Puthusserypady, Sadasivan
2014-01-01
Feature ordering is a significant data preprocessing method in Incremental Attribute Learning (IAL), a novel machine learning approach which gradually trains features according to a given order. Previous research has shown that, similar to feature selection, feature ordering is also important based...
Literature Review of Data on the Incremental Costs to Design and Build Low-Energy Buildings
Energy Technology Data Exchange (ETDEWEB)
Hunt, W. D.
2008-05-14
This document summarizes findings from a literature review into the incremental costs associated with low-energy buildings. The goal of this work is to help establish as firm an analytical foundation as possible for the Building Technology Program's cost-effective net-zero energy goal in the year 2025.
The effects of the pine processionary moth on the increment of ...
African Journals Online (AJOL)
2009-05-18
… sycophanta L. (Coleoptera: Carabidae) used against the pine processionary moth (Thaumetopoea pityocampa Den. & Schiff.) (Lepidoptera: Thaumetopoeidae) in biological control. T. J. Zool. 30:181-185. Kanat M, Sivrikaya F (2005). Effect of the pine processionary moth on diameter increment of Calabrian ...
Kok, R.A.W.; Ligthart, P.E.M.
2014-01-01
This study seeks to explain the differential effects of workforce flexibility on incremental and major new product development (NPD). Drawing on the resource-based theory of the firm, human resource management research, and innovation management literature, the authors distinguish two types of
Time-incremental creep–fatigue damage rule for single crystal Ni-base superalloys
W.A.M. Brekelmans; T. Tinga; M.G.D. Geers
2009-01-01
In the present paper a damage model for single crystal Ni-base superalloys is proposed that integrates time-dependent and cyclic damage into a generally applicable time-incremental damage rule. A criterion based on the Orowan stress is introduced to detect slip reversal on the microscopic level
Time-incremental creep–fatigue damage rule for single crystal Ni-base superalloys
Tinga, Tiedo; Brekelmans, W.A.M.; Geers, M.G.D.
2009-01-01
In the present paper a damage model for single crystal Ni-base superalloys is proposed that integrates time-dependent and cyclic damage into a generally applicable time-incremental damage rule. A criterion based on the Orowan stress is introduced to detect slip reversal on the microscopic level and
Root, Liz; Van Der Krabben, Erwin; Spit, Tejo
2015-01-01
The aim of the paper is to assess the institutional (mis)fit of tax increment financing for the Dutch spatial planning financial toolkit. By applying an institutionally oriented assessment framework, we analyse the interconnectivity of Dutch municipal finance and spatial planning structures and
Andrei, Federica; Smith, Martin M.; Surcinelli, Paola; Baldaro, Bruno; Saklofske, Donald H.
2016-01-01
This study investigated the structure and validity of the Italian translation of the Trait Emotional Intelligence Questionnaire. Data were self-reported from 227 participants. Confirmatory factor analysis supported the four-factor structure of the scale. Hierarchical regressions also demonstrated its incremental validity beyond demographics, the…
Equity and Entrepreneurialism: The Impact of Tax Increment Financing on School Finance.
Weber, Rachel
2003-01-01
Describes tax increment financing (TIF), an entrepreneurial strategy with significant fiscal implications for overlapping taxing jurisdictions that provide these functions. Statistical analysis of TIF's impact on the finances of one Illinois county's school districts indicates that municipal use of TIF depletes the property tax revenues of schools…
A Self-Organizing Incremental Neural Network based on local distribution learning.
Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi
2016-12-01
In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically and the number of nodes will not grow unlimitedly. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.
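The vigilance-driven node insertion at the heart of such self-organizing incremental networks can be illustrated in a few lines. This is a deliberately simplified, hypothetical sketch of the idea only (the published LD-SOINN additionally learns local distributions, merges nodes by principal-component similarity, and denoises by density):

```python
def incremental_fit(samples, vigilance):
    """Toy sketch of vigilance-based incremental learning on 1-D data:
    a sample far from every existing node (distance > vigilance) spawns
    a new node; otherwise the nearest node adapts toward the sample.
    Illustrative only -- NOT the published LD-SOINN algorithm."""
    nodes = []
    for x in samples:
        if not nodes:
            nodes.append(x)
            continue
        nearest = min(nodes, key=lambda n: abs(n - x))
        if abs(nearest - x) > vigilance:
            nodes.append(x)                      # novel input: add a node
        else:
            i = nodes.index(nearest)             # familiar input: adapt
            nodes[i] = nearest + 0.5 * (x - nearest)
    return nodes

# Two well-separated clusters yield two nodes, however many samples arrive:
print(incremental_fit([0.0, 0.1, 5.0, 5.1], vigilance=1.0))
```

The vigilance threshold is what keeps the node count bounded: only genuinely novel inputs grow the network, while repeated inputs refine existing nodes.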
International Nuclear Information System (INIS)
Jansohn, W.
1997-10-01
This report deals with the formulation and numerical integration of constitutive models in the framework of finite deformation thermomechanics. Based on the concept of dual variables, plasticity and viscoplasticity models exhibiting nonlinear kinematic hardening as well as nonlinear isotropic hardening rules are presented. Care is taken that the evolution equations governing the hardening response fulfill the intrinsic dissipation inequality in every admissible process. In view of the development of an efficient numerical integration procedure, simplified versions of these constitutive models are supposed. In these versions, the thermoelastic strains are assumed to be small and a simplified kinematic hardening rule is considered. Additionally, in view of an implementation into the ABAQUS finite element code, the elasticity law is approximated by a hypoelasticity law. For the simplified constitutive models, an implicit time-integration algorithm is developed. First, in order to obtain a numerically objective integration scheme, use is made of the HUGHES-WINGET algorithm. In the resulting system of ordinary differential equations, three differential operators representing different physical effects can be distinguished. The structure of this system of differential equations allows an operator split scheme to be applied, which leads to an efficient integration scheme for the constitutive equations. By linearizing the integration algorithm, the consistent tangent modulus is derived. In this way, the quadratic convergence of Newton's method used to solve the basic finite element equations (i.e. the finite element discretization of the governing thermomechanical field equations) is preserved. The resulting integration scheme is implemented as a user subroutine UMAT in ABAQUS. The properties of the applied algorithm are first examined by test calculations on a single element under tension-compression loading. For demonstrating the capabilities of the constitutive theory
Kacarab, M.; Li, L.; Carter, W. P. L.; Cocker, D. R., III
2015-12-01
Two surrogate reactive organic gas (ROG) mixtures were developed to create a controlled reactivity environment simulating different urban atmospheres with varying levels of anthropogenic (e.g. Los Angeles reactivity) and biogenic (e.g. Atlanta reactivity) influences. Traditional chamber experiments focus on the oxidation of one or two volatile organic compound (VOC) precursors, allowing the reactivity of the system to be dictated by those compounds. Surrogate ROG mixtures control the overall reactivity of the system, allowing for the incremental aerosol formation from an added VOC to be observed. The surrogate ROG mixtures were developed based on that used to determine maximum incremental reactivity (MIR) scales for O3 formation from VOC precursors in a Los Angeles smog environment. Environmental chamber experiments were designed to highlight the incremental aerosol formation in the simulated environment due to the addition of an added anthropogenic (aromatic) or biogenic (terpene) VOC. All experiments were conducted in the UC Riverside/CE-CERT dual 90 m³ environmental chambers. It was found that the aerosol precursors behaved differently under the two altered reactivity conditions, with more incremental aerosol being formed in the anthropogenic ROG system than in the biogenic ROG system. Further, the biogenic reactivity condition inhibited the oxidation of added anthropogenic aerosol precursors, such as m-xylene. Data will be presented on aerosol properties (density, volatility, hygroscopicity) and bulk chemical composition in the gas and particle phases (from a SYFT Technologies selected ion flow tube mass spectrometer, SIFT-MS, and Aerodyne high resolution time of flight aerosol mass spectrometer, HR-ToF-AMS, respectively) comparing the two controlled reactivity systems and single precursor VOC/NOx studies. Incremental aerosol yield data at different controlled reactivities provide a novel and valuable insight into the attempt to extrapolate environmental chamber
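The incremental-yield bookkeeping the abstract describes reduces to a simple ratio: the extra aerosol mass formed relative to the surrogate-mixture baseline, per unit of added VOC. A sketch, with hypothetical numbers rather than the chamber data:

```python
def incremental_aerosol_yield(soa_with_voc_ugm3, soa_base_ugm3, delta_voc_ugm3):
    """Incremental SOA yield: extra aerosol mass formed per unit of added
    VOC, relative to the surrogate-ROG baseline experiment. All inputs are
    in ug/m^3; the values used below are illustrative, not measured data."""
    return (soa_with_voc_ugm3 - soa_base_ugm3) / delta_voc_ugm3
```

For example, if the surrogate mixture alone produced 30 ug/m³ of aerosol and adding 100 ug/m³ of m-xylene raised that to 42 ug/m³, the incremental yield would be 0.12.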
Endogenous-cue prospective memory involving incremental updating of working memory: an fMRI study.
Halahalli, Harsha N; John, John P; Lukose, Ammu; Jain, Sanjeev; Kutty, Bindu M
2015-11-01
Prospective memory paradigms are conventionally classified on the basis of event-, time-, or activity-based intention retrieval. In the vast majority of such paradigms, intention retrieval is provoked by some kind of external event. However, prospective memory retrieval cues that prompt intention retrieval in everyday life are commonly endogenous, i.e., linked to a specific imagined retrieval context. We describe herein a novel prospective memory paradigm wherein the endogenous cue is generated by incremental updating of working memory, and investigated the hemodynamic correlates of this task. Eighteen healthy adult volunteers underwent functional magnetic resonance imaging while they performed a prospective memory task where the delayed intention was triggered by an endogenous cue generated by incremental updating of working memory. Working memory and ongoing task control conditions were also administered. The 'endogenous-cue prospective memory condition' with incremental working memory updating was associated with maximum activations in the right rostral prefrontal cortex, and additional activations in the brain regions that constitute the bilateral fronto-parietal network, central and dorsal salience networks as well as cerebellum. In the working memory control condition, maximal activations were noted in the left dorsal anterior insula. Activation of the bilateral dorsal anterior insula, a component of the central salience network, was found to be unique to this 'endogenous-cue prospective memory task' in comparison to previously reported exogenous- and endogenous-cue prospective memory tasks without incremental working memory updating. Thus, the findings of the present study highlight the important role played by the dorsal anterior insula in incremental working memory updating that is integral to our endogenous-cue prospective memory task.
Omran, Tarek A; Garoushi, Sufyan; Abdulmajeed, Aous A; Lassila, Lippo V; Vallittu, Pekka K
2017-06-01
Bulk-fill resin composites (BFCs) are gaining popularity in restorative dentistry due to the reduced chair time and ease of application. This study aimed to evaluate the influence of increment thickness on dentin bond strength and light transmission of different BFCs and a new discontinuous fiber-reinforced composite. One hundred eighty extracted sound human molars were prepared for a shear bond strength (SBS) test. The teeth were divided into four groups (n = 45) according to the resin composite used: regular particulate filler resin composite: (1) G-ænial Anterior [GA] (control); bulk-fill resin composites: (2) Tetric EvoCeram Bulk Fill [TEBF] and (3) SDR; and discontinuous fiber-reinforced composite: (4) everX Posterior [EXP]. Each group was subdivided according to increment thickness (2, 4, and 6 mm). The irradiance power through the material of all groups/subgroups was quantified (MARC® Resin Calibrator; BlueLight Analytics Inc.). Data were analyzed using two-way ANOVA followed by Tukey's post hoc test. SBS and light irradiance decreased as the increment's height increased (p < 0.05), regardless of the composite used. EXP presented the highest SBS in 2- and 4-mm-thick increments when compared to other composites, although the differences were not statistically significant (p > 0.05). Light irradiance mean values differed significantly (p < 0.05) among the composites. Discontinuous fiber-reinforced composite showed the highest value of curing light transmission, which was also seen in improved bonding strength to the underlying dentin surface. Discontinuous fiber-reinforced composite can be applied safely in bulks of 4-mm increments, the same as other bulk-fill composites, although, in 2-mm thickness, the investigated composites showed better performance.
Isotretinoin Oil-Based Capsule Formulation Optimization
Directory of Open Access Journals (Sweden)
Pi-Ju Tsai
2013-01-01
Full Text Available The purpose of this study was to develop and optimize an isotretinoin oil-based capsule with a specific dissolution pattern. A three-factor-constrained mixture design was used to prepare the systemic model formulations. The independent factors were the components of the oil-based capsule: beeswax (X1), hydrogenated coconut oil (X2), and soybean oil (X3). The drug release percentages at 10, 30, 60, and 90 min were selected as responses. The effect of the formulation factors on the responses was inspected using response surface methodology (RSM). Multiple-response optimization was performed to search for an appropriate formulation with the specific release pattern. It was found that the interaction effects of the formulation factors (X1X2, X1X3, and X2X3) showed more potential influence than the main factors (X1, X2, and X3). An optimal predicted formulation with Y10 min, Y30 min, Y60 min, and Y90 min release values of 12.3%, 36.7%, 73.6%, and 92.7% at X1, X2, and X3 of 5.75, 15.37, and 78.88, respectively, was developed. The new formulation was prepared and subjected to the dissolution test. The similarity factor f2 was 54.8, indicating that the dissolution pattern of the new optimized formulation was equivalent to the predicted profile.
Formulation of heat absorbing glasses
Directory of Open Access Journals (Sweden)
Álvarez-Casariego, Pedro
1996-06-01
Full Text Available In the thermal exchanges between buildings and environment, glazing is an element of major importance, for it largely influences the so-called Solar Heat Gain and Thermal Losses. These parameters can be modified by applying different type of coatings onto glass surface or by adding colorant compounds during glass melting.
The latter is a cheaper way to control the Solar Heat Gain. The knowledge of the laws governing the interaction between colorant compounds and solar radiation, allows us to define glass formulations achieving specific aesthetic requirements and solar energy absorption.
In this paper, two examples of application of the modelling of the spectral absorptance of glass colorants are presented. The first is aimed at obtaining a glass with high luminous transmittance and low solar energy transmittance, and the second at obtaining a glass with a neutral colour appearance and minimized solar energy transmittance. The calculation formulas are defined together with the photometric properties so obtained. These types of glasses are particularly suitable for use as building and automotive glazing, for they retain the mechanical characteristics and transformation possibilities of standard glass.
Canonical operator formulation of nonequilibrium thermodynamics
International Nuclear Information System (INIS)
Mehrafarin, M.
1992-09-01
A novel formulation of nonequilibrium thermodynamics is proposed which emphasises the fundamental role played by the Boltzmann constant k in fluctuations. The equivalence of this and the stochastic formulation is demonstrated. The k → 0 limit of this theory yields the classical deterministic description of nonequilibrium thermodynamics. The new formulation possesses unique features which yield two important results, namely the thermodynamic uncertainty principle and the quantisation of the entropy production rate. Such a theory becomes indispensable whenever fluctuations play a significant role. (author). 7 refs
Preformulation and Formulation of Investigational New Drugs
1985-07-01
This annual report contains preformulation and formulation studies on the investigational drug candidates WR249,655, WR238,605, and WR171,669·HCl.
García-García, Idrian; Hernández-González, Ignacio; Díaz-Machado, Alina; González-Delgado, Carlos A; Pérez-Rodríguez, Sonia; García-Vega, Yanelda; Campos-Mojena, Rosario; Tuero-Iglesias, Ángela D; Valenzuela-Silva, Carmen M; Cruz-Ramírez, Alieski; Martín-Trujillo, Alis; Santana-Milián, Héctor; López-Saura, Pedro A; Bello-Rivero, Iraldo
2016-12-07
More potent antitumor activity is desired in interferon (IFN)-treated cancer patients. This could be achieved by combining IFN alpha and IFN gamma. The aim of this work was to characterize the pharmacokinetics and pharmacodynamics of a novel formulation containing a co-formulated combination of IFNs alpha-2b and gamma (CIGB-128-A). A group of nine healthy male subjects received 24.5 × 10^6 IU of CIGB-128-A intramuscularly. IFN concentrations were evaluated for 48 h. Serum neopterin, beta2-microglobulin (β2M) and 2'-5' oligoadenylate synthetase (2'-5' OAS), classical IFN-inducible serum markers, were measured over 192 h by enzyme immunoassay, and body temperature was used as a pharmacodynamic variable as well. Concerning pharmacokinetics, the serum IFN profiles were better fitted by a mono-compartmental model with consecutive zero-order and first-order absorption and a single bioavailability value. No interference between the simultaneously administered IFNs was observed in their typically similar systemic profiles. The neopterin and β2M time profiles showed a delay that was efficiently linked to the pharmacokinetics by means of a zero-order absorption rate constant. The neopterin level was nine-fold higher than initial values 48 h post-administration, an increment not described before. At this time, mean serum β2M peaked at around double the baseline. Serum concentrations of the enzyme 2'-5' OAS were still elevated on day 8 post-injection. The formulation was well tolerated. The most frequent adverse reactions were fever, headache, arthralgia and lymphopenia, mostly mild. The administration of co-formulated IFN alpha-2b and IFN gamma likely provides improved pharmacodynamic properties that may be beneficial for treating several malignancies. Cuban Public Registry of Clinical Trials RPCEC00000118, May 24, 2011.
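The absorption model described above (consecutive zero-order then first-order input into one compartment) can be sketched with a simple Euler integration. All parameter values below (absorption split, rate constants, volume) are illustrative assumptions, not the fitted PK parameters from the study; only the dose echoes the abstract.

```python
def simulate_conc(dose=24.5e6, frac_zero=0.5, t_zero=4.0, ka=0.3, ke=0.15,
                  v=3000.0, t_end=48.0, dt=0.01):
    """Euler simulation of a one-compartment model with consecutive
    zero-order (duration t_zero, hours) then first-order (rate ka, 1/h)
    absorption, elimination rate ke (1/h), volume v (mL). All parameter
    values are illustrative assumptions, not fitted data."""
    k0 = frac_zero * dose / t_zero      # zero-order input rate (IU/h)
    depot = (1.0 - frac_zero) * dose    # amount left for the first-order phase
    central, t, conc = 0.0, 0.0, []
    while t < t_end:
        if t < t_zero:
            inflow = k0                 # constant (zero-order) input
        else:
            inflow = ka * depot         # first-order input from the depot
            depot -= inflow * dt
        central += (inflow - ke * central) * dt
        conc.append(central / v)        # concentration in IU/mL
        t += dt
    return conc
```

The resulting profile rises during and after the zero-order phase, peaks, and then decays with the elimination rate, the qualitative shape the abstract's model implies.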
Abrasive Waterjet (AWJ) Titanium Tangental Turning Evaluation
Czech Academy of Sciences Publication Activity Database
Hloch, Sergej; Hlaváček, Petr; Vasilko, K.; Cárach, J.; Samardžič, I.; Kozak, D.; Ščučka, Jiří; Klich, Jiří; Klichová, Dagmar
2014-01-01
Roč. 53, č. 4 (2014), s. 537-540 ISSN 0543-5846 R&D Projects: GA MŠk ED2.1.00/03.0082 Institutional support: RVO:68145535 Keywords : titanium * abrasive waterjet turning * traverse speed Subject RIV: JQ - Machines ; Tools Impact factor: 0.959, year: 2014 http://public.carnet.hr/metalurg/Metalurgija/2014_vol_53/No_4/MET_53_4_537-540_Hloch.pdf
Hamiltonian formulation for conformal p-branes
Energy Technology Data Exchange (ETDEWEB)
Alvear, C.; Amorim, R.; Barcelos-Neto, J. (Inst. de Fisica, Univ. Federal do Rio de Janeiro (Brazil))
1991-12-26
We study the hamiltonian formulation for conformal p-branes. The difficulties which could arise from the substitution of velocities in terms of momenta, due to the nonlinearity of the theory, are circumvented. (orig.).
A Hamiltonian formulation for elasticity and thermoelasticity
Maugin, G A
2002-01-01
A Hamiltonian formulation for elasticity and thermoelasticity is proposed and its relation with the corresponding configurational setting is examined. Firstly, a variational principle concerning the 'inverse motion' mapping is formulated and the corresponding Euler-Lagrange equations are explored. Next, this Lagrangian formulation is used to define the Hamiltonian density function. The equations of Hamilton are derived in a form very similar to that of the corresponding equations in particle mechanics (the finite-dimensional case). From the Hamiltonian formulation it follows that the canonical momentum is identified with the pseudomomentum. Furthermore, a meaning for the Poisson bracket is defined and the entailed relations with the canonical variables as well as the balance laws are examined.
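The particle-mechanics analogy mentioned in the abstract can be made explicit in schematic form. The following is only the generic canonical scheme, with χ standing for the inverse-motion field and p for the canonical momentum (identified with the pseudomomentum), written as an illustration rather than the paper's exact field equations:

```latex
\dot{\chi} \;=\; \frac{\delta H}{\delta p}, \qquad
\dot{p} \;=\; -\,\frac{\delta H}{\delta \chi}, \qquad
\dot{F} \;=\; \{F, H\} \quad \text{for any functional } F(\chi, p),
```

where H is the Hamiltonian density functional and {·,·} the Poisson bracket whose meaning the paper defines.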
Understanding Pesticide Risks: Toxicity and Formulation
Muntz, Helen; Miller, Rhonda; Alston, Diane
2016-01-01
This fact sheet provides information about pesticide risks to human health, primary means of pesticide exposure, standardized measures of pesticide toxicity, pesticide signal words, and types of pesticide formulations.
Formulation and Characterization of Biodegradable Medicated ...
African Journals Online (AJOL)
PEG)-600, tributyl citrate, PEG-200, PEG-300, PEG-400, PEG-4000, triethyl citrate and castor oil. The gum formulations were characterized for the following parameters: texture profile analysis (TPA), biodegradation, in vitro drug release using a ...
Chemical-Based Formulation Design: Virtual Experimentation
DEFF Research Database (Denmark)
Conte, Elisa; Gani, Rafiqul
This paper presents a software tool, the virtual Product-Process Design laboratory (virtual PPD-lab), and the virtual experimental scenarios for design/verification of consumer oriented liquid formulated products where the software can be used. For example, the software can be employed for the design......, the additives and/or their mixtures (formulations). Therefore, the experimental resources can focus on a few candidate product formulations to find the best product. The virtual PPD-lab allows various options for experimentations related to design and/or verification of the product. For example, the selection...... design, model adaptation). All of the above helps to perform virtual experiments by blending chemicals together and observing their predicted behaviour. The paper will highlight the application of the virtual PPD-lab in the design and/or verification of different consumer products (paint formulation...
Concepts and formulations for spatial multibody dynamics
Flores, Paulo
2015-01-01
This book will be particularly useful to those interested in multibody simulation (MBS) and the formulation for the dynamics of spatial multibody systems. The main types of coordinates that can be used in the formulation of the equations of motion of constrained multibody systems are described. The multibody system, made of interconnected bodies that undergo large displacements and rotations, is fully defined. Readers will discover how Cartesian coordinates and Euler parameters are utilized and are the supporting structure for all methodologies and dynamic analysis, developed within the multibody systems methodologies. The work also covers the constraint equations associated with the basic kinematic joints, as well as those related to the constraints between two vectors. The formulation of multibody systems adopted here uses the generalized coordinates and the Newton-Euler approach to derive the equations of motion. This formulation results in the establishment of a mixed set of differential and algebraic equ...
The Boltzmann equation in the difference formulation
Energy Technology Data Exchange (ETDEWEB)
Szoke, Abraham [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brooks III, Eugene D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-05-06
First we recall the assumptions that are needed for the validity of the Boltzmann equation and for the validity of the compressible Euler equations. We then present the difference formulation of these equations and make a connection with the time-honored Chapman-Enskog expansion. We discuss the hydrodynamic limit and calculate the thermal conductivity of a monatomic gas, using a simplified approximation for the collision term. Our formulation is more consistent and simpler than the traditional derivation.
Preformulation and Formulation Investigational New Drugs
1990-07-01
AD-A252 024. Contract No. DAMD17-85-C-5003. Title: Preformulation and Formulation of Investigational New Drugs. Principal Investigator: Douglas R. Flanagan. Authors: Douglas R. Flanagan, John L. Lach (deceased), and others. Keywords: formulation studies, clinical studies.
Note on the ideal frame formulation
Lara, Martin
2017-09-01
An implementation of the ideal frame formulation of perturbed Keplerian motion is presented which only requires the integration of a differential system of dimension 7, contrary to the 8 variables traditionally integrated with this approach. The new formulation is based on the integration of a scaled version of the Eulerian set of redundant parameters and slightly improves runtime performance with respect to the 8-dimensional case while retaining comparable accuracy.
2013-08-01
..., and Treatment Plans for Use in Making Incremental Dental Appliances, the Appliances Made Therefrom... models, digital data, and treatment plans for use in making incremental dental appliances, the appliances...), which was previously performed in an analog manner, the type of advance which does not render the...
2013-08-01
..., and Treatment Plans for Use in Making Incremental Dental Appliances, the Appliances Made Therefrom..., digital data, and treatment plans for use in making incremental dental appliances, the appliances made...), which was previously performed in an analog manner, the type of advance which does not render the...
2012-04-05
... COMMISSION Certain Digital Models, Digital Data, and Treatment Plans for Use in Making Incremental Dental... of certain digital models, digital data, and treatment plans for use in making incremental dental... importation, or the sale within the United States after importation of certain digital models, digital data...
An herbal formulation for hemorrhoids treatment
Directory of Open Access Journals (Sweden)
S. Dehdari
2017-11-01
Full Text Available Background and objectives: Hemorrhoid disease is the most painful rectal disease. Straining and pregnancy seem to play chief roles in the development of hemorrhoids. Symptoms of hemorrhoids may include bleeding, inflammation and pain. Despite current medical efforts, many discomforts of hemorrhoids have not been addressed. The aim of the present study was to formulate and evaluate Itrifal-e muqil (IM) tablets to achieve the desired pharmaceutical properties. Method: Quality control tests of Allium ampeloprasum L., Commiphora mukul (Hook. ex Stocks) Engl., Phyllanthus emblica L., Terminalia chebula Retz. and Terminalia bellerica Retz. were performed. Afterwards, different formulations were prepared and their physical properties were evaluated. Subsequently, the formulation was coated and its physicochemical characteristics were assessed. Result: All of the herbs demonstrated good results in quality control tests according to the United States Pharmacopeia (USP). Formulation-1, which was prepared entirely according to the manufacturing process of IM described in traditional medicine manuscripts, did not show suitable pharmaceutical properties. Among the different formulations, Formulation-3, which consisted of A. ampeloprasum, C. mukul, P. emblica, T. chebula and T. bellerica, displayed the best outcomes in the different tests. Conclusion: Modern pharmaceutical approaches can be excellently adapted for IM preparations.
Microbiological quality of pediatric oral liquid formulations
Directory of Open Access Journals (Sweden)
Maria Josep Cabañas Poy
2016-09-01
Full Text Available The oral administration of drugs to the pediatric population involves the extemporaneous preparation of liquid formulations. These formulations have studies on their physicochemical stability, but they often lack microbiological studies. The objective of this study was to check the microbiological quality of five oral liquid formulations prepared with different excipients, representing five major combinations, under two conditions: kept unopened until the day of the test, and in a multi-dose vial opened daily. The formulations were prepared according to standard operating procedures. Half of each batch was packaged in vials that remained closed until the day of testing, and the other half in a single container which was opened daily. Both the vials and the containers had been previously sterilized. Microbiological tests were performed weekly during the first month of the study, and then every two weeks, until the expiration date. The microbiological quality of oral liquid formulations is determined according to the Royal Spanish Pharmacopoeia. The conclusion was that none of the formulations prepared and packaged in sterilized containers became contaminated, either in unopened vials or in multi-dose containers opened daily.
The coevent formulation of quantum theory
International Nuclear Information System (INIS)
Wallden, Petros
2013-01-01
Understanding quantum theory has been a subject of debate from its birth. Many different formulations and interpretations have been proposed. Here we examine a recent novel formulation, namely the coevents formulation. It is a histories formulation and has as its starting point the Feynman path integral and the decoherence functional. The new ontology turns out to be that of a coarse-grained history. We start with a quantum measure defined on the space of histories, and the existence of zero covers rules out single histories as potential realities (the Kochen-Specker theorem cast in histories form is a special case of a zero cover). We see that allowing coarse-grained histories as potential realities avoids the previous paradoxes, maintains deductive non-contextual logic (alas non-Boolean) and gives rise to a unique classical domain. Moreover, we can recover the probabilistic predictions of quantum theory with the use of Cournot's principle. This formulation, being both a realist formulation and based on histories, is well suited conceptually for the purposes of quantum gravity and cosmology.
Heritage, Brody; Gilbert, Jessica M.; Roberts, Lynne D.
2016-01-01
Job embeddedness is a construct that describes the manner in which employees can be enmeshed in their jobs, reducing their turnover intentions. Recent questions regarding the properties of quantitative job embeddedness measures, and their predictive utility, have been raised. Our study compared two competing reflective measures of job embeddedness, examining their convergent, criterion, and incremental validity, as a means of addressing these questions. Cross-sectional quantitative data from 246 Australian university employees (146 academic; 100 professional) was gathered. Our findings indicated that the two compared measures of job embeddedness were convergent when total scale scores were examined. Additionally, job embeddedness was capable of demonstrating criterion and incremental validity, predicting unique variance in turnover intention. However, this finding was not readily apparent with one of the compared job embeddedness measures, which demonstrated comparatively weaker evidence of validity. We discuss the theoretical and applied implications of these findings, noting that job embeddedness has a complementary place among established determinants of turnover intention. PMID:27199817
Improved incremental conductance method for maximum power point tracking using cuk converter
Directory of Open Access Journals (Sweden)
M. Saad Saoud
2014-03-01
Full Text Available The Algerian government relies on a strategy focused on the development of inexhaustible resources such as solar energy in order to diversify its energy sources and prepare the Algeria of tomorrow: about 40% of the electricity produced for domestic consumption will come from renewable sources by 2030. It is therefore necessary to concentrate efforts on reducing application costs and improving performance. This paper presents a simulation of an improved incremental conductance method for maximum power point tracking (MPPT) using a DC-DC cuk converter. The improved algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions; its performance is evaluated and compared through theoretical analysis and digital simulation in Matlab/Simulink.
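The decision rule at the heart of the incremental conductance method follows from dP/dV = 0 at the maximum power point, i.e. dI/dV = -I/V. A minimal textbook sketch of one tracking iteration (not the paper's exact implementation; the function name, fixed step and voltage-reference interface are assumptions):

```python
def inc_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.5):
    """One incremental-conductance MPPT iteration. At the MPP dP/dV = 0,
    i.e. dI/dV = -I/V; v_ref is the operating-voltage command that would be
    sent to the cuk converter's duty-cycle controller. Textbook form with a
    fixed perturbation step; all names here are assumptions."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:            # irradiance increased: MPP moved to higher power
            v_ref += step
        elif di < 0:
            v_ref -= step
    else:
        g = di / dv           # incremental conductance
        if g > -i / v:        # left of the MPP: dP/dV > 0, move right
            v_ref += step
        elif g < -i / v:      # right of the MPP: dP/dV < 0, move left
            v_ref -= step
    return v_ref              # unchanged when dI/dV == -I/V (MPP reached)
```

Called once per sampling period with the latest panel voltage and current measurements, this walks the operating point toward the MPP and holds it there.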
Murphy, Brett; Lilienfeld, Scott; Skeem, Jennifer; Edens, John
2016-01-01
Researchers are vigorously debating whether psychopathic personality includes seemingly adaptive traits, especially social and physical boldness. In a large sample (N=1565) of adult offenders, we examined the incremental validity of two operationalizations of boldness (Fearless Dominance traits in the Psychopathy Personality Inventory, Lilienfeld & Andrews, 1996; Boldness traits in the Triarchic Model of Psychopathy, Patrick et al, 2009), above and beyond other characteristics of psychopathy, in statistically predicting scores on four psychopathy-related measures, including the Psychopathy Checklist-Revised (PCL-R). The incremental validity added by boldness traits in predicting the PCL-R’s representation of psychopathy was especially pronounced for interpersonal traits (e.g., superficial charm, deceitfulness). Our analyses, however, revealed unexpected sex differences in the relevance of these traits to psychopathy, with boldness traits exhibiting reduced importance for psychopathy in women. We discuss the implications of these findings for measurement models of psychopathy. PMID:26866795
Directory of Open Access Journals (Sweden)
BEHZAD AZIZIAN ISALOO
2016-04-01
Full Text Available Maximum power point tracking (MPPT) algorithms are employed in photovoltaic (PV) systems to provide full utilization of the PV array output power. Among all the MPPT algorithms, the Incremental Conductance (INC) algorithm is widely used in PV systems due to its high tracking speed and accuracy. In this paper, an improved variable step size algorithm based on the incremental conductance algorithm is proposed, which adjusts the step size according to the PV output current. The result of this adaptation is to make the algorithm suitable for practical operating conditions due to a wider operating range of irradiation changes. Simulation results confirm that the proposed algorithm increases convergence speed and efficiency in comparison with conventional fixed and variable step size INC algorithms.
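A current-scaled step-size schedule of the kind described can be sketched in a few lines; the abstract does not give the exact rule, so the scaling factor `n` and the clamping bounds below are illustrative assumptions:

```python
def variable_step(i, i_prev, n=2.0, step_min=0.1, step_max=2.0):
    """Step-size schedule for a variable-step INC tracker, sketched after
    the paper's idea of scaling the perturbation with the change in PV
    output current; n, step_min and step_max are assumed tuning values."""
    step = n * abs(i - i_prev)                 # large current change -> big step
    return max(step_min, min(step_max, step))  # clamp for stability near the MPP
```

Large current swings (fast irradiation changes) produce large steps for quick convergence, while near steady state the step collapses to the minimum, reducing oscillation around the MPP.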
International Nuclear Information System (INIS)
Kussmaul, K.; Diem, H.K.; Wachter, O.
1993-01-01
Experimental investigations into the stress/strain behavior of the niobium-stabilized austenitic material with the German designation X6 CrNiNb 18 10 proved that a limited, incrementally applied prior deformation reduces the total deformation capability only by the amount of the prior deformation. In particular, the small changes in the reduction of area showed that the basically ductile deformation behavior is not changed by the type of prior loading. There is a correlation between the amount of deformation and the increase in hardness, so that changes in hardness can be correlated with changes in the material properties. In low cycle fatigue tests with alternating temperature, an incremental increase in total strain (ratcheting) was noted, depending on the strain range applied.
Bigger is Better, but at What Cost? Estimating the Economic Value of Incremental Data Assets.
Dalessandro, Brian; Perlich, Claudia; Raeder, Troy
2014-06-01
Many firms depend on third-party vendors to supply data for commercial predictive modeling applications. An issue that has received very little attention in the prior research literature is the estimation of a fair price for purchased data. In this work we present a methodology for estimating the economic value of adding incremental data to predictive modeling applications and present two case studies. The methodology starts with estimating the effect that incremental data has on model performance in terms of common classification evaluation metrics. This effect is then translated into economic units, which gives an expected economic value that the firm might realize with the acquisition of a particular data asset. With this estimate a firm can then set a data acquisition price that targets a particular return on investment. This article presents the methodology in full detail and illustrates it in the context of two marketing case studies.
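The pricing recipe the abstract outlines (performance lift, economic translation, ROI-targeted price) can be sketched in a few lines. Everything here is hypothetical: the AUC figures, the value per AUC point and the target ROI are stand-in business inputs, not numbers from the case studies.

```python
def max_price_for_roi(auc_base, auc_aug, value_per_auc_point, target_roi):
    """Translate an incremental model-performance lift into a maximum data
    price, following the abstract's recipe. value_per_auc_point (economic
    value of 0.01 AUC lift) and target_roi are hypothetical business inputs."""
    lift = (auc_aug - auc_base) / 0.01            # lift in AUC points
    expected_value = lift * value_per_auc_point   # step 2: economic translation
    # step 3: price such that expected_value / price - 1 >= target_roi
    return expected_value / (1.0 + target_roi)
```

For instance, if the incremental data raises AUC from 0.70 to 0.75, each AUC point is worth $10,000, and the firm targets a 25% return, the maximum acquisition price comes out to $40,000.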
International Nuclear Information System (INIS)
Baraldi, Piero; Razavi-Far, Roozbeh; Zio, Enrico
2011-01-01
An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble when transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.
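The grow-on-failure logic of the scheme can be shown as a small skeleton: classifiers are appended only when transients from a new operational condition are classified unsatisfactorily, and predictions are combined by majority voting. The base learner is left pluggable (the paper uses a supervised fuzzy C-means classifier trained by bagging), and `accuracy_threshold` is an assumed tuning knob.

```python
class IncrementalEnsemble:
    """Skeleton of an incrementally grown classifier ensemble with majority
    voting. Illustrative sketch only: the base learner is pluggable (the
    paper uses bagged supervised fuzzy C-means classifiers) and the
    accuracy threshold is an assumed tuning parameter."""

    def __init__(self, make_classifier, accuracy_threshold=0.9):
        self.make_classifier = make_classifier  # callable: data -> classifier
        self.threshold = accuracy_threshold
        self.members = []

    def predict(self, x):
        votes = [m(x) for m in self.members]
        return max(set(votes), key=votes.count)  # majority vote (ties arbitrary)

    def maybe_grow(self, new_data):
        """Append a new classifier if the new condition is poorly handled."""
        ok = sum(self.predict(x) == y for x, y in new_data) if self.members else 0
        if not self.members or ok / len(new_data) < self.threshold:
            self.members.append(self.make_classifier(new_data))
```

Feeding transients from each new operational condition through `maybe_grow` reproduces the paper's behavior: the ensemble only grows where the existing members fail.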
DEFF Research Database (Denmark)
Bosansky, Branislav; Xin Jiang, Albert; Tambe, Milind
2015-01-01
Many search and security games played on a graph can be modeled as normal-form zero-sum games with strategies consisting of sequences of actions. The size of the strategy space provides a computational challenge when solving these games. This complexity is tackled either by using the compact representation of sequential strategies and linear programming, or by incremental strategy generation of iterative double-oracle methods. In this paper, we present a novel hybrid of these two approaches: the compact-strategy double-oracle (CS-DO) algorithm, which combines the advantages of the compact representation with incremental strategy generation. We experimentally compare CS-DO with the standard approaches and analyze the impact of the size of the support on the performance of the algorithms. Results show that CS-DO dramatically improves the convergence rate in games with non-trivial support...
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be considered sufficient under the assumption of no between-study variation. However, despite the increase in expected net gain, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Furthermore, percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
The scope of application of incremental rapid prototyping methods in foundry engineering
Directory of Open Access Journals (Sweden)
M. Stankiewicz
2010-01-01
The article presents the scope of application of selected incremental Rapid Prototyping methods in the process of manufacturing casting models, casting moulds and casts. The Rapid Prototyping methods (SL, SLA, FDM, 3DP, JS) are predominantly used for the production of models and model sets for casting moulds. The Rapid Tooling methods, such as ZCast-3DP, ProMetalRCT and VoxelJet, enable the fabrication of casting moulds in an incremental process. The application of RP methods in cast production makes it possible to speed up the prototype preparation process. This is particularly vital for elements of complex shapes. The time required to manufacture the model, the mould and the cast proper may vary from a few to several dozen hours.
Sufficient conditions for a period incrementing big bang bifurcation in one-dimensional maps
International Nuclear Information System (INIS)
Avrutin, V; Granados, A; Schanz, M
2011-01-01
Typically, big bang bifurcation occurs for one (or higher)-dimensional piecewise-defined discontinuous systems whenever two border collision bifurcation curves collide transversely in the parameter space. At that point, two (feasible) fixed points collide with one boundary in state space and become virtual, and, in the one-dimensional case, the map becomes continuous. Depending on the properties of the map near the codimension-two bifurcation point, there exist different scenarios regarding how the infinite number of periodic orbits are born, mainly the so-called period adding and period incrementing. In our work we prove that, in order to undergo a big bang bifurcation of the period incrementing type, it is sufficient for a piecewise-defined one-dimensional map that the colliding fixed points are attractive and with associated eigenvalues of different signs.
Dutta, Aritra
2017-07-02
Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
Open Source Tools for Remote Incremental Backups on Linux: An Experimental Evaluation
Directory of Open Access Journals (Sweden)
Aurélio Santos
2014-07-01
Computer data has become one of the most valuable assets that individuals, organizations and enterprises own today. The majority of people agree that losing their data (programs, data sets, documentation files, email addresses, photos, customer data, etc.) would be a disaster. The reason most individuals avoid performing data backups, though, is that they feel the process is complicated, tedious and expensive. This is not always true. In fact, with the right tool, it is very easy and affordable. In this paper we compare the performance and system resource usage of five remote incremental backup open source tools for Linux: Rsync, Rdiff-backup, Duplicity, Areca and Link-Backup. These tools are tested using three distinct remote backup operations: full backup, incremental backup and data restoration. The advantages of each tool are described, and we select the most efficient backup tool for simple replication operations and the overall best backup tool.
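The hard-link-based incremental scheme used by rsnapshot-style wrappers around Rsync can be illustrated by building the corresponding rsync invocation. The helper below is a minimal sketch under our own assumptions, not code from the evaluated tools:

```python
import datetime
import shlex

def rsync_incremental_cmd(src, dest_root, prev_snapshot=None):
    """Build an rsync command for a hard-link-based incremental
    snapshot (a sketch of the rsnapshot-style scheme). With
    --link-dest, files unchanged since the previous snapshot are
    hard-linked rather than copied, so every snapshot looks like a
    full backup but stores only the deltas."""
    stamp = datetime.date.today().isoformat()
    cmd = ["rsync", "-a", "--delete"]
    if prev_snapshot:
        cmd.append(f"--link-dest={prev_snapshot}")
    cmd += [src, f"{dest_root}/{stamp}/"]
    return shlex.join(cmd)
```

A first run without `prev_snapshot` produces a full backup; subsequent runs pass the previous snapshot directory to get incremental behavior.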
Heritage, Brody; Gilbert, Jessica M; Roberts, Lynne D
2016-01-01
Job embeddedness is a construct that describes the manner in which employees can be enmeshed in their jobs, reducing their turnover intentions. Recent questions regarding the properties of quantitative job embeddedness measures, and their predictive utility, have been raised. Our study compared two competing reflective measures of job embeddedness, examining their convergent, criterion, and incremental validity, as a means of addressing these questions. Cross-sectional quantitative data from 246 Australian university employees (146 academic; 100 professional) was gathered. Our findings indicated that the two compared measures of job embeddedness were convergent when total scale scores were examined. Additionally, job embeddedness was capable of demonstrating criterion and incremental validity, predicting unique variance in turnover intention. However, this finding was not readily apparent with one of the compared job embeddedness measures, which demonstrated comparatively weaker evidence of validity. We discuss the theoretical and applied implications of these findings, noting that job embeddedness has a complementary place among established determinants of turnover intention.
Development of lapachol topical formulation: anti-inflammatory study of a selected formulation.
Lira, Ana Amélia M; Sester, Elizângela A; Carvalho, André Luis M; Strattmann, Ruth R; Albuquerque, Miracy M; Wanderley, Almir G; Santana, Davi P
2008-01-01
This study aimed at developing a topical formulation of lapachol, a compound isolated from various Bignoniaceae species, and at evaluating its topical anti-inflammatory activity. The influence of the pharmaceutical form and of different types of emulsifiers was evaluated by in-vitro release studies. The formulations showing the highest release rate were selected and assessed through skin permeation and retention experiments. It was observed that the gel formulation provided significantly higher permeation and retained amount (3.9-fold) of lapachol as compared to the gel-cream formulation. Antinociceptive and antiedematogenic activities of the most promising formulation were also evaluated. Lapachol gel reduced the increase in hind-paw volume induced by carrageenan injection and reduced nociception produced by acetic acid (0.8% in water, i.p.) when used topically. These results suggest that topical delivery of lapachol from gel formulations may be an effective medication for both dermal and subdermal injuries.
Directory of Open Access Journals (Sweden)
Renata da Rocha Campos Franco
2012-04-01
The aim of this research was to verify the incremental validity of two projective techniques, based on an understanding of the personality of 20 chemically dependent individuals. A personality assessment of ten Brazilian alcohol addicts and ten French heroin addicts, who were undergoing drug detoxification treatment in specialized centers in Brazil or France, was carried out from the perspective of phenomeno-structural psychopathology, a theoretical approach of French origin that understands an individual's mental functioning through his or her way of experiencing time and space. The spatio-temporal relation and the personality were understood through the way the subjects expressed themselves verbally and the way they saw and constructed the images of the Zulliger and Pfister projective techniques. The results showed coherence between the information generated by the instruments, demonstrating that the Zulliger and the Pfister, when applied in combination and interpreted by the phenomeno-structural method, proved effective for understanding the subjects' experiences of space and time.
Detection of milled 100Cr6 steel surface by eddy current and incremental permeance methods
Czech Academy of Sciences Publication Activity Database
Perevertov, Oleksiy; Neslušan, M.; Stupakov, Alexandr
2017-01-01
Roč. 87, Apr (2017), s. 15-23 ISSN 0963-8695 R&D Projects: GA ČR GB14-36566G; GA ČR GA13-18993S Institutional support: RVO:68378271 Keywords: Eddy currents * hard milling * incremental permeance * magnetic materials * surface characterization Subject RIV: JB - Sensors, Measurement, Regulation OBOR OECD: Electrical and electronic engineering Impact factor: 2.726, year: 2016
Formiga, Magno F; Campos, Michael A; Cahalin, Lawrence P
2018-01-01
Smoking has potential deleterious effects on respiratory muscle function. Smokers may present with reduced inspiratory muscle strength and endurance. We compared inspiratory muscle performance of nonsmokers with that of former smokers without overt respiratory problems via the Test of Incremental Respiratory Endurance. This study was performed on 42 healthy subjects between the ages of 30 and 79 y (mean ± SD of 56.5 ± 14.4 y). Fourteen male and 7 female former smokers were matched to nonsmokers based on sex, age, height, and weight. Subjects completed a questionnaire about their health and current smoking status. Testing included the best of 3 or more consistent trials. The Test of Incremental Respiratory Endurance measurements included maximal inspiratory pressure measured from residual volume as well as sustained maximal inspiratory pressure and inspiratory duration measured from residual volume to total lung capacity during a maximal sustained inhalation. No significant difference in inspiratory performance of the entire group of former smokers compared with nonsmokers was found. However, separate sex analyses found a significant difference in sustained maximal inspiratory pressure between male former smokers and nonsmokers (518.7 ± 205.0 pressure time units vs 676.5 ± 255.2 pressure time units, P = .041). We found similar maximal inspiratory pressure between former smokers and nonsmokers via the Test of Incremental Respiratory Endurance, but the significant difference in sustained maximal inspiratory pressure between male former smokers and nonsmokers suggests that the sustained maximal inspiratory pressure may have greater discriminatory ability in assessing the effects of smoking on inspiratory muscle performance. Further investigation of the effects of smoking on inspiratory performance via the Test of Incremental Respiratory Endurance is warranted. Copyright © 2018 by Daedalus Enterprises.
Just-in-Time Technology to Encourage Incremental, Dietary Behavior Change
Intille, Stephen S.; Kukla, Charles; Farzanfar, Ramesh; Bakr, Waseem
2003-01-01
Our multi-disciplinary team is developing mobile computing software that uses “just-in-time” presentation of information to motivate behavior change. Using a participatory design process, preliminary interviews have helped us to establish 10 design goals. We have employed some to create a prototype of a tool that encourages better dietary decision making through incremental, just-in-time motivation at the point of purchase. PMID:14728379
Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho
2013-10-01
Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task specific top-down attention model to locate a target object based on its form and color representation along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model top-down bias signals corresponding to the target form and color features are generated, which draw the preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention; one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.
A modular design of incremental Lyapunov functions for microgrid control with power sharing
Persis, Claudio De; Monshizadeh, Nima
2015-01-01
In this paper we contribute a theoretical framework that sheds a new light on the problem of microgrid analysis and control. The starting point is an energy function comprising the kinetic energy associated with the elements that emulate the rotating machinery and terms taking into account the reactive power stored in the lines and dissipated on shunt elements. We then shape this energy function with the addition of an adjustable voltage-dependent term, and construct incremental storage funct...
A High-Performance Adaptive Incremental Conductance MPPT Algorithm for Photovoltaic Systems
Chendi Li; Yuanrui Chen; Dongbao Zhou; Junfeng Liu; Jun Zeng
2016-01-01
The output characteristics of photovoltaic (PV) arrays vary with the change of environment, and maximum power point (MPP) tracking (MPPT) techniques are thus employed to extract the peak power from PV arrays. Based on the analysis of existing MPPT methods, a novel incremental conductance (INC) MPPT algorithm is proposed with an adaptive variable step size. The proposed algorithm automatically regulates the step size to track the MPP through a step size adjustment coefficient, and a user prede...
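The classical incremental conductance test at the heart of such algorithms compares dI/dV with -I/V; the adaptive variant scales the step with |dP/dV|. The sketch below illustrates that logic under our own simplifying assumptions (the exact step-size adjustment coefficient and update rule in the cited paper may differ):

```python
def inc_cond_step(v, i, v_prev, i_prev, coeff=0.05):
    """One adaptive incremental-conductance MPPT update (sketch).

    At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V.
    The step size scales with |dP/dV| through an adjustment
    coefficient `coeff` (an assumption of this sketch).
    Returns the new voltage reference.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        # Irradiance changed at constant voltage: follow the current.
        return v + coeff if di > 0 else (v - coeff if di < 0 else v)
    dp_dv = (v * i - v_prev * i_prev) / dv   # finite-difference dP/dV
    step = coeff * abs(dp_dv)                # adaptive step size
    if di / dv > -i / v:                     # left of the MPP
        return v + step
    if di / dv < -i / v:                     # right of the MPP
        return v - step
    return v                                 # at the MPP: hold
```

Because the step shrinks as |dP/dV| approaches zero, the tracker moves quickly far from the MPP and settles with little oscillation near it, which is the motivation for the adaptive variant.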
Integrated Personnel and Pay System-Army Increment 1 (IPPS-A Inc 1)
2016-03-01
...Alternatives dated January 8, 2003, to use a Commercial-off-the-Shelf (COTS) Enterprise Resource Planning (ERP) product to develop and implement IPPS-A. The Milestone Decision Authority directed the continued use of the COTS ERP product in the DIMHRS Capability Way Ahead ADM dated September 8, 2009. ...Feasibility: The determination of the IPPS-A Increment 1 development and implementation contract type was based on cost risk associated with the
Incremental stiffness and electrical contact conductance in the contact of rough finite bodies
Barber, J. R.
2013-01-01
If two half spaces are in contact, there exists a formal mathematical relation between the electrical contact resistance and the incremental elastic compliance. Here, this relation is extended to the contact of finite bodies. In particular, it is shown that the additional resistance due to roughness of the contacting surfaces (the interface resistance) bears a similar relation to the additional compliance as that obtained for the total resistance in the half-space problem.
2013-09-01
Figures: map showing location of Fort Eustis, VA, and the 1000-in. Rifle Range; map of Fort Wainwright, AK. Acronyms: BRAC (Base Realignment and Closure); CEC (Cation Exchange Capacity); CMIST (CRREL Multi-Increment Sampling Tool); CRREL (Cold Regions Research and Engineering Laboratory). ...sieved, and then mechanically pulverized. Table 2 summarizes the proposed changes to the sampling processing procedures for Method 3050B.
A Variable Incremental Conductance MPPT Algorithm Applied to Photovoltaic Water Pumping System
S. Abdourraziq; R. El Bachtiri
2015-01-01
The use of solar energy as a source for pumping water is one of the promising areas in photovoltaic (PV) applications. The energy efficiency of photovoltaic pumping systems (PVPS) can be greatly improved by employing an MPPT algorithm; consequently, this maximizes the electrical motor speed of the system. This paper presents a modified incremental conductance (IncCond) MPPT algorithm with a direct control method applied to a standalone PV pumping system. The influence of ...
Sleep quality and duration are associated with performance in maximal incremental test.
Antunes, B M; Campos, E Z; Parmezzani, S S; Santos, R V; Franchini, E; Lira, F S
2017-08-01
Inadequate sleep patterns may be considered a trigger for the development of several metabolic diseases. Additionally, sleep deprivation and poor sleep quality can negatively impact performance in exercise training. However, the impact of sleep duration and sleep quality on performance during an incremental maximal test performed by healthy men is unclear. Therefore, the purpose of the study was to analyze the association between sleep pattern (duration and quality) and performance during a maximal incremental test in healthy male individuals. A total of 28 healthy males volunteered to take part in the study. Sleep quality, sleep duration and physical activity were subjectively assessed by questionnaires. Sleep pattern was classified by sleep duration (>7 h or <7 h of sleep per night) and by sleep quality according to the sum of measured points and/or scores on the Pittsburgh Sleep Quality Index (PSQI). The incremental exercise test started at 35 watts for untrained subjects, 70 watts for physically active subjects and 105 watts for well-trained subjects. HRmax was correlated with sleep quality (r=0.411, p=0.030) and sleep duration (r=-0.430, p=0.022). Participants reporting good sleep quality presented higher values of Wmax and VO2max and lower values of HRmax when compared to participants with altered sleep. Regarding sleep duration, only Wmax was influenced by the amount of sleeping hours per night, and this association remained significant even after adjustment for VO2max. Sleep duration and quality are associated, at least in part, with performance during a maximal incremental test among healthy men, with losses in Wmax and HRmax. In addition, our results suggest that the relationship between sleep patterns and performance, mainly in Wmax, is independent of fitness condition. Copyright © 2017 Elsevier Inc. All rights reserved.
Borel, Benoit; Leclair, Erwan; Thevenet, Delphine; Beghin, Laurent; Gottrand, Frédéric; Fabre, Claudine
2014-03-01
To analyze breathing pattern and mechanical ventilatory constraints during incremental exercise in healthy children and children with cystic fibrosis (CF). Thirteen healthy children and 6 children with cystic fibrosis volunteered to perform an incremental test on a treadmill. Exercise tidal flow/volume loops were plotted every minute within a maximal flow/volume loop (MFVL). Expiratory flow limitation (expFL, expressed in %Vt) was evaluated, and end-expiratory and end-inspiratory lung volumes (EELV and EILV) were estimated from expiratory reserve volume relative to vital capacity (ERV/FVC) and from inspiratory reserve volume relative to vital capacity (IRV/FVC). During the incremental exercise, expFL was first observed at 40% of maximal aerobic speed in both groups. At maximal exercise, 46% of healthy children and 83% of CF children presented expFL, without a significant effect of cystic fibrosis on the severity of expFL. According to the two-way ANOVA results, both groups adopted similar breathing patterns and breathing strategies, as no significant effect of CF was revealed. But, according to one-way ANOVA results, a significant increase of ERV/FVC associated with a significant decrease of IRV/FVC from resting values has been observed in healthy children at maximal exercise, but not in CF children. The hypothesis of this study was based on the assumption that mild cystic fibrosis could induce more frequent and more severe mechanical ventilatory constraints due to pulmonary impairment and breathing pattern disturbances. However, this study did not succeed in highlighting an effect of mild cystic fibrosis on the mechanical ventilatory constraints (expFL and dynamic hyperinflation) that occur during incremental exercise. This absence of effect could be due to the absence of an impact of the disease on spirometric data, breathing pattern regulation during exercise and breathing strategy. © 2013 Wiley Periodicals, Inc.
The value increment of mass-customized products: An empirical assessment
Schreier, Martin
2006-01-01
The primary argument in favor of mass customization is the delivery of superior customer value. Using willingness-to-pay (WTP) measurements, Franke & Piller (2004) have recently shown that customers designing their own watches with design toolkits are willing to pay premiums of more than 100% (ΔWTP). In the course of three studies, we found that this type of value increment is not a singular occurrence but might rather be a general phenomenon, as we again found average ΔWTPs of...
LED session prior to incremental step test enhances VO2max in running.
Mezzaroba, Paulo V; Pessôa Filho, Dalton M; Zagatto, Alessandro M; Machado, Fabiana Andrade
2018-03-15
This study aimed to investigate the effect of prior LED sessions on the responses of cardiorespiratory parameters during the running incremental step test. Twenty-six healthy, physically active, young men, aged between 20 and 30 years, took part in this study. Participants performed two incremental load tests after placebo (PLA) and light-emitting diode application (LED), and had their gas exchange, heart rate (HR), blood lactate, and rating of perceived exertion (RPE) monitored during all tests. The PLA and LED conditions were compared using the dependent Student t test with significance set at 5%. The t test showed higher maximum oxygen uptake (VO2max) (PLA = 47.2 ± 5.7; LED = 48.0 ± 5.4 ml kg(-1) min(-1), trivial effect size), higher peak velocity (Vpeak) (PLA = 13.4 ± 1.2; LED = 13.6 ± 1.2 km h(-1), trivial effect size), and lower maximum HR (PLA = 195.3 ± 3.4; LED = 193.3 ± 3.9 beats min(-1), moderate effect size) for LED compared to PLA conditions. Furthermore, submaximal values of HR and RPE were lower, and submaximal VO2 values were higher, when LED sessions were applied prior to the incremental step test. A positive response of the prior LED application on blood lactate disappearance was also demonstrated, especially 13 and 15 min after the test. It is concluded that LED sessions prior to exercise modify the cardiorespiratory response by affecting running tolerance during the incremental step test, metabolite clearance, and RPE. Therefore, LED could be used as a prior-exercise strategy to acutely modulate the oxidative response in targeted muscle and enhance exercise tolerance.
Incremental test design, peak 'aerobic' running speed and endurance performance in runners.
Machado, Fabiana A; Kravchychyn, Ana Claudia P; Peserico, Cecilia S; da Silva, Danilo F; Mezzaroba, Paulo V
2013-11-01
Peak running speed obtained during an incremental treadmill test (Vpeak) is a good predictor of endurance running performance. However, the best-designed protocol for Vpeak determination and the best Vpeak definition remain unknown. Therefore, this study examined the influence of stage duration and Vpeak definition on the relationship between Vpeak and endurance running performance. Twenty-seven male, recreational, endurance-trained runners (10-km running pace: 10-17 km h(-1)) performed, in counterbalanced order, three continuous incremental treadmill tests of different stage durations (1, 2, or 3 min) to determine Vpeak, and two 5-km and two 10-km time trials on a 400-m track to obtain their 5-km and 10-km run performances. Vpeak was defined as either (a) the highest speed that could be maintained for a complete minute (Vpeak-60s), (b) the speed of the last complete stage (Vpeak-C), or (c) the speed of the last complete stage added to the product of the speed increment and the completed fraction of the incomplete stage (Vpeak-P). The Vpeak determined during the 3-min stage duration protocol was the most highly correlated with both the 5-km (r=0.95) and 10-km (r=0.92) running performances, and these relationships were minimally influenced by the Vpeak definition. However, independent of stage duration, Vpeak-P provided the highest correlation with both running performances. Incremental treadmill tests comprising 3-min stage durations are preferred over 1-min and 2-min stage duration protocols for determining Vpeak to accurately predict 5-km and 10-km running performances. Further, Vpeak-P should be used as the standard definition of Vpeak. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
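The Vpeak-P definition above is a linear interpolation over the incomplete final stage; as a sketch (variable names are ours, not from the study):

```python
def vpeak_p(last_complete_speed, increment, time_in_stage, stage_duration):
    """Vpeak-P: speed of the last complete stage plus the speed
    increment weighted by the completed fraction of the incomplete
    final stage. Units: speeds in km/h, times in seconds."""
    fraction = time_in_stage / stage_duration
    return last_complete_speed + increment * fraction

# Illustrative: last complete stage 14 km/h, 1 km/h increments,
# 90 s completed of a 3-min (180 s) final stage.
assert vpeak_p(14.0, 1.0, 90, 180) == 14.5
```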
Technological aspects regarding machining the titanium alloys by means of incremental forming
Directory of Open Access Journals (Sweden)
Bologa Octavian
2017-01-01
Titanium alloys are materials with reduced formability, due to their low plasticity. However, there is high demand today for their use in the automotive industry and in the biomedical industry, for prosthetic devices. This paper presents some technological aspects regarding the machinability of titanium alloys by means of incremental forming. The research presented in this paper aimed to demonstrate that parts made from these materials can be machined at room temperature, under certain technological conditions.
Oral creatine supplementation's decrease of blood lactate during exhaustive, incremental cycling.
Oliver, Jonathan M; Joubert, Dustin P; Martin, Steven E; Crouse, Stephen F
2013-06-01
To determine the effects of creatine supplementation on blood lactate during incremental cycling exercise. Thirteen male subjects (M ± SD 23 ± 2 yr, 178.0 ± 8.1 cm, 86.3 ± 16.0 kg, 24% ± 9% body fat) performed a maximal, incremental cycling test to exhaustion before (Pre) and after (Post) 6 d of creatine supplementation (4 doses/d of 5 g creatine + 15 g glucose). Blood lactate was measured at the end of each exercise stage during the protocol, and the lactate threshold was determined as the stage before achieving 4 mmol/L. Lactate concentrations during the incremental test were analyzed using a 2 (condition) × 6 (exercise stage) repeated-measures ANOVA. Differences in power at lactate threshold, power at exhaustion, and total exercise time were determined by paired t tests and are presented as M ± SD. Lactate concentrations were reduced during exercise after supplementation, demonstrating a significant condition effect (p = .041). There was a tendency toward increased power at the lactate threshold (Pre 128 ± 45 W, Post 143 ± 26 W; p = .11). Total time to fatigue approached a significant increase (Pre 22.6 ± 3.2 min, Post 23.3 ± 3.3 min; p = .056), as did maximal power output (Pre 212.5 ± 32.5 W, Post 220 ± 34.6 W; p = .082). Our findings demonstrate that creatine supplementation decreases lactate during incremental cycling exercise and tends to raise the lactate threshold. Therefore, creatine supplementation could potentially benefit endurance athletes.
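The lactate-threshold rule used here (the stage before blood lactate first reaches 4 mmol/L) is simple to state in code; a minimal sketch with illustrative values:

```python
def lactate_threshold_stage(lactate_by_stage, cutoff=4.0):
    """Index of the stage before blood lactate first reaches the
    4 mmol/L cutoff (the definition used in the study). Falls back
    to the last stage if the cutoff is never reached (an assumption
    of this sketch, not specified in the abstract)."""
    for idx, value in enumerate(lactate_by_stage):
        if value >= cutoff:
            return max(idx - 1, 0)
    return len(lactate_by_stage) - 1

# Illustrative stage-end lactate values (mmol/L): cutoff is first
# reached at stage index 4, so the threshold is stage index 3.
assert lactate_threshold_stage([1.2, 1.8, 2.6, 3.7, 4.4, 6.1]) == 3
```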
Diagnostics of the blood flow rate of breast tumors at increment of the blood pressure
International Nuclear Information System (INIS)
Pohlodek, K.; Sohn, Ch.
1998-01-01
54 patients with ultrasonographically evident tumors of the mammary glands (14 patients with benign tumors and 40 patients with carcinomas) were examined by angiography for the blood flow rate in the tumor at increment of the blood pressure. On evaluation of the findings, 4 characteristic curves were obtained: the first type was typical of malignant tumors; the second type was characteristic of benign findings; the third and fourth types were non-specific. (authors)
Cheng, J; Chaffee, BW; Cheng, NF; Gansky, SA; Featherstone, JDB
2015-01-01
© International & American Associations for Dental Research 2014. The Caries Management By Risk Assessment (CAMBRA) randomized controlled trial showed that an intervention featuring combined antibacterial and fluoride therapy significantly reduced bacterial load and suggested reduced caries increment in adults with 1 to 7 baseline cavitated teeth. While trial results speak to the overall effectiveness of an intervention, insight can be gained from understanding the mechanism by which an int...
2015-06-01
polymerization shrinkage [3-8], marginal leakage [9], issues with accelerated wear [10], unpolymerized resin [11], fracture [12] and difficulty in establishing...flexural strength, and incomplete adhesion between the resin and tooth surface. This aspect of RBC restorations has not been well documented... tooth surface. The purpose of this investigation was to evaluate the difference between a conventional, hand-placed incremental application of RBC into
Brody Heritage; Jessica Michelle Gilbert; Lynne D. Roberts
2016-01-01
Job embeddedness is a construct that describes the manner in which employees can be enmeshed in their jobs, reducing their turnover intentions. Recent questions regarding the properties of quantitative job embeddedness measures, and their predictive utility, have been raised. Our study compared two competing reflective measures of job embeddedness, examining their convergent, criterion, and incremental validity, as a means of addressing these questions. Cross-sectional quantitative data from ...
Banks, Phil; Wright, Jean; O'Brien, Kevin
2004-01-01
The aim of this study was to evaluate the effectiveness of incremental and maximum bite advancement during treatment of Class II Division 1 malocclusion with the Twin-block appliance in the permanent dentition. It was performed at 3 district general hospitals in the United Kingdom with 4 operators. Two hundred three patients, 10-14 years old, were randomized. Control patients had the initial bite taken edge-to-edge for appliance construction with a standard Twin-block. Experimental patients h...
Hadjtaieb, Amir
2013-09-12
In this paper, we propose an incremental multinode relaying protocol with arbitrary N relay nodes that allows an efficient use of the channel spectrum. The destination combines the received signals from the source and the relays using maximal ratio combining (MRC). The transmission ends successfully once the accumulated signal-to-noise ratio (SNR) exceeds a predefined threshold. The number of relays participating in the transmission is adapted to the channel conditions based on feedback from the destination. The use of incremental relaying yields a higher spectral efficiency. Moreover, the symbol error probability (SEP) performance is enhanced by using MRC at the relays. The use of MRC at the relays implies that each relay overhears the signals from the source and all previous relays and combines them using MRC. The proposed protocol differs from most existing relaying protocols in that it combines both incremental relaying and MRC at the relays for a multinode topology. Our analyses for a decode-and-forward mode show that: (i) compared to existing multinode relaying schemes, the proposed scheme can essentially achieve the same SEP performance but with a smaller average number of time slots, and (ii) compared to schemes without MRC at the relays, the proposed scheme can achieve approximately a 3 dB gain.
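The core mechanism, accumulating SNR across rounds until a threshold is met, can be sketched as a Monte Carlo simulation. The threshold, average SNR, and relay count below are illustrative assumptions, not the paper's parameters:

```python
import random

def incremental_relaying_rounds(snr_threshold, avg_snr, n_relays, rng):
    """Count transmission phases until the MRC-accumulated SNR exceeds
    the threshold, or all relays are exhausted.  Rayleigh fading is
    modelled by exponentially distributed instantaneous SNRs."""
    accumulated = rng.expovariate(1.0 / avg_snr)  # direct source link
    rounds = 1
    for _ in range(n_relays):
        if accumulated >= snr_threshold:
            break
        # next relay transmits; destination MRC-combines the new copy
        accumulated += rng.expovariate(1.0 / avg_snr)
        rounds += 1
    return rounds, accumulated >= snr_threshold

rng = random.Random(1)
results = [incremental_relaying_rounds(8.0, 4.0, 3, rng) for _ in range(10000)]
avg_rounds = sum(r for r, _ in results) / len(results)
success = sum(ok for _, ok in results) / len(results)
```

Because relays only transmit when the accumulated SNR is still below the threshold, the average number of time slots stays well under the maximum in good channel conditions.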
First UHF Implementation of the Incremental Scheme for Open-Shell Systems.
Anacker, Tony; Tew, David P; Friedrich, Joachim
2016-01-12
The incremental scheme makes it possible to compute CCSD(T) correlation energies to high accuracy for large systems. We present the first extension of this fully automated black-box approach to open-shell systems using an Unrestricted Hartree-Fock (UHF) wave function, extending the efficient domain-specific basis set approach to handle open-shell references. We test our approach on a set of organic and metal organic structures and molecular clusters and demonstrate standard deviations from canonical CCSD(T) values of only 1.35 kJ/mol using a triple ζ basis set. We find that the incremental scheme is significantly more cost-effective than the canonical implementation even for relatively small systems and that the ease of parallelization makes it possible to perform high-level calculations on large systems in a few hours on inexpensive computers. We show that the approximations that make our approach widely applicable are significantly smaller than both the basis set incompleteness error and the intrinsic error of the CCSD(T) method, and we further demonstrate that incremental energies can be reliably used in extrapolation schemes to obtain near complete basis set limit CCSD(T) reaction energies for large systems.
Incremental Refinement of Façade Models with Attribute Grammar from 3D Point Clouds
Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.
2016-06-01
Data acquisition using unmanned aerial vehicles (UAVs) has received increasing attention in recent years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context, formal grammars play an important role in the top-down identification and reconstruction of building objects. Up to now, the available approaches have expected offline data in order to parse an a-priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required, and an incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with and adapt available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated iteratively using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated; it is given by probability densities as well as architectural patterns. Since we cannot always assume normal distributions, the derivation of location and shape parameters of building objects is based on a kernel density estimation (KDE). While the level of detail is continuously improved, geometrical, semantic and topological consistency is ensured.
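A kernel density estimate of the kind used for non-Gaussian façade parameters can be sketched as follows; the window-width observations and bandwidth are hypothetical, not taken from the paper:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a Gaussian kernel density estimator built from observed
    samples; useful when a parameter's distribution is multimodal and
    a normal fit would misplace its mode."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# hypothetical window-width observations (metres) from earlier parses
widths = [1.1, 1.2, 1.15, 2.4, 2.5, 2.45]
kde = gaussian_kde(widths, bandwidth=0.15)
```

The estimate is bimodal here, so both width clusters score higher than the gap between them, which a single Gaussian could not represent.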
Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir
2018-03-01
Incremental sheet forming is a versatile sheet metal forming process in which a sheet is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability, and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. In addition, the effects of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of aluminium sheet were investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. Increasing the feed rate and decreasing the step size produced a lower surface roughness, while uniform thickness reduction was obtained by reducing the wall angle and step size. Using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise surface roughness and thickness uniformity in incremental sheet forming.
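Taguchi analysis ranks parameter settings by a signal-to-noise ratio; for a "smaller the better" characteristic such as surface roughness this is -10·log10 of the mean squared response. A minimal sketch with hypothetical roughness readings (the settings and values are illustrative only):

```python
import math

def sn_smaller_the_better(values):
    """Taguchi S/N ratio for a 'smaller the better' quality
    characteristic: -10 * log10(mean of squared responses)."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# hypothetical surface roughness readings (micrometres) for two settings
setting_a = [0.8, 0.9, 0.85]   # smaller step size, higher feed rate
setting_b = [1.6, 1.7, 1.65]   # larger step size, lower feed rate
sn_a = sn_smaller_the_better(setting_a)
sn_b = sn_smaller_the_better(setting_b)
```

The setting with the higher S/N ratio (here, setting A) is preferred, which matches the trend reported above for feed rate and step size.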
Ham, Yoo-Geun; Song, Hyo-Jong; Jung, Jaehee; Lim, Gyu-Ho
2017-04-01
This study introduces a modified version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to enhance the assimilation accuracy of the IAU while retaining the continuity of the analysis. As in the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. However, unlike the IAU, the NIAU method applies time-evolved forcing, employing the forward operator, as corrections to the model. The solution of the NIAU is better than that of the IAU, whose analysis is performed at the start of the time window for adding the IAU forcing, in terms of the accuracy of the analysis field. This is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To obtain the filtering property in the NIAU, a forward operator to propagate the increment is reconstructed with only the dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
Directory of Open Access Journals (Sweden)
Fernando P Ponce
2011-01-01
Full Text Available In an experiment, we examined whether the repeated presentation of tones of gradually increasing intensities produces a greater decrement in the eyeblink reflex response in humans than the repetition of tones of constant intensity. Two groups of participants, matched for their initial level of response, were exposed to 110 tones of 100-ms duration. For the participants in the incremental group, the tones increased from 60 to 90 dB in 3-dB steps, whereas participants in the constant group received the tones at a fixed 90-dB intensity. The results indicated that the level of response in the last block of 10 trials, in which both groups received 90-dB tones, was significantly lower in the incremental group than in the constant group. These findings support the data presented by Davis and Wagner (7) with the acoustic response in rats, but differ from several reports with autonomic responses in humans, where the advantage of the incremental condition has not been observed unambiguously. The discussion analyzes theoretical approaches to this phenomenon and the possible involvement of separate neural circuits.
Two-Point Incremental Forming with Partial Die: Theory and Experimentation
Silva, M. B.; Martins, P. A. F.
2013-04-01
This paper proposes a new level of understanding of two-point incremental forming (TPIF) with partial die by means of a combined theoretical and experimental investigation. The theoretical developments include an innovative extension of the analytical model for rotational symmetric single point incremental forming (SPIF), originally developed by the authors, to address the influence of the major operating parameters of TPIF and to successfully explain the differences in formability between SPIF and TPIF. The experimental work comprised the mechanical characterization of the material and the determination of its formability limits at necking and fracture by means of circle grid analysis and benchmark incremental sheet forming tests. Results show the adequacy of the proposed analytical model to handle the deformation mechanics of SPIF and TPIF with partial die and demonstrate that neck formation is suppressed in TPIF, so that traditional forming limit curves are inapplicable to describe failure and must be replaced by fracture forming limits derived from ductile damage mechanics. The overall geometric accuracy of sheet metal parts produced by TPIF with partial die is found to be better than that of parts fabricated by SPIF due to smaller elastic recovery upon unloading.
Okada, Shogo; He, Xiaoyuan; Kojima, Ryo; Hasegawa, Osamu
This paper presents an unsupervised approach to integrating speech and visual information without using any prepared data (training data). The approach enables a humanoid robot, Incremental Knowledge Robot 1 (IKR1), to learn words' meanings. The approach differs from most existing approaches in that the robot learns online from audio-visual input, rather than from stationary data provided in advance. In addition, the robot is capable of incremental learning, which is considered indispensable to lifelong learning. A noise-robust self-organized incremental neural network (SOINN) is developed to represent the topological structure of unsupervised online data. We are also developing an active learning mechanism, called "desire for knowledge", to let the robot select the object for which it possesses the least information for subsequent learning. Experimental results show that the approach raises the efficiency of the learning process. Based on audio and visual data, we construct a mental model for the robot, which forms a basis for constructing IKR1's inner world and builds a bridge connecting the learned concepts with current and past scenes.
An application of the J-integral to an incremental analysis of blunting crack behavior
International Nuclear Information System (INIS)
Merkle, J.G.
1989-01-01
This paper describes an analytical approach to estimating the elastic-plastic stresses and strains near the tip of a blunting crack with a finite root radius. Rice's original derivation of the path-independent J-integral considered the possibility of a finite crack tip root radius. For this problem, Creager's elastic analysis gives the relation between the stress intensity factor K_I and the near-tip stresses. It can be shown that the relation K_I^2 = E'J holds when the root radius is finite. Recognizing that elastic-plastic behavior is incrementally linear then allows a derivation to be performed for a bielastic specimen having a crack tip region of reduced modulus, and the result differentiated to estimate elastic-plastic behavior. The result is the incremental form of Neuber's equation. This result does not require the assumption of any particular stress-strain relation. However, by assuming a pure power-law stress-strain relation and using Ilyushin's principle, the ordinary deformation theory form of Neuber's equation, K_σ K_ε = K_t^2, is obtained. Applications of the incremental form of Neuber's equation have already been made to fatigue and fracture analysis. This paper helps to provide a theoretical basis for these methods previously considered semiempirical. 26 refs., 4 figs
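The deformation-theory form of Neuber's equation is commonly solved together with a power-law (Ramberg-Osgood type) stress-strain relation; a minimal numerical sketch, with hypothetical material constants rather than values from the paper, is:

```python
def neuber_local_stress(S_nom, K_t, E, K, n):
    """Solve Neuber's rule sigma*eps = (K_t*S_nom)**2 / E together with
    the power-law relation eps = sigma/E + (sigma/K)**(1/n) for the
    local notch stress sigma, by bisection."""
    target = (K_t * S_nom) ** 2 / E      # Neuber product sigma*eps
    def residual(sigma):
        eps = sigma / E + (sigma / K) ** (1.0 / n)
        return sigma * eps - target
    lo, hi = 1e-6, K_t * S_nom           # elastic value bounds the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return hi

# hypothetical steel: E = 200 GPa, K = 1200 MPa, n = 0.2, K_t = 3, S = 100 MPa
sigma = neuber_local_stress(100.0, 3.0, 200e3, 1200.0, 0.2)
eps = sigma / 200e3 + (sigma / 1200.0) ** (1.0 / 0.2)
```

The local stress comes out below the purely elastic estimate K_t·S_nom, which is the plasticity correction Neuber's rule captures.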
Directory of Open Access Journals (Sweden)
Yoo-Geun Ham
2016-01-01
Full Text Available This study introduces a modified version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to improve the assimilation accuracy of the IAU while keeping the continuity of the analysis. Similar to the IAU, the NIAU is designed to add analysis increments at every model time step to improve the continuity in the intermittent data assimilation. However, unlike the IAU, the NIAU procedure uses time-evolved forcing, employing the forward operator, as corrections to the model. The solution of the NIAU is superior to that of the forward IAU, whose analysis is performed at the beginning of the time window for adding the IAU forcing, in terms of the accuracy of the analysis field. This is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To obtain the filtering property in the NIAU, a forward operator to propagate the increment is reconstructed with only the dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
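The basic IAU idea, spreading one analysis increment evenly over the model steps of an assimilation window instead of inserting it all at once, can be sketched on a toy decay model. The increment, window length, and model are illustrative, not the 40-variable Lorenz system used in the paper:

```python
def run(x0, dt, nsteps, increment_per_step):
    """Forward-Euler integration of the toy model dx/dt = -x, with an
    analysis increment added at each model step (the IAU mechanism)."""
    x = x0
    for k in range(nsteps):
        x = x + dt * (-x) + increment_per_step[k]
    return x

dt, nsteps = 0.05, 20
delta = 0.4  # analysis increment from an (assumed) assimilation step

# intermittent insertion: the full increment enters at the first step
intermittent = run(1.0, dt, nsteps, [delta] + [0.0] * (nsteps - 1))

# IAU: the same total increment is spread evenly across the window,
# avoiding the discontinuity (shock) of a one-step insertion
iau = run(1.0, dt, nsteps, [delta / nsteps] * nsteps)
```

Both runs end above the free-running forecast, but the IAU run applies the correction smoothly, which is the continuity property the abstract refers to.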
Measurement of the Sunyaev-Zeldovich Effect Increment with Large Aperture Sub-mm Telescopes
Zemcov, Michael
2012-05-01
Measurement of the Sunyaev-Zeldovich (SZ) effect increment is critical for precision determination of the full spectrum of the SZ spectral distortion, which in turn is necessary for measurement of the relativistic and kinetic SZ effects that are largest shortward of the SZ null at 217 GHz. Maps of galaxy clusters at SZ increment frequencies have the added benefit of relatively high angular resolution, allowing a precise determination of the sub-mm galaxy contamination in clusters, which is a significant foreground to SZ spectral studies. Current and upcoming facilities including SPIRE, SCUBA-2, MUSIC on the CSO, and further in the future next generation instrumentation for CCAT, will provide extremely deep, high angular resolution, multi-band SZ spectrum measurements in many clusters. Such measurements will enable new types of SZ science, including detailed studies of the properties of the intra-cluster medium and line of sight velocity effects. In this talk I will review the status of measurements of the SZ increment, present new results from Herschel, and look forward to what developments we can expect over the coming years.
Cheng, Jian; Liu, Haijun; Wang, Feng; Li, Hongsheng; Zhu, Ce
2015-10-01
This paper develops a human action recognition method for human silhouette sequences based on supervised temporal t-stochastic neighbor embedding (ST-tSNE) and incremental learning. Inspired by SNE and its variants, ST-tSNE is proposed to learn the underlying relationship between action frames in a manifold, where class label information and temporal information are introduced to well represent frames from the same action class. As for incremental learning, an important step for action recognition, we introduce three methods to perform the low-dimensional embedding of new data. Two of them are motivated by local methods: locally linear embedding and locality preserving projection. These two techniques are proposed to learn explicit linear representations following the local neighbor relationship, and their effectiveness is investigated for preserving the intrinsic action structure. The third is based on manifold-oriented stochastic neighbor projection, which finds a linear projection from high-dimensional to low-dimensional space capturing the underlying pattern manifold. Extensive experimental results and comparisons with state-of-the-art methods demonstrate the effectiveness and robustness of the proposed ST-tSNE and incremental learning methods in human action silhouette analysis.
Pornographic image recognition and filtering using incremental learning in compressed domain
Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao
2015-11-01
With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) after the covering algorithm is used to train and recognize the visual words and build the initial classification model, incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples. The experimental results show that the proposed method achieves a higher recognition rate and requires less recognition time in the compressed domain.
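Step (3), continuously adjusting classification rules as new samples arrive without retraining from scratch, can be illustrated with a generic online update. The perceptron rule below is a stand-in for the covering algorithm used in the paper, and the feature vectors are hypothetical:

```python
def train_incremental(weights, bias, batch, lr=0.1):
    """One incremental pass: adjust a linear decision rule with a new
    batch of (features, label) pairs, without revisiting old data."""
    for x, y in batch:                      # y in {-1, +1}
        score = sum(w * xi for w, xi in zip(weights, x)) + bias
        if y * score <= 0:                  # misclassified: update rule
            weights = [w + lr * y * xi for w, xi in zip(weights, x)]
            bias += lr * y
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else -1

# initial model from a first training set, then updates from new samples
w, b = [0.0, 0.0], 0.0
first_batch = [([1.0, 1.0], 1), ([-1.0, -1.0], -1)]
new_batch = [([2.0, 1.5], 1), ([-2.0, -1.0], -1)]
for _ in range(20):                         # several passes to converge
    w, b = train_incremental(w, b, first_batch)
    w, b = train_incremental(w, b, new_batch)
```

The key property is that each update touches only the new batch, so classification rules keep adapting as fresh samples stream in.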
Directory of Open Access Journals (Sweden)
Pankaj Kailasrao Bhoyar
Full Text Available Abstract Introduction Implantable devices have an enormous market. These products are generally made by traditional manufacturing processes, but for custom-made implants Incremental Sheet Forming is a paramount alternative. Single Point Incremental Forming (SPIF) is a manufacturing process to form intricate, asymmetrical components. It forms the component by stretching and bending while maintaining the material's crystal structure. The SPIF process can be performed using a conventional Computer Numerical Control (CNC) milling machine. Review This review paper elaborates the various manufacturing processes carried out on various biocompatible metallic and nonmetallic customised implantable devices. Conclusion The Ti-6Al-4V alloy is broadly used for biomedical implants, but the vanadium in this alloy is toxic, so the alloy is not ideal for implants. The attention of researchers is therefore turning towards nontoxic and suitable biocompatible materials, and novel approaches have been developed to enhance the mechanical properties of such materials. The development of incremental forming techniques can improve the formability of existing alloys and may meet the current strict requirements for the performance of dies and punches.
Chelli, Ali
2014-11-01
In this paper, we study the performance of hybrid automatic repeat request (HARQ) with incremental redundancy over double Rayleigh channels, a common model for the fading amplitude of vehicle-to-vehicle communication systems. We investigate the performance of HARQ from an information-theoretic perspective. Analytical expressions are derived for the ε-outage capacity, the average number of transmissions, and the average transmission rate of HARQ with incremental redundancy, assuming a maximum number of HARQ rounds. Moreover, we evaluate the delay experienced by Poisson-arriving packets for HARQ with incremental redundancy. We provide analytical expressions for the expected waiting time, the packet's sojourn time in the queue, the average consumed power, and the energy efficiency. In our study, the communication rate per HARQ round is adjusted to the average signal-to-noise ratio (SNR) such that a target outage probability is not exceeded. This setting conforms to communication systems in which a quality of service is expected regardless of the channel conditions. Our analysis underscores the importance of HARQ in improving the spectral efficiency and reliability of communication systems. We demonstrate as well that the explored HARQ scheme achieves full diversity. Additionally, we investigate the tradeoff between energy efficiency and spectral efficiency.
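In HARQ with incremental redundancy, decoding succeeds once the mutual information accumulated over rounds reaches the transmission rate. A Monte Carlo sketch over a double Rayleigh channel follows; the rate, average SNR, and round limit are illustrative assumptions, not the paper's settings:

```python
import math
import random

def harq_ir_rounds(rate, avg_snr, max_rounds, rng):
    """Accumulate mutual information log2(1 + SNR) per HARQ round until
    the target rate is reached or max_rounds is exhausted.  The double
    Rayleigh power gain is the product of two unit-mean exponentials."""
    info = 0.0
    for m in range(1, max_rounds + 1):
        gain = rng.expovariate(1.0) * rng.expovariate(1.0)
        info += math.log2(1.0 + avg_snr * gain)
        if info >= rate:
            return m, True          # decoded after m rounds
    return max_rounds, False        # outage event

rng = random.Random(7)
trials = [harq_ir_rounds(4.0, 10.0, 4, rng) for _ in range(20000)]
avg_rounds = sum(m for m, _ in trials) / len(trials)
outage = 1.0 - sum(ok for _, ok in trials) / len(trials)
```

Estimates of the average number of transmissions and the outage probability fall out of the same loop, mirroring the quantities derived analytically in the abstract.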
Incremental View Maintenance for Deductive Graph Databases Using Generalized Discrimination Networks
Directory of Open Access Journals (Sweden)
Thomas Beyhl
2016-12-01
Full Text Available Nowadays, graph databases are employed when relationships between entities are in the scope of database queries, to avoid the performance-critical join operations of relational databases. Graph queries are used to query and modify graphs stored in graph databases; they employ graph pattern matching, which is NP-complete for subgraph isomorphism. To increase query performance, graph database views can keep ready answers, in the form of precalculated graph pattern matches, for frequently stated and complex graph queries. However, such graph database views must be kept consistent with the graphs stored in the graph database. In this paper, we describe how to use incremental graph pattern matching as a technique for maintaining graph database views. We present an incremental maintenance algorithm for graph database views, which works for imperatively and declaratively specified graph queries. The evaluation shows that our maintenance algorithm scales as the number of nodes and edges stored in the graph database increases. Furthermore, our evaluation shows that our approach can outperform existing approaches for the incremental maintenance of graph query results.
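Incremental view maintenance means updating the precalculated pattern matches using only the inserted edge, rather than re-running the whole query. A minimal sketch for a two-edge path pattern, with a hypothetical API unrelated to the paper's discrimination networks, is:

```python
class TwoHopView:
    """Materialized matches of the path pattern x -> y -> z, maintained
    incrementally as edges are inserted (no full recomputation)."""

    def __init__(self):
        self.out = {}       # node -> set of successors
        self.inn = {}       # node -> set of predecessors
        self.matches = set()

    def insert_edge(self, u, v):
        self.out.setdefault(u, set()).add(v)
        self.inn.setdefault(v, set()).add(u)
        # only matches that use the new edge (u, v) can appear:
        # (u, v, w) where v -> w already exists ...
        for w in self.out.get(v, set()):
            self.matches.add((u, v, w))
        # ... and (t, u, v) where t -> u already exists
        for t in self.inn.get(u, set()):
            self.matches.add((t, u, v))

view = TwoHopView()
for edge in [("a", "b"), ("b", "c"), ("c", "d")]:
    view.insert_edge(*edge)
```

Each insertion inspects only the neighborhood of the new edge, which is why such maintenance can scale with graph size where full re-matching cannot.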
Directory of Open Access Journals (Sweden)
Šoltýsová I.
2016-12-01
Full Text Available Active ingredients in pharmaceuticals differ in their physico-chemical properties, and their bioavailability therefore varies. The most frequently used and most convenient route of administration of medicines is oral; however, many drugs are poorly soluble in water and are thus insufficiently effective and suitable for such administration. For this reason, a system of lipid-based formulations (LBF) was developed. A series of formulations was prepared and tested in water and biorelevant media. On the basis of selection criteria, formulations with the best emulsification potential, good dispersion in the environment, and physical stability were selected. Samples of structurally different drugs included in Class II of the Biopharmaceutics Classification System (BCS) were obtained, namely griseofulvin, glibenclamide, carbamazepine, haloperidol, itraconazole, triclosan, praziquantel and rifaximin, for testing of maximal saturation in formulations prepared from commercially available excipients. Methods were developed for the preparation of formulations, observation and description of emulsification, determination of the maximum solubility of the drug samples in the respective formulation, and subsequent analysis. Saturation of the formulations with drugs showed that the formulations 80 % XA and 20 % Xh, and 35 % XF and 65 % Xh, were best able to dissolve the drugs, which supports the hypothesis that it is desirable to identify a limited series of formulations that could be generally applied for this purpose.
International Nuclear Information System (INIS)
Belsher, J.D.; Meinert, F.L.
2009-01-01
This document presents the differences between two HLW glass formulation models (GFM): The 1996 GFM and 2009 GFM. A glass formulation model is a collection of glass property correlations and associated limits, as well as model validity and solubility constraints; it uses the pretreated HLW feed composition to predict the amount and composition of glass forming additives necessary to produce acceptable HLW glass. The 2009 GFM presented in this report was constructed as a nonlinear optimization calculation based on updated glass property data and solubility limits described in PNNL-18501 (2009). Key mission drivers such as the total mass of HLW glass and waste oxide loading are compared between the two glass formulation models. In addition, a sensitivity study was performed within the 2009 GFM to determine the effect of relaxing various constraints on the predicted mass of the HLW glass.
Cheng, J; Chaffee, B W; Cheng, N F; Gansky, S A; Featherstone, J D B
2015-01-01
The Caries Management By Risk Assessment (CAMBRA) randomized controlled trial showed that an intervention featuring combined antibacterial and fluoride therapy significantly reduced bacterial load and suggested reduced caries increment in adults with 1 to 7 baseline cavitated teeth. While trial results speak to the overall effectiveness of an intervention, insight can be gained from understanding the mechanism by which an intervention acts on putative intermediate variables (mediators) to affect outcomes. This study conducted mediation analyses on 109 participants who completed the trial to understand whether the intervention reduced caries increment through its action on potential mediators (oral bacterial load, fluoride levels, and overall caries risk based on the composite of bacterial challenge and salivary fluoride) between the intervention and dental outcomes. The primary outcome was the increment from baseline in decayed, missing, and filled permanent surfaces (ΔDMFS) 24 mo after completing restorations for baseline cavitated lesions. Analyses adjusted for baseline overall risk, bacterial challenge, and fluoride values under a potential outcome framework using generalized linear models. Overall, the CAMBRA intervention was suggestive in reducing the 24-mo DMFS increment (reduction in ΔDMFS: -0.96; 95% confidence interval [CI]: -2.01 to 0.08; P = 0.07); the intervention significantly reduced the 12-mo overall risk (reduction in overall risk: -19%; 95% CI: -7% to -41%; P = 0.005). Individual mediators, salivary log10 mutans streptococci, log10 lactobacilli, and fluoride level, did not represent statistically significant pathways alone through which the intervention effect was transmitted. However, 36% of the intervention effect on the 24-mo DMFS increment was through a mediation effect on 12-mo overall risk (P = 0.03). These findings suggest a greater intervention effect carried through the combined action on multiple aspects of the caries process rather than
Self-Setting Calcium Orthophosphate Formulations
Dorozhkin, Sergey V.
2013-01-01
In the early 1980s, researchers discovered self-setting calcium orthophosphate cements, which are bioactive and biodegradable grafting bioceramics in the form of a powder and a liquid. After mixing, both phases form pastes, which set and harden, forming either a non-stoichiometric calcium-deficient hydroxyapatite or brushite. Since both of these are remarkably biocompatible, bioresorbable and osteoconductive, self-setting calcium orthophosphate formulations appear to be promising bioceramics for bone grafting. Furthermore, such formulations possess excellent molding capabilities, easy manipulation and nearly perfect adaptation to the complex shapes of bone defects, followed by gradual bioresorption and new bone formation. In addition, reinforced formulations have been introduced, which might be described as calcium orthophosphate concretes. The discovery of self-setting properties opened up a new era in the medical application of calcium orthophosphates, and many commercial trademarks have been introduced as a result. Currently such formulations are widely used as synthetic bone grafts, with several advantages, such as pourability and injectability. Moreover, their low-temperature setting reactions and intrinsic porosity allow loading with drugs, biomolecules and even cells for tissue engineering purposes. In this review, an insight into self-setting calcium orthophosphate formulations, as excellent bioceramics suitable for both dental and bone grafting applications, has been provided. PMID:24956191
Bioequivalence assessment of two formulations of ibuprofen
Al-Talla, Zeyad
2011-10-19
Background: This study assessed the relative bioavailability of two formulations of ibuprofen. The first formulation was Doloraz, produced by Al-Razi Pharmaceutical Company, Amman, Jordan. The second formulation was Brufen, manufactured by Boots Company, Nottingham, UK. Methods and results: A prestudy validation of ibuprofen demonstrated long-term stability, freeze-thaw stability, precision, and accuracy. Twenty-four healthy volunteers were enrolled in this study. After overnight fasting, the two formulations (test and reference) of ibuprofen (100 mg ibuprofen/5 mL suspension) were administered as a single dose on two treatment days separated by a one-week washout period. After dosing, serial blood samples were drawn for a period of 14 hours. Serum harvested from the blood samples was analyzed for the presence of ibuprofen by high-pressure liquid chromatography with ultraviolet detection. Pharmacokinetic parameters were determined from serum concentrations for both formulations. The 90% confidence intervals of the ln-transformed test/reference treatment ratios for peak plasma concentration and area under the concentration-time curve (AUC) parameters were found to be within the predetermined acceptable interval of 80%-125% set by the US Food and Drug Administration. Conclusion: Analysis of variance for peak plasma concentrations and AUC parameters showed no significant difference between the two formulations and, therefore, Doloraz was considered bioequivalent to Brufen. © 2011 Al-Talla et al, publisher and licensee Dove Medical Press Ltd.
Kuhn, Matthew R.; Daouadji, Ali
2018-05-01
The paper addresses a common assumption of elastoplastic modeling: that the recoverable, elastic strain increment is unaffected by alterations of the elastic moduli that accompany loading. This assumption is found to be false for a granular material, and discrete element method (DEM) simulations demonstrate that granular materials are coupled materials at both micro- and macro-scales. Elasto-plastic coupling at the macro-scale is placed in the context of the thermomechanics framework of Tomasz Hueckel and Hans Ziegler, in which the elastic moduli are altered by irreversible processes during loading. This complex behavior is explored for multi-directional loading probes that follow an initial monotonic loading. An advanced DEM model is used in the study, with non-convex non-spherical particles and two different contact models: a conventional linear-frictional model and an exact implementation of the Hertz-like Cattaneo-Mindlin model. Orthotropic true-triaxial probes were used in the study (i.e., no direct shear strain), with tiny strain increments of 2 × 10^-6. At the micro-scale, contact movements were monitored during small increments of loading and load-reversal, and the results show that these movements are not reversed by a reversal of strain direction; some contacts that were sliding during a loading increment continue to slide during reversal. The probes show that the coupled part of a strain increment, the difference between the recoverable (elastic) increment and its reversible part, must be considered when partitioning strain increments into elastic and plastic parts. Small increments of irreversible (and plastic) strain, contact slipping, and frictional dissipation occur for all directions of loading, and an elastic domain, if it exists at all, is smaller than the strain increment used in the simulations.
A novel molluscicidal formulation of niclosamide.
Dai, Jian-Rong; Wang, Wei; Liang, You-Sheng; Li, Hong-Jun; Guan, Xiao-Hong; Zhu, Yin-Chang
2008-07-01
Snail control by molluscicides is an important strategy for schistosomiasis control in China. Currently, only one chemical molluscicide, niclosamide, which is used as a 50% wettable powder of niclosamide ethanolamine salt (WPN), is commercially available for field snail control in China. However, WPN is costly, toxic, has low dispersibility, and precipitates rapidly. In this paper, we describe the development of a novel formulation of niclosamide, a suspension concentrate of niclosamide (SCN). The efficacy of SCN was evaluated both in the laboratory and in the field. SCN showed better molluscicidal effects than the conventional WPN formulation, as determined by the LC(50) for adult snails, young snails, and snail eggs. The acute toxicity of SCN to Brachydanio rerio (Hamilton) was lower than that of WPN. In conclusion, the novel SCN suspension formulation is physically more stable, more effective, and less toxic. Therefore, it can be more useful for controlling snails in endemic areas of schistosomiasis in China.
Engaged Problem Formulation in IS Research
DEFF Research Database (Denmark)
Nielsen, Peter Axel; Persson, John Stouby
2016-01-01
"Is this the problem?": the question that haunts many information systems (IS) researchers when they pursue work relevant to both practice and research. Nevertheless, a deliberate answer to this question requires more than simply asking the involved IS practitioners. Deliberately formulating problems requires a more substantial engagement with the different stakeholders, especially when their problems are ill structured and situated in complex organizational settings. On this basis, we present an engaged approach to formulating IS problems with, not for, IS practitioners. We have come to understand engaged problem formulation as joint researching and as the defining of contemporary and complex problems by researchers and those practitioners who experience and know these problems. We used this approach in investigating IS management in Danish municipalities. In this paper, we present the approach to formulating problems in an engaged way. We discuss it in relation to ideas and assumptions that underpin engaged scholarship, and we discuss the implications for IS action research, design science research, and mixed approaches.
RAACFDb: Rheumatoid arthritis ayurvedic classical formulations database.
Mohamed Thoufic Ali, A M; Agrawal, Aakash; Sajitha Lulu, S; Mohana Priya, A; Vino, S
2017-02-02
In the past years, the treatment of rheumatoid arthritis (RA) has undergone remarkable changes in all therapeutic modes. A current focus of clinical research is to identify new directions for better treatment options for RA. Recent ethnopharmacological investigations revealed that traditional herbal remedies are the most preferred modality of complementary and alternative medicine (CAM). However, several ayurvedic modes of treatment and formulations for RA from the Indian traditional system of medicine have not been much studied or documented. Therefore, this directed us to develop an integrated database, RAACFDb (acronym: Rheumatoid Arthritis Ayurvedic Classical Formulations Database), by consolidating data from the repository of Vedic Samhita - The Ayurveda, to make the available formulation information easy to retrieve. Literature data were gathered using several search engines and from ayurvedic practitioners for loading information into the database. In order to represent the collected information about classical ayurvedic formulations, an integrated database was constructed and implemented on a MySQL and PHP back-end. The database describes all the ayurvedic classical formulations for the treatment of rheumatoid arthritis, including composition, usage, plant parts used, and the active ingredients present in the composition and their structures. The prime objective is to locate ayurvedic formulations proven to be quite successful and highly effective among patients, with reduced side effects. The database (freely available at www.beta.vit.ac.in/raacfdb/index.html) hopefully enables easy access for clinical researchers and students to discover novel leads with reduced side effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Farnaz Monajjemzadeh
2014-12-01
Purpose: To investigate the effect of vitamins B1 and B6 on cyanocobalamin stability in commercial light-protected parenteral formulations and upon adding stabilizing agents, and to identify the best formulation composition and proper storage condition. Methods: Additives such as co-solvents and tonicity adjusters, surfactants, antioxidants and chelating agents, as well as buffer solutions, were used to improve the stability of the parenteral mixed formulations of B12 in the presence of the other B vitamins (B1 and B6). Screening tests and accelerated stability tests were performed according to ICH guideline Q1A (R2). Results: Shelf-life evaluation revealed the best formulation and the proper storage condition. The results indicated first-order kinetics for all tested formulations, and the optimum pH value was determined to be 5.8. There was no evidence of B12 loss when mixed with B1 and B6 in a medical syringe at room temperature for a maximum of 8 hours. Conclusion: It is necessary to formulate vitamin B12 mixed parenteral solutions using proper phosphate buffers (pH=5.8) and to indicate "Store in refrigerator" on mixed parenteral formulations of vitamin B12 with other B vitamins, which had not been stated on the label of the tested brand formulations at the time of this study.
International Nuclear Information System (INIS)
El-Hawary, M.E.; El-Hawary, F.; Mbamalu, G.A.N.
1991-01-01
A questionnaire was mailed to ten Canadian utilities to determine the methods the utilities use in determining the incremental cost of delivering energy at any time. The questionnaire was divided into three parts: generation, transmission and general. The generation section dealt with heat rates, fuel, operation and maintenance, startup and shutdown, and methods of prioritizing and economic evaluation of interchange transactions. The transmission section dealt with the inclusion of transmission system incremental maintenance costs, and the determination of transmission losses. The general section dealt with incremental cost aspects and various other economic considerations. A summary is presented of the responses to the questionnaire.
PCA/INCREMENT MEMORY interface for analog processors on-line with PC-XT/AT IBM
International Nuclear Information System (INIS)
Biri, S.; Buttsev, V.S.; Molnar, J.; Samojlov, V.N.
1989-01-01
The functional and operational descriptions of the PCA/INCREMENT MEMORY interface are discussed. The unit provides: connection between the analogue signal processor and the PC; nuclear spectrum acquisition up to 2^24 - 1 counts/channel using the increment or decrement method; and data read/write from or to memory via the PC data bus during spectrum acquisition. The dual-ported memory organization is 4096 x 24 bits, and the increment cycle time at a 4.77 MHz system clock frequency is 1.05 μs. 6 refs.; 2 figs
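The increment/decrement acquisition described above, with each channel holding up to 2^24 - 1 counts, can be sketched as follows. The clamping-at-the-limits behavior is an assumption for illustration; the record does not specify how the real hardware handles overflow:

```python
# Each channel is modeled as a 24-bit counter, as in the 4096 x 24-bit
# dual-ported memory organization described in the record.
MAX_COUNT = 2 ** 24 - 1  # 16,777,215 counts/channel

def record_event(memory, channel, decrement=False):
    """Increment (or decrement) one channel, clamped to the 24-bit range."""
    if decrement:
        memory[channel] = max(memory[channel] - 1, 0)
    else:
        memory[channel] = min(memory[channel] + 1, MAX_COUNT)

# A 4096-channel spectrum: five events in channel 100, then one decrement.
spectrum = [0] * 4096
for _ in range(5):
    record_event(spectrum, 100)
record_event(spectrum, 100, decrement=True)
print(spectrum[100])  # 4
```

The point of the dual-ported design is that the PC can read or write this memory over its data bus while acquisition (the increments) is still running.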
Formulation and stability testing of photolabile drugs.
Tønnesen, H H
2001-08-28
Exposure of a drug to irradiation can influence the stability of the formulation, leading to changes in the physicochemical properties of the product. The influence of excipients and of frequently used stabilizers is often difficult to predict and, therefore, stability testing of the final preparation is important. The selection of protective packaging must be based on knowledge of the wavelengths causing the instability. Details on drug photoreactivity will also be helpful in order to minimize side-effects and/or optimize drug targeting by developing photoresponsive drug delivery systems. This review focuses on practical problems related to the formulation and stability testing of photolabile drugs.
Quaternionic formulation of the exact parity model
International Nuclear Information System (INIS)
Brumby, S.P.; Foot, R.; Volkas, R.R.
1996-01-01
The exact parity model (EPM) is a simple extension of the standard model which reinstates parity invariance as an unbroken symmetry of nature. The mirror matter sector of the model can interact with ordinary matter through gauge boson mixing, Higgs boson mixing and, if neutrinos are massive, through neutrino mixing. The last effect has experimental support through the observed solar and atmospheric neutrino anomalies. In the paper it is shown that the exact parity model can be formulated in a quaternionic framework. This suggests that the idea of mirror matter and exact parity may have profound implications for the mathematical formulation of quantum theory. 13 refs
Pharmacokinetics of formulated tenoxicam transdermal delivery systems.
Kim, Taekyung; Kang, Eunyoung; Chun, Inkoo; Gwak, Hyesun
2008-01-01
To investigate the feasibility of developing a new tenoxicam transdermal delivery system (TDS), the pharmacokinetics of tenoxicam from various formulated TDS were evaluated and compared with values following oral administration of tenoxicam and with application of a piroxicam plaster (Trast) marketed in Korea. Based on previous in-vitro study results, a mixture of diethylene glycol monoethyl ether (DGME) and propylene glycol monolaurate (PGML) (40:60) was used as a vehicle, and caprylic acid, capric acid, lauric acid, oleic acid or linoleic acid (each at 3%) was added as an enhancer. Triethanolamine (5%) was used as a solubilizer, and Duro-Tak 87-2510 as a pressure-sensitive adhesive. Among these fatty acids used for the formulation of tenoxicam TDS, caprylic acid showed the greatest enhancing effect; the area under the plasma concentration-time profile (AUC) decreased in the order of caprylic acid>linoleic acid>or=oleic acid>lauric acid>capric acid. Compared with oral administration, maximum plasma concentration (Cmax) was significantly lower, and time to reach Cmax (Tmax) delayed with all formulated tenoxicam TDS. All formulated TDS resulted in a lower AUC than with the oral formulation, except for TDS containing caprylic acid, although the difference was statistically significant only with capric acid. The AUC for all the formulated tenoxicam TDS was significantly higher than that of the piroxicam plaster; TDS with caprylic acid increased AUC 8.53-fold compared with the piroxicam plaster. Even though the Tmax of tenoxicam TDS was not significantly different from that of the piroxicam plaster, Cmax was higher; formulations containing caprylic acid and linoleic acid increased Cmax by 7.39- and 8.76-fold, respectively. In conclusion, a formulation containing 1.5 mL DGME-PGML (40:60) with 3% caprylic acid and 5% triethanolamine mixed with 6 g Duro-Tak 87-2510 could be a good candidate for developing a new tenoxicam TDS to maintain a comparable extent of absorption
Design of formulated products: a systematic methodology
DEFF Research Database (Denmark)
Conte, Elisa; Gani, Rafiqul; Ng, K.M.
2011-01-01
-based computer-aided methodology for design and verification of a class of chemical-based products (liquid formulations) is presented. This methodology is part of an integrated three-stage approach for design/verification of liquid formulations where stage-1 generates a list of feasible product candidates and....../or verifies a specified set through a sequence of predefined activities (work-flow). Stage-2 and stage-3 (not presented here) deal with the planning and execution of experiments, for product validation. Four case studies have been developed to test the methodology. The computer-aided design (stage-1...
Boundary layer computations using a generalized formulation
Bergeron, D.; Zingg, D. W.
A numerical solution procedure for a generalized form of the boundary-layer equations based on the formulation of Steger and Van Dalsem is described. The formulation, which is intended for use in a fortified Navier-Stokes procedure, uses the boundary-layer equations expressed in body-conformal coordinates but transformed into generalized coordinates for the solution process. Results are presented for attached and separated transonic airfoil flows, with the external pressure gradient in the boundary layers given by a Navier-Stokes solution. Discrepancies are noted near shocks and trailing edges, where normal pressure gradients can be appreciable and streamwise velocity gradients can be high.
Problem formulation as a discursive design activity
DEFF Research Database (Denmark)
Hansen, Claus Thorp; Dorst, Kees; Andreasen, Mogens Myrup
2009-01-01
part of any engineering design education. Yet the assumptions behind the rational problem solving approach to design do not sit well with some of the experiences we have in design teaching and design practice. Problem formulation is one such area where we might have to look for a different way...... the intricacies of problem formulation in design. Key notions in design methodology, like ‘design problem’, ‘design solution’ and ‘ill-structuredness’ are reconsidered in this light. This directly leads to identifying further lines for investigation, and an agenda for design research....
An exact approach for aggregated formulations
DEFF Research Database (Denmark)
Gamst, Mette; Spoorendonk, Simon
Aggregating formulations is a powerful approach for transforming problems into more tractable forms. Aggregated formulations can, though, have drawbacks: some information may get lost in the aggregation and, put in a branch-and-bound context, branching may become very difficult and even..... The paper includes general considerations on the types of problems for which the method is of particular interest. Furthermore, we prove the correctness of the procedure and consider how to include extensions such as cutting planes and advanced branching strategies....
Determination of carbamazepine in pharmaceutical formulations
Directory of Open Access Journals (Sweden)
Lílian Grace da Silva Solon
2010-09-01
The aim of this study was to evaluate the quality of five different solid formulations of carbamazepine. The reference formulation was Tegretol® 200.00 mg (Novartis); the others were a generic formulation of carbamazepine 200.00 mg (national industry), a similar formulation of carbamazepine 200.00 mg (national industry), and two formulations of carbamazepine 200.00 mg acquired from two different compounding pharmacies. The latter consisted of capsules obtained in Natal, the capital city of the Brazilian state of Rio Grande do Norte. The quality of the samples was evaluated through physical and physico-chemical tests, including: weight, diameter, thickness, content, dissolution, disintegration, hardness, friability and moisture. The results of the friability analysis showed that all formulations met Brazilian and United States Pharmacopeia (USP) specifications. In spite of having a higher hardness compared to the reference, the generic formulation had a lower disintegration time, which could be associated with the presence of crospovidone in its formulation. Results of this study showed that all formulations had dissolutions in accordance with Brazilian Pharmacopoeia specifications and quality control tests. An exception was found for the similar formulation, which had a hardness parameter that exceeded the USP standard. However, this difference was not significant given the similar formulation's satisfactory disintegration time.
Stability of Formulations Contained in the Pharmaceutical Payload Aboard Space Missions
Putcha, Lakshmi; Du, Brian; Daniels, Vernie; Boyd, Jason L.; Crady, Camille; Satterfield, Rick
2008-01-01
Efficacious pharmaceuticals with adequate shelf life are essential for successful space medical operations in support of space exploration missions. Physical and environmental factors unique to space missions, such as vibration, G forces and ionizing radiation, may adversely affect the stability of pharmaceuticals intended for standard care of astronauts aboard space missions. Stable pharmaceuticals are therefore of paramount importance for assuring the health and wellness of astronauts in space. Preliminary examination of the stability of formulations from Shuttle and International Space Station (ISS) medical kits revealed that some of these medications showed physical and chemical degradation after flight, raising concerns about reduced therapeutic effectiveness of these medications in space. A research payload experiment was conducted with a select set of formulations stowed aboard a shuttle flight and on the ISS. The payload consisted of four identical pharmaceutical kits containing 31 medications in different dosage forms that were transported to the ISS aboard the Space Shuttle, STS-121. One of the four kits was stored on the shuttle and the other three were stored on the ISS for return to Earth at six-month intervals on a pre-designated Shuttle flight for each kit; the shuttle kit was returned to Earth on the same flight. Standard stability-indicating physical and chemical parameters were measured for all pharmaceuticals returned from the shuttle and from the first ISS increment payload, along with ground-based matching controls. Results were compared between shuttle, ISS and ground controls. Evaluation of data from the three paradigms indicates that some of the formulations exhibited significant degradation in space compared to the respective ground controls; a few formulations were unstable both on the ground and in space. An increase in the number of pharmaceuticals from the ISS failing USP standards was noticed compared to those from the shuttle.
Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.
2017-07-01
In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that: 1) the IAU 50 scheme has the same performance as the IAU 100 scheme; 2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; and 3) in cases with a sufficient number of observations and good error specification, the impact of the IAU scheme is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and the different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows for better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
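The CRPS used above for probabilistic validation can be computed for a single observation and an ensemble with the standard kernel (energy) form, CRPS = E|X - y| - 0.5 E|X - X'|. The ensemble values below are illustrative, not taken from the experiments:

```python
def crps_ensemble(members, obs):
    """Kernel form of the continuous ranked probability score for one
    observation `obs` and a list of ensemble `members`."""
    n = len(members)
    # Mean absolute error of members against the observation.
    term1 = sum(abs(x - obs) for x in members) / n
    # Half the mean absolute difference between all member pairs.
    term2 = sum(abs(xi - xj) for xi in members for xj in members) / (2 * n * n)
    return term1 - term2

# Hypothetical 4-member ensemble of, e.g., a temperature field value (deg C).
ensemble = [20.1, 20.4, 19.8, 20.2]
print(crps_ensemble(ensemble, 20.0))
```

A perfect deterministic forecast gives CRPS = 0; averaging this score over many observation points is what allows the reliability/resolution assessment described in the abstract.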
Soliman, Ahmed M; Taylor, Hugh; Bonafede, Machaon; Nelson, James K; Castelli-Haley, Jane
2017-05-01
To compare direct and indirect costs between endometriosis patients who underwent endometriosis-related surgery (surgery cohort) and those who have not received surgery (no-surgery cohort). Retrospective cohort study. Not applicable. Endometriosis patients (aged 18-49 years) with (n = 124,530) or without (n = 37,106) a claim for endometriosis-related surgery were identified from the Truven Health MarketScan Commercial and Health and Productivity Management databases for 2006-2014. Not applicable. Primary outcomes were healthcare utilization during 12-month pre- and post-index periods, annual direct (healthcare) and indirect (absenteeism and short- and long-term disability) costs during the 12-month post-index period (in 2014 US dollars). Indirect costs were assessed for patients with available productivity data. Patients in the surgery cohort had significantly higher healthcare resource utilization during the post-index period and had mean annual total adjusted post-index direct costs approximately three times the costs among patients in the no-surgery cohort ($19,203 [SD $7,133] vs. $6,365 [SD $2,364]; average incremental annual direct cost = $12,838). The mean cost of surgery ($7,268 [SD $7,975]) was the single largest contributor to incremental annual direct cost. Mean estimated annual total indirect costs were $8,843 (surgery cohort) vs. $5,603 (no-surgery cohort); average incremental annual indirect cost = $3,240. Endometriosis patients who underwent surgery, compared with endometriosis patients who did not, incurred significantly higher direct costs due to healthcare utilization and indirect costs due to absenteeism or short-term disability. Regardless of the surgery type, the cost of index surgery contributed substantially to the total healthcare expenditure. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
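The incremental annual costs quoted above follow directly from the cohort means; a quick arithmetic check of the reported figures (2014 US dollars):

```python
# Mean annual costs per patient, as reported in the abstract.
direct_surgery, direct_no_surgery = 19_203, 6_365      # total direct costs
indirect_surgery, indirect_no_surgery = 8_843, 5_603   # total indirect costs

incremental_direct = direct_surgery - direct_no_surgery
incremental_indirect = indirect_surgery - indirect_no_surgery

# The abstract also notes direct costs were "approximately three times" higher.
ratio = direct_surgery / direct_no_surgery

print(incremental_direct, incremental_indirect)  # 12838 3240
```

Both differences match the reported incremental costs ($12,838 direct, $3,240 indirect), and the direct-cost ratio comes out near 3.0 as stated.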
Partial and incremental PCMH practice transformation: implications for quality and costs.
Paustian, Michael L; Alexander, Jeffrey A; El Reda, Darline K; Wise, Chris G; Green, Lee A; Fetters, Michael D
2014-02-01
To examine the associations between partial and incremental implementation of the Patient Centered Medical Home (PCMH) model and measures of cost and quality of care. We combined validated, self-reported PCMH capabilities data with administrative claims data for a diverse statewide population of 2,432 primary care practices in Michigan. These data were supplemented with contextual data from the Area Resource File. We measured medical home capabilities in place as of June 2009 and change in medical home capabilities implemented between July 2009 and June 2010. Generalized estimating equations were used to estimate the mean effect of these PCMH measures on total medical costs and quality of care delivered in physician practices between July 2009 and June 2010, while controlling for potential practice, patient cohort, physician organization, and practice environment confounders. Based on the observed relationships for partial implementation, full implementation of the PCMH model is associated with a 3.5 percent higher quality composite score, a 5.1 percent higher preventive composite score, and $26.37 lower per member per month medical costs for adults. Full PCMH implementation is also associated with a 12.2 percent higher preventive composite score, but no reductions in costs for pediatric populations. Incremental improvements in PCMH model implementation yielded similar positive effects on quality of care for both adult and pediatric populations but were not associated with cost savings for either population. Estimated effects of the PCMH model on quality and cost of care appear to improve with the degree of PCMH implementation achieved and with incremental improvements in implementation. © Health Research and Educational Trust.
Experimental measurement of enthalpy increments of Th0.25Ce0.75O2
International Nuclear Information System (INIS)
Babu, R.; Balakrishnan, S.; Ananthasivan, K.; Nagarajan, K.
2013-01-01
Thorium has been suggested as an alternative fertile material for a nuclear fuel cycle, and as an inert matrix for burning plutonium and for waste disposal. The third stage of India's nuclear power programme envisages utilization of thorium and plutonium as fuel in the Advanced Heavy Water Reactor (AHWR) and Accelerator Driven Sub-critical Systems (ADSS). Solid solutions of ThO2-PuO2 are of importance because of the coexistence of Th with Pu during the breeding cycle. CeO2 is used as a PuO2 analog due to the similar ionic radii of the cations and the similar physico-chemical properties of the oxides. ThO2 forms a homogeneous solid solution with the cubic fluorite structure when doped with Ce over the entire compositional range. In the development of mixed oxide nuclear fuels, knowledge of the thermodynamic properties of thorium oxide and its mixtures has become extremely important for understanding fuel behavior during irradiation and for predicting the performance of the fuel under accident conditions. Thermodynamic functions such as the enthalpy increment and heat capacity of the thoria-ceria solid solution have not previously been measured experimentally. Hence, the enthalpy increments of the thoria-ceria solid solution Th0.25Ce0.75O2 have been measured by inverse drop calorimetry in the temperature range 523-1723 K. The measured enthalpy increments were fitted to polynomial functions by the least squares method, and the other thermodynamic functions such as the heat capacity, entropy and Gibbs energy functions were computed in the temperature range 298-1800 K. The reported thermodynamic functions for Th0.25Ce0.75O2 constitute the first experimental data, and the heat capacity of (Th,Ce)O2 solid solutions was shown to obey the Neumann-Kopp rule. (author)
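The Neumann-Kopp rule mentioned in the conclusion approximates the solid solution's heat capacity as the mole-fraction-weighted sum of the end-member oxide heat capacities. A minimal sketch; the Cp values below are illustrative placeholders, not the measured data:

```python
def neumann_kopp(cp_end_members, mole_fractions):
    """Neumann-Kopp estimate: mole-fraction-weighted sum of end-member
    heat capacities."""
    return sum(x * cp for x, cp in zip(mole_fractions, cp_end_members))

# Hypothetical Cp values in J/(mol K) for the end members at some temperature.
cp_tho2 = 71.8
cp_ceo2 = 67.9

# Th0.25Ce0.75O2: 25 mol% ThO2, 75 mol% CeO2.
cp_solid_solution = neumann_kopp([cp_tho2, cp_ceo2], [0.25, 0.75])
print(cp_solid_solution)
```

The abstract's claim is that the experimentally derived Cp of (Th,Ce)O2 agrees with this weighted-average estimate across the measured temperature range.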
International Nuclear Information System (INIS)
Camic, Clayton L; Kovacs, Attila J; Hill, Ethan C; Calantoni, Austin M; Yemm, Allison J; Enquist, Evan A; VanDusseldorp, Trisha A
2014-01-01
The purposes of the present study were twofold: (1) to determine if the model used for estimating the physical working capacity at the fatigue threshold (PWCFT) from electromyographic (EMG) amplitude data during incremental cycle ergometry could be applied to treadmill running to derive a new neuromuscular fatigue threshold for running, and (2) to compare the running velocities associated with the PWCFT, ventilatory threshold (VT), and respiratory compensation point (RCP). Fifteen college-aged subjects (21.5 ± 1.3 y, 68.7 ± 10.5 kg, 175.9 ± 6.7 cm) performed an incremental treadmill test to exhaustion with bipolar surface EMG signals recorded from the vastus lateralis. There were significant (p < 0.05) mean differences in running velocities between the VT (11.3 ± 1.3 km/h) and PWCFT (14.0 ± 2.3 km/h), and between the VT and RCP (14.0 ± 1.8 km/h), but not between the PWCFT and RCP. The findings of the present study indicated that the PWCFT model could be applied to a single continuous, incremental treadmill test to estimate the maximal running velocity that can be maintained prior to the onset of neuromuscular fatigue. In addition, these findings suggested that the PWCFT, like the RCP, may be used to differentiate the heavy from the severe domain of exercise intensity. (paper)
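A simplified sketch of the PWCFT idea: regress EMG amplitude on time within each stage and locate the boundary between stages with near-zero and clearly positive slopes. The data, the fixed slope cutoff (standing in for the statistical significance test the actual model uses), and the midpoint rule are all illustrative assumptions, not the study's procedure:

```python
def slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

time = [0, 30, 60, 90, 120]    # seconds within each stage
emg_by_velocity = {            # km/h -> EMG amplitude samples (uV), hypothetical
    10.0: [50, 51, 50, 52, 51],   # flat: no fatigue-related rise
    12.0: [60, 61, 62, 61, 63],   # still near-flat
    14.0: [70, 74, 79, 83, 88],   # clearly rising amplitude over time
}
SLOPE_CUTOFF = 0.05  # uV/s, illustrative stand-in for a significance test

slopes = {v: slope(time, amps) for v, amps in emg_by_velocity.items()}
below = max(v for v, s in slopes.items() if s <= SLOPE_CUTOFF)
above = min(v for v, s in slopes.items() if s > SLOPE_CUTOFF)
pwc_ft = (below + above) / 2   # midpoint of the two bounding stages
print(pwc_ft)  # 13.0
```

The rising-amplitude criterion reflects the premise of the model: EMG amplitude grows over time only at intensities above the neuromuscular fatigue threshold.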
da Silva, Arlindo M.; Alpert, Pinhas
2016-01-01
In the late 1990's, prior to the launch of the Terra satellite, atmospheric general circulation models (GCMs) did not include aerosol processes because aerosols were not properly monitored on a global scale and their spatial distributions were not known well enough for their incorporation in operational GCMs. At the time of the first GEOS Reanalysis (Schubert et al. 1993), long time series of analysis increments (the corrections to the atmospheric state by all available meteorological observations) became readily available, enabling detailed analysis of the GEOS-1 errors on a global scale. Such analysis revealed that temperature biases were particularly pronounced in the Tropical Atlantic region, with patterns depicting a remarkable similarity to dust plumes emanating from the African continent as evidenced by TOMS aerosol index maps. Yoram Kaufman was instrumental in encouraging us to pursue this issue further, resulting in the study reported in Alpert et al. (1998), where we attempted to assess aerosol forcing by studying the errors of the GEOS-1 GCM without aerosol physics within a data assimilation system. Based on this analysis, Alpert et al. (1998) put forward that dust aerosols are an important source of inaccuracies in numerical weather-prediction models in the Tropical Atlantic region, although a direct verification of this hypothesis was not possible back then. Nearly 20 years later, numerical prediction models have increased in resolution and complexity of physical parameterizations, including the representation of aerosols and their interactions with the circulation. Moreover, with the advent of NASA's EOS program and subsequent satellites, atmospheric aerosols are now monitored globally on a routine basis, and their assimilation in global models is becoming well established. In this talk we will reexamine the Alpert et al. (1998) hypothesis using the most recent version of the GEOS-5 Data Assimilation System with assimilation of aerosols. We will
International Nuclear Information System (INIS)
Shin, Hong Seop; Lee, Dong Ho; Kim, Yoon Hwa; Ko, Young Tae; Lim, Joo Won; Yoon, Yup
1996-01-01
To evaluate the enhancing pattern of gastric carcinomas at dynamic incremental CT and to correlate it with pathologic findings. We retrospectively evaluated the enhancement pattern of stomach cancer on dynamic incremental CT of 78 patients. All the lesions had been pathologically proved after surgery. The enhancement pattern was categorized as good or poor in the early phase; as homogeneous, heterogeneous or ring enhancement; and by the presence or absence of delayed enhancement. There were 16 cases of early gastric cancer (EGC) and 62 cases of advanced gastric cancer (AGC). The Borrmann types of the AGC cases were 1 (n=1), 2 (n=20), 3 (n=32), 4 (n=8) and 5 (n=1). The histologic patterns of AGC were tubular (n=49), signet ring cell (n=10), and mucinous (n=3). The enhancing patterns were compared with gross and histologic findings, and delayed enhancement was correlated with pathologic evidence of desmoplasia. Good enhancement of tumor was seen in 24/41 cases (58.5%) with AGC Borrmann type 3-5, in 6/21 (28.6%) with AGC Borrmann type 1-2, and in 3/16 (18.8%) with EGC (P<.05). By histologic pattern, good enhancement of tumor was seen in 8/10 (80%) with signet ring cell type, in 21/49 (42.9%) with tubular type, and in 1/3 (33.3%) with mucinous type (P<.05). EGC was homogeneously enhanced in 14/16 cases (87.5%), whereas AGC was heterogeneously enhanced in 33/62 (53.2%) (P<.01). There was no significant correlation between delayed enhancement and the presence of desmoplasia. AGC Borrmann type 3-5 and the signet ring cell type have a tendency to show good enhancement, and EGC is more homogeneously enhanced at dynamic incremental CT
Blood flow patterns during incremental and steady-state aerobic exercise.
Coovert, Daniel; Evans, LeVisa D; Jarrett, Steven; Lima, Carla; Lima, Natalia; Gurovich, Alvaro N
2017-05-30
Endothelial shear stress (ESS) is a physiological stimulus for vascular homeostasis and is highly dependent on blood flow patterns. Exercise-induced ESS might be beneficial for vascular health; however, it is unclear what type of ESS aerobic exercise (AX) produces. The aim of this study is to characterize exercise-induced blood flow patterns during incremental and steady-state AX. We expected the blood flow pattern during exercise to be intensity-dependent and bidirectional. Six college-aged students (2 males and 4 females) were recruited to perform 2 exercise tests on a cycle ergometer. First, an 8-12-min incremental test (Test 1) was performed, in which oxygen uptake (VO2), heart rate (HR), blood pressure (BP), and blood lactate (La) were measured at rest and after each 2-min step. Then, at least 48 hr after the first test, a 3-step steady-state exercise test (Test 2) was performed, measuring VO2, HR, BP, and La. The three steps were performed at the following exercise intensities according to La: 0-2 mmol/L, 2-4 mmol/L, and 4-6 mmol/L. During both tests, blood flow patterns were determined by high-definition ultrasound and Doppler on the brachial artery. These measurements allowed us to determine blood flow velocities and directions during exercise. In Test 1, VO2, HR, BP, La, and antegrade blood flow velocity increased significantly in an intensity-dependent manner (repeated measures ANOVA), whereas retrograde blood flow velocity did not change significantly. In Test 2, all of the above variables increased significantly in an intensity-dependent manner (repeated measures ANOVA). In conclusion, blood flow patterns during incremental and steady-state exercise include both antegrade and retrograde blood flow.
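The abstract does not give the shear-stress calculation itself; a common Poiseuille-based estimate in the vascular literature derives wall shear stress from Doppler mean velocity and vessel diameter. A minimal sketch, with assumed resting brachial values rather than data from this study:

```python
def wall_shear_stress(viscosity_p, mean_velocity_cm_s, diameter_cm):
    """Poiseuille estimate of wall shear stress in dyn/cm^2: tau = 8 * mu * V / D."""
    return 8.0 * viscosity_p * mean_velocity_cm_s / diameter_cm

# Illustrative resting brachial values (assumed): whole-blood viscosity
# ~0.035 P, mean velocity ~10 cm/s, diameter ~0.4 cm.
print(wall_shear_stress(0.035, 10.0, 0.4))  # -> 7.0 dyn/cm^2
```

During exercise, the antegrade and retrograde velocity components would each yield their own shear estimate, which is why the direction of flow matters for the ESS profile.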
Incremental health care utilization and costs for acute otitis media in children.
Ahmed, Sameer; Shapiro, Nina L; Bhattacharyya, Neil
2014-01-01
To determine the incremental health care costs associated with the diagnosis and treatment of acute otitis media (AOM) in children. Cross-sectional analysis of a national health care cost database. Pediatric patients with and without a diagnosis of AOM were compared, adjusting for age, sex, region, race, ethnicity, insurance coverage, and Charlson Comorbidity Index. A total of 8.7 ± 0.4 million children were diagnosed with AOM (10.7 ± 0.4% annually; mean age 5.3 years; 51.3% male) among 81.5 ± 2.3 million children sampled (mean age 8.9 years; 51.3% male). Children with AOM incurred an additional +2.0 office visits, +0.2 emergency department visits, and +1.6 prescription fills (all P < 0.001) per year versus those without AOM, adjusting for demographics and medical comorbidities. Similarly, AOM was associated with an incremental increase in outpatient health care costs of $314 per child annually (P < 0.001) and an increase of $17 in patient medication costs (P < 0.001), but was not associated with an increase in total prescription expenses ($13, P = 0.766). The diagnosis of AOM confers a significant incremental health care utilization burden on both patients and the health care system. With its high prevalence across the United States, pediatric AOM accounts for approximately $2.88 billion in added health care expense annually and is a significant health care utilization concern. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
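The headline national figure follows directly from the per-child costs quoted in the abstract; a back-of-the-envelope recomputation (using only numbers stated above):

```python
# Figures quoted in the abstract.
children_with_aom = 8.7e6   # children diagnosed with AOM annually
outpatient_cost = 314.0     # incremental outpatient cost per child, $
medication_cost = 17.0      # incremental patient medication cost per child, $

total_burden = children_with_aom * (outpatient_cost + medication_cost)
print(f"${total_burden / 1e9:.2f} billion")  # -> $2.88 billion
```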
Stable Myoelectric Control of a Hand Prosthesis using Non-Linear Incremental Learning
Directory of Open Access Journals (Sweden)
Arjan Gijsberts
2014-02-01
Stable myoelectric control of hand prostheses remains an open problem. The only successful human-machine interface is surface electromyography, typically allowing control of a few degrees of freedom. Machine learning techniques may have the potential to remove these limitations, but their performance is thus far inadequate: myoelectric signals change over time under the influence of various factors, deteriorating control performance. In the standard approach it is therefore necessary to regularly retrain a new model from scratch. We hereby propose a non-linear incremental learning method in which occasional updates with a modest amount of novel training data allow continual adaptation to the changes in the signals. In particular, Incremental Ridge Regression and an approximation of the Gaussian kernel known as Random Fourier Features are combined to predict finger forces from myoelectric signals, both finger-by-finger and grouped in grasping patterns. We show that the approach is effective and practically applicable to this problem by first analyzing its performance while predicting single-finger forces. Surface electromyography and finger forces were collected from 10 intact subjects during four sessions spread over two different days; the results of the analysis show that small incremental updates are indeed effective in maintaining a stable level of performance. Subsequently, we employed the same method on-line to teleoperate a humanoid robotic arm equipped with a state-of-the-art commercial prosthetic hand. The subject could reliably grasp, carry, and release everyday objects, maintaining stable grasping irrespective of signal changes, hand/arm movements, and wrist pronation and supination.
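The combination described, incremental ridge regression on Random Fourier Features, can be sketched as follows. This is a minimal NumPy illustration with made-up dimensions and a toy target, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier Features approximating the Gaussian kernel
# k(x, y) = exp(-gamma * ||x - y||^2); dimensions are illustrative.
d_in, d_feat, gamma = 8, 500, 0.5
W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d_in, d_feat))
b = rng.uniform(0.0, 2.0 * np.pi, size=d_feat)

def rff(x):
    """Map an input vector into the randomized feature space."""
    return np.sqrt(2.0 / d_feat) * np.cos(x @ W + b)

# Incremental ridge regression: maintain the sufficient statistics
# A = Phi^T Phi + lam * I and c = Phi^T y, updated one sample at a time,
# so new data never requires retraining from scratch.
lam = 1e-2
A = lam * np.eye(d_feat)
c = np.zeros(d_feat)

def update(x, y):
    """Fold one (input, target) pair into the running statistics."""
    global A, c
    phi = rff(x)
    A += np.outer(phi, phi)
    c += phi * y

def predict(x):
    """Solve the current ridge system and evaluate at x."""
    return rff(x) @ np.linalg.solve(A, c)

# Toy usage: incrementally learn a smooth target from streaming samples.
for _ in range(300):
    x = rng.normal(size=d_in)
    update(x, np.sin(x[0]))
print(predict(rng.normal(size=d_in)))
```

In the paper's setting the updates arrive as occasional recalibration batches; in practice one would also keep a factorization of `A` (e.g. a Cholesky factor with rank-one updates) rather than re-solving the full system at every prediction.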
Analytic expression of the temperature increment in a spin transfer torque nanopillar structure
International Nuclear Information System (INIS)
You, Chun-Yeol; Ha, Seung-Seok; Lee, Hyun-Woo
2009-01-01
The temperature increment due to Joule heating in a nanopillar spin transfer torque system is investigated. We obtain a time-dependent analytic solution of the heat conduction equation in nanopillar geometry by using the Green's function method after some simplifications of the problem. While Holm's equation is applicable only to steady states in metallic systems, our solution describes the time dependence and is also applicable to a nanopillar-shaped magnetic tunneling junction with an insulating barrier layer. The validity of the analytic solution is confirmed by numerical finite element method simulations and by comparison with Holm's equation.
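For orientation, the generic building block behind such a Green's-function treatment is the heat conduction equation with a volumetric Joule source. The notation below is assumed, not the authors'; the paper's nanopillar solution is a specialized version of this free-space form:

```latex
% Heat conduction with volumetric Joule heating q (heat capacity C,
% conductivity k, diffusivity kappa = k / C):
C \,\frac{\partial T}{\partial t} = k\,\nabla^{2} T + q(\mathbf{r}, t)
% Free-space Green's function of the diffusion operator:
G(\mathbf{r}, t) = \frac{1}{(4\pi\kappa t)^{3/2}}
  \exp\!\left(-\frac{|\mathbf{r}|^{2}}{4\kappa t}\right)
% Temperature increment as a space-time convolution of the source:
\Delta T(\mathbf{r}, t) = \frac{1}{C}\int_{0}^{t}\!\!\int
  G(\mathbf{r}-\mathbf{r}',\, t-t')\, q(\mathbf{r}', t')\,
  \mathrm{d}^{3}r'\, \mathrm{d}t'
```

The time dependence of $\Delta T$ is what distinguishes this approach from Holm's steady-state result.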