WorldWideScience

Sample records for revisiting single-point incremental

  1. Revisiting the fundamentals of single point incremental forming by

    DEFF Research Database (Denmark)

    Silva, Beatriz; Skjødt, Martin; Martins, Paulo A.F.

    2008-01-01

    Knowledge of the physics behind the fracture of material at the transition between the inclined wall and the corner radius of the sheet is of great importance for understanding the fundamentals of single point incremental forming (SPIF). How the material fractures, what is the state of strain...

  2. Theory of Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Martins, P.A.F.; Bay, Niels; Skjødt, Martin

    2008-01-01

This paper presents a closed-form theoretical analysis modelling the fundamentals of single point incremental forming and explaining the experimental and numerical results available in the literature for the past couple of years. The model is based on membrane analysis with bi-directional in-plane contact friction and is focused on the extreme modes of deformation that are likely to be found in single point incremental forming processes. The overall investigation is supported by experimental work performed by the authors and data retrieved from the literature.

  3. Creating Helical Tool Paths for Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Skjødt, Martin; Hancock, Michael H.; Bay, Niels

    2007-01-01

    Single point incremental forming (SPIF) is a relatively new sheet forming process. A sheet is clamped in a rig and formed incrementally using a rotating single point tool in the form of a rod with a spherical end. The process is often performed on a CNC milling machine and the tool movement...
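A helical tool path of the kind described, where the tool descends continuously instead of in discrete contour steps, can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm; the function name, the wall-angle convention (measured from the sheet plane) and all parameter values are assumptions:

```python
import math

def helical_cone_path(r_top, wall_angle_deg, depth, pitch, points_per_rev=90):
    """(x, y, z) points of a continuous helical path for a truncated cone.

    The tool descends by `pitch` per revolution while the radius shrinks
    according to the wall angle (measured from the sheet plane)."""
    tan_a = math.tan(math.radians(wall_angle_deg))
    n_pts = int(depth / pitch * points_per_rev)
    path = []
    for i in range(n_pts + 1):
        theta = 2.0 * math.pi * i / points_per_rev   # accumulated angle
        z = -pitch * theta / (2.0 * math.pi)         # smooth, stepless descent
        r = r_top + z / tan_a                        # radius at depth z (z < 0)
        path.append((r * math.cos(theta), r * math.sin(theta), z))
    return path

# e.g. a 50 mm top radius, 60 degree wall, 10 mm depth, 0.5 mm pitch
path = helical_cone_path(50.0, 60.0, 10.0, 0.5)
```

Because the descent is tied to the accumulated angle, the path has no step-down transitions, which is the usual motivation for helical over contour-by-contour strategies.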

  4. Single-point incremental forming and formability-failure diagrams

    DEFF Research Database (Denmark)

    Silva, M.B.; Skjødt, Martin; Atkins, A.G.

    2008-01-01

In a recent work [1], the authors constructed a closed-form analytical model that is capable of dealing with the fundamentals of single point incremental forming and explaining the experimental and numerical results published in the literature over the past couple of years. The model is based … of deformation that are commonly found in general single point incremental forming processes; and (ii) to investigate the formability limits of SPIF in terms of ductile damage mechanics and the question of whether necking does, or does not, precede fracture. Experimentation by the authors together with data …

  5. Single point incremental forming: Formability of PC sheets

    Science.gov (United States)

    Formisano, A.; Boccarusso, L.; Carrino, L.; Lambiase, F.; Minutolo, F. Memola Capece

    2018-05-01

Recent research on Single Point Incremental Forming of polymers has only briefly covered the possibility of expanding the materials capability window of this flexible forming process beyond metals, by demonstrating the workability of thermoplastic polymers at room temperature. Given the different behaviour of polymers compared to metals, several aspects need to be studied in more depth to better understand how these materials behave when incrementally formed. Thus, the aim of this work is to investigate the formability of incrementally formed polycarbonate thin sheets. To this end, an experimental investigation at room temperature was conducted involving formability tests: varying wall angle cone and pyramid frusta were manufactured by processing polycarbonate sheets of different thicknesses and using tools with different diameters, in order to draw conclusions on the formability of polymer sheets through the evaluation of the forming angles and the observation of the failure mechanisms.

  6. Failure mechanisms in single-point incremental forming of metals

    DEFF Research Database (Denmark)

    Silva, Maria B.; Nielsen, Peter Søe; Bay, Niels

    2011-01-01

The last years saw the development of two different views on how failure develops in single-point incremental forming (SPIF). Today, researchers are split between those claiming that fracture is always preceded by necking and those considering that fracture occurs with suppression of necking. Each … on formability limits and development of fracture. The unified view conciliates the aforementioned different explanations of the role of necking in fracture and is consistent with the experimental observations that have been reported in the past years. The work is performed on aluminium AA1050-H111 sheets …

  7. Single Point Incremental Forming using a Dummy Sheet

    DEFF Research Database (Denmark)

    Skjødt, Martin; Silva, Beatriz; Bay, Niels

    2007-01-01

A new version of single point incremental forming (SPIF) is presented. This version includes a dummy sheet on top of the work piece, thus forming two sheets instead of one. The dummy sheet, which is in contact with the rotating tool pin, is discarded after forming. The new set-up influences … The possible influence of friction between the two sheets is furthermore investigated. The results show that the use of a dummy sheet reduces wear of the work piece to almost zero, but also causes a decrease in formability. Bulging of the planar sides of the pyramid is reduced and surface roughness …

  8. Rapid Prototyping by Single Point Incremental Forming of Sheet Metal

    DEFF Research Database (Denmark)

    Skjødt, Martin

    2008-01-01

… The process is incremental forming, since plastic deformation takes place in a small local zone underneath the forming tool, i.e. the sheet is formed as a summation of the movements of the local plastic zone. The process is slow and therefore only suited for prototypes or small batch production. On the other … in the plastic zone. Using these it is demonstrated that the growth rate of accumulated damage in SPIF is small compared to conventional sheet forming processes. This, combined with an explanation of why necking is suppressed, leads to a new theory stating that SPIF is limited by fracture and not necking. The theory … SPIF. A multi-stage strategy is presented which allows forming of a cup with vertical sides in about half of the depth. It is demonstrated that this results in strain paths which are far from straight, but strains are still limited by a straight fracture line in the principal strain space. The multi-…

  9. Substructuring in the implicit simulation of single point incremental sheet forming

    NARCIS (Netherlands)

    Hadoush, A.; van den Boogaard, Antonius H.

    2009-01-01

This paper presents a direct substructuring method to reduce the computing time of implicit simulations of single point incremental forming (SPIF). Substructuring is used to divide the finite element (FE) mesh into several non-overlapping parts. Based on the hypothesis that plastic deformation is …
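The core operation behind direct substructuring is static condensation: the interior degrees of freedom of each substructure are eliminated so that the global system couples only boundary DOFs. A minimal sketch, with an invented 3-DOF stiffness system and an arbitrary interior/boundary split (not the paper's FE model):

```python
import numpy as np

# Invented 3-DOF "stiffness" system: DOF 0 is interior, DOFs 1-2 lie on the
# substructure boundary and are kept in the reduced (condensed) system.
K = np.array([[4.0, -1.0, -1.0],
              [-1.0, 3.0, -1.0],
              [-1.0, -1.0, 5.0]])
f = np.array([1.0, 0.0, 2.0])
ii, bb = [0], [1, 2]

Kii = K[np.ix_(ii, ii)]
Kib = K[np.ix_(ii, bb)]
Kbi = K[np.ix_(bb, ii)]
Kbb = K[np.ix_(bb, bb)]

# Schur complement: boundary equations with interior DOFs eliminated
K_red = Kbb - Kbi @ np.linalg.solve(Kii, Kib)
f_red = f[bb] - Kbi @ np.linalg.solve(Kii, f[ii])
u_b = np.linalg.solve(K_red, f_red)
```

Solving the reduced system reproduces the boundary displacements of the full solve exactly, which is what lets each substructure be processed independently before the (much smaller) interface problem is solved.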

  10. A numerical analysis on forming limits during spiral and concentric single point incremental forming

    Science.gov (United States)

    Gipiela, M. L.; Amauri, V.; Nikhare, C.; Marcondes, P. V. P.

    2017-01-01

Sheet metal forming is one of the major manufacturing industries, building numerous parts for the aerospace, automotive and medical industries. Due to the high demand in the vehicle industry on the one hand, and environmental regulations requiring lower fuel consumption on the other, researchers are developing new, energy-efficient sheet metal forming processes to build these parts, instead of the conventionally used punch and die, in order to achieve lightweight parts. One of the most recognized manufacturing processes in this category is Single Point Incremental Forming (SPIF). SPIF is a die-less sheet metal forming process in which a single point tool incrementally forces a single point of the sheet metal into the plastic deformation zone at any given time. In the present work, the finite element method (FEM) is applied to analyze the forming limits of a high strength low alloy steel formed by SPIF with spiral and concentric tool paths. SPIF numerical simulations were modelled with 24 and 29 mm cup depths, and the results were compared with Nakajima results obtained by experiments and FEM. It was found that the cup formed with the Nakajima tool failed at 24 mm, while cups formed by SPIF surpassed the limit at both depths with both profiles. It was also noticed that the strains achieved with the concentric profile are lower than those with the spiral profile.
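A forming-limit check of the kind used in such comparisons tests a measured strain point against the forming limit curve (FLC) in principal strain space. A minimal sketch, assuming a generic piecewise-linear FLC rather than the paper's Nakajima data:

```python
import bisect

# Generic piecewise-linear forming limit curve in principal strain space
# (minor true strain -> limiting major true strain); values are illustrative.
flc_minor = [-0.2, -0.1, 0.0, 0.1, 0.2]
flc_major = [0.45, 0.35, 0.30, 0.34, 0.40]

def limit_major(minor):
    """Linearly interpolate the FLC at a given minor true strain."""
    i = bisect.bisect_left(flc_minor, minor)
    i = min(max(i, 1), len(flc_minor) - 1)   # clamp to the table ends
    x0, x1 = flc_minor[i - 1], flc_minor[i]
    y0, y1 = flc_major[i - 1], flc_major[i]
    return y0 + (y1 - y0) * (minor - x0) / (x1 - x0)

def exceeds_limit(major, minor):
    """True when a measured strain point lies on or above the FLC."""
    return major >= limit_major(minor)
```

SPIF strains are often reported above the Nakajima-derived FLC, which is exactly the comparison this predicate formalizes.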

  11. Single point incremental forming of tailored blanks produced by friction stir welding

    DEFF Research Database (Denmark)

    Silva, M.B.; Skjødt, Martin; Vilaca, P.

    2009-01-01

This paper is focused on the single point incremental forming (SPIF) of tailored welded blanks produced by friction stir welding (FSW). Special emphasis is placed on the know-how for producing the tailored blanks and on the utilization of innovative forming strategies to protect the welding joint … from the rotating single point-forming tool. Formability of the tailor welded blanks (TWB) is evaluated by means of benchmark tests carried out on truncated conical and pyramidal shapes, and results are compared with similar tests performed on conventional reference blanks of the same material. Results show that the combination of SPIF with tailored welded blanks produced by FSW seems promising in the manufacture of complex sheet metal parts with high depths.

  12. Optimization of the single point incremental forming process for titanium sheets by using response surface

    Directory of Open Access Journals (Sweden)

    Saidi Badreddine

    2016-01-01

Full Text Available The single point incremental forming process is well known to be perfectly suited for prototyping and small series. One of its fields of applicability is the medical area, for the forming of titanium prostheses or titanium medical implants. However, this process is not yet widely industrialized, mainly due to its geometrical inaccuracy and its inhomogeneous thickness distribution. Moreover, considerable forces can occur; they must be controlled in order to preserve the tooling. In this paper, a numerical approach is proposed in order to minimize the maximum force reached during the incremental forming of titanium sheets and to maximize the minimal thickness. A response surface methodology is used to find the optimal values of two input parameters of the process: the punch diameter and the vertical step size of the tool path.
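The response-surface step can be illustrated as follows: fit a second-order polynomial in the two input parameters to sampled responses, then solve the stationary-point equations of the fitted quadratic. All data values, and the single-response objective, are invented for illustration; the paper optimizes force and thickness together:

```python
import numpy as np

# 3x3 factorial samples of (punch diameter d [mm], step size s [mm]);
# the "force" response below is a synthetic quadratic, not measured data.
d = np.repeat([6.0, 10.0, 14.0], 3)
s = np.tile([0.2, 0.5, 0.8], 3)
F = 1.2 * (d - 10.0) ** 2 + 40.0 * (s - 0.5) ** 2 + 0.3 * d * s + 900.0

# Fit F ~ b0 + b1*d + b2*s + b3*d^2 + b4*s^2 + b5*d*s by least squares
X = np.column_stack([np.ones_like(d), d, s, d**2, s**2, d * s])
beta, *_ = np.linalg.lstsq(X, F, rcond=None)

# Stationary point of the fitted quadratic: solve grad F = 0
A = np.array([[2 * beta[3], beta[5]],
              [beta[5], 2 * beta[4]]])
b = -np.array([beta[1], beta[2]])
d_opt, s_opt = np.linalg.solve(A, b)
```

With a full three-level factorial design the six quadratic coefficients are identifiable, so the stationary point of the fitted surface lands near the true optimum of the sampled response.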

  13. The use of single point incremental forming for customized implants of unicondylar knee arthroplasty: a review

    Directory of Open Access Journals (Sweden)

    Pankaj Kailasrao Bhoyar

Full Text Available Abstract Introduction Implantable devices have an enormous market. These products are usually made by traditional manufacturing processes, but for custom-made implants Incremental Sheet Forming is a paramount alternative. Single Point Incremental Forming (SPIF) is a manufacturing process used to form intricate, asymmetrical components. It forms the component by stretching and bending while maintaining the material's crystal structure. The SPIF process can be performed on a conventional Computer Numerical Control (CNC) milling machine. Review This review paper elaborates on the various manufacturing processes applied to biocompatible metallic and non-metallic customised implantable devices. Conclusion The Ti-6Al-4V alloy is broadly used for biomedical implants, but the vanadium in this alloy is toxic, so the alloy is not ideal for implants. The attention of researchers is therefore directed towards non-toxic and suitable biocompatible materials. For this reason, a novel approach was developed in order to enhance the mechanical properties of this material. The development of the incremental forming technique can improve the formability of existing alloys and may meet the current strict requirements for performance of dies and punches.

  14. Analysis of residual stress state in sheet metal parts processed by single point incremental forming

    Science.gov (United States)

    Maaß, F.; Gies, S.; Dobecki, M.; Brömmelhoff, K.; Tekkaya, A. E.; Reimers, W.

    2018-05-01

The mechanical properties of formed metal components are highly affected by the prevailing residual stress state. A selective induction of residual compressive stresses in the component can improve product properties such as the fatigue strength. By means of single point incremental forming (SPIF), the residual stress state can be influenced by adjusting the process parameters during the manufacturing process. To achieve a fundamental understanding of the residual stress formation caused by the SPIF process, a valid numerical process model is essential. Within the scope of this paper, the significance of kinematic hardening effects on the determined residual stress state is presented based on numerical simulations. The effect of the unclamping step after the manufacturing process is also analyzed. An average deviation of 18 % between the residual stress amplitudes in the clamped and unclamped conditions reveals that the unclamping step needs to be considered to reach a high numerical prediction quality.

  15. Optimization of Single Point Incremental Forming of Al5052-O Sheet

    Energy Technology Data Exchange (ETDEWEB)

Kim, Chan Il; Xiao, Xiao; Do, Van Cuong; Kim, Young Suk [Kyungpook Nat'l Univ., Daegu (Korea, Republic of)]

    2017-03-15

Single point incremental forming (SPIF) is a sheet-forming technique. It is a die-less sheet metal manufacturing process for rapid prototyping and small batch production. The critical parameters in the forming process include tool diameter, step depth, feed rate, spindle speed, etc. In this study, these parameters and the die shape corresponding to the varying wall angle conical frustum (VWACF) model were used to form 0.8 mm thick Al5052-O sheets. The Taguchi method of design of experiments (DOE) and grey relational optimization were used to determine the optimum parameters in SPIF. A response study was performed on formability, springback, and thickness reduction. The research shows that the optimum combination of these parameters yielding the best performance of SPIF is as follows: tool diameter, 6 mm; spindle speed, 60 rpm; step depth, 0.3 mm; and feed rate, 500 mm/min.
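The grey relational step used to collapse multiple responses (formability, springback, thickness reduction) into a single ranking can be sketched as below. The run data, the 0.5 distinguishing coefficient and the larger/smaller-is-better assignments are illustrative assumptions, not the paper's measurements:

```python
import numpy as np

runs = np.array([
    [0.72, 1.8, 14.0],   # run 0: formability, springback, thickness reduction
    [0.80, 1.2, 11.0],   # run 1
    [0.65, 2.1, 16.0],   # run 2
    [0.77, 1.5, 12.5],   # run 3
])

norm = np.empty_like(runs)
# formability: larger-is-better normalization
norm[:, 0] = (runs[:, 0] - runs[:, 0].min()) / (runs[:, 0].max() - runs[:, 0].min())
# springback and thickness reduction: smaller-is-better normalization
for j in (1, 2):
    norm[:, j] = (runs[:, j].max() - runs[:, j]) / (runs[:, j].max() - runs[:, j].min())

delta = 1.0 - norm                  # deviation from the ideal sequence
zeta = 0.5                          # distinguishing coefficient
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grades = coeff.mean(axis=1)         # grey relational grade per run
best = int(np.argmax(grades))       # run with the best overall compromise
```

The run with the highest grade is the best multi-response compromise; in a Taguchi study the grades are then averaged per factor level to pick the optimum parameter combination.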

  16. Strategies and limits in multi-stage single-point incremental forming

    DEFF Research Database (Denmark)

    Skjødt, Martin; Silva, M.B.; Martins, P. A. F.

    2010-01-01

Multi-stage single-point incremental forming (SPIF) is a state-of-the-art manufacturing process that allows small-quantity production of complex sheet metal parts with vertical walls. This paper is focused on the application of multi-stage SPIF with the objective of producing cylindrical cups … forming-limit curves and fracture forming-limit curves (FFLCs), numerical simulation, and experimentation, namely the evaluation of strain paths and fracture strains in actual multi-stage parts. Assessment of numerical simulation with experimentation shows good agreement between computed and measured strain and strain paths. The results also reveal that the sequence of multi-stage forming has a large effect on the location of strain points in the principal strain space. Strain paths are linear in the first stage and highly non-linear in the subsequent forming stages. The overall results show that the experimentally …

  17. Prediction and control of pillow defect in single point incremental forming using numerical simulations

    International Nuclear Information System (INIS)

    Isidore, B. B. Lemopi; Hussain, G.; Khan, Wasim A.; Shamachi, S. Pourhassan

    2016-01-01

Pillows formed at the center of sheets in single point incremental forming (SPIF) are fabrication defects which adversely affect the geometrical accuracy and formability of manufactured parts. This study focuses on using finite element analysis (FEA) as a tool to predict and control pillowing in SPIF by varying tool size and shape. 3D FEA and experiments are carried out using annealed Aluminum 1050. From the FEA, it is found that the stress/strain state in the immediate vicinity of the forming tool in the transverse direction plays a determinant role in sheet pillowing. Furthermore, pillow height increases as in-plane compression in the sheet increases. The nature of the in-plane stresses in the transverse direction varies from compressive to tensile as the tool-end geometry is changed from spherical to flat. Additionally, the magnitude of the corresponding in-plane stresses decreases as the tool radius increases. According to measurements from the FEA model, flat end tools and large radii both retard pillow formation. However, the influence of changing the tool end shape from hemispherical to flat is observed to be more important than the effect of varying the tool radius, because the deformation zone remains in tension in the transverse direction while forming with flat end tools. These findings are verified by a set of experiments. A fair agreement between the FEM and empirical results shows that FEM can be employed as a tool to predict and control the pillow defect in SPIF.

  19. Single Point Incremental Forming to increase material knowledge and production flexibility

    International Nuclear Information System (INIS)

    Habraken, A.M.

    2016-01-01

Nowadays, manufactured pieces can be divided into two groups: mass production and production of a low number of parts. Within the second group (prototyping or small batch production), an emerging solution relies on Incremental Sheet Forming (ISF). ISF refers to processes where plastic deformation occurs by repeated contact with a relatively small tool. More specifically, many publications over the past decade investigate Single Point Incremental Forming (SPIF), where the final shape is determined only by the tool movement. This manufacturing process is characterized by the forming of sheets by means of a CNC-controlled generic tool stylus, with the sheets clamped by a non-workpiece-specific clamping system, in the absence of a partial or full die. The advantages are the lack of tooling requirements and often enhanced formability; however, the process poses challenges in terms of process control and accuracy assurance. Note that the most commonly used materials in incremental forming are aluminium and steel alloys; however, other alloys are also used, especially for medical industry applications, such as cobalt and chromium alloys, stainless steel and titanium alloys. Some scientists have applied incremental forming to PVC plates, and others to sandwich panels composed of polypropylene with mild steel, and of aluminium metallic foams with aluminium sheet metal. Micro incremental forming of thin foils has also been developed. Starting from the scatter of the results of Finite Element (FE) simulations when one tries to predict the tool force (see the SPIF benchmark of the 2014 Numisheet conference), we will see how SPIF and even micro SPIF (the process applied to a thin metallic sheet with a few grains within the thickness) allow investigating the material behavior. This lecture will focus on the identification of constitutive laws, on the SPIF forming mechanisms and formability, as well as on the failure mechanism. Different hypotheses have been proposed to explain SPIF formability; they will be …

  1. Control of anisotropic shape deviation in single point incremental forming of paperboard

    Science.gov (United States)

    Stein, Philipp; Franke, Wilken; Hoppe, Florian; Hesse, Daniel; Mill, Katharina; Groche, Peter

    2017-10-01

The increasing social demand for sustainable material use leads to new process strategies as well as to the use of new materials in nearly all industries. In light of this demand, paperboard shows potential to substitute polymer-based components while also exhibiting improved ecological properties. However, in contrast to polymer-based products, the forming limits of paperboard are relatively low. Therefore, three-dimensional forming of paperboard is a subject of current research. One area of research focuses on the control of the fiber-orientation-dependent anisotropic material behavior of industrial paperboard in forming processes. For an examined industrial paperboard, an average elongation at break of 1.2% in the so-called machine direction (fiber preferential direction, MD) has been determined at standard climate conditions. In contrast, in the cross direction (orthogonal to the machine direction, CD) a value of 2.6% was observed. With increased moisture content of the specimens, the difference between the mechanical properties in MD and CD increases further. As a result of the fiber-orientation-dependent mechanical properties, forming with symmetric tools leads to asymmetrically shaped final parts. Within this article, an approach to reduce the asymmetric shape of three-dimensionally formed paperboard by using single point incremental forming technology is presented. For a free spatial processing strategy, the 3D Servo Press Technology, which enables circular as well as free processing strategies, is used. Based on reference tests with a circular processing strategy, it is shown that by using an adapted, elliptical tool path, an almost symmetrically shaped part can be formed.
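The adapted elliptical tool path can be generated as in the sketch below, where the semi-axis in the stiffer machine direction is scaled relative to the cross direction. The axis ratio and point count are illustrative assumptions, not the authors' calibrated values:

```python
import math

def elliptical_path(radius, axis_ratio, n=360):
    """Closed tool path with the MD semi-axis scaled by axis_ratio (< 1)."""
    pts = []
    for i in range(n):
        t = 2.0 * math.pi * i / n
        x_md = radius * axis_ratio * math.cos(t)   # machine direction
        y_cd = radius * math.sin(t)                # cross direction
        pts.append((x_md, y_cd))
    return pts

# e.g. a 40 mm nominal radius with an assumed 0.9 axis ratio
path = elliptical_path(40.0, 0.9)
```

In practice the ratio would be tuned from reference tests with a circular path, so that the asymmetry of the formed part, rather than a material constant, drives the compensation.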

  2. Shape measurement system for single point incremental forming (SPIF) manufacts by using trinocular vision and random pattern

    International Nuclear Information System (INIS)

    Setti, Francesco; Bini, Ruggero; Lunardelli, Massimo; Bosetti, Paolo; Bruschi, Stefania; De Cecco, Mariolino

    2012-01-01

Many contemporary works show the interest of the scientific community in measuring the shape of artefacts made by single point incremental forming. In this paper, we present an algorithm able to detect feature points within a random pattern, check the compatibility of associations by exploiting multi-stereo constraints to reject outliers, and perform a 3D reconstruction from dense random patterns. The algorithm is suitable for real-time application; in fact, it needs just three images and synchronous, relatively fast processing. The proposed method has been tested on a simple geometry and the results have been compared with a coordinate measuring machine acquisition.

  3. Springback effects during single point incremental forming: Optimization of the tool path

    Science.gov (United States)

    Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick

    2018-05-01

Incremental sheet forming is an emerging process to manufacture sheet metal parts. This process is more flexible than conventional ones and well suited for small batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures or serial robots can be used to perform the forming operation. Whatever the machine considered, large deviations between the theoretical shape and the real shape can be observed after the part is unclamped. These deviations are due both to the lack of stiffness of the machine and to residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy for the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process, allowing the shape of the formed part to be predicted with good accuracy, is defined. This model, based on appropriate assumptions, leads to calculation times which remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by an improvement of the final shape.
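The iterative tool-path correction can be illustrated with a toy one-dimensional model: the deviation between the target shape and the (simulated) formed shape is fed back into the path at each iteration. The linear springback stand-in below replaces the paper's finite element model and is purely illustrative:

```python
# Stand-in "process": the formed part rebounds by a fixed fraction of the
# programmed depth (the real prediction comes from an FE model).
def process(path, recovery=0.12):
    return [z * (1.0 - recovery) for z in path]

def correct_tool_path(target, iterations=20, gain=1.0):
    """Iteratively overform: push the path where the part falls short."""
    path = list(target)                      # start from the nominal geometry
    for _ in range(iterations):
        formed = process(path)
        path = [p + gain * (t - f) for p, t, f in zip(path, target, formed)]
    return path

target = [-1.0, -3.0, -5.0, -8.0]            # desired depths along a section
path = correct_tool_path(target)
formed = process(path)
```

Under this linear model the loop converges geometrically to an overformed path whose sprung-back shape matches the target; with an FE model in place of `process`, each iteration is one (expensive) simulation, which is why the paper emphasizes keeping calculation times compatible with the optimization procedure.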

  4. Single-Point Incremental Forming of Two Biocompatible Polymers: An Insight into Their Thermal and Structural Properties

    Directory of Open Access Journals (Sweden)

    Luis Marcelo Lozano-Sánchez

    2018-04-01

Full Text Available Sheets of polycaprolactone (PCL) and ultra-high molecular weight polyethylene (UHMWPE) were fabricated and shaped by the Single-Point Incremental Forming (SPIF) process. The performance of these biocompatible polymers in SPIF was assessed through the variation of four main parameters: the diameter of the forming tool, the spindle speed, the feed rate, and the step size, based on a Box–Behnken design of experiments with four variables and three levels. The design of experiments allowed us to identify the parameters that most affect the forming of PCL and UHMWPE. The study was completed by means of a deep characterization of the thermal and structural properties of both polymers. These properties were correlated to the performance of the polymers observed in SPIF, and it was found that the polymer chains are oriented as a consequence of the SPIF processing. Moreover, by X-ray diffraction it was proved that polymer chains behave differently on each surface of the fabricated parts, since the chains on the surface in contact with the forming tool are oriented horizontally, while on the opposite surface they are oriented in the vertical direction. The unit cell of UHMWPE is distorted, passing from an orthorhombic cell to a monoclinic one due to the slippage between crystallites. This slippage between crystallites was observed in both PCL and UHMWPE, and was identified as an alpha star thermal transition located in the rubbery region between the glass transition and the melting point of each polymer.

  5. Comparison of plastic strains on AA5052 by single point incremental forming process using digital image processing

    Energy Technology Data Exchange (ETDEWEB)

Mugendiran, V.; Gnanavelbabu, A. [Anna University, Chennai, Tamilnadu (India)]

    2017-06-15

In this study, a surface-based strain measurement was used to determine the formability of the sheet metal. Strain measurement may employ manual calculation of plastic strains based on the reference circle and the deformed circle; the manual calculation method has a greater margin of error in practical applications. In this paper, an attempt has been made to compare the formability determined by three different approaches: the conventional method, the least square method and digital-image-based strain measurement. As the etched circles on sheet metal formed by the single point incremental process deform into approximately elliptical shapes, image acquisition was performed before and after forming. The plastic strains of the deformed circle grids are calculated based on the non-deformed reference. The coordinates of the deformed circles are measured through various image processing steps. Finally, the strains obtained from the deformed circles are used to plot the forming limit diagram. To evaluate the accuracy of the system, the conventional, least square and digital-based methods of predicting the forming limit diagram were compared. The conventional and least square methods show marginal error when compared with the digital image processing method. Measurement of strain based on image processing agrees well and can be used to improve the accuracy and to reduce the measurement error in prediction of the forming limit diagram.
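The circle-grid calculation underlying such forming limit diagrams reduces, per grid element, to converting the measured ellipse axes into true strains. A minimal sketch with invented measurements:

```python
import math

def grid_strains(d0, major_axis, minor_axis):
    """True (logarithmic) strains of one circle deformed into an ellipse."""
    e1 = math.log(major_axis / d0)   # major principal strain
    e2 = math.log(minor_axis / d0)   # minor principal strain
    return e1, e2

# e.g. a 2.5 mm etched circle measured as a 3.2 mm x 2.4 mm ellipse
e1, e2 = grid_strains(2.5, 3.2, 2.4)
```

Each deformed circle contributes one (e2, e1) point to the forming limit diagram, so the accuracy of the diagram traces directly back to how well the ellipse axes are measured, whether manually or by image processing.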

  6. Comparison of plastic strains on AA5052 by single point incremental forming process using digital image processing

    International Nuclear Information System (INIS)

    Mugendiran, V.; Gnanavelbabu, A.

    2017-01-01

    In this study, a surface-based strain measurement was used to determine the formability of the sheet metal. A strain measurement may employ manual calculation of plastic strains based on the reference circle and the deformed circle, but the manual calculation method has a greater margin of error in practical applications. In this paper, an attempt has been made to compare formability by implementing three different approaches: the conventional method, the least-squares method, and digital-image-based strain measurement. As the sheet metal was formed by a single point incremental process, the etched circles deform into approximately elliptical shapes; image acquisition was done before and after forming. The plastic strains of the deformed circle grids are calculated with respect to the non-deformed reference, and the coordinates of the deformed circles are measured through various image processing steps. Finally, the strains obtained from the deformed circles are used to plot the forming limit diagram. To evaluate the accuracy of the system, the conventional, least-squares, and digital-image-based predictions of the forming limit diagram were compared. The conventional and least-squares methods show marginal error compared with the digital-image-based method. Measurement of strain based on image processing agrees well and can be used to improve accuracy and reduce measurement error in the prediction of the forming limit diagram.

  7. Revisiting single-point incremental forming and formability/failure diagrams by means of finite elements and experimentation

    DEFF Research Database (Denmark)

    Silva, M. B.; Skjødt, Martin; Bay, Niels

    2009-01-01

    framework accounts for the influence of major process parameters and their mutual interaction to be studied both qualitatively and quantitatively. It enables the conclusion to be drawn that the probable mode of material failure in SPIF is consistent with stretching, rather than shearing being the governing...... mode of deformation. The study of the morphology of the cracks combined with the experimentally observed suppression of neck formation enabled the authors to conclude that traditional forming limit curves are inapplicable for describing failure. Instead, fracture forming limit curves should be employed...... the forming limits determined by the analytical framework with experimental values. It is shown that agreement between analytical, finite element, and experimental results is good, implying that the previously proposed analytical framework can be utilized to explain the mechanics of deformation...

  8. Single Point Incremental Forming and Multi-Stage Incremental Forming on Aluminium Alloy 1050

    Science.gov (United States)

    Suriyaprakan, Premika

    Pyroxenes are a broad group of silicate minerals found in many igneous and metamorphic rocks. In their simplest form, these silicates consist of SiO3 chains linking tetrahedral SiO4 groups. The general chemical formula of pyroxenes is M2M1T2O6, where M2 refers to cations generally in a distorted octahedral coordination (Mg2+, Fe2+, Mn2+, Li+, Ca2+, Na+), M1 refers to cations in a regular octahedral coordination (Al3+, Fe3+, Ti4+, Cr3+, V3+, Ti3+, Zr4+, Sc3+, Zn2+, Mg2+, Fe2+, Mn2+), and T to cations in tetrahedral coordination (Si4+, Al3+, Fe3+). Pyroxenes with a monoclinic structure are called clinopyroxenes. The stability of clinopyroxenes over a wide range of chemical compositions, together with the possibility of tuning their physical and chemical properties and their chemical durability, has generated worldwide interest owing to their applications in materials science and technology. This work deals with the development of clinopyroxene-based glasses and glass-ceramics for functional applications. The study had both scientific and technological objectives: namely, to acquire fundamental knowledge about the formation of crystalline phases and solid solutions in selected glass-ceramic systems, and to assess the feasibility of applying the new materials in different technological areas, with special emphasis on sealing in solid oxide fuel cells (SOFC). To this end, several glasses and glass-ceramic materials were prepared along the enstatite (MgSiO3)–diopside (CaMgSi2O6) and diopside (CaMgSi2O6)–Ca-Tschermak (CaAlSi2O6) joins, and characterized by a wide range of techniques. All glasses were prepared by melt-quenching, while the glass-ceramics were obtained either by sintering and crystallization of frits or by nucleation and crystallization of monolithic glasses.
The effects of various ionic substitutions in Al-containing diopside compositions on the structure, sintering and crystallization behaviour of the glasses, and on the properties of the glass-ceramic materials, were also studied, with relevance to their application as sealants in SOFC. It was observed that the enstatite-based glasses/glass-ceramics did not exhibit the characteristics required for use as sealing materials in SOFC, while the better properties exhibited by the diopside-based glass-ceramics qualified them for further studies in this type of application. Besides investigating the suitability of clinopyroxene-based glass-ceramics as sealants, this thesis also aims to study the influence of nucleating agents on the bulk nucleation of the resulting diopside-based glass-ceramics, in order to qualify them as potential host materials for radioactive nuclear waste.

  9. Finding a single point of truth

    Energy Technology Data Exchange (ETDEWEB)

    Sokolov, S.; Thijssen, H. [Autodesk Inc, Toronto, ON (Canada); Laslo, D.; Martin, J. [Autodesk Inc., San Rafael, CA (United States)

    2010-07-01

    Electric utilities collect large volumes of data at every level of their business, including SCADA, Smart Metering and Smart Grid initiatives, LIDAR and other 3D imagery surveys. Different types of database systems are used to store the information, rendering data flow within the utility business process extremely complicated. The industry trend has been to endure redundancy of data input and maintenance of multiple copies of the same data across different solution data sets. Efforts have been made to improve the situation with point-to-point interfaces, but with the tools and solutions available today, a single point of truth can be achieved. Consolidated and validated data can be published into a data warehouse at the right point in the process, making the information available to all other enterprise systems and solutions. This paper explained how the single point of truth spatial data warehouse and process automation services can be configured to streamline the flow of data within the utility business process using the initiate-plan-execute-close (IPEC) utility workflow model. The paper first discussed geospatial challenges faced by utilities and then presented the approach and technology aspects. It was concluded that adoption of systems and solutions that can function with and be controlled by the IPEC workflow can provide significant improvement for utility operations, particularly if those systems are coupled with the spatial data warehouse that reflects a single point of truth. 6 refs., 3 figs.

  10. Revisited

    DEFF Research Database (Denmark)

    Tegtmeier, Silke; Meyer, Verena; Pakura, Stefanie

    2017-01-01

    were captured when they described entrepreneurs. Therefore, this paper aims to revisit gender role stereotypes among young adults. Design/methodology/approach: To measure stereotyping, participants were asked to describe entrepreneurs in general and either women or men in general. The Schein......Purpose: Entrepreneurship is shaped by a male norm, which has been widely demonstrated in qualitative studies. The authors strive to complement these methods by a quantitative approach. First, gender role stereotypes were measured in entrepreneurship. Second, the explicit notions of participants......: The images of men and entrepreneurs show a high and significant congruence (r = 0.803), mostly in those adjectives that are untypical for men and entrepreneurs. The congruence of women and entrepreneurs was low (r = 0.152) and insignificant. Contrary to the participants’ beliefs, their explicit notions did...

  11. Program computes single-point failures in critical system designs

    Science.gov (United States)

    Brown, W. R.

    1967-01-01

    Computer program analyzes the designs of critical systems that will either prove the design is free of single-point failures or detect each member of the population of single-point failures inherent in a system design. This program should find application in the checkout of redundant circuits and digital systems.
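
The brute-force core of such a check can be sketched as follows: fail each component alone and ask whether the system still functions (a minimal illustration of the idea, not the program described in the record; component names and the example system are hypothetical):

```python
def single_point_failures(components, system_ok):
    """Return the components whose individual failure alone fails the
    system. `system_ok` maps a dict {component: working?} to True/False."""
    spf = []
    for c in components:
        state = {name: True for name in components}
        state[c] = False  # fail exactly this one component
        if not system_ok(state):
            spf.append(c)
    return spf

# Example: redundant pumps A/B in parallel, feeding a single valve V in series.
def system_ok(s):
    return (s["pumpA"] or s["pumpB"]) and s["valveV"]

print(single_point_failures(["pumpA", "pumpB", "valveV"], system_ok))  # prints ['valveV']
```

An empty result proves the modelled design free of single-point failures; a non-empty one enumerates them, as in the checkout of redundant circuits described above.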

  12. Multi Stage Strategies for Single Point Incremental Forming of a Cup

    DEFF Research Database (Denmark)

    Skjødt, Martin; Bay, Niels; Endelt, Benny

    2008-01-01

    A five stage forming strategy for forming of a circular cylindrical cup with a height/radius ratio of one is presented. Geometrical relations are discussed and theoretical strains are calculated. The influence of forming direction (upwards or downwards) is investigated for the second stage...... comparing explicit FE analysis with experiments. Good agreement is found between calculated and measured thickness distribution, overall geometry and strains. Using the proposed multi stage strategy it is shown possible to produce a cup with a height close to the radius and side parallel to the symmetry...

  13. Strain Paths and Fractures in Rotational Symmetric Multi Stage Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Skjødt, Martin; Silva, M.B.; Martins, P.A.F.

    2008-01-01

    A multi stage strategy, which allows forming of SPIF parts with vertical walls, is investigated with emphasis on strain paths and fracture strains. Whereas downward movement of the tool pin results in deformation close to plane strain, an upward moving tool results in biaxial strains. A good correl...

  14. Laser-induced single point nanowelding of silver nanowires

    International Nuclear Information System (INIS)

    Dai, Shuowei; Li, Qiang; Liu, Guoping; Yang, Hangbo; Yang, Yuanqing; Zhao, Ding; Wang, Wei; Qiu, Min

    2016-01-01

    Nanowelding of nanomaterials opens up an emerging set of applications in transparent conductors, thin-film solar cells, nanocatalysis, cancer therapy, and nanoscale patterning. Single point nanowelding (SPNW) is highly desirable for building complex nanostructures. In this letter, the precise control of SPNW of silver nanowires is explored in depth, where the nanowelding is laser-induced through the plasmonic-resonance-enhanced photothermal effect. It is shown that the illumination position is a critical factor in the nanowelding process. As an example of performance enhancement, the output at the wire end of a plasmonic nanocoupler can be increased by 65% after welding. Thus, the single point nanowelding technique shows great potential for high-performance electronic and photonic devices based on nanowires, such as nanoelectronic circuits and plasmonic nanodevices.

  15. Laser-induced single point nanowelding of silver nanowires

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Shuowei; Li, Qiang, E-mail: qiangli@zju.edu.cn; Liu, Guoping; Yang, Hangbo; Yang, Yuanqing; Zhao, Ding; Wang, Wei; Qiu, Min, E-mail: minqiu@zju.edu.cn [State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027 (China)

    2016-03-21

    Nanowelding of nanomaterials opens up an emerging set of applications in transparent conductors, thin-film solar cells, nanocatalysis, cancer therapy, and nanoscale patterning. Single point nanowelding (SPNW) is highly desirable for building complex nanostructures. In this letter, the precise control of SPNW of silver nanowires is explored in depth, where the nanowelding is laser-induced through the plasmonic-resonance-enhanced photothermal effect. It is shown that the illumination position is a critical factor in the nanowelding process. As an example of performance enhancement, the output at the wire end of a plasmonic nanocoupler can be increased by 65% after welding. Thus, the single point nanowelding technique shows great potential for high-performance electronic and photonic devices based on nanowires, such as nanoelectronic circuits and plasmonic nanodevices.

  16. Analysis on Single Point Vulnerabilities of Plant Control System

    International Nuclear Information System (INIS)

    Chi, Moon Goo; Lee, Eun Chan; Bae, Yeon Kyoung

    2011-01-01

    The Plant Control System (PCS) is a system that controls pumps, valves, dampers, etc. in nuclear power plants with an OPR-1000 design. A failure or spurious actuation of critical components in the PCS can result in unexpected plant trips or transients. From this viewpoint, single point vulnerabilities are evaluated in detail using failure mode and effects analyses (FMEA) and fault tree analyses (FTA). This evaluation demonstrates that the PCS has many vulnerable components, and the analysis results are provided to OPR-1000 plants for reliability improvements that can reduce these vulnerabilities.

  17. Analysis on Single Point Vulnerabilities of Plant Control System

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Moon Goo; Lee, Eun Chan; Bae, Yeon Kyoung [Korea Hydro and Nuclear Power Co., Daejeon (Korea, Republic of)

    2011-08-15

    The Plant Control System (PCS) is a system that controls pumps, valves, dampers, etc. in nuclear power plants with an OPR-1000 design. A failure or spurious actuation of critical components in the PCS can result in unexpected plant trips or transients. From this viewpoint, single point vulnerabilities are evaluated in detail using failure mode and effects analyses (FMEA) and fault tree analyses (FTA). This evaluation demonstrates that the PCS has many vulnerable components, and the analysis results are provided to OPR-1000 plants for reliability improvements that can reduce these vulnerabilities.

  18. Securing Single Points of Compromise (SPoC)

    Energy Technology Data Exchange (ETDEWEB)

    Belangia, David Warren [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-06-25

    Securing the Single Points of Compromise that provide central services to the institution’s environment is paramount to success when trying to protect the business. (Fisk, 2014) Time Based Security mandates protection (erecting and ensuring effective controls) that last longer than the time to detect and react to a compromise. When enterprise protections fail, providing additional layered controls for these central services provides more time to detect and react. While guidance is readily available for securing the individual critical asset, protecting these assets as a group is not often discussed. Using best business practices to protect these resources as individual assets while leveraging holistic defenses for the group increases the opportunity to maximize protection time, allowing detection and reaction time for the SPoCs that is commensurate with the inherent risk of these centralized services.
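
The Time Based Security rule cited above (protection must last longer than detection plus reaction) can be stated as a one-line check; the function name and the time units are illustrative:

```python
def exposure_window(p_t, d_t, r_t):
    """Time Based Security: the system is protected when protection time
    p_t exceeds detection time d_t plus reaction time r_t; otherwise the
    shortfall is the exposure window (same time units throughout)."""
    return max(0, d_t + r_t - p_t)

exposure_window(10, 4, 3)  # 0: protection outlasts detection + reaction
exposure_window(5, 4, 3)   # 2: exposed for 2 time units
```

Layered controls on a SPoC effectively raise p_t for the central services, buying the extra detection and reaction time the passage describes.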

  19. Single Point Vulnerability Analysis of Automatic Seismic Trip System

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Seo Bin; Chung, Soon Il; Lee, Yong Suk [FNC Technology Co., Yongin (Korea, Republic of); Choi, Byung Pil [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    Single Point Vulnerability (SPV) analysis is a process used to identify individual equipment whose failure alone will result in a reactor trip, turbine generator failure, or a power reduction of more than 50%. The Automatic Seismic Trip System (ASTS) is a newly installed system to ensure the safety of the plant when an earthquake occurs. Since this system directly shuts down the reactor, the failure or malfunction of its components can cause a reactor trip more frequently than other systems. Therefore, an SPV analysis of the ASTS is necessary to maintain its essential performance. In this study, failure mode and effects analysis (FMEA) and fault tree analysis (FTA) were performed to select SPV equipment of the ASTS. The D/O, D/I and A/I cards, the seismic sensor, and the trip relay each have an effect on the reactor trip, but no single failure among them will cause a reactor trip. In conclusion, the ASTS is excluded as an SPV. These results can be utilized as basis data for ways to enhance facility reliability, such as design modification and improvement of preventive maintenance procedures.

  20. Single Point Vulnerability Analysis of Automatic Seismic Trip System

    International Nuclear Information System (INIS)

    Oh, Seo Bin; Chung, Soon Il; Lee, Yong Suk; Choi, Byung Pil

    2016-01-01

    Single Point Vulnerability (SPV) analysis is a process used to identify individual equipment whose failure alone will result in a reactor trip, turbine generator failure, or a power reduction of more than 50%. The Automatic Seismic Trip System (ASTS) is a newly installed system to ensure the safety of the plant when an earthquake occurs. Since this system directly shuts down the reactor, the failure or malfunction of its components can cause a reactor trip more frequently than other systems. Therefore, an SPV analysis of the ASTS is necessary to maintain its essential performance. In this study, failure mode and effects analysis (FMEA) and fault tree analysis (FTA) were performed to select SPV equipment of the ASTS. The D/O, D/I and A/I cards, the seismic sensor, and the trip relay each have an effect on the reactor trip, but no single failure among them will cause a reactor trip. In conclusion, the ASTS is excluded as an SPV. These results can be utilized as basis data for ways to enhance facility reliability, such as design modification and improvement of preventive maintenance procedures.

  1. Lithium-Ion Cell Fault Detection by Single-Point Impedance Diagnostic and Degradation Mechanism Validation for Series-Wired Batteries Cycled at 0 °C

    Directory of Open Access Journals (Sweden)

    Corey T. Love

    2018-04-01

    The utility of a single-point impedance-based technique to monitor the state-of-health of a pack of four 18650 lithium-ion cells wired in series (4S) was demonstrated in a previous publication. This work broadens the applicability of the single-point monitoring technique to identify temperature-induced faults within 4S packs at 0 °C under two distinct discharge cut-off thresholds: individual cell cut-off and pack voltage cut-off. The results show how the single-point technique applied to a 4S pack can identify cell faults induced by low-temperature degradation when plotted on a unique state-of-health map. Cell degradation is validated through an extensive incremental capacity technique to quantify capacity loss due to low-temperature cycling and to investigate the underpinnings of cell failure.
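
The incremental capacity technique mentioned above differentiates capacity with respect to voltage along the discharge curve. A minimal central-difference sketch (illustrative only; real measurements would be smoothed before differentiation):

```python
def incremental_capacity(voltage, capacity):
    """Central-difference dQ/dV along a measured discharge curve.
    `voltage` and `capacity` are equal-length lists of samples."""
    dqdv = []
    for i in range(1, len(voltage) - 1):
        dv = voltage[i + 1] - voltage[i - 1]
        dq = capacity[i + 1] - capacity[i - 1]
        dqdv.append(dq / dv)
    return dqdv
```

Peaks in the dQ/dV curve shift or shrink as cells degrade, which is what allows capacity loss from low-temperature cycling to be quantified.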

  2. Mechanism of DNA–binding loss upon single-point mutation in p53

    Indian Academy of Sciences (India)

    PRAKASH KUMAR

    loss in protein−DNA binding affinity and specificity upon single point ..... we computed the root–mean–square–deviations (RMSDs) of each residue's ...... Petsko G and Ringe D 1984 Fluctuations in protein structure from X-ray diffraction; Annu.

  3. Single Point Adjustments: A New Definition with Examples. Acquisition Review Quarterly, Fall 2001

    National Research Council Canada - National Science Library

    Bachman, David

    2002-01-01

    A single point adjustment (SPA) is made when a contract's existing cost and/or schedule variances are set to zero and all the remaining work is replanned with the goal of completing the project on schedule and on budget...

  4. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    Science.gov (United States)

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March to June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point relative humidity measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) more than ±10%, and 6 (5.3%) more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
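
The r and β values reported above are the Pearson correlation and the least-squares slope between single-point and data-logger measurements; for reference, both can be computed with a short stdlib-only sketch:

```python
from statistics import mean

def pearson_and_slope(x, y):
    """Pearson correlation r and least-squares slope (beta) of y on x,
    for equal-length sequences of paired measurements."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / (sxx * syy) ** 0.5
    beta = sxy / sxx
    return r, beta
```

Here x would hold the time-centered single-point measures and y the corresponding data-logger means (or vice versa, matching whichever regression direction the study used).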

  5. Planning Through Incrementalism

    Science.gov (United States)

    Lasserre, Ph.

    1974-01-01

    An incremental model of decisionmaking is discussed and compared with the Comprehensive Rational Approach. A model of reconciliation between the two approaches is proposed, and examples are given in the field of economic development and educational planning. (Author/DN)

  6. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...

  7. New organophilic kaolin clays based on single-point grafted 3-aminopropyl dimethylethoxysilane.

    Science.gov (United States)

    Zaharia, A; Perrin, F-X; Teodorescu, M; Radu, A-L; Iordache, T-V; Florea, A-M; Donescu, D; Sarbu, A

    2015-10-14

    In this study, the organophilization procedure of kaolin rocks with a monofunctional ethoxysilane, 3-aminopropyl dimethylethoxysilane (APMS), is described for the first time. The two-step organophilization procedure, including dimethyl sulfoxide intercalation and APMS grafting onto the inner hydroxyl surface of the kaolinite (the mineral) layers, was tested for three sources of kaolin rocks (KR, KC and KD) with various morphologies and kaolinite compositions. The load of APMS in the kaolinite interlayer space was higher than that of 3-aminopropyl triethoxysilane (APTS) due to the single-point grafting nature of the organophilization reaction. A higher long-distance order of kaolinite layers with low stacking was obtained for APMS, due to a more controllable organophilization reaction. Last but not least, solid-state (29)Si-NMR tests confirmed the single-point grafting mechanism of APMS, corroborating monodentate fixation on the kaolinite hydroxyl facets, with no contribution of bidentate or tridentate fixation as observed for APTS.

  8. Controllable resonant tunnelling through single-point potentials: A point triode

    International Nuclear Information System (INIS)

    Zolotaryuk, A.V.; Zolotaryuk, Yaroslav

    2015-01-01

    A zero-thickness limit of three-layer heterostructures under two externally applied bias voltages, one of which is supposed to be a gate parameter, is studied. As a result, an effect of controllable resonant tunnelling of electrons through single-point potentials is shown to exist. The limiting structure may therefore be termed a “point triode” and considered a new object in the theory of point interactions. The simple limiting analytical expressions adequately describe the resonant behaviour of the transistor with realistic parameter values, and thus one can conclude that the zero-range limit of multi-layer structures may be used in fabricating nanodevices. The difference between resonant tunnelling across single-point potentials and the Fabry–Pérot interference effect is also emphasized. - Highlights: • The zero-thickness limit of three-layer heterostructures is described in terms of point interactions. • The effect of resonant tunnelling through these single-point potentials is established. • The resonant tunnelling is shown to be controlled by a gate voltage

  9. Lakatos Revisited.

    Science.gov (United States)

    Court, Deborah

    1999-01-01

    Revisits and reviews Imre Lakatos' ideas on "Falsification and the Methodology of Scientific Research Programmes." Suggests that Lakatos' framework offers an insightful way of looking at the relationship between theory and research that is relevant not only for evaluating research programs in theoretical physics, but in the social…

  10. An ultra-precision tool nanoindentation instrument for replication of single point diamond tool cutting edges

    Science.gov (United States)

    Cai, Yindi; Chen, Yuan-Liu; Xu, Malu; Shimizu, Yuki; Ito, So; Matsukuma, Hiraku; Gao, Wei

    2018-05-01

    Precision replication of the diamond tool cutting edge is required for non-destructive tool metrology. This paper presents an ultra-precision tool nanoindentation instrument designed and constructed for replication of the cutting edge of a single point diamond tool onto a selected soft metal workpiece by precisely indenting the tool cutting edge into the workpiece surface. The instrument has the ability to control the indentation depth with a nanometric resolution, enabling the replication of tool cutting edges with high precision. The motion of the diamond tool along the indentation direction is controlled by the piezoelectric actuator of a fast tool servo (FTS). An integrated capacitive sensor of the FTS is employed to detect the displacement of the diamond tool. The soft metal workpiece is attached to an aluminum cantilever whose deflection is monitored by another capacitive sensor, referred to as an outside capacitive sensor. The indentation force and depth can be accurately evaluated from the diamond tool displacement, the cantilever deflection and the cantilever spring constant. Experiments were carried out by replicating the cutting edge of a single point diamond tool with a nose radius of 2.0 mm on a copper workpiece surface. The profile of the replicated tool cutting edge was measured using an atomic force microscope (AFM). The effectiveness of the instrument in precision replication of diamond tool cutting edges is well-verified by the experimental results.
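
The force and depth evaluation described above follows directly from Hooke's law on the cantilever; a minimal sketch with illustrative numbers (not values from the paper):

```python
def indentation_force_and_depth(tool_displacement, cantilever_deflection, k_cantilever):
    """Indentation force from the cantilever spring constant (Hooke's law),
    and net indentation depth as tool displacement minus cantilever
    deflection. Displacements in metres, k_cantilever in N/m."""
    force = k_cantilever * cantilever_deflection
    depth = tool_displacement - cantilever_deflection
    return force, depth

# 100 nm of FTS tool travel, 20 nm cantilever deflection, 50 N/m spring constant:
force, depth = indentation_force_and_depth(100e-9, 20e-9, 50.0)
```

In the instrument, the two capacitive sensors supply the two displacement readings, so force and depth come out of exactly this pair of relations.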

  11. FUNDAMENTAL ASPECTS OF EPISODIC ACCRETION CHEMISTRY EXPLORED WITH SINGLE-POINT MODELS

    International Nuclear Information System (INIS)

    Visser, Ruud; Bergin, Edwin A.

    2012-01-01

    We explore a set of single-point chemical models to study the fundamental chemical aspects of episodic accretion in low-mass embedded protostars. Our goal is twofold: (1) to understand how the repeated heating and cooling of the envelope affects the abundances of CO and related species; and (2) to identify chemical tracers that can be used as a novel probe of the timescales and other physical aspects of episodic accretion. We develop a set of single-point models that serve as a general prescription for how the chemical composition of a protostellar envelope is altered by episodic accretion. The main effect of each accretion burst is to drive CO ice off the grains in part of the envelope. The duration of the subsequent quiescent stage (before the next burst hits) is similar to or shorter than the freeze-out timescale of CO, allowing the chemical effects of a burst to linger long after the burst has ended. We predict that the resulting excess of gas-phase CO can be observed with single-dish or interferometer facilities as evidence of an accretion burst in the past 10^3-10^4 yr.
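
The freeze-out timescale invoked above is the inverse of the rate at which gas-phase CO molecules collide with and stick to dust grains; schematically (a generic gas-grain relation, not a formula quoted from this record; n_gr, sigma_gr and v_th are the grain number density, grain cross-section and thermal speed of CO):

```latex
t_{\mathrm{freeze}} \simeq \frac{1}{n_{\mathrm{gr}}\,\sigma_{\mathrm{gr}}\,v_{\mathrm{th}}}
```

Since the grain number density scales with the gas density, t_freeze is long in the sparse outer envelope and short near the protostar, which is why the chemical signature of a burst can outlive the burst itself.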

  12. The validity of multiphase DNS initialized on the basis of single-point statistics

    Science.gov (United States)

    Subramaniam, Shankar

    1999-11-01

    A study of the point-process statistical representation of a spray reveals that single-point statistical information contained in the droplet distribution function (ddf) is related to a sequence of single surrogate-droplet pdf's, which are in general different from the physical single-droplet pdf's. The results of this study have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the average number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also, the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets.

  13. Design of an omnidirectional single-point photodetector for large-scale spatial coordinate measurement

    Science.gov (United States)

    Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei

    2017-10-01

    In high-precision and large-scale coordinate measurement, one commonly used approach to determine the coordinates of a target point is utilizing the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light-receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report a design of an OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed in a geometric model and optimized with LightTools software, which enables the reflection of a wide-angle incident light beam onto the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed utilizing the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.
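
The spatial trigonometric relationship mentioned above amounts, in the planar case, to intersecting bearing lines from two transmitter stations; a minimal 2-D sketch of that geometry (illustrative only, not the system's actual solver):

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect bearing rays from two stations in 2-D.
    p1, p2: (x, y) station positions; theta1, theta2: bearing angles
    in radians measured from the +x axis."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via 2x2 Cramer's rule.
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) + d2[0] * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Stations at (0,0) and (2,0) sighting the target at 45 and 135 degrees:
x, y = triangulate((0.0, 0.0), math.radians(45), (2.0, 0.0), math.radians(135))
# (x, y) is approximately (1.0, 1.0)
```

Real systems extend this to 3-D and to more than two stations, solving the over-determined system in a least-squares sense; the OSPD's wide reception angle is what lets the target see several stations at once.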

  14. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2005-01-01

    This volume is the first of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald during the period March 9 – 22, 2003, and was supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present first volume contains the following lectures: "Lévy Processes in Euclidean Spaces and Groups" by David Applebaum, "Locally Compact Quantum Groups" by Johan Kustermans, "Quantum Stochastic Analysis" by J. Martin Lindsay, and "Dilations, Cocycles and Product Systems" by B.V. Rajarama Bhat.

  15. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2006-01-01

    This is the second of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald in March, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present second volume contains the following lectures: "Random Walks on Finite Quantum Groups" by Uwe Franz and Rolf Gohm, "Quantum Markov Processes and Applications in Physics" by Burkhard Kümmerer, "Classical and Free Infinite Divisibility and Lévy Processes" by Ole E. Barndorff-Nielsen and Steen Thorbjornsen, and "Lévy Processes on Quantum Groups and Dual Groups" by Uwe Franz.

  16. Efficient incremental relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2013-07-01

    We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers deep fading. We calculate the packet error rate of the proposed efficient incremental relaying scheme with both amplify-and-forward and decode-and-forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.

  17. Sensemaking Revisited

    DEFF Research Database (Denmark)

    Holt, Robin; Cornelissen, Joep

    2014-01-01

    We critique and extend theory on organizational sensemaking around three themes. First, we investigate sense arising non-productively and so beyond any instrumental relationship with things; second, we consider how sense is experienced through mood as well as our cognitive skills of manipulation … research by revisiting Weick’s seminal reading of Norman Maclean’s book surrounding the tragic events of a 1949 forest fire at Mann Gulch, USA.

  18. Analysis of payload bay magnetic fields due to dc power multipoint and single point ground configurations

    Science.gov (United States)

    Lawton, R. M.

    1976-01-01

    An analysis of magnetic fields in the Orbiter Payload Bay resulting from the present grounding configuration (structure return) was presented, and the amount of improvement that would result from installing wire returns for the three dc power buses was determined. The ac and dc magnetic fields at five points in a cross section of the bay are calculated for both grounding configurations. The Y and Z components of the field at each point are derived in terms of a constant coefficient and the current amplitude of each bus. The assumed dc load is 100 A for each bus. The ac noise current used is a spectrum 6 dB higher than the Orbiter equipment limit for narrowband conducted emissions. It was concluded that installing return wiring to provide a single point ground for the dc buses in the Payload Bay would reduce the ac and dc magnetic field intensity by approximately 30 dB.

  19. Combined contactless conductometric, photometric, and fluorimetric single point detector for capillary separation methods.

    Science.gov (United States)

    Ryvolová, Markéta; Preisler, Jan; Foret, Frantisek; Hauser, Peter C; Krásenský, Pavel; Paull, Brett; Macka, Mirek

    2010-01-01

    This work for the first time combines three on-capillary detection methods, namely capacitively coupled contactless conductometric (C(4)D), photometric (PD), and fluorimetric (FD) detection, in a single (identical) point of detection cell, allowing concurrent measurements at a single point of detection for use in capillary electrophoresis, capillary electrochromatography, and capillary/nano-liquid chromatography. The novel design is based on a standard 6.3 mm i.d. fiber-optic SMA adapter with a drilled opening for the separation capillary to pass through, to which two concentrically positioned C(4)D detection electrodes with a detection gap of 7 mm were added, one on each side, acting simultaneously as capillary guides. The optical fibers in the SMA adapter were used for the photometric signal (absorbance), and another optical fiber at a 45 degree angle to the capillary was used to collect the emitted light for FD. Light-emitting diodes (255 and 470 nm) were used as light sources for the PD and FD detection modes. LOD values were determined under flow-injection conditions to exclude any stacking effects: with the 470 nm LED, the limits of detection (LODs) for FD and PD were 1 × 10⁻⁸ mol/L for fluorescein and 6 × 10⁻⁶ mol/L for tartrazine, respectively, and the LOD for C(4)D was 5 × 10⁻⁷ mol/L for magnesium chloride. The advantage of three different detection signals at a single point is demonstrated in capillary electrophoresis using model mixtures and samples, including a mixture of fluorescent and nonfluorescent dyes and common ions, underivatized amino acids, and a fluorescently labeled digest of bovine serum albumin.

  20. Hysteresis compensation for piezoelectric actuators in single-point diamond turning

    Science.gov (United States)

    Wang, Haifeng; Hu, Dejin; Wan, Daping; Liu, Hongbin

    2006-02-01

    In recent years, interest has been growing in fast tool servo (FTS) systems to increase the capability of existing single-point diamond turning machines. Although the piezoelectric actuator is the most common basis of FTS systems due to its high stiffness, accuracy and bandwidth, nonlinearity in piezoceramics evidently limits both the static and dynamic performance of piezoelectric-actuated control systems. To compensate for the nonlinear hysteresis behavior of piezoelectric actuators, a hybrid model coupling a Preisach model with a feedforward neural network (FNN) is described. Since the training of the FNN does not require a special calibration sequence, on-line identification and real-time implementation are possible with general operating data of a specific piezoelectric actuator. To describe the rate-dependent behavior of piezoelectric actuators, a hybrid dynamic model was developed to predict the response of piezoelectric actuators over a wider range of input frequencies. Experimental results show that a maximal error of less than 3% was achieved by this dynamic model.

  1. A rapid and robust gradient measurement technique using dynamic single-point imaging.

    Science.gov (United States)

    Jang, Hyungseok; McMillan, Alan B

    2017-09-01

    We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of the k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on the gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space positions that can be used for image reconstruction. The gradient measurement technique can also be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time-invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  2. Improving access in gastroenterology: The single point of entry model for referrals

    Science.gov (United States)

    Novak, Kerri L; Van Zanten, Sander Veldhuyzen; Pendharkar, Sachin R

    2013-01-01

    In 2005, a group of academic gastroenterologists in Calgary (Alberta) adopted a centralized referral intake system known as central triage. This system provided a single point of entry model (SEM) for referrals rather than the traditional system of individual practitioners managing their own referrals and queues. The goal of central triage was to improve wait times and referral management. In 2008, a similar system was developed in Edmonton at the University of Alberta Hospital (Edmonton, Alberta). SEMs have subsequently been adopted by numerous subspecialties throughout Alberta. There are many benefits of SEMs including improved access and reduced wait times. Understanding and measuring complex patient flow systems is key to improving access, and centralized intake systems provide an opportunity to better understand total demand and system bottlenecks. This knowledge is particularly important for specialties such as gastroenterology (GI), in which demand exceeds supply. While it is anticipated that SEMs will reduce wait times for GI care in Canada, the lack of sufficient resources to meet the demand for GI care necessitates additional strategies. PMID:24040629

  3. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Directory of Open Access Journals (Sweden)

    Lei Jia

    Full Text Available Thermostability of protein point mutations is a common concern in protein engineering. An application which predicts the thermostability of mutants can help guide the decision making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that contains the experimentally measured thermostability of thousands of protein mutants. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information about the point mutations, and amino acid physical properties were used to build thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used to build the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutation technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to the accuracy of the prediction models.
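
    As a minimal illustration of one of the classifiers named in the abstract, the sketch below implements a K nearest neighbor vote over two mutation features (a ddG-like value and a dTm-like value). The training points and feature choice are invented for illustration; they are not from ProTherm or the paper's models.

```python
from math import dist

# Hypothetical training set: (features, label) pairs for single point
# mutations, where features = (ddG-like value, dTm-like value) and
# label = +1 (stabilizing) or -1 (destabilizing). All values are invented.
TRAIN = [
    ((-1.8, -4.0), -1),
    ((-0.9, -2.5), -1),
    ((-0.2, -0.5), -1),
    ((0.3, 0.8), +1),
    ((0.7, 1.5), +1),
    ((1.4, 3.2), +1),
]

def knn_predict(x, k=3):
    """Binary K nearest neighbor classification: majority vote among
    the k training mutations closest to x in feature space."""
    nearest = sorted(TRAIN, key=lambda s: dist(s[0], x))[:k]
    votes = sum(label for _, label in nearest)
    return +1 if votes > 0 else -1

print(knn_predict((1.0, 2.0)))    # -> 1 (stabilizing)
print(knn_predict((-1.0, -3.0)))  # -> -1 (destabilizing)
```

    In the paper's setting the features would instead come from Rosetta energy calculations and structural descriptors rather than hand-picked numbers.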

  4. Criteria for evaluating protection from single points of failure for partially expanded fault trees

    International Nuclear Information System (INIS)

    Aswani, D.; Badreddine, B.; Malone, M.; Gauthier, G.; Proietty, J.

    2008-01-01

    Fault tree analysis (FTA) is a technique that describes the combinations of events in a system which result in an undesirable outcome. FTA is used as a tool to quantitatively assess a system's probability of an undesirable outcome. Time constraints from concept to production in modern engineering often limit the opportunity for a thorough statistical analysis of a system. Furthermore, when undesirable outcomes such as hazard to human(s) are considered, it becomes difficult to identify strict statistical targets for what is acceptable. Consequently, when hazard to human(s) is concerned, a common design target is to protect the system from single points of failure (SPOF), which means that no failure mode caused by a single event, concern, or error has a critical consequence for the system. Such a design target is common with 'by-wire' systems. FTA can be used to verify whether a system is protected from SPOF. In this paper, sufficient criteria for evaluating protection from SPOF for partially expanded fault trees are proposed, along with proof. The proposed criteria consider potential interactions between the lowest drawn events of a partial fault tree expansion, which otherwise easily lead to an overly optimistic analysis of protection from SPOF. The analysis is limited to fault trees that are coherent and static.

  5. An application of eddy current damping effect on single point diamond turning of titanium alloys

    Science.gov (United States)

    Yip, W. S.; To, S.

    2017-11-01

    Titanium alloys such as Ti6Al4V (TC4) are widely used in many industries. They have superior material properties, including an excellent strength-to-weight ratio and corrosion resistance. However, they are regarded as difficult-to-cut materials; serious tool wear, a high level of cutting vibration and low surface integrity are always involved in machining processes, especially in ultra-precision machining (UPM). In this paper, a novel hybrid machining technology using the eddy current damping effect is introduced into UPM for the first time to suppress machining vibration and improve the machining performance of titanium alloys. A magnetic field was superimposed on the samples during single point diamond turning (SPDT) by placing the samples between two permanent magnets. When the titanium alloys were rotated within the stationary magnetic field in SPDT, an eddy current was generated inside the titanium alloys. The eddy current generated its own magnetic field opposing the external magnetic field, producing a repulsive force that compensated for the machining vibration induced by the turning process. The experimental results showed a remarkable improvement in cutting force variation, a significant reduction in adhesive tool wear, and extremely long chip formation in comparison with normal SPDT of titanium alloys, suggesting that the eddy current damping effect enhances the machinability of titanium alloys. This is the first application of the eddy current damping effect in the area of UPM, and it delivers outstanding machining performance.

  6. An application of eddy current damping effect on single point diamond turning of titanium alloys

    International Nuclear Information System (INIS)

    Yip, W S; To, S

    2017-01-01

    Titanium alloys such as Ti6Al4V (TC4) are widely used in many industries. They have superior material properties, including an excellent strength-to-weight ratio and corrosion resistance. However, they are regarded as difficult-to-cut materials; serious tool wear, a high level of cutting vibration and low surface integrity are always involved in machining processes, especially in ultra-precision machining (UPM). In this paper, a novel hybrid machining technology using the eddy current damping effect is introduced into UPM for the first time to suppress machining vibration and improve the machining performance of titanium alloys. A magnetic field was superimposed on the samples during single point diamond turning (SPDT) by placing the samples between two permanent magnets. When the titanium alloys were rotated within the stationary magnetic field in SPDT, an eddy current was generated inside the titanium alloys. The eddy current generated its own magnetic field opposing the external magnetic field, producing a repulsive force that compensated for the machining vibration induced by the turning process. The experimental results showed a remarkable improvement in cutting force variation, a significant reduction in adhesive tool wear, and extremely long chip formation in comparison with normal SPDT of titanium alloys, suggesting that the eddy current damping effect enhances the machinability of titanium alloys. This is the first application of the eddy current damping effect in the area of UPM, and it delivers outstanding machining performance. (paper)

  7. Improving Access in Gastroenterology: The Single Point of Entry Model for Referrals

    Directory of Open Access Journals (Sweden)

    Kerri L Novak

    2013-01-01

    Full Text Available In 2005, a group of academic gastroenterologists in Calgary (Alberta) adopted a centralized referral intake system known as central triage. This system provided a single point of entry model (SEM) for referrals rather than the traditional system of individual practitioners managing their own referrals and queues. The goal of central triage was to improve wait times and referral management. In 2008, a similar system was developed in Edmonton at the University of Alberta Hospital (Edmonton, Alberta). SEMs have subsequently been adopted by numerous subspecialties throughout Alberta. There are many benefits of SEMs including improved access and reduced wait times. Understanding and measuring complex patient flow systems is key to improving access, and centralized intake systems provide an opportunity to better understand total demand and system bottlenecks. This knowledge is particularly important for specialties such as gastroenterology (GI), in which demand exceeds supply. While it is anticipated that SEMs will reduce wait times for GI care in Canada, the lack of sufficient resources to meet the demand for GI care necessitates additional strategies.

  8. Improving access in gastroenterology: the single point of entry model for referrals.

    Science.gov (United States)

    Novak, Kerri; Veldhuyzen Van Zanten, Sander; Pendharkar, Sachin R

    2013-11-01

    In 2005, a group of academic gastroenterologists in Calgary (Alberta) adopted a centralized referral intake system known as central triage. This system provided a single point of entry model (SEM) for referrals rather than the traditional system of individual practitioners managing their own referrals and queues. The goal of central triage was to improve wait times and referral management. In 2008, a similar system was developed in Edmonton at the University of Alberta Hospital (Edmonton, Alberta). SEMs have subsequently been adopted by numerous subspecialties throughout Alberta. There are many benefits of SEMs including improved access and reduced wait times. Understanding and measuring complex patient flow systems is key to improving access, and centralized intake systems provide an opportunity to better understand total demand and system bottlenecks. This knowledge is particularly important for specialties such as gastroenterology (GI), in which demand exceeds supply. While it is anticipated that SEMs will reduce wait times for GI care in Canada, the lack of sufficient resources to meet the demand for GI care necessitates additional strategies.

  9. Legislative Bargaining and Incremental Budgeting

    OpenAIRE

    Dhammika Dharmapala

    2002-01-01

    The notion of 'incrementalism', formulated by Aaron Wildavsky in the 1960s, has been extremely influential in the public budgeting literature. In essence, it entails the claim that legislators engaged in budgetary policymaking accept past allocations and decide only on the allocation of increments to revenue. Wildavsky explained incrementalism with reference to the cognitive limitations of lawmakers and their desire to reduce conflict. This paper uses a legislative bargaining framework to u...

  10. Space nuclear reactor concepts for avoidance of a single point failure

    International Nuclear Information System (INIS)

    El-Genk, M. S.

    2007-01-01

    This paper presents three space nuclear reactor concepts for future exploration missions requiring electrical power of tens to hundreds of kW for 7-10 years. These concepts avoid a single point failure in reactor cooling, and they could be used with a host of energy conversion technologies. The first is a lithium or sodium heat-pipe-cooled reactor. The heat pipes operate at a fraction of their prevailing capillary or sonic limit; thus, when a number of heat pipes fail, those in the adjacent modules remove their heat load, keeping the reactor core adequately cooled. The second is a reactor with a circulating liquid metal coolant. The reactor core is divided into six identical sectors, each with a separate energy conversion loop. The sectors in the reactor core are neutronically coupled, but hydraulically decoupled. Thus, when a sector experiences a loss of coolant, the fission power generated in it is removed by the circulating coolant in the adjacent sectors. In this case, however, the reactor fission power would have to decrease to avoid exceeding the design temperature limits in the sector with a failed loop. These two reactor concepts can be used with energy conversion technologies such as advanced thermoelectric (TE) converters, free piston Stirling engines (FPSE), and alkali metal thermal-to-electric conversion (AMTEC). Gas cooled reactors are a better choice for use with closed Brayton cycle (CBC) engines, as in the third reactor concept presented in the paper. It has a sectored core that is cooled with a binary He-Xe mixture (40 g/mole). Each of the three sectors in the reactor has its own CBC loop and is neutronically, but not hydraulically, coupled to the other sectors.

  11. Obtaining Global Picture From Single Point Observations by Combining Data Assimilation and Machine Learning Tools

    Science.gov (United States)

    Shprits, Y.; Zhelavskaya, I. S.; Kellerman, A. C.; Spasojevic, M.; Kondrashov, D. A.; Ghil, M.; Aseev, N.; Castillo Tibocha, A. M.; Cervantes Villa, J. S.; Kletzing, C.; Kurth, W. S.

    2017-12-01

    The increasing volume of satellite measurements requires the deployment of new tools that can utilize such vast amounts of data. Satellite measurements are usually limited to a single location in space, which complicates data analysis geared towards reproducing the global state of the space environment. In this study we show how measurements can be combined by means of data assimilation and how machine learning can help analyze large amounts of data and develop global models that are trained on single-point measurements. Data assimilation: Manual analysis of satellite measurements is a challenging task, while automated analysis is complicated by the fact that measurements are taken at various locations in space, have different instrumental errors, and often vary by orders of magnitude. We show results of a long-term reanalysis of radiation belt measurements along with fully operational real-time predictions using the data-assimilative VERB code. Machine learning: We present an application of machine learning tools to the analysis of NASA Van Allen Probes upper-hybrid frequency measurements. Using the obtained data set, we train a new global predictive neural network. The results of the Van Allen Probes based neural network are compared with historical IMAGE satellite observations. We also show examples of predictions of geomagnetic indices using neural networks. Combination of machine learning and data assimilation: We discuss how data assimilation tools and machine learning tools can be combined so that physics-based insight into the dynamics of a particular system can be joined with empirical knowledge of its non-linear behavior.

  12. An approach to eliminate stepped features in multistage incremental sheet forming process: Experimental and FEA analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nirala, Harish Kumar; Jain, Prashant K.; Tandon, Puneet [PDPM Indian Institute of Information Technology, Design and Manufacturing Jabalpur Jabalpur-482005, Madhya Pradesh (India); Roy, J. J.; Samal, M. K. [Bhabha Atomic Research Centre, Mumbai (India)

    2017-02-15

    Incremental sheet forming (ISF) is a recently developed manufacturing technique. In ISF, forming is done by applying a deformation force through the motion of a numerically controlled (NC) single point forming tool on a clamped sheet metal blank. Single point incremental sheet forming (SPISF) is also known as a die-less forming process because no die is required to fabricate a component by this process. It is now widely accepted for rapid manufacturing of sheet metal components. The formability of the SPISF process can be improved by adding intermediate stages, which is known as the multi-stage SPISF (MSPISF) process. However, the intermediate stages of the MSPISF process generate stepped features during forming. This paper investigates the generation of stepped features with simulation and experimental results. An effective MSPISF strategy is proposed to remove or eliminate these undesirable stepped features.

  13. Effect of multiple forming tools on geometrical and mechanical properties in incremental sheet forming

    Science.gov (United States)

    Wernicke, S.; Dang, T.; Gies, S.; Tekkaya, A. E.

    2018-05-01

    The trend towards a higher variety of products requires economical manufacturing processes suitable for the production of prototypes and small batches. In the case of complex hollow-shaped parts, single point incremental forming (SPIF) represents a highly flexible process. This flexibility, however, comes along with a very long process time. To decrease the process time, a new incremental forming approach with multiple forming tools is investigated. The influence of two incremental forming tools on the resulting mechanical and geometrical component properties, compared to SPIF, is presented. Sheets made of EN AW-1050A were formed into frustums of a pyramid using different tool-path strategies, and several variations of the tool-path strategy are analyzed. A time saving of between 40% and 60% was observed, depending on the tool-path and the radii of the forming tools, while the mechanical properties remained unchanged. This knowledge can increase the cost efficiency of incremental forming processes.

  14. Estimating envelope thermal characteristics from single point in time thermal images

    Science.gov (United States)

    Alshatshati, Salahaldin Faraj

    Energy efficiency programs implemented nationally in the U.S. by utilities have rendered savings which have cost on average $0.03/kWh. This cost is still well below generation costs. However, as the lowest cost energy efficiency measures are adopted, the cost effectiveness of further investment declines. Thus there is a need to more effectively find the most opportunities for savings regionally and nationally, so that the greatest cost effectiveness in implementing energy efficiency can be achieved. Integral to this process are at-scale energy audits. However, on-site building energy audits are expensive, in the range of US$1.29/m2-$5.37/m2, and there are an insufficient number of professionals to perform the audits. Energy audits that can be conducted at scale and at low cost are needed. Research is presented that addresses community-wide scale characterization of building envelope thermal characteristics via drive-by and fly-over GPS-linked thermal imaging. A central question drives this research: can single point-in-time thermal images be used to infer U-values and thermal capacitances of walls and roofs? Previous efforts to use thermal images to estimate U-values have been limited to rare steady exterior weather conditions. The approaches posed here are based upon the development of two models. The first is a dynamic model of a building envelope component with unknown U-value and thermal capacitance. The weather conditions prior to the thermal image are used as inputs to the model. The model is solved to determine the exterior surface temperature, ultimately predicting the temperature at the time of the thermal measurement. The model U-value and thermal capacitance are tuned in order to force the error between the predicted surface temperature and the measured surface temperature from thermal imaging to be near zero. This model is developed simply to show that such a model cannot be relied upon to accurately estimate the U-value. The second is a data
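
    The tuning procedure described in the abstract can be caricatured as follows: drive a lumped (single-node) wall model with the outdoor temperature history preceding the thermal image, then adjust the unknown U-value until the predicted exterior surface temperature matches the imaged one. This is a toy sketch with invented numbers, not the author's model, and, as the abstract notes, such a fit alone need not recover the true U-value.

```python
def surface_temp(U, t_in=20.0, weather=None, h_out=15.0, cap=2.0e5, dt=3600.0):
    """March a lumped wall node through hourly outdoor temperatures.
    U couples the node to the indoor air, h_out to the outdoor air.
    All parameter values here are invented for illustration."""
    if weather is None:
        weather = [5.0, 4.0, 3.0, 2.0, 1.0, 0.0]  # hypothetical hourly T_out, deg C
    t_s = weather[0]
    for t_out in weather:
        t_s += dt / cap * (U * (t_in - t_s) + h_out * (t_out - t_s))
    return t_s

def fit_u(measured, lo=0.1, hi=5.0):
    """Bisection on U: with t_in above t_s throughout, the predicted
    surface temperature increases monotonically with U."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if surface_temp(mid) < measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u_true = 1.3
u_est = fit_u(surface_temp(u_true))  # recovers the U used to generate the "measurement"
```

    The paper's dynamic model additionally tunes the thermal capacitance and uses real weather histories; this sketch only shows the shape of the inverse problem.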

  15. FEM Simulation of Incremental Shear

    International Nuclear Information System (INIS)

    Rosochowski, Andrzej; Olejnik, Lech

    2007-01-01

    A popular way of producing ultrafine grained metals on a laboratory scale is severe plastic deformation. This paper introduces a new severe plastic deformation process of incremental shear. A finite element method simulation is carried out for various tool geometries and process kinematics. It has been established that for the successful realisation of the process the inner radius of the channel as well as the feeding increment should be approximately 30% of the billet thickness. The angle at which the reciprocating die works the material can be 30 deg. When compared to equal channel angular pressing, incremental shear shows basic similarities in the mode of material flow and a few technological advantages which make it an attractive alternative to the known severe plastic deformation processes. The most promising characteristic of incremental shear is the possibility of processing very long billets in a continuous way, which makes the process more industrially relevant.

  16. FDTD Stability: Critical Time Increment

    OpenAIRE

    Z. Skvor; L. Pauk

    2003-01-01

    A new approach suitable for determining the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries, is presented. The maximal time increment corresponds to a characteristic value of a Helmholtz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of ...
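
    For a uniform Cartesian Yee mesh, the critical time increment that the eigenvalue approach above generalizes reduces to the familiar Courant (CFL) limit. A quick sanity-check sketch:

```python
from math import sqrt

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def fdtd_max_dt(dx, dy, dz, c=C0):
    """Courant stability limit for the standard Yee FDTD scheme on a
    uniform Cartesian mesh: dt <= 1 / (c * sqrt(dx^-2 + dy^-2 + dz^-2)).
    Curvilinear meshes and non-trivial boundaries need the eigenvalue
    analysis described in the abstract instead of this closed form."""
    return 1.0 / (c * sqrt(dx**-2 + dy**-2 + dz**-2))

dt = fdtd_max_dt(1e-3, 1e-3, 1e-3)  # 1 mm cubic cells: dt = dx / (c * sqrt(3))
```

    For cubic cells the formula collapses to dt = dx / (c √3), which is the usual rule of thumb for 3D FDTD.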

  17. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
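
    The "ever-increasing kernel matrix" issue is easy to picture: adding one training sample to an n-sample Gram matrix requires only a new row and column of kernel evaluations. The sketch below, using an RBF kernel and made-up data (it illustrates the matrix growth, not the INPT algorithm itself), shows exactly that update:

```python
from math import exp

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel between two equal-length tuples."""
    return exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def extend_gram(K, data, x_new, kernel=rbf):
    """Append one sample to an existing Gram matrix instead of recomputing
    it: only the n evaluations k(x_new, x_i) plus k(x_new, x_new) are new."""
    col = [kernel(x, x_new) for x in data]
    K = [row + [c] for row, c in zip(K, col)]  # add the new column
    K.append(col + [kernel(x_new, x_new)])     # add the new row
    data.append(x_new)
    return K

data = [(0.0, 0.0), (1.0, 0.0)]                     # made-up 2D samples
K = [[rbf(a, b) for b in data] for a in data]       # initial 2x2 Gram matrix
K = extend_gram(K, data, (0.0, 1.0))                # now 3x3
```

    The resulting matrix stays symmetric with unit diagonal (for the RBF kernel); the point of INPT is that sample coordinates derived from the old K need not be revisited when this growth happens.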

  18. Highly macroscopically degenerated single-point ground states as source of specific heat capacity anomalies in magnetic frustrated systems

    Science.gov (United States)

    Jurčišinová, E.; Jurčišin, M.

    2018-04-01

    Anomalies of the specific heat capacity are investigated in the framework of the exactly solvable antiferromagnetic spin-1/2 Ising model in the external magnetic field on the geometrically frustrated tetrahedron recursive lattice. It is shown that the Schottky-type anomaly in the behavior of the specific heat capacity is related to the existence of unique highly macroscopically degenerated single-point ground states which are formed on the borders between neighboring plateau-like ground states. It is also shown that the very existence of these single-point ground states with large residual entropies predicts the appearance of another anomaly in the behavior of the specific heat capacity at low temperatures, namely, the field-induced double-peak structure, which exists, and should be observed experimentally, along with the Schottky-type anomaly in various frustrated magnetic systems.
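    For context, the Schottky-type anomaly referred to above is the specific-heat peak produced by any gapped, discretely degenerate low-energy spectrum; a minimal two-level sketch (not the authors' recursive-lattice solution) reproduces it:

```python
import numpy as np

def schottky_heat_capacity(T, delta=1.0, g0=1, g1=1):
    """Specific heat (per site, in units of k_B) of a two-level system with
    energy gap `delta` (in temperature units) and degeneracies g0, g1."""
    x = delta / T
    e = (g1 / g0) * np.exp(-x)
    # C/k_B = x^2 * e / (1 + e)^2
    return x**2 * e / (1.0 + e)**2

T = np.linspace(0.05, 3.0, 500)
C = schottky_heat_capacity(T)
T_peak = T[np.argmax(C)]   # peak near T ~ 0.42*delta for g0 = g1
```

    The single broad maximum at a temperature set by the gap is the signature looked for experimentally; the field-induced double-peak structure of the abstract arises when the frustrated lattice provides more than one such scale.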

  19. Expanding the Operational Limits of the Single-Point Impedance Diagnostic for Internal Temperature Monitoring of Lithium-ion Batteries

    International Nuclear Information System (INIS)

    Spinner, Neil S.; Love, Corey T.; Rose-Pehrsson, Susan L.; Tuttle, Steven G.

    2015-01-01

    Highlights: • Single-point impedance diagnostic technique demonstrated for lithium-ion batteries • Correlation between imaginary impedance and internal temperature determined • Instantaneous monitoring of commercial lithium-ion battery internal temperature • Expanded temperature range from −10°C up to 95°C • Non-invasive method useful for practical temperature monitoring of commercial cells - Abstract: Instantaneous internal temperature monitoring of a commercial 18650 LiCoO2 lithium-ion battery was performed using a single-point EIS measurement. A correlation between the imaginary impedance, -Z_imag, and internal temperature at 300 Hz was developed that was independent of the battery's state of charge. An Arrhenius-type dependence was applied, and the activation energy for SEI ionic conductivity was found to be 0.13 eV. Two separate temperature-time experiments were conducted with different sequences of temperature, and single-point impedance tests at 300 Hz were performed to validate the correlation. Limitations were observed with the upper temperature range (68°C < T < 95°C), and consequently a secondary, empirical fit was applied for this upper range to improve accuracy. Average differences between actual and fit temperatures decreased around 3-7°C for the upper range with the secondary correlation. The impedance response at this frequency corresponded to the anode/SEI layer, and the SEI is reported to be thermally stable up to around 100°C, at which point decomposition may occur leading to battery deactivation and/or total failure. It is therefore of great importance to be able to track internal battery temperatures up to this critical point of 100°C, and this work demonstrates an expansion of the single-point EIS diagnostic to these elevated temperatures.
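    The Arrhenius-type correlation described above can be illustrated in a few lines. Only the 0.13 eV activation energy is taken from the abstract; the prefactor is an invented constant for the sketch, not the paper's fit:

```python
import numpy as np

K_B = 8.617e-5          # Boltzmann constant, eV/K
E_A = 0.13              # activation energy from the abstract, eV

def z_imag(T_kelvin, prefactor=1e-3):
    """Illustrative Arrhenius model: -Z_imag grows as exp(E_A / k_B T),
    since impedance scales inversely with SEI ionic conductivity.
    `prefactor` (ohms) is an invented constant."""
    return prefactor * np.exp(E_A / (K_B * T_kelvin))

def temperature_from_z(z, prefactor=1e-3):
    """Invert the model: estimate internal temperature from -Z_imag."""
    return E_A / (K_B * np.log(z / prefactor))

T_true = 298.15                      # 25 degC
z = z_imag(T_true)                   # single-point measurement at 300 Hz
T_est = temperature_from_z(z)        # recovered internal temperature
```

    In practice the prefactor and activation energy come from a calibration fit of ln(-Z_imag) versus 1/T, and the abstract's secondary empirical fit replaces this model above 68°C.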

  20. Influence of Fiber Orientation on Single-Point Cutting Fracture Behavior of Carbon-Fiber/Epoxy Prepreg Sheets

    OpenAIRE

    Wei, Yingying; An, Qinglong; Cai, Xiaojiang; Chen, Ming; Ming, Weiwei

    2015-01-01

    The purpose of this article is to investigate the influences of carbon fibers on the fracture mechanism of carbon fibers both in macroscopic view and microscopic view by using single-point flying cutting method. Cutting tools with three different materials were used in this research, namely, PCD (polycrystalline diamond) tool, CVD (chemical vapor deposition) diamond thin film coated carbide tool and uncoated carbide tool. The influence of fiber orientation on the cutting force and fracture to...

  1. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of a database back-end and visualizer front-end into one tightly coupled system. The main aim, which we achieve, is to reduce the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We...... also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO) which considers visible objects and enables incremental visualization along the observer movement...... path. IVVO is the novel solution which allows data to be visualized and loaded on the fly from the database and which regards visibilities of objects. We run a set of experiments to show that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses...

  2. Incremental Trust in Grid Computing

    DEFF Research Database (Denmark)

    Brinkløv, Michael Hvalsøe; Sharp, Robin

    2007-01-01

    This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and ...... of Grid computing systems....

  3. Convergent systems vs. incremental stability

    NARCIS (Netherlands)

    Rüffer, B.S.; Wouw, van de N.; Mueller, M.

    2013-01-01

    Two similar stability notions are considered; one is the long established notion of convergent systems, the other is the younger notion of incremental stability. Both notions require that any two solutions of a system converge to each other. Yet these stability concepts are different, in the sense

  4. Apollo: giving application developers a single point of access to public health models using structured vocabularies and Web services.

    Science.gov (United States)

    Wagner, Michael M; Levander, John D; Brown, Shawn; Hogan, William R; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem (which we define as a configuration and a query of results) exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services.

  5. The influence of shrinkage-cracking on the drying behaviour of White Portland cement using Single-Point Imaging (SPI).

    Science.gov (United States)

    Beyea, S D; Balcom, B J; Bremner, T W; Prado, P J; Cross, A R; Armstrong, R L; Grattan-Bellew, P E

    1998-11-01

    The removal of water from pores in hardened cement paste smaller than 50 nm results in cracking of the cement matrix due to the tensile stresses induced by drying shrinkage. Cracks in the matrix fundamentally alter the permeability of the material, and therefore directly affect the drying behaviour. Using Single-Point Imaging (SPI), we obtain one-dimensional moisture profiles of hydrated White Portland cement cylinders as a function of drying time. The drying behaviour of White Portland cement, is distinctly different from the drying behaviour of related concrete materials containing aggregates.

  6. Single-point reactive power control method on voltage rise mitigation in residential networks with high PV penetration

    DEFF Research Database (Denmark)

    Hasheminamin, Maryam; Agelidis, Vassilios; Ahmadi, Abdollah

    2018-01-01

    Voltage rise (VR) due to reverse power flow is an important obstacle for high integration of Photovoltaic (PV) into residential networks. This paper introduces and elaborates a novel index-based single-point reactive power control (SPRPC) methodology to mitigate voltage rise by absorbing adequate reactive power from one selected point. The proposed index utilizes short circuit analysis to select the best point to apply this Volt/Var control method. SPRPC is supported technically and financially by the distribution network operator, which makes it cost effective, simple and efficient. The methodology is evaluated on a system with a high r/x ratio, and the efficacy, effectiveness and cost of SPRPC are compared to droop control to assess its advantages.

  7. Oxidative phosphorylation revisited

    DEFF Research Database (Denmark)

    Nath, Sunil; Villadsen, John

    2015-01-01

    The fundamentals of oxidative phosphorylation and photophosphorylation are revisited. New experimental data on the involvement of succinate and malate anions respectively in oxidative phosphorylation and photophosphorylation are presented. These new data offer a novel molecular mechanistic...

  8. Mitigation of Critical Single Point Failure (SPF) Material - Laminac 4116 Binder Replacement Program for Parachute and Cluster Stars Illuminant Compositions for Hand Held Signals

    National Research Council Canada - National Science Library

    Lakshminarayanan, G. R; Chen, Gary; Ames, Richard; Lee, Wai T; Wejsa, James L

    2006-01-01

    Laminac 4116 binder has been identified as a single point failure (SPF) material since it is being produced by only one company and there is a possibility that the company may discontinue production due to low product demand...

  9. Unmanned Maritime Systems Incremental Acquisition Approach

    Science.gov (United States)

    2016-12-01

    MBA professional report, by Thomas Driscoll, Lieutenant. Approved for public release; distribution is unlimited. ABSTRACT: The purpose of this MBA report is to explore and understand the issues

  10. Incremental deformation: A literature review

    Directory of Open Access Journals (Sweden)

    Nasulea Daniel

    2017-01-01

    Full Text Available Nowadays customer requirements are permanently changing, and accordingly the tendency in modern industry is to implement flexible manufacturing processes. In the last decades, metal forming has gained the attention of researchers and considerable changes have occurred. Because, for a small number of parts, the conventional metal forming processes are expensive and time-consuming in terms of design and manufacturing preparation, manufacturers and researchers became interested in flexible processes. One of the most investigated flexible processes in metal forming is incremental sheet forming (ISF). ISF is an advanced flexible manufacturing process which allows complex 3D products to be manufactured without expensive dedicated tools. In most cases an ISF process needs the following: a simple tool, a fixing device for the sheet metal blank and a universal CNC machine. Using this process, axisymmetric parts can be manufactured, usually on a CNC lathe, but also complex asymmetrical parts using CNC milling machines, robots or dedicated equipment. This paper aims to present the current status of incremental sheet forming technologies in terms of process parameters and their influences, wall thickness distribution, springback effect, formability, surface quality and the current main research directions.

  11. FDTD Stability: Critical Time Increment

    Directory of Open Access Journals (Sweden)

    Z. Skvor

    2003-06-01

    Full Text Available A new approach suitable for determination of the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries is presented. The maximal time increment corresponds to a characteristic value of a Helmholz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of precision etc.), the maximal stable time increment is obtained from the highest characteristic value. The FD system is solved by an iterative method, which uses only slightly altered original FDTD formulae. The Courant condition yields a stable time increment, but in certain cases the maximum increment is slightly greater [2].
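    For the textbook case of a uniform Cartesian Yee mesh, the Courant bound mentioned at the end of the abstract has a closed form, dt_max = 1/(c*sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)); curvilinear meshes and special boundaries need the eigenvalue approach of the paper instead. A sketch of the Cartesian bound:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def courant_dt(dx, dy, dz, c=C0):
    """Maximum stable time step for the Yee FDTD scheme on a uniform
    Cartesian mesh (classic Courant bound; not valid for the general
    curvilinear meshes treated in the abstract)."""
    return 1.0 / (c * math.sqrt(1.0/dx**2 + 1.0/dy**2 + 1.0/dz**2))

dt = courant_dt(1e-3, 1e-3, 1e-3)   # 1 mm cubic cells
```

    For 1 mm cubic cells this gives roughly 1.93 ps; the paper's point is that for non-Cartesian meshes the true stability limit can exceed this estimate.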

  12. Incremental Observer Relative Data Extraction

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    2004-01-01

    The visual exploration of large databases calls for a tight coupling of database and visualization systems. Current visualization systems typically fetch all the data and organize it in a scene tree that is then used to render the visible data. For immersive data explorations in a Cave...... or a Panorama, where an observer is in data space, this approach is far from optimal. A more scalable approach is to make the database system observer-aware and to restrict the communication between the database and visualization systems to the relevant data. In this paper VR-tree, an extension of the R-tree, is used to index visibility ranges of objects. We introduce a new operator for incremental Observer Relative data Extraction (iORDE). We propose the Volatile Access STructure (VAST), a lightweight main memory structure that is created on the fly and is maintained during visual data explorations. VAST...

  13. Influence of Fiber Orientation on Single-Point Cutting Fracture Behavior of Carbon-Fiber/Epoxy Prepreg Sheets

    Directory of Open Access Journals (Sweden)

    Yingying Wei

    2015-10-01

    Full Text Available The purpose of this article is to investigate the influences of carbon fibers on the fracture mechanism of carbon fibers both in macroscopic view and microscopic view by using the single-point flying cutting method. Cutting tools of three different materials were used in this research, namely, a PCD (polycrystalline diamond) tool, a CVD (chemical vapor deposition) diamond thin film coated carbide tool and an uncoated carbide tool. The influence of fiber orientation on the cutting force and fracture topography was analyzed and conclusions were drawn that cutting forces are not affected by cutting speeds but significantly influenced by the fiber orientation. Cutting forces presented smaller values at the fiber orientations of 0/180° and 15/165° but the highest one at 30/150°. The fracture mechanism of carbon fibers was studied in different cutting conditions such as 0° orientation angle, 90° orientation angle, orientation angles along the fiber direction, and orientation angles inverse to the fiber direction. In addition, a prediction model for the cutting defects of carbon fiber reinforced plastic was established based on acoustic emission (AE) signals.

  14. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    Science.gov (United States)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs other than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
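    The geometric idea of locating the common intersection of hyperboloids defined by sensor pairs can be illustrated with a toy pairwise-misfit objective and a coarse grid search; this is a simplified 2-D stand-in, not the paper's virtual-field objective or optimizer:

```python
import numpy as np
from itertools import combinations

def pairwise_misfit(src, sensors, t_arr, v):
    """Sum of squared mismatches between measured arrival-time differences
    and those implied by candidate position `src`: one term per sensor
    pair, i.e. one term per hyperboloid (hyperbola in 2-D)."""
    d = np.linalg.norm(sensors - src, axis=1)
    err = 0.0
    for i, j in combinations(range(len(sensors)), 2):
        err += ((d[i] - d[j]) / v - (t_arr[i] - t_arr[j])) ** 2
    return err

def locate(sensors, t_arr, v, n_grid=41, lim=100.0):
    """Coarse grid search for the minimising position (toy solver)."""
    axes = np.linspace(-lim, lim, n_grid)
    best, best_err = None, np.inf
    for x in axes:
        for y in axes:
            e = pairwise_misfit(np.array([x, y]), sensors, t_arr, v)
            if e < best_err:
                best, best_err = np.array([x, y]), e
    return best

# Synthetic example: 4 sensors, wave speed 5000 m/s, source at (20, -30).
sensors = np.array([[-80., -80.], [80., -80.], [80., 80.], [-80., 80.]])
true_src = np.array([20., -30.])
v = 5000.0
t_arr = np.linalg.norm(sensors - true_src, axis=1) / v
est = locate(sensors, t_arr, v)
```

    Note that only arrival-time differences enter the objective, so the unknown origin time cancels; to probe LPE tolerance as in the abstract, one would perturb individual entries of `t_arr` and compare how this objective and a classic least-residual objective degrade.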

  15. Evaluation of mixing downstream of tees in duct systems with respect to single point representative air sampling.

    Science.gov (United States)

    Kim, Taehong; O'Neal, Dennis L; Ortiz, Carlos

    2006-09-01

    Air duct systems in nuclear facilities must be monitored with continuous sampling in case of an accidental release of airborne radionuclides. The purpose of this work is to identify the air sampling locations where the velocity and contaminant concentration coefficients of variation fall below the 20% required by the American National Standards Institute/Health Physics Society N13.1-1999. Experiments on velocity and tracer gas concentration were conducted on a generic "T" mixing system which included combinations of three sub ducts, one main duct, and air velocities from 0.5 to 2 m s-1 (100 to 400 fpm). The experimental results suggest that turbulent mixing yields acceptable velocity coefficients of variation 6 hydraulic diameters downstream of the T-junction. About 95% of the cases achieved coefficients of variation below 10% by 6 hydraulic diameters. However, above a velocity ratio (velocity in the sub duct/velocity in the main duct) of 2, velocity profiles became uniform within a shorter distance downstream of the T-junction as the velocity ratio went up. For the tracer gas concentration, the distance needed for the coefficients of variation to drop below 20% decreased with increasing velocity ratio due to the sub duct airflow momentum. The results may apply to other duct systems with similar geometries and, ultimately, be a basis for selecting a proper sampling location under the requirements of single point representative sampling.
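    The acceptance criterion is simply the coefficient of variation computed across traverse points of the duct cross-section; a minimal check (the concentration values below are invented for illustration, not measured data):

```python
import numpy as np

def coefficient_of_variation(samples):
    """CV (%) = 100 * sample standard deviation / mean."""
    samples = np.asarray(samples, dtype=float)
    return 100.0 * samples.std(ddof=1) / samples.mean()

# Hypothetical tracer-gas concentrations at points across a duct cross-section.
conc = [102.0, 98.5, 101.2, 99.8, 100.5, 97.9]
cv = coefficient_of_variation(conc)
acceptable = cv <= 20.0   # ANSI/HPS N13.1-1999 uniformity criterion
```

    A well-mixed profile like this one passes easily; the experiments above ask how far downstream of the T-junction the real velocity and concentration profiles satisfy the same test.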

  16. Role of single-point mutations and deletions on transition temperatures in ideal proteinogenic heteropolymer chains in the gas phase.

    Science.gov (United States)

    Olivares-Quiroz, L

    2016-07-01

    A coarse-grained statistical mechanics-based model for ideal heteropolymer proteinogenic chains of non-interacting residues is presented in terms of the size K of the chain and the set of helical propensities [Formula: see text] associated with each residue j along the chain. For this model, we provide an algorithm to compute the degeneracy tensor [Formula: see text] associated with energy level [Formula: see text] where [Formula: see text] is the number of residues with a native contact in a given conformation. From these results, we calculate the equilibrium partition function [Formula: see text] and characteristic temperature [Formula: see text] at which a transition from a low- to a high-entropy state is observed. The formalism is applied to analyze the effect on characteristic temperatures [Formula: see text] of single-point mutations and deletions of specific amino acids [Formula: see text] along the chain. Two probe systems are considered. First, we address the case of a random heteropolymer of size K and given helical propensities [Formula: see text] on a conformational phase space. Second, we focus our attention on a particular set of neuropentapeptides, [Met-5] and [Leu-5] enkephalins, whose thermodynamic stability is a key feature in their coupling to [Formula: see text] and [Formula: see text] receptors and the triggering of biochemical responses.
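    For non-interacting residues, the number of conformations at each energy level (i.e., for each count m of native residues) can be accumulated as the coefficients of a polynomial product with one factor per residue. The per-residue state counts below are invented; this sketches that bookkeeping, not the paper's degeneracy-tensor algorithm:

```python
import numpy as np

def degeneracy(nonnative_counts):
    """Number of chain conformations with exactly m native residues,
    assuming each residue is independently either native (1 state) or
    non-native (n_j states): coefficients of prod_j (n_j + x)."""
    g = np.array([1.0])                  # polynomial in x, lowest degree first
    for n in nonnative_counts:
        g = np.convolve(g, [n, 1.0])     # multiply by (n_j + x)
    return g                             # g[m] = degeneracy of level m

def partition_function(g, eps, T):
    """Z = sum_m g[m] * exp(m*eps/T), with energy -eps per native residue
    (units with k_B = 1)."""
    m = np.arange(len(g))
    return float((g * np.exp(m * eps / T)).sum())

g = degeneracy([2, 2, 3, 3, 4])   # invented per-residue non-native counts
Z_hot = partition_function(g, eps=1.0, T=10.0)
```

    A single-point mutation corresponds to changing one entry of the count list (one propensity), and a deletion to removing a factor; the resulting shift of the heat-capacity peak locates the change in the characteristic temperature.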

  17. Noncontact on-machine measurement system based on capacitive displacement sensors for single-point diamond turning

    Science.gov (United States)

    Li, Xingchang; Zhang, Zhiyu; Hu, Haifei; Li, Yingjie; Xiong, Ling; Zhang, Xuejun; Yan, Jiwang

    2018-04-01

    On-machine measurements can improve the form accuracy of optical surfaces in single-point diamond turning applications; however, commercially available linear variable differential transformer sensors are inaccurate and can potentially scratch the surface. We present an on-machine measurement system based on capacitive displacement sensors for high-precision optical surfaces. In the proposed system, a position-trigger method of measurement was developed to ensure strict correspondence between the measurement points and the measurement data with no intervening time-delay. In addition, a double-sensor measurement was proposed to reduce the electric signal noise during spindle rotation. Using the proposed system, the repeatability of 80-nm peak-to-valley (PV) and 8-nm root-mean-square (RMS) was achieved through analyzing four successive measurement results. The accuracy of 109-nm PV and 14-nm RMS was obtained by comparing with the interferometer measurement result. An aluminum spherical mirror with a diameter of 300 mm was fabricated, and the resulting measured form error after one compensation cut was decreased to 254 nm in PV and 52 nm in RMS. These results confirm that the measurements of the surface form errors were successfully used to modify the cutting tool path during the compensation cut, thereby ensuring that the diamond turning process was more deterministic. In addition, the results show that the noise level was significantly reduced with the reference sensor even under a high rotational speed.
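    The benefit of the double-sensor arrangement can be viewed as common-mode rejection: the reference sensor sees the spindle-induced disturbance but not the surface form, so subtracting it cancels the disturbance while only doubling the much smaller uncorrelated electrical noise power. A sketch with synthetic signals (all amplitudes invented, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.linspace(0.0, 1.0, n)

form = 200e-9 * np.sin(2 * np.pi * 3 * t)     # surface form to be measured, m
spindle = 50e-9 * np.sin(2 * np.pi * 120 * t) # spindle-synchronous disturbance
noise = lambda: 2e-9 * rng.normal(size=n)     # sensor electrical noise

probe = form + spindle + noise()              # measurement sensor signal
reference = spindle + noise()                 # reference sensor: disturbance only
corrected = probe - reference                 # common-mode rejection

rms = lambda x: np.sqrt(np.mean(x**2))
improvement = rms(probe - form) / rms(corrected - form)
```

    In this toy setup the residual error drops by an order of magnitude, which mirrors the reported noise reduction at high rotational speed.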

  18. Influence of Fiber Orientation on Single-Point Cutting Fracture Behavior of Carbon-Fiber/Epoxy Prepreg Sheets.

    Science.gov (United States)

    Wei, Yingying; An, Qinglong; Cai, Xiaojiang; Chen, Ming; Ming, Weiwei

    2015-10-02

    The purpose of this article is to investigate the influences of carbon fibers on the fracture mechanism of carbon fibers both in macroscopic view and microscopic view by using single-point flying cutting method. Cutting tools with three different materials were used in this research, namely, PCD (polycrystalline diamond) tool, CVD (chemical vapor deposition) diamond thin film coated carbide tool and uncoated carbide tool. The influence of fiber orientation on the cutting force and fracture topography were analyzed and conclusions were drawn that cutting forces are not affected by cutting speeds but significantly influenced by the fiber orientation. Cutting forces presented smaller values in the fiber orientation of 0/180° and 15/165° but the highest one in 30/150°. The fracture mechanism of carbon fibers was studied in different cutting conditions such as 0° orientation angle, 90° orientation angle, orientation angles along fiber direction, and orientation angles inverse to the fiber direction. In addition, a prediction model on the cutting defects of carbon fiber reinforced plastic was established based on acoustic emission (AE) signals.

  19. On excursion increments in heartbeat dynamics

    International Nuclear Information System (INIS)

    Guzmán-Vargas, L.; Reyes-Ramírez, I.; Hernández-Pérez, R.

    2013-01-01

    We study correlation properties of excursion increments of heartbeat time series from healthy subjects and heart failure patients. We construct the excursion time from the original heartbeat time series, representing the time employed by the walker to return to the local mean value. Next, detrended fluctuation analysis and the fractal dimension method are applied to the magnitude and sign of the excursion-time increments between successive excursions for the mentioned groups. Our results show that for magnitude series of excursion increments both groups display long-range correlations with similar correlation exponents, indicating that large (small) increments (decrements) are more likely to be followed by large (small) increments (decrements). For sign sequences and for both groups, we find that increments are short-range anti-correlated, which is noticeable under heart failure conditions
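    As a baseline for the sign-series result, note that increments of even an uncorrelated series are anti-correlated at lag 1, because successive increments share one term with opposite sign; for the sign series of an i.i.d. input the lag-1 autocorrelation is about -1/3. A sketch of the magnitude-and-sign decomposition on stand-in data (not heartbeat records):

```python
import numpy as np

rng = np.random.default_rng(0)
excursion_times = np.abs(rng.normal(1.0, 0.3, size=1001))  # synthetic stand-in

inc = np.diff(excursion_times)   # increments between successive excursions
magnitude = np.abs(inc)          # magnitude series: probed for long-range correlations
sign = np.sign(inc)              # sign series: probed for anti-correlations

# Lag-1 autocorrelation of the sign series; for i.i.d. input the expected
# value is -1/3 (successive increments share a term with opposite sign).
s = sign - sign.mean()
rho1 = (s[:-1] * s[1:]).mean() / (s * s).mean()
```

    The physiological finding is then a deviation from this i.i.d. baseline: DFA of `magnitude` reveals long-range correlations, while the anti-correlation of `sign` remains short-range.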

  20. Integrating single-point vibrometer and full-field electronic speckle pattern interferometer to evaluate a micro-speaker

    Science.gov (United States)

    Chang, Wen-Chi; Chen, Yu-Chi; Chien, Chih-Jen; Wang, An-Bang; Lee, Chih-Kung

    2011-04-01

    A testing system containing an advanced vibrometer/interferometer device (AVID) and a high-speed electronic speckle pattern interferometer (ESPI) was developed. AVID is a laser Doppler vibrometer that can detect single-point linear and angular velocity with DC to 20 MHz bandwidth and nanometer resolution. In swept frequency mode, the frequency response of the structure of interest can be measured from mHz to MHz. The ESPI experimental setup can be used to measure full-field out-of-plane displacement. A 5-1 phase shifting method and a correlation algorithm were used to analyze the phase difference between the reference signal and the speckle signal scattered from the sample surface. To show the efficiency and effectiveness of AVID and ESPI, we designed a micro-speaker composed of a plate with fixed boundaries and two piezo-actuators attached to the sides of the plate. The AVID was used to measure the vibration of one of the piezo-actuators and the ESPI was adopted to measure the two-dimensional out-of-plane displacement of the plate. A microphone was used to measure the acoustic response created by the micro-speaker. Driving signals included a random signal, a sinusoidal signal, an amplitude-modulated high-frequency carrier signal, etc. The angular response induced by the amplitude-modulated high-frequency carrier signal was found to be significantly narrower than the frequency responses created by other types of driving signals. The validity of our newly developed NDE system is detailed by comparing the vibration signal of the micro-speaker with the acoustic field generated.

  1. A single point acupuncture treatment at large intestine meridian: a randomized controlled trial in acute tonsillitis and pharyngitis.

    Science.gov (United States)

    Fleckenstein, Johannes; Lill, Christian; Lüdtke, Rainer; Gleditsch, Jochen; Rasp, Gerd; Irnich, Dominik

    2009-09-01

    One out of 4 patients visiting a general practitioner reports a sore throat associated with pain on swallowing. This study was established to examine the immediate pain-alleviating effect of a single point acupuncture treatment applied to the large intestine meridian of patients with sore throat. Sixty patients with acute tonsillitis and pharyngitis were enrolled in this randomized placebo-controlled trial. They either received acupuncture, or sham laser acupuncture, directed to the large intestine meridian section between acupuncture points LI 8 and LI 10. The main outcome measure was the change of pain intensity on swallowing a sip of water, evaluated by a visual analog scale 15 minutes after treatment. A credibility assessment regarding the respective treatment was performed. The pain intensity for the acupuncture group before and immediately after therapy was 5.6 ± 2.8 and 3.0 ± 3.0, and for the sham group 5.6 ± 2.5 and 3.8 ± 2.5, respectively. Despite a more pronounced improvement in the acupuncture group, there was no significant difference between groups (Δ = 0.9, confidence interval: −0.2 to 2.0; P = 0.12; analysis of covariance). Patients' satisfaction was high in both treatment groups. The study was prematurely terminated due to a subsequent lack of suitable patients. A single acupuncture treatment applied to a selected area of the large intestine meridian was no more effective in the alleviation of pain associated with clinical sore throat than sham laser acupuncture applied to the same area. Hence, clinically relevant improvement could be achieved. Pain alleviation might partly be due to the intense palpation of the large intestine meridian. The benefit of a comprehensive acupuncture treatment protocol in this condition should be subject to further trials.

  2. Van Allen Probes Science Gateway: Single-Point Access to Long-Term Radiation Belt Measurements and Space Weather Nowcasting

    Science.gov (United States)

    Romeo, G.; Barnes, R. J.; Ukhorskiy, A. Y.; Sotirelis, T.; Stephens, G.

    2017-12-01

    The Science Gateway gives single-point access to over 4.5 years of comprehensive wave and particle measurements from the Van Allen Probes NASA twin-spacecraft mission. The Gateway provides a set of visualization and data analysis tools including: HTML5-based interactive visualization of high-level data products from all instrument teams in the form of: line plots, orbital content plots, dynamical energy spectra, L-shell context plots (including two-spacecraft plotting), FFT spectra of wave data, solar wind and geomagnetic indices data, etc.; download custom multi-instrument CDF data files of selected data products; publication quality plots of digital data; combined orbit predicts for mission planning and coordination including: Van Allen Probes, MMS, THEMIS, Arase (ERG), Cluster, GOES, Geotail, FIREBIRD; magnetic footpoint calculator for coordination with LEO and ground-based assets; real-time computation and processing of empirical magnetic field models - computation of magnetic ephemeris, computation of adiabatic invariants. Van Allen Probes is the first spacecraft mission to provide a nowcast of the radiation environment in the heart of the radiation belts, where the radiation levels are the highest and most dangerous for spacecraft operations. For this purpose, all instruments continuously broadcast a subset of their science data in real time. Van Allen Probes partners with four foreign institutions who operate ground stations that receive the broadcast: Korea (KASI), the Czech Republic (CAS), Argentina (CONAE), and Brazil (INPE). The SpWx broadcast is then collected at APL and delivered to the community via the Science Gateway.

  3. Increment memory module for spectrometric data recording

    International Nuclear Information System (INIS)

    Zhuchkov, A.A.; Myagkikh, A.I.

    1988-01-01

    An incremental memory unit designed to input differential energy spectra of nuclear radiation is described. Using ROM as the incrementing device has made it possible to reduce the number of elements and to simplify information readout from the unit. The memory is organized as 2048 channels of 12 bits. The device is connected directly to the bus of microprocessor systems similar to the KR 580. The maximum incrementation time is 3 μs. The unit can also be used in multichannel counting mode.

  4. Small Diameter Bomb Increment II (SDB II)

    Science.gov (United States)

    2015-12-01

    Selected Acquisition Report (SAR), RCS: DD-A&T(Q&A)823-439, as of FY 2017 President's Budget. DoD Component: Air Force; Joint Participants: Department of the Navy. Mission and Description: Small Diameter Bomb Increment II (SDB II) is a joint interest United States Air Force (USAF) and Department of the Navy

  5. Incremental Query Rewriting with Resolution

    Science.gov (United States)

    Riazanov, Alexandre; Aragão, Marcelo A. T.

    We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a resolution-based first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent translation of these schematic answers to SQL queries which are evaluated using a conventional relational DBMS. We call our method incremental query rewriting, because an original semantic query is rewritten into a (potentially infinite) series of SQL queries. In this chapter, we outline the main idea of our technique - using abstractions of databases and constrained clauses for deriving schematic answers, and provide completeness and soundness proofs to justify the applicability of this technique to the case of resolution for FOL without equality. The proposed method can be directly used with regular RDBs, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.

  6. Revisiting Okun's Relationship

    NARCIS (Netherlands)

    Dixon, R.; Lim, G.C.; van Ours, Jan

    2016-01-01

    Our paper revisits Okun's relationship between observed unemployment rates and output gaps. We include in the relationship the effect of labour market institutions as well as age and gender effects. Our empirical analysis is based on 20 OECD countries over the period 1985-2013. We find that the

  7. Revisiting the Okun relationship

    NARCIS (Netherlands)

    Dixon, R. (Robert); Lim, G.C.; J.C. van Ours (Jan)

    2017-01-01

    Our article revisits the Okun relationship between observed unemployment rates and output gaps. We include in the relationship the effect of labour market institutions as well as age and gender effects. Our empirical analysis is based on 20 OECD countries over the period 1985–2013. We

  8. Bounded Intention Planning Revisited

    OpenAIRE

    Sievers Silvan; Wehrle Martin; Helmert Malte

    2014-01-01

    Bounded intention planning provides a pruning technique for optimal planning that was proposed several years ago. In addition, partial order reduction techniques based on stubborn sets have recently been investigated for this purpose. In this paper, we revisit bounded intention planning in view of stubborn sets.

  9. A Hydrostatic Paradox Revisited

    Science.gov (United States)

    Ganci, Salvatore

    2012-01-01

    This paper revisits a well-known hydrostatic paradox, observed when turning upside down a glass partially filled with water and covered with a sheet of light material. The phenomenon is studied in its most general form by including the mass of the cover. A historical survey of this experiment shows that a common misunderstanding of the phenomenon…

  10. The Faraday effect revisited

    DEFF Research Database (Denmark)

    Cornean, Horia; Nenciu, Gheorghe

    2009-01-01

    This paper is the second in a series revisiting the (effect of) Faraday rotation. We formulate and prove the thermodynamic limit for the transverse electric conductivity of Bloch electrons, as well as for the Verdet constant. The main mathematical tool is a regularized magnetic and geometric...

  11. MRI of hip prostheses using single-point methods : in vitro studies towards the artifact-free imaging of individuals with metal implants

    NARCIS (Netherlands)

    Ramos Cabrer, P.; Duynhoven, van J.P.M.; Toorn, van der A.; Nicolaij, K.

    2004-01-01

    Use of magnetic resonance imaging (MRI) in individuals with orthopedic implants is limited because of the large distortions caused by metallic components. As a possible solution for this problem, we suggest the use of single-point imaging (SPI) methods, which are immune to the susceptibility

  12. 'Felson Signs' revisited

    International Nuclear Information System (INIS)

    George, Phiji P.; Irodi, Aparna; Keshava, Shyamkumar N.; Lamont, Anthony C.

    2014-01-01

    In this article we revisit, with the help of images, those classic signs in chest radiography described by Dr Benjamin Felson himself, or other illustrious radiologists of his time, cited and discussed in 'Chest Roentgenology'. We briefly describe the causes of the signs, their utility and the differential diagnosis to be considered when each sign is seen. Wherever possible, we use CT images to illustrate the basis of some of these classic radiographic signs.

  13. Time functions revisited

    Science.gov (United States)

    Fathi, Albert

    2015-07-01

    In this paper we revisit our joint work with Antonio Siconolfi on time functions. We will give a brief introduction to the subject. We will then show how to construct a Lipschitz time function in a simplified setting. We will end with a new result showing that the Aubry set is not an artifact of our proof of existence of time functions for stably causal manifolds.

  14. Seven Issues, Revisited

    OpenAIRE

    Whitehead, Jim; De Bra, Paul; Grønbæk, Kaj; Larsen, Deena; Legget, John; schraefel, monica m.c.

    2002-01-01

    It has been 15 years since the original presentation by Frank Halasz at Hypertext'87 on seven issues for the next generation of hypertext systems. These issues are: search and query; composites; virtual structures; computation in/over the hypertext network; versioning; collaborative work; and extensibility and tailorability. Since that time, these issues have formed the nucleus of multiple research agendas within the Hypertext community. Befitting this direction-setting role, the issues have been revisited ...

  15. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2008-01-01

    We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.

  16. Efficient Incremental Checkpointing of Java Programs

    DEFF Research Database (Denmark)

    Lawall, Julia Laetitia; Muller, Gilles

    2000-01-01

    This paper investigates the optimization of language-level checkpointing of Java programs. First, we describe how to systematically associate incremental checkpoints with Java classes. While safe, the generality of this solution induces substantial execution overhead. Second, to solve...

  17. Two-Point Incremental Forming with Partial Die: Theory and Experimentation

    Science.gov (United States)

    Silva, M. B.; Martins, P. A. F.

    2013-04-01

    This paper proposes a new level of understanding of two-point incremental forming (TPIF) with partial die by means of a combined theoretical and experimental investigation. The theoretical developments include an innovative extension of the analytical model for rotational symmetric single point incremental forming (SPIF), originally developed by the authors, to address the influence of the major operating parameters of TPIF and to successfully explain the differences in formability between SPIF and TPIF. The experimental work comprised the mechanical characterization of the material and the determination of its formability limits at necking and fracture by means of circle grid analysis and benchmark incremental sheet forming tests. Results show the adequacy of the proposed analytical model to handle the deformation mechanics of SPIF and TPIF with partial die and demonstrate that neck formation is suppressed in TPIF, so that traditional forming limit curves are inapplicable to describe failure and must be replaced by fracture forming limits derived from ductile damage mechanics. The overall geometric accuracy of sheet metal parts produced by TPIF with partial die is found to be better than that of parts fabricated by SPIF due to smaller elastic recovery upon unloading.

  18. Proposed method of producing large optical mirrors Single-point diamond crushing followed by polishing with a small-area tool

    Science.gov (United States)

    Wright, G.; Bryan, J. B.

    1986-01-01

    Faster production of large optical mirrors may result from combining single-point diamond crushing of the glass with polishing using a small area tool to smooth the surface and remove the damaged layer. Diamond crushing allows a surface contour accurate to 0.5 microns to be generated, and the small area computer-controlled polishing tool allows the surface roughness to be removed without destroying the initial contour. Final contours with an accuracy of 0.04 microns have been achieved.

  19. Fabrication of an infrared Shack-Hartmann sensor by combining high-speed single-point diamond milling and precision compression molding processes.

    Science.gov (United States)

    Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y

    2018-05-01

    A novel fabrication method by combining high-speed single-point diamond milling and precision compression molding processes for fabrication of discontinuous freeform microlens arrays was proposed. Compared with slow tool servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that the surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions based on the combination of rotational and translational motions of both the high-speed spindle and linear slides. By employing this method, each micro-lenslet was regarded as a microstructure cell by passing the axis of the virtual spindle through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced to the machining process to further increase the machining efficiency. After the mold insert was machined, it was employed to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was proven to be accurate in detecting an infrared wavefront by both experiments and numerical simulation. The combined results showed that precision compression molding of chalcogenide glasses could be an economic and precision optical fabrication technology for high-volume production of infrared optics.

  20. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2012-01-01

    Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...

  1. Bottomonium spectrum revisited

    CERN Document Server

    Segovia, Jorge; Entem, David R.; Fernández, Francisco

    2016-01-01

    We revisit the bottomonium spectrum motivated by the recently exciting experimental progress in the observation of new bottomonium states, both conventional and unconventional. Our framework is a nonrelativistic constituent quark model which has been applied to a wide range of hadronic observables from the light to the heavy quark sector and thus the model parameters are completely constrained. Beyond the spectrum, we provide a large number of electromagnetic, strong and hadronic decays in order to discuss the quark content of the bottomonium states and give more insights about the better way to determine their properties experimentally.

  2. Metamorphosis in Craniiformea revisited

    DEFF Research Database (Denmark)

    Altenburger, Andreas; Wanninger, Andreas; Holmer, Lars E.

    2013-01-01

    We revisited the brachiopod fold hypothesis and investigated metamorphosis in the craniiform brachiopod Novocrania anomala. Larval development is lecithotrophic and the dorsal (brachial) valve is secreted by dorsal epithelia. We found that the juvenile ventral valve, which consists only of a thin...... brachiopods during metamorphosis to cement their pedicle to the substrate. N. anomala is therefore not initially attached by a valve but by material corresponding to pedicle cuticle. This is different to previous descriptions, which had led to speculations about a folding event in the evolution of Brachiopoda...

  3. Growth increments in teeth of Diictodon (Therapsida

    Directory of Open Access Journals (Sweden)

    J. Francis Thackeray

    1991-09-01

    Growth increments circa 0.02 mm in width have been observed in sectioned tusks of Diictodon from the Late Permian lower Beaufort succession of the South African Karoo, dated between about 260 and 245 million years ago. Mean growth increments show a decline from relatively high values in the Tropidostoma/Endothiodon Assemblage Zone, to lower values in the Aulacephalodon/Cistecephalus zone, declining still further in the Dicynodon lacerticeps/Whaitsia zone at the end of the Permian. These changes coincide with gradual changes in carbon isotope ratios measured from Diictodon tooth apatite. It is suggested that the decline in growth increments is related to environmental changes associated with a decline in primary production, which contributed to the decline in abundance and ultimate extinction of Diictodon.

  4. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically...

  5. Incremental Integrity Checking: Limitations and Possibilities

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2005-01-01

    Integrity checking is an essential means for the preservation of the intended semantics of a deductive database. Incrementality is the only feasible approach to checking and can be obtained with respect to given update patterns by exploiting query optimization techniques. By reducing the problem to query containment, we show that no procedure exists that always returns the best incremental test (aka simplification of integrity constraints), and this according to any reasonable criterion measuring the checking effort. In spite of this theoretical limitation, we develop an effective procedure...

  6. History Matters: Incremental Ontology Reasoning Using Modules

    Science.gov (United States)

    Cuenca Grau, Bernardo; Halaschek-Wiener, Christian; Kazakov, Yevgeny

    The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning—that is, reasoning that reuses information obtained from previous versions of an ontology—based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.
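
    The reuse of information between ontology versions can be illustrated with a deliberately simplified cache-per-module sketch: editing an axiom only triggers recomputation of the module that contains it. The module contents and the stand-in classify function below are invented; a real reasoner would compute a subsumption hierarchy at that point.

```python
# Simplified illustration of module-based incremental reasoning: results
# are cached per module and recomputed only when that module's axioms change.
modules = {
    "animals": {"Cat SubClassOf Mammal", "Mammal SubClassOf Animal"},
    "vehicles": {"Car SubClassOf Vehicle"},
}
cache = {}       # module name -> (axiom snapshot, classification result)
recomputed = []  # log of the modules actually re-classified

def classify(name, axioms):
    recomputed.append(name)   # record that real work happened here
    return frozenset(axioms)  # stand-in for a computed class hierarchy

def classify_all():
    for name, axioms in modules.items():
        snapshot = frozenset(axioms)
        if cache.get(name, (None, None))[0] != snapshot:
            cache[name] = (snapshot, classify(name, axioms))

classify_all()                                # initial run: both modules
modules["vehicles"].add("Truck SubClassOf Vehicle")
classify_all()                                # incremental run: one module
print(recomputed)  # -> ['animals', 'vehicles', 'vehicles']
```

    The second pass re-classifies only the changed module, which is the source of the reported speedup over full re-classification.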

  7. Revisiting Nursing Research in Nigeria

    African Journals Online (AJOL)

    2016-08-18

    ... health care research, it is therefore pertinent to revisit the state of nursing research in the country. ... platforms, updated libraries with electronic resources ... benchmarks for developing countries of 26% [17], the amount is still ...

  8. Revisiting control establishments for emerging energy hubs

    Science.gov (United States)

    Nasirian, Vahidreza

    Emerging small-scale energy systems, i.e., microgrids and smartgrids, rely on centralized controllers for voltage regulation, load sharing, and economic dispatch. However, the central controller is a single point of failure in such a design, as failure of either the controller or the attached communication links can render the entire system inoperable. This work seeks alternative distributed control structures to improve system reliability and scalability. A cooperative distributed controller is proposed that uses a noise-resilient voltage estimator and handles global voltage regulation and load sharing across a DC microgrid. Distributed adaptive droop control is also investigated as an alternative solution. A droop-free distributed control is offered to handle voltage/frequency regulation and load sharing in AC systems. This solution does not require frequency measurement and thus features fast frequency regulation. Distributed economic dispatch is also studied, where a distributed protocol is designed that controls generation units to merge their incremental costs into a consensus and thus push the entire system to generate with the minimum cost. Experimental verifications and Hardware-in-the-Loop (HIL) simulations are used to study the efficacy of the proposed control protocols.
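
    The last step, merging incremental costs into a consensus, can be sketched under the standard quadratic-cost assumption C_i(P_i) = a_i P_i^2 + b_i P_i, whose incremental cost is 2 a_i P_i + b_i: units average their incremental-cost estimates with neighbors while a feedback term enforces the supply-demand balance. The cost coefficients, communication graph, demand, and gain below are invented.

```python
# Sketch of incremental-cost consensus for distributed economic dispatch:
# at the optimum all units share one incremental cost and supply meets demand.
import numpy as np

a = np.array([0.10, 0.08, 0.12])   # invented quadratic cost coefficients
b = np.array([2.0, 3.0, 2.5])      # invented linear cost coefficients
demand = 150.0
A = np.array([[0, 1, 1],           # all-to-all communication graph
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

lam = b.copy()                     # initial incremental-cost estimates
eps = 0.002                        # feedback gain on the power mismatch
for _ in range(5000):
    lam = (A @ lam + lam) / (A.sum(axis=1) + 1)  # neighborhood averaging
    P = (lam - b) / (2 * a)                      # local cost-optimal output
    lam = lam + eps * (demand - P.sum())         # correct supply-demand gap

P = (lam - b) / (2 * a)
# all incremental costs now agree and total generation meets the demand
```

    Because each unit only talks to its graph neighbors, no central dispatcher is needed, which is the reliability argument made in the abstract.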

  9. Design of methodology for incremental compiler construction

    Directory of Open Access Journals (Sweden)

    Pavel Haluza

    2011-01-01

    The paper deals with possibilities of incremental compiler construction. It presents compiler construction possibilities both for languages with a fixed set of lexical units and for languages with a variable set of lexical units. The methodology for incremental compiler construction is based on known algorithms for standard compiler construction and is derived for both groups of languages. The group of languages with a fixed set of lexical units comprises languages in which each lexical unit has a constant meaning, e.g., common programming languages. For this group, the paper addresses the problem of incremental semantic analysis, which is based on incremental parsing. In the group of languages with a variable set of lexical units (e.g., the professional typographic system TeX), it is possible to change the meaning of each character of the input file arbitrarily at any time during processing. The change takes effect immediately, and its validity may be limited in some way or extend to the end of the input. For this group, the paper addresses the case when macros temporarily change the category of arbitrary characters.

  10. Existing School Buildings: Incremental Seismic Retrofit Opportunities.

    Science.gov (United States)

    Federal Emergency Management Agency, Washington, DC.

    The intent of this document is to provide technical guidance to school district facility managers for linking specific incremental seismic retrofit opportunities to specific maintenance and capital improvement projects. The linkages are based on logical affinities, such as technical fit, location of the work within the building, cost saving…

  12. The Cognitive Underpinnings of Incremental Rehearsal

    Science.gov (United States)

    Varma, Sashank; Schleisman, Katrina B.

    2014-01-01

    Incremental rehearsal (IR) is a flashcard technique that has been developed and evaluated by school psychologists. We discuss potential learning and memory effects from cognitive psychology that may explain the observed superiority of IR over other flashcard techniques. First, we propose that IR is a form of "spaced practice" that…

  13. Life quality index revisited

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2004-01-01

    The derivation of the life quality index (LQI) is revisited for a revision. This revision takes into account the unpaid but necessary work time needed to stay alive in clean and healthy conditions, to be fit for effective wealth-producing work and enjoyable free time. Dimension analysis consistency problems with the standard power function expression of the LQI are pointed out. It is emphasized that the combination coefficient in the convex differential combination between the relative differential of the gross domestic product per capita and the relative differential of the expected life at birth should not vary between countries. Finally the distributional assumptions are relaxed as compared to the assumptions made in an earlier work by the author. These assumptions concern the calculation of the life expectancy change due to the removal of an accident source. Moreover a simple public...

  14. Quantum duel revisited

    International Nuclear Information System (INIS)

    Schmidt, Alexandre G M; Paiva, Milena M

    2012-01-01

    We revisit the quantum two-person duel. In this problem, Alice and Bob each possess a spin-1/2 particle which models the dead and alive states of each player. We review the Abbott and Flitney result, now considering non-zero α1 and α2 in order to decide whether it is better for Alice to shoot the second time or not, and we also consider a duel where players do not necessarily start alive. This simple assumption allows us to explore several interesting special cases, namely how a dead player can win the duel shooting just once, how Bob can revive Alice after one shot, and the better strategy for Alice, being either alive or in a superposition of alive and dead states, when fighting a dead opponent.

  15. Satellite failures revisited

    Science.gov (United States)

    Balcerak, Ernie

    2012-12-01

    In January 1994, the two geostationary satellites known as Anik-E1 and Anik-E2, operated by Telesat Canada, failed one after the other within 9 hours, leaving many northern Canadian communities without television and data services. The outage, which shut down much of the country's broadcast television for hours and cost Telesat Canada more than $15 million, generated significant media attention. Lam et al. used publicly available records to revisit the event; they looked at failure details, media coverage, recovery effort, and cost. They also used satellite and ground data to determine the precise causes of those satellite failures. The researchers traced the entire space weather event from conditions on the Sun through the interplanetary medium to the particle environment in geostationary orbit.

  16. Logistics Innovation Process Revisited

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Su, Shong-Iee Ivan; Yang, Su-Lan

    2011-01-01

    Purpose – The purpose of this paper is to learn more about logistics innovation processes and their implications for the focal organization as well as the supply chain, especially suppliers. Design/methodology/approach – The empirical basis of the study is a longitudinal action research project that was triggered by the practical needs of new ways of handling material flows at a hospital. This approach made it possible to revisit theory on the logistics innovation process. Findings – Apart from the tangible benefits reported to the case hospital, five findings can be extracted from this study: the logistics innovation process model may include not just customers but also suppliers; logistics innovation in buyer-supplier relations may serve as an alternative to outsourcing; logistics innovation processes are dynamic and may improve supplier partnerships; logistics innovations in the supply chain are as dependent...

  17. Klein's double discontinuity revisited

    DEFF Research Database (Denmark)

    Winsløw, Carl; Grønbæk, Niels

    2014-01-01

    Much effort and research has been invested into understanding and bridging the ‘gaps’ which many students experience in terms of contents and expectations as they begin university studies with a heavy component of mathematics, typically in the form of calculus courses. We have several studies of bridging measures, success rates and many other aspects of these “entrance transition” problems. In this paper, we consider the inverse transition, experienced by university students as they revisit core parts of high school mathematics (in particular, calculus) after completing the undergraduate mathematics courses which are mandatory to become a high school teacher of mathematics. To what extent does the “advanced” experience enable them to approach high school calculus in a deeper and more autonomous way? To what extent can “capstone” courses support such an approach? How could it be hindered...

  18. Reframing in dentistry: Revisited

    Directory of Open Access Journals (Sweden)

    Sivakumar Nuvvula

    2013-01-01

    The successful practice of dentistry involves a good combination of technical skills and soft skills. Soft skills, or communication skills, are not taught extensively in dental schools, and they can be challenging to learn and to apply when treating dental patients. Guiding the child's behavior in the dental operatory is one of the preliminary steps to be taken by the pediatric dentist, and one who can successfully modify behavior can pave the way for a lifetime of comprehensive oral care. This article is an attempt to revisit a simple behavior guidance technique, reframing, and to explain the possible psychological perspectives behind it for better use in clinical practice.

  19. Statistics of wind direction and its increments

    International Nuclear Information System (INIS)

    Doorn, Eric van; Dhruva, Brindesh; Sreenivasan, Katepalli R.; Cassella, Victor

    2000-01-01

    We study some elementary statistics of wind direction fluctuations in the atmosphere for a wide range of time scales (10⁻⁴ s to 1 h), and in both vertical and horizontal planes. In the plane parallel to the ground surface, the direction time series consists of two parts: a constant drift due to large weather systems moving with the mean wind speed, and fluctuations about this drift. The statistics of the direction fluctuations show a rough similarity to Brownian motion but depend, in detail, on the wind speed. This dependence manifests itself quite clearly in the statistics of wind-direction increments over various intervals of time. These increments are intermittent during periods of low wind speeds but Gaussian-like during periods of high wind speeds. (c) 2000 American Institute of Physics
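
    The increment statistics mentioned above are easy to reproduce on synthetic data. The sketch below (an invented Brownian-motion-like series, not atmospheric measurements) computes the standard deviation and excess kurtosis of direction increments over several lags; for this Gaussian toy signal the excess kurtosis stays near zero at every lag, whereas the intermittent low-wind-speed regime described in the abstract would show heavy-tailed increments at short lags.

```python
# Increment statistics of a Brownian-motion-like "direction" series:
# std grows like sqrt(lag) and excess kurtosis stays near 0 (Gaussian).
import numpy as np

rng = np.random.default_rng(0)
theta = np.cumsum(rng.normal(0.0, 1.0, 100_000))  # synthetic direction signal

def increment_stats(x, lag):
    d = x[lag:] - x[:-lag]                        # increments over 'lag' steps
    excess_kurtosis = np.mean((d - d.mean()) ** 4) / d.var() ** 2 - 3.0
    return d.std(), excess_kurtosis

for lag in (1, 10, 100):
    s, k = increment_stats(theta, lag)
    print(lag, round(s, 2), round(k, 2))
```

    Replacing the Gaussian steps with a heavy-tailed distribution would reproduce the intermittent, non-Gaussian increments reported for low wind speeds.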

  20. Evolving effective incremental SAT solvers with GP

    OpenAIRE

    Bader, Mohamed; Poli, R.

    2008-01-01

    Hyper-heuristics can simply be defined as heuristics that choose other heuristics: a way of combining existing heuristics to generate new ones. We use a hyper-heuristic framework for evolving effective incremental (Inc*) solvers for SAT. We test the evolved heuristics (IncHH) against other known local search heuristics on a variety of benchmark SAT problems.

  1. Teraflop-scale Incremental Machine Learning

    OpenAIRE

    Özkural, Eray

    2011-01-01

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-u...

  2. Shakedown analysis by finite element incremental procedures

    International Nuclear Information System (INIS)

    Borkowski, A.; Kleiber, M.

    1979-01-01

    It is a common occurrence in many practical problems that external loads are variable and the exact time-dependent history of loading is unknown. Instead, the load is characterized by a given loading domain: a convex polyhedron in the n-dimensional space of load parameters. The problem is then to check whether a structure shakes down, i.e. responds elastically after a few elasto-plastic cycles, to a variable loading as defined above. Such a check can be performed by an incremental procedure: one reproduces incrementally a simple cyclic process which consists of proportional load paths that connect the origin of the load space with the corners of the loading domain. It was proved that if a structure shakes down under such a loading history, then it is able to adapt itself to an arbitrary load path contained in the loading domain. The main advantage of this approach is the possibility of using existing incremental finite-element computer codes.
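
    A deliberately simplified version of such an incremental check can be written in one dimension: two elastic-perfectly-plastic bars in parallel are cycled through a proportional load path to the corners of a one-parameter load domain, and shakedown is detected when the plastic strains stop changing between cycles. All material data and loads are invented, and the return mapping is the textbook one-dimensional form, not a production finite-element code.

```python
# 1D sketch of an incremental shakedown check: cycle the load between the
# corners of its domain and watch whether plastic flow dies out (shakedown).
E = [1000.0, 1000.0]   # invented stiffnesses of two parallel bars
sy = [10.0, 15.0]      # invented yield stresses
eps_p = [0.0, 0.0]     # plastic strains, updated in place

def solve(P):
    """Equilibrate the two-bar system under load P with return mapping."""
    for _ in range(60):                      # fixed-point plasticity loop
        u = (P + sum(e * ep for e, ep in zip(E, eps_p))) / sum(E)
        yielded = False
        for i in range(2):
            s = E[i] * (u - eps_p[i])        # trial stress in bar i
            if abs(s) > sy[i] + 1e-9:        # plastic correction
                eps_p[i] += (abs(s) - sy[i]) / E[i] * (1.0 if s > 0 else -1.0)
                yielded = True
        if not yielded:
            return

history = []
for _ in range(6):                           # a few load cycles
    for P in (0.0, 22.0, 0.0):               # origin -> corner -> origin
        solve(P)
    history.append(tuple(eps_p))

# shakedown: after the first cycle the plastic strains no longer change
print(history[-1] == history[-2])  # -> True
```

    The residual stresses left by the first plastic excursion keep both bars elastic on every later cycle, which is exactly the adaptation the incremental procedure is checking for.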

  3. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    CERN Document Server

    Lu Cun Heng

    1999-01-01

    A calculation formula and surveying results are presented, based on the superposition principle for gamma rays and the features of a hexagonal surface source, for the case where a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground; and, based on the reciprocity principle for gamma rays, for the case where a single point source replaces the unlimited surface source to calibrate the aerial surveying instrument in the air. Meanwhile, through theoretical analysis, the receiving rate of the crystal bottom and side surfaces when the aerial surveying instrument receives gamma rays is calculated. A mathematical expression for the decay of gamma rays with height according to the Jinge function regularity is obtained. From this regularity, the coefficient of gamma-ray absorption by air and the detection efficiency coefficient of the crystal are calculated based on the ground and air measured values of the bottom-surface receiving cou...

  4. CAN MARKETING SUPPORT THE IMPLEMENTATION OF EFFECTIVE EGOVERNMENT? ANALYSIS OF THE SINGLE POINT OF ACCESS PORTAL FOR ROMANIAN ELECTRONIC PUBLIC SERVICES

    Directory of Open Access Journals (Sweden)

    Velicu Bogdan Calin

    2011-12-01

    The advances in technology hold great potential for helping the Romanian government respond to its challenges, namely better service delivery, better procurement, more efficient working, and better communication with citizens and businesses. While the European Commission develops the main strategies on eGovernment, every member state has the freedom to identify its own necessities and decide according to its specific social, administrative and economic context. Designing, cost setting, choosing the best supply channels and communicating with the actors involved are all marketing instruments which, if used appropriately, can ensure modern and efficient public services. This paper presents an analysis of the degree of development of the public services available at the www.e-guvernare.ro portal, the single point of access for Romanian electronic public services.

  5. A national assessment of underground natural gas storage: identifying wells with designs likely vulnerable to a single-point-of-failure

    Science.gov (United States)

    Michanowicz, Drew R.; Buonocore, Jonathan J.; Rowland, Sebastian T.; Konschnik, Katherine E.; Goho, Shaun A.; Bernstein, Aaron S.

    2017-05-01

    The leak of processed natural gas (PNG) from October 2015 to February 2016 from the Aliso Canyon storage facility, near Los Angeles, California, was the largest single accidental release of greenhouse gases in US history. The Interagency Task Force on Natural Gas Storage Safety and California regulators recently recommended operators phase out single-point-of-failure (SPF) well designs. Here, we develop a national dataset of UGS well activity in the continental US to assess regulatory data availability and uncertainty, and to assess the prevalence of certain well design deficiencies including single-point-of-failure designs. We identified 14 138 active UGS wells associated with 317 active UGS facilities in 29 states using regulatory and company data. State-level wellbore datasets contained numerous reporting inconsistencies that limited data concatenation. We identified 2715 active UGS wells across 160 facilities that, like the failed well at Aliso Canyon, predated the storage facility, and therefore were not originally designed for gas storage. The majority (88%) of these repurposed wells are located in OH, MI, PA, NY, and WV. Repurposed wells have a median age of 74 years, and the 2694 repurposed wells constructed prior to 1979 are particularly likely to exhibit design-related deficiencies. An estimated 210 active repurposed wells were constructed before 1917—before cement zonal isolation methods were utilized. These wells are located in OH, PA, NY, and WV and represent the highest priority related to potential design deficiencies that could lead to containment loss. This national baseline assessment identifies regulatory data uncertainties, highlights a potentially widespread vulnerability of the natural gas supply chain, and can aid in prioritization and oversight for high-risk wells and facilities.

  6. Simulation and comparison of perturb and observe and incremental ...

    Indian Academy of Sciences (India)

    Perturb and Observe (P & O) algorithm and Incremental conductance algorithm. ... Keywords. Solar array; insolation; MPPT; modelling; P & O; incremental conductance. 1. .... voltage level. It is also ..... Int. J. Advances in Eng. Technol. 133–148.
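The incremental-conductance condition behind the second algorithm follows from dP/dV = d(VI)/dV = I + V·dI/dV = 0 at the maximum power point, i.e. dI/dV = −I/V there. A minimal sketch of one tracking step (not the paper's implementation; the step size and the linear current model in the usage note are invented for illustration):

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, dv_step=0.1):
    """One incremental-conductance MPPT update of the voltage reference.

    At the maximum power point dI/dV = -I/V; to the left of it
    dI/dV > -I/V (so raise V), to the right dI/dV < -I/V (so lower V).
    The step size is an arbitrary illustrative choice.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        # Voltage unchanged: a current change means irradiance shifted.
        if di > 0:
            v_ref += dv_step
        elif di < 0:
            v_ref -= dv_step
    else:
        g = di / dv  # incremental conductance dI/dV
        if abs(g + i / v) < 1e-6:
            pass  # at the MPP: hold the operating point
        elif g > -i / v:
            v_ref += dv_step
        else:
            v_ref -= dv_step
    return v_ref
```

Driving this with a toy array whose current is i = 10 − 0.5·V (an invented stand-in for a real PV curve, with the MPP at V = 10) walks the reference voltage toward the MPP and holds there.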

  7. The critical catastrophe revisited

    International Nuclear Information System (INIS)

    De Mulatier, Clélia; Rosso, Alberto; Dumonteil, Eric; Zoia, Andrea

    2015-01-01

    The neutron population in a prototype model of a nuclear reactor can be described in terms of a collection of particles confined in a box and undergoing three key random mechanisms: diffusion, reproduction due to fissions, and death due to absorption events. When the reactor is operated at the critical point, and fissions are exactly compensated by absorptions, the whole neutron population might in principle go to extinction because of the wild fluctuations induced by births and deaths. This phenomenon, which has been named the critical catastrophe, is nonetheless never observed in practice: feedback mechanisms acting on the total population, such as human intervention, have a stabilizing effect. In this work, we revisit the critical catastrophe by investigating the spatial behaviour of the fluctuations in a confined geometry. When the system is free to evolve, the neutrons may display a wild patchiness (clustering). On the contrary, imposing a control on the total population also acts against the local fluctuations, and may thus inhibit the spatial clustering. The effectiveness of population control in quenching spatial fluctuations will be shown to depend on the competition between the mixing time of the neutrons (i.e. the average time taken for a particle to explore the finite viable space) and the extinction time

  8. Magnetic moments revisited

    International Nuclear Information System (INIS)

    Towner, I.S.; Khanna, F.C.

    1984-01-01

    Consideration of core polarization, isobar currents and meson-exchange processes gives a satisfactory understanding of the ground-state magnetic moments in closed-shell-plus (or minus)-one nuclei, A = 3, 15, 17, 39 and 41. Ever since the earliest days of the nuclear shell model the understanding of magnetic moments of nuclear states of supposedly simple configurations, such as doubly closed LS shells ±1 nucleon, has been a challenge for theorists. The experimental moments, which in most cases are known with extraordinary precision, show a small yet significant departure from the single-particle Schmidt values. The departure, however, is difficult to evaluate precisely since, as will be seen, it results from a sensitive cancellation between several competing corrections each of which can be as large as the observed discrepancy. This, then, is the continuing fascination of magnetic moments. In this contribution, we revisit the subject principally to identify the role played by isobar currents, which are of much concern at this conference. But in so doing we warn quite strongly of the dangers of considering just isobar currents in isolation; equal consideration must be given to competing processes which in this context are the mundane nuclear structure effects, such as core polarization, and the more popular meson-exchange currents

  9. Lorentz violation naturalness revisited

    Energy Technology Data Exchange (ETDEWEB)

    Belenchia, Alessio; Gambassi, Andrea; Liberati, Stefano [SISSA - International School for Advanced Studies, via Bonomea 265, 34136 Trieste (Italy); INFN, Sezione di Trieste, via Valerio 2, 34127 Trieste (Italy)

    2016-06-08

    We revisit here the naturalness problem of Lorentz invariance violations on a simple toy model of a scalar field coupled to a fermion field via a Yukawa interaction. We first review some well-known results concerning the low-energy percolation of Lorentz violation from high energies, presenting some details of the analysis not explicitly discussed in the literature and discussing some previously unnoticed subtleties. We then show how a separation between the scale of validity of the effective field theory and that of Lorentz invariance violations can hinder this low-energy percolation. While such a protection mechanism was previously considered in the literature, we provide here a simple illustration of how it works and of its general features. Finally, we consider a case in which dissipation is present, showing that the dissipative behaviour does not percolate generically to lower mass dimension operators, whereas dispersion does. Moreover, we show that a scale separation can protect from unsuppressed low-energy percolation also in this case.

  10. 48 CFR 3432.771 - Provision for incremental funding.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Provision for incremental funding. 3432.771 Section 3432.771 Federal Acquisition Regulations System DEPARTMENT OF EDUCATION..., Incremental Funding, in a solicitation if a cost-reimbursement contract using incremental funding is...

  11. Enabling Incremental Query Re-Optimization.

    Science.gov (United States)

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
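The core idea of incremental re-optimization can be illustrated with a toy Selinger-style dynamic program over join orders, in which only memo entries touching relations whose cost estimates changed are recomputed. This is a hedged sketch, not the authors' recursive-datalog implementation: the cost model (sum of subplan costs plus 1 per join) and the dirty-set invalidation rule are invented for illustration.

```python
from itertools import combinations

def plan_costs(scan, memo=None, dirty=None):
    """Toy Selinger-style join-order DP with incremental reuse.

    `scan` maps relation name -> scan cost. On re-optimization, pass the
    previous `memo` and the set of relations whose costs changed (`dirty`):
    entries touching a dirty relation are recomputed, the rest are reused
    (assumes non-dirty scan costs are unchanged). Cost model (invented):
    joining two subplans costs the sum of their costs plus 1.
    """
    rels = sorted(scan)
    memo = {} if memo is None else {s: c for s, c in memo.items()
                                    if not (dirty and s & dirty)}
    for r in rels:
        memo.setdefault(frozenset([r]), scan[r])
    for size in range(2, len(rels) + 1):
        for combo in combinations(rels, size):
            s = frozenset(combo)
            if s in memo:
                continue  # reused from the previous optimization pass
            memo[s] = min(memo[frozenset(left)] + memo[s - frozenset(left)] + 1
                          for k in range(1, size)
                          for left in combinations(combo, k))
    return memo
```

Re-optimizing after one relation's cost estimate changes then reuses every memoized subplan that does not involve that relation, while producing the same result as planning from scratch.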

  12. Incremental learning for automated knowledge capture

    Energy Technology Data Exchange (ETDEWEB)

    Benz, Zachary O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Basilico, Justin Derrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dixon, Kevin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wendt, Jeremy Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-12-01

    People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, requires models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effect. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and only secondarily, if at all, concerned with issues such as speed, memory use, or ability to be incrementally updated. Thus, when new data arrives, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill suited for use in dynamic, time-critical, high-consequence decision-making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.

  13. Two models of minimalist, incremental syntactic analysis.

    Science.gov (United States)

    Stabler, Edward P

    2013-07-01

    Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.

  14. Incremental Nonnegative Matrix Factorization for Face Recognition

    Directory of Open Access Journals (Sweden)

    Wen-Sheng Chen

    2008-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is expensive for large matrix decomposition. The other is that it must conduct repetitive learning, when the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has several desirable properties, such as low computational complexity and a sparse coefficient matrix; moreover, the coefficient column vectors of different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
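For reference, the non-incremental baseline that INMF improves on is the classical multiplicative-update NMF of Lee and Seung. A minimal sketch of that baseline only (the block strategy and incremental constraints of the proposed INMF are not reproduced here; the rank, iteration count, and 1e-9 guard are arbitrary illustrative choices):

```python
import numpy as np

def nmf(V, rank, iters=200, seed=0):
    """Classical multiplicative-update NMF (Lee & Seung): V ~ W @ H.

    Updates keep W and H nonnegative by construction and monotonically
    decrease the Frobenius reconstruction error.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-4   # basis (e.g., local face features)
    H = rng.random((rank, n)) + 1e-4   # coefficients
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)  # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)  # update basis
    return W, H
```

On a nonnegative matrix of exactly the chosen rank, a couple of hundred updates typically bring the reconstruction close to V while both factors stay nonnegative.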

  15. [Incremental cost effectiveness of multifocal cataract surgery].

    Science.gov (United States)

    Pagel, N; Dick, H B; Krummenauer, F

    2007-02-01

    Supplementation of cataract patients with multifocal intraocular lenses involves an additional financial investment when compared to the corresponding monofocal supplementation, which usually is not funded by German health care insurers. In the context of recent resource allocation discussions, however, the cost effectiveness of multifocal cataract surgery could become an important rationale. Therefore an evidence-based estimation of its cost effectiveness was carried out. Three independent meta-analyses were implemented to estimate the gain in uncorrected near visual acuity and best corrected visual acuity (vision lines) as well as the predictability (fraction of patients without need for reading aids) of multifocal supplementation. Study reports published between 1995 and 2004 (English or German language) were screened for appropriate key words. Meta effects in visual gain and predictability were estimated by means and standard deviations of the reported effect measures. Cost data were estimated from German DRG rates and individual lens costs; the cost effectiveness of multifocal cataract surgery was then computed in terms of its marginal cost effectiveness ratio (MCER) for each clinical benefit endpoint; the incremental costs of multifocal versus monofocal cataract surgery were further estimated by means of their respective incremental cost effectiveness ratio (ICER). An independent meta-analysis estimated the complication profiles to be expected after monofocal and multifocal cataract surgery in order to evaluate expectable complication-associated additional costs of both procedures; the marginal and incremental cost effectiveness estimates were adjusted accordingly. A sensitivity analysis comprised cost variations of ± 10 % and utility variations alongside the meta effect estimate's 95 % confidence intervals. Total direct costs from the health care insurer's perspective were estimated at 3363 euro, associated with a visual meta benefit in best corrected visual
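The ICER mentioned above reduces to a simple formula: the extra cost of the new intervention divided by its extra clinical benefit relative to the comparator. A minimal sketch (the figures in the example are invented; the abstract's own cost and visual-acuity estimates are not reproduced here):

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio:
    extra cost per extra unit of clinical benefit."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical example: multifocal vs. monofocal surgery,
# benefit measured in vision lines gained.
example = icer(3400.0, 2500.0, 3.0, 1.0)  # 450.0 euro per extra vision line
```

The same function computes a marginal ratio (MCER) when the comparator is "no intervention", i.e. `cost_ref = effect_ref = 0`.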

  16. Leadership and Management Theories Revisited

    DEFF Research Database (Denmark)

    Madsen, Mona Toft

    2001-01-01

    The goal of the paper is to revisit and analyze key contributions to the understanding of leadership and management. As a part of the discussion a role perspective that allows for additional and/or integrated leader dimensions, including a change-centered dimension, will be outlined. Seemingly, a major...

  17. Revisiting Inter-Genre Similarity

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Gouyon, Fabien

    2013-01-01

    We revisit the idea of ``inter-genre similarity'' (IGS) for machine learning in general, and music genre recognition in particular. We show analytically that the probability of error for IGS is higher than naive Bayes classification with zero-one loss (NB). We show empirically that IGS does...... not perform well, even for data that satisfies all its assumptions....

  18. 'Counterfeit deviance' revisited.

    Science.gov (United States)

    Griffiths, Dorothy; Hingsburger, Dave; Hoath, Jordan; Ioannou, Stephanie

    2013-09-01

    The field has seen a renewed interest in exploring the theory of 'counterfeit deviance' for persons with intellectual disability who sexually offend. The term was first presented in 1991 by Hingsburger, Griffiths and Quinsey as a means to differentiate in clinical assessment a subgroup of persons with intellectual disability whose behaviours appeared like paraphilia but served a function that was not related to paraphilic sexual urges or fantasies. Case observations were put forward to provide differential diagnosis of paraphilia in persons with intellectual disabilities compared to those with counterfeit deviance. The brief paper was published in a journal that is no longer available and as such much of what is currently written on the topic is based on secondary sources. The current paper presents a theoretical piece to revisit the original counterfeit deviance theory to clarify the myths and misconceptions that have arisen and evaluate the theory based on additional research and clinical findings. The authors also propose areas where there may be a basis for expansion of the theory. The theory of counterfeit deviance still has relevance as a consideration for clinicians when assessing the nature of a sexual offence committed by a person with an intellectual disability. Clinical differentiation of paraphilia from counterfeit deviance provides a foundation for intervention that is designed to specifically treat the underlying factors that contributed to the offence for a given individual. Counterfeit deviance is a concept that continues to provide areas for consideration for clinicians regarding the assessment and treatment of an individual with an intellectual disability who has sexually offended. It is not and never was an explanation for all sexually offending behavior among persons with intellectual disabilities. © 2013 John Wiley & Sons Ltd.

  19. Gaussian entanglement revisited

    Science.gov (United States)

    Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo

    2018-02-01

    We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.

  20. Izmit Foreshocks Revisited

    Science.gov (United States)

    Ellsworth, W. L.; Bulut, F.

    2016-12-01

    Much of what we know about the initiation of earthquakes comes from the temporal and spatial relationship of foreshocks to the initiation point of the mainshock. The 1999 Mw 7.6 Izmit, Turkey, earthquake was preceded by a 44-minute-long foreshock sequence. Bouchon et al. (Science, 2011) analyzed the foreshocks using a single seismic station, UCG, located to the north of the east-west fault, and concluded on the basis of waveform similarity that the foreshocks repeatedly re-ruptured the same fault patch, driven by slow slip at the base of the crust. We revisit the foreshock sequence using seismograms from 9 additional stations that recorded the four largest foreshocks (Mw 2.0 to 2.8) to better characterize spatial and temporal evolution of the foreshock sequence and their relationship to the mainshock hypocenter. Cross-correlation timing and hypocentroid location with hypoDD reveals a systematic west-to-east propagation of the four largest foreshocks toward the mainshock hypocenter. Foreshock rupture dimensions estimated using spectral ratios imply no major overlap for the first three foreshocks. The centroid of the 4th and largest foreshock continues the eastward migration, but lies within the circular source area of the 3rd. The 3rd, however, has a low stress drop and strong directivity to the west. The mainshock hypocenter locates on the eastern edge of foreshock 4. We also re-analyzed waveform similarity of all 18 foreshocks recorded at UCG by removing the common mode signal and clustering the residual seismogram using the correlation coefficient as the distance metric. The smaller foreshocks cluster with the larger events in time order, sometimes as foreshocks and more commonly as aftershocks. These observations show that the Izmit foreshock sequence is consistent with a stress-transfer driven cascade, moving systematically to the east along the fault, and that there is no observational requirement for creep as a driving mechanism.

  1. Sequence polymorphism in an insect RNA virus field population: A snapshot from a single point in space and time reveals stochastic differences among and within individual hosts

    Energy Technology Data Exchange (ETDEWEB)

    Stenger, Drake C., E-mail: drake.stenger@ars.usda.gov [USDA, Agricultural Research Service, San Joaquin Valley Agricultural Sciences Center, 9611 South Riverbend Ave., Parlier, CA 93648-9757 (United States); Krugner, Rodrigo [USDA, Agricultural Research Service, San Joaquin Valley Agricultural Sciences Center, 9611 South Riverbend Ave., Parlier, CA 93648-9757 (United States); Nouri, Shahideh; Ferriol, Inmaculada; Falk, Bryce W. [Department of Plant Pathology, University of California, Davis, CA 95616 (United States); Sisterson, Mark S. [USDA, Agricultural Research Service, San Joaquin Valley Agricultural Sciences Center, 9611 South Riverbend Ave., Parlier, CA 93648-9757 (United States)

    2016-11-15

    Population structure of Homalodisca coagulata Virus-1 (HoCV-1) among and within field-collected insects sampled from a single point in space and time was examined. Polymorphism in complete consensus sequences among single-insect isolates was dominated by synonymous substitutions. The mutant spectrum of the C2 helicase region within each single-insect isolate was unique and dominated by nonsynonymous singletons. Bootstrapping was used to correct the within-isolate nonsynonymous:synonymous arithmetic ratio (N:S) for RT-PCR error, yielding an N:S value ~one log-unit greater than that of consensus sequences. Probability of all possible single-base substitutions for the C2 region predicted N:S values within 95% confidence limits of the corrected within-isolate N:S when the only constraint imposed was viral polymerase error bias for transitions over transversions. These results indicate that bottlenecks coupled with strong negative/purifying selection drive consensus sequences toward neutral sequence space, and that most polymorphism within single-insect isolates is composed of newly-minted mutations sampled prior to selection. -- Highlights: •Sampling protocol minimized differential selection/history among isolates. •Polymorphism among consensus sequences dominated by negative/purifying selection. •Within-isolate N:S ratio corrected for RT-PCR error by bootstrapping. •Within-isolate mutant spectrum dominated by new mutations yet to undergo selection.
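The bootstrap correction described above rests on resampling mutation counts with replacement to put confidence limits on the nonsynonymous:synonymous (N:S) ratio. A generic percentile-bootstrap sketch of that machinery only (the paper's RT-PCR error model is not reproduced, and the counts in the usage example are hypothetical):

```python
import random

def bootstrap_ratio(n_nonsyn, n_syn, reps=2000, seed=1):
    """Percentile bootstrap of an N:S mutation-count ratio.

    Returns (median ratio, (2.5% limit, 97.5% limit)) over `reps`
    resamples of the observed mutations with replacement.
    """
    rng = random.Random(seed)
    muts = [1] * n_nonsyn + [0] * n_syn  # 1 = nonsynonymous, 0 = synonymous
    ratios = []
    for _ in range(reps):
        sample = [rng.choice(muts) for _ in muts]  # resample with replacement
        n = sum(sample)
        s = len(sample) - n
        if s:  # skip the rare resample with no synonymous sites
            ratios.append(n / s)
    ratios.sort()
    return ratios[len(ratios) // 2], (ratios[int(0.025 * len(ratios))],
                                      ratios[int(0.975 * len(ratios))])
```

For 30 nonsynonymous and 10 synonymous mutations the point ratio is 3; the bootstrap adds a confidence band around it, which is what allows a measured ratio to be compared against the one expected from polymerase error alone.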

  2. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    International Nuclear Information System (INIS)

    Lu Cunheng

    1999-01-01

    A calculation formula and survey results are presented for calibrating an aerial survey instrument on the ground using a limited surface source in place of an unlimited surface source, based on the superposition principle of gamma rays and the geometry of a hexagonal surface source; a corresponding method, based on the reciprocity principle of gamma rays, is given for calibrating the instrument in the air using a single point source in place of an unlimited surface source. Through theoretical analysis, the receiving rates of the crystal bottom and side surfaces are calculated for the gamma rays detected by the aerial survey instrument. A mathematical expression is obtained for the decay of the gamma rays with height following the Jinge function regularity. From this regularity, the absorption coefficient of air for gamma rays and the detection efficiency coefficient of the crystal are calculated from ground and airborne measurements of the bottom-surface count rate (derived from the total count rate of the bottom and side surfaces). Finally, the measured values show that modelling the variation of the total gamma-ray exposure rate received by the bottom and side surfaces with this regularity is feasible over a certain range of altitudes

  3. Dynamic Topography Revisited

    Science.gov (United States)

    Moresi, Louis

    2015-04-01

    Dynamic topography is usually considered to be one of the trinity of contributing causes to the Earth's non-hydrostatic topography along with the long-term elastic strength of the lithosphere and isostatic responses to density anomalies within the lithosphere. Dynamic topography, thought of this way, is what is left over when other sources of support have been eliminated. An alternate and explicit definition of dynamic topography is that deflection of the surface which is attributable to creeping viscous flow. The problem with the first definition of dynamic topography is 1) that the lithosphere is almost certainly a visco-elastic / brittle layer with no absolute boundary between flowing and static regions, and 2) the lithosphere is a thermal / compositional boundary layer in which some buoyancy is attributable to immutable, intrinsic density variations and some is due to thermal anomalies which are coupled to the flow. In each case, it is difficult to draw a sharp line between each contribution to the overall topography. The second definition of dynamic topography does seem cleaner / more precise but it suffers from the problem that it is not measurable in practice. On the other hand, this approach has resulted in a rich literature concerning the analysis of large scale geoid and topography and the relation to buoyancy and mechanical properties of the Earth [e.g. refs 1,2,3]. In convection models with viscous, elastic, brittle rheology and compositional buoyancy, however, it is possible to examine how the surface topography (and geoid) are supported and how different ways of interpreting the "observable" fields introduce different biases. This is what we will do. References (a.k.a. homework) [1] Hager, B. H., R. W. Clayton, M. A. Richards, R. P. Comer, and A. M. Dziewonski (1985), Lower mantle heterogeneity, dynamic topography and the geoid, Nature, 313(6003), 541-545, doi:10.1038/313541a0. [2] Parsons, B., and S. Daly (1983), The

  4. DEEP WIDEBAND SINGLE POINTINGS AND MOSAICS IN RADIO INTERFEROMETRY: HOW ACCURATELY DO WE RECONSTRUCT INTENSITIES AND SPECTRAL INDICES OF FAINT SOURCES?

    Energy Technology Data Exchange (ETDEWEB)

    Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu [National Radio Astronomy Observatory, Socorro, NM-87801 (United States)

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy with which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.

  5. Long-Term and Short-Term Effects of Hemodialysis on Liver Function Evaluated Using the Galactose Single-Point Test

    Directory of Open Access Journals (Sweden)

    Yi-Chou Hou

    2014-01-01

    Full Text Available Aim. The galactose single-point (GSP) test assesses functioning liver mass by measuring the galactose concentration in the blood 1 hour after its administration. The purpose of this study was to investigate the impact of hemodialysis (HD) on short-term and long-term liver function by use of the GSP test. Methods. Seventy-four patients on maintenance HD (46 males and 28 females; 60.38 ± 11.86 years) with a mean time on HD of 60.77 ± 48.31 months were studied. The GSP values were compared in two groups: (1) before and after a single HD session, and (2) after one year of maintenance HD. Results. Among the 74 HD patients, only the post-HD Cr levels and years on dialysis were significantly correlated with GSP values (r=0.280, P<0.05 and r=-0.240, P<0.05, resp.). 14 of 74 patients were selected for GSP evaluation before and after a single HD session, and the hepatic clearance of galactose was similar (pre-HD 410 ± 254 μg/mL, post-HD 439 ± 298 μg/mL, P=0.49). GSP values decreased from 420.20 ± 175.26 μg/mL to 383.40 ± 153.97 μg/mL after 1 year of maintenance HD in another 15 patients (mean difference: 19.00 ± 37.66 μg/mL, P<0.05). Conclusions. Patients on maintenance HD for several years may experience improvement of their liver function. However, a single HD session does not affect liver function significantly as assessed by the GSP test. Since the metabolism of galactose is dependent on liver blood flow and hepatic functional mass, further studies are needed.

  6. Remembered Experiences and Revisit Intentions

    DEFF Research Database (Denmark)

    Barnes, Stuart; Mattsson, Jan; Sørensen, Flemming

    2016-01-01

    Tourism is an experience-intensive sector in which customers seek and pay for experiences above everything else. Remembering past tourism experiences is also crucial for an understanding of the present, including the predicted behaviours of visitors to tourist destinations. We adopt a longitudinal...... approach to memory data collection from psychological science, which has the potential to contribute to our understanding of tourist behaviour. In this study, we examine the impact of remembered tourist experiences in a safari park. In particular, using matched survey data collected longitudinally and PLS...... path modelling, we examine the impact of positive affect tourist experiences on the development of revisit intentions. We find that longer-term remembered experiences have the strongest impact on revisit intentions, more so than predicted or immediate memory after an event. We also find that remembered...

  7. Incremental fold tests of remagnetized carbonate rocks

    Science.gov (United States)

    Van Der Voo, R.; van der Pluijm, B.

    2017-12-01

    Many unmetamorphosed carbonates all over the world are demonstrably remagnetized, with the age of the secondary magnetizations typically close to that of the nearest orogeny in space and time. This observation did not become compelling until the mid-1980s, when the incremental fold test revealed the Appalachian carbonates to carry a syn-deformational remanence of likely Permian age (Scotese et al., 1982, Phys. Earth Planet. Int., v. 30, p. 385-395; Cederquist et al., 2006, Tectonophysics, v. 422, p. 41-54). Since that time, scores of Appalachian and Rocky Mountain carbonate rocks have added results to the growing database of paleopoles representing remagnetizations. Late Paleozoic remagnetizations form a cloud of results surrounding the reference poles of the Laurentian APWP. Remagnetizations in other locales and with inferred ages coeval with regional orogenies (e.g., Taconic, Sevier/Laramide, Variscan, Indosinian) are also ubiquitous. It would be highly desirable to transform this cornucopia into valuable anchor-points on the APWP. This may indeed become feasible, as will be explained next. Recent studies of faulted and folded carbonate-shale sequences have shown that this deformation enhances the illitization of smectite (Haines & van der Pluijm, 2008, Jour. Struct. Geol., v. 30, p. 525-538; Fitz-Diaz et al., 2014, International Geol. Review, v. 56, p. 734-755). 40Ar/39Ar dating of the authigenic illite (neutralizing any detrital illite contribution by taking the intercept of a mixing line) therefore yields the age of the deformation. We know that this date is also the age of the syndeformational remanence; thus we have the age of the corresponding paleopole. Results so far have been obtained for the Canadian and U.S. Rocky Mountains and for the Spanish Cantabrian carbonates (Tohver et al., 2008, Earth Planet. Sci. Lett., v. 274, p. 524-530) and make good sense in accord with geological knowledge. Incremental fold tests are the tools used for this

  8. Incremental passivity and output regulation for switched nonlinear systems

    Science.gov (United States)

    Pang, Hongbo; Zhao, Jun

    2017-10-01

    This paper studies incremental passivity and global output regulation for switched nonlinear systems, whose subsystems are not required to be incrementally passive. A concept of incremental passivity for switched systems is put forward. First, a switched system is rendered incrementally passive by the design of a state-dependent switching law. Second, the feedback incremental passification is achieved by the design of a state-dependent switching law and a set of state feedback controllers. Finally, we show that once the incremental passivity for switched nonlinear systems is assured, the output regulation problem is solved by the design of global nonlinear regulator controllers comprising two components: the steady-state control and the linear output feedback stabilising controllers, even though the problem for none of subsystems is solvable. Two examples are presented to illustrate the effectiveness of the proposed approach.
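As background, the non-switched notion that the paper's concept generalizes can be stated in a standard textbook form (our paraphrase; the paper's switched-system definition additionally involves a switching law): a system is incrementally passive if there exists a nonnegative storage function $V(x_1, x_2)$ defined on pairs of trajectories such that

```latex
\dot{V}(x_1, x_2) \;\le\; (u_1 - u_2)^{\mathsf{T}} (y_1 - y_2)
```

for any two input-state-output trajectories $(u_1, x_1, y_1)$ and $(u_2, x_2, y_2)$. Note that the paper deliberately does not require each subsystem to satisfy such an inequality; the state-dependent switching law compensates for the non-passive subsystems.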

  9. Revisiting Mutual Fund Performance Evaluation

    OpenAIRE

    Angelidis, Timotheos; Giamouridis, Daniel; Tessaromatis, Nikolaos

    2012-01-01

    Mutual fund manager excess performance should be measured relative to their self-reported benchmark rather than the return of a passive portfolio with the same risk characteristics. Ignoring the self-reported benchmark introduces biases in the measurement of stock selection and timing components of excess performance. We revisit baseline empirical evidence in mutual fund performance evaluation utilizing stock selection and timing measures that address these biases. We introduce a new factor e...

  10. Performance Evaluation of Incremental K-means Clustering Algorithm

    OpenAIRE

    Chakraborty, Sanjay; Nagwani, N. K.

    2014-01-01

    The incremental K-means clustering algorithm was proposed and analysed in [Chakraborty and Nagwani, 2011]. It is an innovative approach applicable in periodically incremental environments that must deal with bulk updates. In this paper the performance evaluation is done for this incremental K-means clustering algorithm using an air pollution database. This paper also describes the comparison of performance evaluations between the existing K-means clustering and i...
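As general background (an illustrative sketch of the technique, not the specific algorithm of Chakraborty and Nagwani), the core idea of incremental K-means is to fold newly arriving points into existing clusters with running-mean centroid updates, instead of re-clustering the whole database on every batch:

```python
import math

def incremental_kmeans_update(centroids, counts, new_points):
    """Assign each new point to its nearest centroid and update that
    centroid as a running mean, without revisiting the old data."""
    for p in new_points:
        # find the nearest centroid by Euclidean distance
        j = min(range(len(centroids)),
                key=lambda i: math.dist(p, centroids[i]))
        counts[j] += 1
        # running-mean update: c <- c + (p - c) / n
        centroids[j] = [c + (x - c) / counts[j]
                        for c, x in zip(centroids[j], p)]
    return centroids, counts
```

A periodic full re-clustering can still be scheduled when drift accumulates; the incremental step above only touches the one affected centroid per point.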

  11. Towards a multiconfigurational method of increments

    Science.gov (United States)

    Fertitta, E.; Koch, D.; Paulus, B.; Barcza, G.; Legeza, Ö.

    2018-06-01

    The method of increments (MoI) allows one to successfully calculate cohesive energies of bulk materials with high accuracy, but it encounters difficulties when calculating dissociation curves. The reason is that its standard formalism is based on a single Hartree-Fock (HF) configuration whose orbitals are localised and used for the many-body expansion. In situations where HF does not allow a size-consistent description of the dissociation, the MoI cannot be guaranteed to yield proper results either. Herein, we address the problem by employing a size-consistent multiconfigurational reference for the MoI formalism. This leads to a matrix equation where a coupling derived from the reference itself is employed. In principle, such an approach allows one to evaluate approximate values for the ground- as well as excited-state energies. While the latter are accurate close to the avoided crossing only, the ground-state results are very promising for the whole dissociation curve, as shown by the comparison with density matrix renormalisation group benchmarks. We tested this two-state constant-coupling MoI on beryllium rings of different sizes and studied the error introduced by the constant coupling.

  12. Natural Gas pipelines: economics of incremental capacity

    International Nuclear Information System (INIS)

    Kimber, M.

    2000-01-01

    A number of gas transmission pipeline systems in Australia exhibit capacity constraints, and yet there is little evidence of creative or innovative processes from either the service providers or the regulators which might provide a market-based response to these constraints. There is no provision in the Code in its current form to allow it to accommodate these processes. This aspect is one of many that require review to make the Code work. It is unlikely that the current members of the National Gas Pipeline Advisory Committee (NGPAC) or its advisers have sufficient understanding of the analysis of risk and the consequential commercial drivers to implement the necessary changes. As a result, the Code will increasingly lose touch with the commercial realities of the energy market and will continue to inhibit investment in new and expanded infrastructure where market risk is present. The recent report prepared for the Business Council of Australia indicates a need to re-vitalise the energy reform process. It is important for the Australian energy industry to provide leadership and advice to governments to continue the process of reform, and, in particular, to amend the Code to make it more relevant. These amendments must include a mechanism by which price signals can be generated to provide timely and effective information for existing service providers or new entrants to install incremental pipeline capacity

  13. Evolution of cooperation driven by incremental learning

    Science.gov (United States)

    Li, Pei; Duan, Haibin

    2015-02-01

    It has been shown that the details of microscopic rules in structured populations can have a crucial impact on the ultimate outcome in evolutionary games. Alternative formulations of strategies and of their revision processes, exploring how strategies are actually adopted and spread within the interaction network, therefore need to be studied. In the present work, we formulate the strategy update rule as an incremental learning process, wherein knowledge is refreshed according to one's own experience learned from the past (self-learning) and that gained from social interaction (social learning). More precisely, we propose a continuous version of strategy update rules, by introducing the willingness to cooperate W, to better capture the flexibility of decision-making behavior. Importantly, the newly gained knowledge, including self-learning and social learning, is weighted by the parameter ω, establishing a strategy update rule involving an innovative element. Moreover, we quantify the macroscopic features of the emerging patterns to inspect the underlying mechanisms of the evolutionary process using six cluster characteristics. In order to further support our results, we examine the time evolution course of these characteristics. Our results might provide insights for understanding cooperative behaviors and have several important implications for understanding how individuals adjust their strategies under real-life conditions.
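The abstract's update rule can be made concrete with a toy sketch. The willingness to cooperate W and the weight ω follow the abstract's terminology, but the particular formula below is our own illustrative assumption, not the authors' rule:

```python
def update_willingness(W, payoff_self, payoff_best_neighbor,
                       W_best_neighbor, omega, rate=0.1):
    """Illustrative continuous strategy update: blend self-learning
    (reinforce W when one's own payoff beats the best neighbor's)
    with social learning (drift toward the best neighbor's W),
    weighted by the parameter omega."""
    # self-learning: nudge W up or down depending on relative payoff
    self_term = W + rate * (1 if payoff_self >= payoff_best_neighbor else -1)
    # social learning: imitate the most successful neighbor's willingness
    social_term = W + rate * (W_best_neighbor - W)
    W_new = omega * self_term + (1 - omega) * social_term
    return min(1.0, max(0.0, W_new))  # keep W in [0, 1]
```

With omega = 1 the agent relies purely on its own experience; with omega = 0 it purely imitates, recovering a continuous analogue of classical imitation dynamics.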

  14. Power variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, J.M.; Podolskij, Mark

    2009-01-01

    We develop the asymptotic theory for the realised power variation of the processes X=•G, where G is a Gaussian process with stationary increments. More specifically, under some mild assumptions on the variance function of the increments of G and certain regularity conditions on the path of the process ... a chaos representation.
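As background (our notation, not necessarily the authors'), the realised power variation of a process X observed at times i/n, and the typical form its law of large numbers takes, can be sketched as

```latex
V(X;p)_n \;=\; \sum_{i=1}^{n} \bigl| X_{i/n} - X_{(i-1)/n} \bigr|^{p},
\qquad
\frac{1}{n} \sum_{i=1}^{n} \left| \frac{X_{i/n} - X_{(i-1)/n}}{\tau_n} \right|^{p}
\;\xrightarrow{\;\mathbb{P}\;}\; m_p \;=\; \mathbb{E}\bigl[\,|N(0,1)|^{p}\bigr],
```

where $\tau_n^2$ denotes the variance of an increment of $G$ over a step of length $1/n$; central limit theorems for such statistics are what the chaos-representation techniques mentioned in the abstract are used to prove.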

  15. Biomedical Titanium alloy prostheses manufacturing by means of Superplastic and Incremental Forming processes

    Directory of Open Access Journals (Sweden)

    Piccininni Antonio

    2016-01-01

    Full Text Available The present work collects some results of the three-year research program “BioForming”, funded by the Italian Ministry of Education (MIUR) and aimed at investigating the possibility of using flexible sheet forming processes, i.e. Super Plastic Forming (SPF) and Single Point Incremental Forming (SPIF), for the manufacturing of patient-oriented titanium prostheses. The prosthetic implants used as case studies were from the skull; in particular, two different Ti alloys and geometries were considered: one to be produced in Ti-Gr23 by SPF and one to be produced in Ti-Gr2 by SPIF. Numerical simulations implementing material behaviours evaluated by characterization tests were conducted in order to design both manufacturing processes. Subsequently, experimental tests were carried out implementing the numerical results in terms of: (i) a gas pressure profile able to determine a constant (and optimal) strain rate during the SPF process; (ii) a tool path able to avoid rupture during the SPIF process. Post-forming characteristics of the prostheses in terms of thickness distributions were measured and compared to data from simulations for validation purposes. A good correlation between numerical and experimental thickness distributions was obtained; in addition, the possibility of successfully adopting both the SPF and the SPIF processes for the manufacturing of prostheses was demonstrated.

  16. Feasibility of in vivo three-dimensional T2* mapping using dicarboxy-PROXYL and CW-EPR-based single-point imaging.

    Science.gov (United States)

    Kubota, Harue; Komarov, Denis A; Yasui, Hironobu; Matsumoto, Shingo; Inanami, Osamu; Kirilyuk, Igor A; Khramtsov, Valery V; Hirata, Hiroshi

    2017-06-01

    The aim of this study was to demonstrate the feasibility of in vivo three-dimensional (3D) relaxation time T2* mapping of a dicarboxy-PROXYL radical using continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Isotopically substituted dicarboxy-PROXYL radicals, 3,4-dicarboxy-2,2,5,5-tetra(2H3)methylpyrrolidin-(3,4-2H2)-(1-15N)-1-oxyl (2H,15N-DCP) and 3,4-dicarboxy-2,2,5,5-tetra(2H3)methylpyrrolidin-(3,4-2H2)-1-oxyl (2H-DCP), were used in the study. A clonogenic cell survival assay was performed with the 2H-DCP radical using squamous cell carcinoma (SCC VII) cells. The time courses of the EPR signal intensities of intravenously injected 2H,15N-DCP and 2H-DCP radicals were determined in tumor-bearing hind legs of mice (C3H/HeJ, male, n = 5). CW-EPR-based single-point imaging (SPI) was performed for 3D T2* mapping. The 2H-DCP radical did not exhibit cytotoxicity at concentrations below 10 mM. The in vivo half-life of 2H,15N-DCP in tumor tissues was 24.7 ± 2.9 min (mean ± standard deviation [SD], n = 5). The in vivo time course of the EPR signal intensity of the 2H,15N-DCP radical showed a plateau of 10.2 ± 1.2 min (mean ± SD) during which the EPR signal intensity remained at more than 90% of the maximum intensity. During the plateau, in vivo 3D T2* maps with 2H,15N-DCP were obtained from tumor-bearing hind legs, with a total acquisition time of 7.5 min. EPR signals of 2H,15N-DCP persisted long enough after bolus intravenous injection to conduct in vivo 3D T2* mapping with CW-EPR-based SPI.

  17. Cardiac EASE (Ensuring Access and Speedy Evaluation) – the impact of a single-point-of-entry multidisciplinary outpatient cardiology consultation program on wait times in Canada

    Science.gov (United States)

    Bungard, Tammy J; Smigorowsky, Marcie J; Lalonde, Lucille D; Hogan, Terry; Doliszny, Katharine M; Gebreyesus, Ghirmay; Garg, Sipi; Archer, Stephen L

    2009-01-01

    BACKGROUND: Universal access to health care is valued in Canada but increasing wait times for services (eg, cardiology consultation) raise safety questions. Observations suggest that deficiencies in the process of care contribute to wait times. Consequently, an outpatient clinic was designed for Ensuring Access and Speedy Evaluation (Cardiac EASE) in a university group practice, providing cardiac consultative services for northern Alberta. Cardiac EASE has two components: a single-point-of-entry intake service (prospective testing using physician-approved algorithms and previsit triage) and a multidisciplinary clinic (staffed by cardiologists, nurse practitioners and doctoral-trained pharmacists). OBJECTIVES: It was hypothesized that Cardiac EASE would reduce the time to initial consultation and a definitive diagnosis, and also increase the referral capacity. METHODS: The primary and secondary outcomes were time from referral to initial consultation, and time to achieve a definitive diagnosis and management plan, respectively. A conventionally managed historical control group (three-month pre-EASE period in 2003) was compared with the EASE group (2004 to 2006). The conventional referral mechanism continued concurrently with EASE. RESULTS: A comparison between pre-EASE (n=311) and EASE (n=3096) revealed no difference in the mean (± SD) age (60±16 years), sex (55% and 52% men, respectively) or reason for referral, including chest pain (31% and 40%, respectively) and arrhythmia (27% and 29%, respectively). Cardiac EASE reduced the time to initial cardiac consultation (from 71±45 days to 33±19 days) and time to a definitive diagnosis (from 120±86 days to 51±58 days) (P<0.0001). The annual number of new referrals increased from 1512 in 2002 to 2574 in 2006 due to growth in the Cardiac EASE clinic. The number of patients seen through the conventional referral mechanism and their wait times remained constant during the study period. CONCLUSIONS: Cardiac EASE reduced

  18. Successful Principalship in Norway: Sustainable Ethos and Incremental Changes?

    Science.gov (United States)

    Moller, Jorunn; Vedoy, Gunn; Presthus, Anne Marie; Skedsmo, Guri

    2009-01-01

    Purpose: The purpose of this paper is to explore whether and how success has been sustained over time in schools which were identified as being successful five years ago. Design/methodology/approach: Three schools were selected for a revisit, and the sample included two combined schools (grade 1-10) and one upper secondary school (grade 11-13). In…

  19. Escaping Depressions in LRTS Based on Incremental Refinement of Encoded Quad-Trees

    Directory of Open Access Journals (Sweden)

    Yue Hu

    2017-01-01

    Full Text Available In the context of robot navigation, game AI, and so on, real-time search is extensively used for motion planning. Though it satisfies the requirement of quick response to users’ commands and environmental changes, learning real-time search (LRTS) suffers from heuristic depressions, where agents behave irrationally. Several effective solutions have been introduced, such as state abstractions. This paper combines LRTS with an encoded quad-tree abstraction, which represents the search space at multiple resolutions. When exploring the environment, agents can locally repair the quad-tree models and incrementally refine their spatial cognition. By virtue of state aggregation and heuristic generalization, our EQ LRTS (encoded quad-tree based LRTS) possesses the ability to quickly escape from heuristic depressions with fewer state revisitations. Experiments and analysis show that (a) our encoding principle for quad-trees is a much more memory-efficient method than other data structures expressing quad-trees, (b) EQ LRTS differs in several characteristics from classical PR LRTS, which represents the space and refines paths hierarchically, and (c) EQ LRTS substantially reduces the planning amount and curtails heuristic updates compared with LRTS on uniform cells.
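The phrase "encoded quad-trees" suggests a compact integer encoding of quadrants. One common memory-efficient choice (shown purely as an illustration, not necessarily the authors' scheme) is the Morton/Z-order key, which interleaves the x and y bits of a grid cell so that each quadrant, and each of its children, is a single small integer rather than a pointer-based tree node:

```python
def morton_encode(x, y, depth):
    """Interleave the bits of cell coordinates (x, y) into one Z-order
    key, so a quad-tree cell at the given depth is one small integer."""
    key = 0
    for i in range(depth):
        key |= ((x >> i) & 1) << (2 * i)       # x bits at even positions
        key |= ((y >> i) & 1) << (2 * i + 1)   # y bits at odd positions
    return key

def morton_children(key):
    """Keys of the four child quadrants, one level deeper."""
    return [(key << 2) | q for q in range(4)]
```

Because parent/child relations reduce to bit shifts, locally splitting or merging cells during exploration only touches a handful of integers.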

  20. One Step at a Time: SBM as an Incremental Process.

    Science.gov (United States)

    Conrad, Mark

    1995-01-01

    Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…

  1. Defense Agencies Initiative Increment 2 (DAI Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Defense Agencies Initiative Increment 2 (DAI Inc 2). In an ADM dated September 23, 2013, the MDA established Increment 2 as a MAIS program to include budget formulation; grants financial... module.

  2. Incrementality in naming and reading complex numerals: Evidence from eyetracking

    NARCIS (Netherlands)

    Korvorst, M.H.W.; Roelofs, A.P.A.; Levelt, W.J.M.

    2006-01-01

    Individuals speak incrementally when they interleave planning and articulation. Eyetracking, along with the measurement of speech onset latencies, can be used to gain more insight into the degree of incrementality adopted by speakers. In the current article, two eyetracking experiments are reported

  3. Lifetime costs of lung transplantation : Estimation of incremental costs

    NARCIS (Netherlands)

    VanEnckevort, PJ; Koopmanschap, MA; Tenvergert, EM; VanderBij, W; Rutten, FFH

    1997-01-01

    Despite an expanding number of centres which provide lung transplantation, information about the incremental costs of lung transplantation is scarce. From 1991 until 1995, in The Netherlands a technology assessment was performed which provided information about the incremental costs of lung

  4. Finance for incremental housing: current status and prospects for expansion

    NARCIS (Netherlands)

    Ferguson, B.; Smets, P.G.S.M.

    2010-01-01

    Appropriate finance can greatly increase the speed and lower the cost of incremental housing - the process used by much of the low/moderate-income majority of most developing countries to acquire shelter. Informal finance continues to dominate the funding of incremental housing. However, new sources

  5. Validation of the periodicity of growth increment deposition in ...

    African Journals Online (AJOL)

    Validation of the periodicity of growth increment deposition in otoliths from the larval and early juvenile stages of two cyprinids from the Orange–Vaal river ... Linear regression models were fitted to the known age post-fertilisation and the age estimated using increment counts to test the correspondence between the two for ...

  6. 76 FR 73475 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2011-11-29

    [CIS No. 2481-09; Docket No. USCIS-2009-0022] RIN 1615-AB83. Immigration Benefits Business Transformation, Increment I; Correction. AGENCY: U.S. Citizenship and Immigration Services, DHS. ACTION: Final... ...Benefits Business Transformation, Increment I, 76 FR 53764 (Aug. 29, 2011). The final rule removed form...

  7. 76 FR 53763 - Immigration Benefits Business Transformation, Increment I

    Science.gov (United States)

    2011-08-29

    Immigration Benefits Business Transformation, Increment I; Final Rule. AGENCY: U.S. Citizenship and Immigration Services, DHS... USCIS is engaged in an enterprise-wide transformation effort to implement new business processes and to...

  8. The Time Course of Incremental Word Processing during Chinese Reading

    Science.gov (United States)

    Zhou, Junyi; Ma, Guojie; Li, Xingshan; Taft, Marcus

    2018-01-01

    In the current study, we report two eye movement experiments investigating how Chinese readers process incremental words during reading. These are words where some of the component characters constitute another word (an embedded word). In two experiments, eye movements were monitored while the participants read sentences with incremental words…

  9. On conditional scalar increment and joint velocity-scalar increment statistics

    International Nuclear Information System (INIS)

    Zhang Hengbin; Wang Danhong; Tong Chenning

    2004-01-01

    Conditional velocity and scalar increment statistics are usually studied in the context of Kolmogorov's refined similarity hypotheses and are considered universal (quasi-Gaussian) for inertial-range separations. In such analyses the locally averaged energy and scalar dissipation rates are used as conditioning variables. Recent studies have shown that certain local turbulence structures can be captured when the local scalar variance (φ²)_r and the local kinetic energy k_r are used as the conditioning variables. We study the conditional increments using these conditioning variables, which also provide the local turbulence scales. Experimental data obtained in the fully developed region of an axisymmetric turbulent jet are used to compute the statistics. The conditional scalar increment probability density function (PDF) conditional on (φ²)_r is found to be close to Gaussian for (φ²)_r small compared with its mean, and is sub-Gaussian and bimodal for large (φ²)_r, and therefore is not universal. We find that the different shapes of the conditional PDFs are related to the instantaneous degree of non-equilibrium (production larger than dissipation) of the local scalar. There is further evidence of this from the conditional PDF conditional on both (φ²)_r and χ_r, which is largely a function of (φ²)_r/χ_r, a measure of the degree of non-equilibrium. The velocity-scalar increment joint PDF is close to joint Gaussian and quad-modal for equilibrium and non-equilibrium local velocity and scalar, respectively. The latter shape is associated with a combination of the ramp-cliff and plane strain structures. Kolmogorov's refined similarity hypotheses also predict a dependence of the conditional PDF on the degree of non-equilibrium. Therefore, the quasi-Gaussian (joint) PDF, previously observed in the context of Kolmogorov's refined similarity hypotheses, is only one of the conditional PDF shapes of inertial range turbulence. The present study suggests that

  10. Efficiency of Oral Incremental Rehearsal versus Written Incremental Rehearsal on Students' Rate, Retention, and Generalization of Spelling Words

    Science.gov (United States)

    Garcia, Dru; Joseph, Laurice M.; Alber-Morgan, Sheila; Konrad, Moira

    2014-01-01

    The purpose of this study was to examine the efficiency of an oral versus a written incremental rehearsal procedure on a sample of primary-grade children's weekly spelling performance. Participants included five second graders and one first grader who were in need of help with their spelling according to their teachers. An…

  11. Schroedinger's variational method of quantization revisited

    International Nuclear Information System (INIS)

    Yasue, K.

    1980-01-01

    Schroedinger's original quantization procedure is revisited in the light of Nelson's stochastic framework of quantum mechanics. It is clarified why Schroedinger's proposal of a variational problem led us to a true description of quantum mechanics. (orig.)

  12. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA) based on an updated-SVD technique is proposed in this paper. This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedure for adding single samples and multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
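The sample-addition step at the heart of incremental PCA-style methods can be sketched in plain vector form (an illustration of the general technique, not the ITPCA tensor algorithm itself): maintain a running mean and scatter matrix, update both in O(d²) per new sample, and refresh the principal components by eigendecomposing the scatter when needed, without revisiting old samples:

```python
def incremental_cov_update(mean, scatter, n, x):
    """Welford-style update of the running mean and scatter matrix S
    when one new sample x arrives; the PCA basis can then be refreshed
    by eigendecomposing S / (n - 1) without storing past samples."""
    n += 1
    delta = [xi - mi for xi, mi in zip(x, mean)]          # x - old mean
    mean = [mi + di / n for mi, di in zip(mean, delta)]   # updated mean
    delta2 = [xi - mi for xi, mi in zip(x, mean)]         # x - new mean
    # rank-one scatter update: S += outer(delta, delta2)
    for i in range(len(x)):
        for j in range(len(x)):
            scatter[i][j] += delta[i] * delta2[j]
    return mean, scatter, n
```

This keeps memory at O(d²) regardless of how many samples have been seen, which is the property incremental variants trade against the exactness of batch PCA.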

  13. Tourists' perceptions and intention to revisit Norway

    OpenAIRE

    Lazar, Ana Florina; Komolikova-Blindheim, Galyna

    2016-01-01

    Purpose - The overall purpose of this study is to explore tourists' perceptions and their intention to revisit Norway. The aim is to find out which factors drive the overall satisfaction, the willingness to recommend, and the revisit intention of international tourists who spend their holiday in Norway. Design-Method-Approach - The Theory of Planned Behavior (Ajzen 1991) is used as a framework to investigate tourists' intention and behavior towards Norway as a destination. The o...

  14. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek; Skiadopoulos, Spiros; Kalnis, Panos

    2017-01-01

    Existing algorithms either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving

  15. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab; Canim, Mustafa; Sadoghi, Mohammad; Bhatta, Bishwaranjan; Chang, Yuan-Chi; Kalnis, Panos

    2017-01-01

    Many applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach for the continuous frequent subgraph mining problem

  16. Increment and mortality in a virgin Douglas-fir forest.

    Science.gov (United States)

    Robert W. Steele; Norman P. Worthington

    1955-01-01

    Is there any basis to the forester's rule of thumb that virgin forests eventually reach an equilibrium where increment and mortality approximately balance? Are we wasting potential timber volume by failing to salvage mortality in old-growth stands?

  17. Mission Planning System Increment 5 (MPS Inc 5)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Mission Planning System Increment 5 (MPS Inc 5). Date Assigned: May 19, 2014.

  18. Shredder: GPU-Accelerated Incremental Storage and Computation

    OpenAIRE

    Bhatotia, Pramod; Rodrigues, Rodrigo; Verma, Akshat

    2012-01-01

    Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation and evaluation of Shredder, a high performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-ba...

  19. On the instability increments of a stationary pinch

    International Nuclear Information System (INIS)

    Bud'ko, A.B.

    1989-01-01

    The stability of a stationary pinch against helical modes is numerically studied. It is shown that in the case of a rather fast plasma pressure decrease towards the pinch boundary, for example for an isothermal diffusion pinch with a Gaussian density distribution, the m=0 modes grow most quickly. Instability increments are calculated. A simple analytical expression for the maximum growth increment of the sausage instability for self-similar Gaussian profiles is obtained.

  20. Biometrics Enabling Capability Increment 1 (BEC Inc 1)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Biometrics Enabling Capability Increment 1 (BEC Inc 1). ...modal biometrics submissions to include iris, face, palm and finger prints from biometrics collection devices, which will support the Warfighter in... Date Assigned: July 15, 2015.

  1. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    OpenAIRE

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution err...

  2. A Syntactic-Semantic Approach to Incremental Verification

    OpenAIRE

    Bianculli, Domenico; Filieri, Antonio; Ghezzi, Carlo; Mandrioli, Dino

    2013-01-01

    Software verification of evolving systems is challenging mainstream methodologies and tools. Formal verification techniques often conflict with the time constraints imposed by change management practices for evolving systems. Since changes in these systems are often local to restricted parts, an incremental verification approach could be beneficial. This paper introduces SiDECAR, a general framework for the definition of verification procedures, which are made incremental by the framework...

  3. Logistics Modernization Program Increment 2 (LMP Inc 2)

    Science.gov (United States)

    2016-03-01

    Sections 3 and 4 of the LMP Increment 2 Business Case, ADM), key functional requirements, Critical Design Review (CDR) Reports, and Economic ...from the 2013 version of the LMP Increment 2 Economic Analysis and replace it with references to the Economic Analysis that will be completed...of ( inbound /outbound) IDOCs into the system. LMP must be able to successfully process 95% of ( inbound /outbound) IDOCs into the system. Will meet

  4. Organization Strategy and Structural Differences for Radical Versus Incremental Innovation

    OpenAIRE

    John E. Ettlie; William P. Bridges; Robert D. O'Keefe

    1984-01-01

    The purpose of this study was to test a model of the organizational innovation process that suggests that the strategy-structure causal sequence is differentiated by radical versus incremental innovation. That is, unique strategy and structure will be required for radical innovation, especially process adoption, while more traditional strategy and structure arrangements tend to support new product introduction and incremental process adoption. This differentiated theory is strongly supported ...

  5. Incremental Learning for Place Recognition in Dynamic Environments

    OpenAIRE

    Luo, Jie; Pronobis, Andrzej; Caputo, Barbara; Jensfelt, Patric

    2007-01-01

    Vision-based place recognition is a desirable feature for an autonomous mobile system. In order to work in realistic scenarios, visual recognition algorithms should be adaptive, i.e. should be able to learn from experience and adapt continuously to changes in the environment. This paper presents a discriminative incremental learning approach to place recognition. We use a recently introduced version of the incremental SVM, which ...

  6. MRI: Modular reasoning about interference in incremental programming

    OpenAIRE

    Oliveira, Bruno C. D. S; Schrijvers, Tom; Cook, William R

    2012-01-01

    Incremental Programming (IP) is a programming style in which new program components are defined as increments of other components. Examples of IP mechanisms include: Object-oriented programming (OOP) inheritance, aspect-oriented programming (AOP) advice and feature-oriented programming (FOP). A characteristic of IP mechanisms is that, while individual components can be independently defined, the composition of components makes those components become tightly coupled, sh...

  7. Incremental short daily home hemodialysis: a case series

    OpenAIRE

    Toth-Manikowski, Stephanie M.; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-01-01

    Background Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. Case presentation From 2011 to 2015, we initiated 5 incident hemodialysis patients on an ...

  8. Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-04-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the dust radiative effect is believed to be the predominant model defect, which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation. Analysis based on European Centre for Medium-Range Weather Forecasts (ECMWF) data suggests that the positive correlation area (negative correlation area) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation. The April correlation results are supported by the analysis of the vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated because of the substantial increase in humidity along with the northward progress of the ITCZ and its significant impact on the increments.

  9. Entity versus incremental theories predict older adults' memory performance.

    Science.gov (United States)

    Plaks, Jason E; Chasteen, Alison L

    2013-12-01

    The authors examined whether older adults' implicit theories regarding the modifiability of memory in particular (Studies 1 and 3) and abilities in general (Study 2) would predict memory performance. In Study 1, individual differences in older adults' endorsement of the "entity theory" (a belief that one's ability is fixed) or "incremental theory" (a belief that one's ability is malleable) of memory were measured using a version of the Implicit Theories Measure (Dweck, 1999). Memory performance was assessed with a free-recall task. Results indicated that the higher the endorsement of the incremental theory, the better the free recall. In Study 2, older and younger adults' theories were measured using a more general version of the Implicit Theories Measure that focused on the modifiability of abilities in general. Again, for older adults, the higher the incremental endorsement, the better the free recall. Moreover, as predicted, implicit theories did not predict younger adults' memory performance. In Study 3, participants read mock news articles reporting evidence in favor of either the entity or incremental theory. Those in the incremental condition outperformed those in the entity condition on reading span and free-recall tasks. These effects were mediated by pretask worry such that, for those in the entity condition, higher worry was associated with lower performance. Taken together, these studies suggest that variation in entity versus incremental endorsement represents a key predictor of older adults' memory performance. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  10. Incremental short daily home hemodialysis: a case series.

    Science.gov (United States)

    Toth-Manikowski, Stephanie M; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-07-05

    Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. From 2011 to 2015, we initiated 5 incident hemodialysis patients on an incremental home hemodialysis regimen. The biochemical parameters of all patients remained stable on the incremental hemodialysis regimen and they consistently achieved standard Kt/Vurea targets. Of the two patients with follow-up >6 months, residual kidney function was preserved for ≥2 years. Importantly, the patients were able to transition to home hemodialysis without automatically requiring 5 sessions per week at the outset and gradually increased the number of treatments and/or dialysate volume as the residual kidney function declined. An incremental home hemodialysis regimen can be safely prescribed and may improve acceptability of home hemodialysis. Reducing hemodialysis frequency by even one treatment per week can reduce the number of fistula or graft cannulations or catheter connections by >100 per year, an important consideration for patient well-being, access longevity, and access-related infections. The incremental hemodialysis approach, supported by national guidelines, can be considered for all home hemodialysis patients with residual kidney function.

  11. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    Directory of Open Access Journals (Sweden)

    Ruben Ruiz-Gonzalez

    2014-11-01

    Full Text Available The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels.
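
    The evaluation protocol described above (train on all samples but one, test on the held-out sample, repeat for every sample) can be sketched in a few lines. This is an illustrative sketch only: a nearest-centroid rule stands in for the SVM classifier, and the two-feature "vibration" data are invented.

    ```python
    # Leave-one-out cross-validation sketch. A nearest-centroid rule stands in
    # for the paper's SVM; the feature vectors and labels below are made up.

    def nearest_centroid_predict(train, labels, x):
        """Predict the label whose class centroid is closest to x."""
        groups = {}
        for xi, yi in zip(train, labels):
            groups.setdefault(yi, []).append(xi)
        best_label, best_dist = None, float("inf")
        for label, pts in groups.items():
            centroid = [sum(col) / len(pts) for col in zip(*pts)]
            d = sum((a - b) ** 2 for a, b in zip(centroid, x))
            if d < best_dist:
                best_label, best_dist = label, d
        return best_label

    def leave_one_out_accuracy(X, y):
        """Hold out each sample once, train on the rest, average the accuracy."""
        hits = 0
        for i in range(len(X)):
            train = X[:i] + X[i + 1:]
            labels = y[:i] + y[i + 1:]
            hits += nearest_centroid_predict(train, labels, X[i]) == y[i]
        return hits / len(X)

    # Two well-separated synthetic "vibration feature" clusters
    X = [[0.1, 0.2], [0.2, 0.1], [0.0, 0.3], [1.9, 2.1], [2.0, 2.2], [2.1, 1.8]]
    y = ["ok", "ok", "ok", "worn", "worn", "worn"]
    print(leave_one_out_accuracy(X, y))  # 1.0 for these separable clusters
    ```

    The same loop structure applies regardless of the classifier plugged in; only `nearest_centroid_predict` would be replaced.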

  12. Performance analysis of the incremental sheet forming on PMMA using a combined chemical and mechanical approach

    Science.gov (United States)

    Conte, R.; Gagliardi, F.; Ambrogio, G.; Filice, F.; Russo, P.

    2017-10-01

    Single Point Incremental Forming (SPIF) has been widely investigated, highlighting advantages such as low cost, higher formability and greater process flexibility compared to traditional processes [1]. Recent works have proven the feasibility of SPIF for polymer processing. Experimental research has been carried out with the aim of investigating the influence of several working variables, i.e. spindle speed, tool diameter, step depth, etc., on the final quality of the formed parts [2, 3]. The thermoplastic materials processed so far are characterized by glass transition temperatures close to room temperature and, therefore, SPIF has been performed without an external thermal source, exploiting mostly the friction heat generated during the forming phases. In the proposed work, attention has been focused on extruded poly(methyl methacrylate) (PMMA) sheets, which are characterized by a glass transition temperature of more than 100 °C. Because of that, an experimental set-up has been designed in which the PMMA sheets are placed on top of a temperature-controlled chamber before SPIF begins. The temperature on the upper face of the sheets has been monitored by a thermal camera, which has been properly calibrated by matching its readings with those extracted by a thermocouple in contact with the sheets. SPIF at different temperatures has been carried out by changing both the heater temperature and the process parameters that influence workpiece heating. The influence of the highlighted process conditions on the worked parts and on the process feasibility has been investigated. Furthermore, examinations of the quality of the formed parts have been performed, pointing out the effects that different process conditions have on surface integrity.

  13. Reducing workpieces to their base geometry for multi-step incremental forming using manifold harmonics

    Science.gov (United States)

    Carette, Yannick; Vanhove, Hans; Duflou, Joost

    2018-05-01

    Single Point Incremental Forming is a flexible process that is well-suited for small batch production and rapid prototyping of complex sheet metal parts. The distributed nature of the deformation process and the unsupported sheet imply that controlling the final accuracy of the workpiece is challenging. To improve the process limits and the accuracy of SPIF, the use of multiple forming passes has been proposed and discussed by a number of authors. Most methods use multiple intermediate models, where the previous one is strictly smaller than the next one, while gradually increasing the workpieces' wall angles. Another method that can be used is the manufacture of a smoothed-out "base geometry" in the first pass, after which more detailed features can be added in subsequent passes. In both methods, the selection of these intermediate shapes is freely decided by the user. However, their practical implementation in the production of complex freeform parts is not straightforward. The original CAD model can be manually adjusted or completely new CAD models can be created. This paper discusses an automatic method that is able to extract the base geometry from a full STL-based CAD model in an analytical way. Harmonic decomposition is used to express the final geometry as the sum of individual surface harmonics. It is then possible to filter these harmonic contributions to obtain a new CAD model with a desired level of geometric detail. This paper explains the technique and its implementation, as well as its use in the automatic generation of multi-step geometries.
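
    The filtering idea above (express the geometry in a harmonic basis, keep only the low harmonics, and reconstruct a smoothed base shape) can be illustrated in one dimension with an ordinary Fourier transform. This is only an analogy: the paper works with manifold harmonics (Laplace-Beltrami eigenfunctions) on an STL mesh, whereas the sketch below low-passes a 1-D profile, and all numbers are invented.

    ```python
    # 1-D analogue of harmonic filtering: decompose a profile into Fourier
    # harmonics, zero the high ones, and reconstruct the "base geometry".
    import cmath
    import math

    def dft(x):
        n = len(x)
        return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
                for k in range(n)]

    def idft(X):
        n = len(X)
        return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
                for t in range(n)]

    def low_pass(profile, keep):
        """Zero all harmonics above `keep` (mirror frequencies included)."""
        X = dft(profile)
        n = len(X)
        for k in range(n):
            freq = min(k, n - k)      # symmetric spectrum of a real signal
            if freq > keep:
                X[k] = 0
        return idft(X)

    # Smooth base shape plus a high-frequency "detail" ripple
    n = 32
    base = [math.cos(2 * math.pi * t / n) for t in range(n)]
    ripple = [0.3 * math.cos(2 * math.pi * 8 * t / n) for t in range(n)]
    profile = [b + r for b, r in zip(base, ripple)]

    smooth = low_pass(profile, keep=2)
    print(max(abs(s - b) for s, b in zip(smooth, base)) < 1e-9)  # True: ripple removed
    ```

    On a mesh, the same three steps apply with the Laplace-Beltrami eigenbasis in place of the Fourier basis.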

  14. The Levy sections theorem revisited

    International Nuclear Information System (INIS)

    Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Silva, Sergio Da

    2007-01-01

    This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets

  15. The power reinforcement framework revisited

    DEFF Research Database (Denmark)

    Nielsen, Jeppe; Andersen, Kim Normann; Danziger, James N.

    2016-01-01

    Whereas digital technologies are often depicted as being capable of disrupting long-standing power structures and facilitating new governance mechanisms, the power reinforcement framework suggests that information and communications technologies tend to strengthen existing power arrangements within public organizations. This article revisits the 30-year-old power reinforcement framework by means of an empirical analysis of the use of mobile technology in a large-scale programme in Danish public sector home care. It explores whether and to what extent administrative management has controlled decision-making and gained most benefits from mobile technology use, relative to the effects of the technology on the street-level workers who deliver services. Current mobile technology-in-use might be less likely to be power reinforcing because it is far more decentralized and individualized than the mainly expert...

  16. The Levy sections theorem revisited

    Science.gov (United States)

    Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio

    2007-06-01

    This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets.

  17. Support vector machine incremental learning triggered by wrongly predicted samples

    Science.gov (United States)

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

    According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, the newly adding sample which violates the KKT conditions will be a new support vector (SV) and migrate the old samples between SV set and non-support vector (NSV) set, and at the same time the learning model should be updated based on the SVs. However, it is not exactly clear at this moment that which of the old samples would change between SVs and NSVs. Additionally, the learning model will be unnecessarily updated, which will not greatly increase its accuracy but decrease the training speed. Therefore, how to choose the new SVs from old sets during the incremental stages and when to process incremental steps will greatly influence the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and use the wrongly predicted sample to trigger the incremental processing simultaneously. Experimental results show that the proposed algorithm can achieve good performance with high efficiency, high speed and good accuracy.
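
    The trigger described above (update the model only when a newly arriving sample is predicted wrongly) can be sketched with a simple online learner. A perceptron stands in for the incremental SVM here, with none of the KKT/support-vector bookkeeping, and the data stream is invented.

    ```python
    # Error-triggered incremental learning: the model is updated only when the
    # incoming sample is misclassified (a perceptron stands in for the SVM).

    def predict(w, b, x):
        s = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1 if s >= 0 else -1

    def incremental_fit(stream, dim, lr=1.0, epochs=5):
        w, b = [0.0] * dim, 0.0
        updates = 0
        for _ in range(epochs):
            for x, y in stream:
                if predict(w, b, x) != y:      # trigger: wrong prediction
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
                    updates += 1
        return w, b, updates

    # Invented, linearly separable stream of (features, label) pairs
    stream = [([2.0, 1.0], 1), ([1.5, 2.0], 1),
              ([-1.0, -1.5], -1), ([-2.0, -0.5], -1)]
    w, b, updates = incremental_fit(stream, dim=2)
    print(all(predict(w, b, x) == y for x, y in stream))  # True
    ```

    Correctly predicted samples cost only one dot product, which is the efficiency argument the abstract makes for triggering updates selectively.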

  18. Three routes forward for biofuels: Incremental, leapfrog, and transitional

    International Nuclear Information System (INIS)

    Morrison, Geoff M.; Witcover, Julie; Parker, Nathan C.; Fulton, Lew

    2016-01-01

    This paper examines three technology routes for lowering the carbon intensity of biofuels: (1) a leapfrog route that focuses on major technological breakthroughs in lignocellulosic pathways at new, stand-alone biorefineries; (2) an incremental route in which improvements are made to existing U.S. corn ethanol and soybean biodiesel biorefineries; and (3) a transitional route in which biotechnology firms gain experience growing, handling, or chemically converting lignocellulosic biomass in a lower-risk fashion than leapfrog biorefineries by leveraging existing capital stock. We find the incremental route is likely to involve the largest production volumes and greenhouse gas benefits until at least the mid-2020s, but transitional and leapfrog biofuels together have far greater long-term potential. We estimate that the Renewable Fuel Standard, California's Low Carbon Fuel Standard, and federal tax credits provided an incentive of roughly $1.5–2.5 per gallon of leapfrog biofuel between 2012 and 2015, but that regulatory elements in these policies mostly incentivize lower-risk incremental investments. Adjustments in policy may be necessary to bring a greater focus on transitional technologies that provide targeted learning and cost reduction opportunities for leapfrog biofuels. - Highlights: • Three technological pathways are compared that lower carbon intensity of biofuels. • Incremental changes lead to faster greenhouse gas reductions. • Leapfrog changes lead to greatest long-term potential. • Two main biofuel policies (RFS and LCFS) are largely incremental in nature. • Transitional biofuels offer medium-risk, medium reward pathway.

  19. Incremental Costs and Cost Effectiveness of Intensive Treatment in Individuals with Type 2 Diabetes Detected by Screening in the ADDITION-UK Trial: An Update with Empirical Trial-Based Cost Data.

    Science.gov (United States)

    Laxy, Michael; Wilson, Edward C F; Boothby, Clare E; Griffin, Simon J

    2017-12-01

    There is uncertainty about the cost effectiveness of early intensive treatment versus routine care in individuals with type 2 diabetes detected by screening. To derive a trial-informed estimate of the incremental costs of intensive treatment as delivered in the Anglo-Danish-Dutch Study of Intensive Treatment in People with Screen-Detected Diabetes in Primary Care-Europe (ADDITION) trial and to revisit the long-term cost-effectiveness analysis from the perspective of the UK National Health Service. We analyzed the electronic primary care records of a subsample of the ADDITION-Cambridge trial cohort (n = 173). Unit costs of used primary care services were taken from the published literature. Incremental annual costs of intensive treatment versus routine care in years 1 to 5 after diagnosis were calculated using multilevel generalized linear models. We revisited the long-term cost-utility analyses for the ADDITION-UK trial cohort and reported results for ADDITION-Cambridge using the UK Prospective Diabetes Study Outcomes Model and the trial-informed cost estimates according to a previously developed evaluation framework. Incremental annual costs of intensive treatment over years 1 to 5 averaged £29.10 (standard error = £33.00) for consultations with general practitioners and nurses and £54.60 (standard error = £28.50) for metabolic and cardioprotective medication. For ADDITION-UK, over the 10-, 20-, and 30-year time horizon, adjusted incremental quality-adjusted life-years (QALYs) were 0.014, 0.043, and 0.048, and adjusted incremental costs were £1,021, £1,217, and £1,311, resulting in incremental cost-effectiveness ratios of £71,232/QALY, £28,444/QALY, and £27,549/QALY, respectively. Respective incremental cost-effectiveness ratios for ADDITION-Cambridge were slightly higher. The incremental costs of intensive treatment as delivered in the ADDITION-Cambridge trial were lower than expected. Given UK willingness-to-pay thresholds in patients with screen

  20. Making context explicit for explanation and incremental knowledge acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Brezillon, P. [Univ. Paris (France)]

    1996-12-31

    Intelligent systems may be improved by making context explicit in problem solving. This is a lesson drawn from a study of the reasons why a number of knowledge-based systems (KBSs) failed. We discuss the interest to make context explicit in explanation generation and incremental knowledge acquisition, two important aspects of intelligent systems that aim to cooperate with users. We show how context can be used to better explain and incrementally acquire knowledge. The advantages of using context in explanation and incremental knowledge acquisition are discussed through SEPIT, an expert system for supporting diagnosis and explanation through simulation of power plants. We point out how the limitations of such systems may be overcome by making context explicit.

  1. Martingales, nonstationary increments, and the efficient market hypothesis

    Science.gov (United States)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-06-01

    We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
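
    The martingale property discussed above implies uncorrelated (though not necessarily stationary) increments, which is easy to check numerically. A minimal sketch, using a driftless Gaussian random walk as the martingale; the sample size, seed, and tolerance are arbitrary choices.

    ```python
    import random

    # A driftless random walk x(t) is a martingale; its increments
    # x(t+1) - x(t) should show (near-)zero serial correlation.

    def sample_autocorr(xs, lag=1):
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / n
        cov = sum((xs[t] - mean) * (xs[t + lag] - mean)
                  for t in range(n - lag)) / (n - lag)
        return cov / var

    random.seed(0)
    increments = [random.gauss(0.0, 1.0) for _ in range(20000)]
    walk = [0.0]
    for dx in increments:
        walk.append(walk[-1] + dx)      # the martingale itself

    r = sample_autocorr(increments)
    print(abs(r) < 0.05)  # True: increments are serially uncorrelated
    ```

    As the abstract notes, absence of serial correlation in the increments does not rule out exploitable structure in higher-order correlations.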

  2. Motion-Induced Blindness Using Increments and Decrements of Luminance

    Directory of Open Access Journals (Sweden)

    Stine Wm Wren

    2017-10-01

    Full Text Available Motion-induced blindness describes the disappearance of stationary elements of a scene when other, perhaps non-overlapping, elements of the scene are in motion. We measured the effects of increment (200.0 cd/m2) and decrement (15.0 cd/m2) targets and masks presented on a grey background (108.0 cd/m2), tapping into putative ON- and OFF-channels, on the rate of target disappearance psychophysically. We presented two-frame motion, which has coherent motion energy, and dynamic Glass patterns and dynamic anti-Glass patterns, which do not have coherent motion energy. Using the method of constant stimuli, participants viewed stimuli of varying durations (3.1 s, 4.6 s, 7.0 s, 11 s, or 16 s) in a given trial and then indicated whether or not the targets vanished during that trial. Psychometric function midpoints were used to define absolute threshold mask duration for the disappearance of the target. 95% confidence intervals for threshold disappearance times were estimated using a bootstrap technique for each of the participants across two experiments. Decrement masks were more effective than increment masks with increment targets. Increment targets were easier to mask than decrement targets. Distinct mask pattern types had no effect, suggesting that perceived coherence contributes to the effectiveness of the mask. The ON/OFF dichotomy clearly carries its influence to the level of perceived motion coherence. Further, the asymmetry in the effects of increment and decrement masks on increment and decrement targets might lead one to speculate that they reflect the ‘importance’ of detecting decrements in the environment.

  3. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors.

    Science.gov (United States)

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual's processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people's moral character is fixed (entity theorists) and individuals who hold the implicit belief that people's moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2-4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  4. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    Directory of Open Access Journals (Sweden)

    Niwen Huang

    2017-08-01

    Full Text Available Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2–4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  5. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime bound under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce...
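
    The problem being analyzed can be made concrete with a naive baseline: keep a valid order and recompute it from scratch with Kahn's algorithm whenever an inserted edge violates it. This sketch is not one of the algorithms analyzed in the paper, which avoid full recomputation; the example graph is invented.

    ```python
    from collections import defaultdict, deque

    # Naive incremental topological ordering: full Kahn recomputation on each
    # order-violating edge insertion (a baseline, not the analyzed algorithms).

    class IncrementalTopo:
        def __init__(self, nodes):
            self.nodes = list(nodes)
            self.succ = defaultdict(set)
            self.pos = {v: i for i, v in enumerate(self.nodes)}

        def insert_edge(self, u, v):
            self.succ[u].add(v)
            if self.pos[u] > self.pos[v]:      # order violated -> recompute
                self._recompute()

        def _recompute(self):
            indeg = {v: 0 for v in self.nodes}
            for u in self.succ:
                for v in self.succ[u]:
                    indeg[v] += 1
            queue = deque(v for v in self.nodes if indeg[v] == 0)
            order = []
            while queue:
                u = queue.popleft()
                order.append(u)
                for v in self.succ[u]:
                    indeg[v] -= 1
                    if indeg[v] == 0:
                        queue.append(v)
            if len(order) != len(self.nodes):
                raise ValueError("edge insertion created a cycle")
            self.pos = {v: i for i, v in enumerate(order)}

    topo = IncrementalTopo("abcd")
    for u, v in [("c", "a"), ("a", "b"), ("d", "c")]:
        topo.insert_edge(u, v)
    print(topo.pos["d"] < topo.pos["c"] < topo.pos["a"] < topo.pos["b"])  # True
    ```

    The algorithms cited above improve on this by reordering only the affected region between the endpoints of the violating edge.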

  6. Apparatus for electrical-assisted incremental forming and process thereof

    Science.gov (United States)

    Roth, John; Cao, Jian

    2018-04-24

    A process and apparatus for forming a sheet metal component using an electric current passed through the component. The process can include providing an incremental forming machine, the machine having at least one arcuate-tipped tool and at least one electrode spaced a predetermined distance from the arcuate-tipped tool. The machine is operable to perform a plurality of incremental deformations on the sheet metal component using the arcuate-tipped tool. The machine is also operable to apply a direct electric current through the electrode into the sheet metal component at the predetermined distance from the arcuate-tipped tool while the machine is forming the sheet metal component.

  7. Short-term load forecasting with increment regression tree

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jingfei; Stenzel, Juergen [Darmstadt University of Technology, Darmstadt 64283 (Germany)

    2006-06-15

    This paper presents a new regression tree method for short-term load forecasting. Both increment and non-increment trees are built according to the historical data to provide the data-space partition and input-variable selection. A support vector machine is applied to the samples at the regression tree nodes for further fine regression. The results of different tree nodes are integrated through a weighted-average method to obtain the comprehensive forecasting result. The effectiveness of the proposed method is demonstrated through its application to an actual system. (author)
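As a rough illustration of the pipeline the abstract outlines (tree partition → per-node regression → weighted averaging), here is a toy sketch. A single split and least-squares line fits stand in for the regression trees and the per-node support vector machines; all function names are mine:

```python
def fit_line(xs, ys):
    """Least-squares line fit: stands in for the per-node SVM regression."""
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
             / sum((x - xm) ** 2 for x in xs))
    return ym - slope * xm, slope            # (intercept, slope)

def fit_stump(xs, ys, split):
    """One-level 'tree': partition the data space at `split`, fit each node."""
    left = [(x, y) for x, y in zip(xs, ys) if x < split]
    right = [(x, y) for x, y in zip(xs, ys) if x >= split]
    return {"split": split,
            "left": fit_line(*zip(*left)),
            "right": fit_line(*zip(*right))}

def predict(model, x):
    """Route the input to its node, then apply that node's fine regression."""
    a, b = model["left"] if x < model["split"] else model["right"]
    return a + b * x

def combine(predictions, weights):
    """Final step of the method: weighted average over tree results."""
    return sum(w * p for w, p in zip(weights, predictions)) / sum(weights)
```

For example, two stump models with different splits can be blended with weights 0.6/0.4 to produce the comprehensive forecast.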

  8. Revisiting tourist behavior via destination brand worldness

    Directory of Open Access Journals (Sweden)

    Murat Kayak

    2016-11-01

    Full Text Available Taking tourists’ perspective rather than destination offerings as its core concept, this study introduces “perceived destination brand worldness” as a variable. Perceived destination brand worldness is defined as the positive perception that a tourist has of a country that is visited by tourists from all over the world. Then, the relationship between perceived destination brand worldness and intention to revisit is analyzed using partial least squares regression. This empirical study selects Taiwanese tourists as its sample, and the results show that perceived destination brand worldness is a direct predictor of intention to revisit. In light of these empirical findings and observations, practical and theoretical implications are discussed.

  9. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    Science.gov (United States)

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply detrended fluctuation analysis (DFA) to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by applying the DFA directly to the logarithmic increments of the KTB futures, we shuffle the original tick data of futures prices and generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the result of the DFA on volatilities and traded volumes may support the hypothesis of price changes.

  10. Playing by the rules? Analysing incremental urban developments

    NARCIS (Netherlands)

    Karnenbeek, van Lilian; Janssen-Jansen, Leonie

    2018-01-01

    Current urban developments are often considered outdated and static, and the argument follows that they should become more adaptive. In this paper, we argue that existing urban developments are already adaptive and incremental. Given this flexibility in urban development, understanding changes in the

  11. Size, Stability and Incremental Budgeting Outcomes in Public Universities.

    Science.gov (United States)

    Schick, Allen G.; Hills, Frederick S.

    1982-01-01

    Examined the influence of relative size in the analysis of total dollar and workforce budgets, and changes in total dollar and workforce budgets when correlational/regression methods are used. Data suggested that size dominates the analysis of total budgets, and is not a factor when discretionary dollar increments are analyzed. (JAC)

  12. The National Institute of Education and Incremental Budgeting.

    Science.gov (United States)

    Hastings, Anne H.

    1979-01-01

    The National Institute of Education's (NIE) history demonstrates that the relevant criteria for characterizing budgeting as incremental are not the predictability and stability of appropriations but the conditions of complexity, limited information, multiple factors, and imperfect agreement on ends; NIE's appropriations were dominated by political…

  13. Generation of Referring Expressions: Assessing the Incremental Algorithm

    Science.gov (United States)

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…

  14. Object class hierarchy for an incremental hypertext editor

    Directory of Open Access Journals (Sweden)

    A. Colesnicov

    1995-02-01

    Full Text Available The object class hierarchy design is considered in the context of a hypertext editor implementation. The following basic classes were selected: the editor's coordinate system, the memory manager, the text buffer executing basic editing operations, the inherited hypertext buffer, the edit window, and the multi-window shell. Special hypertext editing features, support for incremental hypertext creation, and further generalizations are discussed.
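The hierarchy listed in the abstract maps naturally onto code. A schematic Python sketch follows; the class names paraphrase the abstract, and every method body is invented for illustration only:

```python
class CoordinateSystem:
    """The editor's coordinate system: maps (row, col) to buffer offsets."""
    def offset(self, row, col, line_lengths):
        return sum(line_lengths[:row]) + col

class MemoryManager:
    """Owns the storage used by text buffers."""
    def allocate(self, size):
        return bytearray(size)

class TextBuffer:
    """Basic editing operations on a flat character sequence."""
    def __init__(self):
        self.text = []
    def insert(self, pos, s):
        self.text[pos:pos] = s
    def delete(self, pos, count):
        del self.text[pos:pos + count]

class HypertextBuffer(TextBuffer):
    """Inherited buffer: adds link anchors for incremental hypertext creation."""
    def __init__(self):
        super().__init__()
        self.links = {}               # anchor position -> target node
    def add_link(self, pos, target):
        self.links[pos] = target

class EditWindow:
    """A view onto one buffer."""
    def __init__(self, buffer):
        self.buffer = buffer

class MultiWindowShell:
    """Top of the hierarchy: manages several edit windows."""
    def __init__(self):
        self.windows = []
    def open(self, buffer):
        self.windows.append(EditWindow(buffer))
```

The key design point from the abstract is that the hypertext buffer inherits all basic editing from the text buffer and adds only the link machinery.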

  15. Bipower variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, José Manuel; Podolskij, Mark

    2009-01-01

    Convergence in probability and central limit laws of bipower variation for Gaussian processes with stationary increments and for integrals with respect to such processes are derived. The main tools of the proofs are some recent powerful techniques of Wiener/Itô/Malliavin calculus for establishing...

  16. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    KAUST Repository

    Zhang, Chuxu; Liu, Chuang; Yu, Lu; Zhang, Zi-Ke; Zhou, Tao

    2017-01-01

    success academic careers. In this work, given a set of young researchers who have published the first first-author paper recently, we solve the problem of how to effectively predict the top k% researchers who achieve the highest citation increment in Δt

  17. Some theoretical aspects of capacity increment in gaseous diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Coates, J. H.; Guais, J. C.; Lamorlette, G.

    1975-09-01

    Facing the sharply growing demand for enrichment services, the implementation of new capacities must be included in an optimized scheme spread out over time. In this paper the alternative solutions are studied first for a single increment decision, and then within an optimum schedule. The limits of the analysis are discussed.

  18. Respiratory ammonia output and blood ammonia concentration during incremental exercise

    NARCIS (Netherlands)

    Ament, W; Huizenga; Kort, E; van der Mark, TW; Grevink, RG; Verkerke, GJ

    The aim of this study was to investigate whether the increase of ammonia concentration and lactate concentration in blood was accompanied by an increased expiration of ammonia during graded exercise. Eleven healthy subjects performed an incremental cycle ergometer test. Blood ammonia, blood lactate

  19. Incremental concept learning with few training examples and hierarchical classification

    NARCIS (Netherlands)

    Bouma, H.; Eendebak, P.T.; Schutte, K.; Azzopardi, G.; Burghouts, G.J.

    2015-01-01

    Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible

  20. Factors for Radical Creativity, Incremental Creativity, and Routine, Noncreative Performance

    Science.gov (United States)

    Madjar, Nora; Greenberg, Ellen; Chen, Zheng

    2011-01-01

    This study extends theory and research by differentiating between routine, noncreative performance and 2 distinct types of creativity: radical and incremental. We also use a sensemaking perspective to examine the interplay of social and personal factors that may influence a person's engagement in a certain level of creative action versus routine,…

  1. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...

  2. Incremental exercise test performance with and without a respiratory ...

    African Journals Online (AJOL)

    Incremental exercise test performance with and without a respiratory gas collection system. ... Industrial-type mask wear is thought to impair exercise performance through increased respiratory dead space, flow ...

  3. 78 FR 22770 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2013-04-17

    ...-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY...: Background On August 29, 2011, DHS issued a final rule titled, Immigration Benefits Business Transformation... business processes. In this notice, we are correcting three technical errors. DATES: The effective date of...

  4. Minimizing System Modification in an Incremental Design Approach

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing the system modification cost. We consider an incremental design process that starts from an already existing sys-tem running a set of applications. We...

  5. Incremental cryptography and security of public hash functions ...

    African Journals Online (AJOL)

    An investigation of incremental algorithms for cryptographic functions was initiated. The problem, for collision-free hashing, is to design a scheme for which there exists an efficient “update” algorithm: this algorithm is given the hash function H, the hash h = H(M) of message M, and the “replacement request” (j, m), and outputs ...
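The "update algorithm" idea can be illustrated with a block-wise scheme: hash each block separately, then hash the concatenation of the per-block digests, so a replacement request (j, m) only re-hashes block j. This flat combining is a deliberate simplification; practical incremental collision-free schemes need randomized or tree-based combining, and the class below is illustrative only:

```python
import hashlib

def _h(data: bytes) -> bytes:
    """One SHA-256 compression of a byte string."""
    return hashlib.sha256(data).digest()

class IncrementalHash:
    """Block-wise hash supporting cheap re-hashing of a replaced block."""

    def __init__(self, blocks):
        # Per-block digests: only these need recomputing on an update.
        self.leaf = [_h(b) for b in blocks]

    def update(self, j, new_block):
        """Replacement request (j, m): re-hash only block j."""
        self.leaf[j] = _h(new_block)

    def digest(self) -> bytes:
        """Combine the per-block digests into the overall hash."""
        return _h(b"".join(self.leaf))
```

After `update(j, m)`, the digest equals that of hashing the modified message from scratch, which is the correctness property an incremental scheme must keep while being much cheaper than a full re-hash.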

  6. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows real-time processing, a low memory footprint, and robustness to translational and rotational jitter.
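The low-rank/sparse split that PCP computes can be caricatured per pixel. The toy step below is NOT the paper's incremental PCP algorithm, just an illustrative sketch that respects the same interface constraints: one frame at a time, O(pixels) work, and a memory footprint of a single background image:

```python
def background_step(bg, frame, rate=0.05, thresh=0.2):
    """One frame of a toy background/foreground split in the spirit of PCP.

    `bg` and `frame` are equal-length lists of pixel values in [0, 1].
    Returns the adapted background and the sparse foreground residual.
    """
    new_bg, fg = [], []
    for b, x in zip(bg, frame):
        r = x - b                          # residual against the background
        if abs(r) > thresh:
            fg.append(r)                   # sparse foreground component
            new_bg.append(b)               # outlier: keep background intact
        else:
            fg.append(0.0)                 # background pixel
            new_bg.append(b + rate * r)    # slowly adapt to scene changes
    return new_bg, fg
```

Large residuals are routed to the sparse (foreground) term while small ones slowly update the background, which is how the model adapts without letting moving objects bleed into it.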

  7. Evidence combination for incremental decision-making processes

    NARCIS (Netherlands)

    Berrada, Ghita; van Keulen, Maurice; de Keijzer, Ander

    The establishment of a medical diagnosis is an incremental process highly fraught with uncertainty. At each step of this painstaking process, it may be beneficial to be able to quantify the uncertainty linked to the diagnosis and steadily update the uncertainty estimation using available sources of

  8. Geometry of finite deformations and time-incremental analysis

    Czech Academy of Sciences Publication Activity Database

    Fiala, Zdeněk

    2016-01-01

    Roč. 81, May (2016), s. 230-244 ISSN 0020-7462 Institutional support: RVO:68378297 Keywords : solid mechanics * finite deformations * time-incremental analysis * Lagrangian system * evolution equation of Lie type Subject RIV: BE - Theoretical Physics Impact factor: 2.074, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020746216000330

  9. Leukemia and ionizing radiation revisited

    Energy Technology Data Exchange (ETDEWEB)

    Cuttler, J.M. [Cuttler & Associates Inc., Vaughan, Ontario (Canada); Welsh, J.S. [Loyola University-Chicago, Dept. or Radiation Oncology, Stritch School of Medicine, Maywood, Illinois (United States)

    2016-03-15

    A world-wide radiation health scare was created in the late 1950s to stop the testing of atomic bombs and block the development of nuclear energy. In spite of the large amount of evidence that contradicts the cancer predictions, this fear continues. It impairs the use of low radiation doses in medical diagnostic imaging and radiation therapy. This brief article revisits the second of two key studies, which revolutionized radiation protection, and identifies a serious error that was missed. This error in analyzing the leukemia incidence among the 195,000 survivors in the combined exposed populations of Hiroshima and Nagasaki invalidates use of the LNT model for assessing the risk of cancer from ionizing radiation. The threshold acute dose for radiation-induced leukemia, based on about 96,800 humans, is identified to be about 50 rem, or 0.5 Sv. It is reasonable to expect that the thresholds for other cancer types are higher than this level. No predictions or hints of excess cancer risk (or any other health risk) should be made for an acute exposure below this value until there is scientific evidence to support the LNT hypothesis. (author)

  10. Individualist Biocentrism vs. Holism Revisited

    Directory of Open Access Journals (Sweden)

    Katie McShane

    2014-06-01

    Full Text Available While holist views such as ecocentrism have considerable intuitive appeal, arguing for the moral considerability of ecological wholes such as ecosystems has turned out to be a very difficult task. In the environmental ethics literature, individualist biocentrists have persuasively argued that individual organisms—but not ecological wholes—are properly regarded as having a good of their own . In this paper, I revisit those arguments and contend that they are fatally flawed. The paper proceeds in five parts. First, I consider some problems brought about by climate change for environmental conservation strategies and argue that these problems give us good pragmatic reasons to want a better account of the welfare of ecological wholes. Second, I describe the theoretical assumptions from normative ethics that form the background of the arguments against holism. Third, I review the arguments given by individualist biocentrists in favour of individualism over holism. Fourth, I review recent work in the philosophy of biology on the units of selection problem, work in medicine on the human biome, and work in evolutionary biology on epigenetics and endogenous viral elements. I show how these developments undermine both the individualist arguments described above as well as the distinction between individuals and wholes as it has been understood by individualists. Finally, I consider five possible theoretical responses to these problems.

  11. Revisiting the safety of aspartame.

    Science.gov (United States)

    Choudhary, Arbind Kumar; Pretorius, Etheresia

    2017-09-01

    Aspartame is a synthetic dipeptide artificial sweetener, frequently used in foods, medications, and beverages, notably carbonated and powdered soft drinks. Since 1981, when aspartame was first approved by the US Food and Drug Administration, researchers have debated both its recommended safe dosage (40 mg/kg/d) and its general safety to organ systems. This review examines papers published between 2000 and 2016 on both the safe dosage and higher-than-recommended dosages and presents a concise synthesis of current trends. Data on the safe aspartame dosage are controversial, and the literature suggests there are potential side effects associated with aspartame consumption. Since aspartame consumption is on the rise, the safety of this sweetener should be revisited. Most of the literature available on the safety of aspartame is included in this review. Safety studies are based primarily on animal models, as data from human studies are limited. The existing animal studies and the limited human studies suggest that aspartame and its metabolites, whether consumed in quantities significantly higher than the recommended safe dosage or within recommended safe levels, may disrupt the oxidant/antioxidant balance, induce oxidative stress, and damage cell membrane integrity, potentially affecting a variety of cells and tissues and causing a deregulation of cellular function, ultimately leading to systemic inflammation. © The Author(s) 2017. Published by Oxford University Press on behalf of the International Life Sciences Institute. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Neutrino assisted GUT baryogenesis revisited

    Science.gov (United States)

    Huang, Wei-Chih; Päs, Heinrich; Zeißner, Sinan

    2018-03-01

    Many grand unified theory (GUT) models conserve the difference between the baryon and lepton number, B-L. These models can create baryon and lepton asymmetries from heavy Higgs or gauge boson decays with B+L ≠ 0 but with B-L = 0. Since the sphaleron processes violate B+L, such GUT-generated asymmetries will finally be washed out completely, making GUT baryogenesis scenarios incapable of reproducing the observed baryon asymmetry of the Universe. In this work, we revisit the idea of reviving GUT baryogenesis, proposed by Fukugita and Yanagida, where right-handed neutrinos erase the lepton asymmetry before the sphaleron processes can significantly wash out the original B+L asymmetry; in this way one can prevent a total washout of the initial baryon asymmetry. By solving the Boltzmann equations numerically for baryon and lepton asymmetries in a simplified 1+1 flavor scenario, we can confirm the results of the original work. We further generalize the analysis to a more realistic scenario of three active and two right-handed neutrinos to highlight flavor effects of the right-handed neutrinos. Large regions in the parameter space of the Yukawa coupling and the right-handed neutrino mass featuring successful baryogenesis are identified.

  13. Investigación de la unión soldada entre el vástago y las placas de las cuchillas calzadas // Investigation of the welded joint between plates and tipped single-point lathe tools

    Directory of Open Access Journals (Sweden)

    M. Jacas Cabrera

    2000-01-01

    Full Text Available This work is aimed at increasing production and quality levels, specifically on the manufacturing line for tipped single-point lathe tools at the “Miguel Saavedra” plant (HERRAMIX). An analysis was carried out for the substitution of the brazing pads manufactured by CIME with new tri-metallic ones. The heating times needed to perform the brazing on the induction machines (TBCHE) were determined, as well as the shear-strength values of the tips after welding. Key words: tipped single-point lathe tool, metal-ceramic plate, tri-metallic pads.

  14. Revisiting Hansen Solubility Parameters by Including Thermodynamics

    NARCIS (Netherlands)

    Louwerse, Manuel J; Fernández-Maldonado, Ana María; Rousseau, Simon; Moreau-Masselon, Chloe; Roux, Bernard; Rothenberg, Gadi

    2017-01-01

    The Hansen solubility parameter approach is revisited by implementing the thermodynamics of dissolution and mixing. Hansen's pragmatic approach has earned its spurs in predicting solvents for polymer solutions, but for molecular solutes improvements are needed. By going into the details of entropy

  15. The Future of Engineering Education--Revisited

    Science.gov (United States)

    Wankat, Phillip C.; Bullard, Lisa G.

    2016-01-01

    This paper revisits the landmark CEE series, "The Future of Engineering Education," published in 2000 (available free in the CEE archives on the internet) to examine the predictions made in the original paper as well as the tools and approaches documented. Most of the advice offered in the original series remains current. Despite new…

  16. Revisiting the formal foundation of Probabilistic Databases

    NARCIS (Netherlands)

    Wanders, B.; van Keulen, Maurice

    2015-01-01

    One of the core problems in soft computing is dealing with uncertainty in data. In this paper, we revisit the formal foundation of a class of probabilistic databases with the purpose to (1) obtain data model independence, (2) separate metadata on uncertainty and probabilities from the raw data, (3)

  17. Revisiting Weak Simulation for Substochastic Markov Chains

    DEFF Research Database (Denmark)

    Jansen, David N.; Song, Lei; Zhang, Lijun

    2013-01-01

    of the logic PCTL\\x, and its completeness was conjectured. We revisit this result and show that soundness does not hold in general, but only for Markov chains without divergence. It is refuted for some systems with substochastic distributions. Moreover, we provide a counterexample to completeness...

  18. Coccolithophorids in polar waters: Wigwamma spp. revisited

    DEFF Research Database (Denmark)

    Thomsen, Helge Abildhauge; Østergaard, Jette B.; Heldal, Mikal

    2013-01-01

    A contingent of weakly calcified coccolithophorid genera and species were described from polar regions almost 40 years ago. In the interim period a few additional findings have been reported enlarging the realm of some of the species. The genus Wigwamma is revisited here with the purpose of provi...... appearance of the coccolith armour of the cell...

  19. The Faraday effect revisited: General theory

    DEFF Research Database (Denmark)

    Cornean, Horia Decebal; Nenciu, Gheorghe; Pedersen, Thomas Garm

    2006-01-01

    This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. At zero temperature and zero frequency...

  20. The Faraday effect revisited: General theory

    DEFF Research Database (Denmark)

    Cornean, Horia Decebal; Nenciu, Gheorghe; Pedersen, Thomas Garm

    This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. For free electrons, the transverse...

  1. eWOM, Revisit Intention, Destination Trust and Gender

    OpenAIRE

    Abubakar, Abubakar Mohammed; Ilkan, Mustafa; Al-Tal, Raad Meshall; Eluwole, Kayode

    2017-01-01

    This article investigates the impact of eWOM on intention to revisit and destination trust, and the moderating role of gender, in the medical tourism industry. Results from structural equation modeling (n=240) suggest the following: (1) eWOM influences intention to revisit and destination trust; (2) destination trust influences intention to revisit; (3) the impact of eWOM on intention to revisit is about 1.3 times higher in men; (4) the impact of eWOM on destination trust is ab...

  2. Intermittent single point machining of brittle materials

    Energy Technology Data Exchange (ETDEWEB)

    Marsh, E

    1999-12-07

    A series of tests was undertaken to explore diamond tool wear in the intermittent cutting of brittle materials, specifically silicon. The tests were carried out on a plain-way No. 3 Moore machine base equipped as a flycutter with a motorized Professional Instruments 4R air bearing spindle. The diamond tools were made by Edge Technologies with known crystal orientation and composition and were sharpened with either an abrasive or a chemical process, depending on the individual test. The flycutting machine configuration allowed precise control over the angle at which the tool engages the anisotropic silicon workpiece. In contrast, the crystallographic orientation of the silicon workpiece changes continuously during on-axis turning. As a result, it is possible to flycut a workpiece in cutting directions that are known to be easy or hard. All cuts were run in the (100) plane of the silicon, with a slight angle deliberately introduced to ensure that the (100) plane is engaged in “up-cutting”, which lengthens the tool life. A Kistler 9256 dynamometer was used to measure the cutting forces in order to gain insight into the material removal process and tool wear during testing. The dynamometer provides high-bandwidth force measurement with milli-Newton resolution and good thermal stability. After many successive passes over the workpiece, it was observed that the cutting forces grow at a rate that is roughly proportional to the degradation of the workpiece surface finish. The exact relationship between cutting force growth and surface finish degradation was not quantified because of the problems associated with measuring surface finish in situ. However, a series of witness marks made during testing in an aluminum sample clearly show the development of wear flats on the tool nose profile as the forces grow and the surface finish worsens.
The test results show that workpieces requiring on the order of two miles of track length can be made with low tool wear and excellent surface finish. With longer track lengths, the tool forces (and presumably tool wear) begin a roughly linear increase as surface finish steadily worsens. No catastrophic tool failures were observed, only slow changes as the track length increases. Interestingly, the specific cutting energy did not remain constant with depth of cut, suggesting that there are significant friction forces in the cutting of silicon. This finding supports published results emphasizing the importance of a large clearance angle on the tool and hints that fairly aggressive cuts may be the most efficient way to remove material. That is, tool life may turn out to scale with track length, not volume, indicating that machining parameters for silicon should be chosen to minimize track length by taking heavier cuts.

  3. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least-squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
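The core idea of projecting each iterate onto a convex set of admissible solutions, instead of adding a regularization term, is essentially projected gradient descent. A toy sketch under that reading (small dense problem, box constraints as the convex subspace; all names are mine, not the paper's):

```python
def matvec(A, x):
    """Dense matrix-vector product on plain lists."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def projected_gradient(A, b, project, x0, lr=0.25, steps=100):
    """Minimize ||Ax - b||^2 with each iterate projected onto a convex set.

    `project` maps an iterate to the nearest point of the set of
    regularized candidate solutions; it replaces the regularization term.
    """
    At = transpose(A)
    x = list(x0)
    for _ in range(steps):
        r = [ai - bi for ai, bi in zip(matvec(A, x), b)]   # residual Ax - b
        g = [2 * gi for gi in matvec(At, r)]               # gradient
        x = project([xi - lr * gi for xi, gi in zip(x, g)])
    return x

def box01(x):
    """Example convex subspace: the box [0, 1]^n."""
    return [min(1.0, max(0.0, xi)) for xi in x]
```

The data term is minimized by the gradient step, and admissibility is enforced geometrically by the projection at every iterate, exactly the structural idea the abstract contrasts with penalty-based regularization.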

  4. Will Incremental Hemodialysis Preserve Residual Function and Improve Patient Survival?

    Science.gov (United States)

    Davenport, Andrew

    2015-01-01

    The progressive loss of residual renal function in peritoneal dialysis patients is associated with increased mortality. It has been suggested that incremental dialysis may help preserve residual renal function and improve patient survival. Residual renal function depends upon both patient related and dialysis associated factors. Maintaining patients in an over-hydrated state may be associated with better preservation of residual renal function but any benefit comes with a significant risk of cardiovascular consequences. Notably, it is only observational studies that have reported an association between dialysis patient survival and residual renal function; causality has not been established for dialysis patient survival. The tenuous connections between residual renal function and outcomes and between incremental hemodialysis and residual renal function should temper our enthusiasm for interventions in this area. PMID:25385441

  5. Power calculation of linear and angular incremental encoders

    Science.gov (United States)

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

    Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method of power calculation for linear and angular incremental photoelectric encoders, used to find the optimum parameters of their components, such as light emitters, photodetectors, linear and angular scales, and optical components. It analyzes methods and devices that permit high resolutions on the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam, usually formed by a condenser lens, passes through the measuring unit and changes its value depending on the movement of the scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photodetector block for processing in the electronics block. The starting point for the power calculation is therefore the required value of the optical signal at the input of the photodetector block, which must be reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.
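The calculation chain described (source → condenser → scale/raster → photodetector → electronics) can be summarized as a simple optical power budget. The sketch below is hypothetical: the parameter names and lumped loss factors are my own illustration, not the authors' model:

```python
def detector_signal(p_source_w, t_optics, t_scale, duty_cycle, responsivity):
    """Estimate the optical power and photocurrent at the photodetector.

    Illustrative factors: `t_optics` lumps condenser and window losses,
    `t_scale` is the raster transmission, `duty_cycle` models the moving
    scanning-head/raster modulation, `responsivity` converts watts to amps.
    """
    p_optical = p_source_w * t_optics * t_scale * duty_cycle
    return p_optical, p_optical * responsivity

def meets_threshold(photocurrent_a, min_current_a):
    """Check the signal can be reliably recorded by the electronics block."""
    return photocurrent_a >= min_current_a
```

For example, a 5 mW emitter with 0.8 optics transmission, 0.5 raster transmission, 0.5 duty cycle, and 0.6 A/W responsivity yields 1 mW of optical power and 0.6 mA of photocurrent, which can then be compared against the detection threshold of the electronics.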

  6. Table incremental slow injection CE-CT in lung cancer

    International Nuclear Information System (INIS)

    Yoshida, Shoji; Maeda, Tomoho; Morita, Masaru

    1988-01-01

The purpose of this study is to evaluate tumor enhancement in lung cancer under a table incremental study with slow injection of contrast media. Early serial images of 8 slices were obtained during the slow injection (1.5 ml/sec) of contrast media. Following the early images, delayed images of the same 8 slices were taken 2 minutes later. Characteristic enhancement patterns of the primary cancer and of metastatic mediastinal lymph nodes were recognized in this study. Enhancement of the primary lesion was classified into 4 patterns: irregular geographic, heterogeneous, homogeneous, and rim-enhanced. In mediastinal metastatic lymphadenopathy, three enhancement patterns were obtained: heterogeneous, homogeneous, and ring-enhanced. Some characteristic enhancement patterns according to the histopathological findings of the lung cancer were obtained. Using this incremental slow injection CE-CT, precise information about the relationship between the lung cancer and adjacent mediastinal structures, as well as clear staining patterns of the tumor and mediastinal lymph nodes, was obtained. (author)

  7. Final Safety Analysis Report (FSAR) for Building 332, Increment III

    Energy Technology Data Exchange (ETDEWEB)

    Odell, B. N.; Toy, Jr., A. J.

    1977-08-31

This Final Safety Analysis Report (FSAR) supplements the Preliminary Safety Analysis Report (PSAR), dated January 18, 1974, for Building 332, Increment III of the Plutonium Materials Engineering Facility located at the Lawrence Livermore Laboratory (LLL). The FSAR, in conjunction with the PSAR, shows that the completed increment provides facilities for safely conducting the operations as described. These documents satisfy the requirements of ERDA Manual Appendix 6101, Annex C, dated April 8, 1971. The format and content of this FSAR complies with the basic requirements of the letter of request from ERDA San to LLL, dated March 10, 1972. Included as appendices in support of the FSAR are the Building 332 Operational Safety Procedure and the LLL Disaster Control Plan.

  8. Decoupled Simulation Method For Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Sebastiani, G.; Brosius, A.; Tekkaya, A. E.; Homberg, W.; Kleiner, M.

    2007-01-01

Within the scope of this article a decoupling algorithm to reduce computing time in Finite Element Analyses of incremental forming processes will be investigated. Based on the given position of the small forming zone, the presented algorithm aims at separating a Finite Element Model into an elastic and an elasto-plastic deformation zone. By including the elastic response of the structure by means of model simplifications, the costly iteration in the elasto-plastic zone can be restricted to the small forming zone and to a few supporting elements in order to reduce computation time. Since the forming zone moves along the specimen, an update of both the forming zone with its elastic boundary and the supporting structure is needed after several increments. The presented paper discusses the algorithmic implementation of the approach and introduces several strategies to implement the denoted elastic boundary condition at the boundary of the plastic forming zone.

  9. Incremental exposure facilitates adaptation to sensory rearrangement. [vestibular stimulation patterns

    Science.gov (United States)

    Lackner, J. R.; Lobovits, D. N.

    1978-01-01

Visual-target pointing experiments were performed on 24 adult volunteers in order to compare the relative effectiveness of incremental (stepwise) and single-step exposure conditions on adaptation to visual rearrangement. The differences between the preexposure and postexposure scores served as an index of the adaptation elicited during the exposure period. It is found that both single-step and stepwise exposure to visual rearrangement elicit compensatory changes in sensorimotor coordination. However, stepwise exposure, when compared to single-step exposure in terms of the average magnitude of visual displacement over the exposure period, clearly enhances the rate of adaptation. It seems possible that the enhancement of adaptation to unusual patterns of sensory stimulation produced by incremental exposure reflects a general principle of sensorimotor function.

  10. Thermomechanical simulations and experimental validation for high speed incremental forming

    Science.gov (United States)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to reproduce the material behavior during the high speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material presents the same performance as in conventional speed ISF and, in some cases, better behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process, substantially confirming the experimental evidence.

  11. Automobile sheet metal part production with incremental sheet forming

    Directory of Open Access Journals (Sweden)

    İsmail DURGUN

    2016-02-01

Nowadays, the effect of global warming is increasing drastically, leading to increased interest in energy efficiency and sustainable production methods. As a result of these adverse conditions, national and international project platforms, OEMs (Original Equipment Manufacturers) and SMEs (Small and Mid-size Manufacturers) carry out many studies or improve existing methodologies within the scope of advanced manufacturing techniques. In this study, the advanced and sustainable production method "Incremental Sheet Metal Forming (ISF)" was used for a sheet metal forming process. A vehicle fender was manufactured with and without a die by using different toolpath strategies and die sets. At the end of the study, the results were investigated with respect to the method and parameters used. Keywords: Incremental sheet metal forming, Metal forming

  12. On kinematical minimum principles for rates and increments in plasticity

    International Nuclear Information System (INIS)

    Zouain, N.

    1984-01-01

The optimization approach for elastoplastic analysis is discussed, showing that some minimum principles related to numerical methods can be derived by means of duality and penalization procedures. Three minimum principles for velocity and plastic multiplier rate fields are presented in the framework of perfect plasticity. The first one is the classical Greenberg formulation. The second one, due to Capurso, is developed here with a different motivation, and modified by penalization of constraints so as to arrive at a third principle for rates. The counterparts of these optimization formulations in terms of discrete increments of displacements and plastic multipliers are discussed. The third of these minimum principles for finite increments is recognized to be closely related to Maier's formulation of holonomic plasticity. (Author) [pt

  13. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error converges exponentially to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. The results are illustrated by application to a simple model of an underwater vehicle.

  14. Fault-tolerant incremental diagnosis with limited historical data

    OpenAIRE

    Gillblad, Daniel; Holst, Anders; Steinert, Rebecca

    2006-01-01

In many diagnosis situations it is desirable to perform a classification in an iterative and interactive manner. All relevant information may not be available initially and must be acquired manually or at a cost. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. Here, we will describe how to create an incremental classification system based on a statistical model that is trained from empirical data...

  15. Diagnosis of small hepatocellular carcinoma by incremental dynamic CT

    International Nuclear Information System (INIS)

    Uchida, Masafumi; Kumabe, Tsutomu; Edamitsu, Osamu

    1993-01-01

    Thirty cases of pathologically confirmed small hepatocellular carcinoma were examined by Incremental Dynamic CT (ICT). ICT scanned the whole liver with single-breath-hold technique; therefore, effective early contrast enhancement could be obtained for diagnosis. Among the 30 tumors, 26 were detected. The detection rate was 87%. A high detection rate was obtained in tumors more than 20 mm in diameter. Twenty-two of 26 tumors could be diagnosed correctly. ICT examination was useful for detection of small hepatocellular carcinoma. (author)

  16. A parallel ILP algorithm that incorporates incremental batch learning

    OpenAIRE

    Nuno Fonseca; Rui Camacho; Fernado Silva

    2003-01-01

In this paper we tackle the problems of efficiency and scalability faced by Inductive Logic Programming (ILP) systems. We propose the use of parallelism to improve efficiency and the use of incremental batch learning to address the scalability problem. We describe a novel parallel algorithm that incorporates into ILP the method of incremental batch learning. The theoretical complexity of the algorithm indicates that a linear speedup can be achieved.

  17. Public Key Infrastructure Increment 2 (PKI Inc 2)

    Science.gov (United States)

    2016-03-01

across the Global Information Grid (GIG) and at rest. Using authoritative data, obtained via face-to-face identity proofing, PKI creates a credential ... operating on a network by provision of assured PKI-based credentials for any device on that network. PKI Increment One made significant ... provide assured/secure validation of revocation of an electronic/digital credential. 2. DoD PKI shall support assured revocation status requests of

  18. The intermetallic ThRh5: microstructure and enthalpy increments

    International Nuclear Information System (INIS)

    Banerjee, Aparna; Joshi, A.R.; Kaity, Santu; Mishra, R.; Roy, S.B.

    2013-01-01

Actinide intermetallics are one of the most interesting and important series of compounds. The thermochemistry of these compounds plays a significant role in understanding the nature of bonding in alloys and nuclear fuel performance. In the present paper we report the synthesis and characterization of the thorium-based intermetallic compound ThRh5(s) by the SEM/EDX technique. The mechanical properties and the enthalpy increment of the alloy as a function of temperature have been measured. (author)

  19. Systematic Luby Transform codes as incremental redundancy scheme

    CSIR Research Space (South Africa)

    Grobler, TL

    2011-09-01

Systematic Luby Transform Codes as Incremental Redundancy Scheme. T. L. Grobler, E. R. Ackermann, J. C. Olivier and A. J. van Zyl; Department of Electrical, Electronic and Computer Engineering, University of Pretoria, Pretoria 0002, South Africa (trienkog...@gmail.com, etienne.ackermann@ieee.org); Defence, Peace, Safety and Security (DPSS), Council for Scientific and Industrial Research (CSIR), Pretoria 0001, South Africa; Department of Mathematics and Applied Mathematics, University of Pretoria, Pretoria 0002, South Africa

  20. Efficient Incremental Garbage Collection for Workstation/Server Database Systems

    OpenAIRE

    Amsaleg , Laurent; Gruber , Olivier; Franklin , Michael

    1994-01-01

Projet RODIN; We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a workstation/server environment. The algorithm is incremental and runs concurrently with client transactions; however, it does not hold any locks on data and does not require callbacks to clients. It is fault tolerant, but performs very little logging. The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation ...

  1. Incremental Support Vector Machine Framework for Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuichi Motai

    2007-01-01

Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of the least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes' inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing, especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system, which makes it even more attractive for distributed sensor networks communication.

  2. Health level seven interoperability strategy: big data, incrementally structured.

    Science.gov (United States)

    Dolin, R H; Rogers, B; Jaffe, C

    2015-01-01

Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby data structured incrementally gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral work flows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data and ultimately a greater flow of structured data vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.

  3. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek

    2017-10-17

Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCentral is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCentral scales to large graphs. We demonstrate with real datasets that the serial implementation of iCentral is up to 3.7 times faster than existing serial methods. Our parallel implementation, which scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.
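
The localization that iCentral relies on can be illustrated in plain Python: when an edge is added inside one biconnected component, the betweenness of every node outside that component is unchanged, so only the affected component needs reprocessing. A minimal sketch (Brandes' algorithm, unnormalized; the graph, the inserted edge, and the hardcoded component are illustrative choices, not the paper's datasets or implementation):

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted, undirected graphs (pair counts halved)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS: shortest-path counts sigma
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}     # back-propagate dependencies
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}

def add_edge(adj, u, v):
    adj[u].append(v)
    adj[v].append(u)

# Two cycles sharing articulation node 0: biconnected components
# {0, 1, 2, 3} and {0, 4, 5, 6}.
adj = {i: [] for i in range(7)}
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (4, 5), (5, 6), (6, 0)]:
    add_edge(adj, u, v)

before = betweenness(adj)
add_edge(adj, 4, 6)          # update strictly inside component {0, 4, 5, 6}
after = betweenness(adj)

# Nodes outside the affected biconnected component keep their centrality,
# so an incremental algorithm only needs to reprocess that component.
outside = [1, 2, 3]
print([round(after[n] - before[n], 12) for n in outside])
```

Node 5, which lies inside the updated component, does lose centrality (the pair (4, 6) no longer routes through it), while nodes 1, 2, 3 are untouched.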

  4. Conservation of wildlife populations: factoring in incremental disturbance.

    Science.gov (United States)

    Stewart, Abbie; Komers, Petr E

    2017-06-01

Progressive anthropogenic disturbance can alter ecosystem organization potentially causing shifts from one stable state to another. This potential for ecosystem shifts must be considered when establishing targets and objectives for conservation. We ask whether a predator-prey system response to incremental anthropogenic disturbance might shift along a disturbance gradient and, if it does, whether any disturbance thresholds are evident for this system. Development of linear corridors in forested areas increases wolf predation effectiveness, while high density of development provides a safe-haven for their prey. If wolves limit moose population growth, then wolves and moose should respond inversely to land cover disturbance. Using general linear model analysis, we test how the rate of change in moose (Alces alces) density and wolf (Canis lupus) harvest density are influenced by the rate of change in land cover and proportion of land cover disturbed within a 300,000 km² area in the boreal forest of Alberta, Canada. Using logistic regression, we test how the direction of change in moose density is influenced by measures of land cover change. In response to incremental land cover disturbance, moose declines occurred where 43% of land cover was disturbed and wolf density declined. Wolves and moose appeared to respond inversely to incremental disturbance with the balance between moose decline and wolf increase shifting at about 43% of land cover disturbed. Conservation decisions require quantification of disturbance rates and their relationships to predator-prey systems because ecosystem responses to anthropogenic disturbance shift across disturbance gradients.
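
The threshold analysis described above can be sketched with a logistic regression of decline/no-decline against the proportion of land cover disturbed. Everything below is synthetic: the landscape units, the sample size, and the assumed response curve placing the tipping point near 43% are hypothetical stand-ins for the Alberta data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000

# Hypothetical landscape units: fraction of land cover disturbed (0..1)
disturbed = rng.uniform(0.0, 1.0, size=n)

# Assumed response: declines become likely past ~43% disturbance
p_decline = 1.0 / (1.0 + np.exp(-25.0 * (disturbed - 0.43)))
decline = (rng.random(n) < p_decline).astype(int)   # 1 = moose density declined

model = LogisticRegression(C=1e6, max_iter=1000).fit(
    disturbed.reshape(-1, 1), decline)

# Disturbance level at which a decline becomes more likely than not
threshold = -model.intercept_[0] / model.coef_[0, 0]
print(round(float(threshold), 2))
```

The fitted 50% crossing point recovers the assumed threshold; on real data this is where the direction of population change flips along the disturbance gradient.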

  5. Context-dependent incremental timing cells in the primate hippocampus.

    Science.gov (United States)

    Sakon, John J; Naya, Yuji; Wirth, Sylvia; Suzuki, Wendy A

    2014-12-23

    We examined timing-related signals in primate hippocampal cells as animals performed an object-place (OP) associative learning task. We found hippocampal cells with firing rates that incrementally increased or decreased across the memory delay interval of the task, which we refer to as incremental timing cells (ITCs). Three distinct categories of ITCs were identified. Agnostic ITCs did not distinguish between different trial types. The remaining two categories of cells signaled time and trial context together: One category of cells tracked time depending on the behavioral action required for a correct response (i.e., early vs. late release), whereas the other category of cells tracked time only for those trials cued with a specific OP combination. The context-sensitive ITCs were observed more often during sessions where behavioral learning was observed and exhibited reduced incremental firing on incorrect trials. Thus, single primate hippocampal cells signal information about trial timing, which can be linked with trial type/context in a learning-dependent manner.

  6. Microstructure-sensitive modelling of dislocation creep in polycrystalline FCC alloys: Orowan theory revisited

    Energy Technology Data Exchange (ETDEWEB)

    Galindo-Nava, E.I., E-mail: eg375@cam.ac.uk; Rae, C.M.F.

    2016-01-10

A new approach for modelling dislocation creep during primary and secondary creep in FCC metals is proposed. The Orowan equation and dislocation behaviour at the grain scale are revisited to include the effects of different microstructures such as the grain size and solute atoms. Dislocation activity is proposed to follow a jog-diffusion law. It is shown that the activation energy for cross-slip E_cs controls dislocation mobility and the strain increments during secondary creep. This is confirmed by successfully comparing E_cs with the experimentally determined activation energy during secondary creep in 5 FCC metals. It is shown that the inverse relationship between the grain size and dislocation creep is attributed to the higher number of strain increments at the grain level dominating their magnitude as the grain size decreases. An alternative approach describing solid solution strengthening effects in nickel alloys is presented, where the dislocation mobility is reduced by dislocation pinning around solute atoms. An analysis of the solid solution strengthening effects of typical elements employed in Ni-base superalloys is also discussed. The model results are validated against measurements of Cu, Ni, Ti and 4 Ni-base alloys for a wide range of deformation conditions and different grain sizes.

  7. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    Science.gov (United States)

    2010-06-01

Sampling (MIS)?
    • Technique of combining many increments of soil from a number of points within an exposure area
    • Developed by Enviro Stat (Trademarked) ... Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous ... into a set of decision (exposure) units
    • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  8. Local Equilibrium and Retardation Revisited.

    Science.gov (United States)

    Hansen, Scott K; Vesselinov, Velimir V

    2018-01-01

    In modeling solute transport with mobile-immobile mass transfer (MIMT), it is common to use an advection-dispersion equation (ADE) with a retardation factor, or retarded ADE. This is commonly referred to as making the local equilibrium assumption (LEA). Assuming local equilibrium, Eulerian textbook treatments derive the retarded ADE, ostensibly exactly. However, other authors have presented rigorous mathematical derivations of the dispersive effect of MIMT, applicable even in the case of arbitrarily fast mass transfer. We resolve the apparent contradiction between these seemingly exact derivations by adopting a Lagrangian point of view. We show that local equilibrium constrains the expected time immobile, whereas the retarded ADE actually embeds a stronger, nonphysical, constraint: that all particles spend the same amount of every time increment immobile. Eulerian derivations of the retarded ADE thus silently commit the gambler's fallacy, leading them to ignore dispersion due to mass transfer that is correctly modeled by other approaches. We then present a particle tracking simulation illustrating how poor an approximation the retarded ADE may be, even when mobile and immobile plumes are continually near local equilibrium. We note that classic "LEA" (actually, retarded ADE validity) criteria test for insignificance of MIMT-driven dispersion relative to hydrodynamic dispersion, rather than for local equilibrium. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
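
The dispersion that the retarded ADE misses is easy to reproduce with a toy particle-tracking run. A minimal sketch, assuming purely advective mobile transport and per-step exchange probabilities (all parameter values hypothetical): with D = 0 the retarded ADE predicts a plume of zero width sitting at x = vt/R, while the simulated plume centers there but has finite spread because each particle's immobile time is random:

```python
import numpy as np

rng = np.random.default_rng(0)

v, dt, n_steps, n_part = 1.0, 1.0, 1000, 5000
k_on, k_off = 0.1, 0.1      # per-step immobilization / remobilization probabilities
R = 1.0 + k_on / k_off      # equilibrium retardation factor (R = 2 here)

x = np.zeros(n_part)
mobile = rng.random(n_part) < k_off / (k_on + k_off)   # start at equilibrium

for _ in range(n_steps):
    x += np.where(mobile, v * dt, 0.0)     # advect only while mobile (D = 0)
    u = rng.random(n_part)
    mobile = np.where(mobile, u >= k_on, u < k_off)    # two-state Markov exchange

t = n_steps * dt
# Retarded ADE (with D = 0): every particle at x = v t / R, zero spread.
# The simulation centers there but shows MIMT-driven dispersion.
print(x.mean(), v * t / R, x.std())
```

The plume mean matches the retarded prediction, while the nonzero standard deviation is exactly the mass-transfer dispersion the retarded ADE silently discards.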

  9. Revisiting Cementoblastoma with a Rare Case Presentation

    Directory of Open Access Journals (Sweden)

    Vijayanirmala Subramani

    2017-01-01

    Full Text Available Cementoblastoma is a rare benign odontogenic neoplasm which is characterized by the proliferation of cellular cementum. Diagnosis of cementoblastoma is challenging because of its protracted clinical, radiographic features, and bland histological appearance; most often cementoblastoma is often confused with other cementum and bone originated lesions. The aim of this article is to overview/revisit, approach the diagnosis of cementoblastoma, and also present a unique radiographic appearance of a cementoblastoma lesion associated with an impacted tooth.

  10. Evaluation of incremental reactivity and its uncertainty in Southern California.

    Science.gov (United States)

    Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G

    2003-04-15

    The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
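
The Monte Carlo analysis with Latin hypercube sampling used for the uncertainty estimates can be sketched as follows. The two-parameter toy reactivity model and its 30%/10% lognormal uncertainties are illustrative assumptions, not the SAPRC99 mechanism; the point is that the output COV is dominated by the primary oxidation rate constant:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
n = 2000

def latin_hypercube(n_samples, n_dims, rng):
    """One sample per equal-probability stratum in every dimension."""
    strata = np.tile(np.arange(n_samples), (n_dims, 1))
    return (rng.permuted(strata, axis=1).T
            + rng.random((n_samples, n_dims))) / n_samples

u = latin_hypercube(n, 2, rng)
z = np.vectorize(NormalDist().inv_cdf)(u)   # uniform -> standard normal

# Toy reactivity model (illustrative): IR scales with the primary OH-oxidation
# rate constant (~30% lognormal uncertainty) and a weaker deposition factor (~10%).
k_oh = 2.0e4 * np.exp(0.30 * z[:, 0])
dep = 1.0 * np.exp(0.10 * z[:, 1])
ir = k_oh * dep

cov = ir.std() / ir.mean()      # coefficient of variation of the reactivity
print(round(float(cov), 2))
```

The resulting COV of roughly 30% sits in the range the study reports, and almost all of it traces back to the rate-constant uncertainty, mirroring the paper's finding.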

  11. Product Quality Modelling Based on Incremental Support Vector Machine

    International Nuclear Information System (INIS)

    Wang, J; Zhang, W; Qin, B; Shi, W

    2012-01-01

Incremental support vector machine (ISVM) learning is a method developed in recent years on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from the margin vectors to the final decision hyperplane is calculated to evaluate the importance of the margin vectors, and margin vectors are removed when their distance exceeds a specified value; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but also preserve the important ones. The MISVM has been tested on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve the prediction accuracy and training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
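
The pruning step described (score margin vectors by their distance to the decision hyperplane, drop the distant ones, retrain on the retained vectors plus the new batch) can be sketched with scikit-learn. The dataset, batch split, and retention threshold below are assumptions, and this is a simplification of the idea, not the authors' MISVM implementation:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, n_features=4, random_state=0)
X0, y0, X1, y1 = X[:400], y[:400], X[400:], y[400:]   # initial batch, new batch

svm = SVC(kernel="linear").fit(X0, y0)

# Keep only support vectors close to the separating hyperplane; far-away
# ones (typically badly misclassified, noisy points) are dropped before
# the incremental update.
sv = svm.support_
dist = np.abs(svm.decision_function(X0[sv]))
keep = sv[dist <= 1.5]           # retention threshold is an assumption

# Incremental update: retained vectors plus the newly arrived batch
X_up = np.vstack([X0[keep], X1])
y_up = np.concatenate([y0[keep], y1])
svm_up = SVC(kernel="linear").fit(X_up, y_up)

print(len(sv), len(keep), svm_up.score(X1, y1))
```

Only the retained vectors and the new batch are refit, which is the speed gain the abstract claims; the threshold trades off how aggressively noisy margin vectors are discarded.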

  12. Incremental Innovation and Competitive Pressure in the Presence of Discrete Innovation

    DEFF Research Database (Denmark)

    Ghosh, Arghya; Kato, Takao; Morita, Hodaka

    2017-01-01

    Technical progress consists of improvements made upon the existing technology (incremental innovation) and innovative activities aiming at entirely new technology (discrete innovation). Incremental innovation is often of limited relevance to the new technology invented by successful discrete...

  13. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    NARCIS (Netherlands)

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2006-01-01

Many product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  14. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    NARCIS (Netherlands)

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2007-01-01

Many product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  15. An Approach to Incremental Design of Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

In this paper we present an approach to incremental design of distributed embedded systems for hard real-time applications. We start from an already existing system running a set of applications, and the design problem is to implement new functionality on this system. Thus, we propose mapping strategies of functionality so that the already running functionality is not disturbed and there is a good chance that, later, new functionality can easily be mapped on the resulting system. The mapping and scheduling for hard real-time embedded systems are considered in the context of a realistic communication...

  16. From incremental to fundamental substitution in chemical alternatives assessment

    DEFF Research Database (Denmark)

    Fantke, Peter; Weber, Roland; Scheringer, Martin

    2015-01-01

    to similarity in chemical structures and, hence, similar hazard profiles between phase-out and substitute chemicals, leading to a rather incremental than fundamental substitution. A hampered phase-out process, the lack of implementing Green Chemistry principles in chemicals design, and lack of Sustainable...... an integrated approach of all stakeholders involved toward more fundamental and function-based substitution by greener and more sustainable alternatives. Our recommendations finally constitute a starting point for identifying further research needs and for improving current alternatives assessment practice....

  17. Automating the Incremental Evolution of Controllers for Physical Robots

    DEFF Research Database (Denmark)

    Faina, Andres; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    the evolution of digital objects.…” The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration...... of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range...

  18. Transferring the Incremental Capacity Analysis to Lithium-Sulfur Batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Kalogiannis, Theodoros; Purkayastha, Rajlakshmi

    2017-01-01

    In order to investigate the battery degradation and to estimate their health, various techniques can be applied. One of them, which is widely used for Lithium-ion batteries, is the incremental capacity analysis (ICA). In this work, we apply the ICA to Lithium-Sulfur batteries, which differ in many...... aspects from Lithium-ion batteries and possess unique behavior. One of the challenges of applying the ICA to Lithium-Sulfur batteries is the representation of the IC curves, as their voltage profiles are often non-monotonic, resulting in more complex IC curves. The ICA is at first applied to charge...

  19. Switch-mode High Voltage Drivers for Dielectric Electro Active Polymer (DEAP) Incremental Actuators

    DEFF Research Database (Denmark)

    Thummala, Prasanth

    voltage DC-DC converters for driving the DEAP based incremental actuators. The DEAP incremental actuator technology has the potential to be used in various industries, e.g., automotive, space and medicine. The DEAP incremental actuator consists of three electrically isolated and mechanically connected...

  20. 21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Short increment sensitivity index (SISI) adapter. 874.1070 Section 874.1070 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... increment sensitivity index (SISI) adapter. (a) Identification. A short increment sensitivity index (SISI...

  1. Incremental Learning of Skill Collections based on Intrinsic Motivation

    Directory of Open Access Journals (Sweden)

    Jan Hendrik Metzen

    2013-07-01

Full Text Available Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on him and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period.

  2. Optimal Output of Distributed Generation Based On Complex Power Increment

    Science.gov (United States)

    Wu, D.; Bao, H.

    2017-12-01

In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy sources, represented by wind and photovoltaic power generation, have been widely used. New energy generation is connected to the distribution network in the form of distributed generation and consumed by local load. However, as the scale of distributed generation connected to the network increases, the optimization of its power output becomes more and more prominent and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different power generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment; the essence of this method is the analysis of the power grid under steady power flow. After analyzing the results, we can obtain the complex scaling function equation between the power supplies. The coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  3. Phase retrieval via incremental truncated amplitude flow algorithm

    Science.gov (United States)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF algorithm and the TAF algorithm, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF respectively, and improves the performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it can obtain a higher success rate and faster convergence speed compared with other algorithms. Especially, for noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from magnitude measurements whose number is about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length). And it usually converges to the optimal solution within 20 iterations, which is much less than the state-of-the-art algorithms.
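As a concrete (and heavily simplified) illustration of the gradient stage described above, the sketch below runs a plain amplitude-flow loop on synthetic real-valued Gaussian data. The ITWF/TAF truncation rules, the incremental (mini-batch) updates, and the spectral initialization are all omitted; the iterate is started near the solution purely to demonstrate that the gradient stage converges. All names and parameter values are illustrative.

```python
import numpy as np

# Recover x from magnitude measurements y = |A x| by gradient descent on
# the amplitude loss (1/2m) * sum_i (|a_i^T z| - y_i)^2.
rng = np.random.default_rng(0)
n, m = 20, 100                      # m = 5n measurements, ample for recovery
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = np.abs(A @ x_true)

z = x_true + 0.3 * rng.standard_normal(n)   # stand-in for a spectral init
mu = 0.6                                     # step size
for _ in range(200):
    Az = A @ z
    grad = A.T @ ((np.abs(Az) - y) * np.sign(Az)) / m
    z = z - mu * grad

# The solution is only determined up to a global sign flip.
err = min(np.linalg.norm(z - x_true), np.linalg.norm(z + x_true))
rel_err = err / np.linalg.norm(x_true)
```

With this many measurements and a warm start, the loop converges to the true signal (up to sign) well within the 200 iterations.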

  4. Adaptive Incremental Genetic Algorithm for Task Scheduling in Cloud Environments

    Directory of Open Access Journals (Sweden)

    Kairong Duan

    2018-05-01

Full Text Available Cloud computing is a new commercial model that enables customers to acquire large amounts of virtual resources on demand. Resources including hardware and software can be delivered as services and measured by specific usage of storage, processing, bandwidth, etc. In Cloud computing, task scheduling is a process of mapping cloud tasks to Virtual Machines (VMs). When binding the tasks to VMs, the scheduling strategy has an important influence on the efficiency of the datacenter and related energy consumption. Although many traditional scheduling algorithms have been applied in various platforms, they may not work efficiently due to the large number of user requests, the variety of computation resources and the complexity of the Cloud environment. In this paper, we tackle the task scheduling problem, which aims to minimize makespan, using a Genetic Algorithm (GA). We propose an incremental GA which has adaptive probabilities of crossover and mutation. The mutation and crossover rates change according to generations and also vary between individuals. Large numbers of tasks are randomly generated to simulate various scales of the task scheduling problem in a Cloud environment. Based on the instance types of Amazon EC2, we implemented virtual machines with different computing capacities on CloudSim. We compared the performance of the adaptive incremental GA with that of Standard GA, Min-Min, Max-Min, Simulated Annealing and the Artificial Bee Colony Algorithm in finding the optimal scheme. Experimental results show that the proposed algorithm can achieve feasible solutions which have acceptable makespan with less computation time.
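The adaptive-rate idea, with crossover and mutation probabilities changing over generations and varying between individuals, can be sketched as follows. The particular formulas (linear decay over the run, scaled by the individual's relative fitness) are our own illustrative choices, not the paper's.

```python
# Illustrative sketch only: rates shrink as the run progresses, and
# below-average individuals keep higher rates to promote exploration.
# For makespan minimization, fitness could be e.g. 1/makespan.

def adaptive_rates(generation, max_generations, fitness, avg_fitness,
                   pc_range=(0.6, 0.9), pm_range=(0.01, 0.1)):
    """Return (crossover_rate, mutation_rate) for one individual."""
    progress = generation / max_generations          # 0 -> 1 over the run
    # Relative fitness in (0, 1]; fitter individuals get lower rates.
    rel = min(fitness / avg_fitness, 1.0) if avg_fitness > 0 else 1.0
    pc = pc_range[1] - (pc_range[1] - pc_range[0]) * progress * rel
    pm = pm_range[1] - (pm_range[1] - pm_range[0]) * progress * rel
    return pc, pm

# Early in the run: maximal exploration rates.
pc0, pm0 = adaptive_rates(0, 100, fitness=1.0, avg_fitness=1.0)
# Late in the run, for an average individual: minimal rates.
pcN, pmN = adaptive_rates(100, 100, fitness=1.0, avg_fitness=1.0)
```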

  5. Incremental support vector machines for fast reliable image recognition

    International Nuclear Information System (INIS)

    Makili, L.; Vega, J.; Dormido-Canto, S.

    2013-01-01

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system, based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such conformal predictor based classifier is a computationally intensive task since it implies to train several SVM models to classify a single example and to perform this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to the one corresponding to the use of standard SVM as the underlying algorithm and there is a significant improvement in time efficiency

  6. An Incremental Weighted Least Squares Approach to Surface Lights Fields

    Science.gov (United States)

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
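The incremental aspect of the construction algorithm, processing samples as they arrive rather than in batch, can be sketched with a node that accumulates weighted normal equations and can be re-solved at any time. The linear basis, the fixed weights, and the class name `WLSNode` are simplified stand-ins for the paper's polynomial representation and weighting.

```python
import numpy as np

class WLSNode:
    """Accumulates A = sum w*x*x^T and b = sum w*x*y sample-by-sample."""
    def __init__(self, dim):
        self.A = np.zeros((dim, dim))
        self.b = np.zeros(dim)

    def add_sample(self, x, y, w=1.0):
        x = np.asarray(x, dtype=float)
        self.A += w * np.outer(x, x)
        self.b += w * x * y

    def solve(self):
        # Tiny ridge term guards against rank deficiency early on.
        return np.linalg.solve(self.A + 1e-9 * np.eye(len(self.b)), self.b)

# Fit y = 2*u + 3*v incrementally from a stream of samples; the model can
# be previewed (solved) after any prefix of the stream.
node = WLSNode(dim=2)
for u, v in [(1, 0), (0, 1), (1, 1), (2, 1)]:
    node.add_sample([u, v], 2 * u + 3 * v)
coeffs = node.solve()
```

Because only the small normal-equation matrices are stored, images can be discarded after processing, which is what makes the human-in-the-loop preview cheap.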

  7. STS-102 Expedition 2 Increment and Science Briefing

    Science.gov (United States)

    2001-01-01

    Merri Sanchez, Expedition 2 Increment Manager, John Uri, Increment Scientist, and Lybrease Woodard, Lead Payload Operations Director, give an overview of the upcoming activities and objectives of the Expedition 2's (E2's) mission in this prelaunch press conference. Ms. Sanchez describes the crew rotation of Expedition 1 to E2, the timeline E2 will follow during their stay on the International Space Station (ISS), and the various flights going to the ISS and what each will bring to ISS. Mr. Uri gives details on the on-board experiments that will take place on the ISS in the fields of microgravity research, commercial, earth, life, and space sciences (such as radiation characterization, H-reflex, colloids formation and interaction, protein crystal growth, plant growth, fermentation in microgravity, etc.). He also gives details on the scientific facilities to be used (laboratory racks and equipment such as the human torso facsimile or 'phantom torso'). Ms. Woodard gives an overview of Marshall Flight Center's role in the mission. Computerized simulations show the installation of the Space Station Remote Manipulator System (SSRMS) onto the ISS and the installation of the airlock using SSRMS. Live footage shows the interior of the ISS, including crew living quarters, the Progress Module, and the Destiny Laboratory. The three then answer questions from the press.

  8. Efficient incremental relaying for packet transmission over fading channels

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-07-01

In this paper, we propose a novel relaying scheme for packet transmission over fading channels, which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying (EIR) scheme with both amplify-and-forward and decode-and-forward relaying. We compare the performance of the EIR scheme with the threshold-based incremental relaying (TIR) scheme. It is shown that the efficiency of the TIR scheme is better for lower values of the threshold. However, the efficiency of the TIR scheme for higher values of the threshold is outperformed by the EIR. In addition, three new threshold-based adaptive EIR schemes are devised to further improve the efficiency of the EIR scheme. We calculate the packet error rate and the efficiency of these new schemes to provide analytical insight. © 2014 IEEE.
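The threshold mechanism underlying incremental relaying can be illustrated with a minimal Monte Carlo sketch, assuming Rayleigh fading (exponentially distributed instantaneous SNR) and made-up parameter values: the relay transmits only when the direct link's SNR falls below the threshold.

```python
import random

random.seed(1)

def relay_activation_fraction(avg_snr=10.0, threshold=5.0, trials=100_000):
    """Fraction of packets for which the relay phase is invoked."""
    relay_used = 0
    for _ in range(trials):
        # Rayleigh fading: instantaneous SNR is exponential with mean avg_snr.
        snr_direct = random.expovariate(1.0 / avg_snr)
        if snr_direct < threshold:
            relay_used += 1
    return relay_used / trials

frac = relay_activation_fraction()
# Analytically, P(relay) = 1 - exp(-threshold/avg_snr) = 1 - e^{-0.5} ~ 0.3935,
# so roughly 39% of packets need the second (relayed) channel use here.
```

The trade-off the abstract discusses follows directly: a lower threshold invokes the relay less often (better spectral efficiency) at the cost of a higher packet error rate.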

  9. Incremental learning of concept drift in nonstationary environments.

    Science.gov (United States)

    Elwell, Ryan; Polikar, Robi

    2011-10-01

We introduce an ensemble of classifiers-based approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from such environments that experience constant or variable rate of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, as other members of the Learn++ family of algorithms, that is, without requiring access to previously seen data. Learn++.NSE trains one new classifier for each batch of data it receives, and combines these classifiers using a dynamically weighted majority voting. The novelty of the approach is in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and act accordingly, to the changes in underlying data distributions, as well as to a possible reoccurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn++.NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release our data used in this paper. © 2011 IEEE
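A toy sketch of the dynamically weighted majority vote: each ensemble member's recent errors are averaged with sigmoid weights that emphasize the newest environments, and the resulting error becomes a log-odds voting weight. The sigmoid constants, error histories, and labels below are illustrative; the exact weighting formulas in Learn++.NSE differ in detail.

```python
import math
from collections import Counter

def weighted_majority_vote(predictions, error_histories, a=0.5, b=0.0):
    """predictions: predicted label per ensemble member.
    error_histories: per member, errors on past batches (newest last)."""
    votes = Counter()
    for pred, errs in zip(predictions, error_histories):
        k = len(errs)
        ages = [k - 1 - j for j in range(k)]             # 0 = newest batch
        # Sigmoid weights: recent batches count more than old ones.
        sig = [1.0 / (1.0 + math.exp(a * (age - b))) for age in ages]
        total = sum(sig)
        eps = sum(s / total * e for s, e in zip(sig, errs))
        eps = min(max(eps, 1e-6), 1.0 - 1e-6)
        votes[pred] += math.log((1.0 - eps) / eps)       # log-odds weight
    return votes.most_common(1)[0][0]

# Member 1 (predicts 'drift') has been accurate on recent batches, so its
# vote outweighs member 2, which has degraded as the environment changed.
label = weighted_majority_vote(['drift', 'stable'],
                               [[0.40, 0.10, 0.05], [0.10, 0.45, 0.50]])
```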

  10. Incremental learning of skill collections based on intrinsic motivation

    Science.gov (United States)

    Metzen, Jan H.; Kirchner, Frank

    2013-01-01

    Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on him and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period. PMID:23898265

  11. Incremental support vector machines for fast reliable image recognition

    Energy Technology Data Exchange (ETDEWEB)

    Makili, L., E-mail: makili_le@yahoo.com [Instituto Superior Politécnico da Universidade Katyavala Bwila, Benguela (Angola); Vega, J. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Dormido-Canto, S. [Dpto. Informática y Automática – UNED, Madrid (Spain)

    2013-10-15

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system, based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such conformal predictor based classifier is a computationally intensive task since it implies to train several SVM models to classify a single example and to perform this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to the one corresponding to the use of standard SVM as the underlying algorithm and there is a significant improvement in time efficiency.

  12. Incremental first pass technique to measure left ventricular ejection fraction

    International Nuclear Information System (INIS)

    Kocak, R.; Gulliford, P.; Hoggard, C.; Critchley, M.

    1980-01-01

An incremental first pass technique was devised to assess the acute effects of any drug on left ventricular ejection fraction (LVEF), with or without a physiological stress. In particular, the effects of the vasodilator isosorbide dinitrate on LVEF before and after exercise were studied in 11 patients who had suffered cardiac failure. This was achieved by recording the passage of 99mTc pertechnetate through the heart at each stage of the study using a gamma camera computer system. Consistent values for four consecutive first pass measurements without exercise or drug in normal subjects illustrated the reproducibility of the technique. There was no significant difference between LVEF values obtained at rest and exercise before or after oral isosorbide dinitrate, with the exception of one patient with gross mitral regurgitation. The advantages of the incremental first pass technique are that the patient need not be in sinus rhythm, the effects of physiological intervention may be studied, and tests may also be repeated at various intervals during long-term follow-up of patients. A disadvantage of the method is the limitation in the number of sequential measurements which can be carried out due to the amount of radioactivity injected. (U.K.)

  13. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    KAUST Repository

    Zhang, Chuxu

    2017-08-02

Predicting the fast-rising young researchers (the Academic Rising Stars) in the future provides useful guidance to the research community, e.g., offering competitive candidates to universities for young faculty hiring, as they are expected to have successful academic careers. In this work, given a set of young researchers who have published their first first-author paper recently, we solve the problem of how to effectively predict the top k% researchers who achieve the highest citation increment in Δt years. We explore a series of factors that can drive an author to be fast-rising and design a novel pairwise citation increment ranking (PCIR) method that leverages those factors to predict the academic rising stars. Experimental results on the large ArnetMiner dataset with over 1.7 million authors demonstrate the effectiveness of PCIR. Specifically, it outperforms all given benchmark methods, with over 8% average improvement. Further analysis demonstrates that temporal features are the best indicators for rising stars prediction, while venue features are less relevant.

  14. Modification and Validation of the Triglyceride-to-HDL Cholesterol Ratio as a Surrogate of Insulin Sensitivity in White Juveniles and Adults without Diabetes Mellitus: The Single Point Insulin Sensitivity Estimator (SPISE).

    Science.gov (United States)

    Paulmichl, Katharina; Hatunic, Mensud; Højlund, Kurt; Jotic, Aleksandra; Krebs, Michael; Mitrakou, Asimina; Porcellati, Francesca; Tura, Andrea; Bergsten, Peter; Forslund, Anders; Manell, Hannes; Widhalm, Kurt; Weghuber, Daniel; Anderwald, Christian-Heinz

    2016-09-01

The triglyceride-to-HDL cholesterol (TG/HDL-C) ratio was introduced as a tool to estimate insulin resistance, because circulating lipid measurements are available in routine settings. Insulin, C-peptide, and free fatty acids are components of other insulin-sensitivity indices, but their measurement is expensive. Easier and more affordable tools are of interest for both pediatric and adult patients. Study participants from the Relationship Between Insulin Sensitivity and Cardiovascular Disease [43.9 (8.3) years, n = 1260] as well as the Beta-Cell Function in Juvenile Diabetes and Obesity study cohorts [15 (1.9) years, n = 29] underwent oral-glucose-tolerance tests and euglycemic clamp tests for estimation of whole-body insulin sensitivity and calculation of insulin sensitivity indices. To refine the TG/HDL ratio, mathematical modeling was applied including body mass index (BMI), fasting TG, and HDL cholesterol and compared to the clamp-derived M-value as an estimate of insulin sensitivity. Each modeling result was scored by identifying insulin resistance and correlation coefficient. The Single Point Insulin Sensitivity Estimator (SPISE) was compared to traditional insulin sensitivity indices using area under the ROC curve (aROC) analysis and the χ^2 test. The novel formula for SPISE was computed as follows: SPISE = 600 × HDL-C^0.185/(TG^0.2 × BMI^1.338), with fasting HDL-C (mg/dL), fasting TG concentrations (mg/dL), and BMI (kg/m^2). A cutoff value of 6.61 corresponds to an M-value smaller than 4.7 mg·kg^-1·min^-1 (aROC, M: 0.797). SPISE showed a significantly better aROC than the TG/HDL-C ratio. SPISE aROC was comparable to the Matsuda ISI (insulin sensitivity index) and equal to the QUICKI (quantitative insulin sensitivity check index) and HOMA-IR (homeostasis model assessment-insulin resistance) when calculated with M-values. The SPISE seems well suited to surrogate whole-body insulin sensitivity from an inexpensive fasting single-point blood draw and BMI.
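The published formula is easy to evaluate directly from a single fasting blood draw plus BMI; the subject values below are illustrative only, not taken from the study.

```python
def spise(hdl_c_mg_dl: float, tg_mg_dl: float, bmi_kg_m2: float) -> float:
    """Single Point Insulin Sensitivity Estimator, as given in the abstract:
    SPISE = 600 * HDL-C^0.185 / (TG^0.2 * BMI^1.338)
    with HDL-C and TG in mg/dL and BMI in kg/m^2."""
    return 600.0 * hdl_c_mg_dl**0.185 / (tg_mg_dl**0.2 * bmi_kg_m2**1.338)

# Illustrative subject: HDL-C 55 mg/dL, TG 100 mg/dL, BMI 24 kg/m^2.
value = spise(55.0, 100.0, 24.0)
# Per the abstract, SPISE at or below the 6.61 cutoff flags insulin
# resistance (clamp M-value below 4.7 mg/kg/min).
insulin_resistant = value <= 6.61
```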

  15. Changes in composition, ecology and structure of high-mountain vegetation: a re-visitation study over 42 years.

    Science.gov (United States)

    Evangelista, Alberto; Frate, Ludovico; Carranza, Maria Laura; Attorre, Fabio; Pelino, Giovanni; Stanisci, Angela

    2016-01-27

High-mountain ecosystems are increasingly threatened by climate change, causing biodiversity loss, habitat degradation and landscape modifications. However, very few detailed studies have focussed on plant biodiversity in the high mountains of the Mediterranean. In this study, we investigated the long-term changes that have occurred in the composition, structure and ecology of high-mountain vegetation in the central Apennines (Majella) over the last 42 years. We performed a re-visitation study, using historical and newly collected vegetation data to explore which ecological and structural features have been the most successful in coping with climatic changes. Vegetation changes were analysed by comparing geo-referenced phytosociological relevés collected in high-mountain habitats (dolines, gentle slopes and ridges) on the Majella massif in 1972 and in 2014. Composition analysis was performed by detrended correspondence analysis, followed by an analysis of similarities for statistical significance assessment and by the similarity percentage procedure (SIMPER) for identifying which species indicate temporal changes. Changes in ecological and structural indicators were analysed by a permutational multivariate analysis of variance, followed by a post hoc comparison. Over the last 42 years, clear floristic changes and significant ecological and structural variations occurred. We observed a significant increase in the thermophilic and mesonitrophilic plant species and an increment in the frequencies of hemicryptophytes. This re-visitation study in the Apennines agrees with observations in other alpine ecosystems, providing new insights for a better understanding of the effects of global change on Mediterranean high-mountain biodiversity. The observed changes in floristic composition, the thermophilization process and the shift towards a more nutrient-demanding vegetation are likely attributable to the combined effect of higher temperatures and the increase in soil nutrients.

  16. Single point mutations distributed in 10 soluble and membrane regions of the Nicotiana plumbaginifolia plasma membrane PMA2 H+-ATPase activate the enzyme and modify the structure of the C-terminal region.

    Science.gov (United States)

    Morsomme, P; Dambly, S; Maudoux, O; Boutry, M

    1998-12-25

The Nicotiana plumbaginifolia pma2 (plasma membrane H+-ATPase) gene is capable of functionally replacing the H+-ATPase genes of the yeast Saccharomyces cerevisiae, provided that the external pH is kept above 5.0. Single point mutations within the pma2 gene were previously identified that improved H+-ATPase activity and allowed yeast growth at pH 4.0. The aim of the present study was to identify most of the PMA2 positions, the mutation of which would lead to improved growth, and to determine whether all these mutations result in similar enzymatic and structural modifications. We selected additional mutants, obtaining in total 42 distinct point mutations localized in 30 codons. They were distributed in 10 soluble and membrane regions of the enzyme. Most mutant PMA2 H+-ATPases were characterized by a higher specific activity, lower inhibition by ADP, and lower stimulation by lysophosphatidylcholine than wild-type PMA2. The mutants thus seem to be constitutively activated. Partial tryptic digestion and immunodetection showed that the PMA2 mutants had a conformational change making the C-terminal region more accessible. These data therefore support the hypothesis that point mutations in various H+-ATPase parts displace the inhibitory C-terminal region, resulting in enzyme activation. The high density of mutations within the first half of the C-terminal region suggests that this part is involved in the interaction between the inhibitory C-terminal region and the rest of the enzyme.

  17. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

    International Nuclear Information System (INIS)

    Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.

    2007-01-01

In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes that are typically used in sheet metal forming simulation, are evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping method (IVR), which was implemented in the in-house code DD3TRIM. The IVR method is founded on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state variable quantities placed at the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of the Gauss volume of each donor element that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is attained incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables by using the finite element shape functions or moving least squares interpolants. The performance of the three different remapping strategies is addressed with two tests. The first remapping test was taken from the literature. It consists in successively remapping a rotating symmetrical mesh, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular element shape target mesh from a given regular element shape donor mesh and proceeding with the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low and with a stable evolution along the number of remapping procedures when compared with the
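The weighted-average rule at the core of IVR can be illustrated in one dimension, with intervals standing in for Gauss volumes: the value assigned to a target volume is the average of donor values weighted by the fraction of each donor volume that lies inside it. The incremental intersection computation itself, and the function names below, are our own simplifications.

```python
def overlap(a, b):
    """Length of the intersection of intervals a = (a0, a1), b = (b0, b1)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def remap(target, donors):
    """Weighted average over donors, each given as ((lo, hi), value);
    weights are the overlap lengths with the target interval."""
    weighted = [(overlap(interval, target), v) for interval, v in donors]
    total = sum(w for w, _ in weighted)
    return sum(w * v for w, v in weighted) / total if total > 0 else 0.0

donors = [((0.0, 1.0), 10.0), ((1.0, 2.0), 20.0)]
# The target volume straddles both donors equally, so the remapped value
# is the simple average of the two donor values.
value = remap((0.5, 1.5), donors)
```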

  18. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

    Time series datasets are a canonical form of high velocity Big Data, and often generated by pervasive sensors, such as found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
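A toy sketch of the incremental idea: each incoming series joins the nearest existing centroid if its affinity clears a threshold, otherwise it seeds a new cluster, so clusters form in a single streaming pass. The inverse-distance affinity and threshold here are made-up stand-ins for the paper's affinity score and optimizations.

```python
import numpy as np

def incremental_cluster(series_stream, threshold=0.5):
    """Assign each series in the stream to a cluster on arrival."""
    centroids, counts, labels = [], [], []
    for s in series_stream:
        s = np.asarray(s, dtype=float)
        best, best_aff = None, -1.0
        for i, c in enumerate(centroids):
            aff = 1.0 / (1.0 + np.linalg.norm(s - c))   # affinity in (0, 1]
            if aff > best_aff:
                best, best_aff = i, aff
        if best is None or best_aff < threshold:
            centroids.append(s.copy())                   # seed a new cluster
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            counts[best] += 1
            centroids[best] += (s - centroids[best]) / counts[best]  # running mean
            labels.append(best)
    return labels, centroids

# Two natural groups arrive interleaved; the stream is clustered in one pass.
stream = [[0, 0], [0.2, 0], [5, 5], [5.1, 4.9], [0.1, 0.1]]
labels, centroids = incremental_cluster(stream)
```

Prediction would then be run per cluster on the cumulative (summed) series of its members, which is where the error reduction the abstract reports comes from.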

  19. Incremental and developmental perspectives for general-purpose learning systems

    Directory of Open Access Journals (Sweden)

    Fernando Martínez-Plumed

    2017-02-01

    Full Text Available The stupefying success of Artificial Intelligence (AI) for specific problems, from recommender systems to self-driving cars, has not yet been matched with similar progress in general AI systems that cope with a variety of different problems. This dissertation deals with the long-standing problem of creating more general AI systems, through the analysis of their development and the evaluation of their cognitive abilities. It presents a declarative general-purpose learning system and a developmental and lifelong approach for knowledge acquisition, consolidation and forgetting. It also analyses the use of more ability-oriented evaluation techniques for AI evaluation and provides further insight into the concepts of development and incremental learning in AI systems.

  20. Improving process performance in Incremental Sheet Forming (ISF)

    International Nuclear Information System (INIS)

    Ambrogio, G.; Filice, L.; Manco, G. L.

    2011-01-01

    Incremental Sheet Forming (ISF) is a relatively new process in which a sheet clamped along its borders is progressively deformed by a hemispherical tool. The tool motion is CNC controlled and the path is designed using a CAD-CAM approach, with the aim of reproducing the final shape contour, as in surface milling. The absence of a dedicated setup and the resulting high flexibility are the main strengths of the process and the reason why several researchers have focused their attention on ISF. On the other hand, the slowness of the process is its most relevant drawback and limits wider industrial application. In this paper, a first attempt to overcome this limitation is presented, considering a significant speed increase with respect to the values currently used.

  1. Advanced Change Theory Revisited: An Article Critique

    Directory of Open Access Journals (Sweden)

    R. Scott Pochron

    2008-12-01

    Full Text Available The complexity of life in 21st century society requires new models for leading and managing change. With that in mind, this paper revisits the model for Advanced Change Theory (ACT) as presented by Quinn, Spreitzer, and Brown in their article, “Changing Others Through Changing Ourselves: The Transformation of Human Systems” (2000). The authors present ACT as a potential model for facilitating change in complex organizations. This paper presents a critique of the article and summarizes opportunities for further exploring the model in the light of current trends in developmental and integral theory.

  2. Resolution of Reflection Seismic Data Revisited

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Zunino, Andrea

    The Rayleigh Principle states that the minimum separation between two reflectors that allows them to be visually separated is the separation where the wavelet maxima from the two superimposed reflections combine into one maximum. This happens around Δtres = λb/8, where λb is the predominant...... lower vertical resolution of reflection seismic data. In the following we will revisit the thin layer model and demonstrate that there is in practice no limit to the vertical resolution using the parameterization of Widess (1973), and that the vertical resolution is limited by the noise in the data...

  3. Revisiting fifth forces in the Galileon model

    Energy Technology Data Exchange (ETDEWEB)

    Burrage, Clare [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie; Seery, David [Sussex Univ., Brighton (United Kingdom). Dept. of Physics and Astronomy

    2010-05-15

    A Galileon field is one which obeys a spacetime generalization of the non-relativistic Galilean invariance. Such a field may possess non-canonical kinetic terms, but ghost-free theories with a well-defined Cauchy problem exist, constructed using a finite number of relevant operators. The interactions of this scalar with matter are hidden by the Vainshtein effect, causing the Galileon to become weakly coupled near heavy sources. We revisit estimates of the fifth force mediated by a Galileon field, and show that the parameters of the model are less constrained by experiment than previously supposed. (orig.)

  4. Large J expansion in ABJM theory revisited.

    Science.gov (United States)

    Dimov, H; Mladenov, S; Rashkov, R C

    Recently there has been progress in the computation of the anomalous dimensions of gauge theory operators at strong coupling by making use of the AdS/CFT correspondence. On the string theory side they are given by dispersion relations in the semiclassical regime. We revisit the problem of a large-charge expansion of the dispersion relations for simple semiclassical strings in an [Formula: see text] background. We present the calculation of the corresponding anomalous dimensions of the gauge theory operators to an arbitrary order using three different methods. Although the results of the three methods look different, power series expansions show their consistency.

  5. Sloan Digital Sky Survey Photometric Calibration Revisited

    International Nuclear Information System (INIS)

    Marriner, John

    2012-01-01

    The Sloan Digital Sky Survey calibration is revisited to obtain the most accurate photometric calibration. A small but significant error is found in the flat-fielding of the Photometric telescope used for calibration. Two SDSS star catalogs are compared and the average difference in magnitude as a function of right ascension and declination exhibits small systematic errors in relative calibration. The photometric transformation from the SDSS Photometric Telescope to the 2.5 m telescope is recomputed and compared to synthetic magnitudes computed from measured filter bandpasses.

  6. Revisiting the Political Economy of Communication

    Directory of Open Access Journals (Sweden)

    Nicholas Garnham

    2014-02-01

    The task of the paper and the seminar was to revisit some of Nicholas Garnham’s ideas, writings and contributions to the study of the Political Economy of Communication and to reflect on the concepts, history, current status and perspectives of this field and the broader study of political economy today. The topics covered include Raymond Williams’ cultural materialism, Pierre Bourdieu’s sociology of culture, the debate between Political Economy and Cultural Studies, information society theory, Karl Marx’s theory and the critique of capitalism.

  7. Sloan Digital Sky Survey Photometric Calibration Revisited

    Energy Technology Data Exchange (ETDEWEB)

    Marriner, John; /Fermilab

    2012-06-29

    The Sloan Digital Sky Survey calibration is revisited to obtain the most accurate photometric calibration. A small but significant error is found in the flat-fielding of the Photometric telescope used for calibration. Two SDSS star catalogs are compared and the average difference in magnitude as a function of right ascension and declination exhibits small systematic errors in relative calibration. The photometric transformation from the SDSS Photometric Telescope to the 2.5 m telescope is recomputed and compared to synthetic magnitudes computed from measured filter bandpasses.

  8. A user-friendly tool for incremental haemodialysis prescription.

    Science.gov (United States)

    Casino, Francesco Gaetano; Basile, Carlo

    2018-01-05

    There has recently been heightened interest in incremental haemodialysis (IHD), the main advantage of which could likely be a better preservation of the residual kidney function of the patients. The implementation of IHD, however, is hindered by many factors, among them the mathematical complexity of its prescription. The aim of our study was to design a user-friendly tool for IHD prescription, consisting of only a few rows of a common spreadsheet. The keystone of our spreadsheet was the following fundamental concept: the dialysis dose to be prescribed in IHD depends only on the normalized urea clearance provided by the native kidneys (KRUn) of the patient for each frequency of treatment, according to the variable target model recently proposed by Casino and Basile (The variable target model: a paradigm shift in the incremental haemodialysis prescription. Nephrol Dial Transplant 2017; 32: 182-190). The first step was to put in sequence a series of equations in order to calculate, firstly, KRUn and, then, the key parameters to be prescribed for an adequate IHD; the second step was to compare KRUn values obtained with our spreadsheet with KRUn values obtainable with the gold standard Solute-solver (Daugirdas JT et al., Solute-solver: a web-based tool for modeling urea kinetics for a broad range of hemodialysis schedules in multiple patients. Am J Kidney Dis 2009; 54: 798-809) in a sample of 40 incident haemodialysis patients. Our spreadsheet provided excellent results. The differences with Solute-solver were clinically negligible. This was confirmed by the Bland-Altman plot built to analyse the agreement between KRUn values obtained with the two methods: the difference was 0.07 ± 0.05 mL/min/35 L. Our spreadsheet is a user-friendly tool able to provide clinically acceptable results in IHD prescription. Two immediate consequences could follow: (i) a larger dissemination of IHD might occur; and (ii) our spreadsheet could represent a useful tool for an ineludibly
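
    The agreement analysis used above can be sketched in a few lines: a minimal Bland-Altman computation (bias and standard deviation of the paired differences). The numbers here are made up for illustration, not the study's KRUn data.

```python
# Minimal Bland-Altman-style agreement check between two paired measurement
# series, as used in the abstract to compare the spreadsheet with Solute-solver.

def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean, sd  # bias and SD; limits of agreement are mean +/- 1.96*sd

# Illustrative paired values (e.g. two methods measuring the same quantity):
bias, sd = bland_altman([2.0, 2.1, 1.9, 2.2], [1.95, 2.0, 1.85, 2.12])
print(round(bias, 2))  # 0.07
```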

  9. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    OpenAIRE

    Vermeulen, Patrick; Bosch, Frans; Volberda, Henk

    2007-01-01

    textabstractMany product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation. In this paper, we use an institutional perspective to investigate why established firms in the financial services industry struggle with their complex incremental product innovation efforts. We ar...

  10. Incremental cost of PACS in a medical intensive care unit

    Science.gov (United States)

    Langlotz, Curtis P.; Cleff, Bridget; Even-Shoshan, Orit; Bozzo, Mary T.; Redfern, Regina O.; Brikman, Inna; Seshadri, Sridhar B.; Horii, Steven C.; Kundel, Harold L.

    1995-05-01

    Our purpose is to determine the incremental costs (or savings) due to the introduction of picture archiving and communication systems (PACS) and computed radiography (CR) in a medical intensive care unit (MICU). Our economic analysis consists of three measurement methods. The first method is an assessment of the direct costs to the radiology department, implemented in a spreadsheet model. The second method consists of a series of brief observational studies to measure potential changes in personnel costs that might not be reflected in administrative claims. The third method (results not reported here) is a multivariate modeling technique which estimates the independent effect of PACS/CR on the cost of care (estimated from administrative claims data), while controlling for clinical case-mix variables. Our direct cost model shows no cost savings to the radiology department after the introduction of PACS in the medical intensive care unit. Savings in film supplies and film library personnel are offset by increases in capital equipment costs and PACS operation personnel. The results of observational studies to date demonstrate significant savings in clinician film-search time, but no significant change in technologist time or lost films. Our model suggests that direct radiology costs will increase after the limited introduction of PACS/CR in the MICU. Our observational studies show a small but significant effect on clinician film-search time from the introduction of PACS/CR in the MICU, but no significant effect on other variables. The projected costs of a hospital-wide PACS are currently under study.

  11. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Science.gov (United States)

    Gan, Wensheng; Zhang, Binbin

    2015-01-01

    Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the purchased items from a customer may have various factors, such as profit or quantity. High-utility mining was designed to solve the limitations of association-rule mining by considering both the quantity and profit measures. Most algorithms of high-utility mining are designed to handle the static database. Fewer researches handle the dynamic high-utility mining with transaction insertion, thus requiring the computations of database rescan and combination explosion of pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation based on the utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038

  12. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab

    2017-08-22

    Frequent subgraph mining is a core graph operation used in many domains, such as graph data management and knowledge exploration, bioinformatics and security. Most existing techniques target static graphs. However, modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach to the continuous frequent subgraph mining problem on a single large evolving graph. We adapt the notion of “fringe” to the graph context, that is, the set of subgraphs on the border between frequent and infrequent subgraphs. IncGM+ maintains fringe subgraphs and exploits them to prune the search space. To boost the efficiency, we propose an efficient index structure to maintain selected embeddings with minimal memory overhead. These embeddings are utilized to avoid redundant expensive subgraph isomorphism operations. Moreover, the proposed system supports batch updates. Using large real-world graphs, we experimentally verify that IncGM+ outperforms existing methods by up to three orders of magnitude, scales to much larger graphs and consumes less memory.

  13. A novel instrument for generating angular increments of 1 nanoradian

    Science.gov (United States)

    Alcock, Simon G.; Bugnar, Alex; Nistea, Ioana; Sawhney, Kawal; Scott, Stewart; Hillman, Michael; Grindrod, Jamie; Johnson, Iain

    2015-12-01

    Accurate generation of small angles is of vital importance for calibrating angle-based metrology instruments used in a broad spectrum of industries including mechatronics, nano-positioning, and optic fabrication. We present a novel, piezo-driven, flexure device capable of reliably generating micro- and nanoradian angles. Unlike many such instruments, Diamond Light Source's nano-angle generator (Diamond-NANGO) does not rely on two separate actuators or rotation stages to provide coarse and fine motion. Instead, a single Physik Instrumente NEXLINE "PiezoWalk" actuator provides millimetres of travel with nanometre resolution. A cartwheel flexure efficiently converts displacement from the linear actuator into rotary motion with minimal parasitic errors. Rotation of the flexure is directly measured via a Magnescale "Laserscale" angle encoder. Closed-loop operation of the PiezoWalk actuator, using high-speed feedback from the angle encoder, ensures that the Diamond-NANGO's output drifts by only ˜0.3 nrad rms over ˜30 min. We show that the Diamond-NANGO can reliably move with unprecedented 1 nrad (˜57 ndeg) angular increments over a range of >7000 μrad. An autocollimator, interferometer, and capacitive displacement sensor are used to independently confirm the Diamond-NANGO's performance by simultaneously measuring the rotation of a reflective cube.

  14. An incremental anomaly detection model for virtual machines

    Science.gov (United States)

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, Cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which give the algorithm low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments have been performed on the common benchmark KDD Cup dataset and a real dataset. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245
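
    The Weighted Euclidean Distance named above can be illustrated directly: features judged more informative receive larger weights when matching an input to a SOM node. This is a generic sketch of the distance itself, not the paper's weighting scheme; the weights below are arbitrary.

```python
# Weighted Euclidean Distance (WED) sketch: per-feature weights scale each
# squared difference before the square root, so heavily weighted features
# dominate the match between an input vector and a SOM node's weight vector.

def wed(x, y, weights):
    return sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)) ** 0.5

# With weight 1.0 on the first feature and 0.25 on the second, a difference
# of 2 in the second feature counts the same as a difference of 1 in the first:
print(wed([1, 2], [2, 4], [1.0, 0.25]))  # 1.4142135623730951
```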

  15. Validation of daily increments periodicity in otoliths of spotted gar

    Science.gov (United States)

    Snow, Richard A.; Long, James M.; Frenette, Bryan D.

    2017-01-01

    Accurate age and growth information is essential in successful management of fish populations and for understanding early life history. We validated daily increment deposition, including the timing of first ring formation, for spotted gar (Lepisosteus oculatus) through 127 days post hatch. Fry were produced from hatchery-spawned specimens, and up to 10 individuals per week were sacrificed and their otoliths (sagitta, lapillus, and asteriscus) removed for daily age estimation. Daily age estimates for all three otolith pairs were significantly related to known age. The strongest relationships existed for measurements from the sagitta (r2 = 0.98) and the lapillus (r2 = 0.99), with the asteriscus (r2 = 0.95) the lowest. All age prediction models resulted in a slope near unity, indicating that ring deposition occurred approximately daily. Initiation of ring formation varied among otolith types, with deposition beginning 3, 7, and 9 days for the sagitta, lapillus, and asteriscus, respectively. Results of this study suggested that otoliths are useful to estimate daily age of spotted gar juveniles; these data may be used to back calculate hatch dates, estimate early growth rates, and correlate with environmental factors that influence spawning in wild populations. This early life history information will be valuable in better understanding the ecology of this species.
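
    The "slope near unity" criterion above amounts to regressing counted increments on known age and checking the least-squares slope against 1. A minimal sketch with invented numbers (not the study's data):

```python
# Least-squares slope of ring count vs. known age: a slope close to 1
# indicates approximately one increment deposited per day.

def ls_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

known_age = [10, 20, 30]   # days post hatch (illustrative)
ring_count = [10, 21, 29]  # counted increments (illustrative)
print(ls_slope(known_age, ring_count))  # 0.95
```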

  16. An incremental anomaly detection model for virtual machines.

    Directory of Open Access Journals (Sweden)

    Hancui Zhang

    Full Text Available The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, Cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which give the algorithm low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments have been performed on the common benchmark KDD Cup dataset and a real dataset. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.

  17. VOLATILITAS RELEVANSI NILAI INCREMENTAL DARI LABA DAN NILAI BUKU

    Directory of Open Access Journals (Sweden)

    B. Linggar Yekti Nugraheni

    2012-03-01

    Full Text Available This research investigates the value relevance volatility pattern of earnings and book value of equity. It is predicted that the volatility of value relevance is associated with the time horizon. Using the Ohlson model, the hypotheses developed are: (1) earnings and book value of equity are associated positively with the stock price, and (2) there is a decreasing or increasing pattern of incremental value relevance. The sample used in this research is manufacturing companies listed on the ISE (Indonesia Stock Exchange) during the 1998-2007 period. The results show that earnings and book value of equity are related positively with the stock price, and that the value relevance of earnings is decreasing while that of book value of equity is increasing during the observation period.

  18. Automated Dimension Determination for NMF-based Incremental Collaborative Filtering

    Directory of Open Access Journals (Sweden)

    Xiwei Wang

    2015-12-01

    Full Text Available The nonnegative matrix factorization (NMF) based collaborative filtering techniques have achieved great success in product recommendations. It is well known that in NMF, the dimensions of the factor matrices have to be determined in advance. Moreover, data is growing fast; thus in some cases, the dimensions need to be changed to reduce the approximation error. The recommender systems should be capable of updating new data in a timely manner without sacrificing the prediction accuracy. In this paper, we propose an NMF based data update approach with automated dimension determination for collaborative filtering purposes. The approach can determine the dimensions of the factor matrices and update them automatically. It exploits the nearest neighborhood based clustering algorithm to cluster users and items according to their auxiliary information, and uses the clusters as the constraints in NMF. The dimensions of the factor matrices are associated with the cluster quantities. When new data becomes available, the incremental clustering algorithm determines whether to increase the number of clusters or merge the existing clusters. Experiments on three different datasets (MovieLens, Sushi, and LibimSeTi were conducted to examine the proposed approach. The results show that our approach can update the data quickly and provide encouraging prediction accuracy.
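
    To make the factor matrices concrete, here is a toy rank-k NMF using the classic Lee-Seung multiplicative updates: W (users x k) times H (k x items) approximates the ratings matrix V. The rank k is fixed by hand in this sketch; the paper's contribution is choosing and updating k automatically from cluster counts, which is not reproduced here.

```python
# Toy NMF via multiplicative updates (Lee-Seung style), pure Python.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=500, eps=1e-9):
    n, m = len(V), len(V[0])
    # Positive, non-symmetric initialization of the factor matrices.
    W = [[0.5 + 0.1 * ((i + a) % 3) for a in range(k)] for i in range(n)]
    H = [[0.5 + 0.1 * ((a + j) % 3) for j in range(m)] for a in range(k)]
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[a][j] * num[a][j] / (den[a][j] + eps) for j in range(m)] for a in range(k)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][a] * num[i][a] / (den[i][a] + eps) for a in range(k)] for i in range(n)]
    return W, H

V = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]  # a rank-1 ratings-like matrix
W, H = nmf(V, k=1)
approx = matmul(W, H)
err = max(abs(approx[i][j] - V[i][j]) for i in range(3) for j in range(2))
print(err < 1e-3)
```

    Because V is exactly rank 1, the k=1 factorization reconstructs it almost perfectly; choosing k too small for real data leaves approximation error, which is exactly why the dimensions sometimes need to grow as data accumulates.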

  19. Incremental Dynamic Analysis of Koyna Dam under Repeated Ground Motions

    Science.gov (United States)

    Zainab Nik Azizan, Nik; Majid, Taksiah A.; Nazri, Fadzli Mohamed; Maity, Damodar; Abdullah, Junaidah

    2018-03-01

    This paper investigates the incremental dynamic analysis (IDA) of a concrete gravity dam under single and repeated earthquake loadings to identify the limit state of the dam. Seven ground motions with horizontal and vertical components, based on real repeated earthquakes worldwide, are considered as seismic input in the nonlinear dynamic analysis. All ground motions are converted to response spectra and scaled to the developed elastic response spectrum in order to match the characteristics of the ground motion to the soil type. The scaling depends on the fundamental period, T1, of the dam. The Koyna dam has been selected as a case study, assuming no sliding and a rigid foundation. IDA curves for the Koyna dam are developed for single and repeated ground motions and the performance level of the dam is identified. The IDA curve for repeated ground motions is stiffer than that for a single ground motion. The ultimate state displacement is 45.59 mm for a single event and decreases to 39.33 mm under repeated events, a reduction of about 14%. This shows that the performance level of the dam under seismic loading depends on the ground motion pattern.

  20. Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation

    Science.gov (United States)

    Roberts, Seán G.

    2018-01-01

    This paper discusses the maximum robustness approach for studying cases of adaptation in language. We live in an age where we have more data on more languages than ever before, and more data to link it with from other domains. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. However, there is not much discussion of the overall approach to research in this area. There are outstanding questions about how to formalize theories, what the criteria are for directing research and how to integrate results from different methods into a clear assessment of a hypothesis. This paper addresses some of those issues by suggesting an approach which is causal, incremental and robust. It illustrates the approach with reference to a recent claim that dry environments select against the use of precise contrasts in pitch. Study 1 replicates a previous analysis of the link between humidity and lexical tone with an alternative dataset and finds that it is not robust. Study 2 performs an analysis with a continuous measure of tone and finds no significant correlation. Study 3 addresses a more recent analysis of the link between humidity and vowel use and finds that it is robust, though the effect size is small and the robustness of the measurement of vowel use is low. Methodological robustness of the general theory is addressed by suggesting additional approaches including iterated learning, a historical case study, corpus studies, and studying individual speech. PMID:29515487

  1. Numerical Simulation of Incremental Sheet Forming by Simplified Approach

    Science.gov (United States)

    Delamézière, A.; Yu, Y.; Robert, C.; Ayed, L. Ben; Nouari, M.; Batoz, J. L.

    2011-01-01

    Incremental Sheet Forming (ISF) is a process which can transform a flat metal sheet into a complex 3D part using a hemispherical tool. The final geometry of the product is obtained by the relative movement between this tool and the blank. The main advantage of the process is that the cost of the tool is very low compared to deep drawing with rigid tools. The main disadvantage is the very low velocity of the tool and thus the large amount of time needed to form the part. Classical contact algorithms give good agreement with experimental results, but are time consuming. A Simplified Approach for the contact management between the tool and the blank in ISF is presented here. The general principle of this approach is to impose the displacement of the nodes in contact with the tool at a given position. On a benchmark part, the CPU time of the present Simplified Approach is significantly reduced compared with a classical simulation performed with Abaqus implicit.

  2. Distribution of incremental static stress caused by earthquakes

    Directory of Open Access Journals (Sweden)

    Y. Y. Kagan

    1994-01-01

    Full Text Available Theoretical calculations, simulations and measurements of rotation of earthquake focal mechanisms suggest that the stress in earthquake focal zones follows the Cauchy distribution, which is one of the stable probability distributions (with the value of the exponent α equal to 1). We review the properties of the stable distributions and show that the Cauchy distribution is expected to approximate the stress caused by earthquakes occurring over geologically long intervals of a fault zone development. However, the stress caused by recent earthquakes recorded in instrumental catalogues should follow symmetric stable distributions with the value of α significantly less than one. This is explained by a fractal distribution of earthquake hypocentres: the dimension of a hypocentre set, δ, is close to zero for short-term earthquake catalogues and asymptotically approaches 2¼ for long time intervals. We use the Harvard catalogue of seismic moment tensor solutions to investigate the distribution of incremental static stress caused by earthquakes. The stress measured in the focal zone of each event is approximated by stable distributions. In agreement with theoretical considerations, the exponent value of the distribution approaches zero as the time span of an earthquake catalogue (ΔT) decreases. For large stress values α increases. We surmise that this is caused by the increase of δ for small inter-earthquake distances due to location errors.
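
    The Cauchy law named above is the stable distribution with exponent α = 1, and it can be sampled by inverse-CDF as tan(π(u - 1/2)) for uniform u. A minimal sketch (generic textbook sampling, not the paper's stress data):

```python
# Standard Cauchy sampling via the inverse CDF. The Cauchy distribution has
# no finite mean or variance, but its sample median estimates the location.
import math
import random

def cauchy_sample(rng, loc=0.0, scale=1.0):
    u = rng.random()
    return loc + scale * math.tan(math.pi * (u - 0.5))

rng = random.Random(0)
xs = sorted(cauchy_sample(rng) for _ in range(10001))
print(abs(xs[5000]) < 0.2)  # sample median is near the location parameter 0
```

    Heavy tails are the point here: individual samples can be enormous, which mirrors the very large incremental stresses occasionally measured near a hypocentre.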

  3. Noise masking of S-cone increments and decrements.

    Science.gov (United States)

    Wang, Quanhong; Richters, David P; Eskew, Rhea T

    2014-11-12

    S-cone increment and decrement detection thresholds were measured in the presence of bipolar, dynamic noise masks. Noise chromaticities were the L-, M-, and S-cone directions, as well as L-M, L+M, and achromatic (L+M+S) directions. Noise contrast power was varied to measure threshold Energy versus Noise (EvN) functions. S+ and S- thresholds were similarly, and weakly, raised by achromatic noise. However, S+ thresholds were much more elevated by S, L+M, L-M, L- and M-cone noises than were S- thresholds, even though the noises consisted of two symmetric chromatic polarities of equal contrast power. A linear cone combination model accounts for the overall pattern of masking of a single test polarity well. L and M cones have opposite signs in their effects upon raising S+ and S- thresholds. The results strongly indicate that the psychophysical mechanisms responsible for S+ and S- detection, presumably based on S-ON and S-OFF pathways, are distinct, unipolar mechanisms, and that they have different spatiotemporal sampling characteristics, or contrast gains, or both. © 2014 ARVO.

  4. Business Collaboration in Food Networks: Incremental Solution Development

    Directory of Open Access Journals (Sweden)

    Harald Sundmaeker

    2014-10-01

    Full Text Available The paper presents an approach for incremental solution development based on the use of the currently developed Internet-based FIspace business collaboration platform. A key element is the clear segmentation of infrastructures that are either internal or external to the collaborating business entity in the food network. On the one hand, the approach makes it possible to differentiate between specific centralised as well as decentralised ways of storing data and hosting IT-based functionalities, and facilitates the selection of specific data exchange protocols and data models. On the other hand, the supported solution design and subsequent development focus on reusable “software Apps” that can be used on their own and incorporate a clear added value for the business actors. It is outlined how to push the development and introduction of Apps that do not require basic changes to the existing infrastructure. The paper presents an example based on the development of a set of Apps for the exchange of product quality related information in food networks, specifically addressing fresh fruits and vegetables. It combines workflow support for data exchange from farm to retail with quality feedback information to facilitate business process improvement. Finally, the latest status of the FIspace platform development is outlined, together with key features and potential ways for real users and software developers to use the FIspace platform, which is initiated by science and industry.

  5. Automating the Incremental Evolution of Controllers for Physical Robots.

    Science.gov (United States)

    Faíña, Andrés; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    Evolutionary robotics is challenged with some key problems that must be solved, or at least mitigated extensively, before it can fulfill some of its promises to deliver highly autonomous and adaptive robots. The reality gap and the ability to transfer phenotypes from simulation to reality constitute one such problem. Another lies in the embodiment of the evolutionary processes, which links to the first, but focuses on how evolution can act on real agents and occur independently from simulation, that is, going from being, as Eiben, Kernbach, & Haasdijk [2012, p. 261] put it, "the evolution of things, rather than just the evolution of digital objects.…" The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range of problems amenable to embodied evolution.

  6. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Directory of Open Access Journals (Sweden)

    Jerry Chun-Wei Lin

    2015-01-01

    Full Text Available Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It considers only the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications, since the items purchased by a customer carry further attributes, such as profit or quantity. High-utility mining was designed to overcome this limitation of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed for static databases; few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on the utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns.

  7. Revisiting Formability and Failure of AISI304 Sheets in SPIF: Experimental Approach and Numerical Validation

    Directory of Open Access Journals (Sweden)

    Gabriel Centeno

    2017-11-01

    Full Text Available Single Point Incremental Forming (SPIF) is a flexible and economic manufacturing process with a strong potential for manufacturing small and medium batches of highly customized parts. Formability and failure in SPIF have been intensively discussed in recent years, especially because this process allows stable plastic deformation well above the conventional forming limits, although this enhanced formability is only achievable within a certain range of process parameters depending on the material type. This paper analyzes formability and failure of AISI304-H111 sheets deformed by SPIF compared to conventional testing conditions (including Nakazima and stretch-bending tests). With this purpose, experimental tests in SPIF and stretch-bending were carried out and a numerical model of SPIF was built. The results allow the authors to establish the following contributions regarding SPIF: (i) the setting of the limits of the formability enhancement when small tool diameters are used, (ii) the evolution of the crack when failure is attained and (iii) the determination of the conditions upon which necking is suppressed, leading directly to ductile fracture in SPIF.

  8. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study compares the velocity increments generated by the rigid body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into functions of true anomaly and integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  9. Appropriate use of the increment entropy for electrophysiological time series.

    Science.gov (United States)

    Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin

    2018-04-01

    The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q should take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
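
The parameter guidance above can be made concrete with a toy implementation. The sketch below is one plausible reading of an increment-entropy measure: each increment is coded by its sign plus a q-level magnitude, and the Shannon entropy of length-m words of these codes is computed. The exact quantization rule and the normalization by m are assumptions of this sketch, not the authors' reference implementation.

```python
import math
from collections import Counter

def increment_entropy(x, m=2, q=2):
    """Toy IncrEn: Shannon entropy of length-m words of sign/magnitude-coded
    increments, normalized by m (both coding and normalization are assumed
    conventions for this sketch)."""
    # Increments between consecutive samples.
    v = [x[i + 1] - x[i] for i in range(len(x) - 1)]
    n = len(v)
    mean = sum(v) / n
    std = math.sqrt(sum((u - mean) ** 2 for u in v) / n)

    def code(u):
        # Sign of the increment plus a q-level magnitude relative to the
        # standard deviation of the increments (quantization rule assumed).
        sign = (u > 0) - (u < 0)
        mag = q if std == 0 else min(q, int(abs(u) * q / std))
        return (sign, mag)

    words = [tuple(code(u) for u in v[i:i + m]) for i in range(n - m + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values()) / m
```

A monotone ramp (all increments identical) yields zero entropy, while a strictly alternating series produces two equally frequent words and hence a nonzero value, matching the intuition that IncrEn tracks the diversity of local increment patterns.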

  10. An Incremental Type-2 Meta-Cognitive Extreme Learning Machine.

    Science.gov (United States)

    Pratama, Mahardhika; Zhang, Guangquan; Er, Meng Joo; Anavatti, Sreenatha

    2017-02-01

    Existing extreme learning algorithms have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed in this paper to cope with these four issues. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM as a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated in 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable prediction, while sustaining low complexity.

  11. Short term use of the system tariffs : the substitution method revisited

    International Nuclear Information System (INIS)

    De Oliveira-De Jesus, P.M.; Ponce de Leao, M.T.

    2007-01-01

    In some countries, electricity network losses are evaluated using a substitution method in order to apply Use of the System Tariffs against generators and loads. Although the substitution method is widely used for loss pricing in real distribution systems with distributed generation, this method can produce inconsistent results, particularly when all users are included in the analysis. This paper demonstrated how all agents are responsible for some of the network loss reduction and no single user is responsible for the actual loss. For these reasons, a new and more complex procedure based on a cost-causality approach was introduced. In this study, the substitution method was revisited and reformulated with a new performance index in order to produce an equitable sharing of the benefits or added costs introduced by distributed generators. Under certain operating scenarios, the newly proposed method can emulate the solution provided by a marginal or incremental approach fulfilling some requirements for an effective loss allocation policy to ensure recovery of losses and send economic signals to agents. It was concluded that the reformulated method is a practical alternative for access pricing in distribution networks. 5 refs., 2 tabs., 5 figs., 1 appendix

  12. Post-Inflationary Gravitino Production Revisited

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V.; Olive, Keith A.; Peloso, Marco

    2016-01-01

    We revisit gravitino production following inflation. As a first step, we review the standard calculation of gravitino production in the thermal plasma formed at the end of post-inflationary reheating when the inflaton has completely decayed. Next we consider gravitino production prior to the completion of reheating, assuming that the inflaton decay products thermalize instantaneously while they are still dilute. We then argue that instantaneous thermalization is in general a good approximation, and also show that the contribution of non-thermal gravitino production via the collisions of inflaton decay products prior to thermalization is relatively small. Our final estimate of the gravitino-to-entropy ratio is approximated well by a standard calculation of gravitino production in the post-inflationary thermal plasma assuming total instantaneous decay and thermalization at a time $t \\simeq 1.2/\\Gamma_\\phi$. Finally, in light of our calculations, we consider potential implications of upper limits on the gravitin...

  13. Revisiting R-invariant direct gauge mediation

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Cheng-Wei [Center for Mathematics and Theoretical Physics andDepartment of Physics, National Central University,Taoyuan, Taiwan 32001, R.O.C. (China); Institute of Physics, Academia Sinica,Taipei, Taiwan 11529, R.O.C. (China); Physics Division, National Center for Theoretical Sciences,Hsinchu, Taiwan 30013, R.O.C. (China); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8583 (Japan); Harigaya, Keisuke [Department of Physics, University of California,Berkeley, California 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory,Berkeley, California 94720 (United States); ICRR, University of Tokyo,Kashiwa, Chiba 277-8582 (Japan); Ibe, Masahiro [Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8583 (Japan); ICRR, University of Tokyo,Kashiwa, Chiba 277-8582 (Japan); Yanagida, Tsutomu T. [Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8583 (Japan)

    2016-03-21

    We revisit a special model of gauge mediated supersymmetry breaking, the “R-invariant direct gauge mediation.” We pay particular attention to whether the model is consistent with the minimal model of the μ-term, i.e., a simple mass term of the Higgs doublets in the superpotential. Although the incompatibility is highlighted in view of the current experimental constraints on the superparticle masses and the observed Higgs boson mass, the minimal μ-term can be consistent with the R-invariant gauge mediation model via a careful choice of model parameters. We derive an upper limit on the gluino mass from the observed Higgs boson mass. We also discuss whether the model can explain the 3σ excess of the Z+jets+E{sub T}{sup miss} events reported by the ATLAS collaboration.

  14. The Faraday effect revisited General theory

    CERN Document Server

    Cornean, H D; Pedersen, T G

    2005-01-01

    This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. For free electrons, the transverse conductivity can be explicitly computed and coincides with the classical result. In the general case, using magnetic perturbation theory, the conductivity tensor is expanded in powers of the strength of the magnetic field $B$. Then the linear term in $B$ of this expansion is written down in terms of the zero magnetic field Green function and the zero field current operator. In the periodic case, the linear term in $B$ of the conductivity tensor is expressed in terms of zero magnetic field Bloch functions and energies. No derivatives with respect to the quasimomentum appear and thereby all ambiguities are removed, in contrast to earlier work.

  15. Revisiting instanton corrections to the Konishi multiplet

    Energy Technology Data Exchange (ETDEWEB)

    Alday, Luis F. [Mathematical Institute, University of Oxford,Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom); Korchemsky, Gregory P. [Institut de Physique Théorique, Université Paris Saclay, CNRS, CEA,F-91191 Gif-sur-Yvette (France)

    2016-12-01

    We revisit the calculation of instanton effects in correlation functions in N=4 SYM involving the Konishi operator and operators of twist two. Previous studies revealed that the scaling dimensions and the OPE coefficients of these operators do not receive instanton corrections in the semiclassical approximation. We go beyond this approximation and demonstrate that, while operators belonging to the same N=4 supermultiplet ought to have the same conformal data, the evaluation of quantum instanton corrections for one operator can be mapped into a semiclassical computation for another operator in the same supermultiplet. This observation allows us to compute explicitly the leading instanton correction to the scaling dimension of operators in the Konishi supermultiplet as well as to their structure constants in the OPE of two half-BPS scalar operators. We then use these results, together with crossing symmetry, to determine instanton corrections to scaling dimensions of twist-four operators with large spin.

  16. Revisiting kaon physics in general Z scenario

    Directory of Open Access Journals (Sweden)

    Motoi Endo

    2017-08-01

    Full Text Available New physics contributions to the Z penguin are revisited in the light of the recently-reported discrepancy of the direct CP violation in K→ππ. Interference effects between the standard model and new physics contributions to ΔS=2 observables are taken into account. Although these effects are overlooked in the literature, they make the experimental bounds significantly more severe. It is shown that the new physics contributions must be tuned to enhance B(KL→π0νν¯), if the discrepancy of the direct CP violation is to be explained while satisfying the experimental constraints. The branching ratio can be as large as 6×10−10 when the contributions are tuned at the 10% level.

  17. Sparse random matrices: The eigenvalue spectrum revisited

    International Nuclear Information System (INIS)

    Semerjian, Guilhem; Cugliandolo, Leticia F.

    2003-08-01

    We revisit the derivation of the density of states of sparse random matrices. We derive a recursion relation that allows one to compute the spectrum of the matrix of incidence for finite trees that determines completely the low concentration limit. Using the iterative scheme introduced by Biroli and Monasson [J. Phys. A 32, L255 (1999)] we find an approximate expression for the density of states expected to hold exactly in the opposite limit of large but finite concentration. The combination of the two methods yields a very simple geometric interpretation of the tails of the spectrum. We test the analytic results with numerical simulations and we suggest an indirect numerical method to explore the tails of the spectrum. (author)

  18. Neutrino dark energy. Revisiting the stability issue

    Energy Technology Data Exchange (ETDEWEB)

    Eggers Bjaelde, O.; Hannestad, S. [Aarhus Univ. (Denmark). Dept. of Physics and Astronomy; Brookfield, A.W. [Sheffield Univ. (United Kingdom). Dept. of Applied Mathematics and Dept. of Physics, Astro-Particle Theory and Cosmology Group; Van de Bruck, C. [Sheffield Univ. (United Kingdom). Dept. of Applied Mathematics, Astro-Particle Theory and Cosmology Group; Mota, D.F. [Heidelberg Univ. (Germany). Inst. fuer Theoretische Physik]|[Institute of Theoretical Astrophysics, Oslo (Norway); Schrempp, L. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Tocchini-Valentini, D. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Physics and Astronomy

    2007-05-15

    A coupling between a light scalar field and neutrinos has been widely discussed as a mechanism for linking (time varying) neutrino masses and the present energy density and equation of state of dark energy. However, it has been pointed out that the viability of this scenario in the non-relativistic neutrino regime is threatened by the strong growth of hydrodynamic perturbations associated with a negative adiabatic sound speed squared. In this paper we revisit the stability issue in the framework of linear perturbation theory in a model independent way. The criterion for the stability of a model is translated into a constraint on the scalar-neutrino coupling, which depends on the ratio of the energy densities in neutrinos and cold dark matter. We illustrate our results by providing meaningful examples both for stable and unstable models. (orig.)

  19. Re-visiting the electrophysiology of language.

    Science.gov (United States)

    Obleser, Jonas

    2015-09-01

    This editorial accompanies a special issue of Brain and Language re-visiting old themes and new leads in the electrophysiology of language. The event-related potential (ERP) as a series of characteristic deflections ("components") over time and their distribution on the scalp has been exploited by speech and language researchers over decades to find support for diverse psycholinguistic models. Fortunately, methodological and statistical advances have allowed human neuroscience to move beyond some of the limitations imposed when looking at the ERP only. Most importantly, we currently witness a refined and refreshed look at "event-related" (in the literal sense) brain activity that relates itself more closely to the actual neurobiology of speech and language processes. It is this imminent change in handling and interpreting electrophysiological data of speech and language experiments that this special issue intends to capture. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Lactate and ammonia concentration in blood and sweat during incremental cycle ergometer exercise

    NARCIS (Netherlands)

    Ament, W; Huizenga, [No Value; Mook, GA; Gips, CH; Verkerke, GJ

    It is known that the concentrations of ammonia and lactate in blood increase during incremental exercise. Sweat also contains lactate and ammonia. The aim of the present study was to investigate the physiological response of lactate and ammonia in plasma and sweat during a stepwise incremental cycle

  1. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    NARCIS (Netherlands)

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is
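
The chapter's own recursive algorithm is not detailed in the abstract. As background, the well-known sorted-input incremental construction of a minimal acyclic DFA (in the style of Daciuk et al.) can be sketched as follows; class and function names, and the state-signature encoding, are illustrative choices, not the chapter's notation.

```python
class State:
    """One automaton state: finality flag plus labeled outgoing edges."""
    __slots__ = ("final", "edges")

    def __init__(self):
        self.final = False
        self.edges = {}

    def signature(self):
        # With children already minimized, a state's right language is fixed
        # by its finality and its (label, child-identity) pairs.
        return (self.final,
                tuple(sorted((c, id(t)) for c, t in self.edges.items())))

def build_madfa(words):
    """Incrementally build a minimal acyclic DFA from lexicographically
    sorted, duplicate-free words."""
    register = {}  # signature -> representative state
    root = State()

    def replace_or_register(state):
        # Minimize along the most recently added path (largest label,
        # since input is sorted), merging states with known signatures.
        c, child = sorted(state.edges.items())[-1]
        if child.edges:
            replace_or_register(child)
        sig = child.signature()
        if sig in register:
            state.edges[c] = register[sig]
        else:
            register[sig] = child

    prev = ""
    for word in words:
        assert word > prev, "input must be sorted and duplicate-free"
        # Walk the longest prefix of the word already in the automaton.
        state, i = root, 0
        while i < len(word) and word[i] in state.edges:
            state = state.edges[word[i]]
            i += 1
        # Minimize the previous word's suffix before branching off it.
        if state.edges:
            replace_or_register(state)
        for ch in word[i:]:
            state.edges[ch] = State()
            state = state.edges[ch]
        state.final = True
        prev = word
    if root.edges:
        replace_or_register(root)
    return root

def accepts(root, word):
    state = root
    for ch in word:
        if ch not in state.edges:
            return False
        state = state.edges[ch]
    return state.final

def count_states(root):
    seen, stack = set(), [root]
    while stack:
        s = stack.pop()
        if id(s) not in seen:
            seen.add(id(s))
            stack.extend(s.edges.values())
    return len(seen)
```

For the word set {"tap", "taps", "top", "tops"} the construction shares both the common prefix "t" and the common suffixes, so the result has fewer states than the trie of the same words, which is exactly the compactness property the abstract mentions for spell-checker dictionaries.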

  2. Incremental Beliefs of Ability, Achievement Emotions and Learning of Singapore Students

    Science.gov (United States)

    Luo, Wenshu; Lee, Kerry; Ng, Pak Tee; Ong, Joanne Xiao Wei

    2014-01-01

    This study investigated the relationships of students' incremental beliefs of math ability to their achievement emotions, classroom engagement and math achievement. A sample of 273 secondary students in Singapore were administered measures of incremental beliefs of math ability, math enjoyment, pride, boredom and anxiety, as well as math classroom…

  3. Do otolith increments allow correct inferences about age and growth of coral reef fishes?

    Science.gov (United States)

    Booth, D. J.

    2014-03-01

    Otolith increment structure is widely used to estimate age and growth of marine fishes. Here, I test the accuracy of the long-term otolith increment analysis of the lemon damselfish Pomacentrus moluccensis to describe age and growth characteristics. I compare the number of putative annual otolith increments (as a proxy for actual age) and widths of these increments (as proxies for somatic growth) with actual tagged fish-length data, based on a 6-year dataset, the longest time course for a coral reef fish. Estimated age from otoliths corresponded closely with actual age in all cases, confirming annual increment formation. However, otolith increment widths were poor proxies for actual growth in length (linear regression r² = 0.44-0.90, n = 6 fish) and were clearly of limited value in estimating annual growth. Up to 60% of the annual growth variation was missed using otolith increments, suggesting the long-term back calculations of otolith growth characteristics of reef fish populations should be interpreted with caution.

  4. Dental caries increments and related factors in children with type 1 diabetes mellitus.

    Science.gov (United States)

    Siudikiene, J; Machiulskiene, V; Nyvad, B; Tenovuo, J; Nedzelskiene, I

    2008-01-01

    The aim of this study was to analyse possible associations between caries increments and selected caries determinants in children with type 1 diabetes mellitus and their age- and sex-matched non-diabetic controls, over 2 years. A total of 63 (10-15 years old) diabetic and non-diabetic pairs were examined for dental caries, oral hygiene and salivary factors. Salivary flow rates, buffer effect, concentrations of mutans streptococci, lactobacilli, yeasts, total IgA and IgG, protein, albumin, amylase and glucose were analysed. Means of 2-year decayed/missing/filled surface (DMFS) increments were similar in diabetics and their controls. Over the study period, both unstimulated and stimulated salivary flow rates remained significantly lower in diabetic children compared to controls. No differences were observed in the counts of lactobacilli, mutans streptococci or yeast growth during follow-up, whereas salivary IgA, protein and glucose concentrations were higher in diabetics than in controls throughout the 2-year period. Multivariable linear regression analysis showed that children with higher 2-year DMFS increments were older at baseline and had higher salivary glucose concentrations than children with lower 2-year DMFS increments. Likewise, higher 2-year DMFS increments in diabetics versus controls were associated with greater increments in salivary glucose concentrations in diabetics. Higher increments in active caries lesions in diabetics versus controls were associated with greater increments of dental plaque and greater increments of salivary albumin. Our results suggest that, in addition to dental plaque as a common caries risk factor, diabetes-induced changes in salivary glucose and albumin concentrations are indicative of caries development among diabetics. Copyright 2008 S. Karger AG, Basel.

  5. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    Science.gov (United States)

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

    Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences in the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited a significant accumulation during 2010-2014. The spatiotemporal Kriging technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region. However, the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Then, spatial overlay analysis was used to obtain absolute and relative concentration increments of adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, the principal component analysis combined with the multiple linear regression method were employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic. In particular, 82.5% of soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural activities sources. Using STK and SOA to obtain the spatial distribution of heavy metal concentration increments in soils. Using PCA-MLR to quantify the source apportionment of soil heavy metal concentration increments. Copyright © 2017

  6. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large or medium city sizes. For this type of cities, urban increments are largely underestimating city impacts. Although results are in better agreement for NO2, similar issues are met. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues in terms of interpretation when these increments are used to define strategic options in terms of air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve our confidence in the model results.
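
The decomposition stated in the abstract (city impact = urban increment plus two correction terms) can be written out arithmetically. All concentration values and variable names below are invented for illustration; only the bookkeeping follows the abstract.

```python
# Hypothetical annual-mean PM2.5 concentrations in µg/m³ (all numbers invented).
c_urban = 22.0       # urban background station, city emissions on
c_rural = 14.0       # rural background station, city emissions on
c_urban_zero = 11.0  # urban background with city emissions switched off (model run)
c_rural_zero = 13.0  # rural background with city emissions switched off (model run)

urban_increment = c_urban - c_rural   # what monitoring data alone provide
city_impact = c_urban - c_urban_zero  # what air quality planners actually need

# The gap between the two corresponds to the abstract's two assumptions:
spillover = c_rural - c_rural_zero            # city emissions reach the rural site
background_gap = c_rural_zero - c_urban_zero  # unequal zero-emission backgrounds

# Identity: city impact = urban increment + spillover + background gap.
assert abs(city_impact - (urban_increment + spillover + background_gap)) < 1e-12
```

With these invented numbers the urban increment (8 µg/m³) understates the city impact (11 µg/m³), which is the direction of bias the abstract reports for PM2.5 in many large and medium-sized cities.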

  7. Annual increments, specific gravity and energy of Eucalyptus grandis by gamma-ray attenuation technique

    International Nuclear Information System (INIS)

    Rezende, M.A.; Guerrini, I.A.; Ferraz, E.S.B.

    1990-01-01

    Annual increments in volume, mass and energy, and the specific gravity, of Eucalyptus grandis at thirteen years of age were determined, taking into account measurements of the calorific value of the wood. It was observed that the calorific value of the wood decreases slightly with age, while the specific gravity increases significantly. The so-called culmination age for the annual volume increment was determined to be around the fourth year of growth, while for the annual mass and energy increments it was around the eighth year. These results show that a tree at a particular age may no longer show significant growth in volume, yet still gain mass and energy. (author)
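
The notion of a culmination age (the year of the largest annual increment) can be illustrated with invented numbers. Only the bookkeeping follows the abstract: the volume curve, the rising-density values, and the function names below are assumptions for illustration, not the measured Eucalyptus grandis data.

```python
# Invented cumulative stem volume per year (arbitrary units) and a wood
# density that rises with age, as the abstract reports for specific gravity.
volume = [0, 2, 6, 13, 22, 30, 36, 40, 43, 45, 46.5, 47.5, 48, 48.3]
density = [0.30 + 0.03 * t for t in range(len(volume))]
mass = [v * d for v, d in zip(volume, density)]

def annual_increments(series):
    # Year-on-year differences of a cumulative growth series.
    return [b - a for a, b in zip(series, series[1:])]

def culmination_age(series):
    # Age (in years) at which the annual increment is largest.
    inc = annual_increments(series)
    return 1 + inc.index(max(inc))
```

Because density keeps rising after volume growth slows, the mass increment culminates later than the volume increment, which is the qualitative effect the abstract describes (volume peaking around year four, mass and energy later).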

  8. Sustained change blindness to incremental scene rotation: a dissociation between explicit change detection and visual memory.

    Science.gov (United States)

    Hollingworth, Andrew; Henderson, John M

    2004-07-01

    In a change detection paradigm, the global orientation of a natural scene was incrementally changed in 1 degree intervals. In Experiments 1 and 2, participants demonstrated sustained change blindness to incremental rotation, often coming to consider a significantly different scene viewpoint as an unchanged continuation of the original view. Experiment 3 showed that participants who failed to detect the incremental rotation nevertheless reliably detected a single-step rotation back to the initial view. Together, these results demonstrate an important dissociation between explicit change detection and visual memory. Following a change, visual memory is updated to reflect the changed state of the environment, even if the change was not detected.

  9. Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Martin, Corless

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error converges exponentially to zero.

  10. Performance and delay analysis of hybrid ARQ with incremental redundancy over double rayleigh fading channels

    KAUST Repository

    Chelli, Ali; Zedini, Emna; Alouini, Mohamed-Slim; Barry, John R.; Pä tzold, Matthias

    2014-01-01

    the performance of HARQ from an information theoretic perspective. Analytical expressions are derived for the \\epsilon-outage capacity, the average number of transmissions, and the average transmission rate of HARQ with incremental redundancy assuming a maximum

  11. Incremental Learning of Perceptual Categories for Open-Domain Sketch Recognition

    National Research Council Canada - National Science Library

    Lovett, Andrew; Dehghani, Morteza; Forbus, Kenneth

    2007-01-01

    .... This paper describes an incremental learning technique for open-domain recognition. Our system builds generalizations for categories of objects based upon previous sketches of those objects and uses those generalizations to classify new sketches...

  12. Performance of hybrid-ARQ with incremental redundancy over relay channels

    KAUST Repository

    Chelli, Ali; Alouini, Mohamed-Slim

    2012-01-01

    In this paper, we consider a relay network consisting of a source, a relay, and a destination. The source transmits a message to the destination using hybrid automatic repeat request (HARQ) with incremental redundancy (IR). The relay overhears

  13. Robust flight control using incremental nonlinear dynamic inversion and angular acceleration prediction

    NARCIS (Netherlands)

    Sieberling, S.; Chu, Q.P.; Mulder, J.A.

    2010-01-01

    This paper presents a flight control strategy based on nonlinear dynamic inversion. The approach presented, called incremental nonlinear dynamic inversion, uses properties of general mechanical systems and nonlinear dynamic inversion by feeding back angular accelerations. Theoretically, feedback of

  14. An Environment for Incremental Development of Distributed Extensible Asynchronous Real-time Systems

    Science.gov (United States)

    Ames, Charles K.; Burleigh, Scott; Briggs, Hugh C.; Auernheimer, Brent

    1996-01-01

    Incremental parallel development of distributed real-time systems is difficult. Architectural techniques and software tools developed at the Jet Propulsion Laboratory's (JPL's) Flight System Testbed make feasible the integration of complex systems in various stages of development.

  15. MUNIX and incremental stimulation MUNE in ALS patients and control subjects

    DEFF Research Database (Denmark)

    Furtula, Jasna; Johnsen, Birger; Christensen, Peter Broegger

    2013-01-01

    This study compares the new Motor Unit Number Estimation (MUNE) technique, MUNIX, with the more common incremental stimulation MUNE (IS-MUNE) with respect to reproducibility in healthy subjects and as a potential biomarker of disease progression in patients with ALS....

  16. Unified performance analysis of hybrid-ARQ with incremental redundancy over free-space optical channels

    KAUST Repository

    Zedini, Emna; Chelli, Ali; Alouini, Mohamed-Slim

    2014-01-01

    In this paper, we carry out a unified performance analysis of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) from an information theoretic perspective over a point-to-point free-space optical (FSO) system. First, we

  17. Pair Production Constraints on Superluminal Neutrinos Revisited

    International Nuclear Information System (INIS)

    Brodsky, Stanley

    2012-01-01

    We revisit the pair creation constraint on superluminal neutrinos considered by Cohen and Glashow in order to clarify which types of superluminal models are constrained. We show that a model in which the superluminal neutrino is effectively light-like can evade the Cohen-Glashow constraint. In summary, any model for which the CG pair production process operates is excluded because such timelike neutrinos would not be detected by OPERA or other experiments. However, a superluminal neutrino which is effectively lightlike with fixed p² can evade the Cohen-Glashow constraint because of energy-momentum conservation. The coincidence involved in explaining the SN1987A constraint certainly makes such a picture improbable - but it is still intrinsically possible. The lightlike model is appealing in that it does not violate Lorentz symmetry in particle interactions, although one would expect Hughes-Drever tests to turn up a violation eventually. Other evasions of the CG constraints are also possible; perhaps, e.g., the neutrino takes a 'short cut' through extra dimensions or suffers anomalous acceleration in matter. Irrespective of the OPERA result, Lorentz-violating interactions remain possible, and ongoing experimental investigation of such possibilities should continue.

  18. Carbon emission from global hydroelectric reservoirs revisited.

    Science.gov (United States)

    Li, Siyue; Zhang, Quanfa

    2014-12-01

    Substantial greenhouse gas (GHG) emissions from hydropower reservoirs have been of great concern recently, yet the significant carbon emitters of the drawdown area and the reservoir downstream (including spillways and turbines as well as river reaches below dams) have not been included in the global carbon budget. Here, we revisit GHG emissions from hydropower reservoirs by considering reservoir surface area, drawdown zone and reservoir downstream. Our estimates indicate around 301.3 Tg carbon dioxide (CO2)/year and 18.7 Tg methane (CH4)/year from global hydroelectric reservoirs, which is much higher than recent observations. The sum of drawdown and downstream emissions, which is generally overlooked, represents 42% of the CO2 and 67% of the CH4 total emissions from hydropower reservoirs. Accordingly, the global average emissions from hydropower are estimated to be 92 g CO2/kWh and 5.7 g CH4/kWh. Nonetheless, global hydroelectricity could currently reduce emissions by approximately 2,351 Tg CO2eq/year with respect to the fossil fuel plant alternative. These new findings represent a substantial revision of carbon emission estimates for global hydropower reservoirs.

  19. Meta-analysis in clinical trials revisited.

    Science.gov (United States)

    DerSimonian, Rebecca; Laird, Nan

    2015-11-01

    In this paper, we revisit a 1986 article we published in this Journal, Meta-Analysis in Clinical Trials, where we introduced a random-effects model to summarize the evidence about treatment efficacy from a number of related clinical trials. Because of its simplicity and ease of implementation, our approach has been widely used (with more than 12,000 citations to date) and the "DerSimonian and Laird method" is now often referred to as the 'standard approach' or a 'popular' method for meta-analysis in medical and clinical research. The method is especially useful for providing an overall effect estimate and for characterizing the heterogeneity of effects across a series of studies. Here, we review the background that led to the original 1986 article, briefly describe the random-effects approach for meta-analysis, explore its use in various settings and trends over time and recommend a refinement to the method using a robust variance estimator for testing overall effect. We conclude with a discussion of repurposing the method for Big Data meta-analysis and Genome Wide Association Studies for studying the importance of genetic variants in complex diseases. Published by Elsevier Inc.

  20. Critical boundary sine-Gordon revisited

    International Nuclear Information System (INIS)

    Hasselfield, M.; Lee, Taejin; Semenoff, G.W.; Stamp, P.C.E.

    2006-01-01

    We revisit the exact solution of the two space-time dimensional quantum field theory of a free massless boson with a periodic boundary interaction and self-dual period. We analyze the model by using a mapping to free fermions with a boundary mass term originally suggested in Ref. [J. Polchinski, L. Thorlacius, Phys. Rev. D 50 (1994) 622]. We find that the entire SL(2,C) family of boundary states of a single boson are boundary sine-Gordon states, and we derive a simple explicit expression for the boundary state in fermion variables and as a function of the sine-Gordon coupling constants. We use this expression to compute the partition function. We observe that the solution of the model has a strong-weak coupling generalization of T-duality. We then examine a class of recently discovered conformal boundary states for compact bosons with radii which are rational numbers times the self-dual radius. These have simple expressions in fermion variables. We postulate sine-Gordon-like field theories with discrete gauge symmetries for which they are the appropriate boundary states.

  1. The drive revisited: Mastery and satisfaction.

    Science.gov (United States)

    Denis, Paul

    2016-06-01

    Starting from the theory of the libido and the notions of the experience of satisfaction and the drive for mastery introduced by Freud, the author revisits the notion of the drive by proposing the following model: the drive takes shape in the combination of two currents of libidinal cathexis, one which takes the paths of the 'apparatus for obtaining mastery' (the sense-organs, motricity, etc.) and strives to appropriate the object, and the other which cathects the erotogenic zones and the experience of satisfaction that is experienced through stimulation in contact with the object. The result of this combination of cathexes constitutes a 'representation', the subsequent evocation of which makes it possible to tolerate for a certain period of time the absence of a satisfying object. On the basis of this conception, the author distinguishes the representations proper, vehicles of satisfaction, from imagos and traumatic images which give rise to excitation that does not link up with the paths taken by the drives. This model makes it possible to conciliate the points of view of the advocates of 'object-seeking' and of those who give precedence to the search for pleasure, and, further, to renew our understanding of object-relations, which can then be approached from the angle of their relations to infantile sexuality. Destructiveness is considered in terms of "mastery madness" and not in terms of the late Freudian hypothesis of the death drive. Copyright © 2015 Institute of Psychoanalysis.

  2. Revisiting the argument from fetal potential

    Directory of Open Access Journals (Sweden)

    Manninen Bertha

    2007-05-01

    Full Text Available Abstract One of the most famous, and most derided, arguments against the morality of abortion is the argument from potential, which maintains that the fetus' potential to become a person and enjoy the valuable life common to persons, entails that its destruction is prima facie morally impermissible. In this paper, I will revisit and offer a defense of the argument from potential. First, I will criticize the classical arguments proffered against the importance of fetal potential, specifically the arguments put forth by philosophers Peter Singer and David Boonin, by carefully unpacking the claims made in these arguments and illustrating why they are flawed. Secondly, I will maintain that fetal potential is morally relevant when it comes to the morality of abortion, but that it must be accorded a proper place in the argument. This proper place, however, cannot be found until we first answer a very important and complex question: we must first address the issue of personal identity, and when the fetus becomes the type of being who is relevantly identical to a future person. I will illustrate why the question of fetal potential can only be meaningfully addressed after we have first answered the question of personal identity and how it relates to the human fetus.

  3. THE CONCEPT OF REFERENCE CONDITION, REVISITED ...

    Science.gov (United States)

    Ecological assessments of aquatic ecosystems depend on the ability to compare current conditions against some expectation of how they could be in the absence of significant human disturbance. The concept of a "reference condition" is often used to describe the standard or benchmark against which current condition is compared. If assessments are to be conducted consistently, then a common understanding of the definitions and complications of reference condition is necessary. A 2006 paper (Stoddard et al., 2006, Ecological Applications 16:1267-1276) made an early attempt at codifying the reference condition concept; in this presentation we will revisit the points raised in that paper (and others) and examine how our thinking has changed in a little over 10 years. Among the issues to be discussed: (1) the "moving target" created when reference site data are used to set thresholds in large scale assessments; (2) natural vs. human disturbance and their effects on reference site distributions; (3) circularity and the use of biological data to assist in reference site identification; (4) using site-scale (in-stream or in-lake) measurements vs. landscape-level human activity to identify reference conditions.

  4. The Super-GUT CMSSM Revisited

    CERN Document Server

    Ellis, John

    2016-01-01

    We revisit minimal supersymmetric SU(5) grand unification (GUT) models in which the soft supersymmetry-breaking parameters of the minimal supersymmetric Standard Model (MSSM) are universal at some input scale, $M_{in}$, above the supersymmetric gauge coupling unification scale, $M_{GUT}$. As in the constrained MSSM (CMSSM), we assume that the scalar masses and gaugino masses have common values, $m_0$ and $m_{1/2}$ respectively, at $M_{in}$, as do the trilinear soft supersymmetry-breaking parameters $A_0$. Going beyond previous studies of such a super-GUT CMSSM scenario, we explore the constraints imposed by the lower limit on the proton lifetime and the LHC measurement of the Higgs mass, $m_h$. We find regions of $m_0$, $m_{1/2}$, $A_0$ and the parameters of the SU(5) superpotential that are compatible with these and other phenomenological constraints such as the density of cold dark matter, which we assume to be provided by the lightest neutralino. Typically, these allowed regions appear for $m_0$ and $m_{1/...

  5. Searle's "Dualism Revisited"

    Energy Technology Data Exchange (ETDEWEB)

    P., Henry

    2008-11-20

    A recent article in which John Searle claims to refute dualism is examined from a scientific perspective. John Searle begins his recent article 'Dualism Revisited' by stating his belief that the philosophical problem of consciousness has a scientific solution. He then claims to refute dualism. It is therefore appropriate to examine his arguments against dualism from a scientific perspective. Scientific physical theories contain two kinds of descriptions: (1) Descriptions of our empirical findings, expressed in an everyday language that allows us to communicate to each other our sensory experiences pertaining to what we have done and what we have learned; and (2) Descriptions of a theoretical model, expressed in a mathematical language that allows us to communicate to each other certain ideas that exist in our mathematical imaginations, and that are believed to represent, within our streams of consciousness, certain aspects of reality that we deem to exist independently of their being perceived by any human observer. These two parts of our scientific description correspond to the two aspects of our general contemporary dualistic understanding of the total reality in which we are imbedded, namely the empirical-mental aspect and the theoretical-physical aspect. The duality question is whether this general dualistic understanding of ourselves should be regarded as false in some important philosophical or scientific sense.

  6. Early-Transition Output Decline Revisited

    Directory of Open Access Journals (Sweden)

    Črt Kostevc

    2016-05-01

    Full Text Available In this paper we revisit the issue of aggregate output decline that took place in the early transition period. We propose an alternative explanation of output decline that is applicable to Central- and Eastern-European countries. In the first part of the paper we develop a simple dynamic general equilibrium model that builds on work by Gomulka and Lane (2001). In particular, we consider price liberalization, interpreted as elimination of distortionary taxation, as a trigger of the output decline. We show that price liberalization, in interaction with heterogeneous adjustment costs and non-employment benefits, leads to aggregate output decline and a surge in wage inequality. While these patterns are consistent with actual dynamics in CEE countries, this model cannot generate output decline in all sectors. Instead, sectors that were initially taxed even exhibit output growth. Thus, in the second part we consider an alternative general equilibrium model with only one production sector, two types of labor, and distortion in the form of wage compression during the socialist era. The trigger for labor mobility and consequently output decline is wage liberalization. Assuming heterogeneity of workers in terms of adjustment costs and non-employment benefits can explain output decline in all industries.

  7. Post-inflationary gravitino production revisited

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King's College London, London WC2R 2LS (United Kingdom); Garcia, Marcos A.G.; Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Peloso, Marco, E-mail: john.ellis@cern.ch, E-mail: garciagarcia@physics.umn.edu, E-mail: dimitri@physics.tamu.edu, E-mail: olive@physics.umn.edu, E-mail: peloso@physics.umn.edu [School of Physics and Astronomy and Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)

    2016-03-01

    We revisit gravitino production following inflation. As a first step, we review the standard calculation of gravitino production in the thermal plasma formed at the end of post-inflationary reheating when the inflaton has completely decayed. Next we consider gravitino production prior to the completion of reheating, assuming that the inflaton decay products thermalize instantaneously while they are still dilute. We then argue that instantaneous thermalization is in general a good approximation, and also show that the contribution of non-thermal gravitino production via the collisions of inflaton decay products prior to thermalization is relatively small. Our final estimate of the gravitino-to-entropy ratio is approximated well by a standard calculation of gravitino production in the post-inflationary thermal plasma assuming total instantaneous decay and thermalization at a time t ≅ 1.2/Γ_φ. Finally, in light of our calculations, we consider potential implications of upper limits on the gravitino abundance for models of inflation, with particular attention to scenarios for inflaton decays in supersymmetric Starobinsky-like models.

  8. Pipe failure probability - the Thomas paper revisited

    International Nuclear Information System (INIS)

    Lydell, B.O.Y.

    2000-01-01

    Almost twenty years ago, in Volume 2 of Reliability Engineering (the predecessor of Reliability Engineering and System Safety), a paper by H. M. Thomas of Rolls Royce and Associates Ltd. presented a generalized approach to the estimation of piping and vessel failure probability. The 'Thomas-approach' used insights from actual failure statistics to calculate the probability of leakage and conditional probability of rupture given leakage. It was intended for practitioners without access to data on the service experience with piping and piping system components. This article revisits the Thomas paper by drawing on insights from development of a new database on piping failures in commercial nuclear power plants worldwide (SKI-PIPE). Partially sponsored by the Swedish Nuclear Power Inspectorate (SKI), the R and D leading up to this note was performed during 1994-1999. Motivated by data requirements of reliability analysis and probabilistic safety assessment (PSA), the new database supports statistical analysis of piping failure data. Against the background of this database development program, the article reviews the applicability of the 'Thomas approach' in applied risk and reliability analysis. It addresses the question whether a new and expanded database on the service experience with piping systems would alter the original piping reliability correlation as suggested by H. M. Thomas

  9. Revisit to diffraction anomalous fine structure

    International Nuclear Information System (INIS)

    Kawaguchi, T.; Fukuda, K.; Tokuda, K.; Shimada, K.; Ichitsubo, T.; Oishi, M.; Mizuki, J.; Matsubara, E.

    2014-01-01

    The diffraction anomalous fine structure (DAFS) method has been revisited by applying this measurement technique to polycrystalline samples and using an analytical method with the logarithmic dispersion relation. The DAFS method, a spectroscopic analysis combined with resonant X-ray diffraction, enables the determination of the valence state and local structure of a selected element at a specific crystalline site and/or phase. This method has been improved by using a polycrystalline sample, channel-cut monochromator optics with an undulator synchrotron radiation source, an area detector and direct determination of resonant terms with a logarithmic dispersion relation. This study makes the DAFS method more convenient and saves a large amount of measurement time in comparison with the conventional DAFS method with a single crystal. The improved DAFS method has been applied to some model samples, Ni foil and Fe3O4 powder, to demonstrate the validity of the measurement and the analysis of the present DAFS method.

  10. Revisiting the Survival Mnemonic Effect in Children

    Directory of Open Access Journals (Sweden)

    Josefa N. S. Pandeirada

    2014-04-01

    Full Text Available The survival processing paradigm is designed to explore the adaptive nature of memory functioning. The mnemonic advantage of processing information in fitness-relevant contexts, as has been demonstrated using this paradigm, is now well established, particularly in young adults; this phenomenon is often referred to as the "survival processing effect." In the current experiment, we revisited the investigation of this effect in children and tested it in a new cultural group, using a procedure that differs from the existing studies with children. A group of 40 Portuguese children rated the relevance of unrelated words to a survival and a new moving scenario. This encoding task was followed by a surprise free-recall task. Akin to what is typically found, survival processing produced better memory performance than the control condition (moving). These data put on firmer ground the idea that a mnemonic tuning to fitness-relevant encodings is present early in development. The theoretical importance of this result to the adaptive memory literature is discussed, as well as potential practical implications of this kind of approach to the study of memory in children.

  11. Incremental Reconstruction of Urban Environments by Edge-Points Delaunay Triangulation

    OpenAIRE

    Romanoni, Andrea; Matteucci, Matteo

    2016-01-01

    Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power represents a limited resource and a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, by carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of urban landscape, we ...
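The incremental carving this abstract refers to builds on incremental Delaunay updates. As a self-contained 2D illustration, the classic Bowyer-Watson procedure inserts one point at a time into an existing triangulation: remove the triangles whose circumcircle contains the new point, then re-triangulate the resulting cavity. This sketch shows only that generic insertion primitive, not the authors' 3D edge-point pipeline.

```python
def ccw(a, b, c, pts):
    (ax, ay), (bx, by), (cx, cy) = pts[a], pts[b], pts[c]
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) > 0

def make_tri(a, b, c, pts):
    # store triangles counter-clockwise so the in-circle test has a fixed sign
    return (a, b, c) if ccw(a, b, c, pts) else (a, c, b)

def in_circumcircle(tri, p, pts):
    """True if p lies strictly inside the circumcircle of CCW triangle tri."""
    (ax, ay), (bx, by), (cx, cy) = (pts[i] for i in tri)
    px, py = p
    m = [(ax - px, ay - py), (bx - px, by - py), (cx - px, cy - py)]
    d = [(x, y, x * x + y * y) for x, y in m]
    det = (d[0][0] * (d[1][1] * d[2][2] - d[1][2] * d[2][1])
         - d[0][1] * (d[1][0] * d[2][2] - d[1][2] * d[2][0])
         + d[0][2] * (d[1][0] * d[2][1] - d[1][1] * d[2][0]))
    return det > 0

def add_point(pts, tris, p):
    """Bowyer-Watson incremental insertion of one point into `tris`."""
    pts.append(p)
    pi = len(pts) - 1
    bad = [t for t in tris if in_circumcircle(t, p, pts)]
    # cavity boundary = edges belonging to exactly one bad triangle
    edge_count = {}
    for t in bad:
        for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            k = tuple(sorted(e))
            edge_count[k] = edge_count.get(k, 0) + 1
    tris[:] = [t for t in tris if t not in bad]
    for (a, b), n in edge_count.items():
        if n == 1:
            tris.append(make_tri(a, b, pi, pts))
```

Starting from a large "super-triangle" that encloses all data, repeatedly calling `add_point` maintains a Delaunay triangulation after every insertion, which is what makes online, per-frame map updates feasible.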

  12. The Boundary Between Planning and Incremental Budgeting: Empirical Examination in a Publicly-Owned Corporation

    OpenAIRE

    S. K. Lioukas; D. J. Chambers

    1981-01-01

    This paper is a study within the field of public budgeting. It focuses on the capital budget, and it attempts to model and analyze the capital budgeting process using a framework previously developed in the literature of incremental budgeting. Within this framework the paper seeks to determine empirically whether the movement of capital expenditure budgets can be represented as the routine application of incremental adjustments over an existing base of allocations and whether further, forward...

  13. Global Combat Support System - Army Increment 2 (GCSS-A Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Global Combat Support System - Army Increment 2 (GCSS-A Inc 2). DoD Component: Army.

  14. Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A). DoD Component: Air Force. Acquisition Program Baseline (APB) dated March 9, 2015.

  15. Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B). DoD Component: Air Force.

  16. Applying CLSM to increment core surfaces for histometric analyses: A novel advance in quantitative wood anatomy

    OpenAIRE

    Wei Liang; Ingo Heinrich; Gerhard Helle; I. Dorado Liñán; T. Heinken

    2013-01-01

    A novel procedure has been developed to conduct cell structure measurements on increment core samples of conifers. The procedure combines readily available hardware and software equipment. The essential part of the procedure is the application of a confocal laser scanning microscope (CLSM) which captures images directly from increment cores surfaced with the advanced WSL core-microtome. Cell wall and lumen are displayed with a strong contrast due to the monochrome black and green nature of th...

  17. A System to Derive Optimal Tree Diameter Increment Models from the Eastwide Forest Inventory Data Base (EFIDB)

    Science.gov (United States)

    Don C. Bragg

    2002-01-01

    This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...
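The pipeline described above (extract tree records, compute relative increment, keep the largest values per diameter class) can be sketched as below. The record layout and the normalization used (periodic diameter increment divided by starting diameter) are assumptions for illustration only, not the EFIDB schema or the PRI software's exact formulation.

```python
from collections import defaultdict

def top_relative_increments(records, top_n=5):
    """records: iterable of (diameter_class, dbh, periodic_increment) tuples.

    Computes a relative increment for each tree and keeps the top_n
    values per diameter class; the PRI approach treats these as
    near-potential (near-optimal) growth for that class.
    """
    by_class = defaultdict(list)
    for d_class, dbh, inc in records:
        by_class[d_class].append(inc / dbh)  # assumed normalization
    return {c: sorted(v, reverse=True)[:top_n]
            for c, v in by_class.items()}
```

A growth curve fitted through these per-class maxima then serves as the "potential" against which observed increments can be compared.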

  18. Estimation of incremental reactivities for multiple day scenarios: an application to ethane and dimethoxymethane

    Science.gov (United States)

    Stockwell, William R.; Geiger, Harald; Becker, Karl H.

    Single-day scenarios are used to calculate incremental reactivities by definition (Carter, J. Air Waste Management Assoc. 44 (1994) 881-899), but even unreactive organic compounds may have a non-negligible effect on ozone concentrations if multiple-day scenarios are considered. The concentrations of unreactive compounds and their products may build up over a multiple-day period, and the oxidation products may be highly reactive or highly unreactive, affecting the overall incremental reactivity of the organic compound. We have developed a method for calculating incremental reactivities over multiple days based on a standard scenario for polluted European conditions. This method was used to estimate maximum incremental reactivities (MIR) and maximum ozone incremental reactivities (MOIR) for ethane and dimethoxymethane for scenarios ranging from 1 to 6 days. It was found that the incremental reactivities increased as the length of the simulation period increased. The MIR of ethane increased faster than the value for dimethoxymethane as the scenarios became longer. The MOIRs of ethane and dimethoxymethane increased, but the change was more modest for scenarios longer than 3 days. The MOIRs of both volatile organic compounds were equal, within the uncertainties of their chemical mechanisms, by the 5-day scenario. These results show that dimethoxymethane has an ozone-forming potential on a per-mass basis that is only somewhat greater than that of ethane if multiple-day scenarios are considered.
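The definitions used above can be made concrete with a toy calculation: an incremental reactivity is the finite-difference change in peak ozone per unit of added VOC, MIR is that quantity evaluated at the NOx level where it is largest, and MOIR at the NOx level where peak ozone is largest. The smooth ozone-response function below is a hypothetical stand-in; a real MIR/MOIR calculation runs a full chemical mechanism over the scenario.

```python
def incremental_reactivity(o3_of_voc, voc, d=1e-3):
    """Finite-difference incremental reactivity:
    change in peak O3 per unit of added VOC emission."""
    return (o3_of_voc(voc + d) - o3_of_voc(voc)) / d

def make_scenario(nox):
    # hypothetical ozone response for illustration only
    # (saturating in VOC, with a NOx penalty term)
    return lambda voc: voc * nox / (1.0 + 0.1 * voc) - 0.8 * nox ** 2

nox_grid = [0.2 * k for k in range(1, 26)]
voc = 10.0
# MIR: reactivity at the NOx level where the reactivity itself peaks
mir = max(incremental_reactivity(make_scenario(n), voc) for n in nox_grid)
# MOIR: reactivity at the NOx level where peak ozone is largest
best_nox = max(nox_grid, key=lambda n: make_scenario(n)(voc))
moir = incremental_reactivity(make_scenario(best_nox), voc)
```

By construction MIR can never be smaller than MOIR for the same scenario family, since MOIR evaluates the same derivative at a particular NOx level rather than at its maximizer.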

  19. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture.

    Science.gov (United States)

    Chen, C L Philip; Liu, Zhulin

    2018-01-01

    The Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structures and learning suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers. Moreover, a complete retraining process is required if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense in the "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process when the network needs to be expanded. Two incremental learning algorithms are given, one for the increment of the feature nodes (or filters in a deep structure) and one for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case in which a system that has already been modeled encounters a new incoming input. Specifically, the system can be remodeled incrementally without entire retraining from the beginning. Satisfactory results for model reduction using singular value decomposition are obtained to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition dataset benchmark demonstrate the effectiveness of the proposed BLS.
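The incremental expansion described above rests on a block pseudoinverse (Greville-style) update: when new enhancement-node outputs are appended as extra columns, the output weights can be updated from the existing pseudoinverse instead of retraining. The sketch below shows only that core update in its plain least-squares form; the ridge regularization and the random feature/enhancement mappings of the full BLS are omitted.

```python
import numpy as np

def train(A, Y):
    """Initial output weights for node-output matrix A and targets Y."""
    A_pinv = np.linalg.pinv(A)
    return A_pinv @ Y, A_pinv

def add_enhancement_nodes(A, A_pinv, W, H, Y):
    """Update weights after appending enhancement-node outputs H
    (columns) to A, without retraining from scratch.

    Uses the block pseudoinverse identity: with D = A+ H and
    C = H - A D, if C has full column rank then
    pinv([A H]) = [[A+ - D C+], [C+]].
    """
    D = A_pinv @ H
    C = H - A @ D
    B = np.linalg.pinv(C)            # assumes C has full column rank
    new_pinv = np.vstack([A_pinv - D @ B, B])
    new_W = np.vstack([W - D @ (B @ Y), B @ Y])
    return new_W, new_pinv
```

The updated weights coincide with what a full re-solve over the widened matrix `[A H]` would give, which is the sense in which the expansion avoids retraining.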

  20. Calculation of the increment reduction in spruce stands by charcoal smoke

    Energy Technology Data Exchange (ETDEWEB)

    Guede, J

    1954-01-01

    Chronic damage to spruce trees by charcoal smoke, often hardly noticeable from outward appearance but causing marked reductions of wood increment, can be determined by means of calculations based on increment cores. Sulfurous acid anhydride causes the closure of the stomata of the needles, by which the circulation of water is checked. Assimilation and wood increment are thereby reduced. The cores are taken from uninjured trees belonging to the dominant class; such trees are subject to irregular variations in the trend of growth only through atmospheric influences and disturbances in the circulation of water. The decrease of increment of a stand can be judged from the trend of growth of the basal area of sample trees. Two methods are applied. In the first, the difference between the mean total increment before the damage was caused and that after it is calculated from the yield table, deriving the site quality classes from the basal area growth of dominant stems; this is possible by using the mean diameter of each age class and the frequency curve of basal area for each site class. In the second, the reduction of basal area increment of sample trees is measured directly. The total reduction of a stand can then be judged from the share of the dominant stem class in the total current growth of basal area of a sound stand and from the percentage reduction of the sample trees.

  1. Pockets of Participation: Revisiting Child-Centred Participation Research

    Science.gov (United States)

    Franks, Myfanwy

    2011-01-01

    This article revisits the theme of the clash of interests and power relations at work in participatory research which is prescribed from above. It offers a possible route toward solving conflict between adult-led research carried out by young researchers, funding requirements and organisational constraints. The article explores issues of…

  2. Rereading Albert B. Lord's The Singer of Tales . Revisiting the ...

    African Journals Online (AJOL)

    Access to a fresh set of video-recordings of Sesotho praise-poetry made in the year 2000 enabled the author to revisit his adaptation of Albert Lord's definition of the formula as a dynamic compositional device that the oral poet utilizes during delivery. The basic adaptation made in 1983 pertains to heroic praises (dithoko tsa ...

  3. Literary Origins of the Term "School Psychologist" Revisited

    Science.gov (United States)

    Fagan, Thomas K.

    2005-01-01

    Previous research on the literary origins of the term "school psychologist" is revisited, and conclusions are revised in light of new evidence. It appears that the origin of the term in the American literature occurred as early as 1898 in an article by Hugo Munsterberg, predating the usage by Wilhelm Stern in 1911. The early references to the…

  4. The Neutrosophic Logic View to Schrodinger's Cat Paradox, Revisited

    Directory of Open Access Journals (Sweden)

    Florentin Smarandache

    2008-07-01

    Full Text Available The present article discusses the Neutrosophic logic view of Schrodinger's cat paradox. We argue that this paradox involves some degree of indeterminacy (the unknown), which Neutrosophic logic can take into consideration, whereas other methods, including Fuzzy logic, cannot. To make this proposition clear, we revisit our previous paper by offering an illustration using a modified coin-tossing problem, known as Parrondo's game.

  5. Revisiting Constructivist Teaching Methods in Ontario Colleges Preparing for Accreditation

    Science.gov (United States)

    Schultz, Rachel A.

    2015-01-01

    At the time of writing, the first community colleges in Ontario were preparing for transition to an accreditation model from an audit system. This paper revisits constructivist literature, arguing that a more pragmatic definition of constructivism effectively blends positivist and interactionist philosophies to achieve both student centred…

  6. High precision mass measurements in Ψ and Υ families revisited

    International Nuclear Information System (INIS)

    Artamonov, A.S.; Baru, S.E.; Blinov, A.E.

    2000-01-01

    High precision mass measurements in Ψ and Υ families performed in 1980-1984 at the VEPP-4 collider with OLYA and MD-1 detectors are revisited. The corrections for the new value of the electron mass are presented. The effect of the updated radiative corrections has been calculated for the J/Ψ(1S) and Ψ(2S) mass measurements [ru

  7. The Importance of Being a Complement: CED Effects Revisited

    Science.gov (United States)

    Jurka, Johannes

    2010-01-01

    This dissertation revisits subject island effects (Ross 1967, Chomsky 1973) cross-linguistically. Controlled acceptability judgment studies in German, English, Japanese and Serbian show that extraction out of specifiers is consistently degraded compared to extraction out of complements, indicating that the Condition on Extraction domains (CED,…

  8. Surface tension in soap films: revisiting a classic demonstration

    International Nuclear Information System (INIS)

    Behroozi, F

    2010-01-01

    We revisit a classic demonstration for surface tension in soap films and introduce a more striking variation of it. The demonstration shows how the film, pulling uniformly and normally on a loose string, transforms it into a circular arc under tension. The relationship between the surface tension and the string tension is analysed and presented in a useful graphical form. (letters and comments)

  9. Additively homomorphic encryption with a double decryption mechanism, revisited

    NARCIS (Netherlands)

    Peter, Andreas; Kronberg, M.; Trei, W.; Katzenbeisser, S.

    We revisit the notion of additively homomorphic encryption with a double decryption mechanism (DD-PKE), which allows for additions in the encrypted domain while having a master decryption procedure that can decrypt all properly formed ciphertexts by using a special master secret. This type of

  10. Revisiting Jack Goody to Rethink Determinisms in Literacy Studies

    Science.gov (United States)

    Collin, Ross

    2013-01-01

    This article revisits Goody's arguments about literacy's influence on social arrangements, culture, cognition, economics, and other domains of existence. Whereas some of his arguments tend toward technological determinism (i.e., literacy causes change in the world), other of his arguments construe literacy as a force that shapes and is shaped by…

  11. Surface tension in soap films: revisiting a classic demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Behroozi, F [Department of Physics, University of Northern Iowa, Cedar Falls, IA 50614 (United States)], E-mail: behroozi@uni.edu

    2010-01-15

    We revisit a classic demonstration for surface tension in soap films and introduce a more striking variation of it. The demonstration shows how the film, pulling uniformly and normally on a loose string, transforms it into a circular arc under tension. The relationship between the surface tension and the string tension is analysed and presented in a useful graphical form. (letters and comments)

  12. A control center design revisited: learning from users’ appropriation

    DEFF Research Database (Denmark)

    Souza da Conceição, Carolina; Cordeiro, Cláudia

    2014-01-01

    This paper aims to present the lessons learned during a control center design project by revisiting another control center from the same company designed two and a half years before by the same project team. In light of the experience with the first project and its analysis, the designers and res...

  13. A Feminist Revisit to the First-Year Curriculum.

    Science.gov (United States)

    Bernstein, Anita

    1996-01-01

    A seminar at Chicago-Kent College of Law (Illinois) that reviews six first-year law school courses by focusing on feminist issues in course content and structure is described. The seminar functions as both a review and a shift in perspective. Courses revisited include civil procedure, contracts, criminal law, justice and the legal system,…

  14. Revisiting deforestation in Africa (1990–2010): One more lost ...

    African Journals Online (AJOL)

    This spotlight revisits the dynamics and prognosis outlined in the late 1980s in Déforestation en Afrique. This book on deforestation in Africa utilized available statistical data from the 1980s and was a pioneering, self-styled attempt to provide a holistic viewpoint of the ongoing trends pertaining to deforestation in ...

  15. Moral Judgment Development across Cultures: Revisiting Kohlberg's Universality Claims

    Science.gov (United States)

    Gibbs, John C.; Basinger, Karen S.; Grime, Rebecca L.; Snarey, John R.

    2007-01-01

    This article revisits Kohlberg's cognitive developmental claims that stages of moral judgment, facilitative processes of social perspective-taking, and moral values are commonly identifiable across cultures. Snarey [Snarey, J. (1985). "The cross-cultural universality of social-moral development: A critical review of Kohlbergian research."…

  16. Revisiting the quantum harmonic oscillator via unilateral Fourier transforms

    International Nuclear Information System (INIS)

    Nogueira, Pedro H F; Castro, Antonio S de

    2016-01-01

    The literature on the exponential Fourier approach to the one-dimensional quantum harmonic oscillator problem is revised and criticized. It is shown that the solution of this problem has been built on faulty premises. The problem is revisited via the Fourier sine and cosine transform method and the stationary states are properly determined by requiring definite parity and square-integrable eigenfunctions. (paper)

  17. Transport benchmarks for one-dimensional binary Markovian mixtures revisited

    International Nuclear Information System (INIS)

    Malvagi, F.

    2013-01-01

    The classic benchmarks for transport through a binary Markovian mixture are revisited to look at the probability distribution function of the chosen 'results': reflection, transmission and scalar flux. We argue that the knowledge of the ensemble averaged results is not sufficient for reliable predictions: a measure of the dispersion must also be obtained. An algorithm to estimate this dispersion is tested. (author)

  18. Thorbecke Revisited : The Role of Doctrinaire Liberalism in Dutch Politics

    NARCIS (Netherlands)

    Drentje, Jan

    2011-01-01

    Thorbecke Revisited: The Role of Doctrinaire Liberalism in Dutch Politics In the political history of the nineteenth century Thorbecke played a crucial role. As the architect of the 1848 liberal constitutional reform he led three cabinets. In many ways he dominated the political discourse during the

  19. Faraday effect revisited: sum rules and convergence issues

    DEFF Research Database (Denmark)

    Cornean, Horia; Nenciu, Gheorghe

    2010-01-01

    This is the third paper of a series revisiting the Faraday effect. The question of the absolute convergence of the sums over the band indices entering the Verdet constant is considered. In general, sum rules and traces per unit volume play an important role in solid-state physics, and they give...

  20. A Multi-Level Model of Moral Functioning Revisited

    Science.gov (United States)

    Reed, Don Collins

    2009-01-01

    The model of moral functioning scaffolded in the 2008 "JME" Special Issue is here revisited in response to three papers criticising that volume. As guest editor of that Special Issue I have formulated the main body of this response, concerning the dynamic systems approach to moral development, the problem of moral relativism and the role of…

  1. Who should do the dishes now? Revisiting gender and housework in contemporary urban South Wales

    OpenAIRE

    Mannay, Dawn

    2016-01-01

    This chapter revisits Jane Pilcher’s (1994) seminal work ‘Who should do the dishes? Three generations of Welsh women talking about men and housework’, which was originally published in Our Sister’s Land: the changing identities of women in Wales. As discussed in the introductory chapter, I began revisiting classic Welsh studies as part of my doctoral study Mothers and daughters on the margins: gender, generation and education (Mannay, 2012); this led to the later publication of a revisiting ...

  2. A Structural Equation Model of Risk Perception of Rockfall for Revisit Intention

    OpenAIRE

    Ya-Fen Lee; Yun-Yao Chi

    2014-01-01

    The study aims to explore the relationship between risk perception of rockfall and revisit intention using a Structural Equation Modeling (SEM) analysis. A total of 573 valid questionnaires were collected from travelers to Taroko National Park, Taiwan. The findings show that the majority of travelers have a medium perception of rockfall risk and are willing to revisit the Taroko National Park. The revisit intention to Taroko National Park is influenced by hazardous preferences, willingness-to-pa...

  3. Highway 61 Revisited: Bob Dilan i francuski poststrukturalizam / Highway 61 Revisited: Bob Dilan and French Poststructuralism

    Directory of Open Access Journals (Sweden)

    Nikola Dedić

    2013-06-01

    Full Text Available The main aim of this text is to show parallels between rock music and poststructuralist philosophy. As a case study, one of the most celebrated rock albums of all time, Bob Dylan’s Highway 61 Revisited from 1965, is taken. It is one of the crucial albums in the history of popular culture, and it influenced the further development of rock music within the American counterculture of the 1960s. Dylan’s turn from the politics of the American New Left and the folk movement, his relation to the notions of the author and intertextuality, and his connection with the experimental use of language in the manner of avant-garde and neo-avant-garde poetry are juxtaposed with the main philosophical standpoints of Jean-François Lyotard, Jean Baudrillard, Roland Barthes and Julia Kristeva, which historically and chronologically coincide with the appearance of Dylan’s album.

  4. The coordinate coherent states approach revisited

    International Nuclear Information System (INIS)

    Miao, Yan-Gang; Zhang, Shao-Jun

    2013-01-01

    We revisit the coordinate coherent states approach through two different quantization procedures in quantum field theory on the noncommutative Minkowski plane. The first procedure, which is based on the normal commutation relation between annihilation and creation operators, deduces that a point mass can be described by a Gaussian function instead of the usual Dirac delta function. However, we reassess this specific quantization by adopting the canonical one (based on the canonical commutation relation between a field and its conjugate momentum) and show that a point mass should still be described by the Dirac delta function, which implies that the concept of point particles remains valid when we deal with noncommutativity by following the coordinate coherent states approach. In order to investigate the dependence on quantization procedures, we apply the two quantization procedures to the Unruh effect and Hawking radiation and find that they give rise to significantly different results. Under the first quantization procedure, the Unruh temperature and Unruh spectrum are not deformed by noncommutativity, but the Hawking temperature is deformed by noncommutativity while the radiation spectrum is intact. Under the second quantization procedure, however, the Unruh temperature and Hawking temperature are intact but both spectra are modified by an effective greybody (deformed) factor. - Highlights: ► Suggest a canonical quantization in the coordinate coherent states approach. ► Prove the validity of the concept of point particles. ► Apply the canonical quantization to the Unruh effect and Hawking radiation. ► Find no deformations in the Unruh temperature and Hawking temperature. ► Provide the modified spectra of the Unruh effect and Hawking radiation.

  5. Backwardation in energy futures markets: Metalgesellschaft revisited

    International Nuclear Information System (INIS)

    Charupat, N.; Deaves, R.

    2003-01-01

    Energy supply contracts negotiated by the US subsidiary of Metalgesellschaft Refining and Marketing (MGRM), which were the subject of much subsequent debate, are re-examined. The contracts were hedged by the US subsidiary barrel-for-barrel using short-dated energy derivatives. When the hedge program experienced difficulties, the derivatives positions were promptly liquidated by the parent company. Revisiting the MGRM contracts also provides the opportunity to explore the latest evidence on backwardation in energy markets. Accordingly, the paper first discusses the theoretical reasons for backwardation, followed by an empirical examination using the MGRM data available at the time of the hedge program in 1992 and a second set of data that became available in 2000. By using a more up-to-date data set covering a longer time period and by controlling for the time series properties of the data, the authors expect to provide more reliable empirical evidence on the behaviour of energy futures prices. Results based on the 1992 data suggest that the strategy employed by MGRM could be expected to be profitable while the risks were relatively low. However, analysis based on the 2000 data shows lower, although still significant, profits but higher risks. The final conclusion is that the likelihood of problems similar to those faced by MGRM in 1992 is twice as high with the updated 2000 data, suggesting that the risk-return pattern of the stack-and-roll hedging strategy using short-dated energy futures contracts to hedge long-term contracts is less appealing now than when MGRM implemented its hedging program in 1992. 24 refs., 3 tabs., 6 figs

  6. Clifford Algebra Implying Three Fermion Generations Revisited

    International Nuclear Information System (INIS)

    Krolikowski, W.

    2002-01-01

    The author's idea of algebraic compositeness of fundamental particles, allowing one to understand the existence in Nature of three fermion generations, is revisited. It is based on two postulates. Primo, for all fundamental particles of matter the Dirac square-root procedure √(p²) → Γ^(N)·p works, leading to a sequence N = 1, 2, 3, ... of Dirac-type equations, where four Dirac-type matrices Γ^(N)_μ are embedded into a Clifford algebra via a Jacobi definition introducing four ''centre-of-mass'' and (N − 1) × four ''relative'' Dirac-type matrices. These define one ''centre-of-mass'' and N − 1 ''relative'' Dirac bispinor indices. Secundo, the ''centre-of-mass'' Dirac bispinor index is coupled to the Standard Model gauge fields, while the N − 1 ''relative'' Dirac bispinor indices are all free indistinguishable physical objects obeying Fermi statistics along with the Pauli principle, which requires full antisymmetry with respect to the ''relative'' Dirac indices. This allows only for three Dirac-type equations with N = 1, 3, 5 in the case of N odd, and two with N = 2, 4 in the case of N even. The first of these results implies unavoidably the existence of three and only three generations of fundamental fermions, namely leptons and quarks, as labelled by the Standard Model signature. At the end, a comment is added on the possible shape of Dirac 3 × 3 mass matrices for four sorts of spin-1/2 fundamental fermions appearing in three generations. For charged leptons the prediction is m_τ = 1776.80 MeV, when the input of experimental m_e and m_μ is used. (author)

  7. Solar system anomalies: Revisiting Hubble's law

    Science.gov (United States)

    Plamondon, R.

    2017-12-01

    This paper investigates the impact of a new metric recently published [R. Plamondon and C. Ouellet-Plamondon, in On Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories, edited by K. Rosquist, R. T. Jantzen, and R. Ruffini (World Scientific, Singapore, 2015), p. 1301] for studying the space-time geometry of a static symmetric massive object. This metric depends on a complementary error function (erfc) potential that characterizes the emergent gravitation field predicted by the model. This results in two types of deviations as compared to computations made on the basis of a Newtonian potential: a constant and a radial outcome. One key feature of the metric is that it postulates the existence of an intrinsic physical constant σ, the massive-object-specific proper length that scales measurements in its surroundings. Although σ must be evaluated experimentally, we use a heuristic to estimate its value and point out some latent relationships between the Hubble constant, the secular increase in the astronomical unit, and the Pioneer delay. Indeed, highlighting the systematic errors that emerge when the effect of σ is neglected, one can link the Hubble constant H₀ to σ_Sun and the secular increase V_AU to σ_Earth. The accuracy of the resulting numerical predictions, H₀ = 74.42(0.02) (km/s)/Mpc and V_AU ≅ 7.8 cm/yr, calls for more investigation of this new metric by specific experts. Moreover, we investigate the expected impacts of the new metric on the flyby anomalies, and we revisit the Pioneer delay. It is shown that both phenomena could be partly taken into account within the context of this unifying paradigm, with quite accurate numerical predictions. A correction for the osculating asymptotic velocity at the perigee of the order of 10 mm/s and an inward radial acceleration of 8.34 × 10⁻¹⁰ m/s² affecting the Pioneer spacecraft could be explained by this new model.

  8. Double neutron stars: merger rates revisited

    Science.gov (United States)

    Chruslinska, Martyna; Belczynski, Krzysztof; Klencki, Jakub; Benacquista, Matthew

    2018-03-01

    We revisit double neutron star (DNS) formation in the classical binary evolution scenario in light of the recent Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo DNS detection (GW170817). The observationally estimated Galactic DNS merger rate of R_MW = 21 (+28/−14) Myr⁻¹, based on three Galactic DNS systems, fully supports our standard input physics model with R_MW = 24 Myr⁻¹. This estimate for the Galaxy translates in a non-trivial way (due to cosmological evolution of progenitor stars in a chemically evolving Universe) into a local (z ≈ 0) DNS merger rate density of R_local = 48 Gpc⁻³ yr⁻¹, which is not consistent with the current LIGO/Virgo DNS merger rate estimate (1540 (+3200/−1220) Gpc⁻³ yr⁻¹). Within our study of the parameter space, we find solutions that allow for DNS merger rates as high as R_local ≈ 600 (+600/−300) Gpc⁻³ yr⁻¹, which are thus consistent with the LIGO/Virgo estimate. However, our corresponding BH-BH merger rates for the models with high DNS merger rates exceed the current LIGO/Virgo estimate of the local BH-BH merger rate (12-213 Gpc⁻³ yr⁻¹). Apart from being particularly sensitive to the common envelope treatment, DNS merger rates are rather robust against variations of several of the key factors probed in our study (e.g. mass transfer, angular momentum loss, and natal kicks). This might suggest that either common envelope development/survival works differently for DNS (~10-20 M⊙ stars) than for BH-BH (~40-100 M⊙ stars) progenitors, or high black hole (BH) natal kicks are needed to meet observational constraints for both types of binaries. Our conclusion is based on a limited number (21) of evolutionary models and is valid within this particular DNS and BH-BH isolated binary formation scenario.

  9. Clifford Algebra Implying Three Fermion Generations Revisited

    Science.gov (United States)

    Krolikowski, Wojciech

    2002-09-01

    The author's idea of algebraic compositeness of fundamental particles, allowing one to understand the existence in Nature of three fermion generations, is revisited. It is based on two postulates. Primo, for all fundamental particles of matter the Dirac square-root procedure √(p²) → Γ^(N)·p works, leading to a sequence N = 1, 2, 3, ... of Dirac-type equations, where four Dirac-type matrices Γ^(N)_μ are embedded into a Clifford algebra via a Jacobi definition introducing four ``centre-of-mass'' and (N − 1) × four ``relative'' Dirac-type matrices. These define one ``centre-of-mass'' and (N − 1) ``relative'' Dirac bispinor indices. Secundo, the ``centre-of-mass'' Dirac bispinor index is coupled to the Standard Model gauge fields, while the (N − 1) ``relative'' Dirac bispinor indices are all free indistinguishable physical objects obeying Fermi statistics along with the Pauli principle, which requires full antisymmetry with respect to the ``relative'' Dirac indices. This allows only for three Dirac-type equations with N = 1, 3, 5 in the case of N odd, and two with N = 2, 4 in the case of N even. The first of these results implies unavoidably the existence of three and only three generations of fundamental fermions, namely leptons and quarks, as labelled by the Standard Model signature. At the end, a comment is added on the possible shape of Dirac 3 × 3 mass matrices for four sorts of spin-1/2 fundamental fermions appearing in three generations. For charged leptons the prediction is m_τ = 1776.80 MeV, when the input of experimental m_e and m_μ is used.

  10. Meeting Earth Observation Requirements for Global Agricultural Monitoring: An Evaluation of the Revisit Capabilities of Current and Planned Moderate Resolution Optical Earth Observing Missions

    Directory of Open Access Journals (Sweden)

    Alyssa K. Whitcraft

    2015-01-01

    Full Text Available Agriculture is a highly dynamic process in space and time, with many applications requiring data with both a relatively high temporal resolution (at least every 8 days and fine-to-moderate (FTM, <100 m) spatial resolution. The relatively infrequent revisit of FTM optical satellite observatories coupled with the impacts of cloud occultation have translated into a barrier for the derivation of agricultural information at the regional-to-global scale. Drawing upon the Group on Earth Observations Global Agricultural Monitoring (GEOGLAM) Initiative’s general satellite Earth observation (EO) requirements for monitoring of major production areas, Whitcraft et al. (this issue) have described where, when, and how frequently satellite data acquisitions are required throughout the agricultural growing season at 0.05°, globally. The majority of areas and times of year require multiple revisits to probabilistically yield a view at least 70%, 80%, 90%, or 95% clear within eight days, something that no present single FTM optical observatory is capable of delivering. As such, there is a great potential to meet these moderate spatial resolution optical data requirements through a multi-space agency/multi-mission constellation approach. This research models the combined revisit capabilities of seven hypothetical constellations made from five satellite sensors—Landsat 7 Enhanced Thematic Mapper (Landsat 7 ETM+), Landsat 8 Operational Land Imager and Thermal Infrared Sensor (Landsat 8 OLI/TIRS), Resourcesat-2 Advanced Wide Field Sensor (Resourcesat-2 AWiFS), Sentinel-2A Multi-Spectral Instrument (MSI), and Sentinel-2B MSI—and compares these capabilities with the revisit frequency requirements for a reasonably cloud-free clear view within eight days throughout the agricultural growing season. Supplementing Landsat 7 and 8 with missions from different space agencies leads to an improved capacity to meet requirements, with Resourcesat-2 providing the largest
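    The requirement that multiple revisits "probabilistically yield a view at least 70%, 80%, 90%, or 95% clear within eight days" reduces, under a simplifying assumption of independent cloud cover across acquisitions, to a closed-form calculation. The sketch below is illustrative and not part of the cited analysis, which uses actual cloud climatologies.

```python
import math

def clear_view_probability(p_clear: float, n_views: int) -> float:
    """Probability that at least one of n_views acquisitions is clear,
    assuming independent cloud cover with per-view clear probability p_clear."""
    return 1.0 - (1.0 - p_clear) ** n_views

def views_needed(p_clear: float, target: float) -> int:
    """Smallest number of acquisitions within the window whose combined
    probability of at least one clear view reaches `target`."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_clear))

# e.g. 40% clear skies per overpass: how many views within 8 days
# are needed for a 95% chance of at least one clear view?
print(views_needed(0.40, 0.95))  # → 6
```

    With six views, the combined clear-view probability is 1 − 0.6⁶ ≈ 0.953, just over the 95% target; this is why constellations beat any single FTM observatory.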

  11. THE INFLUENCE OF DESTINATION IMAGE AND TOURIST SATISFACTION TOWARD REVISIT INTENTION OF SETU BABAKAN BETAWI CULTURAL VILLAGE

    OpenAIRE

    Wibowo, Setyo Ferry; Sazali, Adnan; Kresnamurti R. P., Agung

    2016-01-01

    The purposes of this research are: 1) to describe the destination image, tourist satisfaction, and revisit intention of the Betawi cultural village Setu Babakan; 2) to test empirically the influence of destination image on the revisit intention of the Betawi cultural village Setu Babakan; 3) to test empirically the influence of tourist satisfaction on the revisit intention of the Betawi cultural village Setu Babakan; 4) to test empirically the influence of destination image toward revisit intention ...

  12. Stem analysis program (GOAP) for evaluating increment and growth data of individual trees

    Directory of Open Access Journals (Sweden)

    Gafura Aylak Özdemir

    2016-07-01

    Full Text Available Stem analysis is a method for evaluating in detail the increment and growth data of an individual tree over past periods, and it is widely used in various forestry disciplines. The raw data of a stem analysis consist of annual ring counts and measurements performed on cross sections taken from an individual tree by the section method. Evaluating these raw data takes considerable time. Thus, computer software was developed in this study to perform stem analysis quickly and efficiently. This software, which evaluates the raw stem analysis data both numerically and graphically, was programmed as a macro using the Visual Basic for Applications feature of MS Excel 2013, currently the most widely used spreadsheet program. In the developed software, the height growth model is formed from two different approaches; individual tree volume based on the section method, cross-sectional area, increments of diameter, height and volume, volume increment percent, and stem form factor at breast height are calculated for the desired period lengths. These calculated values are given as tables. The development of diameter, height, volume, the increments of these variables, volume increment percent, and stem form factor at breast height according to periodic age are given as charts. A stem model showing the development of the diameter, height, and shape of the individual tree over past periods can also be obtained from the software as a chart.
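    The increment quantities such a program tabulates can be illustrated with a short sketch (the actual GOAP tool is a VBA macro inside Excel; the Python below and its sample numbers are purely hypothetical): periodic annual increment between successive measurement ages, and mean annual increment from a cumulative value.

```python
def periodic_annual_increment(values, period_years):
    """Periodic annual increment (PAI): growth per year within each
    measurement period, from cumulative values at successive ages."""
    return [(b - a) / period_years for a, b in zip(values, values[1:])]

def mean_annual_increment(value, age):
    """Mean annual increment (MAI): cumulative value divided by total age."""
    return value / age

# diameters (cm) reconstructed from ring counts at 10-year intervals
d = [0.0, 4.2, 9.8, 14.1, 17.0]
pai = periodic_annual_increment(d, 10)
print(pai)                               # growth in cm/yr per decade
print(mean_annual_increment(d[-1], 40))  # → 0.425
```

    The same two formulas apply unchanged to height, basal area, or volume series taken from the sectioned stem.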

  13. Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-09-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data: the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the lack of the dust radiative effect is believed to be the predominant model defect, which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between the April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (>0.5, the PCA), low correlation, and high negative correlation (the NCA). Data from the European Centre for Medium-Range Weather Forecasts (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity and downward (upward) airflow. These findings are associated with the interaction between dust-forced heating/cooling and atmospheric circulation. This paper contributes to a better understanding of dust radiative processes missed in the model.

  14. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    Science.gov (United States)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. The 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.

  15. Incremental Validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF).

    Science.gov (United States)

    Siegling, A B; Vesely, Ashley K; Petrides, K V; Saklofske, Donald H

    2015-01-01

    This study examined the incremental validity of the adult short form of the Trait Emotional Intelligence Questionnaire (TEIQue-SF) in predicting 7 construct-relevant criteria beyond the variance explained by the Five-factor model and coping strategies. Additionally, the relative contributions of the questionnaire's 4 subscales were assessed. Two samples of Canadian university students completed the TEIQue-SF, along with measures of the Big Five, coping strategies (Sample 1 only), and emotion-laden criteria. The TEIQue-SF showed consistent incremental effects beyond the Big Five or the Big Five and coping strategies, predicting all 7 criteria examined across the 2 samples. Furthermore, 2 of the 4 TEIQue-SF subscales accounted for the measure's incremental validity. Although the findings provide good support for the validity and utility of the TEIQue-SF, directions for further research are emphasized.
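
The hierarchical-regression logic behind incremental validity can be sketched with synthetic data: fit a baseline model (stand-ins for the Big Five), add the trait-EI score, and inspect the gain in R-squared. All coefficients and sample sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
big5 = rng.normal(size=(n, 5))                  # stand-ins for Big Five scores
tei = 0.5 * big5[:, 0] + rng.normal(size=n)     # trait-EI overlaps with one factor
y = big5 @ np.array([0.3, 0.2, 0.0, 0.1, 0.0]) + 0.4 * tei + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(big5, y)                          # Step 1: Big Five only
r2_full = r_squared(np.column_stack([big5, tei]), y)  # Step 2: add the EI score
delta_r2 = r2_full - r2_base                          # incremental validity
```

A non-trivial delta_r2 despite the built-in overlap with the first factor is exactly the pattern the abstract describes.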

  16. Radical and Incremental Innovation Preferences in Information Technology: An Empirical Study in an Emerging Economy

    Directory of Open Access Journals (Sweden)

    Tarun K. Sen

    2011-11-01

    Innovation in information technology is a primary driver for growth in developed economies. Research indicates that countries go through three stages in the adoption of innovation strategies: buying innovation through global trade, incremental innovation from other countries by enhancing efficiency, and, at the most developed stage, radically innovating independently for competitive advantage. The first two stages of innovation maturity depend more on cross-border trade than the third stage. In this paper, we find that IT professionals in an emerging economy such as India believe in radical innovation over incremental innovation (adaptation) as a growth strategy, even though competitive advantage may rest in adaptation. The results of the study report the preference for innovation strategies among IT professionals in India and its implications for other rapidly growing emerging economies.

  17. Determining frustum depth of 304 stainless steel plates with various diameters and thicknesses by incremental forming

    Energy Technology Data Exchange (ETDEWEB)

    Golabi, Sa' id [University of Kashan, Kashan (Iran, Islamic Republic of); Khazaali, Hossain [Bu-Ali Sina University, Hamedan (Iran, Islamic Republic of)

    2014-08-15

    Incremental forming is increasingly popular because of its flexibility and cost savings. However, no engineering data is available to manufacturers for forming even simple shapes like a frustum by incremental forming, so either expensive experimental tests or finite element analysis (FEA) must be employed to determine the attainable depth of a frustum, considering thickness, material, cone diameter, wall angle, feed rate, tool diameter, etc. In this study, the finite element technique, confirmed by experimental work, was employed to develop applicable curves for determining the depth of frustums made from 304 stainless steel (SS304) sheet with various cone angles, thicknesses from 0.3 to 1 mm and major diameters from 50 to 200 mm using incremental forming. Using these curves, the frustum angle and depth can be predicted from the sheet thickness and major diameter. The effects of feed rate, vertical pitch and tool diameter on frustum depth and surface quality were also addressed in this study.

  18. EFFECT OF COST INCREMENT DISTRIBUTION PATTERNS ON THE PERFORMANCE OF JIT SUPPLY CHAIN

    Directory of Open Access Journals (Sweden)

    Ayu Bidiawati J.R

    2008-01-01

    Cost is an important consideration in supply chain (SC) optimisation, owing to the emphasis placed on cost reduction in order to optimise profit. Some researchers use cost as one of their performance measures, while others propose ways of calculating cost accurately. As a product moves across the SC, its cost increases. This paper studied the effect of cost increment distribution patterns on the performance of a JIT supply chain. In particular, it is necessary to know if inventory allocation across the SC needs to be modified to accommodate different cost increment distribution patterns. It was found that the funnel is still the best card distribution pattern for a JIT-SC regardless of the cost increment distribution patterns used.

  19. Incremental Validity of the DSM-5 Section III Personality Disorder Traits With Respect to Psychosocial Impairment.

    Science.gov (United States)

    Simms, Leonard J; Calabrese, William R

    2016-02-01

    Traditional personality disorders (PDs) are associated with significant psychosocial impairment. DSM-5 Section III includes an alternative hybrid personality disorder (PD) classification approach, with both type and trait elements, but relatively little is known about the impairments associated with Section III traits. Our objective was to study the incremental validity of Section III traits--compared to normal-range traits, traditional PD criterion counts, and common psychiatric symptomatology--in predicting psychosocial impairment. To that end, 628 current/recent psychiatric patients completed measures of PD traits, normal-range traits, traditional PD criteria, psychiatric symptomatology, and psychosocial impairments. Hierarchical regressions revealed that Section III PD traits incrementally predicted psychosocial impairment over normal-range personality traits, PD criterion counts, and common psychiatric symptomatology. In contrast, the incremental effects for normal-range traits, PD symptom counts, and common psychiatric symptomatology were substantially smaller than for PD traits. These findings have implications for PD classification and the impairment literature more generally.

  20. The balanced scorecard: an incremental approach model to health care management.

    Science.gov (United States)

    Pineno, Charles J

    2002-01-01

    The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decisionmakers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.

  1. Ethical leadership: meta-analytic evidence of criterion-related and incremental validity.

    Science.gov (United States)

    Ng, Thomas W H; Feldman, Daniel C

    2015-05-01

    This study examines the criterion-related and incremental validity of ethical leadership (EL) with meta-analytic data. Across 101 samples published over the last 15 years (N = 29,620), we observed that EL demonstrated acceptable criterion-related validity with variables that tap followers' job attitudes, job performance, and evaluations of their leaders. Further, followers' trust in the leader mediated the relationships of EL with job attitudes and performance. In terms of incremental validity, we found that EL significantly, albeit weakly in some cases, predicted task performance, citizenship behavior, and counterproductive work behavior, even after controlling for the effects of such variables as transformational leadership, use of contingent rewards, management by exception, interactional fairness, and destructive leadership. The article concludes with a discussion of ways to strengthen the incremental validity of EL. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  2. Incremental net social benefit associated with using nuclear-fueled power plants

    International Nuclear Information System (INIS)

    Maoz, I.

    1976-12-01

    The incremental net social benefit (INSB) resulting from nuclear-fueled, rather than coal-fired, electric power generation is assessed. The INSB is defined as the difference between the 'incremental social benefit' (ISB)--caused by the cheaper technology of electric power generation--and the 'incremental social cost' (ISC)--associated with the increased power production induced by the cheaper technology. Section 2 focuses on the theoretical and empirical problems associated with the assessment of the long-run price elasticity of the demand for electricity, and the theoretical-econometric considerations that lead to reasonable estimates of price elasticities of demand from those provided by recent empirical studies. Section 3 covers the theoretical and empirical difficulties associated with the construction of the long-run social marginal cost (LRSMC) curves of electricity. Sections 4 and 5 discuss the assessment methodology and provide numerical examples for the calculation of the INSB resulting from nuclear-fueled power generation.

  3. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
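
The incremental idea (step from the current configuration via the Jacobian, under joint speed limits, rather than solving a closed-form IK with its multiple branches) can be sketched on a toy planar 2-link arm. Link lengths, limits and the target below are invented; this is not the paper's manipulator or estimator.

```python
import numpy as np

L1, L2 = 1.0, 1.0   # link lengths of a toy planar 2-link arm

def fk(q):
    """End-effector position for joint angles q = [q1, q2]."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def incremental_ik(q, target, dq_max=0.05, iters=500, tol=1e-6):
    """Move from the CURRENT configuration toward the target, clipping each
    joint increment to a speed limit; no global closed-form IK is solved,
    which sidesteps the multiple-solution ambiguity."""
    q = q.astype(float).copy()
    for _ in range(iters):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        dq = np.linalg.pinv(jacobian(q)) @ err   # first-order (Jacobian) step
        q += np.clip(dq, -dq_max, dq_max)        # enforce joint speed limits
    return q

q_final = incremental_ik(np.array([0.3, 0.8]), np.array([1.2, 0.9]))
```

Because every step starts from the arm's current pose, the iteration naturally stays on one IK branch, which is the robustness property the abstract emphasizes.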

  4. OXYGEN UPTAKE KINETICS DURING INCREMENTAL- AND DECREMENTAL-RAMP CYCLE ERGOMETRY

    Directory of Open Access Journals (Sweden)

    Fadıl Özyener

    2011-09-01

    The pulmonary oxygen uptake (VO2) response to incremental-ramp cycle ergometry typically demonstrates lagged-linear first-order kinetics with a slope of ~10-11 ml·min-1·W-1, both above and below the lactate threshold (θL), i.e. there is no discernible VO2 slow component (or "excess" VO2) above θL. We were interested in determining whether a reverse ramp profile would yield the same response dynamics. Ten healthy males performed a maximum incremental ramp (15-30 W·min-1, depending on fitness). On another day, the work rate (WR) was increased abruptly to the incremental maximum and then decremented at the same rate of 15-30 W·min-1 (step-decremental ramp). Five subjects also performed a sub-maximal ramp-decremental test from 90% of θL. VO2 was determined breath-by-breath from continuous monitoring of respired volumes (turbine) and gas concentrations (mass spectrometer). The incremental-ramp VO2-WR slope was 10.3 ± 0.7 ml·min-1·W-1, whereas that of the descending limb of the decremental ramp was 14.2 ± 1.1 ml·min-1·W-1 (p < 0.005). The sub-maximal decremental-ramp slope, however, was only 9.8 ± 0.9 ml·min-1·W-1: not significantly different from that of the incremental ramp. This suggests that the VO2 response in the supra-θL domain of incremental-ramp exercise manifests not actual but pseudo first-order kinetics.
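
The slopes quoted are simply linear regressions of VO2 on work rate over the ramp. A minimal sketch with synthetic, noise-free data (the numbers are illustrative, not the study's breath-by-breath data):

```python
import numpy as np

wr = np.arange(0.0, 301.0, 25.0)     # work rate along the ramp, W
vo2 = 500.0 + 10.3 * wr              # synthetic VO2, ml/min (no noise)

# Slope of the VO2-WR relation in ml/min per W, via least-squares line fit
slope, intercept = np.polyfit(wr, vo2, 1)
```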

  5. The timeline of the lunar bombardment: Revisited

    Science.gov (United States)

    Morbidelli, A.; Nesvorny, D.; Laurenz, V.; Marchi, S.; Rubie, D. C.; Elkins-Tanton, L.; Wieczorek, M.; Jacobson, S.

    2018-05-01

    The timeline of the lunar bombardment in the first Gy of Solar System history remains unclear. Basin-forming impacts (e.g. Imbrium, Orientale), occurred 3.9-3.7 Gy ago, i.e. 600-800 My after the formation of the Moon itself. Many other basins formed before Imbrium, but their exact ages are not precisely known. There is an intense debate between two possible interpretations of the data: in the cataclysm scenario there was a surge in the impact rate approximately at the time of Imbrium formation, while in the accretion tail scenario the lunar bombardment declined since the era of planet formation and the latest basins formed in its tail-end. Here, we revisit the work of Morbidelli et al. (2012) that examined which scenario could be compatible with both the lunar crater record in the 3-4 Gy period and the abundance of highly siderophile elements (HSE) in the lunar mantle. We use updated numerical simulations of the fluxes of asteroids, comets and planetesimals leftover from the planet-formation process. Under the traditional assumption that the HSEs track the total amount of material accreted by the Moon since its formation, we conclude that only the cataclysm scenario can explain the data. The cataclysm should have started ∼ 3.95 Gy ago. However we also consider the possibility that HSEs are sequestered from the mantle of a planet during magma ocean crystallization, due to iron sulfide exsolution (O'Neil, 1991; Rubie et al., 2016). We show that this is likely true also for the Moon, if mantle overturn is taken into account. Based on the hypothesis that the lunar magma ocean crystallized about 100-150 My after Moon formation (Elkins-Tanton et al., 2011), and therefore that HSEs accumulated in the lunar mantle only after this timespan, we show that the bombardment in the 3-4 Gy period can be explained in the accretion tail scenario. This hypothesis would also explain why the Moon appears so depleted in HSEs relative to the Earth. We also extend our analysis of the

  6. Revisit ocean thermal energy conversion system

    International Nuclear Information System (INIS)

    Huang, J.C.; Krock, H.J.; Oney, S.K.

    2003-01-01

    by-products, especially drinking water, aquaculture and mariculture, can easily translate into billions of dollars in business opportunities. The current status of the OTEC system definitely deserves to be carefully revisited. This paper will examine recent major advancements in technology, evaluate costs and effectiveness, and assess the overall market environment of the OTEC system and describe its great renewable energy potential and overall benefits to the nations of the world

  7. Maximal power output during incremental exercise by resistance and endurance trained athletes.

    Science.gov (United States)

    Sakthivelavan, D S; Sumathilatha, S

    2010-01-01

    This study was aimed at comparing the maximal power output by resistance trained and endurance trained athletes during incremental exercise. Thirty male athletes who received resistance training (Group I) and thirty male athletes of similar age group who received endurance training (Group II) for a period of more than 1 year were chosen for the study. Physical parameters were measured and exercise stress testing was done on a cycle ergometer with a portable gas analyzing system. The maximal progressive incremental cycle ergometer power output at peak exercise and carbon dioxide production at VO2max were measured. Highly significant (P biofeedback and perk up the athlete's performance.

  8. BMI and BMI SDS in childhood: annual increments and conditional change

    OpenAIRE

    Brannsether-Ellingsen, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Juliusson, Petur Benedikt

    2016-01-01

    Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m2) and BMI SDS are summarised by...
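
The conditional-change idea explored here rescales the 1-year change in BMI SDS by what is expected from tracking: with r the correlation between successive annual measurements, the usual form is z_cond = (z2 - r*z1) / sqrt(1 - r^2). A minimal sketch, with an invented value of r (the paper estimates it from reference data):

```python
import math

def conditional_change(z1, z2, r):
    """Conditional 1-year change in BMI SDS: how unusual the follow-up value
    z2 is, given the baseline z1 and the year-to-year correlation r."""
    return (z2 - r * z1) / math.sqrt(1.0 - r * r)

# A child tracking exactly as predicted by r shows zero conditional change,
# while a jump from the median to +1 SDS in one year stands out clearly.
tracking = conditional_change(1.0, 0.9 * 1.0, r=0.9)
jump = conditional_change(0.0, 1.0, r=0.9)
```

The point of the rescaling is that the same raw 1-SDS change is unremarkable for a child regressing toward its own track but flagged when it breaks the track.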

  9. Learning in Different Modes: The Interaction Between Incremental and Radical Change

    DEFF Research Database (Denmark)

    Petersen, Anders Hedegaard; Boer, Harry; Gertsen, Frank

    2004-01-01

    The objective of the study presented in this article is to contribute to the development of theory on continuous innovation, i.e. the combination of operationally effective exploitation and strategically flexible exploration. A longitudinal case study is presented of the interaction between...... incremental and radical change in a Danish company, observed through the lens of organizational learning. The radical change process is described in five phases, each of which had its own effects on incremental change initiatives in the company. The research identified four factors explaining these effects, all...

  10. A program for the numerical control of a pulse increment system

    Energy Technology Data Exchange (ETDEWEB)

    Gray, D.C.

    1963-08-21

    This report will describe the important features of the development of magnetic tapes for the numerical control of a pulse-increment system consisting of a modified Gorton lathe and its associated control unit developed by L. E. Foley of Equipment Development Service, Engineering Services, General Electric Co., Schenectady, N.Y. Included is a description of CUPID (Control and Utilization of Pulse Increment Devices), a FORTRAN program for the design of these tapes on the IBM 7090 computer, and instructions for its operation.
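
A pulse-increment controller of this kind consumes unit pulses per axis rather than coordinates, so tape preparation amounts to decomposing each programmed move into such pulses. The Bresenham-style decomposition below is a generic illustration under that assumption, not CUPID's actual algorithm (the report's internals are not reproduced here).

```python
def line_pulses(dx_total, dy_total):
    """Decompose a straight move into per-axis unit pulses (-1, 0 or +1),
    the form a pulse-increment NC controller consumes from tape."""
    steps = max(abs(dx_total), abs(dy_total))
    if steps == 0:
        return []
    pulses, x, y = [], 0, 0
    for i in range(1, steps + 1):
        # ideal position after the i-th pulse interval, snapped to the pulse grid
        nx = round(i * dx_total / steps)
        ny = round(i * dy_total / steps)
        pulses.append((nx - x, ny - y))
        x, y = nx, ny
    return pulses

p = line_pulses(5, 2)
```

Each emitted pair moves at most one pulse per axis, and the pulses sum exactly to the programmed displacement.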

  11. Incremental Approach to the Technology of Test Design for Industrial Projects

    Directory of Open Access Journals (Sweden)

    P. D. Drobintsev

    2014-01-01

    The paper presents an approach to effort reduction in developing test suites for industrial software products based on the incremental technology. The main problems to be solved by the incremental technology are the fully automatic design of test scenarios and a significant reduction of test explosion. The proposed approach solves these problems through joint co-working of a designer and a customer, through the integration of symbolic verification with the automatic generation of test suites, and through the use of an efficient technology with the toolset VRS/TAT.

  12. Incremental electrohydraulic forming - A new approach for the manufacture of structured multifunctional sheet metal blanks

    Science.gov (United States)

    Djakow, Eugen; Springer, Robert; Homberg, Werner; Piper, Mark; Tran, Julian; Zibart, Alexander; Kenig, Eugeny

    2017-10-01

    Electrohydraulic forming (EHF) processes permit the production of complex, sharp-edged geometries even when high-strength materials are used. Unfortunately, the forming zone is often limited compared to other sheet metal forming processes. The use of a special industrial-robot-based tool setup and an incremental process strategy could provide a promising solution to this problem. This paper describes such an innovative approach using an electrohydraulic incremental forming machine, which can be employed to manufacture the large multifunctional and complex part geometries in steel, aluminium, magnesium and reinforced plastic that are employed in lightweight constructions or heating elements.

  13. The significance test controversy revisited the fiducial Bayesian alternative

    CERN Document Server

    Lecoutre, Bruno

    2014-01-01

    The purpose of this book is not only to revisit the “significance test controversy,” but also to provide a conceptually sounder alternative. As such, it presents a Bayesian framework for a new approach to analyzing and interpreting experimental data. It also prepares students and researchers for reporting on experimental results. Normative aspects: The main views of statistical tests are revisited and the philosophies of Fisher, Neyman-Pearson and Jeffreys are discussed in detail. Descriptive aspects: The misuses of Null Hypothesis Significance Tests are reconsidered in light of Jeffreys’ Bayesian conceptions concerning the role of statistical inference in experimental investigations. Prescriptive aspects: The current effect size and confidence interval reporting practices are presented and seriously questioned. Methodological aspects are carefully discussed and fiducial Bayesian methods are proposed as a more suitable alternative for reporting on experimental results. In closing, basic routine procedures...

  14. The hard-core model on random graphs revisited

    International Nuclear Information System (INIS)

    Barbier, Jean; Krzakala, Florent; Zhang, Pan; Zdeborová, Lenka

    2013-01-01

    We revisit the classical hard-core model, also known as the independent set problem and dual to the vertex cover problem, where one puts particles with a first-neighbor hard-core repulsion on the vertices of a random graph. Although the cases of random graphs with small and with very large average degrees are quite well understood, they yield qualitatively different results and our aim here is to reconcile these two cases. We revisit results that can be obtained using the (heuristic) cavity method and show that it provides a closed-form conjecture for the exact density of the densest packing on random regular graphs with degree K ≥ 20, and that for K > 16 the nature of the phase transition is the same as for large K. This also shows that the hard-core model is the simplest mean-field lattice model for structural glasses and jamming.

  15. Radiative corrections to neutrino deep inelastic scattering revisited

    International Nuclear Information System (INIS)

    Arbuzov, Andrej B.; Bardin, Dmitry Yu.; Kalinovskaya, Lidia V.

    2005-01-01

    Radiative corrections to neutrino deep inelastic scattering are revisited. One-loop electroweak corrections are re-calculated within the automatic SANC system. Terms with mass singularities are treated including higher order leading logarithmic corrections. Scheme dependence of corrections due to weak interactions is investigated. The results are implemented into the data analysis of the NOMAD experiment. The present theoretical accuracy in description of the process is discussed

  16. The Assassination of John F. Kennedy: Revisiting the Medical Data

    OpenAIRE

    Rohrich, Rod J.; Nagarkar, Purushottam; Stokes, Mike; Weinstein, Aaron; Mantik, David W.; Jensen, J. Arthur

    2014-01-01

    Thank you for publishing "The Assassination of John F. Kennedy: Revisiting the Medical Data."1 The central conclusion of this study is that the assassination remains controversial and that some of the controversy must be attributable to the "reporting and handling of the medical evidence." With the greatest respect for you and Dr. Robert McClelland, let me argue that your text and on-line interviews perpetuate the central misunderstanding of the assassination and there...

  17. Quark matter revisited with non-extensive MIT bag model

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Pedro H.G.; Nunes da Silva, Tiago; Menezes, Debora P. [Universidade Federal de Santa Catarina, Departamento de Fisica, CFM, Florianopolis (Brazil); Deppman, Airton [Instituto de Fisica da Universidade de Sao Paulo, Sao Paulo (Brazil)

    2017-10-15

    In this work we revisit the MIT bag model to describe quark matter within both the usual Fermi-Dirac and the Tsallis statistics. We verify the effects of the non-additivity of the latter by analysing two different pictures: the first order phase transition of the QCD phase diagram and stellar matter properties. While the QCD phase diagram is visually affected by the Tsallis statistics, the resulting effects on quark star macroscopic properties are barely noticed. (orig.)

  18. Ambulatory thyroidectomy: A multistate study of revisits and complications

    OpenAIRE

    Orosco, RK; Lin, HW; Bhattacharyya, N

    2015-01-01

    © 2015 American Academy of Otolaryngology - Head and Neck Surgery Foundation. Objective. Determine rates and reasons for revisits after ambulatory adult thyroidectomy. Study Design. Cross-sectional analysis of multistate ambulatory surgery and hospital databases. Setting. Ambulatory surgery data from the State Ambulatory Surgery Databases of California, Florida, Iowa, and New York for calendar years 2010 and 2011. Subjects and Methods. Ambulatory thyroidectomy cases were linked to state ambul...

  19. Place attachment and social legitimacy: Revisiting the sustainable entrepreneurship journey

    OpenAIRE

    Kibler, E; Fink, M; Lang, R; Munoz, PA

    2015-01-01

    This paper revisits the sustainable entrepreneurship journey by introducing a ‘place-based’ sustainable venture path model. We suggest that distinguishing between emotional (‘caring about the place’) and instrumental (‘using the place’) place attachment of sustainable entrepreneurs deepens our understanding of how place-based challenges of sustainable venture legitimacy are managed over time. We conclude with avenues for future sustainable entrepreneurship research.

  20. Doppler Processing with Ultra-Wideband (UWB) Radar Revisited

    Science.gov (United States)

    2018-01-01

    Technical Note, December 2017. This technical note revisits previous work performed at the US Army Research Laboratory related to... target considered previously is proportional to a delayed version of the transmitted signal, up to a complex constant factor. We write the received

  1. Radiative corrections to double-Dalitz decays revisited

    Science.gov (United States)

    Kampf, Karol; Novotný, Jiři; Sanchez-Puertas, Pablo

    2018-03-01

    In this study, we revisit and complete the full next-to-leading order corrections to pseudoscalar double-Dalitz decays within the soft-photon approximation. Comparing to the previous study, we find small differences, which are nevertheless relevant for extracting information about the pseudoscalar transition form factors. Concerning the latter, these processes could offer the opportunity to test them—for the first time—in their double-virtual regime.

  2. Dispute Resolution and Technology: Revisiting the Justification of Conflict Management

    OpenAIRE

    Koulu, Riikka

    2016-01-01

    This study, Dispute Resolution and Technology: Revisiting the Justification of Conflict Management, belongs to the fields of procedural law, legal theory and law and technology studies. In this study the changes in dispute resolution caused by technology are evaluated. The overarching research question of this study is how does implementing technology to dispute resolution challenge the justification of law as a legitimised mode of violence? Before answering such an abstract research question...

  3. Deja vu: The Unified Command Plan of the Future Revisited

    Science.gov (United States)

    2011-05-19

    Approved for Public Release; Distribution is Unlimited. Déjà vu: The Unified Command Plan of the Future Revisited. A monograph by Lieutenant Colonel Edward Francis Martignetti, School of Advanced Military Studies, 19-05-2011 (covering JUL 2010 – MAY 2011).

  4. Hospital revisit rate after a diagnosis of conversion disorder.

    Science.gov (United States)

    Merkler, Alexander E; Parikh, Neal S; Chaudhry, Simriti; Chait, Alanna; Allen, Nicole C; Navi, Babak B; Kamel, Hooman

    2016-04-01

    To estimate the hospital revisit rate of patients diagnosed with conversion disorder (CD). Using administrative data, we identified all patients discharged from California, Florida and New York emergency departments (EDs) and acute care hospitals between 2005 and 2011 with a primary discharge diagnosis of CD. Patients discharged with a primary diagnosis of seizure or transient global amnesia (TGA) served as control groups. Our primary outcome was the rate of repeat ED visits and hospital admissions after initial presentation. Poisson regression was used to compare rates between diagnosis groups while adjusting for demographic characteristics. We identified 7946 patients discharged with a primary diagnosis of CD. During a mean follow-up of 3.0 (±1.6) years, patients with CD had a median of three (IQR, 1-9) ED or inpatient revisits, compared with 0 (IQR, 0-2) in patients with TGA and 3 (IQR, 1-7) in those with seizures. Revisit rates were 18.25 (95% CI, 18.10 to 18.40) visits per 100 patients per month in those with CD, 3.90 (95% CI, 3.84 to 3.95) in those with TGA and 17.78 (95% CI, 17.75 to 17.81) in those with seizures. As compared to CD, the incidence rate ratio for repeat ED visits or hospitalisations was 0.89 (95% CI, 0.86 to 0.93) for seizure disorder and 0.32 (95% CI 0.31 to 0.34) for TGA. CD is associated with a substantial hospital revisit rate. Our findings suggest that CD is not an acute, time-limited response to stress, but rather that CD is a manifestation of a broader pattern of chronic neuropsychiatric disease. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
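
The rates quoted (visits per 100 patients per month) follow directly from event counts and person-time. The counts below are hypothetical, chosen only to reproduce the same arithmetic; note also that a crude rate ratio differs from the paper's adjusted Poisson-regression IRRs.

```python
def rate_per_100_patient_months(n_visits, n_patients, months):
    """Crude event rate expressed per 100 patients per month."""
    return 100.0 * n_visits / (n_patients * months)

# Hypothetical cohorts: 1000 patients each, followed for 36 months
cd_rate = rate_per_100_patient_months(6570, 1000, 36)    # conversion disorder
tga_rate = rate_per_100_patient_months(1404, 1000, 36)   # transient global amnesia
crude_irr = tga_rate / cd_rate    # crude, unadjusted incidence rate ratio
```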

  5. Serotype-specific mortality from invasive Streptococcus pneumoniae disease revisited

    DEFF Research Database (Denmark)

    Martens, Pernille; Worm, Signe Westring; Lundgren, Bettina

    2004-01-01

    Serotype-specific mortality from invasive Streptococcus pneumoniae disease revisited. Martens P, Worm SW, Lundgren B, Konradsen HB, Benfield T. Department of Infectious Diseases 144, Hvidovre University Hospital, DK-2650 Hvidovre, Denmark. pernillemartens@yahoo.com BACKGROUND: Invasive infection with Streptococcus pneumoniae (pneumococci) causes significant morbidity and mortality. Case series and experimental data have shown that the capsular serotype is involved in the pathogenesis and is a determinant of disease outcome. METHODS: Retrospective review of 464 cases of invasive disease among adults diagnosed

  6. Enthalpy increment measurements of Sr3Zr2O7(s) and Sr4Zr3O10(s)

    International Nuclear Information System (INIS)

    Banerjee, A.; Dash, S.; Prasad, R.; Venugopal, V.

    1998-01-01

    Enthalpy increment measurements on Sr3Zr2O7(s) and Sr4Zr3O10(s) were carried out using a Calvet micro-calorimeter. The enthalpy increment values were least-squares analyzed with the constraints that H0(T)-H0(298.15 K) equals zero at 298.15 K and that Cp0(298.15 K) equals the estimated value. The dependence of the enthalpy increment on temperature is given. (orig.)
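
The constrained least-squares idea can be sketched on synthetic numbers: writing the increment as dH(T) = Cp(298.15 K)*(T - T0) + b*(T - T0)^2 enforces both constraints (dH = 0 and ddH/dT = Cp at T0 = 298.15 K) by construction, leaving a one-parameter fit. The Cp value and data below are invented, not the paper's measurements.

```python
import numpy as np

T0, CP0 = 298.15, 130.0   # hypothetical Cp(298.15 K), J/(mol K)
T = np.array([400.0, 500.0, 600.0, 700.0, 800.0])   # measurement temperatures, K
t = T - T0

# Synthetic noise-free enthalpy increments H(T) - H(T0) with known curvature.
b_true = 0.02
dH = CP0 * t + b_true * t**2

# The model CP0*t + b*t^2 satisfies dH(T0) = 0 and dH'(T0) = CP0 identically,
# so the constrained fit reduces to least squares in the single parameter b.
b_fit = np.sum((dH - CP0 * t) * t**2) / np.sum(t**4)
```

With noisy data the same one-line estimator gives the least-squares b while the two physical constraints remain satisfied exactly.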

  7. The Trait Emotional Intelligence Questionnaire: Internal Structure, Convergent, Criterion, and Incremental Validity in an Italian Sample

    Science.gov (United States)

    Andrei, Federica; Smith, Martin M.; Surcinelli, Paola; Baldaro, Bruno; Saklofske, Donald H.

    2016-01-01

    This study investigated the structure and validity of the Italian translation of the Trait Emotional Intelligence Questionnaire. Data were self-reported from 227 participants. Confirmatory factor analysis supported the four-factor structure of the scale. Hierarchical regressions also demonstrated its incremental validity beyond demographics, the…

  8. One Size Does Not Fit All: Managing Radical and Incremental Creativity

    Science.gov (United States)

    Gilson, Lucy L.; Lim, Hyoun Sook; D'Innocenzo, Lauren; Moye, Neta

    2012-01-01

    This research extends creativity theory by re-conceptualizing creativity as a two-dimensional construct (radical and incremental) and examining the differential effects of intrinsic motivation, extrinsic rewards, and supportive supervision on perceptions of creativity. We hypothesize and find two distinct types of creativity that are associated…

  9. Relating annual increments of the endangered Blanding's turtle plastron growth to climate.

    Science.gov (United States)

    Richard, Monik G; Laroque, Colin P; Herman, Thomas B

    2014-05-01

    This research is the first published study to report a relationship between climate variables and plastron growth increments of turtles, in this case the endangered Nova Scotia Blanding's turtle (Emydoidea blandingii). We used techniques and software common to the discipline of dendrochronology to successfully cross-date our growth increment data series, to detrend and average our series of 80 immature Blanding's turtles into one common chronology, and to seek correlations between the chronology and environmental temperature and precipitation variables. Our cross-dated chronology had a series intercorrelation of 0.441 (above 99% confidence interval), an average mean sensitivity of 0.293, and an average unfiltered autocorrelation of 0.377. Our master chronology represented increments from 1975 to 2007 (33 years), with index values ranging from a low of 0.688 in 2006 to a high of 1.303 in 1977. Univariate climate response function analysis on mean monthly air temperature and precipitation values revealed a positive correlation with the previous year's May temperature and current year's August temperature; a negative correlation with the previous year's October temperature; and no significant correlation with precipitation. These techniques for determining growth increment response to environmental variables should be applicable to other turtle species and merit further exploration.

  10. An Empirical Analysis of Incremental Capital Structure Decisions Under Managerial Entrenchment

    NARCIS (Netherlands)

    de Jong, A.; Veld, C.H.

    1998-01-01

    We study incremental capital structure decisions of Dutch companies. From 1977 to 1996 these companies have made 110 issues of public and private seasoned equity and 137 public issues of straight debt. Managers of Dutch companies are entrenched. For this reason a discrepancy exists between

  11. Volatilities, traded volumes, and the hypothesis of price increments in derivative securities

    Science.gov (United States)

    Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik

    2007-08-01

A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit a long-memory property. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to logarithmic increments of KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. It was found from a comparison of the three tick data sets that the higher-order correlation inherent in logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes supports the hypothesis of price changes.
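DFA as used above is a standard technique; a generic sketch (not the authors' exact implementation) integrates the series, detrends it within windows of increasing size, and reads the scaling exponent off the log-log slope of the fluctuation function. An exponent near 0.5 indicates no long memory; values above 0.5 indicate persistent long-range correlations.

```python
import numpy as np

def dfa_fluctuation(x, window_sizes):
    """Detrended fluctuation analysis: RMS fluctuation F(n) per window size n."""
    y = np.cumsum(x - np.mean(x))             # integrated profile
    F = []
    for n in window_sizes:
        n_win = len(y) // n
        sq = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)      # linear detrend in each window
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

# Scaling exponent alpha = slope of log F(n) vs log n;
# uncorrelated increments (white noise) should give alpha near 0.5.
rng = np.random.default_rng(0)
x = rng.normal(size=4096)
ns = np.array([8, 16, 32, 64, 128, 256])
F = dfa_fluctuation(x, ns)
alpha = np.polyfit(np.log(ns), np.log(F), 1)[0]
```

Applying the same routine to absolute increments (a volatility proxy) of real tick data is how a long-memory property like the one reported above would show up, as alpha clearly above 0.5.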

  12. BMI and BMI SDS in childhood: annual increments and conditional change.

    Science.gov (United States)

    Brannsether, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Júlíusson, Pétur Benedikt

    2017-02-01

Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m 2 ) and BMI SDS are summarised by percentiles. Differences according to sex, age, height, weight, initial BMI and weight status on the BMI and BMI SDS increments were assessed with multiple linear regression. Conditional change in BMI SDS was based on the correlation between annual BMI measurements converted to SDS. Results: BMI increments depended significantly on sex, height, weight and initial BMI. Changes in BMI SDS depended significantly only on the initial BMI SDS. The distribution of conditional change in BMI SDS using a two-correlation model was close to normal (mean = 0.11, SD = 1.02, n = 1167), with 3.2% (2.3-4.4%) of the observations below -2 SD and 2.8% (2.0-4.0%) above +2 SD. Conclusion: Conditional change in BMI SDS can be used to detect unexpected large changes in BMI SDS. Although this method requires the use of a computer, it may be clinically useful to detect aberrant weight development.
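A conditional-change score of the kind described can be sketched with the standard single-correlation formula (the study itself uses a two-correlation model, and the correlation value below is hypothetical): the observed follow-up SDS is compared with its regression-to-the-mean expectation given the initial SDS, then rescaled to unit variance.

```python
import math

def conditional_change(z1, z2, r):
    """Conditional change in SDS: observed follow-up z2 minus its expectation
    given baseline z1 (r*z1, accounting for regression to the mean),
    divided by the residual SD so the score has unit variance."""
    return (z2 - r * z1) / math.sqrt(1.0 - r ** 2)

# Hypothetical example: correlation between annual BMI SDS measurements
# assumed to be 0.9; a child rises from +1.5 to +2.4 SDS in one year.
cc = conditional_change(1.5, 2.4, 0.9)
flag = abs(cc) > 2     # flag unexpectedly large conditional change
```

Flagging |score| > 2 mirrors the ±2 SD tails reported in the abstract; a child merely tracking along a high centile produces a score near zero and is not flagged.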

  13. Investigating Postgraduate College Admission Interviews: Generalizability Theory Reliability and Incremental Predictive Validity

    Science.gov (United States)

    Arce-Ferrer, Alvaro J.; Castillo, Irene Borges

    2007-01-01

    The use of face-to-face interviews is controversial for college admissions decisions in light of the lack of availability of validity and reliability evidence for most college admission processes. This study investigated reliability and incremental predictive validity of a face-to-face postgraduate college admission interview with a sample of…

  14. Software designs of image processing tasks with incremental refinement of computation.

    Science.gov (United States)

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest-based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.

  15. How to Perform Precise Soil and Sediment Sampling? One solution: The Fine Increment Soil Collector (FISC)

    International Nuclear Information System (INIS)

    Mabit, L.; Toloza, A.; Meusburger, K.; Alewell, C.; Iurian, A-R.; Owens, P.N.

    2014-01-01

    Soil and sediment related research for terrestrial agrienvironmental assessments requires accurate depth incremental sampling to perform detailed analysis of physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow collecting soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”

  16. Per tree estimates with n-tree distance sampling: an application to increment core data

    Science.gov (United States)

    Thomas B. Lynch; Robert F. Wittwer

    2002-01-01

    Per tree estimates using the n trees nearest a point can be obtained by using a ratio of per unit area estimates from n-tree distance sampling. This ratio was used to estimate average age by d.b.h. classes for cottonwood trees (Populus deltoides Bartr. ex Marsh.) on the Cimarron National Grassland. Increment...

  17. Literature Review of Data on the Incremental Costs to Design and Build Low-Energy Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, W. D.

    2008-05-14

    This document summarizes findings from a literature review into the incremental costs associated with low-energy buildings. The goal of this work is to help establish as firm an analytical foundation as possible for the Building Technology Program's cost-effective net-zero energy goal in the year 2025.

  18. Analogical reasoning: An incremental or insightful process? What cognitive and cortical evidence suggests.

    Science.gov (United States)

    Antonietti, Alessandro; Balconi, Michela

    2010-06-01

The step-by-step, incremental nature of analogical reasoning can be questioned, since analogy making appears to be an insight-like process. This alternative view of analogical thinking can be integrated into Speed's model, even though the alleged role played by dopaminergic subcortical circuits needs further supporting evidence.

  19. A diameter increment model for Red Fir in California and Southern Oregon

    Science.gov (United States)

    K. Leroy Dolph

    1992-01-01

Periodic (10-year) diameter increment of individual red fir trees in California and southern Oregon can be predicted from the initial diameter and crown ratio of each tree, site index, percent slope, and aspect of the site. The model actually predicts the natural logarithm of the change in squared diameter inside bark between the start and the end of a 10-year growth period....

  20. Is It that Difficult to Find a Good Preference Order for the Incremental Algorithm?

    Science.gov (United States)

    Krahmer, Emiel; Koolen, Ruud; Theune, Mariet

    2012-01-01

    In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order.…

  1. How to Perform Precise Soil and Sediment Sampling? One solution: The Fine Increment Soil Collector (FISC)

    Energy Technology Data Exchange (ETDEWEB)

    Mabit, L.; Toloza, A. [Soil and Water Management and Crop Nutrition Laboratory, IAEA, Seibersdorf (Austria); Meusburger, K.; Alewell, C. [Environmental Geosciences, Department of Environmental Sciences, University of Basel, Basel (Switzerland); Iurian, A-R. [Babes-Bolyai University, Faculty of Environmental Science and Engineering, Cluj-Napoca (Romania); Owens, P. N. [Environmental Science Program and Quesnel River Research Centre, University of Northern British Columbia, Prince George, British Columbia (Canada)

    2014-07-15

    Soil and sediment related research for terrestrial agrienvironmental assessments requires accurate depth incremental sampling to perform detailed analysis of physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow collecting soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”.

  2. Evaluating growth assumptions using diameter or radial increments in natural even-aged longleaf pine

    Science.gov (United States)

    John C. Gilbert; Ralph S. Meldahl; Jyoti N. Rayamajhi; John S. Kush

    2010-01-01

    When using increment cores to predict future growth, one often assumes future growth is identical to past growth for individual trees. Once this assumption is accepted, a decision has to be made between which growth estimate should be used, constant diameter growth or constant basal area growth. Often, the assumption of constant diameter growth is used due to the ease...

  3. Diagnostic value of triphasic incremental helical CT in early and progressive gastric carcinoma

    International Nuclear Information System (INIS)

    Gao Jianbo; Yan Xuehua; Li Mengtai; Guo Hua; Chen Xuejun; Guan Sheng; Zhang Xiefu; Li Shuxin; Yang Xiaopeng

    2001-01-01

Objective: To investigate the helical CT enhancement characteristics of gastric carcinoma, and the diagnostic value and preoperative staging of triphasic incremental helical CT of the stomach with the water-filling method. Methods: Both double-contrast barium examination and triphasic incremental helical CT of the stomach with the water-filling method were performed in 46 patients with gastric carcinoma. Results: (1) The normal gastric wall exhibited a one-layered structure in 18 patients and a two- or three-layered structure in 28 patients in the arterial and portal venous phases. (2) Two cases of early gastric cancer showed marked enhancement in the arterial and portal venous phases and obvious attenuation of enhancement in the equilibrium phase. In contrast, 32 of the 44 advanced gastric carcinomas showed greater enhancement in the venous phase than in the arterial phase (t = 4.226, P < 0.05). (3) The overall accuracy of triphasic incremental helical CT in TNM staging was 81.0%. Conclusion: Different types of gastric carcinoma have different enhancement features. Triphasic incremental helical CT is more accurate than conventional CT in the preoperative staging of gastric carcinoma

  4. On the search for an appropriate metric for reaction time to suprathreshold increments and decrements.

    Science.gov (United States)

    Vassilev, Angel; Murzac, Adrian; Zlatkova, Margarita B; Anderson, Roger S

    2009-03-01

Weber contrast, DeltaL/L, is a widely used contrast metric for aperiodic stimuli. Zele, Cao & Pokorny [Zele, A. J., Cao, D., & Pokorny, J. (2007). Threshold units: A correct metric for reaction time? Vision Research, 47, 608-611] found that neither Weber contrast nor its transform to detection-threshold units equates human reaction times in response to luminance increments and decrements under selective rod stimulation. Here we show that their rod reaction times are equated when plotted against the spatial luminance ratio between the stimulus and its background (L(max)/L(min), the ratio of the larger to the smaller of the background and stimulus luminances). Similarly, reaction times to parafoveal S-cone selective increments and decrements from our previous studies [Murzac, A. (2004). A comparative study of the temporal characteristics of processing of S-cone incremental and decremental signals. PhD thesis, New Bulgarian University, Sofia, Murzac, A., & Vassilev, A. (2004). Reaction time to S-cone increments and decrements. In: 7th European conference on visual perception, Budapest, August 22-26. Perception, 33, 180 (Abstract).], are better described by the spatial luminance ratio than by Weber contrast. We assume that the type of stimulus detection by temporal (successive) luminance discrimination, by spatial (simultaneous) luminance discrimination or by both [Sperling, G., & Sondhi, M. M. (1968). Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 58, 1133-1145.] determines the appropriateness of one or the other contrast metric for reaction time.
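The two metrics at issue are easy to compare directly. The point is that an increment and a decrement with the same |Weber contrast| have different spatial luminance ratios, so the two metrics can order stimuli differently (the luminance values below are illustrative, not from the study):

```python
def weber_contrast(L_stim, L_bg):
    """Weber contrast: (L - L_bg) / L_bg; signed, so decrements are negative."""
    return (L_stim - L_bg) / L_bg

def spatial_luminance_ratio(L_stim, L_bg):
    """L_max / L_min: larger over smaller of stimulus and background luminance."""
    return max(L_stim, L_bg) / min(L_stim, L_bg)

# An increment and a decrement with equal |Weber contrast| of 0.5 ...
inc = weber_contrast(150.0, 100.0)              # +0.5
dec = weber_contrast(50.0, 100.0)               # -0.5
# ... have unequal spatial luminance ratios:
r_inc = spatial_luminance_ratio(150.0, 100.0)   # 1.5
r_dec = spatial_luminance_ratio(50.0, 100.0)    # 2.0
```

On the Weber metric the two stimuli are matched; on the ratio metric the decrement is the stronger stimulus, which is the direction of the reaction-time asymmetry the abstract describes.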

  5. Lead 210 and moss-increment dating of two Finnish Sphagnum hummocks

    International Nuclear Information System (INIS)

    El-Daoushy, F.

    1982-01-01

A comparison is presented of ²¹⁰Pb dating data with moss-increment dates of selected peat material from Finland. The ²¹⁰Pb measurements were carried out by determining the granddaughter product ²¹⁰Po by means of isotope dilution. The ages in ²¹⁰Pb yr were calculated using the constant initial concentration and the constant rate of supply models. (U.K.)

  6. Successive 1-Month Weight Increments in Infancy Can Be Used to Screen for Faltering Linear Growth.

    Science.gov (United States)

    Onyango, Adelheid W; Borghi, Elaine; de Onis, Mercedes; Frongillo, Edward A; Victora, Cesar G; Dewey, Kathryn G; Lartey, Anna; Bhandari, Nita; Baerug, Anne; Garza, Cutberto

    2015-12-01

Linear growth faltering in the first 2 y contributes greatly to a high stunting burden, and prevention is hampered by the limited capacity in primary health care for timely screening and intervention. This study aimed to determine an approach to predicting long-term stunting from consecutive 1-mo weight increments in the first year of life. By using the reference sample of the WHO velocity standards, the analysis explored patterns of consecutive monthly weight increments among healthy infants. Four candidate screening thresholds of successive increments that could predict stunting were considered, and one was selected for further testing. The selected threshold was applied in a cohort of Bangladeshi infants to assess its predictive value for stunting at ages 12 and 24 mo. Between birth and age 12 mo, 72.6% of infants in the WHO sample tracked within 1 SD of their weight and length. The selected screening criterion ("event") was 2 consecutive monthly increments below the 15th percentile. Bangladeshi infants were born relatively small and, on average, tracked downward from approximately age 6 mo. If the screening strategy is effective, the estimated preventable proportion in the group who experienced the event would be 34% at 12 mo and 24% at 24 mo. This analysis offers an approach for frontline workers to identify children at risk of stunting, allowing for timely initiation of preventive measures. It opens avenues for further investigation into evidence-informed application of the WHO growth velocity standards. © 2015 American Society for Nutrition.

  7. Raising Cervical Cancer Awareness: Analysing the Incremental Efficacy of Short Message Service

    Science.gov (United States)

    Lemos, Marina Serra; Rothes, Inês Areal; Oliveira, Filipa; Soares, Luisa

    2017-01-01

    Objective: To evaluate the incremental efficacy of a Short Message Service (SMS) combined with a brief video intervention in increasing the effects of a health education intervention for cervical cancer prevention, over and beyond a video-alone intervention, with respect to key determinants of health behaviour change--knowledge, motivation and…

  8. The Interpersonal Measure of Psychopathy: Construct and Incremental Validity in Male Prisoners

    Science.gov (United States)

    Zolondek, Stacey; Lilienfeld, Scott O.; Patrick, Christopher J.; Fowler, Katherine A.

    2006-01-01

    The authors examined the construct and incremental validity of the Interpersonal Measure of Psychopathy (IM-P), a relatively new instrument designed to detect interpersonal behaviors associated with psychopathy. Observers of videotaped Psychopathy Checklist-Revised (PCL-R) interviews rated male prisoners (N = 93) on the IM-P. The IM-P correlated…

  9. Between structures and norms : Assessing tax increment financing for the Dutch spatial planning toolkit

    NARCIS (Netherlands)

    Root, Liz; Van Der Krabben, Erwin; Spit, Tejo

    2015-01-01

    The aim of the paper is to assess the institutional (mis)fit of tax increment financing for the Dutch spatial planning financial toolkit. By applying an institutionally oriented assessment framework, we analyse the interconnectivity of Dutch municipal finance and spatial planning structures and

  10. Analytic description of the frictionally engaged in-plane bending process incremental swivel bending (ISB)

    Science.gov (United States)

    Frohn, Peter; Engel, Bernd; Groth, Sebastian

    2018-05-01

Kinematic forming processes shape geometries through the process parameters, allowing more universal process utilization across geometric configurations. The kinematic forming process Incremental Swivel Bending (ISB) bends sheet metal strips or profiles in plane. The sequence for bending an arc increment is composed of the steps clamping, bending, force release and feed. The bending moment is frictionally engaged by two clamping units in a laterally adjustable bending pivot. A minimum clamping force that prevents the material from slipping through the clamping units is a crucial criterion for achieving a well-defined incremental arc. Therefore, an analytic description of a single bent increment is developed in this paper. The bending moment is calculated from the uniaxial stress distribution over the profile's width, depending on the bending pivot's position. With a Coulomb-based friction model, the necessary clamping force is described as a function of friction, offset, the dimensions of the clamping tools and the strip thickness, as well as material parameters. Boundaries for the uniaxial stress calculation are given as functions of friction, tool dimensions and strip thickness. The results indicate that moving the bending pivot to an eccentric position significantly affects the process bending moment and, hence, the clamping force, which is given as a function of yield stress and hardening exponent. FE simulations validate the model with satisfactory agreement.
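The paper's analytic model involves the stress distribution, pivot offset, and tool geometry; as a far simpler illustration of the slip-limit idea only, here is a minimal sketch under assumed geometry (all names and numbers are hypothetical, not from the paper): the friction at the clamp faces must be able to transmit the bending moment, which gives a lower bound on the normal clamping force.

```python
def min_clamping_force(M_bend, mu, lever_arm, n_faces=2):
    """Minimum normal clamping force so that friction can transmit the
    bending moment without the strip slipping. Hypothetical simplified
    slip-limit model: M_bend = n_faces * mu * F * lever_arm at the limit."""
    return M_bend / (n_faces * mu * lever_arm)

# Illustrative numbers: 50 N*m moment, friction coefficient 0.15,
# 20 mm effective lever arm, friction acting on both clamp faces.
F = min_clamping_force(M_bend=50.0, mu=0.15, lever_arm=0.02)   # in N
```

Any real design margin would sit above this bound, and the paper's model additionally ties the moment itself to yield stress, hardening, and pivot eccentricity.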

  11. On critical cases in limit theory for stationary increments Lévy driven moving averages

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Podolskij, Mark

    averages. The limit theory heavily depends on the interplay between the given order of the increments, the considered power, the Blumenthal-Getoor index of the driving pure jump Lévy process L and the behavior of the kernel function g at 0. In this work we will study the critical cases, which were...

  12. TCAM-based High Speed Longest Prefix Matching with Fast Incremental Table Updates

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Kragelund, A.; Berger, Michael Stübert

    2013-01-01

    and consequently a higher throughput of the network search engine, since the TCAM down time caused by incremental updates is eliminated. The LPM scheme is described in HDL for FPGA implementation and compared to an existing scheme for customized CAM circuits. The paper shows that the proposed scheme can process...

  13. Differentiating Major and Incremental New Product Development: The Effects of Functional and Numerical Workforce Flexibility

    NARCIS (Netherlands)

    Kok, R.A.W.; Ligthart, P.E.M.

    2014-01-01

    This study seeks to explain the differential effects of workforce flexibility on incremental and major new product development (NPD). Drawing on the resource-based theory of the firm, human resource management research, and innovation management literature, the authors distinguish two types of

  14. Real Time Implementation of Incremental Fuzzy Logic Controller for Gas Pipeline Corrosion Control

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Jayapalan

    2014-01-01

Full Text Available A robust virtual-instrumentation-based fuzzy incremental corrosion controller is presented to protect metallic gas pipelines. The controller output depends on the error and the change in error of the controlled variable. For corrosion control, the pipe-to-soil potential is taken as the process variable. The proposed fuzzy incremental controller is designed using a very simple control rule base and the most natural and unbiased membership functions. The proposed scheme is tested over a wide range of pipe-to-soil potential control. A performance comparison between the conventional proportional-integral controller and the proposed fuzzy incremental controller is made in terms of several performance criteria, such as peak overshoot, settling time, and rise time. The results show that the proposed controller outperforms its conventional counterpart in each case. The designed controller can be put in auto mode without waiting for the initial polarization to stabilize. The start-up curves of the proportional-integral controller and the fuzzy incremental controller are reported. This controller can be used to protect metallic structures such as pipelines, tanks, concrete structures, ships, and offshore structures.

  15. On the Perturb-and-Observe and Incremental Conductance MPPT methods for PV systems

    DEFF Research Database (Denmark)

    Sera, Dezso; Mathe, Laszlo; Kerekes, Tamas

    2013-01-01

    This paper presents a detailed analysis of the two most well-known hill-climbing MPPT algorithms, the Perturb-and-Observe (P&O) and Incremental Conductance (INC). The purpose of the analysis is to clarify some common misconceptions in the literature regarding these two trackers, therefore helping...
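The INC tracker analysed above rests on a standard identity: at the maximum power point dP/dV = 0, which for P = V·I is equivalent to dI/dV = −I/V; left of the MPP dP/dV > 0, right of it dP/dV < 0. A minimal sketch of one decision step (fixed step size and variable names are illustrative, not from the paper):

```python
def inc_cond_step(v, i, v_prev, i_prev, step=0.5):
    """One incremental-conductance MPPT decision: compare the incremental
    conductance dI/dV with the instantaneous conductance -I/V and move the
    voltage reference toward the maximum power point."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return v                      # operating point unchanged: stay
        return v + step if di > 0 else v - step
    if di / dv == -i / v:
        return v                          # at the MPP: hold
    if di / dv > -i / v:                  # left of MPP: dP/dV > 0
        return v + step
    return v - step                       # right of MPP: dP/dV < 0
```

Unlike P&O, which always perturbs and oscillates around the MPP, this rule can in principle hold the operating point when the equality is met; the paper's point is that in discrete implementations the two trackers behave more similarly than is often claimed.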

  16. Gradient nanostructured surface of a Cu plate processed by incremental frictional sliding

    DEFF Research Database (Denmark)

    Hong, Chuanshi; Huang, Xiaoxu; Hansen, Niels

    2015-01-01

    The flat surface of a Cu plate was processed by incremental frictional sliding at liquid nitrogen temperature. The surface treatment results in a hardened gradient surface layer as thick as 1 mm in the Cu plate, which contains a nanostructured layer on the top with a boundary spacing of the order...

  17. A gradient surface produced by combined electroplating and incremental frictional sliding

    DEFF Research Database (Denmark)

    Yu, Tianbo; Hong, Chuanshi; Kitamura, K.

    2017-01-01

    A Cu plate was first electroplated with a Ni layer, with a thickness controlled to be between 1 and 2 mu m. The coated surface was then deformed by incremental frictional sliding with liquid nitrogen cooling. The combined treatment led to a multifunctional surface with a gradient in strain...

  18. The effects of the pine processionary moth on the increment of ...

    African Journals Online (AJOL)

    STORAGESEVER

    2009-05-18

    May 18, 2009 ... sycophanta L. (Coleoptera: Carabidae) used against the pine processionary moth (Thaumetopoea pityocampa Den. & Schiff.) (Lepidoptera: Thaumetopoeidae) in biological control. T. J. Zool. 30:181-185. Kanat M, Sivrikaya F (2005). Effect of the pine processionary moth on diameter increment of Calabrian ...

  19. Incrementally Detecting Change Types of Spatial Area Object: A Hierarchical Matching Method Considering Change Process

    Directory of Open Access Journals (Sweden)

    Yanhui Wang

    2018-01-01

Full Text Available Detecting and extracting the change types of spatial area objects can track area objects' spatiotemporal change patterns and provide a change-backtracking mechanism for incrementally updating spatial datasets. To address the problems of the high complexity of detection methods, the high redundancy rate of detection factors, and the low degree of automation during the incremental update process, we consider the change process of area objects in an integrated way and propose a hierarchical matching method to detect the nine types of changes of area objects, while minimizing the complexity of the algorithm and the redundancy rate of detection factors. We illustrate in detail the identification, extraction, and database entry of change types, and how we achieve a close connection and organic coupling of incremental information extraction and object type-of-change detection so as to characterize the whole change process. The experimental results show that this method can successfully detect incremental information about area objects in practical applications, with the overall accuracy reaching above 90%, which is much higher than that of the existing weighted matching method, making it quite feasible and applicable. It helps establish the corresponding relation between new-version and old-version objects, and facilitates the linked update processing and quality control of spatial data.

  20. Combining Compact Representation and Incremental Generation in Large Games with Sequential Strategies

    DEFF Research Database (Denmark)

    Bosansky, Branislav; Xin Jiang, Albert; Tambe, Milind

    2015-01-01

representation of sequential strategies and linear programming, or by incremental strategy generation of iterative double-oracle methods. In this paper, we present a novel hybrid of these two approaches: the compact-strategy double-oracle (CS-DO) algorithm, which combines the advantages of the compact representation...