WorldWideScience

Sample records for improved multiple-coarsening methods

  1. Semi-coarsening multigrid methods for parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Jones, J.E.

    1996-12-31

    Standard multigrid methods are not well suited for problems with anisotropic coefficients which can occur, for example, on grids that are stretched to resolve a boundary layer. There are several different modifications of the standard multigrid algorithm that yield efficient methods for anisotropic problems. In the paper, we investigate the parallel performance of these multigrid algorithms. Multigrid algorithms which work well for anisotropic problems are based on line relaxation and/or semi-coarsening. In semi-coarsening multigrid algorithms a grid is coarsened in only one of the coordinate directions unlike standard or full-coarsening multigrid algorithms where a grid is coarsened in each of the coordinate directions. When both semi-coarsening and line relaxation are used, the resulting multigrid algorithm is robust and automatic in that it requires no knowledge of the nature of the anisotropy. This is the basic multigrid algorithm whose parallel performance we investigate in the paper. The algorithm is currently being implemented on an IBM SP2 and its performance is being analyzed. In addition to looking at the parallel performance of the basic semi-coarsening algorithm, we present algorithmic modifications with potentially better parallel efficiency. One modification reduces the amount of computational work done in relaxation at the expense of using multiple coarse grids. This modification is also being implemented with the aim of comparing its performance to that of the basic semi-coarsening algorithm.
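    A minimal sketch (not from the paper) of the distinction the abstract draws: full coarsening halves the grid in every coordinate direction, while semi-coarsening halves it in only one. The grid size and the injection-style restriction are illustrative assumptions.

```python
import numpy as np

def full_coarsen(u):
    """Full coarsening: keep every other point in both coordinate directions."""
    return u[::2, ::2]

def semi_coarsen_y(u):
    """Semi-coarsening: coarsen only in y and keep full resolution in x.
    This is the option used when the problem is strongly coupled in x,
    e.g. on grids stretched to resolve a boundary layer."""
    return u[::2, :]

if __name__ == "__main__":
    ny, nx = 65, 65                        # illustrative fine-grid size
    u = np.random.rand(ny, nx)             # stand-in for a fine-grid function
    print("fine grid:         ", u.shape)
    print("full coarsening:   ", full_coarsen(u).shape)    # (33, 33)
    print("semi-coarsening(y):", semi_coarsen_y(u).shape)   # (33, 65)
```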

  2. Multigrid techniques with non-standard coarsening and group relaxation methods

    International Nuclear Information System (INIS)

    Danaee, A.

    1989-06-01

    In the usual (standard) multigrid methods, doubling of grid sizes with different smoothing iterations (pointwise or blockwise) has been considered by different authors. Some have indicated that a larger coarsening factor can also be used but is not beneficial (cf. H3, p.59). In this paper, it is shown that with a suitable blockwise smoothing scheme, some advantages can be achieved even with a coarsening factor of H_{l-1}/h_l = 3. (author). 10 refs, 2 figs, 6 tabs

  3. Algebraic coarsening methods for linear and nonlinear PDE and systems

    International Nuclear Information System (INIS)

    McWilliams, J C

    2000-01-01

    In [1] Brandt describes a general approach for algebraic coarsening. Given fine-grid equations and a prescribed relaxation method, an approach is presented for defining both the coarse-grid variables and the coarse-grid equations corresponding to these variables. Although these two tasks are not necessarily related (and, indeed, are often performed independently and with distinct techniques), in the approach of [1] both revolve around the same underlying observation. To determine whether a given set of coarse-grid variables is appropriate, it is suggested that one should employ compatible relaxation. This is a generalization of so-called F-relaxation (e.g., [2]). Suppose that the coarse-grid variables are defined as a subset of the fine-grid variables. Then, F-relaxation simply means relaxing only the F-variables (i.e., fine-grid variables that do not correspond to coarse-grid variables), while leaving the remaining fine-grid variables (C-variables) unchanged. The generalization of compatible relaxation lies in allowing the coarse-grid variables to be defined differently, say as linear combinations of fine-grid variables, or even nondeterministically (see examples in [1]). For the present summary it suffices to consider the simple case. The central observation regarding the set of coarse-grid variables is the following [1]: Observation 1--A general measure for the quality of the set of coarse-grid variables is the convergence rate of compatible relaxation. The conclusion is that a necessary condition for efficient multigrid solution (e.g., with convergence rates independent of problem size) is that the compatible-relaxation convergence rate be bounded away from 1, independently of the number of variables. This is often a sufficient condition, provided that the coarse-grid equations are sufficiently accurate. Therefore, it is suggested in [1] that the convergence rate of compatible relaxation should be used as a criterion for choosing and evaluating the set of coarse-grid variables.
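    The following is a small, self-contained sketch of the compatible-relaxation idea in its simplest form (F-relaxation), under assumptions not taken from [1]: a 1D Poisson matrix, damped Jacobi as the relaxation method, and C-variables chosen as a subset of fine-grid points. The late-sweep error reduction factor plays the role of the quality measure in Observation 1.

```python
import numpy as np

def poisson_1d(n):
    """Standard 1D Poisson matrix (Dirichlet boundaries), n interior points."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def f_relaxation_factor(A, c_mask, sweeps=50):
    """Estimate the asymptotic convergence factor of F-relaxation:
    damped Jacobi applied only to F-variables, with C-variables frozen at 0.
    A fast rate (well below 1) indicates a good set of coarse-grid variables."""
    n = A.shape[0]
    f = ~c_mask
    e = np.random.rand(n)
    e[c_mask] = 0.0                       # error vanishes at C-points
    D = np.diag(A)
    factor = 1.0
    for _ in range(sweeps):
        old = np.linalg.norm(e)
        r = -A @ e                        # residual of the homogeneous system
        e[f] += 0.67 * r[f] / D[f]        # damped Jacobi on F-points only
        factor = np.linalg.norm(e) / old
    return factor                         # late-sweep reduction factor

if __name__ == "__main__":
    n = 63
    A = poisson_1d(n)
    c_every_other = np.arange(n) % 2 == 0
    c_every_fourth = np.arange(n) % 4 == 0
    print("C = every 2nd point:", f_relaxation_factor(A, c_every_other))
    print("C = every 4th point:", f_relaxation_factor(A, c_every_fourth))
```
    A fast factor for the every-other-point split versus a noticeably slower one for the sparser split illustrates how compatible relaxation discriminates between good and poor coarse-variable sets.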

  4. Hydrography-driven coarsening of grid digital elevation models

    Science.gov (United States)

    Moretti, G.; Orlandini, S.

    2017-12-01

    A new grid coarsening strategy, denoted as hydrography-driven (HD) coarsening, is developed in the present study. The HD coarsening strategy is designed to retain the essential hydrographic features of surface flow paths observed in high-resolution digital elevation models (DEMs): (1) depressions are filled in the considered high-resolution DEM, (2) the obtained topographic data are used to extract a reference grid network composed of all surface flow paths, (3) the Horton order is assigned to each link of the reference grid network, and (4) within each coarse grid cell, the elevation of the point lying along the highest-order path of the reference grid network and displaying the minimum distance to the cell center is assigned to this coarse grid cell center. The capabilities of the HD coarsening strategy to provide consistent surface flow paths with respect to those observed in high-resolution DEMs are evaluated over a synthetic valley and two real drainage basins located in the Italian Alps and in the Italian Apennines. The HD coarsening is found to yield significantly more accurate surface flow path profiles than the standard nearest neighbor (NN) coarsening. In addition, the proposed strategy is found to reduce drastically the impact of depression-filling procedures in coarsened topographic data. The HD coarsening strategy is therefore advocated for all those cases in which the relief of the extracted drainage network is an important hydrographic feature. The figure below reports DEMs of a synthetic valley and extracted surface flow paths. (a) 10-m grid DEM displaying no depressions and extracted surface flow path (gray line). (b) 1-km grid DEM obtained from NN coarsening. (c) 1-km grid DEM obtained from NN coarsening plus depression-filling and extracted surface flow path (light blue line). (d) 1-km grid DEM obtained from HD coarsening and extracted surface flow path (magenta line).
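    A minimal numpy sketch of step (4) of the HD strategy under simplifying assumptions: the reference network is supplied as a raster of Horton orders on the fine grid, the coarsening factor divides the grid evenly, and the array and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def hd_coarsen(dem, horton, factor):
    """Step (4) of HD coarsening: within each coarse cell, assign the elevation
    of the fine cell that lies on the highest-order flow path and, among those,
    is closest to the coarse-cell centre.
    dem    : 2-D array of (depression-filled) fine-grid elevations
    horton : 2-D array of Horton orders of the reference network
    factor : integer coarsening factor (fine cells per coarse cell side)"""
    ny, nx = dem.shape
    out = np.empty((ny // factor, nx // factor))
    centre = (factor - 1) / 2.0
    jj, ii = np.meshgrid(np.arange(factor), np.arange(factor))
    dist2_all = (ii - centre) ** 2 + (jj - centre) ** 2
    for I in range(out.shape[0]):
        for J in range(out.shape[1]):
            blk_dem = dem[I*factor:(I+1)*factor, J*factor:(J+1)*factor]
            blk_ord = horton[I*factor:(I+1)*factor, J*factor:(J+1)*factor]
            # fine cells on the highest-order path passing through this block
            mask = blk_ord == blk_ord.max()
            dist2 = np.where(mask, dist2_all, np.inf)
            i, j = np.unravel_index(np.argmin(dist2), dist2.shape)
            out[I, J] = blk_dem[i, j]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dem = rng.random((100, 100)) * 50.0 + 500.0     # synthetic elevations (m)
    horton = rng.integers(1, 5, size=(100, 100))     # synthetic Horton orders
    print(hd_coarsen(dem, horton, factor=10).shape)  # (10, 10)
```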

  5. Sequential models for coarsening and missingness

    NARCIS (Netherlands)

    Gill, R.D.; Robins, J.M.

    1997-01-01

    In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random (CAR) mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we ...

  6. Observation of changing crystal orientations during grain coarsening

    International Nuclear Information System (INIS)

    Sharma, Hemant; Huizenga, Richard M.; Bytchkov, Aleksei; Sietsma, Jilt; Offerman, S. Erik

    2012-01-01

    Understanding the underlying mechanisms of grain coarsening is important in controlling the properties of metals, which strongly depend on the microstructure that forms during the production process or during use at high temperature. Grain coarsening of austenite at 1273 K in a binary Fe–2 wt.% Mn alloy was studied using synchrotron radiation. Evolution of the volume, average crystallographic orientation and mosaicity of more than 2000 individual austenite grains was tracked during annealing. It was found that an approximately linear relationship exists between grain size and mosaicity, which means that orientation gradients are present in the grains. The orientation gradients remain constant during coarsening and consequently the character of grain boundaries changes during coarsening, affecting the coarsening rate. Furthermore, changes in the average orientation of grains during coarsening were observed. The changes could be understood by taking the observed orientation gradients and anisotropic movement of grain boundaries into account. Five basic modes of grain coarsening were deduced from the measurements, which include: anisotropic (I) and isotropic (II) growth (or shrinkage); movement of grain boundaries resulting in no change in volume but a change in shape (III); movement of grain boundaries resulting in no change in volume and mosaicity, but a change in crystallographic orientation (IV); no movement of grain boundaries (V).

  7. Coarsening of AA6013-T6 Precipitates During Sheet Warm Forming Applications

    Science.gov (United States)

    Di Ciano, M.; DiCecco, S.; Esmaeili, S.; Wells, M. A.; Worswick, M. J.

    2018-03-01

    The use of warm forming for AA6xxx-T6 sheet is of interest to improve its formability; however, the effect warm forming may have on the coarsening of precipitates and the mechanical strength of these sheets has not been well studied. In this research, the coarsening behavior of AA6013-T6 precipitates was explored in the temperature range of 200-300 °C and for times of 30 s up to 50 h. Additionally, the effect of warm deformation on coarsening behavior was explored using: (1) simulated warm forming tests in a Gleeble thermo-mechanical simulator and (2) bi-axial warm deformation tests. Using a strong obstacle model to describe the yield strength (YS) evolution of the AA6013-T6 material, and a Lifshitz, Slyozov, and Wagner (LSW) particle coarsening law to describe the change in precipitate size with time, the coarsening kinetics were modeled for this alloy. The coarsening kinetics in the range of 220-300 °C followed a trend similar to that previously found for AA6111 in the 180-220 °C range. There was strong evidence that coarsening kinetics were not altered by warm deformation above 220 °C. For warm forming between 200 and 220 °C, the YS of the AA6013-T6 material increased slightly, which could be attributed to strain hardening during warm deformation. Finally, a non-isothermal coarsening model was used to assess the potential reduction in the YS of AA6013-T6 for practical processing conditions related to auto-body manufacturing. The model calculations showed that 90% of the original AA6013-T6 YS could be maintained, for warm forming temperatures up to 280 °C, if the heating schedule used to get the part to the warm forming temperature was limited to 1 min.
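    A minimal arithmetic sketch of the two ingredients named above: an LSW cube law for the mean precipitate radius with an Arrhenius rate constant, and a strong-obstacle strength term scaling as 1/r at fixed volume fraction. All numerical constants are placeholders, not the fitted AA6013-T6 parameters.

```python
import numpy as np

# Placeholder parameters -- NOT the fitted AA6013-T6 values from the paper.
R_GAS   = 8.314      # J/(mol K)
K0      = 1.0e10     # nm^3/s, pre-exponential of the coarsening rate constant
Q_COARS = 120e3      # J/mol, apparent activation energy for coarsening
R0      = 5.0        # nm, initial mean precipitate radius in the T6 condition
SIGMA0  = 50.0       # MPa, precipitate-independent strength contribution
C_PPT   = 1500.0     # MPa*nm, strong-obstacle coefficient (sigma_ppt = C/r)

def lsw_radius(t, T):
    """LSW coarsening: r^3 = r0^3 + k(T) * t, with Arrhenius k(T)."""
    k = K0 * np.exp(-Q_COARS / (R_GAS * T))
    return (R0**3 + k * t) ** (1.0 / 3.0)

def yield_strength(t, T):
    """Strong-obstacle picture: precipitate strengthening scales as 1/r once
    particles are large enough to be bypassed rather than sheared."""
    return SIGMA0 + C_PPT / lsw_radius(t, T)

if __name__ == "__main__":
    for T_C in (200, 250, 300):
        T = T_C + 273.15
        retained = yield_strength(60.0, T) / yield_strength(0.0, T)
        print(f"{T_C} C: YS retained after a 60 s exposure = {100*retained:.1f} %")
```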

  8. Structure and grain coarsening during the processing of engineering ceramics

    International Nuclear Information System (INIS)

    Shaw, N.J.

    1987-11-01

    Studies have been made of three ceramic systems (Al2O3, Y2O3/MgO, and SiC/C/B), both to explore a surface area/density diagram approach to examining the coarsening processes during sintering and to explore an alternative coarsening parameter, i.e., the grain boundary surface area (raising it at a given value of the density) and not the pore surface area; therefore, pinning of the grain boundaries by solid-solution drag is the only function evidenced by these results. The importance of such pinning even at densities as low as 75% of theoretical is linked to the existence of microstructural inhomogeneities. The early stages of sintering of Y2O3 powder have been examined using two techniques, BET surface area analysis and transmission electron microscopy. Each has given some insight into the process occurring and, used together, have provided some indication of the effect of MgO on coarsening during sintering. Attempts to further elucidate effects of MgO on the coarsening behavior of Y2O3 by the surface area/density diagram approach were unsuccessful due to masking effects of contaminating reactions during sintering and/or thermal etching. The behavior of the undoped SiC which only coarsens can be clearly distinguished by the surface area/density diagram from that of SiC/C/B which also concurrently densifies. Little additional information was obtainable by this method due to unfavorable sample etching characteristics. The advantages, disadvantages, and difficulties of application of these techniques to the study of coarsening during sintering are discussed.

  9. Phase field modeling of dendritic coarsening during isothermal solidification

    Directory of Open Access Journals (Sweden)

    Zhang Yutuo

    2011-08-01

    Full Text Available Dendritic coarsening in Al-2mol%Si alloy during isothermal solidification at 880 K was investigated by phase field modeling. Three coarsening mechanisms operate in the alloy: (a) melting of small dendrite arms; (b) coalescence of dendrites near the tips, leading to the entrapment of liquid droplets; (c) smoothing of dendrites. Dendrite melting is found to be dominant in the stage of dendritic growth, whereas coalescence of dendrites and smoothing of dendrites are dominant during isothermal holding. The simulated results provide a better understanding of dendrite coarsening during isothermal solidification.

  10. Foam flow in a model porous medium: I. The effect of foam coarsening.

    Science.gov (United States)

    Jones, S A; Getrouw, N; Vincent-Bonnieu, S

    2018-05-09

    Foam structure evolves with time due to gas diffusion between bubbles (coarsening). In a bulk foam, coarsening behaviour is well defined, but there is less understanding of coarsening in confined geometries such as porous media. Previous predictions suggest that coarsening will cause foam lamellae to move to low energy configurations in the pore throats, resulting in greater capillary resistance when restarting flow. Foam coarsening experiments were conducted in both a model-porous-media micromodel and in a sandstone core. In both cases, foam was generated by coinjecting surfactant solution and nitrogen. Once steady state flow had been achieved, the injection was stopped and the system sealed off. In the micromodel, the foam coarsening was recorded using time-lapse photography. In the core flood, the additional driving pressure required to reinitiate flow after coarsening was measured. In the micromodel the bubbles coarsened rapidly to the pore size. At the completion of coarsening the lamellae were located in minimum energy configurations in the pore throats. The wall effect meant that the coarsening did not conform to the unconstricted growth laws. The coreflood tests also showed coarsening to be a rapid process. The additional driving pressure to restart flow reached a maximum after just 2 minutes.

  11. Three is much more than two in coarsening dynamics of cyclic competitions

    Science.gov (United States)

    Mitarai, Namiko; Gunnarson, Ivar; Pedersen, Buster Niels; Rosiek, Christian Anker; Sneppen, Kim

    2016-04-01

    The classical game of rock-paper-scissors has inspired experiments and spatial model systems that address the robustness of biological diversity. In particular, the game nicely illustrates that cyclic interactions allow multiple strategies to coexist for long-time intervals. When formulated in terms of a one-dimensional cellular automaton, the spatial distribution of strategies exhibits coarsening with algebraically growing domain size over time, while the two-dimensional version allows domains to break and thereby opens the possibility for long-time coexistence. We consider a quasi-one-dimensional implementation of the cyclic competition, and study the long-term dynamics as a function of rare invasions between parallel linear ecosystems. We find that increasing the complexity from two to three parallel subsystems allows a transition from complete coarsening to an active steady state where the domain size stays finite. We further find that this transition happens irrespective of whether the update is done in parallel for all sites simultaneously or done randomly in sequential order. In both cases, the active state is characterized by localized bursts of dislocations, followed by longer periods of coarsening. In the case of the parallel dynamics, we find that there is another phase transition between the active steady state and the coarsening state within the three-line system when the invasion rate between the subsystems is varied. We identify the critical parameter for this transition and show that the density of active boundaries has critical exponents that are consistent with the directed percolation universality class. On the other hand, numerical simulations with the random sequential dynamics suggest that the system may exhibit an active steady state as long as the invasion rate is finite.
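    A minimal sketch of a strictly one-dimensional three-species cyclic automaton with random sequential updates, measuring the mean domain size as it coarsens; the update rule and parameters are illustrative and do not reproduce the quasi-one-dimensional multi-line model studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sweep(lattice):
    """One random-sequential sweep of cyclic invasion on a periodic chain:
    species s converts a randomly chosen neighbour of its prey species (s+1)%3."""
    n = lattice.size
    for _ in range(n):
        i = rng.integers(n)
        j = (i + rng.choice((-1, 1))) % n
        if (lattice[i] + 1) % 3 == lattice[j]:
            lattice[j] = lattice[i]

def mean_domain_size(lattice):
    """Average length of runs of identical species on the periodic chain."""
    boundaries = np.count_nonzero(lattice != np.roll(lattice, 1))
    return lattice.size if boundaries == 0 else lattice.size / boundaries

if __name__ == "__main__":
    L = 500
    lattice = rng.integers(0, 3, size=L)
    done = 0
    for t in (10, 100, 500):
        for _ in range(t - done):
            sweep(lattice)
        done = t
        print(f"sweeps={t:4d}  mean domain size={mean_domain_size(lattice):.1f}")
```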

  12. High annealing temperature induced rapid grain coarsening for efficient perovskite solar cells.

    Science.gov (United States)

    Cao, Xiaobing; Zhi, Lili; Jia, Yi; Li, Yahui; Cui, Xian; Zhao, Ke; Ci, Lijie; Ding, Kongxian; Wei, Jinquan

    2018-08-15

    Thermal annealing plays multiple roles in fabricating high quality perovskite films. Generally, elevating the annealing temperature results in larger perovskite grains, but it might also lead to decomposition of the perovskite. Here, we study the effects of annealing temperature on the coarsening of perovskite grains in a temperature range from 100 to 250 °C, and find that the coarsening rate of the perovskite grains increases significantly with the annealing temperature. Compared with the perovskite films annealed at 100 °C, high quality perovskite films with large columnar grains are obtained by annealing perovskite precursor films at 250 °C for only 10 s. As a result, the power conversion efficiency of the best solar cell increased from 12.35% to 16.35% due to its low recombination rate and highly efficient charge transport. Copyright © 2018. Published by Elsevier Inc.

  13. Experimental investigation of particle size distribution influence on diffusion controlled coarsening

    International Nuclear Information System (INIS)

    Fang, Zhigang; Patterson, B.R.

    1993-01-01

    The influence of initial particle size distribution on coarsening during liquid phase sintering has been experimentally investigated using a W-14Ni-6Fe alloy as a model system. It was found that particles with an initially wider size distribution coarsened more rapidly than those with an initially narrow distribution. The well-known linear relationship between the cube of the average particle radius, r̄³, and time was observed for most of the coarsening process, although the early-stage coarsening rate constant changed with time, as expected with concomitant early changes in the tungsten particle size distribution. The instantaneous transient rate constant was shown to be related to the geometric standard deviation, ln σ, of the instantaneous size distributions, with higher rate constants corresponding to larger ln σ values. The form of the particle size distributions changed rapidly during early coarsening and reached a quasi-stable state, different from the theoretical asymptotic distribution, after some time. A linear relationship was found between the experimentally observed instantaneous rate constant and that computed from an earlier model incorporating the effect of particle size distribution. The above results compare favorably with those from prior theoretical modeling and computer simulation studies of the effect of particle size distribution on coarsening, based on the DeHoff communicating neighbor model.
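    A small sketch of how an instantaneous (transient) rate constant K(t) = d(r̄³)/dt can be extracted from discrete r̄(t) data by finite differences and compared with the overall least-squares slope; the data points are synthetic placeholders, not the W-14Ni-6Fe measurements.

```python
import numpy as np

# Synthetic placeholder data: times in minutes, mean particle radii in micrometres.
t     = np.array([15.0, 30.0, 60.0, 120.0, 240.0, 480.0])
r_bar = np.array([4.2, 5.1, 6.2, 7.6, 9.4, 11.7])

def instantaneous_rate_constant(t, r_bar):
    """K(t) = d(r_bar^3)/dt by central differences.  During steady-state
    LSW-type coarsening K is constant; early on it varies as the particle
    size distribution evolves."""
    return np.gradient(r_bar**3, t)

def overall_rate_constant(t, r_bar):
    """Least-squares slope of r_bar^3 vs t (the usually reported K)."""
    slope, _ = np.polyfit(t, r_bar**3, 1)
    return slope

if __name__ == "__main__":
    for ti, Ki in zip(t, instantaneous_rate_constant(t, r_bar)):
        print(f"t = {ti:6.0f} min   K(t) = {Ki:8.3f} um^3/min")
    print(f"overall K from linear fit: {overall_rate_constant(t, r_bar):.3f} um^3/min")
```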

  14. Coarsening of Ni(3)Si precipitates in binary Ni-Si alloys

    Science.gov (United States)

    Cho, Jin-Hoon

    The coarsening behavior of coherent γ′ (Ni3Si) precipitates with volume fractions, f, ranging from 0.017 to 0.32 in binary Ni-Si alloys was investigated. All of the alloys were aged at 650 °C for times as long as 2760 h, and measurements were made of the kinetics of coarsening, particle size distributions and the evolution of particle morphologies using transmission electron microscopy. The kinetics of solute depletion were investigated using measurements of the ferromagnetic Curie temperature. We successfully overcame the difficulties in obtaining uniform spatial distributions of precipitates at small f by employing an up-quenching treatment; alloys with f less than 0.1 were pre-aged at 530 °C prior to re-aging at the normal aging temperature of 650 °C. Almost identical coarsening behavior exhibited by an alloy subjected to both isothermal and up-quenching treatments confirms that the up-quenching treatments do not affect any aspect of the coarsening behavior. Consistent with previous studies, the particles are spherical in shape when small and evolve to a cuboidal shape, with flat faces parallel to {100}, as they grow. This shape transition was characterized quantitatively by analyzing the intensity distributions of Fast Fourier Transform spectra generated from the digitized images of TEM micrographs. The precipitates display no tendency towards becoming plate-shaped and they resist coalescence even at the largest sizes, which approach 400 nm in diameter at 2760 h of aging for the higher volume fraction alloys. For f < 0.1, the kinetics of coarsening and solute depletion as well as the standard deviation of the particle size distributions decrease as f increases. This anomalous behavior has been documented previously by other investigators, but is contrary to the predictions of theories that incorporate the volume fraction effect in coarsening kinetics. We find no convincing evidence to suggest that f influences any aspect of the coarsening behavior at ...

  15. Rapid Solidification of Sn-Cu-Al Alloys for High-Reliability, Lead-Free Solder: Part II. Intermetallic Coarsening Behavior of Rapidly Solidified Solders After Multiple Reflows

    Science.gov (United States)

    Reeve, Kathlene N.; Choquette, Stephanie M.; Anderson, Iver E.; Handwerker, Carol A.

    2016-12-01

    Controlling the size, dispersion, and stability of intermetallic compounds in lead-free solder alloys is vital to creating reliable solder joints regardless of how many times the solder joints are melted and resolidified (reflowed) during circuit board assembly. In this article, the coarsening behavior of CuxAly and Cu6Sn5 in two Sn-Cu-Al alloys, a Sn-2.59Cu-0.43Al at. pct alloy produced via drip atomization and a Sn-5.39Cu-1.69Al at. pct alloy produced via melt spinning at a 5-m/s wheel speed, was characterized after multiple (1-5) reflow cycles via differential scanning calorimetry between the temperatures of 293 K and 523 K (20 °C and 250 °C). Little-to-no coarsening of the CuxAly particles was observed for either composition; however, clustering of CuxAly particles was observed. For Cu6Sn5 particle growth, a bimodal size distribution was observed for the drip atomized alloy, with large, faceted growth of Cu6Sn5 observed, while in the melt spun alloy, Cu6Sn5 particles displayed no significant increase in the average particle size, with irregularly shaped, nonfaceted Cu6Sn5 particles observed after reflow, which is consistent with shapes observed in the as-solidified alloys. The link between original alloy composition, reflow undercooling, and subsequent intermetallic coarsening behavior was discussed by using calculated solidification paths. The reflowed microstructures suggested that the heteroepitaxial relationship previously observed between the CuxAly and the Cu6Sn5 was maintained for both alloys.

  16. PETN Coarsening - Predictions from Accelerated Aging Data

    Energy Technology Data Exchange (ETDEWEB)

    Maiti, Amitesh [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gee, Richard H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-03-30

    Ensuring good ignition properties over long periods of time necessitates maintaining a good level of porosity in powders of initiator materials and preventing particle coarsening. To simulate porosity changes of such powder materials over long periods of time, a common strategy is to perform accelerated aging experiments over shorter time spans at elevated temperatures. In this paper we examine historical accelerated-aging data on powders of Pentaerythritol Tetranitrate (PETN), an important energetic material, and make predictions for long-term aging under ambient conditions. Lastly, we develop an evaporation-condensation-based model to provide some mechanistic understanding of the coarsening process.
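    A minimal sketch of the accelerated-aging extrapolation described above, assuming a single Arrhenius-activated coarsening process; the rate constants and temperatures are placeholders, not the PETN data of the report.

```python
import numpy as np

R = 8.314  # J/(mol K)

def arrhenius_fit(T, k):
    """Fit ln k = ln A - Ea/(R T); return (A, Ea)."""
    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    return np.exp(intercept), -slope * R

def time_to_equivalent_change(A, Ea, T_ref, t_ref, T_target):
    """Time at T_target producing the same extent of coarsening as t_ref at
    T_ref, assuming rate ~ A*exp(-Ea/RT) and a single common mechanism."""
    k_ref = A * np.exp(-Ea / (R * T_ref))
    k_tgt = A * np.exp(-Ea / (R * T_target))
    return t_ref * k_ref / k_tgt

if __name__ == "__main__":
    # Placeholder accelerated-aging rate constants at elevated temperatures.
    T = np.array([333.0, 343.0, 353.0, 363.0])       # K
    k = np.array([1.1e-3, 2.9e-3, 7.2e-3, 1.7e-2])    # arbitrary units per day
    A, Ea = arrhenius_fit(T, k)
    print(f"apparent activation energy: {Ea/1000:.0f} kJ/mol")
    days = time_to_equivalent_change(A, Ea, T_ref=363.0, t_ref=30.0, T_target=298.0)
    print(f"30 days at 90 C corresponds to roughly {days:.0f} days at 25 C")
```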

  17. Finite element analysis of mechanical stability of coarsened nanoporous gold

    International Nuclear Information System (INIS)

    Cho, Hoon-Hwe; Chen-Wiegart, Yu-chen Karen; Dunand, David C.

    2016-01-01

    The mechanical stability of nanoporous gold (np-Au) at various stages of thermal coarsening is studied via finite element analysis under volumetric compression using np-Au architectures imaged via X-ray nano-tomography. As the np-Au is coarsened thermally over ligament sizes ranging from 185 to 465 nm, the pore volume fraction is determinant for the mechanical stability of the coarsened np-Au, unlike the curvature and surface orientation of the ligaments. The computed Young's modulus and yield strength of the structures are compared with the Gibson–Ashby model. The geometry of the structures determines the locations where stress concentrations occur at the onset of yielding.
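    A small sketch of the open-cell Gibson–Ashby scaling laws used as the comparison baseline in the abstract; the proportionality constants, bulk-gold properties and solid fractions are illustrative placeholders.

```python
def gibson_ashby(rel_density, E_solid, sigma_y_solid, C_E=1.0, C_y=0.3):
    """Open-cell foam scaling laws (Gibson & Ashby):
        E_foam / E_solid        ~ C_E * (rho/rho_s)^2
        sigma_y / sigma_y_solid ~ C_y * (rho/rho_s)^1.5
    rel_density = rho/rho_s = 1 - pore volume fraction."""
    E_foam = C_E * rel_density**2 * E_solid
    sigma_y_foam = C_y * rel_density**1.5 * sigma_y_solid
    return E_foam, sigma_y_foam

if __name__ == "__main__":
    # Placeholder bulk gold properties; ligament-scale values would differ.
    E_AU, SY_AU = 79e9, 200e6            # Pa (illustrative)
    for phi in (0.30, 0.40, 0.50):       # solid volume fraction of the ligaments
        E, sy = gibson_ashby(phi, E_AU, SY_AU)
        print(f"solid fraction {phi:.2f}: E ~ {E/1e9:5.1f} GPa, "
              f"yield ~ {sy/1e6:5.1f} MPa")
```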

  18. Experimental, computational and theoretical studies of δ′ phase coarsening in Al–Li alloys

    International Nuclear Information System (INIS)

    Pletcher, B.A.; Wang, K.G.; Glicksman, M.E.

    2012-01-01

    Experimental characterization of microstructure evolution in three binary Al–Li alloys provides critical tests of both diffusion screening theory and multiparticle diffusion simulations, which predict late-stage phase-coarsening kinetics. Particle size distributions, growth kinetics and maximum particle sizes obtained using quantitative, centered dark-field transmission electron microscopy are compared quantitatively with theoretical and computational predictions. We also demonstrate the dependence on δ′ precipitate volume fraction of the rate constant for coarsening and the microstructure’s maximum particle size, both of which remained undetermined for this alloy system for nearly a half century. Our experiments show quantitatively that the diffusion-screening theoretical description of phase coarsening yields reasonable kinetic predictions, and that useful simulations of microstructure evolution are obtained via multiparticle diffusion. The tested theory and simulation method will provide useful tools for future design of two-phase alloys for elevated temperature applications.

  19. Coarsening by network restructuring in model nanoporous gold

    International Nuclear Information System (INIS)

    Kolluri, Kedarnath; Demkowicz, Michael J.

    2011-01-01

    Using atomistic modeling, we show that restructuring of the network of interconnected ligaments causes coarsening in a model of nanoporous gold. The restructuring arises from the collapse of some ligaments onto neighboring ones and is enabled by localized plasticity at ligaments and nodes. This mechanism may explain the occurrence of enclosed voids and reduction in volume in nanoporous metals during their synthesis. An expression is developed for the critical ligament radius below which coarsening by network restructuring may occur spontaneously, setting a lower limit to the ligament dimensions of nanofoams.

  20. Coarsening behaviours of coherent γ' and γ precipitates in elastically constrained Ni-Al-Ti alloys

    International Nuclear Information System (INIS)

    Maebashi, T.; Doi, M.

    2004-01-01

    The coarsening behaviours of γ' and γ precipitates in elastically constrained Ni-Al-Ti alloys were investigated by means of transmission electron microscopy. When the Ni-8 at.% Al-6 at.% Ti alloy is aged at 1023 K, coherent γ' particles having the L1₂ structure appear and coarsen in the γ matrix having the disordered A1 structure. At first the mean particle size ⟨r⟩ increases in proportion to the cube root of ageing time t (⟨r⟩ ∝ t^1/3), and then the coarsening remarkably decelerates. The shape of the γ' precipitates changes from sphere to cube as the coarsening progresses. When the Ni-13 at.% Al-9 at.% Ti alloy is aged at 973 K, coherent γ particles appear and coarsen in the γ' matrix. At first the relation ⟨r⟩ ∝ t^1/3 holds, and then the coarsening accelerates, so that ⟨r⟩ increases in proportion to the square root of t (⟨r⟩ ∝ t^1/2). The shape of the γ precipitates changes to plates having {1 0 0} planes as the coarsening progresses. Such coarsening behaviours of γ' and γ precipitates are good examples of the elasticity effects in elastically constrained systems.

  1. Coarsening kinetics of γ' precipitates in the Ni-Al-Mo system

    International Nuclear Information System (INIS)

    Wang Tao; Sheng Guang; Liu Zikui; Chen Longqing

    2008-01-01

    The effect of Mo on the microstructure evolution and coarsening kinetics of γ' precipitates in the Ni-Al-Mo system is studied using phase-field simulations with inputs from thermodynamic, kinetic and lattice parameter databases. For alloys of different compositions, the precipitate morphology and the statistical information of precipitate sizes are predicted as a function of annealing time. It is observed that increasing the Mo content leads to a change of the precipitate morphology from cuboidal to spherical as well as a reduction in the coarsening rate. Comparison between simulated results and existing experimental microstructure morphologies and coarsening rates shows good agreement.

  2. Structure and grain coarsening during the processing of engineering ceramics. Ph.D. Thesis - Leeds Univ., United Kingdom

    Science.gov (United States)

    Shaw, Nancy J.

    1987-01-01

    Studies have been made of three ceramic systems (Al2O3, Y2O3/MgO, and SiC/C/B), both to explore a surface area/density diagram approach to examining the coarsening processes during sintering and to explore an alternative coarsening parameter, i.e., the grain boundary surface area (raising it at a given value of the density) and not the pore surface area; therefore, pinning of the grain boundaries by solid-solution drag is the only function evidenced by these results. The importance of such pinning even at densities as low as 75% of theoretical is linked to the existence of microstructural inhomogeneities. The early stages of sintering of Y2O3 powder have been examined using two techniques, BET surface area analysis and transmission electron microscopy. Each has given some insight into the process occurring and, used together, have provided some indication of the effect of MgO on coarsening during sintering. Attempts to further elucidate effects of MgO on the coarsening behavior of Y2O3 by the surface area/density diagram approach were unsuccessful due to masking effects of contaminating reactions during sintering and/or thermal etching. The behavior of the undoped SiC which only coarsens can be clearly distinguished by the surface area/density diagram from that of SiC/C/B which also concurrently densifies. Little additional information was obtainable by this method due to unfavorable sample etching characteristics. The advantages, disadvantages, and difficulties of application of these techniques to the study of coarsening during sintering are discussed.

  3. A novel coarsening mechanism of droplets in immiscible fluid mixtures

    Science.gov (United States)

    Shimizu, Ryotaro; Tanaka, Hajime

    2015-06-01

    In our daily lives, after shaking a salad dressing, we see the coarsening of oil droplets suspended in vinegar. Such a demixing process is observed everywhere in nature and is also of technological importance. In the case of high droplet density, domain coarsening proceeds via inter-droplet collisions and the resulting coalescence. This phenomenon has been explained primarily by the so-called Brownian-coagulation mechanism: stochastic thermal forces exerted by molecules induce random motion of individual droplets, causing accidental collisions and subsequent interface-tension-driven coalescence. Contrary to this, here we demonstrate that the droplet motion is not random, but hydrodynamically driven by the composition Marangoni force due to an interfacial tension gradient produced in each droplet as a consequence of composition correlation among droplets. This alters our physical understanding of droplet coarsening in immiscible liquid mixtures on a fundamental level.

  4. A pursuit of significance of the coarsened gastric rugae in radiologic examination

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ok Dong [Chung Ang University College of Medicine, Seoul (Korea, Republic of)

    1979-06-15

    The radiologic upper G.I. series and gastroscopic examination with gastric biopsies of 230 cases were carried out in Korea General Hospital for the purpose of pursuing the significance of coarsened gastric rugae. Out of the above series, the 26 cases showing the mere radiologic finding of coarsening of the gastric mucosal rugae were selected, excluding the cases with definite evidence of ulceration, malignancies and others. The correlation of the coarsened gastric rugae with clinical pictures, gastroscopic features and biopsy findings was investigated. The following results were obtained: 1. There were 24 cases of gastritis, 5 of stomach ulcer and 2 of stomach cancer in the 26 cases with the mere finding of mucosal coarsening. 2. There were 5 cases of stomach ulcer disease revealing no radiologic evidence, but gastroscopy found tiny ulcers in 4 cases and a large ulcer crater of 1.0 cm by 1.5 cm in diameter in the other case. 3. Two cases of stomach cancer were detected in neither the radiologic nor the gastroscopic examination; however, they were found by gastric biopsy. 4. It should be strongly emphasized that biopsy under gastroscopic control must follow whenever radiologic evidence of coarsened gastric rugae is demonstrated.

  5. A pursuit of significance of the coarsened gastric rugae in radiologic examination

    International Nuclear Information System (INIS)

    Kim, Ok Dong

    1979-01-01

    The radiologic upper G.I. series and gastroscopic examination with gastric biopsies of 230 cases were carried out in Korea General Hospital for the purpose of pursuing the significance of coarsened gastric rugae. Out of the above series, the 26 cases showing the mere radiologic finding of coarsening of the gastric mucosal rugae were selected, excluding the cases with definite evidence of ulceration, malignancies and others. The correlation of the coarsened gastric rugae with clinical pictures, gastroscopic features and biopsy findings was investigated. The following results were obtained: 1. There were 24 cases of gastritis, 5 of stomach ulcer and 2 of stomach cancer in the 26 cases with the mere finding of mucosal coarsening. 2. There were 5 cases of stomach ulcer disease revealing no radiologic evidence, but gastroscopy found tiny ulcers in 4 cases and a large ulcer crater of 1.0 cm by 1.5 cm in diameter in the other case. 3. Two cases of stomach cancer were detected in neither the radiologic nor the gastroscopic examination; however, they were found by gastric biopsy. 4. It should be strongly emphasized that biopsy under gastroscopic control must follow whenever radiologic evidence of coarsened gastric rugae is demonstrated.

  6. Grain coarsening in polymineralic contact metamorphic carbonate rocks: The role of different physical interactions during coarsening

    DEFF Research Database (Denmark)

    Brodhag, Sabine; Herwegh, Marco; Berger, Alfons

    2011-01-01

    … and microstructures with considerable second-phase volume fractions of up to 0.5. The variations might be of general validity for any polymineralic rock which undergoes grain coarsening during metamorphism. The new findings are important for a better understanding of the initiation of strain localization based on the activation of grain-size-dependent deformation mechanisms.

  7. Structural refinement and coarsening in deformed metals

    DEFF Research Database (Denmark)

    Hansen, N.; Huang, X.; Xing, Q.

    2005-01-01

    The microstructural refinement by plastic deformation is analysed in terms of key parameters, the spacing between and the misorientation angle across the boundaries subdividing the structure. Coarsening of such structures by annealing is also characterised. For both deformed and annealed structures …

  8. Slow coarsening of B2-ordered domains at low temperatures: A kinetic Monte Carlo study

    International Nuclear Information System (INIS)

    Le Floc'h, D.; Bellon, P.; Athenes, M.

    2000-01-01

    The kinetics of the ordering and coarsening of B2-ordered domains is studied using atomistic kinetic Monte Carlo simulations. Special emphasis is put on the effect of annealing temperature, alloy composition, and atom dynamics on the coarsening behavior. When atomic diffusion proceeds by vacancy jumps to nearest-neighbor sites, a transient slow coarsening regime is observed at temperatures below half the order-disorder transition temperature T_c. It results in apparent coarsening exponents that decrease with decreasing annealing temperature. Values as low as 0.14 are measured at 0.25 T_c. Slow transients take place in both stoichiometric and nonstoichiometric alloys. These regimes are correlated with the transient creation of excess antisites during domain disappearance. Since antiphase boundary mobility decreases with increasing antisite concentration, this transient excess results in the slow coarsening observed in simulations. (c) 2000 The American Physical Society

  9. Coarsening behavior of lath and its effect on creep rates in tempered martensitic 9Cr-W steels

    International Nuclear Information System (INIS)

    Abe, F.

    2004-01-01

    The coarsening behavior of martensite laths has been investigated by means of transmission electron microscopy for tempered martensitic 9 wt.% Cr-(0, 1, 2, 4 wt.%) W steels during creep at 823-923 K. During creep, the recovery of excess dislocations, the agglomeration of carbides and the coarsening of laths take place. The coarsening of laths, accompanied by the absorption of excess dislocations, is the major process in the creep acceleration. The coarsening rate of the laths decreases with increasing W concentration, which is correlated with the rate of Ostwald ripening of M23C6 carbides. The progressive local coalescence of two adjacent lath boundaries near the Y-junction causes the movement of the Y-junction, resulting in the coarsening of laths.

  10. Ripple coarsening on ion beam-eroded surfaces.

    Science.gov (United States)

    Teichmann, Marc; Lorbeer, Jan; Frost, Frank; Rauschenbach, Bernd

    2014-01-01

    The temporal evolution of ripple patterns on Ge, Si, Al2O3, and SiO2 produced by low-energy ion beam erosion with Xe+ ions is studied. The experiments focus on the ripple dynamics in a fluence range from 1.1 × 10^17 cm^-2 to 1.3 × 10^19 cm^-2 at ion incidence angles of 65° and 75° and ion energies of 600 and 1200 eV. At low fluences a short-wavelength ripple structure emerges on the surface that is superimposed and later on dominated by long-wavelength structures for increasing fluences. The coarsening of short-wavelength ripples depends on the material system and angle of incidence. These observations are associated with the influence of reflected primary ions and gradient-dependent sputtering. The investigations reveal that coarsening of the pattern is a universal behavior for all investigated materials, just at the earliest accessible stage of surface evolution.

  11. Structural coarsening during annealing of an aluminum plate heavily deformed using ECAE

    DEFF Research Database (Denmark)

    Mishin, Oleg V.; Zhang, Yubin; Godfrey, A.

    2015-01-01

    The microstructure and softening behaviour have been investigated in an aluminum plate heavily deformed by equal channel angular extrusion and subsequently annealed at 170 °C. It is found that at this temperature the microstructure evolves by coarsening with no apparent signs of recrystallization even after 2 h of annealing. Both coarsening and softening are rapid within the first 10 minutes of annealing, followed by a slower evolution with increasing annealing duration. Evidence of triple junction (TJ) motion during coarsening is obtained by inspecting the microstructure in one region using the electron backscatter diffraction technique both before and after annealing for 10 minutes. The fraction of fast-migrating TJs is found to strongly depend on the type of boundaries composing a junction. The greatest fraction of fast-migrating TJs is in the group where all boundaries forming a junction …

  12. Coarsening dynamics in a vibrofluidized compartmentalized granular gas

    NARCIS (Netherlands)

    van der Meer, Roger M.; van der Weele, J.P.; Lohse, Detlef

    2004-01-01

    Coarsening is studied in a vertically driven, initially uniformly distributed granular gas within a container divided into many connected compartments. The clustering is experimentally observed to occur in a two-stage process: first, the particles cluster in a few of the compartments. Subsequently,

  13. Coarsening behaviour and interfacial structure of γ′ precipitates in Co-Al-W based superalloys

    International Nuclear Information System (INIS)

    Vorontsov, V.A.; Barnard, J.S.; Rahman, K.M.; Yan, H.-Y.; Midgley, P.A.; Dye, D.

    2016-01-01

    This work discusses the effects of alloying on the coarsening behaviour of the L1₂-ordered γ′ phase and the structure of the γ/γ′ interfaces in three Co-Al-W base superalloys aged at ∼90 °C below the respective solvus temperatures: Co-7Al-7W, Co-10Al-5W-2Ta and Co-7Al-7W-20Ni (at.%). The coarsening kinetics are adequately characterised by the classical Lifshitz-Slyozov-Wagner model for Ostwald ripening. Co-7Al-7W exhibited much slower coarsening than its quaternary derivatives. Alloying can be exploited to modify the coarsening kinetics either by increasing the solvus temperature by adding tantalum, or by adding nickel to shift the rate-controlling mechanism towards dependence on the diffusion of aluminium rather than tungsten. Lattice-resolution STEM imaging was used to measure the widths of the order-disorder (structural) and Z-contrast (compositional) gradients across the γ/γ′ interfaces. Similarly to nickel base superalloys, the compositional gradient was found to be wider than the structural one. Co-7Al-7W-20Ni had much wider interface gradients than Co-7Al-7W and Co-10Al-5W-2Ta, which suggests that its γ′ phase stoichiometry is less constrained. A possible correlation between temperature- and misfit-normalised ⟨r⟩ vs. t^1/3 coarsening rate coefficients and the structural gradient width has also been identified, whereby alloys with wider interfaces exhibit faster coarsening rates.

  14. Crack Front Segmentation and Facet Coarsening in Mixed-Mode Fracture

    Science.gov (United States)

    Chen, Chih-Hung; Cambonie, Tristan; Lazarus, Veronique; Nicoli, Matteo; Pons, Antonio J.; Karma, Alain

    2015-12-01

    A planar crack generically segments into an array of "daughter cracks" shaped as tilted facets when loaded with both a tensile stress normal to the crack plane (mode I) and a shear stress parallel to the crack front (mode III). We investigate facet propagation and coarsening using in situ microscopy observations of fracture surfaces at different stages of quasistatic mixed-mode crack propagation and phase-field simulations. The results demonstrate that the bifurcation from propagating a planar to segmented crack front is strongly subcritical, reconciling previous theoretical predictions of linear stability analysis with experimental observations. They further show that facet coarsening is a self-similar process driven by a spatial period-doubling instability of facet arrays.

  15. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-01-01

    Background Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Methods Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. Results A ‘Rich Picture’ was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. Conclusions When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration.
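    A minimal sketch of a CART analysis of the kind described, a shallow classification tree whose printed splits define patient risk groups, built with scikit-learn on synthetic data; the features, outcome and thresholds are illustrative, not the national audit variables.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Synthetic stand-in for an audit dataset: two illustrative risk factors.
n = 2000
weight_kg  = rng.normal(3.3, 0.6, n).clip(1.5, 5.0)   # weight at surgery
complex_dx = rng.integers(0, 2, n)                     # complex diagnosis flag
risk = 0.05 + 0.10 * complex_dx + 0.12 * (weight_kg < 2.5)
adverse = rng.random(n) < risk                         # adverse outcome

X = np.column_stack([weight_kg, complex_dx])
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=100)
tree.fit(X, adverse)

# The printed tree defines the patient risk groups used to target interventions.
print(export_text(tree, feature_names=["weight_kg", "complex_dx"]))
```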

  16. Coarsening of Faraday Heaps: Experiment, Simulation, and Theory

    NARCIS (Netherlands)

    Gerner, van H.J.; Robledo, Caballero G.A.; Meer, van der D.; Weele, van der J.P.; Hoef, van der M.A.

    2009-01-01

    When a layer of granular material is vertically shaken, the surface spontaneously breaks up in a landscape of small Faraday heaps that merge into larger ones on an ever increasing time scale. This coarsening process is studied in a linear setup, for which the average life span of the transient state

  17. Coupling between drainage and coarsening in wet foam

    Indian Academy of Sciences (India)

    Abstract. Drainage and coarsening are two coupled phenomena during the evolution of wet foam. We show the variation in the growth rate of bubble size, along the height in a column of Gillette shaving foam, by microscope imaging. Simultaneously, the drainage of liquid at the same heights has been investigated by ...

  18. A kinetic Monte Carlo study of coarsening resistance of novel core/shell precipitates

    International Nuclear Information System (INIS)

    Zhang, Xuan; Gao, Wenpei; Bellon, Pascal; Averback, Robert S.; Zuo, Jian-Min

    2014-01-01

    A novel approach towards the design of coarsening-resistant nanoprecipitates in structural alloys was investigated by kinetic Monte Carlo (KMC) simulation. The approach is motivated by recent experimental results in Cu–Nb–W alloys showing that room temperature ion irradiation resulted in W nanoprecipitation, leading to exceptional stability of W-rich-core/Nb-rich-shell nanoprecipitates formed following thermal annealing (Zhang et al., 2013 [11]). Here, image simulations of atomically resolved scanning transmission electron microscopy are performed to establish that these W nanoprecipitates are highly ramified. Thermal precipitate coarsening in an A–B–C ternary alloy similar to Cu–Nb–W is then studied by KMC simulations, where the highly immiscible and refractory C solute atoms are initially distributed into fractal nanoprecipitates, or cores, which become coated by a shell of B atoms during elevated temperature annealing. Compared with nanoprecipitates generated by compact C cores, the ramified nanoprecipitates result in exceptionally high trapping efficiency of B solute atoms during thermal coarsening, and the efficiency increases with the cluster size. The KMC results are analyzed and rationalized by noting that, owing to the Gibbs–Thomson effect, when the curvatures of the shell of the precipitates are zero or negative, the microstructure is coarsening-resistant. Such morphology can be realized by facets, or by dynamic balance within positive, negative and zero curvatures

  19. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings.

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-08-01

    Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. A 'Rich Picture' was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration. Published by the BMJ Publishing Group

  20. Strain-induced γ′-coarsening during aging of Ni-based superalloys under uniaxial load. Modeling and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Mushongera, Leslie T.

    2016-07-28

    called for a regularization that enforces local equilibrium at the corners, and the method of Eggleston et al. [Physica D 150, 91 (2001)], generalized to arbitrary crystal symmetries and rotations of the crystalline axes, was adapted for that context. Mechanical effects accounting for the contributions from the misfit, anisotropic and inhomogeneous elasticity and creep loading were integrated in a physically consistent manner. The mechanical effects are incorporated into the phase field model via the Allen-Cahn equation following Steinbach [Physica D, 217, 153 (2006)] and Fleck et al. [Philos. Mag., 90, 265 (2010)]. The relaxed displacement fields required to calculate the elastic driving force were obtained by solving the mechanical equilibrium with an iterative Jacobi relaxation scheme on a staggered grid based on the finite difference method. Morphological evolution and kinetics in single-crystal Ni-base superalloys were studied. To gain insight into optimized alloying, a systematic computational measure to assess and track the evolution of anisotropic microstructures was integrated into the model. Previously, focusing on the solidification behavior, Heckl et al. [Metal. and Mater. Trans. A, 41, 202 (2010)] discussed Ruthenium (Ru) as a possible Rhenium (Re) replacement candidate for next-generation Ni-based superalloys. Employing phase field simulation studies, we performed virtual experiments of the coarsening behavior in Re- and Ru-containing alloys. The simulations revealed that the degradation of the γ-γ′ microstructure via coarsening is considerably slower in Re-containing superalloys. We observed that an increase in the Re content strongly reduces the γ′-coarsening kinetics, and the simulations explicitly resolved the time dependence of that slowdown beyond experiment. Likewise, it was found that Ru variations have no significant effect on the coarsening kinetics. The simulations revealed the mechanism by which Re reduces coarsening kinetics. The …

  1. Coarsening of Ni–Ge solid-solution precipitates in “inverse” Ni3Ge alloys

    International Nuclear Information System (INIS)

    Ardell, Alan J.; Ma Yong

    2012-01-01

    Highlights: ► We report microstructural evolution of disordered Ni–Ge precipitates in Ni3Ge alloys. ► Coarsening kinetics and particle size distributions are presented. ► Data are analyzed quantitatively using the MSLW theory, but agreement is only fair. ► The shapes of large precipitates are unusual, with discus or boomerang cross-sections. ► Results are compared with morphology, kinetics of Ni–Al in inverse Ni3Al alloys. - Abstract: The morphological evolution and coarsening kinetics of Ni–Ge solid solution precipitates from supersaturated solutions of hypostoichiometric Ni3Ge were investigated in alloys containing from 22.48 to 23.50 at.% Ge at 600, 650 and 700 °C. The particles evolve from spheres to cuboids, though the flat portions of the interfaces are small. At larger sizes the precipitates coalesce into discus shapes, and are sometimes boomerang-shaped in cross section after intersection. The rate constant for coarsening increases strongly with equilibrium volume fraction, much more so than predicted by current theories; this is very different from the coarsening behavior of Ni3Ge precipitates in normal Ni–Ge alloys and of Ni–Al precipitates in inverse Ni3Al alloys. The activation energy for coarsening, 275.86 ± 24.17 kJ/mol, is somewhat larger than the result from conventional diffusion experiments, though within the limits of experimental error. Quantitative agreement between theory and experiment, estimated using available data on tracer diffusion coefficients in Ni3Ge, is fair, the calculated rate constants exceeding measured ones by a factor of about 15. The particle size distributions are not in very good agreement with the predictions of any theory. These results are discussed in the context of recent theories and observations.

  2. Heat exchanges in coarsening systems

    Energy Technology Data Exchange (ETDEWEB)

    Corberi, Federico [Dipartimento di Fisica 'E. R. Caianiello', Università di Salerno, via Ponte don Melillo, 84084 Fisciano (Italy); Gonnella, Giuseppe; Piscitelli, Antonio [Dipartimento di Fisica, Università di Bari and Istituto Nazionale di Fisica Nucleare, Sezione di Bari, via Amendola 173, 70126 Bari (Italy)

    2011-10-15

    This paper is a contribution to the understanding of the thermal properties of ageing systems where statistically independent degrees of freedom with greatly separated time scales are expected to coexist. Focusing on the prototypical case of quenched ferromagnets, where fast and slow modes can be respectively associated with fluctuations in the bulk of the coarsening domains and in their interfaces, we perform a set of numerical experiments specifically designed to compute the heat exchanges between different degrees of freedom. Our studies promote a scenario with fast modes acting as an equilibrium reservoir to which interfaces may release heat through a mechanism that allows fast and slow degrees to maintain their statistical properties independently.

  3. An Improved Clutter Suppression Method for Weather Radars Using Multiple Pulse Repetition Time Technique

    Directory of Open Access Journals (Sweden)

    Yingjie Yu

    2017-01-01

    Full Text Available This paper describes the implementation of an improved clutter suppression method for the multiple pulse repetition time (PRT) technique based on simulated radar data. The suppression method is constructed using maximum likelihood methodology in the time domain and is called the parametric time domain method (PTDM). The procedure relies on the assumption that precipitation and clutter signal spectra follow a Gaussian functional form. The multiple interleaved pulse repetition frequencies (PRFs) used in this work are set to four PRFs (952, 833, 667, and 513 Hz). Based on radar simulation, it is shown that the new method can provide accurate retrieval of Doppler velocity even in the case of strong clutter contamination. The obtained velocity is nearly unbiased over the entire Nyquist velocity interval. The performance of the method is also illustrated on simulated radar data for a plan position indicator (PPI) scan. Compared with staggered 2-PRT transmission schemes with PTDM, the proposed method presents better estimation accuracy under certain clutter situations.
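
    For reference, the Gaussian functional form assumed for the signal spectra is the standard weather-radar spectral model (a textbook assumption, not a result of this paper): the Doppler power spectrum is written as the sum of a precipitation peak, a clutter peak centred at zero velocity, and white noise,

        S(v) = \frac{P_p}{\sqrt{2\pi}\,\sigma_p}\exp\!\left[-\frac{(v-\bar{v}_p)^2}{2\sigma_p^2}\right]
             + \frac{P_c}{\sqrt{2\pi}\,\sigma_c}\exp\!\left[-\frac{v^2}{2\sigma_c^2}\right]
             + \frac{N}{2 v_a},

    where P_p, \bar{v}_p and \sigma_p are the precipitation power, mean velocity and spectrum width, P_c and \sigma_c the clutter power and width, N the noise power, and v_a the Nyquist velocity; the maximum likelihood estimator then fits these parameters to the non-uniformly sampled multiple-PRT time series.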

  4. An improved early detection method of type-2 diabetes mellitus using multiple classifier system

    KAUST Repository

    Zhu, Jia

    2015-01-01

    The specific causes of complex diseases such as Type-2 Diabetes Mellitus (T2DM) have not yet been identified. Nevertheless, many medical science researchers believe that complex diseases are caused by a combination of genetic, environmental, and lifestyle factors. Detection of such diseases becomes an issue because it is not free from false presumptions and is accompanied by unpredictable effects. Given the greatly increased amount of data gathered in medical databases, data mining has been used widely in recent years to detect and improve the diagnosis of complex diseases. However, past research showed that no single classifier can be considered optimal for all problems. Therefore, in this paper, we focus on employing multiple classifier systems to improve the accuracy of detection for complex diseases, such as T2DM. We proposed a dynamic weighted voting scheme called multiple factors weighted combination for classifiers' decision combination. This method considers not only the local and global accuracy but also the diversity among classifiers and localized generalization error of each classifier. We evaluated our method on two real T2DM data sets and other medical data sets. The favorable results indicated that our proposed method significantly outperforms individual classifiers and other fusion methods.
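
    The weighting scheme in the abstract combines local and global accuracy, classifier diversity and localized generalization error; as a much simpler stand-in (hypothetical names, scikit-learn and a stand-in public dataset assumed, not the study's T2DM data), the sketch below weights each base classifier by its cross-validated accuracy and combines their soft votes:

        import numpy as np
        from sklearn.datasets import load_breast_cancer  # stand-in medical dataset
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score, train_test_split

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        base = [LogisticRegression(max_iter=5000),
                KNeighborsClassifier(),
                DecisionTreeClassifier(random_state=0)]

        # weight each classifier by its cross-validated (global) accuracy
        weights = np.array([cross_val_score(clf, X_tr, y_tr, cv=5).mean() for clf in base])
        weights /= weights.sum()

        for clf in base:
            clf.fit(X_tr, y_tr)

        # weighted soft vote over predicted class probabilities
        proba = sum(w * clf.predict_proba(X_te) for w, clf in zip(weights, base))
        y_hat = proba.argmax(axis=1)
        print("weighted-vote accuracy:", (y_hat == y_te).mean())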

  5. Chemical-Reaction-Controlled Phase Separated Drops: Formation, Size Selection, and Coarsening

    Science.gov (United States)

    Wurtz, Jean David; Lee, Chiu Fan

    2018-02-01

    Phase separation under nonequilibrium conditions is exploited by biological cells to organize their cytoplasm but remains poorly understood as a physical phenomenon. Here, we study a ternary fluid model in which phase-separating molecules can be converted into soluble molecules, and vice versa, via chemical reactions. We elucidate using analytical and simulation methods how drop size, formation, and coarsening can be controlled by the chemical reaction rates, and categorize the qualitative behavior of the system into distinct regimes. Ostwald ripening arrest occurs above critical reaction rates, demonstrating that this transition belongs entirely to the nonequilibrium regime. Our model is a minimal representation of the cell cytoplasm.

  6. Computational thermodynamic investigations of growth and coarsening of Laves phase precipitates in 12%Cr creep resistant steels

    Energy Technology Data Exchange (ETDEWEB)

    Prat, O.; Rojas, D. [Max-Planck-Institut fuer Eisenforschung GmbH, Duesseldorf (Germany); Garcia, J.; Kaysser-Pyzalla, A.R. [Helmholtz-Zentrum Berlin fuer Materialien und Energie GmbH, Berlin (Germany); Bochum Univ. (Germany)

    2010-07-01

    Precipitation phenomena in 12%Cr high alloyed steels have been investigated under creep conditions of 650 °C and 150 MPa for up to 6500 hours in two different alloys. Growth and coarsening of the Laves phase were determined experimentally by measuring the size of Laves phase particles in crept samples using scanning transmission electron microscopy (STEM). The simulations were performed using the software DICTRA based on the assumption of local equilibrium at the moving phase interface. For equilibrium calculations, the Thermo-Calc software was used. The experimental results were compared with DICTRA simulations, showing good agreement. Both the quantitative metallographic measurements and the simulations indicate very slow coarsening of the Laves phase. The influence of different elements such as Co, Si and Cu on the coarsening of the Laves phase was simulated. (orig.)

  7. Three dimensional characterization of nickel coarsening in solid oxide cells via ex-situ ptychographic nano-tomography

    DEFF Research Database (Denmark)

    De Angelis, Salvatore; Jørgensen, Peter Stanley; Tsai, Esther Hsiao Rho

    2018-01-01

    Nickel coarsening is considered a significant cause of solid oxide cell (SOC) performance degradation. Therefore, understanding the morphological changes in the nickel-yttria stabilized zirconia (Ni-YSZ) fuel electrode is crucial for the widespread usage of SOC technology. This paper reports...... a study of the initial 3D microstructure evolution of a SOC analyzed in the pristine state and after 3 and 8 h of annealing at 850 °C, in dry hydrogen. The analysis of the evolution of the same location of the electrode shows a substantial change of the nickel and pore network during the first 3 h...... of treatment, while only negligible changes are observed after 8 h. The nickel coarsening results in loss of connectivity in the nickel network, reduced nickel specific surface area and decreased total triple phase boundary density. For the condition of this experiment, nickel coarsening is shown

  8. Coarsening of Ni-Ge solid-solution precipitates in 'inverse' Ni{sub 3}Ge alloys

    Energy Technology Data Exchange (ETDEWEB)

    Ardell, Alan J., E-mail: alan.ardell@gmail.com [National Science Foundation, 4201 Wilson Boulevard, Arlington, VA 22230 (United States); Ma Yong [Aquatic Sensor Network Technology LLC, Storrs, CT 06268 (United States)

    2012-07-30

    Highlights: ► We report microstructural evolution of disordered Ni-Ge precipitates in Ni{sub 3}Ge alloys. ► Coarsening kinetics and particle size distributions are presented. ► Data are analyzed quantitatively using the MSLW theory, but agreement is only fair. ► The shapes of large precipitates are unusual, with discus or boomerang cross-sections. ► Results are compared with morphology, kinetics of Ni-Al in inverse Ni{sub 3}Al alloys. - Abstract: The morphological evolution and coarsening kinetics of Ni-Ge solid solution precipitates from supersaturated solutions of hypostoichiometric Ni{sub 3}Ge were investigated in alloys containing from 22.48 to 23.50 at.% Ge at 600, 650 and 700 °C. The particles evolve from spheres to cuboids, though the flat portions of the interfaces are small. At larger sizes the precipitates coalesce into discus shapes, and are sometimes boomerang-shaped in cross section after intersection. The rate constant for coarsening increases strongly with equilibrium volume fraction, much more so than predicted by current theories; this is very different from the coarsening behavior of Ni{sub 3}Ge precipitates in normal Ni-Ge alloys and of Ni-Al precipitates in inverse Ni{sub 3}Al alloys. The activation energy for coarsening, 275.86 ± 24.17 kJ/mol, is somewhat larger than the result from conventional diffusion experiments, though within the limits of experimental error. Quantitative agreement between theory and experiment, estimated using available data on tracer diffusion coefficients in Ni{sub 3}Ge, is fair, the calculated rate constants exceeding measured ones by a factor of about 15. The particle size distributions are not in very good agreement with the predictions of any theory. These results are discussed in the context of recent theories and observations.

  9. Use of Multiple Imputation Method to Improve Estimation of Missing Baseline Serum Creatinine in Acute Kidney Injury Research

    Science.gov (United States)

    Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.

    2013-01-01

    Summary Background and objectives Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m2 (eGFR 75). Design, setting, participants, & measurements From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict likelihood of missing BCr. Propensity scoring identified 6502 patients with highest likelihood of missing BCr among 13,003 patients with known BCr to simulate a “missing” data scenario while preserving actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI was compared with that of eGFR 75. Results All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; P<0.001). Misclassification of AKI staging was also lower with multiple imputation (full multiple imputation + serum creatinine) (15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreasing sensitivity relative to eGFR 75. Conclusions Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980
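
    As a generic illustration of the multiple-imputation idea (not the study's actual model, which used clinical covariates and chained equations), scikit-learn's IterativeImputer can generate several stochastic imputations of a partially missing column and pool them; the variable names and the synthetic data below are assumptions.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(0)
        # hypothetical columns: age, serum sodium, baseline creatinine (partly missing)
        n = 500
        age = rng.normal(60, 15, n)
        sodium = rng.normal(140, 4, n)
        creatinine = 0.8 + 0.004 * age + 0.01 * (sodium - 140) + rng.normal(0, 0.1, n)
        X = np.column_stack([age, sodium, creatinine])
        X[rng.random(n) < 0.3, 2] = np.nan  # 30% missing baseline creatinine

        # m stochastic imputations, pooled here by simple averaging (full Rubin's
        # rules would also combine within- and between-imputation variances)
        m = 5
        imputations = []
        for seed in range(m):
            imp = IterativeImputer(sample_posterior=True, random_state=seed)
            imputations.append(imp.fit_transform(X)[:, 2])
        pooled = np.mean(imputations, axis=0)
        print("pooled imputed creatinine, first 5:", np.round(pooled[:5], 2))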

  10. Shape and coarsening dynamics of strained islands

    DEFF Research Database (Denmark)

    Schifani, Guido; Frisch, Thomas; Argentina, Mederic

    2016-01-01

    and numerically the formation of an equilibrium island using a two-dimensional continuous model. We have found that these equilibrium island-like solutions have a maximum height h_{0} and they sit on top of a flat wetting layer with a thickness h_{w}. We then consider two islands, and we report that they undergo...... and leads to the shrinkage of the smallest island. Once its height becomes smaller than a minimal equilibrium height h_{0}^{*}, its mass spreads over the entire system. Our results pave the way for a future analysis of coarsening of an assembly of islands....

  11. Coarsening in 3D nonconserved Ising model at zero temperature: Anomaly in structure and slow relaxation of order-parameter autocorrelation

    Science.gov (United States)

    Chakraborty, Saikat; Das, Subir K.

    2017-09-01

    Via Monte Carlo simulations we study pattern and aging during coarsening in a nonconserved nearest-neighbor Ising model, following quenches from infinite to zero temperature, in space dimension d = 3. The decay of the order-parameter autocorrelation function appears to obey a power-law behavior, as a function of the ratio between the observation and waiting times, in the large ratio limit. However, the exponent of the power law, estimated accurately via a state-of-the-art method, violates a well-known lower bound. This surprising fact has been discussed in connection with a quantitative picture of the structural anomaly that the 3D Ising model exhibits during coarsening at zero temperature. These results are compared with those for quenches to a temperature above that of the roughening transition.
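
    For context, the aging scaling referred to above is the standard form for nonconserved coarsening (textbook phenomenology, not a result of this paper): with waiting time t_w and domain growth law L(t) ~ t^{1/z},

        C(t, t_w) = \langle \psi(\mathbf{r}, t)\,\psi(\mathbf{r}, t_w) \rangle
                  \sim \left( \frac{t}{t_w} \right)^{-\lambda/z},
        \qquad \lambda \ge \frac{d}{2},

    where the inequality is the Fisher-Huse-type lower bound on the autocorrelation exponent; for d = 3 it gives λ ≥ 3/2, which is presumably the well-known bound whose violation is reported above.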

  12. Three dimensional characterization of nickel coarsening in solid oxide cells via ex-situ ptychographic nano-tomography

    Science.gov (United States)

    De Angelis, Salvatore; Jørgensen, Peter Stanley; Tsai, Esther Hsiao Rho; Holler, Mirko; Kreka, Kosova; Bowen, Jacob R.

    2018-04-01

    Nickel coarsening is considered a significant cause of solid oxide cell (SOC) performance degradation. Therefore, understanding the morphological changes in the nickel-yttria stabilized zirconia (Ni-YSZ) fuel electrode is crucial for the widespread usage of SOC technology. This paper reports a study of the initial 3D microstructure evolution of a SOC analyzed in the pristine state and after 3 and 8 h of annealing at 850 °C, in dry hydrogen. The analysis of the evolution of the same location of the electrode shows a substantial change of the nickel and pore network during the first 3 h of treatment, while only negligible changes are observed after 8 h. The nickel coarsening results in loss of connectivity in the nickel network, reduced nickel specific surface area and decreased total triple phase boundary density. For the condition of this experiment, nickel coarsening is shown to be predominantly curvature driven, and changes in the electrode microstructure parameters are discussed in terms of local microstructural evolution.

  13. Study of phase decomposition and coarsening of γ′ precipitates in Ni-12 at.% Ti alloy

    International Nuclear Information System (INIS)

    Garay-Reyes, C.G.; Hernández-Santiago, F.; Cayetano-Castro, N.; López-Hirata, V.M.; García-Rocha, J.; Hernández-Rivera, J.L.; Dorantes-Rosales, H.J.; Cruz-Rivera, J.J.

    2013-01-01

    The early stages of phase decomposition, the morphological evolution of precipitates, the coarsening kinetics of γ′ precipitates and the micro-hardness in a Ni-12 at.% Ti alloy are studied by transmission electron microscopy (TEM) and Vickers hardness tests (VHN). Disk-shaped specimens are solution treated at 1473 K (1200 °C) and aged at 823, 923 and 1023 K (550, 650 and 750 °C) for several periods of time. TEM results show that a conditional spinodal of order occurs at the beginning of the phase decomposition and reveal the following decomposition sequence and morphological evolution of precipitates: α sss → γ″ irregular–cuboidal + γ s → γ′ cuboidal–parallelepiped + γ → η plates + γ. During the coarsening of γ′ precipitates, the experimental coarsening kinetics do not fit well to the LSW or TIDC (n = 2.281) theoretical models; however, the activation energies determined using the TIDC and LSW theories (262.846 and 283.6075 kJ mol −1 , respectively) are consistent with previously reported values. The highest hardness obtained at 823, 923 and 1023 K (550, 650 and 750 °C) is associated with the presence of γ′ precipitates. - Highlights: • The conditional spinodal during the early stages of phase decomposition was studied. • The decomposition sequence and morphological evolution of precipitates were obtained. • The coarsening kinetics of γ′ precipitates was experimentally evaluated. • The maximum hardness is associated with the γ′ precipitates

  14. Study of phase decomposition and coarsening of γ′ precipitates in Ni-12 at.% Ti alloy

    Energy Technology Data Exchange (ETDEWEB)

    Garay-Reyes, C.G., E-mail: garay_820123@hotmail.com [Universidad Autónoma de San Luis Potosí, Instituto de Metalurgia, Sierra leona 550, Col. Lomas 2 sección, 78210 S.L.P. (Mexico); Hernández-Santiago, F. [Instituto Politécnico Nacional, ESIME-AZC, Av. de las Granjas 682, col. Sta. Catarina, 02550 D.F. (Mexico); Cayetano-Castro, N. [Instituto Potosino de Investigación Científica y Tecnológica, División de Materiales Avanzados, camino a la Presa San José 2055, Col Lomas 4 sección, 78216 S.L.P. (Mexico); López-Hirata, V.M. [Instituto Politécnico Nacional, ESIQIE-DIM, 118-556, D.F. (Mexico); García-Rocha, J. [Universidad Autónoma de San Luis Potosí, Instituto de Metalurgia, Sierra leona 550, Col. Lomas 2 sección, 78210 S.L.P. (Mexico); Hernández-Rivera, J.L. [Centro de Investigación de Materiales Avanzados (CIMAV), Laboratorio Nacional de Nanotecnología, Miguel de Cervantes 120, 31109 Chihuahua (Mexico); Dorantes-Rosales, H.J. [Instituto Politécnico Nacional, ESIQIE-DIM, 118-556, D.F. (Mexico); Cruz-Rivera, J.J. [Universidad Autónoma de San Luis Potosí, Instituto de Metalurgia, Sierra leona 550, Col. Lomas 2 sección, 78210 S.L.P. (Mexico)

    2013-09-15

    The early stages of phase decomposition, the morphological evolution of precipitates, the coarsening kinetics of γ′ precipitates and the micro-hardness in a Ni-12 at.% Ti alloy are studied by transmission electron microscopy (TEM) and Vickers hardness tests (VHN). Disk-shaped specimens are solution treated at 1473 K (1200 °C) and aged at 823, 923 and 1023 K (550, 650 and 750 °C) for several periods of time. TEM results show that a conditional spinodal of order occurs at the beginning of the phase decomposition and reveal the following decomposition sequence and morphological evolution of precipitates: α{sub sss} → γ″ irregular–cuboidal + γ{sub s} → γ′ cuboidal–parallelepiped + γ → η plates + γ. During the coarsening of γ′ precipitates, the experimental coarsening kinetics do not fit well to the LSW or TIDC (n = 2.281) theoretical models; however, the activation energies determined using the TIDC and LSW theories (262.846 and 283.6075 kJ mol{sup −1}, respectively) are consistent with previously reported values. The highest hardness obtained at 823, 923 and 1023 K (550, 650 and 750 °C) is associated with the presence of γ′ precipitates. - Highlights: • The conditional spinodal during the early stages of phase decomposition was studied. • The decomposition sequence and morphological evolution of precipitates were obtained. • The coarsening kinetics of γ′ precipitates was experimentally evaluated. • The maximum hardness is associated with the γ′ precipitates.

  15. A General Method for QTL Mapping in Multiple Related Populations Derived from Multiple Parents

    Directory of Open Access Journals (Sweden)

    Yan AO

    2009-03-01

    Full Text Available It is well known that incorporating existing populations derived from multiple parents may improve QTL mapping and QTL-based breeding programs. However, no general maximum likelihood method has been available for this strategy. Building on QTL mapping in multiple related populations derived from two parents, a maximum likelihood estimation method was proposed that can incorporate several populations derived from three or more parents and can also handle different mating designs. Taking a circle design as an example, we conducted simulation studies to examine the effect of QTL heritability and sample size upon the proposed method. The results showed that, under the same heritability, enhanced power of QTL detection and more precise and accurate estimation of parameters could be obtained when three F2 populations were jointly analyzed, compared with the joint analysis of any two F2 populations. Higher heritability, especially with larger sample sizes, would increase the ability of QTL detection and improve the estimation of parameters. Potential advantages of the method are as follows: firstly, existing results of QTL mapping in single populations can be compared and integrated with each other using the proposed method, and therefore the ability of QTL detection and the precision of QTL mapping can be improved. Secondly, owing to multiple alleles in multiple parents, the method can exploit gene resources more fully, which will lay an important genetic groundwork for plant improvement.

  16. Marginal regression analysis of recurrent events with coarsened censoring times.

    Science.gov (United States)

    Hu, X Joan; Rosychuk, Rhonda J

    2016-12-01

    Motivated by an ongoing pediatric mental health care (PMHC) study, this article presents weakly structured methods for analyzing doubly censored recurrent event data where only coarsened information on censoring is available. The study extracted administrative records of emergency department visits from provincial health administrative databases. The available information of each individual subject is limited to a subject-specific time window determined up to concealed data. To evaluate time-dependent effect of exposures, we adapt the local linear estimation with right censored survival times under the Cox regression model with time-varying coefficients (cf. Cai and Sun, Scandinavian Journal of Statistics 2003, 30, 93-111). We establish the pointwise consistency and asymptotic normality of the regression parameter estimator, and examine its performance by simulation. The PMHC study illustrates the proposed approach throughout the article. © 2016, The International Biometric Society.

  17. Coarsening-densification transition temperature in sintering of uranium dioxide

    International Nuclear Information System (INIS)

    Balakrishna, Palanki; Narasimha Murty, B.; Chakraborthy, K.P.; Jayaraj, R.N.; Ganguly, C.

    2001-01-01

    The concept of a coarsening-densification transition temperature (CDTT) has been proposed to explain experimental observations from a study of the sintering of undoped and niobia-doped uranium dioxide powder compacts in an argon atmosphere in a laboratory tubular furnace. The general method for deducing the CDTT for a given material under the prevailing sintering conditions and the likely variables that influence the CDTT are described. Though the present work is specific to uranium dioxide sintering in an argon atmosphere, the concept of the CDTT is fairly general, should be applicable to the sintering of any material, and has considerable potential to offer advantages in designing and/or optimizing the temperature profile of a sintering furnace, in diagnosing faults in the sintering process conditions, and so on. The problems of viewing the effect of heating rate only in terms of densification are brought out in the light of the undesirable phenomena of coring and bloating; their causes are identified and remedial measures suggested

  18. Investigation of Dendrite Coarsening in Complex Shaped Lamellar Graphite Iron Castings

    Directory of Open Access Journals (Sweden)

    Péter Svidró

    2017-07-01

    Full Text Available Shrinkage porosity and metal expansion penetration are two casting defects that appear frequently during the production of complex-shaped lamellar graphite iron components. These defects form during solidification, usually in the part of the casting which solidifies last. The position of the area that solidifies last depends on the thermal conditions. Test castings with thermal conditions like those existing in a complex-shaped casting were successfully applied to provoke a shrinkage porosity defect and a metal expansion penetration defect. The investigation of the primary dendrite morphology in the defect positions indicates a maximum intradendritic space where the shrinkage porosity and metal expansion penetration defects appear. Moving away from the defect formation area, the intradendritic space decreases. A comparison of the intradendritic space with the simulated local solidification times indicates a strong relationship, which can be explained by the dynamic coarsening process. More specifically, long local solidification times facilitate the formation of a locally coarsened austenite morphology. This, in turn, enables the formation of a shrinkage porosity or a metal expansion penetration.

  19. Improved H-κ Method by Harmonic Analysis on Ps and Crustal Multiples in Receiver Functions with respect to Dipping Moho and Crustal Anisotropy

    Science.gov (United States)

    Li, J.; Song, X.; Wang, P.; Zhu, L.

    2017-12-01

    The H-κ method (Zhu and Kanamori, 2000) has been widely used to estimate crustal thickness and Vp/Vs ratio from receiver functions. However, in regions where the crustal structure is complicated, the method may produce uncertain or even unrealistic results, arising particularly from a dipping Moho and/or crustal anisotropy. Here, we propose an improved H-κ method, which corrects for these effects before stacking. The effect of a dipping Moho and crustal anisotropy on the Ps receiver function has been well studied, but not as much on the crustal multiples (PpPs and PpSs+PsPs). Synthetic tests show that the effect of crustal anisotropy on the multiples is similar to that on Ps, while the effect of a dipping Moho on the multiples is 5 times that on Ps (same cosine trend but 5 times the time shift). A Harmonic Analysis (HA) method for dipping/anisotropy was developed by Wang et al. (2017) for crustal Ps receiver functions to extract parameters of dipping Moho and crustal azimuthal anisotropy. In real data, the crustal multiples are much more complicated than the Ps. Therefore, we use the HA method (Wang et al., 2017), but apply it separately to Ps and the multiples. Although complicated, the trend of the multiples can still be reasonably well represented by the HA. We then perform separate azimuthal corrections for Ps and the multiples and stack to obtain a combined receiver function. Lastly, the traditional H-κ procedure is applied to the stacked receiver function. We apply the improved H-κ method to 40 CNDSN (Chinese National Digital Seismic Network) stations distributed across a variety of geological settings in the Chinese continent. The results show apparent improvement compared to the traditional H-κ method, with clearer traces of the multiples and stronger stacking energy in the grid search, as well as more reliable H-κ values.
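
    For reference, the underlying H-κ stacking of Zhu and Kanamori (2000), which the azimuthally corrected receiver functions are finally fed into, amounts to a grid search over crustal thickness H and Vp/Vs ratio κ using the predicted Ps, PpPs and PpSs+PsPs delay times. A schematic Python sketch follows; the callable interface r(t) returning receiver-function amplitude at delay t, the weights and the crustal Vp are illustrative assumptions.

        import numpy as np

        def hk_stack(receiver_funcs, ray_params, H_grid, k_grid,
                     vp=6.3, weights=(0.6, 0.3, 0.1)):
            # Zhu & Kanamori (2000)-style H-kappa stack.
            # receiver_funcs: callables r(t) giving RF amplitude at delay t (s)
            # ray_params:     ray parameters p (s/km), one per receiver function
            w1, w2, w3 = weights
            s = np.zeros((len(H_grid), len(k_grid)))
            for r, p in zip(receiver_funcs, ray_params):
                eta_p = np.sqrt(1.0 / vp**2 - p**2)
                for i, H in enumerate(H_grid):
                    for j, k in enumerate(k_grid):
                        vs = vp / k
                        eta_s = np.sqrt(1.0 / vs**2 - p**2)
                        t_ps = H * (eta_s - eta_p)      # Ps conversion
                        t_ppps = H * (eta_s + eta_p)    # PpPs multiple
                        t_ppss = 2.0 * H * eta_s        # PpSs + PsPs multiple (negative polarity)
                        s[i, j] += w1 * r(t_ps) + w2 * r(t_ppps) - w3 * r(t_ppss)
            i_best, j_best = np.unravel_index(np.argmax(s), s.shape)
            return H_grid[i_best], k_grid[j_best], s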

  20. The effect of β grain coarsening on variant selection and texture evolution in a near-β Ti alloy

    Energy Technology Data Exchange (ETDEWEB)

    Obasi, G.C; Quinta da Fonseca, J. [Manchester Materials Science Centre, The University of Manchester, Grosvenor street, Manchester M13 9PL (United Kingdom); Rugg, D. [Rolls-Royce plc, P.O. Box 31, Derby DE24 8BJ (United Kingdom); Preuss, M., E-mail: michael.preuss@manchester.ac.uk [Manchester Materials Science Centre, The University of Manchester, Grosvenor street, Manchester M13 9PL (United Kingdom)

    2013-08-01

    In the present study, the role of β grain coarsening on α variant selection has been investigated in the near-β titanium alloy Ti–21S (Ti–15Mo–3Nb–3Al–0.21Si). The material was first thermomechanically processed in a fully β stabilised condition in order to obtain a fine β grain size before undertaking controlled β grain-coarsening heat treatments. Two different cooling regimes ensured that either all β was retained at room temperature or significant α formation was achieved during cooling with predominant nucleation from β grain boundaries. Detailed electron backscatter diffraction (EBSD) characterisation was carried out on the β quenched and slowly cooled samples in order to compare the predicted α texture based on the β texture measurements assuming no variant selection with the measured α textures. A strong correlation was found between β coarsening and the level of variant selection. It was also found that the grain coarsening is driven by the predominant growth of low energy grain boundaries, which strengthen specific β texture components that are part of the 〈1 1 1〉∥ND γ fibre. Finally, it was possible to demonstrate that the strengthened β texture components promote β grain pairs with a common 〈110〉, which is known to enhance variant selection when α nucleates from β grain boundaries.

  1. Kibble-Zurek Scaling and String-Net Coarsening in Topologically Ordered Systems

    Science.gov (United States)

    Khemani, Vedika; Chandran, Anushya; Burnell, F. J.; Sondhi, S. L.

    2013-03-01

    We consider the non-equilibrium dynamics of topologically ordered systems, such as spin liquids, driven across a continuous phase transition into proximate phases with no, or reduced, topological order. This dynamics exhibits scaling in the spirit of Kibble and Zurek but now without the presence of symmetry breaking and a local order parameter. The non-equilibrium dynamics near the critical point is universal in a particular scaling limit. The late stages of the process are seen to exhibit slow, quantum coarsening dynamics for the extended string-nets characterizing the topological phase, a potentially interesting signature of topological order. Certain gapped degrees of freedom that could potentially destroy coarsening are, at worst, dangerously irrelevant in the scaling limit. We also note a time dependent amplification of the energy splitting between topologically degenerate states on closed manifolds. We illustrate these phenomena in the context of particular phase transitions out of the abelian Z2 topologically ordered phase of the toric code, and the non-abelian SU(2)k ordered phases of the relevant Levin-Wen models. This research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915 and DMR 10-06608.

  2. Coarsening dynamics of binary liquids with active rotation.

    Science.gov (United States)

    Sabrina, Syeda; Spellings, Matthew; Glotzer, Sharon C; Bishop, Kyle J M

    2015-11-21

    Active matter comprised of many self-driven units can exhibit emergent collective behaviors such as pattern formation and phase separation in both biological (e.g., mussel beds) and synthetic (e.g., colloidal swimmers) systems. While these behaviors are increasingly well understood for ensembles of linearly self-propelled "particles", less is known about the collective behaviors of active rotating particles where energy input at the particle level gives rise to rotational particle motion. A recent simulation study revealed that active rotation can induce phase separation in mixtures of counter-rotating particles in 2D. In contrast to that of linearly self-propelled particles, the phase separation of counter-rotating fluids is accompanied by steady convective flows that originate at the fluid-fluid interface. Here, we investigate the influence of these flows on the coarsening dynamics of actively rotating binary liquids using a phenomenological, hydrodynamic model that combines a Cahn-Hilliard equation for the fluid composition with a Navier-Stokes equation for the fluid velocity. The effect of active rotation is introduced through an additional force within the Navier-Stokes equations that arises due to gradients in the concentrations of clockwise and counter-clockwise rotating particles. Depending on the strength of active rotation and that of frictional interactions with the stationary surroundings, we observe and explain new dynamical behaviors such as "active coarsening" via self-generated flows as well as the emergence of self-propelled "vortex doublets". We confirm that many of the qualitative behaviors identified by the continuum model can also be found in discrete, particle-based simulations of actively rotating liquids. Our results highlight further opportunities for achieving complex dissipative structures in active materials subject to distributed actuation.

  3. Coarsening dynamics in the Vicsek model

    Science.gov (United States)

    Dey, Supravat; Katyal, Nisha; Das, Dibyendu; Puri, Sanjay

    We numerically study the flocking model introduced by Vicsek et al. (1995) in the coarsening regime. At standard self-propulsion speeds, we find two distinct growth laws for the coupled density and velocity fields. The characteristic length scale of the density domains grows as Lρ(t) ~ t^{1/4}, while the velocity length scale grows much faster, viz., Lv(t) ~ t^{5/6}. The spatial fluctuations in the density and velocity ordering are studied by calculating the two-point correlation function and the structure factor, which show deviations from the well-known Porod's law. This is a natural consequence of scattering from irregular morphologies that dynamically arise in the system. In contrast, at lower self-propulsion speeds, the morphology is distinct, and as a result a new set of scaling exponents emerges. Most strikingly, the velocity order follows the density order, with Lρ(t) ~ Lv(t) ~ t^{1/4}.

  4. Microstructure taxonomy based on spatial correlations: Application to microstructure coarsening

    International Nuclear Information System (INIS)

    Fast, Tony; Wodo, Olga; Ganapathysubramanian, Baskar; Kalidindi, Surya R.

    2016-01-01

    To build materials knowledge, a rigorous description of the material structure and associated tools to explore and exploit the information encoded in the structure are needed. These enable recognition, categorization and identification of different classes of microstructure and ultimately make it possible to link structure with properties of materials. Particular interest lies in protocols capable of mining the essential information in large microstructure datasets and building robust knowledge systems that can be easily accessed, searched, and shared by the broader materials community. In this paper, we develop a protocol based on automated tools to classify microstructure taxonomies in the context of coarsening behavior, which is important for the long term stability of materials. Our new concepts for enhanced description of the local microstructure state provide flexibility of description. A mathematical description of microstructure that captures the crucial attributes of the material, although central to building materials knowledge, is still elusive. The new description captures important higher order spatial information but, at the same time, allows down-sampling if less information is needed. We showcase the classification protocol by studying coarsening of binary polymer blends and classifying steady state structures. We study several microstructure descriptions by changing the microstructure local state order and discretization and critically evaluate their efficacy. Our analysis revealed that the superior microstructure representation is based on the first-order gradient of the atomic fraction.

  5. Hybrid multiple criteria decision-making methods

    DEFF Research Database (Denmark)

    Zavadskas, Edmundas Kazimieras; Govindan, K.; Antucheviciene, Jurgita

    2016-01-01

    Formal decision-making methods can be used to help improve the overall sustainability of industries and organisations. Recently, there has been a great proliferation of works aggregating sustainability criteria by using diverse multiple criteria decision-making (MCDM) techniques. A number of revi...

  6. The investigation of abnormal particle-coarsening phenomena in friction stir repair weld of 2219-T6 aluminum alloy

    International Nuclear Information System (INIS)

    Li, Bo; Shen, Yifu

    2011-01-01

    Highlights: → Defective friction stir welds were repaired by an overlapping FSW technique. → Abnormal Al 2 Cu-coarsening phenomena were found in 2219-T6 friction stir repair welds. → Three formation mechanisms were proposed as reasonable explanations. -- Abstract: A single-pass friction stir weld of aluminum 2219-T6 containing weld defects was repaired by an overlapping friction stir welding technique. However, without any post-weld heat treatment, abnormal particle coarsening of Al 2 Cu was found to have occurred in the overlapping friction stir repair welds. Non-destructive X-ray inspection showed that more than one set of repair FSW process parameters could lead to the occurrence of the abnormal phenomena. The abnormally coarsened particles always appeared on the advancing side of the repair welds rather than on the retreating side, where fracture occurred after mechanical tensile testing. The size of the biggest particle lying in the dark bands of the 'onion rings' was more than 150 μm. Following investigation by scanning electron microscopy and X-ray energy spectrometry, three formation mechanisms were proposed to reasonably explain the abnormal phenomenon: an Aggregation Mechanism and Diffusion Mechanisms I and II. The Aggregation Mechanism follows from the motion of the stir-pin. The Diffusion Mechanisms are based on the classical theories of precipitate growth in metallic systems. The combined action of the three mechanisms contributed to the abnormal coarsening behavior of Al 2 Cu particles in the friction stir repair weld.

  7. Quality improvement through multiple response optimization

    International Nuclear Information System (INIS)

    Noorossana, R.; Alemzad, H.

    2003-01-01

    The performance of a product is often evaluated by several quality characteristics. Optimizing the manufacturing process with respect to only one quality characteristic will not always lead to the optimum values for the other characteristics. Hence, it is desirable to improve the overall quality of a product by improving the quality characteristics that are considered important. The problem consists of optimizing several responses using a multiple-objective decision-making approach and design of experiments. A case study is discussed to show the application of the proposed method.

  8. Dendritic coarsening of γ' phase in a directionally solidified superalloy during 24,000 h of exposure at 1173 K

    International Nuclear Information System (INIS)

    Li, H.; Wang, L.; Lou, L.H.

    2010-01-01

    Dendritic coarsening of γ' was investigated in a directionally solidified Ni-base superalloy during exposure at 1173 K for 24,000 h. Chemical homogeneity along different directions and the residual internal strain in the experimental superalloy were measured by electron probe microanalysis (EPMA) and the electron back-scattered diffraction (EBSD) technique. The results indicated that the gradient of the element distribution was anisotropic and that the internal strain differed between dendrite core and interdendritic regions even after 24,000 h of exposure at 1173 K, which influenced the kinetics of the dendritic coarsening of the γ' phase.

  9. In situ TEM study of the coarsening of carbon black supported Pt nanoparticles in hydrogen

    DEFF Research Database (Denmark)

    Simonsen, Søren Bredmose; Wang, Yan; Jensen, Jens Oluf

    2017-01-01

    The control of sizes and shapes of nanostructures is of tremendous importance for the catalytic activity in electrochemistry and in catalysis more generally. However, due to relatively large surface free energies, nanostructures often sinter to form coarser and more stable structures that may...... not have the intended physicochemical properties. Pt is known to be a very active catalyst in several chemical reactions and for example as carbon supported nanoparticles in fuel cells. The presentation focusses on coarsening mechanisms of Pt nanoparticles supported on carbon black during exposure...... to hydrogen. By means of in situ transmission electron microscopy (TEM), Pt nanoparticle coarsening was monitored in 6 mbar 20 % H2/Ar while ramping up the temperature to ca. 900 °C. Time-resolved TEM images directly reveal that separated ca. 3 nm sized Pt nanoparticles in the pure hydrogen environment...

  10. Composition pathway in Fe–Cu–Ni alloy during coarsening

    International Nuclear Information System (INIS)

    Mukherjee, Rajdip; Nestler, Britta; Choudhury, Abhik

    2013-01-01

    In this work the microstructure evolution for a two phase Fe–Cu–Ni ternary alloy is studied in order to understand the kinetic composition paths during coarsening of precipitates. We have employed a quantitative phase-field model utilizing the CALPHAD database to simulate the temporal evolution of a multi-particle system in a two-dimensional domain. The paths for the far-field matrix and for precipitate average compositions obtained from simulation are found to be rectilinear. The trends are compared with the corresponding sharp interface theory, in the context of an additional degree of freedom for determining the interface compositions due to the Gibbs–Thomson effect in a ternary alloy. (paper)

  11. Composition pathway in Fe-Cu-Ni alloy during coarsening

    Science.gov (United States)

    Mukherjee, Rajdip; Choudhury, Abhik; Nestler, Britta

    2013-10-01

    In this work the microstructure evolution for a two phase Fe-Cu-Ni ternary alloy is studied in order to understand the kinetic composition paths during coarsening of precipitates. We have employed a quantitative phase-field model utilizing the CALPHAD database to simulate the temporal evolution of a multi-particle system in a two-dimensional domain. The paths for the far-field matrix and for precipitate average compositions obtained from simulation are found to be rectilinear. The trends are compared with the corresponding sharp interface theory, in the context of an additional degree of freedom for determining the interface compositions due to the Gibbs-Thomson effect in a ternary alloy.

  12. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  13. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    Directory of Open Access Journals (Sweden)

    Jure Tuta

    2018-03-01

    Full Text Available This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  14. Improved nonparametric inference for multiple correlated periodic sequences

    KAUST Repository

    Sun, Ying

    2013-08-26

    This paper proposes a cross-validation method for estimating the period as well as the values of multiple correlated periodic sequences when data are observed at evenly spaced time points. The period of interest is estimated conditional on the other correlated sequences. An alternative method for period estimation based on Akaike's information criterion is also discussed. The improvement of the period estimation performance is investigated both theoretically and by simulation. We apply the multivariate cross-validation method to the temperature data obtained from multiple ice cores, investigating the periodicity of the El Niño effect. Our methodology is also illustrated by estimating patients' cardiac cycle from different physiological signals, including arterial blood pressure, electrocardiography, and fingertip plethysmograph.

  15. Beyond the Young-Laplace model for cluster growth during dewetting of thin films: effective coarsening exponents and the role of long range dewetting interactions.

    Science.gov (United States)

    Constantinescu, Adi; Golubović, Leonardo; Levandovsky, Artem

    2013-09-01

    Long range dewetting forces acting across thin films, such as the fundamental van der Waals interactions, may drive the formation of large clusters (tall multilayer islands) and pits, observed in thin films of diverse materials such as polymers, liquid crystals, and metals. In this study we further develop the methodology of the nonequilibrium statistical mechanics of thin-film coarsening within a continuum interface dynamics model incorporating long range dewetting interactions. The theoretical test bench model considered here is a generalization of the classical Mullins model for the dynamics of solid film surfaces. By analytic arguments and simulations of the model, we study the coarsening growth laws of clusters formed in thin films due to the dewetting interactions. The ultimate cluster growth scaling laws at long times are strongly universal: short and long range dewetting interactions yield the same coarsening exponents. However, long range dewetting interactions, such as the van der Waals forces, introduce a distinct long lasting early time scaling behavior characterized by a slow growth of the cluster height/lateral size aspect ratio (i.e., a time-dependent Young angle) and by effective coarsening exponents that depend on cluster size. In this study, we develop a theory capable of analytically calculating these effective size-dependent coarsening exponents characterizing the cluster growth in the early time regime. Such a pronounced early time scaling behavior has indeed been seen in experiments; however, its physical origin has remained elusive to this date. Our theory attributes these observed phenomena to ubiquitous long range dewetting interactions acting across thin solid and liquid films. Our results are also applicable to cluster growth in initially very thin fluid films, formed by depositing a few monolayers or by a submonolayer deposition. Under this condition, the dominant coarsening mechanism is diffusive intercluster mass transport while the
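
    The classical Mullins model mentioned above describes surface evolution by capillarity-driven surface diffusion; in the small-slope limit (the standard textbook form, not the paper's generalized equation) the film height h(x, t) obeys

        \frac{\partial h}{\partial t} = -B \, \nabla^4 h, \qquad
        B = \frac{D_s \gamma \Omega^2 \nu}{k_B T},

    where D_s is the surface diffusivity, \gamma the surface free energy, \Omega the atomic volume and \nu the areal density of mobile surface atoms. In generalizations of the kind studied here, the long range dewetting forces typically enter through an additional, film-thickness-dependent (disjoining-pressure-like) term in the surface chemical potential.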

  16. Generalization of the Lifshitz-Slyozov-Wagner coarsening theory to non-dilute multi-component systems

    Czech Academy of Sciences Publication Activity Database

    Svoboda, Jiří; Fischer, F. D.

    2014-01-01

    Roč. 79, OCT (2014), s. 304-314 ISSN 1359-6454 R&D Projects: GA ČR(CZ) GA14-24252S Institutional support: RVO:68081723 Keywords : Coarsening * Ostwald ripening * Multicomponent * Theory and modelling * Non-zero volume fraction of precipitates Subject RIV: BJ - Thermodynamics Impact factor: 4.465, year: 2014

  17. Adaptive Finite Volume Method for the Shallow Water Equations on Triangular Grids

    Directory of Open Access Journals (Sweden)

    Sudi Mungkasi

    2016-01-01

    Full Text Available This paper presents a numerical entropy production (NEP) scheme for the two-dimensional shallow water equations on unstructured triangular grids. We implement NEP as the error indicator for adaptive mesh refinement or coarsening in solving the shallow water equations using a finite volume method. Numerical simulations show that NEP serves successfully as a refinement/coarsening indicator in the adaptive mesh finite volume method, as the method refines the mesh or grids around nonsmooth regions and coarsens them around smooth regions.
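
    A generic sketch of how such an indicator drives adaptivity is given below (illustrative thresholds and names, not the authors' implementation; the entropy-production values are assumed to be supplied per cell by the flow solver):

        def flag_cells(nep, refine_tol=1e-3, coarsen_tol=1e-5):
            # Mark each cell for refinement, coarsening, or no change.
            # nep: iterable of numerical entropy production magnitudes, one per cell.
            # Cells with large |NEP| sit near nonsmooth features (e.g. shocks) and are
            # refined; cells with very small |NEP| are smooth and may be coarsened.
            flags = []
            for value in nep:
                if abs(value) > refine_tol:
                    flags.append("refine")
                elif abs(value) < coarsen_tol:
                    flags.append("coarsen")
                else:
                    flags.append("keep")
            return flags

        # usage with made-up per-cell NEP values
        print(flag_cells([2e-3, 4e-6, 5e-4]))  # ['refine', 'coarsen', 'keep']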

  18. The coarsening process of Ge precipitates in an Al-4 wt.% Ge alloy

    Energy Technology Data Exchange (ETDEWEB)

    Deaf, G.H

    2004-05-01

    In this paper the results of a quantitative transmission electron microscopy (TEM) investigation of the precipitation process of Ge in an Al-4 wt.% Ge alloy are described. Two crystallographic orientation relationships between the irregular germanium precipitates and the aluminum matrix were found: [1 0 0]{sub Ge} || [1 1 0]{sub Al} and [1 1 4]{sub Ge} || [1 0 0]{sub Al}. The irregular germanium precipitates formed on [0 0 1]{sub Al} habit planes. The origin of the irregular shape lies in a highly anisotropic interfacial energy as well as an anisotropic growth rate along <1 1 0>{sub Al} directions. Particle sizes were determined for a variety of isothermal ageing times at 348, 423 and 523 K. The coarsening of the different morphologies of Ge precipitates was found to obey Ostwald ripening kinetics. The TEM results showed that the coarsening of irregular particles was due to interfacial coalescence between these particles. Nine different morphologies have been distinguished, in the form of (i) irregular particles, (ii) spheres, (iii) hexagonal plates, (iv) rods, (v) triangular plates, (vi) laths, (vii) small tetrahedra, (viii) rectangular plates, and (ix) lamellae.

  19. Mechanistic Prediction of the Effect of Microstructural Coarsening on Creep Response of SnAgCu Solder Joints

    Science.gov (United States)

    Mukherjee, S.; Chauhan, P.; Osterman, M.; Dasgupta, A.; Pecht, M.

    2016-07-01

    Mechanistic microstructural models have been developed to capture the effect of isothermal aging on time dependent viscoplastic response of Sn3.0Ag0.5Cu (SAC305) solders. SnAgCu (SAC) solders undergo continuous microstructural coarsening during both storage and service because of their high homologous temperature. The microstructures of these low melting point alloys continuously evolve during service. This results in evolution of creep properties of the joint over time, thereby influencing the long term reliability of microelectronic packages. It is well documented that isothermal aging degrades the creep resistance of SAC solder. SAC305 alloy is aged for (24-1000) h at (25-100)°C (~0.6-0.8 × T melt). Cross-sectioning and image processing techniques were used to periodically quantify the effect of isothermal aging on phase coarsening and evolution. The parameters monitored during isothermal aging include size, area fraction, and inter-particle spacing of nanoscale Ag3Sn intermetallic compounds (IMCs) and the volume fraction of micronscale Cu6Sn5 IMCs, as well as the area fraction of pure tin dendrites. Effects of microstructural evolution on secondary creep constitutive response of SAC305 solder joints were then modeled using a mechanistic multiscale creep model. The mechanistic phenomena modeled include: (1) dispersion strengthening by coarsened nanoscale Ag3Sn IMCs in the eutectic phase; and (2) load sharing between pro-eutectic Sn dendrites and the surrounding coarsened eutectic Sn-Ag phase and microscale Cu6Sn5 IMCs. The coarse-grained polycrystalline Sn microstructure in SAC305 solder was not captured in the above model because isothermal aging does not cause any significant change in the initial grain size and orientation of SAC305 solder joints. The above mechanistic model can successfully capture the drop in creep resistance due to the influence of isothermal aging on SAC305 single crystals. Contribution of grain boundary sliding to the creep strain of

  20. A solvable model for coarsening soap froths and other domain boundary networks in two dimensions

    International Nuclear Information System (INIS)

    Flyvbjerg, H.; Jeppesen, C.

    1990-09-01

    The dynamical processes leading to coarsening of soap froths and other domain boundary networks in two dimensions are described statistically by a 'random neighbour model'. The model is solved using the principle of maximum entropy. The solution describes normal growth with realistic probability distribution for area and topology. (orig.)

  1. Numerical simulation of two-dimensional late-stage coarsening for nucleation and growth

    International Nuclear Information System (INIS)

    Akaiwa, N.; Meiron, D.I.

    1995-01-01

    Numerical simulations of two-dimensional late-stage coarsening for nucleation and growth or Ostwald ripening are performed at area fractions 0.05 to 0.4 using the monopole and dipole approximations of a boundary integral formulation for the steady state diffusion equation. The simulations are performed using two different initial spatial distributions. One is a random spatial distribution, and the other is a random spatial distribution with depletion zones around the particles. We characterize the spatial correlations of particles by the radial distribution function, the pair correlation functions, and the structure function. Although the initial spatial correlations are different, we find time-independent scaled correlation functions in the late stage of coarsening. An important feature of the late-stage spatial correlations is that depletion zones exist around particles. A log-log plot of the structure function shows that the slope at small wave numbers is close to 4 and is -3 at very large wave numbers for all area fractions. At large wave numbers we observe oscillations in the structure function. We also confirm the cubic growth law of the average particle radius. The rate constant of the cubic growth law and the particle size distribution functions are also determined. We find qualitatively good agreement between experiments and the present simulations. In addition, the present results agree well with simulation results using the Cahn-Hilliard equation
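
    The limiting slopes quoted above are consistent with standard results for two-dimensional coarsening of a conserved field with sharp interfaces (textbook behaviour, not specific to this simulation):

        S(k) \sim k^{4} \quad (k \to 0), \qquad
        S(k) \sim k^{-(d+1)} = k^{-3} \quad (k \gg 1/\langle R \rangle,\ d = 2),

    where the large-k decay is Porod's law for sharp interfaces and the small-k rise is the behaviour expected for a conserved order parameter.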

  2. Application of a multi-component mean field model to the coarsening behaviour of a nickel-based superalloy

    International Nuclear Information System (INIS)

    Anderson, M.J.; Rowe, A.; Wells, J.; Basoalto, H.C.

    2016-01-01

    A multi-component mean field model has been applied to predict the evolution of the γ′ particles in the nickel-based superalloy IN738LC, capturing the transition from an initial multimodal particle distribution towards a unimodal distribution. Experiments have been performed to measure the coarsening behaviour during isothermal heat treatments using quantitative analysis of micrographs. The three-dimensional size of the γ′ particles has been approximated for use in simulation. A coupled thermodynamic/mean field modelling framework is presented and applied to describe the particle size evolution. A robust numerical implementation of the model is detailed that makes use of surrogate models to capture the thermodynamics. Different descriptions of the particle growth rate of non-dilute particle systems have been explored. A numerical investigation of the influence of scatter in chemical composition upon the particle size distribution evolution has been carried out. It is shown how the tolerance in chemical composition of a given alloy can impact particle coarsening behaviour. Such predictive capability is of interest in understanding variation in component performance and the refinement of chemical composition tolerances. It has been found that the inclusion of misfit strain within the current model formulation does not have a significant effect upon the predicted long term particle coarsening behaviour. Model predictions show good agreement with experimental data. In particular, the model predicts a reduced growth rate of the mean particle size during the transition from bimodal to unimodal distributions.

  3. Improved radioanalytical methods

    International Nuclear Information System (INIS)

    Erickson, M.D.; Aldstadt, J.H.; Alvarado, J.S.; Crain, J.S.; Orlandini, K.A.; Smith, L.L.

    1995-01-01

    Methods for the chemical characterization of the environment are being developed under a multitask project for the Analytical Services Division (EM-263) within the US Department of Energy (DOE) Office of Environmental Management. This project focuses on improvement of radioanalytical methods with an emphasis on faster and cheaper routine methods. We have developed improved methods for separation of environmental levels of technetium-99, strontium-89/90, radium, and actinides from soil and water, and for separation of actinides from soil and water matrix interferences. Among the novel separation techniques being used are element- and class-specific resins and membranes. (The 3M Corporation is commercializing Empore™ membranes under a cooperative research and development agreement [CRADA] initiated under this project.) We have also developed methods for simultaneous detection of multiple isotopes using inductively coupled plasma-mass spectrometry (ICP-MS). The ICP-MS method requires less rigorous chemical separations than traditional radiochemical analyses because of its mass-selective mode of detection. Actinides and their progeny have been isolated and concentrated from a variety of natural water matrices by using automated batch separation incorporating selective resins prior to ICP-MS analyses. In addition, improvements in detection limits, sample volume, and time of analysis were obtained by using other sample introduction techniques, such as ultrasonic nebulization and electrothermal vaporization. Integration and automation of the separation methods with the ICP-MS methodology by using flow injection analysis is underway, with an objective of automating methods to achieve more reproducible results, reduce labor costs, cut analysis time, and minimize secondary waste generation through miniaturization of the process.

  4. The evolution of interfacial morphology during coarsening: A comparison between 4D experiments and phase-field simulations

    DEFF Research Database (Denmark)

    Aagesen, L.K.; Fife, J.L.; Lauridsen, Erik Mejdal

    2011-01-01

    The evolution of the solid–liquid interface in an Al–Cu dendritic microstructure is predicted using a phase-field model and compared to experimental data. The interfacial velocities are measured during isothermal coarsening using in situ X-ray tomographic microscopy. Good qualitative agreement...

  5. Measuring Laves phase particle size and thermodynamic calculating its growth and coarsening behavior in P92 steels

    DEFF Research Database (Denmark)

    Yao, Bing-Yin; Zhou, Rong-Can; Fan, Chang-Xin

    2010-01-01

    The growth of Laves phase particles in three kinds of P92 steels was investigated. Laves phase particles can be easily separated and distinguished from the matrix and other particles by atomic number contrast, using comparisons of the backscattered electron (BSE) and secondary electron (SE) images in a scanning electron microscope (SEM). A smaller Laves phase particle size results in higher creep strength and longer creep exposure time under the same conditions. DICTRA software was used to model the growth and coarsening behavior of the Laves phase in the three P92 steels. Good agreement was attained between the SEM measurements and the DICTRA modeling. Ostwald ripening should be used for the coarsening calculation of the Laves phase in P92 steels for times longer than 20000 h and 50000 h at 650°C and 600°C, respectively. © 2010 Chin. Soc. for Elec. Eng.

  6. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fraction of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fraction of neutrons that scatter multiple times. With the help of a correction to Feynman's point model theory that accounts for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  7. Coarsening of Pd nanoparticles in an oxidizing atmosphere studied by in situ TEM

    DEFF Research Database (Denmark)

    Simonsen, Søren Bredmose; Chorkendorff, Ib; Dahl, Søren

    2016-01-01

    The coarsening of supported palladium nanoparticles in an oxidizing atmosphere was studied in situ by means of transmission electron microscopy (TEM). Specifically, the Pd nanoparticles were dispersed on a planar and amorphous Al2O3 support and were observed during the exposure to 10 mbar technical...... for the Ostwald ripening process indicates that the observed change in the particle size distribution can be accounted for by wetting of the Al2O3 support by the larger Pd nanoparticles....

  8. Direct observation of grain rotations during coarsening of a semisolid Al-Cu alloy

    DEFF Research Database (Denmark)

    Dake, Jules M.; Oddershede, Jette; Sørensen, Henning O.

    2016-01-01

    Sintering is a key technology for processing ceramic and metallic powders into solid objects of complex geometry, particularly in the burgeoning field of energy storage materials. The modeling of sintering processes, however, has not kept pace with applications. Conventional models, which assume ideal arrangements of constituent powders while ignoring their underlying crystallinity, achieve at best a qualitative description of the rearrangement, densification, and coarsening of powder compacts during thermal processing. Treating a semisolid Al-Cu alloy as a model system for late-stage sintering...

  9. Effects of Rhenium Addition on the Temporal Evolution of the Nanostructure and Chemistry of a Model Ni-Cr-Al Superalloy. 2; Analysis of the Coarsening Behavior

    Science.gov (United States)

    Yoon, Kevin E.; Noebe, Ronald D.; Seidman, David N.

    2007-01-01

    The temporal evolution of the nanostructure and chemistry of a model Ni-8.5 at.% Cr-10 at.% Al alloy with the addition of 2 at.% Re was studied using transmission electron microscopy and atom-probe tomography in order to measure the number density and mean radius of the γ′ (L1₂) precipitates and the chemistry of the γ′-precipitates and the γ (fcc) matrix. In this article, the coarsening behavior of the γ′-precipitates is discussed in detail and compared with the Umantsev-Olson model for multi-component alloys. In addition, the experimental results are evaluated with PrecipiCalc™ simulations. The results show that the diffusivities of the solute elements play a major role in the coarsening behavior of the γ′-precipitates and that the addition of Re retards the coarsening kinetics and stabilizes the spheroidal morphology of the precipitates by reducing the interfacial energy.

  10. Improved exact method for the double TSP with multiple stacks

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper

    2011-01-01

    ... the first delivery, and the container cannot be repacked once packed. In this paper we improve the previously proposed exact method of Lusby et al. (Int Trans Oper Res 17 (2010), 637–652) through an additional preprocessing technique that uses the longest common subsequence between the respective pickup and delivery problems. The results suggest an impressive improvement, and we report, for the first time, optimal solutions to several unsolved instances from the literature containing 18 customers. Instances with 28 customers are also shown to be solvable within a few percent of optimality. © 2011 Wiley.

  11. A temperature dependent cyclic plasticity model for hot work tool steel including particle coarsening

    Science.gov (United States)

    Jilg, Andreas; Seifert, Thomas

    2018-05-01

    Hot work tools are subjected to complex thermal and mechanical loads during hot forming processes. Locally, the stresses can exceed the material's yield strength in highly loaded areas, for example in small radii in die cavities. To sustain the high loads, the hot forming tools are typically made of martensitic hot work steels. While temperatures for annealing of the tool steels usually lie in the range between 400 and 600 °C, the steels may experience even higher temperatures during hot forming, resulting in softening of the material due to coarsening of strengthening particles. In this paper, a temperature-dependent cyclic plasticity model for the martensitic hot work tool steel 1.2367 (X38CrMoV5-3) is presented that includes softening due to particle coarsening and that can be applied in finite-element calculations to assess the effect of softening on the thermomechanical fatigue life of hot work tools. To this end, a kinetic model for the evolution of the mean size of secondary carbides based on Ostwald ripening is coupled with a cyclic plasticity model with kinematic hardening. Mechanism-based relations are developed to describe the dependency of the mechanical properties on carbide size and temperature. The material properties of the mechanical and kinetic models are determined on the basis of tempering hardness curves as well as monotonic and cyclic tests.
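
    The coupling described above can be sketched as follows. This is a minimal illustration with invented parameter values (rate constant, activation energy, volume fraction, and strengthening constant), not the calibrated model of the paper: the mean secondary-carbide radius coarsens by an Ostwald-ripening rate law with an Arrhenius temperature dependence, and an Orowan-type strengthening contribution that scales inversely with particle size decreases as the carbides coarsen.

        import numpy as np

        R_GAS = 8.314                     # J/(mol K)

        def coarsen(r0, temperature, t, k0=1.0e-14, q=250.0e3):
            """Ostwald-ripening rate law r^3 = r0^3 + k(T)*t with Arrhenius k(T) (illustrative values)."""
            k = k0 * np.exp(-q / (R_GAS * temperature))
            return (r0**3 + k * t) ** (1.0 / 3.0)

        def particle_strengthening(r, f_v=0.03, c=1.0e-4):
            """Illustrative Orowan-type term: strength scales inversely with spacing ~ r / sqrt(f_v)."""
            return c * np.sqrt(f_v) / r   # MPa, constant c chosen only for illustration

        r0 = 50.0e-9                      # initial mean carbide radius (m)
        for hours in (0, 100, 1000, 10000):
            r = coarsen(r0, temperature=873.15, t=hours * 3600.0)
            print(f"t = {hours:6d} h : r = {r*1e9:6.1f} nm, "
                  f"particle strengthening ~ {particle_strengthening(r):5.1f} MPa")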

  12. Positioning performance improvements with European multiple-frequency satellite navigation - Galileo

    Science.gov (United States)

    Ji, Shengyue

    2008-10-01

    The rapid development of the Global Positioning System has demonstrated the advantages of satellite-based navigation systems. In the near future, there will be a number of Global Navigation Satellite Systems (GNSS) available, i.e., modernized GPS, Galileo, restored GLONASS, BeiDou and many other regional GNSS augmentation systems. Undoubtedly, the new GNSS systems will significantly improve navigation performance over current GPS, with better satellite coverage and multiple satellite signal bands. In this dissertation, the positioning performance improvement of new GNSS has been investigated based on both theoretical analysis and numerical study. First of all, the navigation performance of new GNSS systems has been analyzed, particularly for urban applications. The study has demonstrated that Receiver Autonomous Integrity Monitoring (RAIM) performance can be significantly improved with multiple satellite constellations, although the position accuracy improvement is limited. Based on a three-dimensional urban building model in Hong Kong streets, it is found that positioning availability is still very low in high-rise urban areas, even with three GNSS systems. On the other hand, the discontinuity of navigation solutions is significantly reduced with the combined constellations. Therefore, it is possible to use cheap DR systems to bridge the gaps of GNSS positioning, with high accuracy. Secondly, the ambiguity resolution performance has been investigated with Galileo multiple frequency band signals. The ambiguity resolution performance of three different algorithms is compared, including CAR, ILS and improved CAR methods (a new method proposed in this study). For short baselines, with four-frequency Galileo data, it is highly possible to achieve reliable single-epoch ambiguity resolution, when the carrier phase noise level is reasonably low (i.e., less than 6 mm). For long baselines (up to 800 km), the integer ambiguity can be determined within 1 min on average. Ambiguity ...

  13. Curvelet-domain multiple matching method combined with cubic B-spline function

    Science.gov (United States)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Since the large amount of surface-related multiples present in marine data seriously influences the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns is selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied to these basis points to reconstruct the matching array; third, a constrained solving equation is built based on the relationships between the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize the fast solution of the sparsity-constrained multiple matching problem. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between predicted multiples and original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1 norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
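
    A minimal sketch of the core idea, parameterizing a matching coefficient by a few B-spline basis points and fitting it with a quasi-Newton solver, is given below. It is an illustration only, not the authors' algorithm: the curvelet transform is omitted, the data are synthetic, and the L1 misfit is smoothed so that scipy's L-BFGS-B routine can stand in for a dedicated BFGS implementation.

        import numpy as np
        from scipy.interpolate import make_interp_spline
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n = 500
        t = np.linspace(0.0, 1.0, n)

        multiple = np.sin(40 * t) * np.exp(-2 * t)           # predicted multiple (synthetic)
        true_coeff = 0.8 + 0.3 * np.cos(3 * t)               # true slowly varying matching coefficient
        data = true_coeff * multiple + 0.01 * rng.standard_normal(n)   # "recorded" data

        n_basis = 8                                          # a small number of unknowns (basis points)
        t_basis = np.linspace(0.0, 1.0, n_basis)

        def expand(c_basis):
            """Reconstruct the full matching array from the basis points with a cubic B-spline."""
            return make_interp_spline(t_basis, c_basis, k=3)(t)

        def misfit(c_basis, eps=1.0e-6):
            """Smoothed L1 misfit between the matched multiple and the data."""
            r = data - expand(c_basis) * multiple
            return np.sum(np.sqrt(r * r + eps))

        res = minimize(misfit, x0=np.ones(n_basis), method="L-BFGS-B")
        print("recovered basis-point coefficients:", np.round(res.x, 3))

    Because only the handful of basis-point values are free unknowns, the optimization is far smaller than solving for a coefficient at every sample, which is the efficiency argument made in the abstract.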

  14. Recent coarsening of sediments on the southern Yangtze subaqueous delta front: A response to river damming

    Science.gov (United States)

    Yang, H. F.; Yang, S. L.; Meng, Y.; Xu, K. H.; Luo, X. X.; Wu, C. S.; Shi, B. W.

    2018-03-01

    After more than 50,000 dams were built in the Yangtze basin, especially the Three Gorges Dam (TGD) in 2003, the sediment discharge to the East China Sea decreased from 470 Mt/yr before dams to the current level of 140 Mt/yr. The delta sediment's response to this decline has interested many researchers. Based on a dataset of repeated samplings at 44 stations, in this study we compared the surficial sediment grain sizes in the southern Yangtze subaqueous delta front for two periods: pre-TGD (1982) and post-TGD (2012). External factors of the Yangtze River, including water discharge, sediment discharge and suspended sediment grain size, were analysed, as well as wind speed, tidal range and wave height of the coastal ocean. We found that the average median size of the sediments in the delta front coarsened from 8.0 μm in 1982 to 15.4 μm in 2012. This coarsening was accompanied by a decrease in clay content, better sorting and more positive skewness. Moreover, the delta morphology in the study area changed from an overall accretion of 1.0 cm/yr to an erosion of -0.6 cm/yr. At the same time, the riverine sediment discharge decreased by 70%, and the riverine suspended sediment grain size increased from 8.4 μm to 10.5 μm. The annual wind speed and wave height slightly increased by 2% and 3%, respectively, and the tidal range showed no clear trend. Considering the increased wind speed and wave height, there was no evidence that the capability of the China Coastal Current to transport sediment southward has declined in recent years. The sediment coarsening in the Yangtze delta front was thus mainly attributed to the delta's transition from accumulation to erosion, which was originally triggered by river damming. These findings have important implications for sediment change in many large deltaic systems due to worldwide human impacts.

  15. Multiple Improvements of Multiple Imputation Likelihood Ratio Tests

    OpenAIRE

    Chan, Kin Wai; Meng, Xiao-Li

    2017-01-01

    Multiple imputation (MI) inference handles missing data by first properly imputing the missing values $m$ times, and then combining the $m$ analysis results from applying a complete-data procedure to each of the completed datasets. However, the existing method for combining likelihood ratio tests has multiple defects: (i) the combined test statistic can be negative in practice when the reference null distribution is a standard $F$ distribution; (ii) it is not invariant to re-parametrization; ...

  16. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit-moment-equation method, the direct implicit method, orbit averaging, and subcycling.

  17. Resistance training improves muscle strength and functional capacity in multiple sclerosis

    DEFF Research Database (Denmark)

    Dalgas, U; Stenager, E; Jakobsen, J

    2009-01-01

    OBJECTIVE: To test the hypothesis that lower extremity progressive resistance training (PRT) can improve muscle strength and functional capacity in patients with multiple sclerosis (MS) and to evaluate whether the improvements are maintained after the trial. METHODS: The present study was a 2-arm ... and was afterward encouraged to continue training. After the trial, the control group completed the PRT intervention. Both groups were tested before and after 12 weeks of the trial and at 24 weeks (follow-up), where isometric muscle strength of the knee extensors (KE MVC) and functional capacity (FS; combined score ...) ... strength and functional capacity in patients with multiple sclerosis, the effects persisting after 12 weeks of self-guided physical activity. Level of evidence: The present study provides level III evidence supporting the hypothesis that lower extremity progressive resistance training can improve muscle ...

  18. Precipitation behavior and martensite lath coarsening during tempering of T/P92 ferritic heat-resistant steel

    Science.gov (United States)

    Xu, Lin-qing; Zhang, Dan-tian; Liu, Yong-chang; Ning, Bao-qun; Qiao, Zhi-xia; Yan, Ze-sheng; Li, Hui-jun

    2014-05-01

    Tempering is an important process for T/P92 ferritic heat-resistant steel from the viewpoint of microstructure control, as it facilitates the formation of the final tempered martensite under service conditions. In this study, we have gained deeper insights into the mechanisms underlying the microstructural evolution during tempering treatment, including the precipitation of carbides and the coarsening of martensite laths, as systematically analyzed by optical microscopy, transmission electron microscopy, and high-resolution transmission electron microscopy. The chemical composition of the precipitates was analyzed using energy dispersive X-ray spectroscopy. Results indicate the formation of M3C (cementite) precipitates under normalized conditions. However, they tend to dissolve within a short time of tempering, owing to their low thermal stability. This phenomenon was substantiated by X-ray diffraction analysis. In addition, we observed the precipitation of fine carbonitrides (MX) along the dislocations. The mechanism of carbon-diffusion-controlled growth of M23C6 can be described by Zener's equation. The movement of Y-junctions was determined to be the fundamental mechanism underlying the martensite lath coarsening process. Vickers hardness was measured to characterize the mechanical properties. Based on the comprehensive analysis of both the microstructural evolution and the hardness variation, the process of tempering can be separated into three steps.
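
    The abstract does not reproduce the growth law itself; for orientation, the Zener-type result for diffusion-controlled growth is commonly quoted in the parabolic form below (a textbook expression given for illustration, not necessarily the exact form used by the authors):

        r(t) \;\propto\; \sqrt{D\,t}\,, \qquad \Omega \;=\; \frac{c_{\mathrm{m}} - c_{\mathrm{i}}}{c_{\mathrm{p}} - c_{\mathrm{i}}},

    where r is the particle radius, D the diffusivity of the rate-controlling solute (here carbon), and the prefactor is set by the dimensionless supersaturation Ω built from the matrix concentration far from the particle (c_m), the concentration at the particle/matrix interface (c_i), and the concentration inside the precipitate (c_p).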

  19. Neutron source multiplication method

    International Nuclear Information System (INIS)

    Clayton, E.D.

    1985-01-01

    Extensive use has been made of neutron source multiplication in thousands of measurements of critical masses and configurations and in subcritical neutron-multiplication measurements in situ that provide data for criticality prevention and control in nuclear materials operations. There is continuing interest in developing reliable methods for monitoring the reactivity, or k_eff, of plant operations, but the required measurements are difficult to carry out and interpret on the far subcritical configurations usually encountered. The relationship between neutron multiplication and reactivity is briefly discussed and data presented to illustrate problems associated with the absolute measurement of neutron multiplication and reactivity in subcritical systems. A number of curves of inverse multiplication have been selected from a variety of experiments showing variations observed in multiplication during the course of critical and subcritical experiments where different methods of reactivity addition were used, with different neutron source and detector positions. Concern is raised regarding the meaning and interpretation of k_eff as might be measured in a far subcritical system because of the modal effects and spectrum differences that exist between the subcritical and critical systems. Because of this, the calculation of k_eff equal to unity for the critical assembly, although necessary, may not be sufficient to assure safety margins in calculations pertaining to far subcritical systems. Further study is needed on the interpretation and meaning of k_eff in the far subcritical system.
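
    For reference, the textbook point relation underlying the inverse-multiplication (1/M) measurements mentioned above is (a standard formula, not quoted from the report):

        M = \frac{1}{1 - k_{\mathrm{eff}}}\,, \qquad \frac{1}{M} = 1 - k_{\mathrm{eff}} \;\to\; 0 \quad \text{as } k_{\mathrm{eff}} \to 1,

    so the inverse multiplication plotted against fissile mass (or another reactivity parameter) extrapolates to zero at criticality. The abstract's caution is that far from critical this simple relation is complicated by modal and spectral effects and by source and detector placement.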

  20. Laboratory model study of newly deposited dredger fills using improved multiple-vacuum preloading technique

    Directory of Open Access Journals (Sweden)

    Jingjin Liu

    2017-10-01

    Problems continue to be encountered with the traditional vacuum preloading method in the field during the treatment of newly deposited dredger fills. In this paper, an improved multiple-vacuum preloading method was developed to consolidate newly deposited dredger fills that are hydraulically placed in seawater for land reclamation in the Lingang Industrial Zone of Tianjin City, China. With this multiple-vacuum preloading method, the newly deposited dredger fills could be treated effectively by adopting a novel moisture separator and a rapid improvement technique without a sand cushion. A series of model tests was conducted in the laboratory to compare the results from the multiple-vacuum preloading method and the traditional one. Ten piezometers and settlement plates were installed to measure the variations in excess pore water pressures and moisture content, and vane shear strength was measured at different positions. The testing results indicate that water discharge–time curves obtained by the traditional vacuum preloading method can be divided into three phases: rapid growth phase, slow growth phase, and steady phase. According to the process of fluid flow concentrating along tiny ripples and building larger channels inside the soil during the whole vacuum loading process, the fluctuations of pore water pressure during each loading step are divided into three phases: steady phase, rapid dissipation phase, and slow dissipation phase. An optimal loading pattern that gives the best treatment effect was proposed for calculating the water discharge and pore water pressure of the soil using the improved multiple-vacuum preloading method. For the newly deposited dredger fills at the Lingang Industrial Zone of Tianjin City, the best loading step was 20 kPa and the loading of 40–50 kPa produced the highest drainage consolidation. The measured moisture content and vane shear strength were discussed in terms of the effect of reinforcement, both of which indicate ...

  1. The coarsening effect of SA508-3 steel used as heavy forgings material

    Directory of Open Access Journals (Sweden)

    Dingqian Dong

    2015-01-01

    SA508Gr.3 steel is widely used to produce the core units of nuclear power reactors due to its outstanding resistance to neutron irradiation and good fracture toughness. The forging process plays an important role in manufacturing, refining the grain size and improving the material properties. But due to their huge size, heavy forgings cannot be cooled down quickly, and the refined grains usually have a long time to grow under high-temperature conditions. If the forging process is not adequately scheduled or implemented, very large grains up to millimetres in size may be found in this steel and cannot be eliminated in the subsequent heat treatment. To identify the conditions that may cause coarsening of the steel, hot upsetting experiments were performed in an industrial production environment under different working conditions, and the corresponding grain sizes were measured and analysed. The observations showed that grains grow abnormally if the deformation is less than a critical value. The strain energy plays a critical role in the grain evolution. If dynamic recrystallization consumes as much of the strain energy as possible, normal grains are obtained; if not, the stored strain energy promotes abnormal growth of the grains.

  2. An Intuitionistic Multiplicative ORESTE Method for Patients’ Prioritization of Hospitalization

    Directory of Open Access Journals (Sweden)

    Cheng Zhang

    2018-04-01

    The pressure on hospital beds is a common and intractable issue in public hospitals in China due to the large population. Assigning the order of hospitalization of patients is difficult because of complex patient information such as disease type, degree of urgency, and severity. It is critical to rank the patients taking full account of various factors. However, most of the evaluation criteria for hospitalization are qualitative, and classical ranking methods cannot derive the detailed relations between patients based on these criteria. Motivated by this, a comprehensive multiple criteria decision making method, the intuitionistic multiplicative ORESTE (organisation, rangement et synthèse de données relationnelles, in French), was proposed to handle the problem. The subjective and objective weights of criteria were considered in the proposed method. To do so, first, considering the vagueness of human perceptions towards the alternatives, an intuitionistic multiplicative preference relation model is applied to represent the experts' pairwise preferences over the alternatives with respect to the predetermined criteria. Then, a correlation coefficient-based weight determining method is developed to derive the objective weights of criteria. This method can overcome the biased results caused by highly correlated criteria. Afterwards, we improved the general ranking method, ORESTE, by introducing a new score function which considers both the subjective and objective weights of criteria. An intuitionistic multiplicative ORESTE method was then developed and further highlighted by a case study concerning the prioritization of patients.
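
    The record does not give the exact correlation-based weighting formula. As an illustration only, a common scheme of this kind (similar in spirit to CRITIC-style weighting, and not necessarily the authors' formula) assigns lower objective weight to a criterion that is strongly correlated with the others, since it carries less independent information:

        import numpy as np

        # Decision matrix: rows = patients (alternatives), columns = evaluation criteria.
        # The numbers are invented for illustration.
        X = np.array([
            [0.9, 0.7, 0.8],
            [0.4, 0.9, 0.6],
            [0.7, 0.5, 0.9],
            [0.2, 0.3, 0.4],
        ])

        corr = np.corrcoef(X, rowvar=False)            # criterion-by-criterion correlation matrix
        independence = np.sum(1.0 - corr, axis=1)      # total "disagreement" of each criterion
        weights = independence / independence.sum()    # normalized objective weights

        print("objective criterion weights:", np.round(weights, 3))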

  3. Intensity ratio to improve black hole assessment in multiple sclerosis.

    Science.gov (United States)

    Adusumilli, Gautam; Trinkaus, Kathryn; Sun, Peng; Lancia, Samantha; Viox, Jeffrey D; Wen, Jie; Naismith, Robert T; Cross, Anne H

    2018-01-01

    Improved imaging methods are critical to assess neurodegeneration and remyelination in multiple sclerosis. Chronic hypointensities observed on T1-weighted brain MRI, "persistent black holes," reflect severe focal tissue damage. Present measures consist of determining persistent black hole numbers and volumes, but do not quantify the severity of individual lesions. The aim was to develop a method to differentiate black and gray holes and estimate the severity of individual multiple sclerosis lesions using standard magnetic resonance imaging. Thirty-eight multiple sclerosis patients contributed images. Intensities of lesions on T1-weighted scans were assessed relative to cerebrospinal fluid intensity using commercial software. Magnetization transfer imaging, diffusion tensor imaging and clinical testing were performed to assess associations with T1w intensity-based measures. Intensity-based assessments of T1w hypointensities were reproducible and achieved > 90% concordance with expert rater determinations of "black" and "gray" holes. Intensity ratio values correlated with magnetization transfer ratios (R = 0.473) and diffusion tensor imaging metrics (R values ranging from 0.283 to -0.531) that have been associated with demyelination and axon loss. Intensity ratio values incorporated into T1w hypointensity volumes correlated with clinical measures of cognition. This method of determining the degree of hypointensity within multiple sclerosis lesions can add information to conventional imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
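
    A minimal sketch of the ratio measure described above, normalizing mean lesion intensity on a T1-weighted image by the mean cerebrospinal fluid intensity, is shown below. The lesion/CSF masks, image values, and the black-vs-gray-hole cutoff are illustrative assumptions, not values from the study.

        import numpy as np

        def intensity_ratio(t1w, lesion_mask, csf_mask):
            """Mean T1w intensity inside a lesion relative to mean CSF intensity."""
            return t1w[lesion_mask].mean() / t1w[csf_mask].mean()

        # Toy 2D "image": CSF is dark, normal tissue bright, the chronic lesion in between.
        t1w = np.full((64, 64), 300.0)
        t1w[5:15, 5:15] = 100.0          # CSF region
        t1w[30:36, 30:36] = 140.0        # chronic lesion

        lesion_mask = np.zeros_like(t1w, dtype=bool); lesion_mask[30:36, 30:36] = True
        csf_mask = np.zeros_like(t1w, dtype=bool); csf_mask[5:15, 5:15] = True

        r = intensity_ratio(t1w, lesion_mask, csf_mask)
        label = "black hole" if r < 1.5 else "gray hole"   # hypothetical cutoff for illustration
        print(f"lesion/CSF intensity ratio = {r:.2f} -> {label}")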

  4. Multiple-Features-Based Semisupervised Clustering DDoS Detection Method

    Directory of Open Access Journals (Sweden)

    Yonghao Gu

    2017-01-01

    DDoS attack streams from different agent hosts converge at the victim host and become very large, which can lead to system halt or network congestion. Therefore, it is necessary to propose an effective method to detect DDoS attack behavior in the massive data stream. To address the lack of large amounts of labeled data required by supervised learning methods, as well as the relatively low detection accuracy and convergence speed of the unsupervised k-means algorithm, this paper presents a semisupervised clustering detection method using multiple features. In this detection method, we first select three features according to the characteristics of DDoS attacks to form the detection feature vector. Then, the Multiple-Features-Based Constrained-K-Means (MF-CKM) algorithm is proposed based on semisupervised clustering. Finally, using the MIT Laboratory Scenario (DDoS) 1.0 data set, we verify that the proposed method can improve the convergence speed and accuracy of the algorithm when only a small amount of labeled data is used.
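
    The record does not spell out the MF-CKM update rules; the sketch below only illustrates the general semisupervised idea it builds on, seeding k-means centroids from a small labeled subset and keeping the labeled points fixed to their classes. Feature values and labels are invented.

        import numpy as np

        def seeded_kmeans(X, labeled_idx, labels, k, n_iter=20):
            """Semisupervised k-means: centroids seeded by labeled samples, labels kept fixed."""
            centroids = np.array([X[labeled_idx][labels == c].mean(axis=0) for c in range(k)])
            for _ in range(n_iter):
                d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
                assign = d.argmin(axis=1)
                assign[labeled_idx] = labels                 # constraint: labeled points stay put
                centroids = np.array([X[assign == c].mean(axis=0) for c in range(k)])
            return assign, centroids

        rng = np.random.default_rng(1)
        normal = rng.normal([0.2, 0.1, 0.3], 0.05, size=(200, 3))   # three-feature vectors, benign
        attack = rng.normal([0.9, 0.8, 0.7], 0.05, size=(200, 3))   # three-feature vectors, DDoS-like
        X = np.vstack([normal, attack])

        labeled_idx = np.array([0, 1, 200, 201])                    # a small amount of labeled data
        labels = np.array([0, 0, 1, 1])
        assign, _ = seeded_kmeans(X, labeled_idx, labels, k=2)
        print("flagged as attack:", int((assign == 1).sum()), "of", len(X))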

  5. Defect dynamics and coarsening dynamics in smectic-C films

    Science.gov (United States)

    Pargellis, A. N.; Finn, P.; Goodby, J. W.; Panizza, P.; Yurke, B.; Cladis, P. E.

    1992-12-01

    We study the dynamics of defects generated in free-standing films of liquid crystals following a thermal quench from the smectic-A phase to the smectic-C phase. The defects are type-1 disclinations, and the strain field between defect pairs is confined to 2π walls. We compare our observations with a phenomenological model that includes dipole coupling of the director field to an external ordering field. This model is able to account for both the observed coalescence dynamics and the observed ordering dynamics. In the absence of an ordering field, our model predicts the defect density ρ to scale with time t as ρ ln ρ ~ t^(-1). When the dipole coupling of the director field to an external ordering field is included, both the model and experiments show the defect coarsening proceeds as ρ ~ e^(-αt) with the strain field confined to 2π walls. The external ordering field most likely arises from the director's tendency to align with edge dislocations within the liquid-crystal film.

  6. Improved patient-reported health impact of multiple sclerosis

    DEFF Research Database (Denmark)

    Macdonell, Richard; Nagels, Guy; Laplaud, David-Axel

    2016-01-01

    BACKGROUND: Multiple sclerosis (MS) is a debilitating disease that negatively impacts patients' lives. OBJECTIVE: ENABLE assessed the effect of long-term prolonged-release (PR) fampridine (dalfampridine extended release in the United States) treatment on patient-perceived health impact in patients with MS with walking impairment. METHODS: ENABLE was a 48-week, open-label, Phase 4 study of PR-fampridine 10 mg twice daily. Patients who showed any improvement in Timed 25-Foot Walk walking speed at weeks 2 and 4 and any improvement in 12-item MS Walking Scale score at week 4 remained on treatment. The primary endpoint was change from baseline in 36-Item Short-Form Health Survey (SF-36) physical component summary (PCS) score. RESULTS: At week 4, 707/901 (78.5%) patients met the criteria to remain on treatment. Patients on treatment demonstrated significant and clinically meaningful improvements in SF-36 ...

  7. Border-crossing model for the diffusive coarsening of two-dimensional and quasi-two-dimensional wet foams

    Science.gov (United States)

    Schimming, C. D.; Durian, D. J.

    2017-09-01

    For dry foams, the transport of gas from small high-pressure bubbles to large low-pressure bubbles is dominated by diffusion across the thin soap films separating neighboring bubbles. For wetter foams, the film areas become smaller as the Plateau borders and vertices inflate with liquid. So-called "border-blocking" models can explain some features of wet-foam coarsening based on the presumption that the inflated borders totally block the gas flux; however, this approximation dramatically fails in the wet or unjamming limit where the bubbles become close-packed spheres and coarsening proceeds even though there are no films. Here, we account for the ever-present border-crossing flux by a new length scale defined by the average gradient of gas concentration inside the borders. We compute that it is proportional to the geometric average of film and border thicknesses, and we verify this scaling by numerical solution of the diffusion equation. We similarly consider transport across inflated vertices and surface Plateau borders in quasi-two-dimensional foams. And we show how the dA/dt = K0(n - 6) von Neumann law is modified by the appearance of terms that depend on bubble size and shape as well as the concentration gradient length scales. Finally, we use the modified von Neumann law to compute the growth rate of the average bubble area, which is not constant.
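
    For context, the dry-foam law cited above is the classical two-dimensional von Neumann result (a standard relation; the paper's modified law adds bubble-size-, shape-, and concentration-gradient-dependent terms that are not reproduced here):

        \frac{dA_n}{dt} \;=\; K_0\,(n - 6),

    where A_n is the area of an n-sided bubble and K_0 collects the film permeability and gas properties, so that bubbles with more than six sides grow while those with fewer shrink.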

  8. Hybrid models for the simulation of microstructural evolution influenced by coupled, multiple physical processes

    Energy Technology Data Exchange (ETDEWEB)

    Tikare, Veena [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hernandez-Rivera, Efrain [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Madison, Jonathan D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Holm, Elizabeth Ann [Carnegie Mellon Univ., Pittsburgh, PA (United States); Patterson, Burton R. [Univ. of Florida, Gainesville, FL (United States). Dept. of Materials Science and Engineering; Homer, Eric R. [Brigham Young Univ., Provo, UT (United States). Dept. of Mechanical Engineering

    2013-09-01

    Most microstructural evolution in materials progresses with multiple processes occurring simultaneously. In this work, we have concentrated on the processes that are active in nuclear materials, in particular, nuclear fuels. These processes are coarsening, nucleation, differential diffusion, phase transformation, radiation-induced defect formation and swelling, often with temperature gradients present. All of these couple and contribute to evolution that is unique to nuclear fuels and materials. Hybrid models that combine elements from Potts Monte Carlo, phase-field, and other models have been developed to address these multiple physical processes. These models are described and applied to several processes in this report. An important feature of the models developed is that they are coded as applications within SPPARKS, a Sandia-developed framework for mesoscale simulation of microstructural evolution processes by kinetic Monte Carlo methods. This makes these codes readily accessible and adaptable for future applications.

  9. The multiple imputation method: a case study involving secondary data analysis.

    Science.gov (United States)

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    To illustrate, with the example of a secondary data analysis study, the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The data source was the 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostic procedures and conducted regression analysis of the imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiply imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiply imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
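
    The record describes imputation by chained equations followed by per-dataset analysis and pooling. As an illustration only (the study's own software is not named in the record; scikit-learn's IterativeImputer is used here as a generic chained-equations-style tool on invented data), the workflow looks roughly like:

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates the API)
        from sklearn.impute import IterativeImputer
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 1000
        educ_intl = rng.integers(0, 2, n)                         # 1 = internationally educated
        experience = rng.normal(10, 5, n)
        log_wage = 3.0 + 0.02 * experience - 0.05 * educ_intl + rng.normal(0, 0.1, n)
        X = np.column_stack([educ_intl, experience, log_wage])
        X[rng.random(n) < 0.2, 1] = np.nan                        # 20% of experience values missing

        m = 5                                                     # number of imputed datasets
        coefs = []
        for i in range(m):
            imp = IterativeImputer(sample_posterior=True, random_state=i)  # chained-equations style
            Xi = imp.fit_transform(X)
            reg = LinearRegression().fit(Xi[:, :2], Xi[:, 2])     # analyze each completed dataset
            coefs.append(reg.coef_[0])                            # wage gap for internationally educated

        # Rubin-style pooling of the m point estimates (between/within variance omitted for brevity).
        print(f"pooled wage-gap estimate: {np.mean(coefs):.3f}")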

  10. Multiple and sequential data acquisition method: an improved method for fragmentation and detection of cross-linked peptides on a hybrid linear trap quadrupole Orbitrap Velos mass spectrometer.

    Science.gov (United States)

    Rudashevskaya, Elena L; Breitwieser, Florian P; Huber, Marie L; Colinge, Jacques; Müller, André C; Bennett, Keiryn L

    2013-02-05

    The identification and validation of cross-linked peptides by mass spectrometry remains a daunting challenge for protein-protein cross-linking approaches when investigating protein interactions. This includes the fragmentation of cross-linked peptides in the mass spectrometer per se and, following database searching, the matching of the molecular masses of the fragment ions to the correct cross-linked peptides. The hybrid linear trap quadrupole (LTQ) Orbitrap Velos combines the speed of the tandem mass spectrometry (MS/MS) duty cycle with high mass accuracy, and these features were utilized in the current study to substantially improve the confidence in the identification of cross-linked peptides. An MS/MS method termed the multiple and sequential data acquisition method (MSDAM) was developed. Preliminary optimization of the MS/MS settings was performed with a synthetic peptide (TP1) cross-linked with bis[sulfosuccinimidyl] suberate (BS(3)). On the basis of these results, MSDAM was created and assessed on the BS(3)-cross-linked bovine serum albumin (BSA) homodimer. MSDAM applies a series of multiple sequential fragmentation events with a range of different normalized collision energies (NCE) to the same precursor ion. The combination of a series of NCE enabled a considerable improvement in the quality of the fragmentation spectra for cross-linked peptides, and ultimately aided in the identification of the sequences of the cross-linked peptides. Concurrently, MSDAM provides confirmatory evidence from the formation of reporter ion fragments, which reduces the false positive rate of incorrectly assigned cross-linked peptides.

  11. A fast combination method in DSmT and its application to recommender system.

    Directory of Open Access Journals (Sweden)

    Yilin Dong

    In many applications involving epistemic uncertainties usually modeled by belief functions, it is often necessary to approximate general (non-Bayesian) basic belief assignments (BBAs) to subjective probabilities (called Bayesian BBAs). This necessity occurs if one needs to embed the fusion result in a system based on the probabilistic framework and Bayesian inference (e.g. tracking systems), or if one needs to make a decision in the decision making problems. In this paper, we present a new fast combination method, called modified rigid coarsening (MRC), to obtain the final Bayesian BBAs based on hierarchical decomposition (coarsening) of the frame of discernment. Regarding this method, focal elements with probabilities are coarsened efficiently to reduce computational complexity in the process of combination by using disagreement vector and a simple dichotomous approach. In order to prove the practicality of our approach, this new approach is applied to combine users' soft preferences in recommender systems (RSs). Additionally, in order to make a comprehensive performance comparison, the proportional conflict redistribution rule #6 (PCR6) is regarded as a baseline in a range of experiments. According to the results of experiments, MRC is more effective in accuracy of recommendations compared to the original Rigid Coarsening (RC) method and comparable in computational time.

  12. A fast combination method in DSmT and its application to recommender system.

    Science.gov (United States)

    Dong, Yilin; Li, Xinde; Liu, Yihai

    2018-01-01

    In many applications involving epistemic uncertainties usually modeled by belief functions, it is often necessary to approximate general (non-Bayesian) basic belief assignments (BBAs) to subjective probabilities (called Bayesian BBAs). This necessity occurs if one needs to embed the fusion result in a system based on the probabilistic framework and Bayesian inference (e.g. tracking systems), or if one needs to make a decision in the decision making problems. In this paper, we present a new fast combination method, called modified rigid coarsening (MRC), to obtain the final Bayesian BBAs based on hierarchical decomposition (coarsening) of the frame of discernment. Regarding this method, focal elements with probabilities are coarsened efficiently to reduce computational complexity in the process of combination by using disagreement vector and a simple dichotomous approach. In order to prove the practicality of our approach, this new approach is applied to combine users' soft preferences in recommender systems (RSs). Additionally, in order to make a comprehensive performance comparison, the proportional conflict redistribution rule #6 (PCR6) is regarded as a baseline in a range of experiments. According to the results of experiments, MRC is more effective in accuracy of recommendations compared to original Rigid Coarsening (RC) method and comparable in computational time.

  13. Heterogeneous coarsening of Pb phase and the effect of Cu addition on it in a nanophase composite of Al-10 wt%Pb alloy prepared by mechanical alloying

    International Nuclear Information System (INIS)

    Zhu, M.; Liu, X.; Wu, Z.F.; Ouyang, L.Z.; Zeng, M.Q.

    2009-01-01

    A nanophase composite of Al-10 wt%Pb alloy was prepared by mechanical alloying. The coarsening behavior of the Pb phase in the composite during the heating process was investigated by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and nanoindentation tests. The present work shows that the Pb phase grew substantially and had two different size distributions when the heating temperature was above 823 K. The different size distributions of the Pb phase were due to different grain size ranges of the Al matrix in different regions, which led to different growth rates of the Pb phase in those regions. It is proposed that the different Al grain size ranges that appeared upon heating originated from a statistical size distribution of Al grains in the as-milled powder. With the addition of a small amount of Cu, the heterogeneous growth of the Pb phase can be suppressed, and the coarsening of the Pb phase shows two distinct rates. This indicates that the coarsening is mainly governed by grain boundary diffusion and lattice diffusion of the Al matrix in the initial and later stages, respectively.

  14. A discontinuous Galerkin method with a bound preserving limiter for the advection of non-diffusive fields in solid Earth geodynamics

    Science.gov (United States)

    He, Ying; Puckett, Elbridge Gerry; Billen, Magali I.

    2017-02-01

    Mineral composition has a strong effect on the properties of rocks and is an essentially non-diffusive property in the context of large-scale mantle convection. Due to the non-diffusive nature and the origin of compositionally distinct regions in the Earth the boundaries between distinct regions can be nearly discontinuous. While there are different methods for tracking rock composition in numerical simulations of mantle convection, one must consider trade-offs between computational cost, accuracy or ease of implementation when choosing an appropriate method. Existing methods can be computationally expensive, cause over-/undershoots, smear sharp boundaries, or are not easily adapted to tracking multiple compositional fields. Here we present a Discontinuous Galerkin method with a bound preserving limiter (abbreviated as DG-BP) using a second order Runge-Kutta, strong stability-preserving time discretization method for the advection of non-diffusive fields. First, we show that the method is bound-preserving for a point-wise divergence free flow (e.g., a prescribed circular flow in a box). However, using standard adaptive mesh refinement (AMR) there is an over-shoot error (2%) because the cell average is not preserved during mesh coarsening. The effectiveness of the algorithm for convection-dominated flows is demonstrated using the falling box problem. We find that the DG-BP method maintains sharper compositional boundaries (3-5 elements) as compared to an artificial entropy-viscosity method (6-15 elements), although the over-/undershoot errors are similar. When used with AMR the DG-BP method results in fewer degrees of freedom due to smaller regions of mesh refinement in the neighborhood of the discontinuity. However, using Taylor-Hood elements and a uniform mesh there is an over-/undershoot error on the order of 0.0001%, but this error increases to 0.01-0.10% when using AMR. Therefore, for research problems in which a continuous field method is desired the DG

  15. An Improved Quantum-Behaved Particle Swarm Optimization Method for Economic Dispatch Problems with Multiple Fuel Options and Valve-Points Effects

    Directory of Open Access Journals (Sweden)

    Hong-Yun Zhang

    2012-09-01

    Quantum-behaved particle swarm optimization (QPSO) is an efficient and powerful population-based optimization technique, which is inspired by the conventional particle swarm optimization (PSO) and quantum mechanics theories. In this paper, an improved QPSO named SQPSO is proposed, which combines QPSO with a selective probability operator to solve the economic dispatch (ED) problems with valve-point effects and multiple fuel options. To show the performance of the proposed SQPSO, it is tested on five standard benchmark functions and two ED benchmark problems, including a 40-unit ED problem with valve-point effects and a 10-unit ED problem with multiple fuel options. The results are compared with differential evolution (DE), particle swarm optimization (PSO) and basic QPSO, as well as a number of other methods reported in the literature, in terms of solution quality, convergence speed and robustness. The simulation results confirm that the proposed SQPSO is effective and reliable for both function optimization and ED problems.
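
    For readers unfamiliar with QPSO, the sketch below shows the textbook QPSO position update on a simple sphere function; it is a generic illustration and does not include the selective probability operator that distinguishes the authors' SQPSO variant.

        import numpy as np

        def qpso(f, dim=10, n_particles=30, n_iter=500, lb=-5.0, ub=5.0, seed=0):
            """Basic quantum-behaved PSO (textbook form, not the SQPSO variant)."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(lb, ub, (n_particles, dim))
            pbest = x.copy()
            pbest_val = np.apply_along_axis(f, 1, x)
            for it in range(n_iter):
                gbest = pbest[pbest_val.argmin()]
                beta = 1.0 - 0.5 * it / n_iter                    # contraction-expansion coefficient
                mbest = pbest.mean(axis=0)                        # mean of personal best positions
                phi = rng.random((n_particles, dim))
                p = phi * pbest + (1.0 - phi) * gbest             # local attractor per particle
                u = rng.uniform(1.0e-12, 1.0, (n_particles, dim))
                sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
                x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u), lb, ub)
                val = np.apply_along_axis(f, 1, x)
                improved = val < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], val[improved]
            return pbest[pbest_val.argmin()], pbest_val.min()

        best_x, best_f = qpso(lambda v: np.sum(v * v))
        print(f"best objective after QPSO run: {best_f:.3e}")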

  16. Improved method for calculating neoclassical transport coefficients in the banana regime

    Energy Technology Data Exchange (ETDEWEB)

    Taguchi, M., E-mail: taguchi.masayoshi@nihon-u.ac.jp [College of Industrial Technology, Nihon University, Narashino 275-8576 (Japan)

    2014-05-15

    The conventional neoclassical moment method in the banana regime is improved by increasing the accuracy of the approximation to the linearized Fokker-Planck collision operator. This improved method is formulated for a multiple ion plasma in general tokamak equilibria. The explicit computation in a model magnetic field shows that the neoclassical transport coefficients can be accurately calculated in the full range of aspect ratio by the improved method. Some of the neoclassical transport coefficients at intermediate aspect ratio are found to deviate appreciably from those obtained by the conventional moment method. The differences between the transport coefficients obtained with these two methods are up to about 20%.

  17. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  18. Multiple attenuation to reflection seismic data using Radon filter and Wave Equation Multiple Rejection (WEMR) method

    Energy Technology Data Exchange (ETDEWEB)

    Erlangga, Mokhammad Puput [Geophysical Engineering, Institut Teknologi Bandung, Ganesha Street no.10 Basic Science B Buliding fl.2-3 Bandung, 40132, West Java Indonesia puput.erlangga@gmail.com (Indonesia)

    2015-04-16

    Separation between signal and noise, incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise can remain mixed with the primary signal. Multiple reflections are a kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, in cases where the moveout difference is too small, the Radon filter method is not enough to attenuate the multiple reflections. The Radon filter also produces artifacts in the gathers. Besides the Radon filter method, we also use the Wave Equation Multiple Rejection (WEMR) method to attenuate the long-period multiple reflections. The WEMR method attenuates long-period multiple reflections based on wave-equation inversion. From the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. The WEMR method does not depend on the moveout difference to attenuate long-period multiple reflections. Therefore, the WEMR method can be applied to seismic data with a small moveout difference, such as the Mentawai seismic data. The small moveout difference in the Mentawai seismic data is caused by the limited far offset, which is only 705 meters. We compared the multiple-free stacked data after processing with the Radon filter and with the WEMR process. The conclusion is that the WEMR method attenuates the long-period multiple reflections better than the Radon filter method on the real (Mentawai) seismic data.

  19. Improving the surface metrology accuracy of optical profilers by using multiple measurements

    Science.gov (United States)

    Xu, Xudong; Huang, Qiushi; Shen, Zhengxiang; Wang, Zhanshan

    2016-10-01

    The performance of high-resolution optical systems is affected by small-angle scattering at the mid-spatial-frequency irregularities of the optical surface. Characterizing these irregularities is, therefore, important. However, surface measurements obtained with optical profilers are influenced by additive white noise, as indicated by the heavy-tail effect observable in their power spectral density (PSD). A multiple-measurement method is used to reduce the effects of white noise by averaging individual measurements. The intensity of white noise is determined using a model based on the theoretical PSD of fractal surface measurements with additive white noise. The intensity of white noise decreases as the number of measurements increases. Using multiple measurements also increases the highest observed spatial frequency; this increase is derived and calculated. Additionally, the accuracy obtained using multiple measurements is carefully studied, analyzing both the residual reference error after calibration and the random errors appearing in the range of measured spatial frequencies. The resulting insights into the effects of white noise in optical profiler measurements and the methods to mitigate them may prove invaluable for improving the quality of surface metrology with optical profilers.
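
    A toy demonstration of the averaging idea (illustrative only; the surface, noise level, and number of repeats are invented): averaging N statistically independent measurements of the same profile leaves the surface PSD unchanged while the additive white-noise floor drops roughly as 1/N.

        import numpy as np

        rng = np.random.default_rng(0)
        n, dx = 4096, 1.0e-6                    # samples and sampling step (m)
        freqs = np.fft.rfftfreq(n, dx)

        # A fixed, roughly fractal surface profile plus independent white noise per measurement.
        phase = rng.uniform(0, 2 * np.pi, n // 2 + 1)
        amp = np.zeros_like(freqs)
        amp[1:] = freqs[1:] ** -1.0                              # ~1/f amplitude spectrum
        surface = np.fft.irfft(amp * np.exp(1j * phase), n)
        surface *= 1.0e-9 / surface.std()                        # scale to ~1 nm rms

        def measure():
            return surface + 0.5e-9 * rng.standard_normal(n)     # one noisy profiler measurement

        def noise_floor(profile):
            psd = np.abs(np.fft.rfft(profile)) ** 2
            return psd[len(psd) // 2:].mean()                    # high-frequency plateau ~ white noise

        single = measure()
        averaged = np.mean([measure() for _ in range(16)], axis=0)
        print(f"noise floor, single measurement : {noise_floor(single):.3e}")
        print(f"noise floor, 16-fold average    : {noise_floor(averaged):.3e}  (expect ~1/16)")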

  20. New weighting methods for phylogenetic tree reconstruction using multiple loci.

    Science.gov (United States)

    Misawa, Kazuharu; Tajima, Fumio

    2012-08-01

    Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances for reconstructing phylogenetic trees and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by the modified Tajima-Takezaki and by the modified least-squares method.

  1. Multiple model analysis with discriminatory data collection (MMA-DDC): A new method for improving measurement selection

    Science.gov (United States)

    Kikuchi, C.; Ferre, P. A.; Vrugt, J. A.

    2011-12-01

    Hydrologic models are developed, tested, and refined based on the ability of those models to explain available hydrologic data. The optimization of model performance based upon mismatch between model outputs and real world observations has been extensively studied. However, identification of plausible models is sensitive not only to the models themselves - including model structure and model parameters - but also to the location, timing, type, and number of observations used in model calibration. Therefore, careful selection of hydrologic observations has the potential to significantly improve the performance of hydrologic models. In this research, we seek to reduce prediction uncertainty through optimization of the data collection process. A new tool - multiple model analysis with discriminatory data collection (MMA-DDC) - was developed to address this challenge. In this approach, multiple hydrologic models are developed and treated as competing hypotheses. Potential new data are then evaluated on their ability to discriminate between competing hypotheses. MMA-DDC is well-suited for use in recursive mode, in which new observations are continuously used in the optimization of subsequent observations. This new approach was applied to a synthetic solute transport experiment, in which ranges of parameter values constitute the multiple hydrologic models, and model predictions are calculated using likelihood-weighted model averaging. MMA-DDC was used to determine the optimal location, timing, number, and type of new observations. From comparison with an exhaustive search of all possible observation sequences, we find that MMA-DDC consistently selects observations which lead to the highest reduction in model prediction uncertainty. We conclude that using MMA-DDC to evaluate potential observations may significantly improve the performance of hydrologic models while reducing the cost associated with collecting new data.

  2. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Directory of Open Access Journals (Sweden)

    Simona Jeners

    2013-06-01

    Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  3. Universal postquench coarsening and aging at a quantum critical point

    Science.gov (United States)

    Gagel, Pia; Orth, Peter P.; Schmalian, Jörg

    2015-09-01

    The nonequilibrium dynamics of a system that is located in the vicinity of a quantum critical point is affected by the critical slowing down of order-parameter correlations, with the potential for novel out-of-equilibrium universality. After a quantum quench, i.e., a sudden change of a parameter in the Hamiltonian, such a system is expected to almost instantly fall out of equilibrium and undergo aging dynamics, i.e., dynamics that depends on the time passed since the quench. Investigating the quantum dynamics of an N-component φ⁴ model coupled to an external bath, we determine this universal aging and demonstrate that the system undergoes coarsening governed by a critical exponent that is unrelated to the equilibrium exponents of the system. We analyze this behavior in the large-N limit, which is complementary to our earlier renormalization-group analysis, allowing in particular the direct investigation of the order-parameter dynamics in the symmetry-broken phase and at the upper critical dimension. By connecting the long-time limit of fluctuations and response, we introduce a distribution function that shows that the system remains nonthermal and exhibits quantum coherence even on long time scales.

  4. Fuzzy multiple attribute decision making methods and applications

    CERN Document Server

    Chen, Shu-Jen

    1992-01-01

    This monograph is intended for an advanced undergraduate or graduate course as well as for researchers, who want a compilation of developments in this rapidly growing field of operations research. This is a sequel to our previous works: "Multiple Objective Decision Making--Methods and Applications: A state-of-the-Art Survey" (No.164 of the Lecture Notes); "Multiple Attribute Decision Making--Methods and Applications: A State-of-the-Art Survey" (No.186 of the Lecture Notes); and "Group Decision Making under Multiple Criteria--Methods and Applications" (No.281 of the Lecture Notes). In this monograph, the literature on methods of fuzzy Multiple Attribute Decision Making (MADM) has been reviewed thoroughly and critically, and classified systematically. This study provides readers with a capsule look into the existing methods, their characteristics, and applicability to the analysis of fuzzy MADM problems. The basic concepts and algorithms from the classical MADM methods have been used in the development of the f...

  5. Recommendations to improve imaging and analysis of brain lesion load and atrophy in longitudinal studies of multiple sclerosis

    DEFF Research Database (Denmark)

    Vrenken, H; Jenkinson, M; Horsfield, M A

    2013-01-01

    ... resonance image analysis methods for assessing brain lesion load and atrophy, this paper makes recommendations to improve these measures for longitudinal studies of MS. Briefly, they are (1) images should be acquired using 3D pulse sequences, with near-isotropic spatial resolution and multiple image ... Focal lesions and brain atrophy are the most extensively studied aspects of multiple sclerosis (MS), but the image acquisition and analysis techniques used can be further improved, especially those for studying within-patient changes of lesion load and atrophy longitudinally. Improved accuracy ...

  6. Study of nucleation, growth and coarsening of precipitates in a novel 9%Cr heat resistant steel: Experimental and modeling

    International Nuclear Information System (INIS)

    Prat, O.; García, J.; Rojas, D.; Sanhueza, J.P.; Camurri, C.

    2014-01-01

    Nucleation, growth and coarsening of three different precipitates (NbC, M23C6 and V(C,N)) in a novel 9%Cr heat resistant steel designed by the authors were investigated. The microstructure evolution after tempering (780 °C/2 h) and after creep (650 °C/100 MPa) was characterized using transmission electron microscopy in scanning mode (STEM). Thermodynamic and kinetic modeling was carried out using the software packages Thermo-Calc, DICTRA and TC-PRISMA. Thermo-Calc predicted the formation of NbC, V(C,N) and M23C6 carbides at the tempering temperature of 780 °C. STEM investigations revealed that M23C6 precipitated on prior austenite grain boundaries and lath or block boundaries, whereas NbC and V(C,N) were located within sub-grains. Simulations with TC-PRISMA showed that nucleation of M23C6, NbC and V(C,N) particles begins as soon as the tempering treatment starts and is completed in a very short time, the equilibrium volume fraction being reached after 40 s for M23C6, 100 s for NbC and 80 s for V(C,N). The best agreement between simulations and experimental investigations was found for low interfacial energy values of 0.1 J m⁻². Both the STEM measurements and the DICTRA simulations indicate very low coarsening rates for both kinds of precipitates. Creep tests up to 4000–5000 h suggest that this special combination of NbC, V(C,N) and M23C6 may provide increased pinning of dislocations, reducing boundary migration and therefore enhancing creep strength. - Highlights: • Nucleation, growth and coarsening of NbC and M23C6 precipitates were investigated. • The microstructure was characterized using transmission electron microscopy (STEM). • Modeling was carried out using the software packages Thermo-Calc, DICTRA and TC-PRISMA. • M23C6 and NbC nucleation begins as soon as the solution treatment initiates. • The best agreement between modeling and experiments was found for low interfacial energy values of 0.1 J m⁻²

  7. [An improved low spectral distortion PCA fusion method].

    Science.gov (United States)

    Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong

    2013-10-01

    Aiming at the spectral distortion produced in the PCA fusion process, the present paper proposes an improved low-spectral-distortion PCA fusion method. This method uses the NCUT (normalized cut) image segmentation algorithm to partition a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and thereby weakening the spectral distortion of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory. These masks are used to cut the hyperspectral image and the high-resolution image into corresponding sub-region objects. Each pair of corresponding sub-region objects from the hyperspectral image and the high-resolution image is fused using the PCA method, and all sub-regional fusion results are spliced together to produce a new image. In the experiment, Hyperion hyperspectral data and Rapid Eye data were used. The experimental results show that the proposed method has a comparable ability to enhance spatial resolution and a greater ability to preserve spectral fidelity.
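
    For reference, the classical PCA component-substitution fusion that the method builds on can be sketched as below; the paper's contribution is to apply such a fusion per NCUT-segmented sub-region rather than globally. The histogram matching here is a simple mean/std match and the image data are placeholders.

```python
# Generic PCA component-substitution (pan-sharpening style) fusion sketch,
# applied to a whole image; shapes and data below are illustrative only.
import numpy as np

def pca_fuse(ms, pan):
    """ms: (H, W, B) multispectral image upsampled to pan resolution; pan: (H, W)."""
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = np.cov(Xc, rowvar=False)                 # principal components of the bands
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]
    scores = Xc @ vecs
    # Match the pan band to the first component's mean/std, then substitute it.
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * scores[:, 0].std() + scores[:, 0].mean()
    scores[:, 0] = p
    return (scores @ vecs.T + mu).reshape(h, w, b)

ms = np.random.rand(64, 64, 4)    # placeholder multispectral cube
pan = np.random.rand(64, 64)      # placeholder high-resolution pan band
print(pca_fuse(ms, pan).shape)    # (64, 64, 4)
```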

  8. Improved ESPRIT Method for Joint Direction-of-Arrival and Frequency Estimation Using Multiple-Delay Output

    Directory of Open Access Journals (Sweden)

    Wang Xudong

    2012-01-01

    Full Text Available An automatic-pairing joint direction-of-arrival (DOA) and frequency estimation method is presented to overcome the unsatisfactory performance of the estimation of signal parameters via rotational invariance techniques (ESPRIT)-like algorithm of Wang (2010), which requires an additional pairing step. By using the multiple-delay output of a uniform linear antenna array (ULA), the proposed algorithm estimates joint angles and frequencies with an improved ESPRIT. Compared with Wang's ESPRIT algorithm, the angle estimation performance of the proposed algorithm is greatly improved, while its frequency estimation performance is the same as that of Wang's ESPRIT algorithm. Furthermore, the proposed algorithm obtains automatically paired DOA and frequency parameters and has comparable computational complexity to Wang's ESPRIT algorithm. The proposed algorithm also works well for nonuniform linear arrays. Its useful behavior is verified by simulations.

  9. Toward Small-Diameter Carbon Nanotubes Synthesized from Captured Carbon Dioxide: Critical Role of Catalyst Coarsening.

    Science.gov (United States)

    Douglas, Anna; Carter, Rachel; Li, Mengya; Pint, Cary L

    2018-05-23

    Small-diameter carbon nanotubes (CNTs) often require increased sophistication and control in synthesis processes, but exhibit improved physical properties and greater economic value over their larger-diameter counterparts. Here, we study mechanisms controlling the electrochemical synthesis of CNTs from the capture and conversion of ambient CO2 in molten salts and leverage this understanding to achieve the smallest-diameter CNTs ever reported in the literature from sustainable electrochemical synthesis routes, including some few-walled CNTs. Here, Fe catalyst layers are deposited at different thicknesses onto stainless steel to produce cathodes, and atomic layer deposition of Al2O3 is performed on Ni to produce a corrosion-resistant anode. Our findings indicate a correlation between the CNT diameter and Fe metal layer thickness following electrochemical catalyst reduction at the cathode-molten salt interface. Further, catalyst coarsening during long-duration synthesis experiments leads to a 2× increase in average diameters from 3 to 60 min durations, with CNTs produced after 3 min exhibiting a tight diameter distribution centered near ∼10 nm. Energy consumption analysis for the conversion of CO2 into CNTs demonstrates energy input costs much lower than the value of the CNTs (a concept that strictly requires and motivates small-diameter CNTs) and is more favorable compared to other costly CO2 conversion techniques that produce lower-value materials and products.

  10. The study on defects in aluminum 2219-T6 thick butt friction stir welds with the application of multiple non-destructive testing methods

    International Nuclear Information System (INIS)

    Li, Bo; Shen, Yifu; Hu, Weiye

    2011-01-01

    Research highlights: → Friction stir weld-defect forming mechanisms of thick butt joints. → Relationship between weld defects and friction stir welding process parameters. → Multiple non-destructive testing methods applied to friction stir welds. → Empirical criterion based on mass conservation for inner material-loss defects. → Nonlinear correlation between weld strengths and root-flaw lengths. -- Abstract: The present study focused on the relationship between primary friction stir welding process parameters and the various types of weld defect discovered in aluminum 2219-T6 friction stir butt welds of thick plates; in addition, the weld-defect forming mechanisms were investigated. Besides a series of optical metallographic examinations of the friction stir butt welds, multiple non-destructive testing methods, including X-ray detection, ultrasonic C-scan testing, ultrasonic phased array inspection and fluorescent penetrant inspection, were successfully used to examine the shapes and locations of the different weld defects. In addition, coarsened precipitated Al2Cu phase particles were found around a 'kissing-bond' defect within the weld stirred nugget zone by means of scanning electron microscopy and energy dispersive X-ray analysis. On the basis of the volume conservation law of plastic deformation, a simple empirical criterion for estimating the existence of inner material-loss defects was proposed. Defect-free butt joints were obtained after process optimization of friction stir welding for aluminum 2219-T6 plates of 17-20 mm thickness. Process experiments proved that, besides tool rotation speed and travel speed, other appropriate process parameters also played important roles in the formation of high-quality friction stir welds, such as tool-shoulder target depth, spindle tilt angle, and fixture clamping conditions on the work-pieces. Furthermore, the nonlinear correlation between weld tensile strengths and weld crack

  11. IMPROVING FUNCTIONAL INDEPENDENCE OF PATIENTS WITH MULTIPLE SCLEROSIS BY PHYSICAL THERAPY AND OCCUPATIONAL THERAPY

    Directory of Open Access Journals (Sweden)

    Ana-Maria Ticărat

    2011-06-01

    Full Text Available Introduction. Patients with multiple sclerosis can have a normal life despite their real or potential disability and the progressive nature of the disease. Scope. Patients who follow physical therapy and occupational therapy will have an increased quality of life and greater functional independence. Methods. The randomized study was conducted on 7 patients with multiple sclerosis from the Oradea Day Centre, 3 times/week, ages between 35 and 55 years, with functional levels between mild and severe. Assessment and rehabilitation methods: inspection, the BARTHEL Index. The Frenkel method, breathing exercises, weight exercises, gait exercises, writing exercises and games were used in the rehabilitation process. Group therapies: sociotherapy, art therapy, music therapy. The analysis of results consisted of the comparison of baseline and final means. Results. By analyzing the baseline and final means of the Barthel Index for each function separately, a mild improvement of functional independence was shown for almost all assessed functions, of at least 1-1.5 points. Conclusions. Persons with multiple sclerosis who follow physical therapy and occupational therapy present better functional independence after the treatment.

  12. Addressing the targeting range of the ABILHAND-56 in relapsing-remitting multiple sclerosis: A mixed methods psychometric study.

    Science.gov (United States)

    Cleanthous, Sophie; Strzok, Sara; Pompilus, Farrah; Cano, Stefan; Marquis, Patrick; Cohan, Stanley; Goldman, Myla D; Kresa-Reahl, Kiren; Petrillo, Jennifer; Castrillo-Viguera, Carmen; Cadavid, Diego; Chen, Shih-Yin

    2018-01-01

    ABILHAND, a manual ability patient-reported outcome instrument originally developed for stroke patients, has been used in multiple sclerosis clinical trials; however, psychometric analyses indicated the measure's limited measurement range and precision in higher-functioning multiple sclerosis patients. The purpose of this study was to identify candidate items to expand the measurement range of the ABILHAND-56, thus improving its ability to detect differences in manual ability in higher-functioning multiple sclerosis patients. A step-wise mixed methods design strategy was used, comprising two waves of patient interviews, a combination of qualitative (concept elicitation and cognitive debriefing) and quantitative (Rasch measurement theory) analytic techniques, and consultation interviews with three clinical neurologists specializing in multiple sclerosis. The original ABILHAND was well understood in this context of use. Eighty-two new manual ability concepts were identified. Draft supplementary items were generated and refined with patient and neurologist input. Rasch measurement theory psychometric analysis indicated that the supplementary items improved targeting to higher-functioning multiple sclerosis patients and measurement precision. The final pool of Early Multiple Sclerosis Manual Ability items comprises 20 items. The synthesis of qualitative and quantitative methods used in this study improves the ABILHAND content validity to more effectively identify manual ability changes in early multiple sclerosis and potentially help determine treatment effect in higher-functioning patients in clinical trials.

  13. Basic thinking patterns and working methods for multiple DFX

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Mortensen, Niels Henrik

    1997-01-01

    This paper attempts to describe the theory and methodologies behind DFX and the linking of multiple DFX's together. The contribution is an articulation of basic thinking patterns and a description of some working methods for handling multiple DFX.

  14. Self-calibrated multiple-echo acquisition with radial trajectories using the conjugate gradient method (SMART-CG).

    Science.gov (United States)

    Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F

    2011-04-01

    To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging with multiple-coil acquisitions. Without a separate field map acquisition, a phase estimate from each echo in a multiple-echo train was generated. When using a multiple-channel coil, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Substantial image quality improvements were demonstrated: signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multi-echo radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.

  15. Improved parallel solution techniques for the integral transport matrix method

    Energy Technology Data Exchange (ETDEWEB)

    Zerr, R. Joseph, E-mail: rjz116@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA (United States); Azmy, Yousry Y., E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Burlington Engineering Laboratories, Raleigh, NC (United States)

    2011-07-01

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)

  16. Improved parallel solution techniques for the integral transport matrix method

    International Nuclear Information System (INIS)

    Zerr, R. Joseph; Azmy, Yousry Y.

    2011-01-01

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)
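
    A serial sketch of the red-black ordering idea behind the PGS algorithm is given below for a model 2D Poisson problem; it is only meant to show how the two color-sets are swept alternately, not the parallel ITMM implementation.

```python
# Serial sketch of one red-black Gauss-Seidel sweep for -lap(u) = f on a unit
# square with zero Dirichlet boundaries; within each color pass, every update
# only reads neighbors of the other color, which is what enables parallelism.
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=1):
    for _ in range(sweeps):
        for color in (0, 1):                       # 0 = red points, 1 = black points
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == color:
                        u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                          u[i, j-1] + u[i, j+1] + h * h * f[i, j])
    return u

n = 33
u = np.zeros((n, n)); f = np.ones((n, n)); h = 1.0 / (n - 1)
u = red_black_gauss_seidel(u, f, h, sweeps=500)
print(u[n // 2, n // 2])   # center value approaches the exact solution (about 0.074)
```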

  17. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by the wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized to optimize the predictive model parameters. An auto-regressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused result is smaller than that of any single model, so the prediction accuracy is improved. The simulation results are compared on two typical chaotic time series, the Lorenz time series and the Mackey–Glass time series, and show that the prediction method in this paper achieves better prediction accuracy.
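
    The fusion step can be illustrated with the minimum-variance (Gauss-Markov) combination of two unbiased forecasts; the component models (LSSVM and ARIMA) and the error variances below are stand-ins for values that would in practice be estimated from past residuals.

```python
# Minimal sketch of Gauss-Markov (minimum-variance) fusion of two unbiased,
# uncorrelated predictors; numbers below are illustrative.
import numpy as np

def gauss_markov_fuse(y1, y2, var1, var2):
    """Optimal linear fusion of two uncorrelated unbiased estimates."""
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    fused = w1 * np.asarray(y1) + w2 * np.asarray(y2)
    fused_var = (var1 * var2) / (var1 + var2)      # never larger than min(var1, var2)
    return fused, fused_var

pred_lssvm, pred_arima = 1.02, 0.95                # example one-step-ahead forecasts
fused, var = gauss_markov_fuse(pred_lssvm, pred_arima, var1=0.04, var2=0.01)
print(fused, var)   # the fused value is pulled toward the lower-variance forecast
```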

  18. Numerical method of lines for the relaxational dynamics of nematic liquid crystals.

    Science.gov (United States)

    Bhattacharjee, A K; Menon, Gautam I; Adhikari, R

    2008-08-01

    We propose an efficient numerical scheme, based on the method of lines, for solving the Landau-de Gennes equations describing the relaxational dynamics of nematic liquid crystals. Our method is computationally easy to implement, balancing requirements of efficiency and accuracy. We benchmark our method through the study of the following problems: the isotropic-nematic interface, growth of nematic droplets in the isotropic phase, and the kinetics of coarsening following a quench into the nematic phase. Our results, obtained through solutions of the full coarse-grained equations of motion with no approximations, provide a stringent test of the de Gennes ansatz for the isotropic-nematic interface, illustrate the anisotropic character of droplets in the nucleation regime, and validate dynamical scaling in the coarsening regime.
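
    The method-of-lines idea (discretize space, then integrate the resulting ODE system in time with a standard solver) can be sketched on a much simpler scalar relaxational equation; the full Landau-de Gennes equations evolve a tensor order parameter, so the example below is only a simplified analogue with invented parameters.

```python
# Method-of-lines sketch for a scalar relaxational (model A) equation
# d(phi)/dt = lap(phi) + phi - phi**3 on a periodic 2D grid.
import numpy as np
from scipy.integrate import solve_ivp

n, L = 64, 32.0
dx = L / n
rng = np.random.default_rng(1)
phi0 = 0.1 * rng.standard_normal((n, n))           # small random initial condition

def rhs(t, y):
    phi = y.reshape(n, n)
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / dx**2   # periodic Laplacian
    return (lap + phi - phi**3).ravel()

sol = solve_ivp(rhs, (0.0, 20.0), phi0.ravel(), t_eval=[20.0])
phi_T = sol.y[:, -1].reshape(n, n)
print(phi_T.min(), phi_T.max())   # domains order toward +/-1 and then coarsen
```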

  19. Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.

    Science.gov (United States)

    Smith, Kent W.; Sasaki, M. S.

    1979-01-01

    A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
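
    The standard remedy in this spirit is to mean-center the predictors before forming the multiplicative term, which removes most of the nonessential collinearity; the toy example below only illustrates that effect and is not code from the paper.

```python
# Numeric illustration: the correlation between a predictor and its
# multiplicative (interaction) term drops sharply after mean-centering.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, 1000)       # predictor with a non-zero mean
z = rng.normal(5.0, 1.0, 1000)

raw_product      = x * z
centered_product = (x - x.mean()) * (z - z.mean())

print(np.corrcoef(x, raw_product)[0, 1])        # close to 1: severe collinearity
print(np.corrcoef(x, centered_product)[0, 1])   # near 0 after centering
```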

  20. Use of multiple methods to determine factors affecting quality of care of patients with diabetes.

    Science.gov (United States)

    Khunti, K

    1999-10-01

    The process of care of patients with diabetes is complex; however, GPs are playing a greater role in its management. Despite the research evidence, the quality of care of patients with diabetes is variable. In order to improve care, information is required on the obstacles faced by practices in improving care. Qualitative and quantitative methods can be used for formation of hypotheses and the development of survey procedures. However, to date few examples exist in general practice research on the use of multiple methods using both quantitative and qualitative techniques for hypothesis generation. We aimed to determine information on all factors that may be associated with delivery of care to patients with diabetes. Factors for consideration on delivery of diabetes care were generated by multiple qualitative methods including brainstorming with health professionals and patients, a focus group and interviews with key informants which included GPs and practice nurses. Audit data showing variations in care of patients with diabetes were used to stimulate the brainstorming session. A systematic literature search focusing on quality of care of patients with diabetes in primary care was also conducted. Fifty-four potential factors were identified by multiple methods. Twenty (37.0%) were practice-related factors, 14 (25.9%) were patient-related factors and 20 (37.0%) were organizational factors. A combination of brainstorming and the literature review identified 51 (94.4%) factors. Patients did not identify factors in addition to those identified by other methods. The complexity of delivery of care to patients with diabetes is reflected in the large number of potential factors identified in this study. This study shows the feasibility of using multiple methods for hypothesis generation. Each evaluation method provided unique data which could not otherwise be easily obtained. This study highlights a way of combining various traditional methods in an attempt to overcome the

  1. Multiple imputation for multivariate data with missing and below-threshold measurements: time-series concentrations of pollutants in the Arctic.

    Science.gov (United States)

    Hopke, P K; Liu, C; Rubin, D B

    2001-03-01

    Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
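
    A toy version of imputing below-detection-limit values is sketched below: fit a lognormal to the observed concentrations and draw each censored value from that distribution truncated at the detection limit, repeating the process m times. The models developed in the paper are multivariate and time-series aware; this only shows the basic mechanism, with made-up data.

```python
# Toy multiple imputation of left-censored (below-detection-limit) values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true = rng.lognormal(mean=0.0, sigma=1.0, size=200)      # synthetic concentrations
dl = 0.5                                                  # detection limit
observed = np.where(true >= dl, true, np.nan)             # censored data set

def impute_once(x, dl, rng):
    obs = x[~np.isnan(x)]
    mu, sigma = np.log(obs).mean(), np.log(obs).std()     # crude lognormal fit
    out = x.copy()
    for i in np.where(np.isnan(x))[0]:
        u = rng.uniform(0, stats.norm.cdf((np.log(dl) - mu) / sigma))
        out[i] = np.exp(mu + sigma * stats.norm.ppf(u))    # draw below the limit
    return out

imputations = [impute_once(observed, dl, rng) for _ in range(5)]   # m = 5 data sets
print([round(d.mean(), 3) for d in imputations])  # analyze each, then pool results
```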

  2. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance-based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori ...

  3. s-Step Krylov Subspace Methods as Bottom Solvers for Geometric Multigrid

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lijewski, Mike [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Almgren, Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Straalen, Brian Van [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Carson, Erin [Univ. of California, Berkeley, CA (United States); Knight, Nicholas [Univ. of California, Berkeley, CA (United States); Demmel, James [Univ. of California, Berkeley, CA (United States)

    2014-08-14

    Geometric multigrid solvers within adaptive mesh refinement (AMR) applications often reach a point where further coarsening of the grid becomes impractical as individual subdomain sizes approach unity. At this point the most common solution is to use a bottom solver, such as BiCGStab, to reduce the residual by a fixed factor at the coarsest level. Each iteration of BiCGStab requires multiple global reductions (MPI collectives). As the number of BiCGStab iterations required for convergence grows with problem size, and the time for each collective operation increases with machine scale, bottom solves in large-scale applications can constitute a significant fraction of the overall multigrid solve time. In this paper, we implement, evaluate, and optimize a communication-avoiding s-step formulation of BiCGStab (CABiCGStab for short) as a high-performance, distributed-memory bottom solver for geometric multigrid solvers. This is the first time s-step Krylov subspace methods have been leveraged to improve multigrid bottom solver performance. We use a synthetic benchmark for detailed analysis and integrate the best implementation into BoxLib in order to evaluate the benefit of an s-step Krylov subspace method on the multigrid solves found in the applications LMC and Nyx on up to 32,768 cores on the Cray XE6 at NERSC. Overall, we see bottom solver improvements of up to 4.2x on synthetic problems and up to 2.7x in real applications. This results in as much as a 1.5x improvement in solver performance in real applications.

  4. Measuring multiple residual-stress components using the contour method and multiple cuts

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Swenson, Hunter [Los Alamos National Laboratory; Pagliaro, Pierluigi [U. PALERMO; Zuccarello, Bernardo [U. PALERMO

    2009-01-01

    The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.

  5. Mathematical Modeling of the Growth and Coarsening of Ice Particles in the Context of High Pressure Shift Freezing Processes

    KAUST Repository

    Smith, N. A. S.; Burlakov, V. M.; Ramos, Á . M.

    2013-01-01

    High pressure shift freezing (HPSF) has been proven more beneficial for ice crystal size and shape than traditional (at atmospheric pressure) freezing.1-3 A model for growth and coarsening of ice crystals inside a frozen food sample (either at atmospheric or high pressure) is developed, and some numerical experiments are given, with which the model is validated by using experimental data. To the best of our knowledge, this is the first model suited for freezing crystallization in the context of high pressure. © 2013 American Chemical Society.

  6. Mathematical Modeling of the Growth and Coarsening of Ice Particles in the Context of High Pressure Shift Freezing Processes

    KAUST Repository

    Smith, N. A. S.

    2013-07-25

    High pressure shift freezing (HPSF) has been proven more beneficial for ice crystal size and shape than traditional (at atmospheric pressure) freezing.1-3 A model for growth and coarsening of ice crystals inside a frozen food sample (either at atmospheric or high pressure) is developed, and some numerical experiments are given, with which the model is validated by using experimental data. To the best of our knowledge, this is the first model suited for freezing crystallization in the context of high pressure. © 2013 American Chemical Society.

  7. Causal Effect of Self-esteem on Cigarette Smoking Stages in Adolescents: Coarsened Exact Matching in a Longitudinal Study.

    Science.gov (United States)

    Khosravi, Ahmad; Mohammadpoorasl, Asghar; Holakouie-Naieni, Kourosh; Mahmoodi, Mahmood; Pouyan, Ali Akbar; Mansournia, Mohammad Ali

    2016-12-01

    Identification of the causal impact of self-esteem on smoking stages faces seemingly insurmountable problems in observational data, where self-esteem is not manipulable by the researcher and cannot be assigned randomly. The aim of this study was to find out whether weaker self-esteem in adolescence is a risk factor for cigarette smoking, in a longitudinal study in Iran. In this longitudinal study, 4,853 students (14-18 years) completed a self-administered, multiple-choice, anonymous questionnaire. The students were evaluated twice, 12 months apart. Students were matched based on coarsened exact matching on pretreatment variables, including age, gender, smoking stage at the first wave of the study, socioeconomic status, general risk-taking behavior, having a smoker in the family, having a smoker friend, attitude toward smoking, and self-injury, to ensure statistically equivalent comparison groups. Self-esteem was measured using the Rosenberg 10-item questionnaire and classified using latent class analysis. After matching, the effect of self-esteem was evaluated using a multinomial logistic model. In the fitted causal model, for adolescents with weaker self-esteem relative to those with stronger self-esteem, the relative risk of being an experimenter or a regular smoker relative to a nonsmoker would be expected to increase by a factor of 2.2 (1.9-2.6) and 2.0 (1.5-2.6), respectively. Using a causal approach, our study indicates that low self-esteem is consistently associated with progression through cigarette smoking stages.
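
    The matching step itself can be sketched in a few lines of pandas: coarsen the pretreatment covariates into bins, form strata from the joint bins, and drop strata that do not contain both exposure groups. Variable names and bin choices below are illustrative, not the study's full covariate set or weighting scheme.

```python
# Minimal coarsened exact matching (CEM) sketch on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "low_self_esteem": rng.integers(0, 2, n),          # "treatment" indicator
    "age": rng.integers(14, 19, n),
    "risk_taking": rng.normal(0, 1, n),
    "smoker_friend": rng.integers(0, 2, n),
})

# Coarsen continuous covariates into a few bins; categorical ones stay as-is.
df["risk_bin"] = pd.cut(df["risk_taking"], bins=3, labels=False)
strata_vars = ["age", "risk_bin", "smoker_friend"]

# Keep only strata containing at least one treated and one control unit.
counts = df.groupby(strata_vars)["low_self_esteem"].agg(["min", "max"])
matched_keys = counts[(counts["min"] == 0) & (counts["max"] == 1)].index
matched = df.set_index(strata_vars).loc[matched_keys].reset_index()
print(len(matched), "of", n, "units retained in matched strata")
```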

  8. Multiple Signal Classification Algorithm Based Electric Dipole Source Localization Method in an Underwater Environment

    Directory of Open Access Journals (Sweden)

    Yidong Xu

    2017-10-01

    Full Text Available A novel localization method based on the multiple signal classification (MUSIC) algorithm is proposed for positioning an electric dipole source in a confined underwater environment by using an electric dipole receiving antenna array. In this method, the boundary element method (BEM) is introduced to analyze the boundary of the confined region by use of a matrix equation. The voltage of each dipole pair is used as the spatial-temporal localization data and, unlike the conventional field-based localization method, the approach does not need to obtain the field component in each direction, so it can be easily implemented in practical engineering applications. Then, a global-multiple region-conjugate gradient (CG) hybrid search method is used to reduce the computational burden and to improve the operation speed. Two localization simulation models and a physical experiment were conducted. Both the simulation results and the physical experiment result show accurate positioning performance, which helps to verify the effectiveness of the proposed localization method in underwater environments.
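
    For orientation, a generic narrowband MUSIC direction-of-arrival sketch for a uniform linear array is given below; the paper instead builds the steering model for a dipole source in a bounded underwater region via BEM and searches over source position rather than angle, so this is only the shared subspace idea.

```python
# Generic narrowband MUSIC pseudospectrum for a uniform linear array (ULA).
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """X: (n_sensors, n_snapshots) complex data; d: element spacing in wavelengths."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                    # sample covariance
    vals, vecs = np.linalg.eigh(R)
    En = vecs[:, : m - n_sources]                      # noise subspace (smallest eigenvalues)
    k = np.arange(m)
    p = []
    for th in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * k * np.sin(th))   # ULA steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(p)

# Two sources at -20 and 30 degrees, 8 sensors, 200 snapshots, additive noise.
rng = np.random.default_rng(0)
m, t = 8, 200
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
S = rng.standard_normal((2, t)) + 1j * rng.standard_normal((2, t))
X = A @ S + 0.1 * (rng.standard_normal((m, t)) + 1j * rng.standard_normal((m, t)))
ang, p = music_spectrum(X, n_sources=2)
print(ang[np.argmax(p)])   # the global peak lies at one of the true DOAs
```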

  9. Improved accuracy of multiple ncRNA alignment by incorporating structural information into a MAFFT-based framework

    Directory of Open Access Journals (Sweden)

    Toh Hiroyuki

    2008-04-01

    Full Text Available Abstract Background Structural alignment of RNAs is becoming important, since the discovery of functional non-coding RNAs (ncRNAs). Recent studies, mainly based on various approximations of the Sankoff algorithm, have resulted in considerable improvement in the accuracy of pairwise structural alignment. In contrast, for the cases with more than two sequences, the practical merit of structural alignment remains unclear as compared to traditional sequence-based methods, although the importance of multiple structural alignment is widely recognized. Results We took a different approach from a straightforward extension of the Sankoff algorithm to the multiple alignments from the viewpoints of accuracy and time complexity. As a new option of the MAFFT alignment program, we developed a multiple RNA alignment framework, X-INS-i, which builds a multiple alignment with an iterative method incorporating structural information through two components: (1) pairwise structural alignments by an external pairwise alignment method such as SCARNA or LaRA and (2) a new objective function, Four-way Consistency, derived from the base-pairing probability of every sub-aligned group at every multiple alignment stage. Conclusion The BRAliBASE benchmark showed that X-INS-i outperforms other methods currently available in the sum-of-pairs score (SPS) criterion. As a basis for predicting common secondary structure, the accuracy of the present method is comparable to or rather higher than those of the current leading methods such as RNA Sampler. The X-INS-i framework can be used for building a multiple RNA alignment from any combination of algorithms for pairwise RNA alignment and base-pairing probability. The source code is available at the webpage found in the Availability and requirements section.

  10. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance.

    Science.gov (United States)

    Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S

    2017-10-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten

  11. A new method to improve multiplication factor in micro-pixel avalanche photodiodes with high pixel density

    Energy Technology Data Exchange (ETDEWEB)

    Sadygov, Z. [National Nuclear Research Center, Baku (Azerbaijan); Joint Institute for Nuclear Research, Dubna (Russian Federation); Ahmadov, F. [National Nuclear Research Center, Baku (Azerbaijan); Khorev, S. [Zecotek Photonics Inc., Vancouver (Canada); Sadigov, A., E-mail: saazik@yandex.ru [National Nuclear Research Center, Baku (Azerbaijan); Suleymanov, S. [National Nuclear Research Center, Baku (Azerbaijan); Madatov, R.; Mehdiyeva, R. [Institute of Radiation Problems, Baku (Azerbaijan); Zerrouk, F. [Zecotek Photonics Inc., Vancouver (Canada)

    2016-07-11

    Presented is a new model describing development of the avalanche process in time, taking into account the dynamics of electric field within the depleted region of the diode and the effect of parasitic capacitance shunting individual quenching micro-resistors on device parameters. Simulations show that the effective capacitance of a single pixel, which defines the multiplication factor, is the sum of the pixel capacitance and a parasitic capacitance shunting its quenching micro-resistor. Conclusions obtained as a result of modeling open possibilities of improving the pixel gain in micropixel avalanche photodiodes with high pixel density (or low pixel capacitance).

  12. Coarsening of stripe patterns: variations with quench depth and scaling.

    Science.gov (United States)

    Tripathi, Ashwani K; Kumar, Deepak

    2015-02-01

    The coarsening of stripe patterns when the system is evolved from random initial states is studied by varying the quench depth ε, which is a measure of distance from the transition point of the stripe phase. The dynamics of the growth of stripe order, which is characterized by two length scales, depends on the quench depth. The growth exponents of the two length scales vary continuously with ε. The decay exponents for free energy, stripe curvature, and densities of defects like grain boundaries and dislocations also show similar variation. This implies a breakdown of the standard picture of nonequilibrium dynamical scaling. In order to understand the variations with ε we propose an additional scaling with a length scale dependent on ε. The main contribution to this length scale comes from the "pinning potential," which is unique to systems where the order parameter is spatially periodic. The periodic order parameter gives rise to an ε-dependent potential, which can pin defects like grain boundaries, dislocations, etc. This additional scaling provides a compact description of variations of growth exponents with quench depth in terms of just one exponent for each of the length scales. The relaxation of free energy, stripe curvature, and the defect densities have also been related to these length scales. The study is done at zero temperature using Swift-Hohenberg equation in two dimensions.
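
    A compact pseudo-spectral sketch of the two-dimensional Swift-Hohenberg equation, evolved from a random initial state, is given below; the grid, quench depth, and semi-implicit time step are illustrative choices, and extracting the two length scales and defect densities discussed in the paper would require further analysis of the resulting field.

```python
# Pseudo-spectral sketch of the 2D Swift-Hohenberg equation
# du/dt = eps*u - (1 + lap)^2 u - u^3, integrated with a semi-implicit Euler step.
import numpy as np

n, L, eps, dt, steps = 128, 32 * np.pi, 0.3, 0.5, 400
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k, k)
lin = eps - (1.0 - (kx**2 + ky**2)) ** 2           # linear operator in Fourier space

rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal((n, n))              # random initial state ("quench")
for _ in range(steps):
    nl = np.fft.fft2(-u**3)                        # nonlinear term, treated explicitly
    u_hat = (np.fft.fft2(u) + dt * nl) / (1.0 - dt * lin)   # implicit linear part
    u = np.real(np.fft.ifft2(u_hat))

print(u.std())   # the stripe amplitude saturates to an O(1) value set by eps
```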

  13. Improved multiple displacement amplification (iMDA) and ultraclean reagents.

    Science.gov (United States)

    Motley, S Timothy; Picuri, John M; Crowder, Chris D; Minich, Jeremiah J; Hofstadler, Steven A; Eshoo, Mark W

    2014-06-06

    Next-generation sequencing sample preparation requires nanogram to microgram quantities of DNA; however, many relevant samples consist of only a few cells. Genomic analysis of these samples requires a whole genome amplification method that is unbiased and free of exogenous DNA contamination. To address these challenges we have developed protocols for the production of DNA-free consumables, including reagents, and have improved upon multiple displacement amplification (iMDA). A specialized ethylene oxide treatment was developed that renders free DNA and DNA present within Gram-positive bacterial cells undetectable by qPCR. To reduce DNA contamination in amplification reagents, a combination of ion exchange chromatography, filtration, and lot testing protocols was developed. Our multiple displacement amplification protocol employs a second strand-displacing DNA polymerase, improved buffers, improved reaction conditions and DNA-free reagents. The iMDA protocol, when used in combination with DNA-free laboratory consumables and reagents, significantly improved the efficiency and accuracy of amplification and sequencing of specimens with moderate to low levels of DNA. The sensitivity and specificity of sequencing of amplified DNA prepared using iMDA was compared to that of DNA obtained with two commercial whole genome amplification kits using 10 fg (~1-2 bacterial cells worth) of bacterial genomic DNA as a template. Analysis showed >99% of the iMDA reads mapped to the template organism whereas only 0.02% of the reads from the commercial kits mapped to the template. To assess the ability of iMDA to achieve balanced genomic coverage, a non-stochastic amount of bacterial genomic DNA (1 pg) was amplified and sequenced, and the data obtained were compared to sequencing data obtained directly from genomic DNA. The iMDA DNA and genomic DNA sequencing had comparable coverage: 99.98% of the reference genome at ≥1X coverage and 99.9% at ≥5X coverage, while maintaining both balance

  14. Comparative study of He bubble formation in nanostructured reduced activation steel and its coarsen-grained counterpart

    Science.gov (United States)

    Liu, W. B.; Zhang, J. H.; Ji, Y. Z.; Xia, L. D.; Liu, H. P.; Yun, D.; He, C. H.; Zhang, C.; Yang, Z. G.

    2018-03-01

    High-temperature (550 °C) He ion irradiation was performed on nanostructured (NS) and coarse-grained (CG) reduced activation steel to investigate the effects of GBs/interfaces on the formation of bubbles during irradiation. Experimental results showed that He bubbles were preferentially trapped at dislocations and/or grain boundaries (GBs) in both samples. Void denuded zones (VDZs) were observed in the CG samples, while VDZs near GBs were not obvious in the NS sample. However, both the average bubble size and the bubble density in the peak damage region of the CG sample were significantly larger than those observed in the NS sample, which indicated that GBs play an important role during irradiation and that the NS steel has better irradiation resistance than its CG counterpart.

  15. Modified Truncated Multiplicity Analysis to Improve Verification of Uranium Fuel Cycle Materials

    International Nuclear Information System (INIS)

    LaFleur, A.; Miller, K.; Swinhoe, M.; Belian, A.; Croft, S.

    2015-01-01

    Accurate verification of 235U enrichment and mass in UF6 storage cylinders and the UO2F2 holdup contained in the process equipment is needed to improve international safeguards and nuclear material accountancy at uranium enrichment plants. Small UF6 cylinders (1.5'' and 5'' diameter) are used to store the full range of enrichments from depleted to highly-enriched UF6. For independent verification of these materials, it is essential that the 235U mass and enrichment measurements do not rely on facility operator declarations. Furthermore, in order to be deployed by IAEA inspectors to detect undeclared activities (e.g., during complementary access), it is also imperative that the measurement technique is quick, portable, and sensitive to a broad range of 235U masses. Truncated multiplicity analysis is a technique that reduces the variance in the measured count rates by only considering moments 1, 2, and 3 of the multiplicity distribution. This is especially important for reducing the uncertainty in the measured doubles and triples rates in environments with a high cosmic ray background relative to the uranium signal strength. However, we believe that the existing truncated multiplicity analysis throws away too much useful data by truncating the distribution after the third moment. This paper describes a modified truncated multiplicity analysis method that determines the optimal moment to truncate the multiplicity distribution based on the measured data. Experimental measurements of small UF6 cylinders and UO2F2 working reference materials were performed at Los Alamos National Laboratory (LANL). The data were analyzed using traditional and modified truncated multiplicity analysis to determine the optimal moment to truncate the multiplicity distribution to minimize the uncertainty in the measured count rates. The results from this analysis directly support nuclear safeguards at enrichment plants and provide a more accurate verification method for UF6
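
    The truncation itself can be illustrated by computing only the first few reduced factorial moments of a measured multiplicity histogram, as sketched below; the histogram values are invented, and the subsequent conversion of these moments to singles/doubles/triples rates is not shown.

```python
# Truncated moments of a neutron multiplicity distribution: retain only the
# first few reduced factorial moments (here orders 1-3). Histogram is made up.
import numpy as np
from math import comb

def truncated_factorial_moments(counts, max_order=3):
    """counts[n] = number of events with multiplicity n; returns moments 1..max_order."""
    counts = np.asarray(counts, float)
    p = counts / counts.sum()                 # normalized multiplicity distribution
    n = np.arange(len(p))
    return [float(sum(comb(int(k), r) * p[k] for k in n)) for r in range(1, max_order + 1)]

hist = [5200, 2300, 640, 120, 18, 2]          # hypothetical multiplicity histogram
print(truncated_factorial_moments(hist, max_order=3))
```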

  16. An improved EMD method for modal identification and a combined static-dynamic method for damage detection

    Science.gov (United States)

    Yang, Jinping; Li, Peizhen; Yang, Youfa; Xu, Dian

    2018-04-01

    Empirical mode decomposition (EMD) is a highly adaptable signal processing method. However, the EMD approach has certain drawbacks, including distortions from end effects and mode mixing. In the present study, these two problems are addressed using an end extension method based on the support vector regression machine (SVRM) and a modal decomposition method based on the characteristics of the Hilbert transform. The algorithm includes two steps: using the SVRM, the time series data are extended at both endpoints to reduce the end effects, and then, a modified EMD method using the characteristics of the Hilbert transform is performed on the resulting signal to reduce mode mixing. A new combined static-dynamic method for identifying structural damage is presented. This method combines the static and dynamic information in an equilibrium equation that can be solved using the Moore-Penrose generalized matrix inverse. The combination method uses the differences in displacements of the structure with and without damage and variations in the modal force vector. Tests on a four-story, steel-frame structure were conducted to obtain static and dynamic responses of the structure. The modal parameters are identified using data from the dynamic tests and improved EMD method. The new method is shown to be more accurate and effective than the traditional EMD method. Through tests with a shear-type test frame, the higher performance of the proposed static-dynamic damage detection approach, which can detect both single and multiple damage locations and the degree of the damage, is demonstrated. For structures with multiple damage, the combined approach is more effective than either the static or dynamic method. The proposed EMD method and static-dynamic damage detection method offer improved modal identification and damage detection, respectively, in structures.
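
    The end-extension component can be sketched with scikit-learn's SVR: fit a regression model to a window at each end of the record and extrapolate a few samples beyond the endpoints before sifting, so that end effects fall outside the original data. The window length, kernel, and extension length below are illustrative choices, not the paper's settings.

```python
# SVR-based end extension of a signal prior to EMD (extension later discarded).
import numpy as np
from sklearn.svm import SVR

def extend_signal(x, n_ext=20, window=100):
    """Return x with n_ext extrapolated samples appended at both ends."""
    idx = np.arange(len(x))

    def fit_extend(t, y, t_new):
        model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(t.reshape(-1, 1), y)
        return model.predict(t_new.reshape(-1, 1))

    right = fit_extend(idx[-window:], x[-window:], np.arange(len(x), len(x) + n_ext))
    left = fit_extend(idx[:window], x[:window], np.arange(-n_ext, 0))
    return np.concatenate([left, x, right])

t = np.linspace(0, 10, 500)
x = np.sin(2 * np.pi * 1.3 * t) + 0.3 * np.sin(2 * np.pi * 4.7 * t)
x_ext = extend_signal(x)
print(len(x), "->", len(x_ext))   # the extension is dropped after decomposition
```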

  17. HARMONIC ANALYSIS OF SVPWM INVERTER USING MULTIPLE-PULSES METHOD

    Directory of Open Access Journals (Sweden)

    Mehmet YUMURTACI

    2009-01-01

    Full Text Available The Space Vector Modulation (SVM) technique is a popular and important PWM technique for three-phase voltage source inverters in the control of induction motors. In this study, the harmonic analysis of Space Vector PWM (SVPWM) is investigated using the multiple-pulses method. The multiple-pulses method calculates the Fourier coefficients of the individual positive and negative pulses of the output PWM waveform and adds them together using the principle of superposition to calculate the Fourier coefficients of the whole PWM output signal. Harmonic magnitudes can be calculated directly by this method without linearization, look-up tables or Bessel functions. In this study, the results obtained in the application of SVPWM for values of the variable parameters are compared with the results obtained with the multiple-pulses method.
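
    The superposition idea can be sketched directly: each rectangular pulse of the PWM waveform contributes closed-form Fourier coefficients, and the coefficients of the whole waveform are their sums. The pulse list in the example is made up and is not an actual SVPWM switching pattern.

```python
# Fourier coefficients of a PWM waveform by superposition of its pulses.
# Each pulse is (amplitude, start_angle, end_angle) in radians over one 2*pi period.
import numpy as np

def pwm_harmonics(pulses, n_max=25):
    a = np.zeros(n_max + 1)
    b = np.zeros(n_max + 1)
    for amp, t1, t2 in pulses:
        a[0] += amp * (t2 - t1) / (2 * np.pi)                      # DC contribution
        for n in range(1, n_max + 1):
            a[n] += amp / (n * np.pi) * (np.sin(n * t2) - np.sin(n * t1))
            b[n] += amp / (n * np.pi) * (np.cos(n * t1) - np.cos(n * t2))
    return a, b

pulses = [(1.0, 0.2, 1.1), (1.0, 1.6, 2.8), (-1.0, 3.4, 4.3), (-1.0, 4.8, 6.0)]
a, b = pwm_harmonics(pulses)
mag = np.hypot(a, b)
print(mag[1], mag[5], mag[7])    # fundamental and selected harmonic magnitudes
```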

  18. Teaching Literacy: Methods for Studying and Improving Library Instruction

    Directory of Open Access Journals (Sweden)

    Meggan Houlihan

    2012-12-01

    Full Text Available Objective – The aim of this paper is to evaluate teaching effectiveness in one-shot information literacy (IL) instruction sessions. The authors used multiple methods, including plus/delta forms, peer evaluations, and instructor feedback surveys, in an effort to improve student learning, individual teaching skill, and the overall IL program at the American University in Cairo. Methods – Researchers implemented three main evaluation tools to gather data in this study. Librarians collected both quantitative and qualitative data using student plus/delta surveys, peer evaluation, and faculty feedback in order to draw overall conclusions about the effectiveness of one-shot IL sessions. By designing a multi-method study, and gathering information from students, faculty, and instruction librarians, results represented the perspectives of multiple stakeholders. Results – The data collected using the three evaluation tools provided insight into the needs and perspectives of three stakeholder groups. Individual instructors benefit from the opportunity to improve teaching through informed reflection, and are eager for feedback. Faculty members want their students to have more hands-on experience, but are pleased overall with instruction. Students need less lecturing and more authentic learning opportunities to engage with new knowledge. Conclusion – Including evaluation techniques in overall information literacy assessment plans is valuable, as instruction librarians gain opportunities for self-reflection and improvement, and administrators gather information about teaching skill levels. The authors gathered useful data that informed administrative decision making related to the IL program at the American University in Cairo. The findings discussed in this paper, both practical and theoretical, can help other college and university librarians think critically about their own IL programs, and influence how library instruction sessions might be evaluated and

  19. Power-efficient method for IM-DD optical transmission of multiple OFDM signals.

    Science.gov (United States)

    Effenberger, Frank; Liu, Xiang

    2015-05-18

    We propose a power-efficient method for transmitting multiple frequency-division multiplexed (FDM) orthogonal frequency-division multiplexing (OFDM) signals in intensity-modulation direct-detection (IM-DD) optical systems. This method is based on quadratic soft clipping in combination with odd-only channel mapping. We show, both analytically and experimentally, that the proposed approach is capable of improving the power efficiency by about 3 dB as compared to conventional FDM OFDM signals under practical bias conditions, making it a viable solution in applications such as optical fiber-wireless integrated systems where both IM-DD optical transmission and OFDM signaling are important.

  20. Single-phase Near-well Permeability Upscaling and Productivity Index Calculation Methods

    Directory of Open Access Journals (Sweden)

    Seyed Shamsollah Noorbakhsh

    2014-10-01

    Full Text Available Reservoir models with many grid blocks suffer from long run times; it is therefore important to devise a method to remedy this drawback. Standard upscaling methods are known to fail to reproduce fine-grid model behavior in coarse-grid models in the well proximity. This is attributed to rapid pressure changes in the near-well region. Standard permeability upscaling methods are limited to systems with linear pressure changes; therefore, special near-well upscaling approaches based on the well index concept are proposed for these regions with non-linear pressure profiles. No general rule is available to calculate the proper well index for different heterogeneity patterns and coarsening levels. In this paper, the available near-well upscaling methods are investigated for homogeneous and heterogeneous permeability models at different coarsening levels. It is observed that the existing well index methods have limited success in reproducing the well flow and pressure behavior of the reference fine-grid models as the heterogeneity or coarsening level increases. Coarse-scale well indexes are determined such that the fine- and coarse-scale pressure results are in agreement. Both vertical and horizontal wells are investigated and, for vertical wells in homogeneous models, a linear relationship between the default (Peaceman) well index and the true (matched) well index is obtained, which considerably reduces the error of the Peaceman well index. For heterogeneous models with vertical wells, a multiplier remedies the error. Similar results are obtained for horizontal wells (both heterogeneous and homogeneous models).
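
    For reference, the default well index mentioned above is usually computed from Peaceman's formula. A minimal sketch follows, assuming an isotropic, square grid block (the general anisotropic form differs) and hypothetical grid and well parameters; the paper's matched well index would instead be fitted so that coarse-grid well rates reproduce the fine-grid reference.

```python
import math

def peaceman_well_index(k, h, dx, rw, skin=0.0):
    """Default (Peaceman) well index for a vertical well in an isotropic,
    square grid block: WI = 2*pi*k*h / (ln(r_o/r_w) + s), with the
    equivalent radius r_o ~ 0.2*dx (Peaceman's result for this case)."""
    r_o = 0.2 * dx
    return 2.0 * math.pi * k * h / (math.log(r_o / rw) + skin)

# Hypothetical consistent SI values: 100 mD permeability, 10 m block height,
# 50 m grid spacing, 0.1 m wellbore radius.
print(peaceman_well_index(k=100e-15, h=10.0, dx=50.0, rw=0.1))
```

    The linear relationship reported in the paper for homogeneous vertical wells then amounts to rescaling this default value to obtain the matched, coarse-scale well index.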

  1. Relaxation and coarsening of weakly-interacting breathers in a simplified DNLS chain

    Science.gov (United States)

    Iubini, Stefano; Politi, Antonio; Politi, Paolo

    2017-07-01

    The discrete nonlinear Schrödinger (DNLS) equation displays a parameter region characterized by the presence of localized excitations (breathers). While their formation is well understood and it is expected that the asymptotic configuration comprises a single breather on top of a background, it is not clear why the dynamics of a multi-breather configuration is essentially frozen. In order to investigate this question, we introduce simple stochastic models, characterized by suitable conservation laws. We focus on the role of the coupling strength between localized excitations and background. In the DNLS model, higher breathers interact more weakly, as a result of their faster rotation. In our stochastic models, the strength of the coupling is controlled directly by an amplitude-dependent parameter. In the case of a power-law decrease, the associated coarsening process undergoes a slowing down if the decay rate is larger than a critical value. In the case of an exponential decrease, a freezing effect is observed that is reminiscent of the scenario observed in the DNLS. This last regime arises spontaneously when direct energy diffusion between breathers and background is blocked below a certain threshold.

  2. Does Multiple Intelligence Improve Performance? Evidence from a ...

    African Journals Online (AJOL)

    This study reports the findings of a study that investigated the relationship between multiple intelligence (MI) and academic performance in higher education. It addresses one question: does MI improve academic performance? Taking the case of the finalist cohort of the university's Faculty of Education of the academic year ...

  3. Multiple histogram method and static Monte Carlo sampling

    NARCIS (Netherlands)

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From

  4. Multiple external hazards compound level 3 PSA methods research of nuclear power plant

    Science.gov (United States)

    Wang, Handing; Liang, Xiaoyu; Zhang, Xiaoming; Yang, Jianfeng; Liu, Weidong; Lei, Dina

    2017-01-01

    The 2011 Fukushima nuclear power plant severe accident was caused by both an earthquake and a tsunami and resulted in a large release of radioactive nuclides, which contaminated the surrounding environment. Although the probability of such an accident is extremely small, once it happens it is likely to release a large amount of radioactive material into the environment and cause radiation contamination. Therefore, studying accident consequences is important and essential for improving nuclear power plant design and management. Level 3 PSA methods for nuclear power plants can be used to analyze radiological consequences and to quantify the risk to public health around nuclear power plants. Based on studies of multiple external hazards compound level 3 PSA methods for nuclear power plants, and on a description of the multiple external hazards compound level 3 PSA technology roadmap and its important technical elements, and taking a coastal nuclear power plant as the reference site, we analyzed the off-site consequences of nuclear power plant severe accidents caused by multiple external hazards. Finally, we discuss probabilistic risk studies of off-site consequences and their applications under multiple external hazards compound conditions, and explain the feasibility and reasonableness of implementing emergency plans.

  5. Improving information retrieval with multiple health terminologies in a quality-controlled gateway.

    Science.gov (United States)

    Soualmia, Lina F; Sakji, Saoussen; Letord, Catherine; Rollin, Laetitia; Massari, Philippe; Darmoni, Stéfan J

    2013-01-01

    The Catalog and Index of French-language Health Internet resources (CISMeF) is a quality-controlled health gateway, primarily for Web resources in French (n=89,751). Recently, we achieved a major improvement in the structure of the catalogue by setting up multiple terminologies, based on twelve health terminologies available in French, to overcome the potential weaknesses of the MeSH thesaurus, which has been the main and pivotal terminology used for indexing and retrieval since 1995. The main aim of this study was to estimate the added value of exploiting several terminologies and their semantic relationships to improve Web resource indexing and retrieval in CISMeF, in order to provide additional health resources that meet users' expectations. Twelve terminologies were integrated into the CISMeF information system to set up multiple-terminology indexing and retrieval. The same set of thirty queries was run: (i) by exploiting the hierarchical structure of the MeSH, and (ii) by exploiting the additional twelve terminologies and their semantic links. The two search modes were evaluated and compared. The overall coverage of the multiple-terminologies search mode was improved compared with the coverage obtained using the MeSH alone (16,283 vs. 14,159 results; +15%). The additional findings were estimated at 56.6% relevant results, 24.7% intermediate results and 18.7% irrelevant results. The multiple-terminologies approach improved information retrieval. These results suggest that integrating additional health terminologies was able to improve recall. Since performing the study, 21 other terminologies have been added, which should enable us to carry out broader studies in multiple-terminologies information retrieval.

  6. A global calibration method for multiple vision sensors based on multiple targets

    International Nuclear Information System (INIS)

    Liu, Zhen; Zhang, Guangjun; Wei, Zhenzhong; Sun, Junhua

    2011-01-01

    The global calibration of multiple vision sensors (MVS) has been widely studied in the last two decades. In this paper, we present a global calibration method for MVS with non-overlapping fields of view (FOVs) using multiple targets (MT). The MT is constructed by fixing several targets, called sub-targets, together. The mutual coordinate transformations between sub-targets need not be known. The main procedure of the proposed method is as follows: one vision sensor is selected from the MVS to establish the global coordinate frame (GCF). The MT is placed in front of the vision sensors several (at least four) times. Using the constraint that the relative positions of all sub-targets are invariant, the transformation matrix from the coordinate frame of each vision sensor to the GCF can be solved. Both synthetic and real experiments are carried out and good results are obtained. The proposed method has been applied to several real measurement systems and shown to be both flexible and accurate. It can serve as an attractive alternative to existing global calibration methods.
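
    The core geometric step in this kind of calibration is estimating a rigid transformation between coordinate frames from corresponding 3D points. The abstract does not specify the solver used, so the sketch below shows a standard SVD-based (Kabsch) estimate as a stand-in for that building block; the point sets are synthetic.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t.
    P, Q: (3, N) arrays of corresponding points expressed in two frames."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Hypothetical check: recover a known rotation about z and a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = np.random.default_rng(0).standard_normal((3, 20))
Q = R_true @ P + np.array([[0.5], [1.0], [-2.0]])
R_est, t_est = rigid_transform(P, Q)
print(np.allclose(R_est, R_true), t_est.ravel())
```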

  7. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  8. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    Science.gov (United States)

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-03-27

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.

  9. The importance of neurophysiological-Bobath method in multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Adrian Miler

    2018-02-01

    Full Text Available Rehabilitation treatment in multiple sclerosis should be carried out continuously and can take place in hospital, ambulatory and community settings. In the traditional approach, it focuses on reducing the symptoms of the disease, such as paresis, spasticity, ataxia, pain, sensory disturbances, speech disorders, blurred vision, fatigue, neurogenic bladder dysfunction, and cognitive impairment. In kinesiotherapy for people with paresis, the most commonly used are the neurophysiological methods (such as the Bobath method). Improvement can be achieved by developing the ability to maintain a correct posture in various positions (so-called postural alignment), using patterns based on corrective and equilibrium responses. During the therapy, various techniques are used to inhibit pathological motor patterns and to stimulate correct reactions. The creators of the method believe that each movement pattern has its own postural system, from which it can be initiated, carried out and effectively controlled. Correct movement cannot take place in an incorrect body position. The physiotherapist discusses with the patient how to perform individual movement patterns, which protects the patient against spontaneous pathological compensation. The aim of this work is to determine the meaning and application of the Bobath method in the therapy of people with MS.

  10. An empirical correction for moderate multiple scattering in super-heterodyne light scattering.

    Science.gov (United States)

    Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas

    2017-05-28

    Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.

  11. Modelling of pore coarsening in the high burn-up structure of UO₂ fuel

    Energy Technology Data Exchange (ETDEWEB)

    Veshchunov, M.S.; Tarasov, V.I., E-mail: tarasov@ibrae.ac.ru

    2017-05-15

    The model for coalescence of randomly distributed immobile pores owing to their growth and impingement, applied by the authors earlier to the porosity evolution in the high burn-up structure (HBS) at the UO₂ fuel pellet periphery (rim zone), was further developed and validated. Predictions of the original model, which considered only binary impingements of growing immobile pores, qualitatively describe the decrease of the pore number density with increasing fractional porosity, but notably underestimate the coalescence rate at the high burn-ups attained in the outermost region of the rim zone. In order to overcome this discrepancy, the next approximation of the model, taking into consideration triple impingements of growing pores, was developed. The advanced model provides reasonable agreement with experimental data, thus demonstrating the validity of the proposed pore coarsening mechanism in the HBS.

  12. Decision Making in Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods Volume 2

    CERN Document Server

    Rao, R Venkata

    2013-01-01

    Decision Making in Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods presents the concepts and details of applications of MADM methods. A range of methods are covered, including the Analytic Hierarchy Process (AHP), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), VIšekriterijumsko KOmpromisno Rangiranje (VIKOR), Data Envelopment Analysis (DEA), Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE), ELimination Et Choix Traduisant la Realité (ELECTRE), COmplex PRoportional ASsessment (COPRAS), Grey Relational Analysis (GRA), UTility Additive (UTA), and Ordered Weighted Averaging (OWA). The existing MADM methods are improved upon and three novel multiple attribute decision making methods for solving the decision making problems of the manufacturing environment are proposed. The concept of integrated weights is introduced in the proposed subjective and objective integrated weights (SOIW) method and the weighted Euclidean distance ba...
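
    As an illustration of one of the methods listed above, here is a minimal, generic TOPSIS sketch (it is not taken from the book; the decision matrix, weights and criteria below are hypothetical):

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    decision_matrix: (alternatives x criteria) raw scores
    weights: criteria weights summing to 1
    benefit: True for criteria to maximize, False to minimize
    Returns closeness coefficients in [0, 1]; larger is better.
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)

    # Vector normalization per criterion, then weighting.
    V = w * X / np.linalg.norm(X, axis=0)

    # Ideal and anti-ideal solutions per criterion.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical example: 3 alternatives, 3 criteria (the first is a cost).
scores = [[250, 7, 0.8], [200, 6, 0.9], [300, 9, 0.7]]
print(topsis(scores, weights=[0.4, 0.4, 0.2], benefit=[False, True, True]))
```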

  13. Optimization of breeding methods when introducing multiple ...

    African Journals Online (AJOL)

    Optimization of breeding methods when introducing multiple resistance genes from American to Chinese wheat. JN Qi, X Zhang, C Yin, H Li, F Lin. Abstract. Stripe rust is one of the most destructive diseases of wheat worldwide. Growing resistant cultivars with resistance genes is the most effective method to control this ...

  14. CSA: An efficient algorithm to improve circular DNA multiple alignment

    Directory of Open Access Journals (Sweden)

    Pereira Luísa

    2009-07-01

    Full Text Available Abstract Background The comparison of homologous sequences from different species is an essential approach to reconstructing the evolutionary history of species and of the genes they harbour in their genomes. Several complete mitochondrial and nuclear genomes are now available, increasing the importance of using multiple sequence alignment algorithms in comparative genomics. MtDNA has long been used in phylogenetic analysis, and errors in the alignments can lead to errors in the interpretation of evolutionary information. Although a large number of multiple sequence alignment algorithms have been proposed to date, they all deal with linear DNA and cannot directly handle circular DNA. Researchers interested in aligning circular DNA sequences must first rotate them to the "right" place using an essentially manual process, before they can use multiple sequence alignment tools. Results In this paper we propose an efficient algorithm that identifies the most interesting region at which to cut circular genomes in order to improve phylogenetic analysis when using standard multiple sequence alignment algorithms. This algorithm identifies the largest chain of non-repeated longest subsequences common to a set of circular mitochondrial DNA sequences. All the sequences are then rotated and made linear for multiple alignment purposes. To evaluate the effectiveness of this new tool, three different sets of mitochondrial DNA sequences were considered. Other tests considering randomly rotated sequences were also performed. The software package Arlequin was used to evaluate the standard genetic measures of the alignments obtained with and without the use of the CSA algorithm with two well-known multiple alignment algorithms, the CLUSTALW and the MAVID tools, and also the visualization tool SinicView. Conclusion The results show that a circularization and rotation pre-processing step significantly improves the efficiency of publicly available multiple sequence alignment
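
    The pre-processing idea, rotating every circular sequence so that a shared anchor region becomes the linear start point before running a standard aligner, can be sketched as follows. This is a deliberately simplified stand-in for CSA's actual anchor selection (which uses chains of non-repeated common subsequences); the toy sequences and anchor are hypothetical.

```python
def rotate_to_anchor(seq, anchor):
    """Rotate a circular sequence so that `anchor` starts at position 0."""
    doubled = seq + seq                      # circular view of the sequence
    i = doubled.find(anchor)
    if i < 0 or i >= len(seq):
        raise ValueError("anchor not found on the circle")
    return seq[i:] + seq[:i]

def linearize(circular_seqs, anchor):
    """Make circular sequences linear at a common cut point, ready for a
    standard multiple sequence alignment tool (e.g. CLUSTALW or MAVID)."""
    return [rotate_to_anchor(s, anchor) for s in circular_seqs]

# Toy mitochondrial-like sequences that are rotations of one another.
seqs = ["GGTACCATG", "ATGGGTACC", "TACCATGGG"]
print(linearize(seqs, anchor="ATG"))   # all rotate to the same linear form
```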

  15. Robust design method and thermostatic experiment for multiple piezoelectric vibration absorber system

    International Nuclear Information System (INIS)

    Nambu, Yohsuke; Takashima, Toshihide; Inagaki, Akiya

    2015-01-01

    This paper examines the effects of connecting multiplexing shunt circuits composed of inductors and resistors to piezoelectric transducers so as to improve the robustness of a piezoelectric vibration absorber (PVA). PVAs are well known to be effective at suppressing the vibration of an adaptive structure; their weakness is low robustness to changes in the dynamic parameters of the system, including the main structure and the absorber. In the application to space structures, the temperature-dependency of capacitance of piezoelectric ceramics is the factor that causes performance reduction. To improve robustness to the temperature-dependency of the capacitance, this paper proposes a multiple-PVA system that is composed of distributed piezoelectric transducers and several shunt circuits. The optimization problems that determine both the frequencies and the damping ratios of the PVAs are multi-objective problems, which are solved using a real-coded genetic algorithm in this paper. A clamped aluminum beam with four groups of piezoelectric ceramics attached was considered in simulations and experiments. Numerical simulations revealed that the PVA systems designed using the proposed method had tolerance to changes in the capacitances. Furthermore, experiments using a thermostatic bath were conducted to reveal the effectiveness and robustness of the PVA systems. The maximum peaks of the transfer functions of the beam with the open circuit, the single-PVA system, the double-PVA system, and the quadruple-PVA system at 20 °C were 14.3 dB, −6.91 dB, −7.47 dB, and −8.51 dB, respectively. The experimental results also showed that the multiple-PVA system is more robust than a single PVA in a variable temperature environment from −10 °C to 50 °C. In conclusion, the use of multiple PVAs results in an effective, robust vibration control method for adaptive structures. (paper)

  16. Improved nonparametric inference for multiple correlated periodic sequences

    KAUST Repository

    Sun, Ying; Hart, Jeffrey D.; Genton, Marc G.

    2013-01-01

    cross-validation method to the temperature data obtained from multiple ice cores, investigating the periodicity of the El Niño effect. Our methodology is also illustrated by estimating patients' cardiac cycle from different physiological signals

  17. Structure and coarsening at the surface of a dry three-dimensional aqueous foam.

    Science.gov (United States)

    Roth, A E; Chen, B G; Durian, D J

    2013-12-01

    We utilize total internal reflection to isolate the two-dimensional surface foam formed at the planar boundary of a three-dimensional sample. The resulting images of surface Plateau borders are consistent with Plateau's laws for a truly two-dimensional foam. Samples are allowed to coarsen into a self-similar scaling state where statistical distributions appear independent of time, except for an overall scale factor. There we find that statistical measures of side number distributions, size-topology correlations, and bubble shapes are all very similar to those for two-dimensional foams. However, the size number distribution is slightly broader, and the shapes are slightly more elongated. A more obvious difference is that T2 processes now include the creation of surface bubbles, due to rearrangement in the bulk, and von Neumann's law is dramatically violated for individual bubbles. Nevertheless, our most striking finding is that von Neumann's law appears to hold on average: the average rate of area change for surface bubbles appears to be proportional to the number of sides minus six, but with individual bubbles showing a wide distribution of deviations from this average behavior.
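
    For reference, the classical two-dimensional von Neumann law that the surface bubbles obey on average can be written as (a textbook statement; the prefactor K is an effective coarsening rate set by gas diffusivity and film tension, and is not given in the abstract):

    $$\frac{dA_n}{dt} = K\,(n - 6),$$

    where A_n is the area of a surface bubble with n sides; the wide scatter of individual bubbles about this mean behaviour is the deviation described above.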

  18. Short Communication on “Coarsening of Y-rich oxide particles in 9%Cr-ODS Eurofer steel annealed at 1350 °C”

    Energy Technology Data Exchange (ETDEWEB)

    Sandim, M.J.R.; Souza Filho, I.R.; Bredda, E.H. [Lorena School of Engineering, University of Sao Paulo, 12602-810, Lorena (Brazil); Kostka, A.; Raabe, D. [Max-Planck-Institut für Eisenforschung, D-40237, Düsseldorf (Germany); Sandim, H.R.Z., E-mail: hsandim@demar.eel.usp.br [Lorena School of Engineering, University of Sao Paulo, 12602-810, Lorena (Brazil)

    2017-02-15

    Oxide-dispersion strengthened (ODS) Eurofer steel is targeted for structural applications in future fusion nuclear reactors. Samples were cold rolled down to 80% reduction in thickness and annealed at 1350 °C up to 8 h. The microstructural characterization was performed using Vickers microhardness testing, electron backscatter diffraction, scanning and scanning transmission electron microscopies. Experimental results provide evidence of coarsening of the Y-rich oxide particles in ODS-Eurofer steel annealed at 1350 °C within delta ferrite phase field.

  19. The impact of secure messaging on workflow in primary care: Results of a multiple-case, multiple-method study.

    Science.gov (United States)

    Hoonakker, Peter L T; Carayon, Pascale; Cartmill, Randi S

    2017-04-01

    Secure messaging is a relatively new addition to health information technology (IT). Several studies have examined the impact of secure messaging on (clinical) outcomes, but very few have examined its impact on workflow in primary care clinics. In this study we examined the impact of secure messaging on the workflow of clinicians, staff and patients. We used a multiple case study design with multiple data collection methods (observation, interviews and survey). Results show that secure messaging has the potential to improve communication and information flow and the organization of work in primary care clinics, partly due to the possibility of asynchronous communication. However, secure messaging can also have a negative effect on communication and increase workload, especially if patients send messages that are not appropriate for the secure messaging medium (for example, messages that are too long, complex, ambiguous, or inappropriate). Results show that clinicians are ambivalent about secure messaging. Secure messaging can add to their workload, especially if there is a high message volume, and currently they are not compensated for these activities. Staff are, especially compared to clinicians, relatively positive about secure messaging, and patients are overall very satisfied with secure messaging. Finally, clinicians, staff and patients think that secure messaging can have a positive effect on quality of care and patient safety. Secure messaging is a tool that has the potential to improve communication and information flow. However, the potential of secure messaging to improve workflow depends on the way it is implemented and used. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Research on neutron source multiplication method in nuclear critical safety

    International Nuclear Information System (INIS)

    Zhu Qingfu; Shi Yongqian; Hu Dingsheng

    2005-01-01

    This paper concerns research on the neutron source multiplication method in nuclear criticality safety. Based on the neutron diffusion equation with an external neutron source, the effective sub-critical multiplication factor k_s is deduced; k_s differs from the effective neutron multiplication factor k_eff in the case of a sub-critical system with an external neutron source. A verification experiment on a sub-critical system indicates that the parameter measured with the neutron source multiplication method is k_s, and that k_s depends on the position of the external neutron source in the sub-critical system and on the external neutron source spectrum. The relation between k_s and k_eff and their effect on nuclear criticality safety are discussed. (author)
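
    The measurement principle can be summarized by the standard steady-state source multiplication relation (a textbook form, not reproduced from the paper): for a sub-critical system driven by an external source of strength S, the detected count rate C scales as

    $$C \;\propto\; \frac{S}{1 - k_s}, \qquad M = \frac{1}{1 - k_s},$$

    so the measured multiplication M yields k_s directly; relating k_s to k_eff then requires further correction factors that depend on the source position and spectrum, as discussed above.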

  1. Optimization of Inventories for Multiple Companies by Fuzzy Control Method

    OpenAIRE

    Kawase, Koichi; Konishi, Masami; Imai, Jun

    2008-01-01

    In this research, fuzzy control theory is applied to the inventory control of the supply chain between multiple companies. The proposed control method deals with the amount of inventories in the supply chain between multiple companies. Referring to past demand and tardiness, the inventory amounts of raw materials are determined by fuzzy inference. An appropriate inventory control becomes possible by optimizing the fuzzy control gains using the SA method. The variation of ...

  2. Unplanned Complex Suicide-A Consideration of Multiple Methods.

    Science.gov (United States)

    Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish

    2018-05-01

    Detailed death investigations are mandatory to find out the exact cause and manner in non-natural deaths. In this reference, use of multiple methods in suicide poses a challenge for the investigators especially when the choice of methods to cause death is unplanned. There is an increased likelihood that doubts of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported where the victim resorted to multiple methods to end his life, and what appeared to be an unplanned variant based on the death scene investigations. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.

  3. Coarsening of (Fe, Cr)23C6 carbide phase on the tempering of 14Kh17N2 chromium-nickel steel

    International Nuclear Information System (INIS)

    Psarev, V.I.

    2002-01-01

    The paper presents the results of a computer analysis of the size distribution of (Fe, Cr)23C6 microparticles formed on tempering of 14Kh17N2 steel at 700 °C. The data were obtained at the maximum useful magnification of a light microscope. The resulting distribution density curves are characterized by a more reliable behaviour near the largest permissible sizes of the dispersed particles. Applying general distribution rules and a previously elaborated procedure for identifying experimental histograms with theoretical distributions makes it possible to derive valuable information on the dynamics of coarsening of the dispersed phase in this case as well [ru]

  4. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Science.gov (United States)

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated that appreciable benefits over relatively few gears (e.g., to four) used in optimal seasons were not present. Specifically, over 90 % of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.

  5. EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.

    Science.gov (United States)

    Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin

    2018-04-24

    Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of the existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression-weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
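
    The expression-weighted cosine idea can be illustrated with a small sketch: a generic weighted cosine similarity between a disease signature and a drug-induced expression profile. The exact weighting scheme used by EWCos is not specified in the abstract, so the weights and vectors below are hypothetical.

```python
import numpy as np

def weighted_cosine(disease_sig, drug_profile, weights):
    """Weighted cosine similarity between a disease gene signature and a
    drug-induced expression profile.  Down-weighting uninformative genes
    (small weights) reduces their influence on the score."""
    w = np.asarray(weights, dtype=float)
    x = w * np.asarray(disease_sig, dtype=float)
    y = w * np.asarray(drug_profile, dtype=float)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Hypothetical signed log-fold-change vectors over 5 genes.
disease = [1.2, -0.8, 0.1, 2.0, -1.5]
drug    = [-1.0, 0.9, 0.0, -1.8, 1.2]   # a reversal of the signature scores near -1
weights = [1.0, 1.0, 0.2, 1.0, 1.0]      # the third gene is treated as uninformative
print(weighted_cosine(disease, drug, weights))
```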

  6. Study on validation method for femur finite element model under multiple loading conditions

    Science.gov (United States)

    Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu

    2018-03-01

    Acquisition of accurate and reliable constitutive parameters for biological tissue materials is beneficial for improving the biological fidelity of a Finite Element (FE) model and predicting impact damage more effectively. In this paper, a femur FE model was established under multiple loading conditions with diverse impact positions. Then, based on a sequential response surface method and genetic algorithms, the identification of the material parameters was transformed into a multi-response optimization problem. Finally, the simulation results successfully coincided with the force-displacement curves obtained from numerous experiments. Thus, the computational accuracy and efficiency of the entire inverse calculation process were enhanced. This method effectively reduces the computation time of the inverse identification of material parameters. Meanwhile, the material parameters obtained by the proposed method achieve higher accuracy.

  7. Aerobic exercise increases hippocampal volume and improves memory in multiple sclerosis: preliminary findings.

    Science.gov (United States)

    Leavitt, V M; Cirnigliaro, C; Cohen, A; Farag, A; Brooks, M; Wecht, J M; Wylie, G R; Chiaravalloti, N D; DeLuca, J; Sumowski, J F

    2014-01-01

    Multiple sclerosis leads to prominent hippocampal atrophy, which is linked to memory deficits. Indeed, 50% of multiple sclerosis patients suffer memory impairment, with negative consequences for quality of life. There are currently no effective memory treatments for multiple sclerosis either pharmacological or behavioral. Aerobic exercise improves memory and promotes hippocampal neurogenesis in nonhuman animals. Here, we investigate the benefits of aerobic exercise in memory-impaired multiple sclerosis patients. Pilot data were collected from two ambulatory, memory-impaired multiple sclerosis participants randomized to non-aerobic (stretching) and aerobic (stationary cycling) conditions. The following baseline/follow-up measurements were taken: high-resolution MRI (neuroanatomical volumes), fMRI (functional connectivity), and memory assessment. Intervention was 30-minute sessions 3 times per week for 3 months. Aerobic exercise resulted in 16.5% increase in hippocampal volume and 53.7% increase in memory, as well as increased hippocampal resting-state functional connectivity. Improvements were specific, with no comparable changes in overall cerebral gray matter (+2.4%), non-hippocampal deep gray matter structures (thalamus, caudate: -4.0%), or in non-memory cognitive functioning (executive functions, processing speed, working memory: changes ranged from -11% to +4%). Non-aerobic exercise resulted in relatively no change in hippocampal volume (2.8%) or memory (0.0%), and no changes in hippocampal functional connectivity. This is the first evidence for aerobic exercise to increase hippocampal volume and connectivity and improve memory in multiple sclerosis. Aerobic exercise represents a cost-effective, widely available, natural, and self-administered treatment with no adverse side effects that may be the first effective memory treatment for multiple sclerosis patients.

  8. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Science.gov (United States)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements allow robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS) as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time required to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays, resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  9. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    International Nuclear Information System (INIS)

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor k_det, is introduced for the neutron source multiplication method (NSM). Using k_det, a search strategy for an appropriate detector position for the NSM is also proposed. The NSM is one of the practical subcriticality measurement techniques, i.e., the NSM does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM is based on steady-state analysis, so this technique is very suitable for quasi-real-time measurement. It is noted that the correction factors play important roles in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by setting the neutron detector at an appropriate position.

  10. Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Kim, Soo Mee; Lee, Dong Soo; Hong, Jong Hong; Sim, Kwang Souk; Rhee, June Tak

    2008-01-01

    To establish methods for sinogram formation and correction in order to appropriately apply the filtered backprojection (FBP) reconstruction algorithm to data acquired using a PET scanner with multiple scintillation crystal layers. Formats for raw PET data storage and methods for converting list-mode data into histograms and sinograms were optimized. To solve the various problems that occurred while the raw histogram was converted into a sinogram, an optimal sampling strategy and a sampling efficiency correction method were investigated. Gap compensation methods unique to this system were also investigated. All the sinogram data were reconstructed using a 2D filtered backprojection algorithm and compared to estimate the improvements made by the correction algorithms. The optimal radial sampling interval and number of angular samples, in terms of the sampling theorem and the sampling efficiency correction algorithm, were pitch/2 and 120, respectively. By applying the sampling efficiency correction and gap compensation, artifacts and background noise in the reconstructed image could be reduced. A method for converting the histogram into a sinogram was investigated for the FBP reconstruction of data acquired using multiple scintillation crystal layers. This method will be useful for fast 2D reconstruction of multiple crystal layer PET data.

  11. Multiple independent identification decisions: a method of calibrating eyewitness identifications.

    Science.gov (United States)

    Pryke, Sean; Lindsay, R C L; Dysart, Jennifer E; Dupuis, Paul

    2004-02-01

    Two experiments (N = 147 and N = 90) explored the use of multiple independent lineups to identify a target seen live. In Experiment 1, simultaneous face, body, and sequential voice lineups were used. In Experiment 2, sequential face, body, voice, and clothing lineups were used. Both studies demonstrated that multiple identifications (by the same witness) from independent lineups of different features are highly diagnostic of suspect guilt (G. L. Wells & R. C. L. Lindsay, 1980). The number of suspect and foil selections from multiple independent lineups provides a powerful method of calibrating the accuracy of eyewitness identification. Implications for use of current methods are discussed. ((c) 2004 APA, all rights reserved)

  12. A linear multiple balance method for discrete ordinates neutron transport equations

    International Nuclear Information System (INIS)

    Park, Chang Je; Cho, Nam Zin

    2000-01-01

    A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with a quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the center angular flux with the average angular flux by Simpson's rule. From the analysis of the spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ⁴), whereas that of diamond differencing is O(Δ²). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results tested so far on slab-geometry discrete ordinates transport problems show that the linear multiple balance solution method is effective and sufficiently efficient.
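
    As a brief illustration of the balance relation implied by Simpson's rule (our notation, assuming a quadratic flux shape over the cell; the paper's full set of four balance equations is not reproduced here), the cell-average angular flux is related to the edge and centre values by

    $$\bar{\psi} \;\approx\; \frac{1}{6}\left(\psi_{\mathrm{in}} + 4\,\psi_{\mathrm{c}} + \psi_{\mathrm{out}}\right),$$

    which is exact for a quadratic in-cell distribution and is what couples the centre flux to the average flux in the multiple balance equations.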

  13. An improved multiple flame photometric detector for gas chromatography.

    Science.gov (United States)

    Clark, Adrian G; Thurbide, Kevin B

    2015-11-20

    An improved multiple flame photometric detector (mFPD) is introduced, based upon interconnecting fluidic channels within a planar stainless steel (SS) plate. Relative to the previous quartz tube mFPD prototype, the SS mFPD provides a 50% reduction in background emission levels, an orthogonal analytical flame, and easier, more sensitive operation. As a result, sulfur response in the SS mFPD spans 4 orders of magnitude, yields a minimum detectable limit near 9×10⁻¹² g S/s, and has a selectivity approaching 10⁴ over carbon. The device also exhibits exceptionally large resistance to hydrocarbon response quenching. Additionally, the SS mFPD uniquely allows analyte emission monitoring in the multiple worker flames for the first time. The findings suggest that this mode can potentially further improve upon the analytical flame response of sulfur (both linear HSO and quadratic S2) and also phosphorus. Of note, the latter is nearly 20-fold stronger in S/N in the collective worker flame response and provides 6 orders of linearity with a detection limit of about 2.0×10⁻¹³ g P/s. Overall, the results indicate that this new SS design notably improves the analytical performance of the mFPD and can provide a versatile and beneficial monitoring tool for gas chromatography. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. A feature point identification method for positron emission particle tracking with multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.
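
    The reported scaling of the detection error with the number of lines of response can be written compactly (our notation):

    $$\sigma_{\mathrm{position}} \;\propto\; \frac{1}{\sqrt{N_{\mathrm{LOR}}}},$$

    so halving the average position error requires roughly four times as many LORs per localization, with performance degrading further as particles approach one another.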

  15. Galerkin projection methods for solving multiple related linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chan, T.F.; Ng, M.; Wan, W.L.

    1996-12-31

    We consider using Galerkin projection methods for solving multiple related linear systems A^(i) x^(i) = b^(i) for 1 ≤ i ≤ s, where A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A^(i) x^(i) = b^(i), where A^(i) are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
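
    A minimal numerical sketch of the single-seed idea for the common-matrix case (our own illustration, not the authors' code; the matrix and right-hand sides are synthetic): solve the seed system by CG while saving its A-conjugate direction vectors; the Galerkin condition then reduces to a diagonal projected system, giving a cheap approximate solution (or starting guess) for each related right-hand side.

```python
import numpy as np

def cg_with_directions(A, b, tol=1e-10, maxiter=200):
    """Conjugate gradients on the SPD seed system, returning the solution and
    the A-conjugate search directions that span the generated Krylov space."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    directions = []
    for _ in range(maxiter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        directions.append(p.copy())
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, directions

def galerkin_project(A, b, directions):
    """Galerkin projection of A x = b onto span(directions).  Because the CG
    directions are (nearly) A-conjugate, the projected system is diagonal."""
    x = np.zeros_like(b)
    for p in directions:
        x += (p @ b) / (p @ (A @ p)) * p
    return x

# Hypothetical SPD test matrix and two nearby right-hand sides.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b_seed = rng.standard_normal(50)
b_other = b_seed + 0.01 * rng.standard_normal(50)

x_seed, P = cg_with_directions(A, b_seed)
x0 = galerkin_project(A, b_other, P)        # cheap approximation for the second system
print(np.linalg.norm(b_other - A @ x0))     # small residual; CG can finish the job
```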

  16. A multiple regression method for genomewide association studies ...

    Indian Academy of Sciences (India)

    Bujun Mei

    2018-06-07

    Jun 7, 2018 ... Similar to the typical genomewide association tests using LD ... new approach performed validly when the multiple regression based on linkage method was employed. .... the model, two groups of scenarios were simulated.

  17. Symbolic interactionism as a theoretical perspective for multiple method research.

    Science.gov (United States)

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  18. Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods

    DEFF Research Database (Denmark)

    Capolei, Andrea; Jørgensen, John Bagterp

    2012-01-01

    … algorithm. As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm.

  19. Multiple Site-Directed and Saturation Mutagenesis by the Patch Cloning Method.

    Science.gov (United States)

    Taniguchi, Naohiro; Murakami, Hiroshi

    2017-01-01

    Constructing protein-coding genes with desired mutations is a basic step in protein engineering. Herein, we describe a multiple site-directed and saturation mutagenesis method, termed MUPAC. This method has been used to introduce multiple site-directed mutations into the green fluorescent protein gene and into the Moloney murine leukemia virus reverse transcriptase gene. Moreover, this method was also successfully used to introduce randomized codons at five desired positions in the green fluorescent protein gene, and for simple DNA assembly for cloning.

  20. Evaluating clean energy alternatives for Jiangsu, China: An improved multi-criteria decision making method

    International Nuclear Information System (INIS)

    Zhang, Ling; Zhou, Peng; Newton, Sidney; Fang, Jian-xin; Zhou, De-qun; Zhang, Lu-ping

    2015-01-01

    Promoting the utilization of clean energy has been identified as one potential solution for addressing environmental pollution and achieving sustainable development in many countries around the world. Evaluating clean energy alternatives requires balancing multiple conflicting criteria, including technology, environment, economy and society, which are incommensurate and interdependent. Traditional MCDM (multi-criteria decision making) methods, such as the weighted average method, often fail to aggregate such criteria consistently. In this paper, an improved MCDM method based on fuzzy measures and the fuzzy integral is developed and applied to evaluate four primary clean energy options for Jiangsu Province, China. The results confirm that the preferred clean energy option for Jiangsu is solar photovoltaic, followed by wind, biomass and finally nuclear. A sensitivity analysis is also conducted to evaluate the values of clean energy resources for Jiangsu. The ordered weighted average method is also applied in our empirical study for comparison with the proposed method. The results show that the improved MCDM method provides better discrimination between the alternative clean energy options. - Highlights: • Interactions among the evaluation criteria for clean energy resources are taken into account. • An improved multi-criteria decision making (MCDM) method is proposed based on the entropy weight method, fuzzy measures and the fuzzy integral. • The clean energy resources of Jiangsu are evaluated with the improved MCDM method, and their ranks are identified.

  1. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

    Full Text Available A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JᵀJ) has been partitioned into several sub-block matrices, and the highest eigenvalue of each sub-block matrix has been chosen as the regularization parameter for the nodes contained in that sub-block. Simulated boundary data are generated for a circular domain with a circular inhomogeneity, and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images are reconstructed with the BMMR technique and the results are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and the inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170. J Electr Bioimp, vol. 2, pp. 33-47, 2011
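
    A compact sketch of the block-matrix regularization idea, as we read it from the abstract (it omits the forward solver and the full MoBIIR iteration, and the problem sizes are hypothetical): partition JᵀJ into diagonal sub-blocks, use each sub-block's largest eigenvalue as the regularization parameter for the nodes in that block, and take one regularized Gauss-Newton step.

```python
import numpy as np

def bmmr_step(J, residual, block_size):
    """One regularized update d_sigma solving (J^T J + R) d = J^T residual,
    where R is diagonal and, within each block of nodes, equals the largest
    eigenvalue of the corresponding diagonal sub-block of J^T J."""
    JtJ = J.T @ J
    n = JtJ.shape[0]
    reg = np.zeros(n)
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        block = JtJ[start:stop, start:stop]
        reg[start:stop] = np.linalg.eigvalsh(block)[-1]   # highest eigenvalue of the sub-block
    return np.linalg.solve(JtJ + np.diag(reg), J.T @ residual)

# Hypothetical Jacobian (measurements x nodes) and data mismatch.
rng = np.random.default_rng(1)
J = rng.standard_normal((208, 64))      # e.g. a 16-electrode EIT frame, 64 mesh nodes
residual = rng.standard_normal(208)
print(bmmr_step(J, residual, block_size=16).shape)
```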

  2. Multivariate analysis method for energy calibration and improved mass assignment in recoil spectrometry

    International Nuclear Information System (INIS)

    El Bouanani, Mohamed; Hult, Mikael; Persson, Leif; Swietlicki, Erik; Andersson, Margaretha; Oestling, Mikael; Lundberg, Nils; Zaring, Carina; Cohen, D.D.; Dytlewski, Nick; Johnston, P.N.; Walker, S.R.; Bubb, I.F.; Whitlow, H.J.

    1994-01-01

    Heavy ion recoil spectrometry is rapidly becoming a well established analysis method, but the associated data analysis processing is still not well developed. The pronounced nonlinear response of silicon detectors for heavy ions leads to serious limitation and complication in mass gating, which is the principal factor in obtaining energy spectra with minimal cross talk between elements. To overcome the above limitation, a simple empirical formula with an associated multiple regression method is proposed for the absolute energy calibration of the time of flight-energy dispersive detector telescope used in recoil spectrometry. A radical improvement in mass assignment was realized, which allows a more accurate and improved depth profiling with the important feature of making the data processing much easier. ((orig.))

  3. A crack growth evaluation method for interacting multiple cracks

    International Nuclear Information System (INIS)

    Kamaya, Masayuki

    2003-01-01

    When stress corrosion cracking or corrosion fatigue occurs, multiple cracks are frequently initiated in the same area. According to Section XI of the ASME Boiler and Pressure Vessel Code, multiple cracks are considered as a single combined crack in crack growth analysis if the specified conditions are satisfied. In crack growth processes, however, no prescription for the interference between multiple cracks is given in this code. The JSME Post-Construction Code, issued in May 2000, prescribes the conditions of crack coalescence in the crack growth process. This study aimed to extend this prescription to more general cases. A simulation model was applied to simulate the crack growth process, taking into account the interference between two cracks. This model made it possible to analyze multiple crack growth behaviors for many cases (e.g. different relative positions and lengths) that could not be studied by experiment alone. Based on these analyses, a new crack growth analysis method is suggested for taking into account the interference between multiple cracks. (author)

  4. Upscaling permeability for three-dimensional fractured porous rocks with the multiple boundary method

    Science.gov (United States)

    Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas

    2018-02-01

    Upscaling the permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and best fit the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple-scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept, which makes it attractive for incorporation into existing flow-based upscaling procedures and helps reduce the uncertainty of groundwater models.

  5. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    Science.gov (United States)

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.

  6. An improved UHPLC-UV method for separation and quantification of carotenoids in vegetable crops.

    Science.gov (United States)

    Maurer, Megan M; Mein, Jonathan R; Chaudhuri, Swapan K; Constant, Howard L

    2014-12-15

    Carotenoid identification and quantitation are critical for the development of plant varieties with improved nutrition. Industrial analysis of carotenoids is typically carried out on multiple crops with potentially thousands of samples per crop, placing critical demands on the speed and broad utility of the analytical methods. Current chromatographic methods for carotenoid analysis have had limited industrial application due to their low throughput, requiring up to 60 min for complete separation of all compounds. We have developed an improved UHPLC-UV method that resolves all major carotenoids found in broccoli (Brassica oleracea L. var. italica), carrot (Daucus carota), corn (Zea mays), and tomato (Solanum lycopersicum). The chromatographic method is completed in 13.5 min, allowing for the resolution of the 11 carotenoids of interest, including the structural isomers lutein/zeaxanthin and α-/β-carotene. Additional minor carotenoids have also been separated and identified with this method, demonstrating its utility across major commercial food crops. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Experimental demonstration of multiple-inputs multiple-outputs OFDM/OQAM visible light communications

    Science.gov (United States)

    Lin, Bangjiang; Tang, Xuan; Ghassemlooy, Zabih; Lin, Chun; Zhang, Min

    2017-10-01

    We experimentally demonstrate a 2×2 optical multiple-input multiple-output (MIMO) visible light communication system based on a modified orthogonal frequency-division multiplexing/offset quadrature amplitude modulation scheme. Adjacent subcarrier frequency-domain averaging (ASFA) with full-loaded (FL) and half-loaded (HL) preamble structures is proposed for demultiplexing and for mitigating the intrinsic imaginary interference (IMI) effect. Compared with the conventional channel estimation (CE) method, ASFA offers improved transmission performance. With the FL method, we obtain more accurate MIMO CE and better mitigation of the IMI effect and the optical noise than with the HL method.

  8. Coarsening and pattern formation during true morphological phase separation in unstable thin films under gravity

    Science.gov (United States)

    Kumar, Avanish; Narayanam, Chaitanya; Khanna, Rajesh; Puri, Sanjay

    2017-12-01

    We address in detail the problem of true morphological phase separation (MPS) in three-dimensional or (2+1)-dimensional unstable thin liquid films (>100 nm) under the influence of gravity. The free-energy functionals of these films are asymmetric and show two points of common tangency, which facilitates the formation of two equilibrium phases. Three distinct patterns formed by the relative preponderance of these phases are clearly identified in "true MPS". The asymmetry induces two different pathways of pattern formation, viz., a defect pathway and a direct pathway for true MPS. The pattern formation and phase-ordering dynamics have been studied using statistical measures such as the structure factor, the correlation function, and growth laws. In the late stage of coarsening, the system reaches a scaling regime for both pathways, and the characteristic domain size follows the Lifshitz-Slyozov growth law [L(t) ~ t^(1/3)]. However, for the defect pathway, there is a crossover of domain growth behavior from L(t) ~ t^(1/4) to t^(1/3) in the dynamical scaling regime. We also underline the analogies and differences behind the mechanisms of MPS and true MPS in thin liquid films and generic spinodal phase separation in binary mixtures.

  9. Comparison between Two Assessment Methods; Modified Essay Questions and Multiple Choice Questions

    Directory of Open Access Journals (Sweden)

    Assadi S.N.* MD

    2015-09-01

    Full Text Available Aims: Using the best assessment methods is an important factor in the educational development of health students. Modified essay questions and multiple choice questions are two prevalent methods of assessing students. The aim of this study was to compare modified essay questions and multiple choice questions in occupational health engineering and work laws courses. Materials & Methods: This semi-experimental study was performed during 2013 to 2014 on occupational health students of Mashhad University of Medical Sciences. The class of the occupational health and work laws course in 2013 was considered as group A and the class of 2014 as group B. Each group had 50 students. The group A students were assessed by the modified essay questions method and the group B students by the multiple choice questions method. Data were analyzed in SPSS 16 software by paired T test and odds ratio. Findings: The mean grade of the occupational health and work laws course was 18.68±0.91 in group A (modified essay questions) and 18.78±0.86 in group B (multiple choice questions), which was not significantly different (t=-0.41; p=0.684). The mean grade of the chemical chapter (p<0.001) in occupational health engineering and of the harmful work law (p<0.001) and other (p=0.015) chapters in work laws were significantly different between the two groups. Conclusion: Modified essay questions and multiple choice questions have nearly the same value for assessing students in occupational health engineering and work laws courses.

  10. Multiple-Fault Diagnosis Method Based on Multiscale Feature Extraction and MSVM_PPA

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2018-01-01

    Full Text Available Identification of rolling bearing fault patterns, especially for compound faults, has attracted notable attention and is still a challenge in fault diagnosis. In this paper, a novel method combining multiscale feature extraction (MFE) and a multiclass support vector machine (MSVM) with particle parameter adaptive (PPA) optimization is proposed. MFE is used to preprocess the process signals: the data are decomposed into intrinsic mode functions by the empirical mode decomposition method, and the instantaneous frequency of the decomposed components is obtained by Hilbert transformation. Then, statistical features and principal component analysis are utilized to extract significant information from the features, in order to obtain effective data for the multiple faults. The MSVM method with PPA parameter optimization then classifies the fault patterns. The results of a case study on the rolling bearing fault data from Case Western Reserve University show that (1) the proposed intelligent method (MFE_PPA_MSVM) improves the classification recognition rate; (2) the accuracy declines when the number of fault patterns increases; and (3) prediction accuracy is best when the training set size is increased to 70% of the total sample set. This verifies that the method is feasible and efficient for fault diagnosis.
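
    A rough sketch of a feature-extraction-plus-classifier pipeline of the kind outlined above, under simplifying assumptions: the empirical mode decomposition step is skipped and features are taken directly from the Hilbert analytic signal, random data stand in for bearing signals, and a standard RBF-kernel SVM replaces the MSVM with PPA-optimized parameters.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def vibration_features(signal, fs):
    """Statistical features from the raw signal, Hilbert envelope and instantaneous frequency."""
    analytic = hilbert(signal)
    envelope = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    feats = []
    for x in (signal, envelope, inst_freq):
        feats += [x.mean(), x.std(), np.sqrt(np.mean(x**2)),      # mean, std, RMS
                  np.mean(((x - x.mean()) / x.std())**4)]         # kurtosis
    return np.array(feats)

# Hypothetical data: 200 labelled vibration segments, 4 fault classes.
fs = 12_000
rng = np.random.default_rng(1)
X = np.array([vibration_features(rng.normal(size=2048), fs) for _ in range(200)])
y = rng.integers(0, 4, size=200)

clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf", C=10.0))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```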

  11. A two-factor method for appraising building renovation and energy efficiency improvement projects

    International Nuclear Information System (INIS)

    Martinaitis, Vytautas; Kazakevicius, Eduardas; Vitkauskas, Aloyzas

    2007-01-01

    The renovation of residential buildings usually involves a variety of measures aiming at reducing energy and building maintenance bills, increasing safety and market value, and improving comfort and aesthetics. A significant number of project appraisal methods in current use, such as calculations of payback time, net present value, internal rate of return or cost of conserved energy (CCE), quantify only energy efficiency gains. These approaches are relatively easy to use, but offer a distorted view of complex modernization projects. On the other hand, various methods using multiple criteria take a much wider perspective but are usually time-consuming, based on sometimes uncertain assumptions and require sophisticated tools. A 'two-factor' appraisal method offers a compromise between these two approaches. The main idea of the method is to separate investments into those related to energy efficiency improvements and those related to building renovation. Costs and benefits of complex measures, which both influence energy consumption and improve the building construction, are separated by using a building rehabilitation coefficient. The CCE is used for the appraisal of energy efficiency investments, while investments in building renovation are appraised using standard tools for the assessment of investments in maintenance, repair and rehabilitation.
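
    As a toy illustration of the cost-of-conserved-energy side of such an appraisal (all figures are invented, and splitting the investment with a single rehabilitation coefficient follows the general idea of the abstract rather than the paper's exact formula):

```python
# Hypothetical mixed renovation measure: window replacement.
investment = 120_000.0             # total cost, EUR
rehab_coefficient = 0.4            # assumed share attributable to building renovation
annual_energy_savings = 95_000.0   # kWh saved per year
lifetime_years = 25
discount_rate = 0.05

# Only the energy-efficiency share of the investment enters the CCE.
ee_investment = investment * (1.0 - rehab_coefficient)

# Capital recovery factor annualizes the up-front investment.
crf = discount_rate / (1.0 - (1.0 + discount_rate) ** -lifetime_years)
cce = ee_investment * crf / annual_energy_savings   # EUR per kWh conserved

print(f"annualized EE investment: {ee_investment * crf:,.0f} EUR/yr")
print(f"cost of conserved energy: {cce:.3f} EUR/kWh")
```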

  12. Internet-based home training is capable to improve balance in multiple sclerosis: a randomized controlled trial.

    Science.gov (United States)

    Frevel, D; Mäurer, M

    2015-02-01

    Balance disorders are common in multiple sclerosis. The aim of the study was to investigate the effectiveness of an Internet-based home training program (e-Training) to improve balance in patients with multiple sclerosis. A randomized, controlled study. Academic teaching hospital in cooperation with the therapeutic riding center Gut Üttingshof, Bad Mergentheim. Eighteen multiple sclerosis patients (mean EDSS 3.5) took part in the trial. The outcome of patients using e-Training (N=9) was compared to the outcome of patients receiving hippotherapy (N=9), which can be considered an advanced concept for the improvement of balance and postural control in multiple sclerosis. After simple random allocation, patients received hippotherapy or Internet-based home training (balance, postural control and strength training) twice a week for 12 weeks. Assessments were done before and after the intervention and included static and dynamic balance (primary outcome). Isometric muscle strength of the knee and trunk extension/flexion (dynamometer), walking capacity, fatigue and quality of life served as secondary outcome parameters. Both intervention groups showed comparable and highly significant improvement in static and dynamic balance capacity; no difference was seen between the two intervention groups. However, for fatigue and quality of life, only the group receiving hippotherapy improved significantly. Since e-Training showed effects comparable to hippotherapy in improving balance, we believe that the established Internet-based home training program, specialized in balance and postural control training, is feasible for balance and strength training in persons with multiple sclerosis. We demonstrated that Internet-based home training is possible in patients with multiple sclerosis.

  13. Improving the spatial resolution of the multiple multiwire proportional chamber gamma camera

    International Nuclear Information System (INIS)

    Bateman, J.E.; Connolly, J.F.

    1978-03-01

    Results are presented showing how the spatial resolution of the multiple multiwire proportional chamber (MMPC) gamma camera may be improved. Under the best conditions 1.6 mm bars can be resolved. (author)

  14. Detection of Multiple Stationary Humans Using UWB MIMO Radar

    Directory of Open Access Journals (Sweden)

    Fulai Liang

    2016-11-01

    Full Text Available Remarkable progress has been achieved in the detection of a single stationary human. However, restricted by the mutual interference of multiple humans (e.g., strong sidelobes of the torsos and the shadow effect), detection and localization of multiple stationary humans remains a huge challenge. In this paper, ultra-wideband (UWB) multiple-input and multiple-output (MIMO) radar is exploited to improve the detection performance for multiple stationary humans, owing to its multiple sight angles and high-resolution two-dimensional imaging capacity. A signal model of the vital sign considering both the bi-static angles and the attitude angle of the human body is first developed, and then a novel detection method is proposed to detect and localize multiple stationary humans. In this method, preprocessing is first implemented to improve the signal-to-noise ratio (SNR) of the vital signs, and then a vital-sign-enhanced imaging algorithm is presented to suppress the environmental clutter and the mutual interference of multiple humans. Finally, an automatic detection algorithm including constant false alarm rate (CFAR) detection, morphological filtering and clustering is implemented to improve the detection performance for weak human targets affected by heavy clutter and the shadow effect. The simulation and experimental results show that the proposed method yields a high-quality image of multiple humans, which can be used to discriminate and localize multiple adjacent human targets behind brick walls.
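
    The automatic detection stage mentioned above combines CFAR detection, morphological filtering and clustering. A minimal cell-averaging CFAR sketch on a one-dimensional power profile is shown below; the window sizes, false-alarm rate and simulated profile are arbitrary, and the radar imaging steps are not reproduced.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR: boolean detections for a 1-D power profile."""
    n = 2 * train                              # number of training cells
    alpha = n * (pfa ** (-1.0 / n) - 1.0)      # threshold factor for exponential noise
    detections = np.zeros_like(power, dtype=bool)
    for i in range(train + guard, len(power) - train - guard):
        left = power[i - train - guard:i - guard]
        right = power[i + guard + 1:i + guard + train + 1]
        noise = np.concatenate([left, right]).mean()
        detections[i] = power[i] > alpha * noise
    return detections

# Hypothetical range profile: exponential clutter plus two "human" returns.
rng = np.random.default_rng(0)
profile = rng.exponential(1.0, size=256)
profile[[80, 150]] += 30.0
print(np.flatnonzero(ca_cfar(profile)))
```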

  15. Improving adherence to venous thromboembolism prophylaxis using multiple interventions

    Directory of Open Access Journals (Sweden)

    Al-Tawfiq Jaffar

    2011-01-01

    Full Text Available Objective: In hospital, deep vein thrombosis (DVT) increases the morbidity and mortality in patients with acute medical illness. DVT prophylaxis is well known to be effective in preventing venous thromboembolism (VTE). However, its use remains suboptimal. The objective of this study was to evaluate the impact of a quality improvement project on adherence to VTE prophylaxis guidelines and on the incidence of hospital-acquired VTEs in medical patients. Methods: The study was conducted at Saudi Aramco Medical Services Organization from June 2008 to August 2009. Quality improvement strategies included education of physicians, the development of a protocol, and weekly monitoring of compliance with the recommendations for VTE prophylaxis as included in the multidisciplinary rounds. Feedback was provided whenever a deviation from the protocol occurred. Results: During the study period, a total of 560 general internal medicine patients met the criteria for VTE prophylaxis. Of those, 513 (91%) patients actually received the recommended VTE prophylaxis. The weekly compliance rate in the initial stage of the intervention was 63% (14 of 22) and increased to an overall rate of 100% (39 of 39) (P = 0.002). The hospital-acquired DVT rate was 0.8 per 1000 discharges in the preintervention period and 0.5 per 1000 discharges in the postintervention period (P = 0.51). However, there was a significant increase in the VTE-free period, with 11 consecutive months without a single DVT. Conclusion: In this study, the use of multiple interventions increased the VTE prophylaxis compliance rate.

  16. Reducing falls and improving mobility in multiple sclerosis.

    Science.gov (United States)

    Sosnoff, Jacob J; Sung, JongHun

    2015-06-01

    Falls are common in persons with multiple sclerosis (MS), are associated with physical injury, and reduce quality of life. Mobility impairments are a significant risk factor for falls in persons with MS. Although there is evidence that mobility in persons with MS can be improved with rehabilitation, much less is known about fall prevention. This review focuses on fall prevention in persons with MS. Ten fall prevention interventions involving 524 participants with a wide range of disability were systematically identified. Nine of the 10 investigations report a reduction in falls and/or in the proportion of fallers following treatment. The vast majority observed an improvement in balance that co-occurred with the reduction in falls. Methodological limitations preclude any firm conclusions. Numerous gaps in the understanding of fall prevention in persons with MS are discussed. Well-designed randomized controlled trials targeting mobility and falls are warranted.

  17. Application of the modified neutron source multiplication method for a measurement of sub-criticality in AGN-201K reactor

    International Nuclear Information System (INIS)

    Myung-Hyun Kim

    2010-01-01

    Measurement of sub-criticality is a challenging and necessary task in the nuclear industry, both for nuclear criticality safety and for physics tests in nuclear power plants. A relatively new method, named the Modified Neutron Source Multiplication Method (MNSM), was proposed in Japan. This method is an improvement of the traditional Neutron Source Multiplication (NSM) method, in which three correction factors are applied additionally. In this study, MNSM was tested in the calculation of rod worth using the educational reactor AGN-201K at Kyung Hee University. For this study, a revised nuclear data library and the neutron transport code system TRANSX-PARTISN were used for the calculation of the correction factors for various control rod positions and source locations. Experiments were designed and performed to enhance the errors in NSM arising from the location effects of the source and detectors. MNSM can correct these effects, but the current results showed only small correction effects. (author)

  18. Walking path-planning method for multiple radiation areas

    International Nuclear Information System (INIS)

    Liu, Yong-kuo; Li, Meng-kun; Peng, Min-jun; Xie, Chun-li; Yuan, Cheng-qian; Wang, Shuang-yu; Chao, Nan

    2016-01-01

    Highlights: • Radiation environment modeling method is designed. • Path-evaluating method and segmented path-planning method are proposed. • Path-planning simulation platform for radiation environment is built. • The method avoids being misled by the minimum dose path of a single area. - Abstract: In this paper, a walking path-planning method for multiple radiation areas was designed, based on a minimum dose path-searching method, to overcome the limitations of the minimum dose path in a single area and to find the minimum dose path in the whole space. A path-planning simulation platform was built using the C# programming language and the DirectX engine. The simulation platform was used in simulations dealing with virtual nuclear facilities. Simulation results indicated that the walking path-planning method is effective in providing safety for people walking in nuclear facilities.
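
    The minimum-dose search at the heart of such a planner can be sketched as a shortest-path problem in which edge weights are accumulated dose rather than walking distance. The grid, dose-rate field and traversal time below are invented for illustration.

```python
import heapq
import numpy as np

def min_dose_path(dose_rate, start, goal, step_time=1.0):
    """Dijkstra on a 2-D grid; edge cost = dose received while crossing a cell."""
    rows, cols = dose_rate.shape
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, np.inf):
            continue
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + dose_rate[nr, nc] * step_time
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

rng = np.random.default_rng(0)
field = rng.uniform(0.1, 1.0, size=(20, 20))   # hypothetical dose-rate map per step
field[5:15, 10] = 50.0                          # a high-radiation strip to walk around
path, dose = min_dose_path(field, (0, 0), (19, 19))
print(f"accumulated dose along path: {dose:.2f}")
```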

  19. On structures developed by spinodal decomposition; the interpretation of the X-ray diffraction and the role of excess vacancies in the coarsening

    International Nuclear Information System (INIS)

    Keijser, Th. H. de

    1977-01-01

    Structures developed by spinodal decomposition in a AuPt (20-80) alloy were studied by X-ray diffraction. The structures consist of a quasi-periodic concentration modulation which causes a modulation of the lattice spacing in the cube directions. The modulation of the lattice spacing gives rise to the occurrence of side-bands in an X-ray diffraction pattern. Information on the nature of the modulation was deduced from the intensities of the side-bands. From the positions of the side-bands, the wavelength of the modulation was determined. The increase of the wavelength with aging time was investigated. Special attention was paid to the role of quenched-in excess vacancies in the coarsening process

  20. Aging and coarsening in isolated quantum systems after a quench: Exact results for the quantum O(N) model with N → ∞.

    Science.gov (United States)

    Maraga, Anna; Chiocchetta, Alessio; Mitra, Aditi; Gambassi, Andrea

    2015-10-01

    The nonequilibrium dynamics of an isolated quantum system after a sudden quench to a dynamical critical point is expected to be characterized by scaling and universal exponents due to the absence of time scales. We explore these features for a quench of the parameters of a Hamiltonian with O(N) symmetry, starting from a ground state in the disordered phase. In the limit of infinite N, the exponents and scaling forms of the relevant two-time correlation functions can be calculated exactly. Our analytical predictions are confirmed by the numerical solution of the corresponding equations. Moreover, we find that the same scaling functions, yet with different exponents, also describe the coarsening dynamics for quenches below the dynamical critical point.

  1. Investigating lithological and geophysical relationships with applications to geological uncertainty analysis using Multiple-Point Statistical methods

    DEFF Research Database (Denmark)

    Barfod, Adrian

    The PhD thesis presents a new method for analyzing the relationship between resistivity and lithology, as well as a method for quantifying the hydrostratigraphic modeling uncertainty related to Multiple-Point Statistical (MPS) methods. Three-dimensional (3D) geological models are im...... is to improve analysis and research of the resistivity-lithology relationship and ensemble geological/hydrostratigraphic modeling. The groundwater mapping campaign in Denmark, beginning in the 1990’s, has resulted in the collection of large amounts of borehole and geophysical data. The data has been compiled...... in two publicly available databases, the JUPITER and GERDA databases, which contain borehole and geophysical data, respectively. The large amounts of available data provided a unique opportunity for studying the resistivity-lithology relationship. The method for analyzing the resistivity...

  2. Decentralised control method for DC microgrids with improved current sharing accuracy

    DEFF Research Database (Denmark)

    Yang, Jie; Jin, Xinmin; Wu, Xuezhi

    2017-01-01

    A decentralised control method that deals with current sharing issues in dc microgrids (MGs) is proposed in this study. The proposed method is formulated in terms of ‘modified global indicator’ concept, which was originally proposed to improve reactive power sharing in ac MGs. In this work......, the ‘modified global indicator’ concept is extended to coordinate dc MGs, which aims to preserve the main features offered by decentralised control methods such as no need of communication links, central controller or knowledge of the microgrid topology and parameters. This global indicator is inserted between...... a shunt virtual resistance. The operation under multiple dc-buses is also included in order to enhance the applicability of the proposed controller. A detailed mathematical model including the effect of network mismatches is derived for analysis of the stability of the proposed controller. The feasibility...

  3. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Full Text Available Abstract Background: Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness-of-fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data often have a complicated correlation structure. Accurate type I error control adjusting for multiple testing requires the joint null distribution of the test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of their computational ease and intuitive interpretation. Results: In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion: Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying the genes for which the time-profile depends on the group in a multi-group setting.
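
    To illustrate the general permutation idea behind such a procedure (not the spline-based statistic of the paper), the sketch below applies a single-step max-statistic permutation adjustment that controls the family-wise error rate across genes by permuting group labels; the data and group sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_per_group = 500, 10
group = np.r_[np.zeros(n_per_group, int), np.ones(n_per_group, int)]
# Hypothetical expression matrix (genes x samples); the first 20 genes truly differ.
X = rng.normal(size=(n_genes, 2 * n_per_group))
X[:20, group == 1] += 1.5

def gene_stats(X, group):
    """Absolute difference in group means per gene."""
    return np.abs(X[:, group == 0].mean(axis=1) - X[:, group == 1].mean(axis=1))

obs = gene_stats(X, group)

# Null distribution of the maximum statistic over genes, by permuting labels.
n_perm = 2000
max_null = np.empty(n_perm)
for b in range(n_perm):
    max_null[b] = gene_stats(X, rng.permutation(group)).max()

# Adjusted p-value per gene: fraction of permutations whose max exceeds it.
p_adj = (max_null[None, :] >= obs[:, None]).mean(axis=1)
print("genes significant at FWER 0.05:", int((p_adj < 0.05).sum()))
```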

  4. An improved multiple linear regression and data analysis computer program package

    Science.gov (United States)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  5. Statistical methods for quality improvement

    National Research Council Canada - National Science Library

    Ryan, Thomas P

    2011-01-01

    ...."-TechnometricsThis new edition continues to provide the most current, proven statistical methods for quality control and quality improvementThe use of quantitative methods offers numerous benefits...

  6. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Directory of Open Access Journals (Sweden)

    Tabitha A Graves

    Full Text Available Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase the accuracy and precision and reduce the cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e. fewer variables and models with greater weight), and (4) improve the precision of covariate estimates for variables selected in both separate and combined analyses because the sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to the risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed

  7. Determination of 226Ra contamination depth in soil using the multiple photopeaks method

    International Nuclear Information System (INIS)

    Haddad, Kh.; Al-Masri, M.S.; Doubal, A.W.

    2014-01-01

    Radioactive contamination presents a diverse range of challenges in many industries. Determination of the radioactive contamination depth plays a vital role in the assessment of contaminated sites, because it can be used to estimate the activity content. It is determined traditionally by measuring the activity distribution along the depth. This approach gives accurate results, but it is time-consuming, lengthy and costly. The multiple photopeaks method was developed in this work for 226Ra contamination depth determination in a NORM-contaminated soil using in-situ gamma spectrometry. The developed method is based on a linear correlation between the attenuation ratio of different gamma lines emitted by 214Bi and the 226Ra contamination depth. Although this method is approximate, it is much simpler, faster and cheaper than the traditional one, and it can be applied to any multiple-gamma-emitting contaminant. -- Highlights: • The multiple photopeaks method was developed for 226Ra contamination depth determination using in-situ gamma spectrometry. • The method is based on a linear correlation between the attenuation ratio of 214Bi gamma lines and the 226Ra contamination depth. • This method is simpler, faster and cheaper than the traditional one and can be applied to any multiple-gamma-emitting contaminant
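
    The idea can be written out in a few lines: for two photopeaks with different soil attenuation coefficients, the measured intensity ratio changes with burial depth, so the depth can be solved from that ratio. The attenuation coefficients, emission probabilities and count rates below are illustrative values, not those of the paper.

```python
import numpy as np

# Hypothetical soil linear attenuation coefficients (1/cm) for two 214Bi lines.
mu_609, mu_1764 = 0.20, 0.12             # the low-energy line is attenuated more strongly
branch_609, branch_1764 = 0.461, 0.154   # emission probabilities per decay

# Measured net count rates of the two photopeaks (counts/s), efficiency-corrected.
n_609, n_1764 = 3.1, 1.8

# For a source buried at depth d:  N_i ∝ branch_i * exp(-mu_i * d), so the ratio gives
#   d = ln[(branch_609/branch_1764) * (n_1764/n_609)] / (mu_609 - mu_1764)
d = np.log((branch_609 / branch_1764) * (n_1764 / n_609)) / (mu_609 - mu_1764)
print(f"estimated contamination depth: {d:.1f} cm")
```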

  8. Trace element analysis of environmental samples by multiple prompt gamma-ray analysis method

    International Nuclear Information System (INIS)

    Oshima, Masumi; Matsuo, Motoyuki; Shozugawa, Katsumi

    2011-01-01

    The multiple γ-ray detection method has proved to be a high-resolution and high-sensitivity method for nuclide quantification. The neutron prompt γ-ray analysis method is successfully extended by combining it with multiple γ-ray detection; the combined technique is called multiple prompt γ-ray analysis (MPGA). In this review we describe the principle of this method and its characteristics. Several examples of its application to environmental samples, especially river sediments from urban areas and sea sediment samples, are also described. (author)

  9. Modeling of the influence of coarsening on viscoplastic behavior of a 319 foundry aluminum alloy

    International Nuclear Information System (INIS)

    Martinez, R.; Russier, V.; Couzinié, J.P.; Guillot, I.; Cailletaud, G.

    2013-01-01

    Both the metallurgical and the mechanical behavior of a 319 foundry aluminum alloy have been modeled by means of a multiscale approach. The nano-scale, represented by the coarsening of Al2Cu precipitates, has been modeled according to the Lifshitz-Slyozov-Wagner (LSW) law over a temperature range from 23 °C to 300 °C and aging times up to 1000 h. The results were then compared to transmission electron microscope (TEM) observations and are in good agreement with the experimental measurements. The model gives the critical radius, the volume fraction and the number of particles per μm³ in an α-phase representative volume element (RVE). The increase in yield stress generated by the interaction of dislocations with precipitates, the lattice and the solid solution is modeled at the microscale. The yield stress thus becomes a function of the precipitation state, and is time/temperature dependent. These two models were then combined into a mechanical macroscale model in order to represent the low cycle fatigue (LCF) behavior of the material. An elasto-viscoplastic law was used and all material parameters were determined experimentally from LCF stress/strain loops for the first cycle and for the mechanical steady state. The simulation results are in good agreement with the experiments.
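
    For reference, the LSW law used at the nano-scale states that the cube of the mean precipitate radius grows linearly in time, r(t)³ = r₀³ + K(T)·t, with a rate constant that is typically thermally activated. The parameter values in the sketch below are purely illustrative, not those identified for the 319 alloy.

```python
import numpy as np

def lsw_radius(t_hours, T_celsius, r0_nm=5.0, K0=1.0e13, Q=120e3):
    """Mean precipitate radius (nm) after LSW coarsening: r^3 = r0^3 + K(T)*t."""
    R = 8.314                                  # J/(mol K)
    T = T_celsius + 273.15
    K = K0 * np.exp(-Q / (R * T))              # nm^3 / h, Arrhenius-type rate constant
    return (r0_nm**3 + K * t_hours) ** (1.0 / 3.0)

for T in (150, 200, 300):
    print(f"T = {T} °C: r(1000 h) = {lsw_radius(1000, T):.1f} nm")
```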

  10. Improved measurements of thermal power and control rods using multiple detectors at the TRIGA Mark II reactor in Ljubljana

    International Nuclear Information System (INIS)

    Zerovnik Gasper; Snoj Luka; Trkov Andrej; Barbot Loic; Fourmentel Damien; Villard Jean-Francois

    2013-06-01

    The aim of the current bilateral project between CEA Cadarache and JSI is to improve the accuracy of the online thermal power monitoring at the JSI TRIGA reactor. Simultaneously, a new wide-range multichannel acquisition system for fission chambers, recently developed by CEA, is tested. In the paper, calculational and experimental power calibration methods are described. The focus is on the use of multiple detectors in combination with pre-calculated and pre-measured control-rod-position-dependent correction factors to improve the reactor power reading. The system will be implemented and tested at the JSI TRIGA reactor in 2014. (authors)

  11. An improved polymeric sponge replication method for biomedical porous titanium scaffolds

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chunli; Chen, Hongjie; Zhu, Xiangdong, E-mail: zxd7303@163.com; Xiao, Zhanwen; Zhang, Kai, E-mail: kaizhang@scu.edu.cn; Zhang, Xingdong

    2017-01-01

    Biomedical porous titanium (Ti) scaffolds were fabricated by an improved polymeric sponge replication method. Unique formulations and distinct processing techniques, i.e. a mixture of water and ethanol as solvent, multiple coatings with Ti slurries of different viscosities, and centrifugation for removing the extra slurry, were used in the present study. The optimized porous Ti scaffolds had a uniform porous structure and completely interconnected macropores (~ 365.1 μm). In addition, two different sizes of micropores (~ 45.4 and ~ 6.2 μm) were also formed in the skeleton of the scaffold. The addition of ethanol to the Ti slurry increased the compressive strength of the scaffold by improving the compactness of the skeleton. A compressive strength of 83.6 ± 4.0 MPa was achieved for a porous Ti scaffold with a porosity of 66.4 ± 1.8%. Our cellular study also revealed that the scaffolds could support the growth and proliferation of mesenchymal stem cells (MSCs). - Highlights: • An improved sponge replication method for porous titanium scaffolds was developed. • A mixture of water and ethanol was used to make the titanium slurries. • The scaffolds have high mechanical strength for load-bearing bone repair. • The scaffolds support growth of mesenchymal stem cells.

  12. A novel method for producing multiple ionization of noble gas

    International Nuclear Information System (INIS)

    Wang Li; Li Haiyang; Dai Dongxu; Bai Jiling; Lu Richang

    1997-01-01

    We introduce a novel method for producing multiple ionization of He, Ne, Ar, Kr and Xe. A nanosecond pulsed electron beam with a large number density, whose energy can be controlled, was produced by directing a focused 308 nm laser beam onto a stainless steel grid. Using this electron beam in a time-of-flight mass spectrometer, we obtained multiple ionization of the noble gases He, Ne, Ar and Xe. Time-of-flight mass spectra of these ions are presented. These ions are thought to be produced by stepwise ionization of the gas atoms under electron beam impact. This method may be used as an ideal soft-ionizing point ion source in time-of-flight mass spectrometry.

  13. Ensemble approach combining multiple methods improves human transcription start site prediction.

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  14. Some problems of neutron source multiplication method for site measurement technology in nuclear critical safety

    International Nuclear Information System (INIS)

    Shi Yongqian; Zhu Qingfu; Hu Dingsheng; He Tao; Yao Shigui; Lin Shenghuo

    2004-01-01

    The paper presents the experimental theory and method of the neutron source multiplication method for in-situ measurement in nuclear criticality safety. The parameter actually measured by the source multiplication method is the subcritical-with-source neutron effective multiplication factor k_s, not the neutron effective multiplication factor k_eff. The experimental research was carried out on a uranium solution criticality safety experimental assembly. The k_s at different subcriticalities was measured by the neutron source multiplication method. For k_eff, the reactivity coefficient per unit solution level was first measured by the period method and then multiplied by the difference between the critical and subcritical solution levels to obtain the reactivity at the subcritical solution level; k_eff was finally extracted from the reactivity formula. The implications for nuclear criticality safety and the difference between k_eff and k_s are discussed
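
    The basic source multiplication relation can be shown with a short numeric sketch: for a fixed source and detector the count rate scales as C ∝ 1/(1 − k), so an unknown state can be referenced to a known one. The count rates, the reference value and the single lumped correction factor below are invented; the detailed correction factors of modified NSM variants are not reproduced.

```python
# Conventional neutron source multiplication (NSM) estimate.
# Count rate with the source present scales as C ∝ S / (1 - k),
# so  (1 - k) = (1 - k0) * C0 / C  for a fixed source and detector.
k0 = 0.950          # multiplication factor of the known reference state (assumed)
C0 = 1200.0         # detector count rate at the reference state (counts/s, invented)
C  = 4800.0         # count rate at the unknown, less subcritical state

k_s = 1.0 - (1.0 - k0) * C0 / C
print(f"uncorrected NSM estimate:   k_s = {k_s:.4f}")

# Modified variants apply additional correction factors (here lumped into one
# illustrative factor f for changes in source efficiency and detector response).
f = 0.97
k_corr = 1.0 - (1.0 - k0) * (C0 / C) * f
print(f"with an assumed correction: k   = {k_corr:.4f}")
```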

  15. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

    Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method is able to integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
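
    A stripped-down sketch of MAP-style fusion by gradient descent: two simulated degraded observations of the same scene (a one-dimensional signal for brevity) are fused by minimizing a sum of inverse-variance-weighted data terms plus a quadratic smoothness prior. The observation models, weights and step size are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth "image" (1-D signal for brevity) and two degraded observations.
truth = np.sin(np.linspace(0, 4 * np.pi, 200))
y1 = truth + rng.normal(scale=0.30, size=truth.shape)   # noisy sensor 1
y2 = truth + rng.normal(scale=0.10, size=truth.shape)   # less noisy sensor 2

w1, w2 = 1.0 / 0.30**2, 1.0 / 0.10**2   # inverse-variance data weights
lam = 5.0                                # smoothness prior weight (assumed)
step = 1.0 / (w1 + w2 + 4 * lam)         # conservative gradient-descent step size

x = np.zeros_like(truth)
for _ in range(500):
    # Gradient of  w1*|x-y1|^2/2 + w2*|x-y2|^2/2 + lam*|Dx|^2/2  (D = finite difference)
    dx = np.diff(x)
    grad = w1 * (x - y1) + w2 * (x - y2)
    grad[:-1] -= lam * dx          # adjoint of the difference operator
    grad[1:] += lam * dx
    x -= step * grad

print("RMSE fused           :", np.sqrt(np.mean((x - truth) ** 2)).round(4))
print("RMSE best observation:", np.sqrt(np.mean((y2 - truth) ** 2)).round(4))
```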

  16. Continuous improvement methods in the nuclear industry

    International Nuclear Information System (INIS)

    Heising, Carolyn D.

    1995-01-01

    The purpose of this paper is to investigate management methods for improved safety in the nuclear power industry. Process improvement management, methods of business process reengineering, total quality management, and continuous process improvement (KAIZEN) are explored. The anticipated advantages of extensive use of improved process-oriented management methods in the nuclear industry are increased effectiveness and efficiency in virtually all tasks of plant operation and maintenance. Important spin-offs include increased plant safety and economy. (author). 6 refs., 1 fig

  17. Self-healing method as strategy to promote health and rehabilitation of people with multiple sclerosis in the context of occupational therapy

    Directory of Open Access Journals (Sweden)

    Paula Pozzi Pimentel

    2017-09-01

    Full Text Available Introduction: Multiple sclerosis is a chronic neurological disease with a continuous and varied evolution; it demands bodily self-knowledge for a better understanding of preserved capacities, gradual losses and their repercussions on the performance of activities and social participation. Objective: To analyze the group experience of the application of physical techniques based on the self-healing method for the health promotion and rehabilitation of people with multiple sclerosis, developed within occupational therapy. Method: Documental qualitative research based on written records and audio transcripts of group sessions. Data analysis used the Collective Subject Discourse method. Results: Ten adults with multiple sclerosis, with varying ages and disease durations, participated in the therapeutic group. Five participants reported representations and experiences related to the disease and the effect of learning the physical techniques of self-healing. The benefits include greater body awareness, decreased symptoms, improved functional capacity and recognition of the need for a regular body practice routine. Conclusion: The therapeutic use of the self-healing method demonstrated its applicability for promoting health and rehabilitation, in line with health policies. The limited literature on the benefits of the self-healing method indicates the need for further studies.

  18. The implementation of multiple intelligences based teaching model to improve mathematical problem solving ability for student of junior high school

    Science.gov (United States)

    Fasni, Nurli; Fatimah, Siti; Yulanda, Syerli

    2017-05-01

    This research has several aims: to determine whether the mathematical problem-solving ability of students taught with a Multiple Intelligences based teaching model is higher than that of students taught with cooperative learning; to measure the improvement in the mathematical problem-solving ability of students taught with the Multiple Intelligences based teaching model; to measure the improvement in the mathematical problem-solving ability of students taught with cooperative learning; and to assess the students' attitudes toward the Multiple Intelligences based teaching model. The method employed here is a quasi-experiment controlled by pre-test and post-test. The population of this research is all VII grade students of SMP Negeri 14 Bandung in the even semester of 2013/2014, from which two classes were taken as the samples. One class was taught using the Multiple Intelligences based teaching model and the other was taught using cooperative learning. The data of this research were obtained from a test of mathematical problem solving, a scale questionnaire of student attitudes, and observation. The results show that the mathematical problem-solving ability of the students taught with the Multiple Intelligences based teaching model is higher than that of the students taught with cooperative learning, the improvements in mathematical problem-solving ability in both groups are at an intermediate level, and the students showed a positive attitude toward learning mathematics with the Multiple Intelligences based teaching model. As a recommendation for future work, the Multiple Intelligences based teaching model can be tested on other subjects and other abilities.

  19. Improvement of the R-SWAT-FME framework to support multiple variables and multi-objective functions

    Science.gov (United States)

    Wu, Yiping; Liu, Shu-Guang

    2014-01-01

    Application of numerical models is a common practice in the environmental field for investigation and prediction of natural and anthropogenic processes. However, process knowledge, parameter identifiability, sensitivity, and uncertainty analyses are still a challenge for large and complex mathematical models such as the hydrological/water quality model, Soil and Water Assessment Tool (SWAT). In this study, the previously developed R program language-SWAT-Flexible Modeling Environment (R-SWAT-FME) was improved to support multiple model variables and objectives at multiple time steps (i.e., daily, monthly, and annually). This expansion is significant because there is usually more than one variable (e.g., water, nutrients, and pesticides) of interest for environmental models like SWAT. To further facilitate its easy use, we also simplified its application requirements without compromising its merits, such as the user-friendly interface. To evaluate the performance of the improved framework, we used a case study focusing on both streamflow and nitrate nitrogen in the Upper Iowa River Basin (above Marengo) in the United States. Results indicated that the R-SWAT-FME performs well and is comparable to the built-in auto-calibration tool in multi-objective model calibration. Overall, the enhanced R-SWAT-FME can be useful for the SWAT community, and the methods we used can also be valuable for wrapping potential R packages with other environmental models.

  20. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    Science.gov (United States)

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.

  1. Statistics of electron multiplication in multiplier phototube: iterative method

    International Nuclear Information System (INIS)

    Grau Malonda, A.; Ortiz Sanchez, J.F.

    1985-01-01

    An iterative method is applied to study the variation of the dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of stages between 1 and 5, and for multiplication factors of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (author)
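
    The cascade statistics can be explored with a simple Monte Carlo sketch in which each dynode stage multiplies every incoming electron by a Poisson-distributed number of secondaries with a given mean gain. The stage count and gain mirror the ranges mentioned in the abstract, but the Poisson secondary-emission model and the sample size are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def cascade(n_events, n_stages=5, gain=2.5, n_initial=1):
    """Monte Carlo of a dynode chain: Poisson multiplication at each stage."""
    electrons = np.full(n_events, n_initial)
    for _ in range(n_stages):
        # Each of the current electrons produces Poisson(gain) secondaries.
        electrons = rng.poisson(gain * electrons)
    return electrons

out = cascade(200_000, n_stages=5, gain=2.5, n_initial=1)
print("mean output      :", out.mean().round(1), "(theory:", 2.5**5, ")")
print("relative variance:", (out.var() / out.mean()**2).round(3))
print("skewness         :", stats.skew(out).round(3))
print("excess kurtosis  :", stats.kurtosis(out).round(3))
```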

  2. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2015-02-01

    Full Text Available This study aimed to evaluate the efficiency of the multiple centroid method for studying the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bisegmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). In this context, we used data on the dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two replications carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it gave no ambiguous indications, provided that the ideotypes were defined according to the researcher's interest, thus facilitating data interpretation.

  3. Dynamic reflexivity in action: an armchair walkthrough of a qualitatively driven mixed-method and multiple methods study of mindfulness training in schoolchildren.

    Science.gov (United States)

    Cheek, Julianne; Lipschitz, David L; Abrams, Elizabeth M; Vago, David R; Nakamura, Yoshio

    2015-06-01

    Dynamic reflexivity is central to enabling flexible and emergent qualitatively driven inductive mixed-method and multiple methods research designs. Yet too often, such reflexivity, and how it is used at various points of a study, is absent when we write our research reports. Instead, reports of mixed-method and multiple methods research focus on what was done rather than how it came to be done. This article seeks to redress this absence of emphasis on the reflexive thinking underpinning the way that mixed- and multiple methods, qualitatively driven research approaches are thought about and subsequently used throughout a project. Using Morse's notion of an armchair walkthrough, we excavate and explore the layers of decisions we made about how, and why, to use qualitatively driven mixed-method and multiple methods research in a study of mindfulness training (MT) in schoolchildren. © The Author(s) 2015.

  4. How teams use indicators for quality improvement - a multiple-case study on the use of multiple indicators in multidisciplinary breast cancer teams.

    Science.gov (United States)

    Gort, Marjan; Broekhuis, Manda; Regts, Gerdien

    2013-11-01

    A crucial issue in healthcare is how multidisciplinary teams can use indicators for quality improvement. Such teams have increasingly become the core component in both care delivery and in many quality improvement methods. This study aims to investigate the relationships between (1) team factors and the way multidisciplinary teams use indicators for quality improvement, and (2) both team and process factors and the intended results. An in-depth, multiple-case study was conducted in the Netherlands in 2008 involving four breast cancer teams using six structure, process and outcome indicators. The results indicated that the process of using indicators involves several stages and activities. Two teams applied a more intensive, active and interactive approach as they passed through these stages. These teams were perceived to have achieved good results through indicator use compared to the other two teams who applied a simple control approach. All teams experienced some difficulty in integrating the new formal control structure, i.e. measuring and managing performance, in their operational task, and in using their 'new' managerial task to decide as a team what and how to improve. Our findings indicate the presence of a network of relationships between team factors, the controllability and actionability of indicators, the indicator-use process, and the intended results. Copyright © 2013. Published by Elsevier Ltd.

  5. A simple method for combining genetic mapping data from multiple crosses and experimental designs.

    Directory of Open Access Journals (Sweden)

    Jeremy L Peirce

    Full Text Available BACKGROUND: Over the past decade many linkage studies have defined chromosomal intervals containing polymorphisms that modulate a variety of traits. Many phenotypes are now associated with enough mapping data that meta-analysis could help refine the locations of known QTLs and detect many novel QTLs. METHODOLOGY/PRINCIPAL FINDINGS: We describe a simple approach to combining QTL mapping results from multiple studies and demonstrate its utility using two hippocampus weight loci. Using data taken from two populations, a recombinant inbred strain set and an advanced intercross population, we demonstrate considerable improvements in significance and resolution for both loci. 1-LOD support intervals were improved 51% for Hipp1a and 37% for Hipp9a. We first generate locus-wise permuted P-values for association with the phenotype from multiple maps, which can be done using a permutation method appropriate to each population. These results are then assigned to defined physical positions by interpolation between markers with known physical and genetic positions. We then use Fisher's combination test to combine position-by-position probabilities among experiments. Finally, we calculate genome-wide combined P-values by generating locus-specific P-values for each permuted map for each experiment. These permuted maps are then sampled with replacement and combined. The distribution of the best locus-specific P-values for each combined map is the null distribution of genome-wide adjusted P-values. CONCLUSIONS/SIGNIFICANCE: Our approach is applicable to a wide variety of segregating and non-segregating mapping populations, facilitates rapid refinement of physical QTL position, is complementary to other QTL fine-mapping methods, and provides an appropriate genome-wide criterion of significance for combined mapping results.
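
    The position-by-position combination step uses Fisher's method, which is available directly in SciPy; the sketch below combines hypothetical per-study locus-wise P-values at a few interpolated physical positions (all values invented).

```python
import numpy as np
from scipy.stats import combine_pvalues

# Hypothetical locus-wise permutation P-values from two mapping studies,
# already interpolated onto the same physical positions (Mb).
positions = np.array([10.0, 10.5, 11.0, 11.5, 12.0])
p_study1 = np.array([0.30, 0.04, 0.01, 0.08, 0.50])
p_study2 = np.array([0.25, 0.06, 0.03, 0.20, 0.40])

combined = []
for p1, p2 in zip(p_study1, p_study2):
    # Fisher's method: -2*sum(ln p) follows a chi-square with 2k degrees of freedom.
    _, p_comb = combine_pvalues([p1, p2], method="fisher")
    combined.append(p_comb)

for pos, p in zip(positions, combined):
    print(f"{pos:5.1f} Mb  combined P = {p:.4f}")
```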

  6. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    Science.gov (United States)

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification (MUSIC) method is then used to determine the directions of arrival of signals from the multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization search. A 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
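
    As a generic illustration of the direction-of-arrival step mentioned above, the following is a narrowband MUSIC sketch on a simulated uniform linear array. The broadband focusing (two-sided correlation transformation), the Gerschgorin-disk source-number estimation, and the actual 4 × 4 planar ultrasonic array of the study are not reproduced; every parameter below is an illustrative assumption.

```python
import numpy as np

def steering_vector(theta_deg, n_elements, d_over_lambda=0.5):
    """Steering vector of a uniform linear array for a plane wave from angle theta."""
    theta = np.deg2rad(theta_deg)
    k = np.arange(n_elements)
    return np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))

def music_spectrum(snapshots, n_sources, scan_deg, d_over_lambda=0.5):
    """Pseudospectrum P(theta) = 1 / ||E_n^H a(theta)||^2 from array snapshots."""
    n_elements = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                      # eigenvalues in ascending order
    E_n = eigvecs[:, : n_elements - n_sources]                # noise subspace
    spectrum = []
    for theta in scan_deg:
        a = steering_vector(theta, n_elements, d_over_lambda)
        spectrum.append(1.0 / (np.linalg.norm(E_n.conj().T @ a) ** 2))
    return np.array(spectrum)

# Simulate two uncorrelated sources at -20 and 35 degrees on an 8-element array.
rng = np.random.default_rng(0)
angles_true, n_elem, n_snap = [-20.0, 35.0], 8, 400
A = np.column_stack([steering_vector(a, n_elem) for a in angles_true])
signals = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap)))
X = A @ signals + noise

scan = np.arange(-90.0, 90.5, 0.5)
P = music_spectrum(X, n_sources=2, scan_deg=scan)

# Crude peak picking: local maxima of the pseudospectrum, keep the two largest.
is_peak = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])
peak_angles, peak_values = scan[1:-1][is_peak], P[1:-1][is_peak]
print("estimated DOAs (deg):", np.sort(peak_angles[np.argsort(peak_values)[-2:]]))
```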

  7. Regularization methods for ill-posed problems in multiple Hilbert scales

    International Nuclear Information System (INIS)

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)
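
    For orientation only (this is the textbook single-scale setting, not a result quoted from the paper), Tikhonov regularization in a Hilbert scale generated by an unbounded self-adjoint operator L typically takes the following form, with the classical order-optimal rate under a source condition.

```latex
% Illustrative single-scale Tikhonov setup (standard notation, not the paper's):
% the data misfit is penalized by the scale norm of order s.
\[
  x_\alpha^\delta \;=\; \operatorname*{arg\,min}_{x}\;
  \bigl\lVert T x - y^\delta \bigr\rVert^{2} \;+\; \alpha\,\bigl\lVert L^{s} x \bigr\rVert^{2}.
\]
% If the exact solution satisfies the source condition x^\dagger = L^{-u} w with
% \lVert w \rVert \le \rho, and T is a-times smoothing with respect to the scale,
% an a-priori parameter choice yields (up to the usual saturation level in u)
\[
  \bigl\lVert x_\alpha^\delta - x^\dagger \bigr\rVert
  \;=\; \mathcal{O}\!\bigl(\delta^{\,u/(a+u)}\bigr).
\]
```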

  8. Improvements in cognition, quality of life, and physical performance with clinical Pilates in multiple sclerosis: a randomized controlled trial.

    Science.gov (United States)

    Küçük, Fadime; Kara, Bilge; Poyraz, Esra Çoşkuner; İdiman, Egemen

    2016-03-01

    [Purpose] The aim of this study was to determine the effects of clinical Pilates in multiple sclerosis patients. [Subjects and Methods] Twenty multiple sclerosis patients were enrolled in this study. The participants were divided into two groups: the clinical Pilates group and the control group. Cognition (Multiple Sclerosis Functional Composite), balance (Berg Balance Scale), physical performance (timed performance tests, Timed Up and Go test), tiredness (Modified Fatigue Impact Scale), depression (Beck Depression Inventory), and quality of life (Multiple Sclerosis International Quality of Life Questionnaire) were measured before and after treatment in all participants. [Results] In the clinical Pilates group, there were statistically significant pre- to post-treatment differences in balance, timed performance, tiredness and the Multiple Sclerosis Functional Composite. In the control group, there were significant pre- to post-treatment differences in the timed performance tests, the Timed Up and Go test and the Multiple Sclerosis Functional Composite. According to the difference analyses, there were significant differences in Multiple Sclerosis Functional Composite and Multiple Sclerosis International Quality of Life Questionnaire scores between the two groups, in favor of the clinical Pilates group; the between-group comparison of measurements also showed statistically significant clinical differences in favor of the clinical Pilates group. Clinical Pilates improved cognitive functions and quality of life compared with traditional exercise. [Conclusion] In multiple sclerosis treatment, clinical Pilates should be used as a holistic approach by physical therapists.

  9. An improved method for bivariate meta-analysis when within-study correlations are unknown.

    Science.gov (United States)

    Hong, Chuan; D Riley, Richard; Chen, Yong

    2018-03-01

    Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (ie, when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (eg, m≥50). When the
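
    The working model above requires only the inputs of separate univariate random-effects meta-analyses. As background for that building block only (this is the standard DerSimonian-Laird univariate step, not the Riley working model and not the robust variance estimator proposed in the paper), a minimal sketch:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Univariate random-effects pooling with the DerSimonian-Laird heterogeneity estimate."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)                # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                  # between-study variance
    w_re = 1.0 / (v + tau2)
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

effects = [0.30, 0.10, 0.45, 0.25]      # illustrative study-level estimates
variances = [0.02, 0.03, 0.05, 0.01]    # illustrative within-study variances
print(dersimonian_laird(effects, variances))
```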

  10. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

    Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.

  11. A simple and efficient methodology to improve geometric accuracy in gamma knife radiation surgery: implementation in multiple brain metastases.

    Science.gov (United States)

    Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis

    2014-12-01

    To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, "average" image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. A multi-step dealloying method to produce nanoporous gold with no volume change and minimal cracking

    Energy Technology Data Exchange (ETDEWEB)

    Sun Ye [Department of Chemical and Materials Engineering, University of Kentucky, 177 F. Paul Anderson Tower, Lexington, KY 40506 (United States); Balk, T. John [Department of Chemical and Materials Engineering, University of Kentucky, 177 F. Paul Anderson Tower, Lexington, KY 40506 (United States)], E-mail: balk@engr.uky.edu

    2008-05-15

    We report a simple two-step dealloying method for producing bulk nanoporous gold with no volume change and no significant cracking. The galvanostatic dealloying method used here appears superior to potentiostatic methods for fabricating millimeter-scale samples. Care must be taken when imaging the nanoscale, interconnected sponge-like structure with a focused ion beam, as even brief exposure caused immediate and extensive cracking of nanoporous gold, as well as ligament coarsening at the surface.

  13. Hybrid MCDA Methods to Integrate Multiple Ecosystem Services in Forest Management Planning: A Critical Review.

    Science.gov (United States)

    Uhde, Britta; Hahn, W Andreas; Griess, Verena C; Knoke, Thomas

    2015-08-01

    Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to those methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow aggregating MCDA and, potentially, other decision-making techniques to make use of their individual benefits, leading to a more holistic view of the actual consequences that come with certain decisions. This review provides a comprehensive overview of hybrid approaches that are used in forest management planning. Today, the scientific world is facing increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.

  14. Hybrid MCDA Methods to Integrate Multiple Ecosystem Services in Forest Management Planning: A Critical Review

    Science.gov (United States)

    Uhde, Britta; Andreas Hahn, W.; Griess, Verena C.; Knoke, Thomas

    2015-08-01

    Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to those methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow aggregating MCDA and, potentially, other decision-making techniques to make use of their individual benefits, leading to a more holistic view of the actual consequences that come with certain decisions. This review provides a comprehensive overview of hybrid approaches that are used in forest management planning. Today, the scientific world is facing increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.

  15. Methods for improved growth of group III nitride buffer layers

    Science.gov (United States)

    Melnik, Yurity; Chen, Lu; Kojiri, Hidehiro

    2014-07-15

    Methods are disclosed for growing high-crystal-quality group III-nitride epitaxial layers with advanced multiple buffer layer techniques. In an embodiment, a method includes forming group III-nitride buffer layers that contain aluminum on a suitable substrate in a processing chamber of a hydride vapor phase epitaxy processing system. A hydrogen halide or halogen gas is flowed into the growth zone during deposition of the buffer layers to suppress homogeneous particle formation. Some combinations of low-temperature buffers that contain aluminum (e.g., AlN, AlGaN) and high-temperature buffers that contain aluminum (e.g., AlN, AlGaN) may be used to improve the crystal quality and morphology of subsequently grown group III-nitride epitaxial layers. The buffer may be deposited on the substrate, or on the surface of another buffer. The additional buffer layers may be added as interlayers in group III-nitride layers (e.g., GaN, AlGaN, AlN).

  16. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline.

    Science.gov (United States)

    Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C

    2013-12-21

    As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain, and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. We applied 12 microarray meta-analysis methods to combine multiple simulated expression profiles; these methods can be categorized by the hypothesis setting they address: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies, and (3) HS(r): DE genes with non-zero effect sizes in the "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and the data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from MDS and entropy analyses provided an insightful and practical guideline to the choice of the most suitable method in a given application. All source files for simulation and real data are available on the author's publication website.

  17. Improved GO/PO method and its application to wideband SAR image of conducting objects over rough surface

    Science.gov (United States)

    Jiang, Wang-Qiang; Zhang, Min; Nie, Ding; Jiao, Yong-Chang

    2018-04-01

    To simulate the multiple scattering effect of a target in a synthetic aperture radar (SAR) image, the hybrid GO/PO method, which combines geometrical optics (GO) and physical optics (PO), is employed to compute the scattering field of the target. Because ray tracing is time-consuming, the Open Graphics Library (OpenGL) is usually employed to accelerate the ray-tracing process. Furthermore, the GO/PO method is improved for simulation in low-pixel situations. In the improved GO/PO method, the pixels are arranged in one-to-one correspondence with the rectangular wave beams, and the GO/PO result is the sum of the contributions of all the rectangular wave beams. To obtain a high-resolution SAR image, a wideband echo signal is simulated that includes information from many electromagnetic (EM) waves at different frequencies. Finally, the improved GO/PO method is used to simulate the SAR image of targets above a rough surface, and the effects of reflected rays and the size of the pixel matrix on the SAR image are also discussed.

  18. Behavioral interventions for improving dual-method contraceptive use.

    Science.gov (United States)

    Lopez, Laureen M; Stockton, Laurie L; Chen, Mario; Steiner, Markus J; Gallo, Maria F

    2014-03-30

    last sex. Outcomes had to be measured at least three months after the behavioral intervention began. Two authors evaluated abstracts for eligibility and extracted data from included studies. For the dichotomous outcomes, the Mantel-Haenszel odds ratio (OR) with 95% CI was calculated using a fixed-effect model. Where studies used adjusted analysis, we presented the results as reported by the investigators. No meta-analysis was conducted due to differences in interventions and outcome measures. We identified four studies that met the inclusion criteria: three randomized controlled trials and a pilot study for one of the included trials. The interventions differed markedly: computer-delivered, individually tailored sessions; phone counseling added to clinic counseling; and case management plus a peer-leadership program. The latter study, which addressed multiple risks, showed an effect on contraceptive use. Compared to the control group, the intervention group was more likely to report consistent dual-method use, i.e., oral contraceptives and condoms. The reported relative risk was 1.58 at 12 months (95% CI 1.03 to 2.43) and 1.36 at 24 months (95% CI 1.01 to 1.85). The related pilot study showed more reporting of consistent dual-method use for the intervention group compared to the control group (reported P value = 0.06); the investigators used a higher alpha (P method use or in test results for pregnancy or STIs at 12 or 24 months. We found few behavioral interventions for improving dual-method contraceptive use and little evidence of effectiveness. A multifaceted program showed some effect but only had self-reported outcomes. Two trials were more applicable to clinical settings and had objective outcomes measures, but neither showed any effect. The included studies had adequate information on intervention fidelity and sufficient follow-up periods for change to occur. However, the overall quality of evidence was considered low. Two trials had design limitations and two

  19. A method for the generation of random multiple Coulomb scattering angles

    International Nuclear Information System (INIS)

    Campbell, J.R.

    1995-06-01

    A method for the random generation of spatial angles drawn from non-Gaussian multiple Coulomb scattering distributions is presented. The method employs direct numerical inversion of cumulative probability distributions computed from the universal non-Gaussian angular distributions of Marion and Zimmerman. (author). 12 refs., 3 figs
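
    The direct numerical inversion described above is ordinary inverse-transform sampling from a tabulated cumulative distribution. A minimal sketch of that step follows; the tabulated density used here is a generic placeholder, not the Marion and Zimmerman universal distribution.

```python
import numpy as np

def sample_from_tabulated_cdf(theta_grid, pdf_values, n_samples, rng=None):
    """Draw random angles by numerically inverting a tabulated CDF."""
    rng = rng or np.random.default_rng()
    pdf = np.asarray(pdf_values, dtype=float)
    cdf = np.cumsum(pdf)
    cdf = cdf / cdf[-1]                       # normalize the tabulated CDF to [0, 1]
    u = rng.random(n_samples)                 # uniform variates
    return np.interp(u, cdf, theta_grid)      # invert the CDF by interpolation

# Placeholder angular density with a broad non-Gaussian tail on [0, 0.1] rad.
theta = np.linspace(0.0, 0.1, 1000)
pdf = np.exp(-(theta / 0.01) ** 2) + 0.02 * np.exp(-theta / 0.03)
angles = sample_from_tabulated_cdf(theta, pdf, n_samples=5)
print(angles)
```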

  20. Improvement of human reliability analysis method for PRA

    International Nuclear Information System (INIS)

    Tanji, Junichi; Fujimoto, Haruo

    2013-09-01

    As a part of the development and improvement of methods used in probabilistic risk assessments (PRAs), the human reliability analysis (HRA) method needs to be refined by, for example, incorporating consideration of the operator's cognitive process into the evaluation of diagnosis errors and decision-making errors. JNES has developed an HRA method based on ATHENA that is suitable for handling the structured relationship among diagnosis errors, decision-making errors and the operator cognition process. This report summarizes the outcomes of improving this HRA method, in which enhancements were made to evaluate how a degraded plant condition affects the operator's cognitive process and to evaluate human error probabilities (HEPs) corresponding to the contents of operator tasks. In addition, this report describes the results of case studies on representative accident sequences conducted to investigate the applicability of the developed HRA method. HEPs for the same accident sequences are also estimated using the THERP method, the most widely used HRA method, and the results obtained with the two methods are compared to highlight their differences and the issues still to be solved. The main conclusions are as follows: (1) Improvement of the HRA method using an operator cognitive action model. To improve the HRA method, which integrates an operator cognitive action model into the ATHENA method, the factors to be considered in the evaluation of human errors were clarified, degraded plant safety conditions were incorporated into the HRA, and HEPs affected by the contents of operator tasks were investigated. In addition, the detailed procedure of the improved method was delineated in the form of a flowchart. (2) Case studies and comparison with results evaluated by the THERP method. Four operator actions modeled in the PRAs of representative BWR5 and 4-loop PWR plants were selected and evaluated as case studies. These cases were also evaluated using

  1. Methods of fast, multiple-point in vivo T1 determination

    International Nuclear Information System (INIS)

    Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.

    1989-01-01

    Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel in different Mn++ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second method used a variable-tip-angle (α) progressive saturation with two subsequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached.
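
    For the first method above, 1/T1 follows from an exponential fit to the inversion-recovery signal. Below is a minimal sketch with a generic three-parameter signed-recovery model and synthetic data; the acquisition parameters and phantom values of the study are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, m0, t1, inv_eff):
    """Inversion-recovery model S(TI) = M0 * (1 - inv_eff * exp(-TI / T1))."""
    return m0 * (1.0 - inv_eff * np.exp(-ti / t1))

# Synthetic inversion times (s) and noisy signal for an assumed "true" T1 of 0.9 s.
ti = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])
rng = np.random.default_rng(1)
signal = ir_signal(ti, m0=100.0, t1=0.9, inv_eff=1.9) + rng.normal(0.0, 1.0, ti.size)

popt, _ = curve_fit(ir_signal, ti, signal, p0=[90.0, 1.0, 2.0])
m0_fit, t1_fit, eff_fit = popt
print(f"fitted T1 = {t1_fit:.3f} s, R1 = 1/T1 = {1.0 / t1_fit:.3f} 1/s")
```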

  2. VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Yu-Han Huang

    2017-11-01

    Full Text Available In this paper, we will extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) method to multiple attribute group decision-making (MAGDM) with interval neutrosophic numbers (INNs). Firstly, the basic concepts of INNs are briefly presented. The method first aggregates all individual decision-makers' assessment information based on an interval neutrosophic weighted averaging (INWA) operator, and then employs the extended classical VIKOR method to solve MAGDM problems with INNs. The validity and stability of this method are verified by example analysis and sensitivity analysis, and its superiority is illustrated by a comparison with the existing methods.

  3. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
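
    For orientation, the building block that GSHMC and its MTS variants extend is the plain hybrid Monte Carlo update: a momentum refreshment, a leapfrog trajectory, and a Metropolis test on the Hamiltonian. A minimal sketch on a toy Gaussian target follows; the shadow Hamiltonian, force splitting, and partial momentum update of the paper are deliberately not included.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_density(q):
    """Gradient of log pi(q) for a standard Gaussian target."""
    return -q

def hmc_step(q, step_size=0.2, n_leapfrog=20):
    """One hybrid Monte Carlo update: momentum refresh, leapfrog, Metropolis test."""
    p = rng.standard_normal(q.shape)                     # full momentum refreshment
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_density(q_new)   # initial half kick
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new                       # drift
        p_new += step_size * grad_log_density(q_new)     # kick
    q_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_density(q_new)   # final half kick
    h_old = 0.5 * np.dot(p, p) + 0.5 * np.dot(q, q)      # H = kinetic + potential energy
    h_new = 0.5 * np.dot(p_new, p_new) + 0.5 * np.dot(q_new, q_new)
    if rng.random() < np.exp(h_old - h_new):             # Metropolis accept/reject
        return q_new, True
    return q, False

q = np.zeros(3)
samples = []
for _ in range(1000):
    q, _accepted = hmc_step(q)
    samples.append(q.copy())
print("sample mean:", np.mean(samples, axis=0))
```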

  4. Improved assessment of multiple sclerosis lesion segmentation agreement via detection and outline error estimates

    Directory of Open Access Journals (Sweden)

    Wack David S

    2012-07-01

    Full Text Available Abstract Background Presented is the method "Detection and Outline Error Estimates" (DOEE) for assessing rater agreement in the delineation of multiple sclerosis (MS) lesions. The DOEE method divides operator or rater assessment into two parts: (1) Detection Error (DE) -- rater agreement in detecting the same regions to mark, and (2) Outline Error (OE) -- agreement of the raters in outlining the same lesion. Methods DE, OE and Similarity Index (SI) values were calculated for two raters tested on a set of 17 fluid-attenuated inversion-recovery (FLAIR) images of patients with MS. DE, OE, and SI values were tested for dependence on the mean total area (MTA) of the raters' regions of interest (ROIs). Results When correlated with MTA, neither DE (ρ = .056, p = .83) nor the ratio of OE to MTA (ρ = .23, p = .37), referred to as the Outline Error Rate (OER), exhibited a significant correlation. In contrast, SI was found to be strongly correlated with MTA (ρ = .75). Conclusions The DE and OER indices are proposed as a better method than SI for comparing rater agreement on ROIs, and they also provide specific information that raters can use to improve their agreement.
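
    For reference, the Similarity Index discussed above is the Dice overlap between the two raters' binary lesion masks. The sketch below computes SI together with a simple per-lesion detection check; the connected-component bookkeeping is a generic illustration, not the exact DOEE procedure.

```python
import numpy as np
from scipy import ndimage

def similarity_index(mask_a, mask_b):
    """Dice Similarity Index: 2|A ∩ B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def detected_lesion_fraction(mask_a, mask_b):
    """Fraction of rater A's connected lesions touched by rater B (generic detection check)."""
    labels, n_lesions = ndimage.label(mask_a)
    hits = sum(1 for i in range(1, n_lesions + 1) if mask_b[labels == i].any())
    return hits / max(n_lesions, 1)

# Two synthetic rater masks: one overlapping lesion, one lesion missed by rater B.
rater_a = np.zeros((64, 64), dtype=bool)
rater_b = np.zeros((64, 64), dtype=bool)
rater_a[10:20, 10:20] = True
rater_b[12:22, 12:22] = True
rater_a[40:45, 40:45] = True

print("SI =", round(similarity_index(rater_a, rater_b), 3))
print("fraction of A's lesions detected by B:", detected_lesion_fraction(rater_a, rater_b))
```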

  5. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
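
    A minimal sketch of the first ingredient listed above, locally weighted regression used as a nonparametric sensitivity measure: the fraction of output variance explained by a LOWESS smooth of the output against each sampled input. The toy model and variable names are illustrative, not taken from the WIPP assessment.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def lowess_r2(x, y, frac=0.4):
    """Variance of y explained by a LOWESS smooth of y on a single predictor x."""
    fitted = lowess(y, x, frac=frac, return_sorted=False)
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 500
x1, x2, x3 = rng.uniform(-1, 1, (3, n))                              # sampled model inputs
y = np.sin(3 * x1) + 0.3 * x2 ** 2 + 0.05 * rng.standard_normal(n)   # nonlinear model output

# Inputs with larger explained variance are flagged as more influential.
for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    print(name, "LOWESS R^2 =", round(lowess_r2(x, y), 3))
```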

  6. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  7. Application of improved AHP method to radiation protection optimization

    International Nuclear Information System (INIS)

    Wang Chuan; Zhang Jianguo; Yu Lei

    2014-01-01

    To address the deficiencies of the traditional AHP method, a hierarchy model for selecting the optimal radiation protection scheme was established using an improved AHP method. A comparison between the improved AHP method and the traditional AHP method shows that the improved method reduces the subjectivity of personal judgment and that its calculation process is compact and reasonable. The improved AHP method can provide a scientific basis for radiation protection optimization. (authors)
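
    For orientation, the core computation shared by the traditional and improved AHP variants is the extraction of priority weights from a reciprocal pairwise comparison matrix, together with a consistency check. Below is a minimal sketch of the standard eigenvector method; the comparison values are made up, and the specific improvements of the cited method are not reproduced.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise comparison matrix (principal eigenvector)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                    # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                # normalized priority weights
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)   # Saaty's random index (tabulated)
    return w, ci / ri                              # weights and consistency ratio

# Illustrative 3x3 comparison of radiation protection options.
A = [[1.0, 3.0, 5.0],
     [1 / 3.0, 1.0, 2.0],
     [1 / 5.0, 1 / 2.0, 1.0]]
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```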

  8. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
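
    A minimal sketch of the generic simulated annealing loop referred to above, applied to a placeholder unfolding-style least-squares objective; the response matrix and data are illustrative, not the S-Pb multiplicity measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal(objective, x0, n_iter=20000, t0=1.0, cooling=0.9995, step=0.05):
    """Generic simulated annealing: accept worse moves with probability exp(-dE/T)."""
    x, e = x0.copy(), objective(x0)
    best_x, best_e = x.copy(), e
    t = t0
    for _ in range(n_iter):
        candidate = np.clip(x + step * rng.standard_normal(x.shape), 0.0, None)
        e_new = objective(candidate)
        if e_new < e or rng.random() < np.exp((e - e_new) / t):
            x, e = candidate, e_new
            if e < best_e:
                best_x, best_e = x.copy(), e
        t *= cooling                                     # geometric cooling schedule
    return best_x, best_e

# Placeholder "correction" problem: recover a true distribution from a smeared one.
true = np.array([0.10, 0.30, 0.40, 0.15, 0.05])
response = 0.8 * np.eye(5) + 0.1 * np.eye(5, k=1) + 0.1 * np.eye(5, k=-1)
observed = response @ true
objective = lambda x: np.sum((response @ x - observed) ** 2)

solution, residual = anneal(objective, x0=np.full(5, 0.2))
print(np.round(solution, 3), residual)
```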

  9. A multiple-scale power series method for solving nonlinear ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2016-02-01

    Full Text Available The power series solution is a cheap and effective method for solving nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and nonlinear boundary value problems. A novel power series method is developed by considering multiple scales $R_k$ in the power term $(t/R_k)^k$; the scales are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. In this method, multiplying a huge value by a tiny value is avoided, which decreases the numerical instability that is the main cause of failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provides very accurate numerical solutions of the problems considered in this paper.
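
    The scaled ansatz described above can be written compactly as follows (generic coefficient names; the explicit integral-based choice of the scales is not reproduced here):

```latex
% Multiple-scale power series ansatz: each power gets its own scale R_k so that
% the basis terms (t/R_k)^k stay of moderate size on the solution interval,
% avoiding the huge-times-tiny products of the conventional expansion in t^k.
\[
  u(t) \;\approx\; \sum_{k=0}^{n} a_k \left(\frac{t}{R_k}\right)^{\!k}.
\]
```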

  10. An Advanced Method to Apply Multiple Rainfall Thresholds for Urban Flood Warnings

    Directory of Open Access Journals (Sweden)

    Jiun-Huei Jang

    2015-11-01

    Full Text Available Issuing warning information to the public when rainfall exceeds given thresholds is a simple and widely-used method to minimize flood risk; however, this method lacks sophistication when compared with hydrodynamic simulation. In this study, an advanced methodology is proposed to improve the warning effectiveness of the rainfall threshold method for urban areas through deterministic-stochastic modeling, without sacrificing simplicity and efficiency. With regards to flooding mechanisms, rainfall thresholds of different durations are divided into two groups accounting for flooding caused by drainage overload and disastrous runoff, which help in grading the warning level in terms of emergency and severity when the two are observed together. A flood warning is then classified into four levels distinguished by green, yellow, orange, and red lights in ascending order of priority that indicate the required measures, from standby, flood defense, evacuation to rescue, respectively. The proposed methodology is tested according to 22 historical events in the last 10 years for 252 urbanized townships in Taiwan. The results show satisfactory accuracy in predicting the occurrence and timing of flooding, with a logical warning time series for taking progressive measures. For systems with multiple rainfall thresholds already in place, the methodology can be used to ensure better application of rainfall thresholds in urban flood warnings.
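
    As a toy illustration of the grading logic described above (the pairing of the two threshold groups with the four lights is an assumption made for this sketch, not the calibrated scheme of the study):

```python
# Hypothetical mapping from rainfall-threshold exceedances to the four warning
# lights; the duration groups and level logic below are illustrative assumptions.
def warning_level(drainage_overload_exceeded: bool, disastrous_runoff_exceeded: bool) -> str:
    """Grade the warning by combining the two threshold groups."""
    if drainage_overload_exceeded and disastrous_runoff_exceeded:
        return "red"      # rescue
    if disastrous_runoff_exceeded:
        return "orange"   # evacuation
    if drainage_overload_exceeded:
        return "yellow"   # flood defense
    return "green"        # standby

print(warning_level(True, False))   # -> yellow
print(warning_level(True, True))    # -> red
```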

  11. Multiplicative version of Promethee method in assesment of parks in Novi Sad

    Directory of Open Access Journals (Sweden)

    Lakićević Milena D.

    2017-01-01

    Full Text Available Decision support methods have an important role regarding the environmental and landscape planning problems. In this research, one of the decision support methods - multiplicative version of Promethee - has been applied for assessment of five main parks in Novi Sad. The procedure required defining a set of criteria that were as follows: aesthetic, ecological and social values of analyzed parks. For each criterion an appropriate Promethee preference function was adopted with corresponding threshold values. The final result of the process was the ranking of parks by their aesthetic, ecological and social quality and importance for the City of Novi Sad. The result can help urban planners and responsible city bodies in their future actions aimed at improving development and management of analyzed parks. Two main directions of a future research were identified: (a) testing applicability of other decision support methods, along with Promethee, on the same problem and comparison of their results; and (b) analysis of the criteria set more closely by expanding it and/or including a set of indicators. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. 174003: Theory and application of analytic hierarchy process (AHP) in multi-criteria decision making under conditions of risk and uncertainty (individual and group context)]

  12. Analysis and performance estimation of the Conjugate Gradient method on multiple GPUs

    NARCIS (Netherlands)

    Verschoor, M.; Jalba, A.C.

    2012-01-01

    The Conjugate Gradient (CG) method is a widely-used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large number of Sparse-Matrix Vector (SpMV) multiplications, vector reductions and other vector operations to be performed. We present a number of
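
    For reference, the serial iteration whose GPU mapping is at issue above is short; a minimal single-device NumPy sketch follows, without the sparse storage formats and multi-GPU partitioning the paper is concerned with.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A by the CG iteration."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                # the (Sp)MV step dominates cost on large sparse systems
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```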

  13. Improving the Service with the Servqual Method

    Science.gov (United States)

    Midor, Katarzyna; Kučera, Marian

    2018-03-01

    At a time when the economy is growing, competition in the market is strong, and customers have increasingly high expectations regarding the quality of services and products. Under such conditions, organizations need to improve. One area of improvement for an organization is researching the level of customer satisfaction. The article presents the results of customer satisfaction surveys conducted with the Servqual method in a pharmaceutical service company. Use of this method allowed the pharmaceutical wholesaler to improve the services it provides and to identify the areas that most urgently need improvement in order to raise the level of service.

  14. Multiple Contexts, Multiple Methods: A Study of Academic and Cultural Identity among Children of Immigrant Parents

    Science.gov (United States)

    Urdan, Tim; Munoz, Chantico

    2012-01-01

    Multiple methods were used to examine the academic motivation and cultural identity of a sample of college undergraduates. The children of immigrant parents (CIPs, n = 52) and the children of non-immigrant parents (non-CIPs, n = 42) completed surveys assessing core cultural identity, valuing of cultural accomplishments, academic self-concept,…

  15. Field evaluation of personal sampling methods for multiple bioaerosols.

    Science.gov (United States)

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  16. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    Energy Technology Data Exchange (ETDEWEB)

    Furetta, C. [CICATA-Legaria, Instituto Politecnico Nacional, 11500 Mexico D.F. (Mexico); Guzman, S. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Ruiz, B. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Departamento de Agricultura y Ganaderia, Universidad de Sonora, A.P. 305, 83190 Hermosillo, Sonora (Mexico); Cruz-Zaragoza, E., E-mail: ecruz@nucleares.unam.m [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico)

    2011-02-15

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is presented and analysed in the phosphor materials. However, when the glow peak is more complex, a wide peak and some shoulders appear in the structure. The straightforward application of the Initial Rise Method is then not valid, because multiple trapping levels have to be considered and the thermoluminescent analysis becomes difficult to perform. This paper shows the case of a complex glow curve structure as an example and shows that the calculation is also possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the thermoluminescent glow curve's shape suggests a trap distribution instead of a single trapping level.
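
    For orientation, the single-level relation that the IR analysis exploits on the low-temperature side of a glow peak is the standard one below; the bookkeeping for a distribution of trapping levels, which is the subject of the paper, is not reproduced.

```latex
% Initial-rise relation: on the low-temperature rise of a glow peak the trap
% occupancy is essentially constant, so the thermoluminescence intensity follows
\[
  I(T) \;\propto\; \exp\!\left(-\frac{E}{k_B T}\right)
  \qquad\Longrightarrow\qquad
  \ln I(T) \;=\; \text{const} \;-\; \frac{E}{k_B}\,\frac{1}{T},
\]
% and the activation energy E follows from the slope of ln I versus 1/T.
```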

  17. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    International Nuclear Information System (INIS)

    Furetta, C.; Guzman, S.; Ruiz, B.; Cruz-Zaragoza, E.

    2011-01-01

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is presented and analysed in the phosphor materials. However, when the glow peak is more complex, a wide peak and some shoulders appear in the structure. The straightforward application of the Initial Rise Method is then not valid, because multiple trapping levels have to be considered and the thermoluminescent analysis becomes difficult to perform. This paper shows the case of a complex glow curve structure as an example and shows that the calculation is also possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the thermoluminescent glow curve's shape suggests a trap distribution instead of a single trapping level.

  18. Improved vertical streambed flux estimation using multiple diurnal temperature methods in series

    Science.gov (United States)

    Irvine, Dylan J.; Briggs, Martin A.; Cartwright, Ian; Scruggs, Courtney; Lautz, Laura K.

    2017-01-01

    Analytical solutions that use diurnal temperature signals to estimate vertical fluxes between groundwater and surface water based on either amplitude ratios (Ar) or phase shifts (Δϕ) produce results that rarely agree. Analytical solutions that simultaneously utilize Ar and Δϕ within a single solution have more recently been derived, decreasing uncertainty in flux estimates in some applications. Benefits of combined (ArΔϕ) methods also include that thermal diffusivity and sensor spacing can be calculated. However, poor identification of either Ar or Δϕ from raw temperature signals can lead to erratic parameter estimates from ArΔϕ methods. An add-on program for VFLUX 2 is presented to address this issue. Using thermal diffusivity selected from an ArΔϕ method during a reliable time period, fluxes are recalculated using an Ar method. This approach maximizes the benefits of the Ar and ArΔϕ methods. Additionally, sensor spacing calculations can be used to identify periods with unreliable flux estimates, or to assess streambed scour. Using synthetic and field examples, the use of these solutions in series was particularly useful for gaining conditions where fluxes exceeded 1 m/d.

  19. Energy Route Multi-Objective Optimization of Wireless Power Transfer Network: An Improved Cross-Entropy Method

    Directory of Open Access Journals (Sweden)

    Lijuan Xiang

    2017-06-01

    Full Text Available This paper identifies the Wireless Power Transfer Network (WPTN) as an ideal model for long-distance Wireless Power Transfer (WPT) in a certain region with multiple pieces of electrical equipment. The schematic circuit and design of each power node and the process of power transmission between two power nodes are elaborated. The Improved Cross-Entropy (ICE) method is proposed as an algorithm for solving for the optimal energy route. Non-dominated sorting is introduced for the optimization. A demonstration of the optimization result for a 30-node WPTN system based on the proposed algorithm proves the ICE method to be both effective and efficient.
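
    As a generic illustration of the cross-entropy idea behind the ICE algorithm (sample candidate solutions, keep an elite fraction, refit the sampling distribution), a minimal continuous single-objective sketch follows; the energy-route encoding, the non-dominated sorting, and the multi-objective handling of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy_minimize(objective, dim, n_samples=200, elite_frac=0.1, n_iter=50):
    """Generic cross-entropy method with a Gaussian sampling distribution."""
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iter):
        samples = mean + std * rng.standard_normal((n_samples, dim))
        scores = np.array([objective(s) for s in samples])
        elite = samples[np.argsort(scores)[:n_elite]]              # best candidates
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6   # refit distribution
    return mean, objective(mean)

# Placeholder objective standing in for an energy-route cost.
objective = lambda x: np.sum((x - np.array([1.0, -2.0, 0.5])) ** 2)
best, cost = cross_entropy_minimize(objective, dim=3)
print(np.round(best, 3), round(cost, 6))
```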

  20. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    Science.gov (United States)

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, a key step in winning the initiative in fierce market competition is to improve their R&D ability so as to meet the various demands of customers in a more timely and less costly way. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a certain period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are both demonstrated in the experiment.
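
    For orientation, the velocity and position update that the paper's improved particle swarm algorithm builds on has the standard form below; this is a minimal continuous sketch, and the scheduling-specific encoding, the resource constraints, and the paper's modifications are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(objective, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Basic particle swarm optimization with inertia weight w and learning factors c1, c2."""
    x = rng.uniform(-5, 5, (n_particles, dim))           # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Placeholder objective standing in for a project-schedule cost.
objective = lambda s: np.sum(s ** 2) + 2.0 * np.sum(np.abs(s - 1.0))
best, cost = pso_minimize(objective, dim=4)
print(np.round(best, 3), round(cost, 4))
```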

  1. The improvement of movement and speech during rapid eye movement sleep behaviour disorder in multiple system atrophy.

    Science.gov (United States)

    De Cock, Valérie Cochen; Debs, Rachel; Oudiette, Delphine; Leu, Smaranda; Radji, Fatai; Tiberge, Michel; Yu, Huan; Bayard, Sophie; Roze, Emmanuel; Vidailhet, Marie; Dauvilliers, Yves; Rascol, Olivier; Arnulf, Isabelle

    2011-03-01

    Multiple system atrophy is an atypical parkinsonism characterized by severe motor disabilities that are poorly levodopa responsive. Most patients develop rapid eye movement sleep behaviour disorder. Because parkinsonism is absent during rapid eye movement sleep behaviour disorder in patients with Parkinson's disease, we studied the movements of patients with multiple system atrophy during rapid eye movement sleep. Forty-nine non-demented patients with multiple system atrophy and 49 patients with idiopathic Parkinson's disease were interviewed along with their 98 bed partners using a structured questionnaire. They rated the quality of movements, vocal and facial expressions during rapid eye movement sleep behaviour disorder as better than, equal to or worse than the same activities in an awake state. Sleep and movements were monitored using video-polysomnography in 22/49 patients with multiple system atrophy and in 19/49 patients with Parkinson's disease. These recordings were analysed for the presence of parkinsonism and cerebellar syndrome during rapid eye movement sleep movements. Clinical rapid eye movement sleep behaviour disorder was observed in 43/49 (88%) patients with multiple system atrophy. Reports from the 31/43 bed partners who were able to evaluate movements during sleep indicate that 81% of the patients showed some form of improvement during rapid eye movement sleep behaviour disorder. These included improved movement (73% of patients: faster, 67%; stronger, 52%; and smoother, 26%), improved speech (59% of patients: louder, 55%; more intelligible, 17%; and better articulated, 36%) and normalized facial expression (50% of patients). The rate of improvement was higher in Parkinson's disease than in multiple system atrophy, but no further difference was observed between the two forms of multiple system atrophy (predominant parkinsonism versus cerebellar syndrome). Video-monitored movements during rapid eye movement sleep in patients with multiple system

  2. A neutron multiplicity analysis method for uranium samples with liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Hao, E-mail: zhouhao_ciae@126.com [China Institute of Atomic Energy, P.O.BOX 275-8, Beijing 102413 (China); Lin, Hongtao [Xi' an Reasearch Institute of High-tech, Xi' an, Shaanxi 710025 (China); Liu, Guorong; Li, Jinghuai; Liang, Qinglei; Zhao, Yonggang [China Institute of Atomic Energy, P.O.BOX 275-8, Beijing 102413 (China)

    2015-10-11

    A new neutron multiplicity analysis method for uranium samples with liquid scintillators is introduced. An active well-type fast neutron multiplicity counter has been built, which consists of four BC501A liquid scintillators, an n/γ discrimination module MPD-4, a multi-stop time-to-digital convertor MCS6A, and two Am–Li sources. A mathematical model is built to represent the detection process of fission neutrons. Based on this model, equations of the form R=F*P*Q*T can be obtained, where F denotes the fission rate induced by the interrogation sources, P the transfer matrix determined by the multiplication process, Q the transfer matrix determined by the detection efficiency, and T the transfer matrix determined by the signal recording process and crosstalk in the counter. Unknown parameters of the item are determined by solving these equations. A 252Cf source and some low enriched uranium items have been measured. The feasibility of the method is proven by its application to the data analysis of the experiments.

  3. Numerical Simulation of Antennas with Improved Integral Equation Method

    International Nuclear Information System (INIS)

    Ma Ji; Fang Guang-You; Lu Wei

    2015-01-01

    Simulating antennas around a conducting object is a challenging task in computational electromagnetism, which is concerned with the behaviour of electromagnetic fields. To analyze this model efficiently, an improved integral equation-fast Fourier transform (IE-FFT) algorithm is presented in this paper. The proposed scheme employs two Cartesian grids with different sizes and locations to enclose the antenna and the other object, respectively. On the one hand, the IE-FFT technique is used to store the matrix in a sparse form and to accelerate the matrix-vector multiplication for each sub-domain independently. On the other hand, the mutual interaction between sub-domains is taken as an additional exciting voltage in each matrix equation. By updating the integral equations several times, the whole electromagnetic system reaches a stable state. Finally, the validity of the presented method is verified through the analysis of typical antennas in the presence of a conducting object. (paper)

  4. Statistics of electron multiplication in a multiplier phototube; Iterative method

    International Nuclear Information System (INIS)

    Ortiz, J. F.; Grau, A.

    1985-01-01

    In the present paper an iterative method is applied to study the variation of the dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following ways of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r electrons. The responses are given for a number of steps between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (Author) 11 refs.

  5. A multiple-well method for immunohistochemical testing of many reagents on a single microscopic slide.

    Science.gov (United States)

    McKeever, P E; Letica, L H; Shakui, P; Averill, D R

    1988-09-01

    Multiple wells (M-wells) have been made over tissue sections on single microscopic slides to simultaneously localize binding specificity of many antibodies. More than 20 individual 4-microliter wells over tissue have been applied/slide, representing more than a 5-fold improvement in wells/slide and a 25-fold reduction in reagent volume over previous methods. More than 30 wells/slide have been applied over cellular monolayers. To produce the improvement, previous strategies of placing specimens into wells were changed to instead create wells over the specimen. We took advantage of the hydrophobic properties of paint to surround the wells and to segregate the various different primary antibodies. Segregation was complete on wells alternating with and without primary monoclonal antibody. The procedure accommodates both frozen and paraffin sections, yielding slides which last more than a year. After monoclonal antibody detection, standard histologic stains can be applied as counterstains. M-wells are suitable for localizing binding of multiple reagents or sample unknowns (polyclonal or monoclonal antibodies, hybridoma supernatants, body fluids, lectins) to either tissues or cells. Their small sample volume and large number of sample wells/slide could be particularly useful for early screening of hybridoma supernatants and for titration curves in immunohistochemistry (McKeever PE, Shakui P, Letica LH, Averill DR: J Histochem Cytochem 36:931, 1988).

  6. Improved power performance assessment methods

    Energy Technology Data Exchange (ETDEWEB)

    Frandsen, S; Antoniou, I; Dahlberg, J A [and others]

    1999-03-01

    The uncertainty of presently-used methods for retrospective assessment of the productive capacity of wind farms is unacceptably large. The possibilities of improving the accuracy have been investigated and are reported. A method is presented that includes an extended power curve and site calibration. In addition, blockage effects with respect to reference wind speed measurements are analysed. It is found that significant accuracy improvements are possible by the introduction of more input variables such as turbulence and wind shear, in addition to mean wind speed and air density. Also, the testing of several or all machines in the wind farm - instead of only one or two - may provide a better estimate of the average performance. (au)

  7. The Initial Rise Method in the case of multiple trapping levels

    International Nuclear Information System (INIS)

    Furetta, C.; Guzman, S.; Cruz Z, E.

    2009-10-01

    The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to the minerals extracted from Nopal herb and Oregano spice because the thermoluminescent glow curves' shape suggests a trap distribution instead of a single trapping level. (Author)

  8. The Initial Rise Method in the case of multiple trapping levels

    Energy Technology Data Exchange (ETDEWEB)

    Furetta, C. [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada, IPN, Av. Legaria 694, Col. Irrigacion, 11500 Mexico D. F. (Mexico); Guzman, S.; Cruz Z, E. [Instituto de Ciencias Nucleares, UNAM, A. P. 70-543, 04510 Mexico D. F. (Mexico)

    2009-10-15

    The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to the minerals extracted from Nopal herb and Oregano spice because the thermoluminescent glow curves' shape suggests a trap distribution instead of a single trapping level. (Author)

  9. Method to measure the position offset of multiple light spots in a distributed aperture laser angle measurement system.

    Science.gov (United States)

    Jing, Xiaoli; Cheng, Haobo; Xu, Chunyun; Feng, Yunpeng

    2017-02-20

    In this paper, an accurate measurement method of multiple spots' position offsets on a four-quadrant detector is proposed for a distributed aperture laser angle measurement system (DALAMS). The theoretical model is put forward, as well as the corresponding calculation method. This method includes two steps. First, as the initial estimation, integral approximation is applied to fit the distributed spots' offset function; second, the Boltzmann function is employed to compensate for the estimation error to improve detection accuracy. The simulation results attest to the correctness and effectiveness of the proposed method, and tolerance synthesis analysis of DALAMS is conducted to determine the maximum uncertainties of manufacturing and installation. The maximum angle error is less than 0.08° in the prototype distributed measurement system, which shows the stability and robustness for prospective applications.

  10. The initial rise method extended to multiple trapping levels in thermoluminescent materials.

    Science.gov (United States)

    Furetta, C; Guzmán, S; Ruiz, B; Cruz-Zaragoza, E

    2011-02-01

    The well-known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor materials. However, when the glow curve is more complex, a wide peak with some shoulders appears in the structure, and a straightforward application of the Initial Rise Method is not valid because multiple trapping levels are involved, which makes the thermoluminescence analysis difficult to perform. This paper takes such a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well-known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the thermoluminescent glow curve's shape suggests a trap distribution instead of a single trapping level. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. Detection and Identification of Multiple Stationary Human Targets Via Bio-Radar Based on the Cross-Correlation Method

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2016-10-01

    Full Text Available Ultra-wideband (UWB) radar has been widely used for detecting human physiological signals (respiration, movement, etc.) in the fields of rescue, security, and medicine owing to its high penetrability and range resolution. In these applications, especially in rescue after disaster (earthquake, collapse, mine accident, etc.), the presence, number, and location of the trapped victims to be detected and rescued are the key issues of concern. Ample research has been done on the first issue, whereas the identification and localization of multi-targets remains a challenge. False positive and negative identification results are two common problems associated with the detection of multiple stationary human targets. This is mainly because the energy of the signal reflected from the target close to the receiving antenna is considerably stronger than those of the targets at further range, often leading to missing or false recognition if the identification method is based on the energy of the respiratory signal. Therefore, a novel method based on cross-correlation is proposed in this paper that is based on the relativity and periodicity of the signals, rather than on the energy. The validity of this method is confirmed through experiments using different scenarios; the results indicate a discernible improvement in the detection precision and identification of the multiple stationary targets.

  12. Detection and Identification of Multiple Stationary Human Targets Via Bio-Radar Based on the Cross-Correlation Method.

    Science.gov (United States)

    Zhang, Yang; Chen, Fuming; Xue, Huijun; Li, Zhao; An, Qiang; Wang, Jianqi; Zhang, Yang

    2016-10-27

    Ultra-wideband (UWB) radar has been widely used for detecting human physiological signals (respiration, movement, etc.) in the fields of rescue, security, and medicine owing to its high penetrability and range resolution. In these applications, especially in rescue after disaster (earthquake, collapse, mine accident, etc.), the presence, number, and location of the trapped victims to be detected and rescued are the key issues of concern. Ample research has been done on the first issue, whereas the identification and localization of multi-targets remains a challenge. False positive and negative identification results are two common problems associated with the detection of multiple stationary human targets. This is mainly because the energy of the signal reflected from the target close to the receiving antenna is considerably stronger than those of the targets at further range, often leading to missing or false recognition if the identification method is based on the energy of the respiratory signal. Therefore, a novel method based on cross-correlation is proposed in this paper that is based on the relativity and periodicity of the signals, rather than on the energy. The validity of this method is confirmed through experiments using different scenarios; the results indicate a discernible improvement in the detection precision and identification of the multiple stationary targets.
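
    The following sketch illustrates the general idea of scoring range bins by correlation and periodicity rather than by energy. It is not the authors' algorithm: the slow-time sampling rate, the reference waveform, the detection threshold and the synthetic scene are all invented for the example.

        import numpy as np

        def normalized_xcorr_peak(a, b):
            """Peak absolute value of the normalized cross-correlation of two signals."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return np.max(np.abs(np.correlate(a, b, mode="full"))) / len(a)

        def detect_respiration_bins(slow_time, reference, threshold=0.4):
            """slow_time: (n_range_bins, n_samples) slow-time signal per range bin.
            reference: a respiration-like waveform (in practice taken from a clearly
            detected bin).  Bins are flagged by correlation with the reference,
            independently of their absolute echo energy."""
            scores = np.array([normalized_xcorr_peak(s, reference) for s in slow_time])
            return np.where(scores > threshold)[0], scores

        if __name__ == "__main__":
            fs = 20.0                                  # slow-time sampling rate, Hz
            t = np.arange(0, 30.0, 1.0 / fs)           # 30 s observation
            rng = np.random.default_rng(0)
            breath = np.sin(2 * np.pi * 0.3 * t)       # ~0.3 Hz respiration waveform
            data = np.vstack([
                1.0 * breath + 0.15 * rng.standard_normal(t.size),               # near target, strong echo
                0.2 * np.roll(breath, 15) + 0.15 * rng.standard_normal(t.size),  # far target, weak echo
                0.15 * rng.standard_normal(t.size),                              # empty range bin
            ])
            hits, scores = detect_respiration_bins(data, reference=breath)
            print("detected bins:", hits, "scores:", np.round(scores, 2))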

  13. A Simple and Efficient Methodology To Improve Geometric Accuracy in Gamma Knife Radiation Surgery: Implementation in Multiple Brain Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Karaiskos, Pantelis, E-mail: pkaraisk@med.uoa.gr [Medical Physics Laboratory, Medical School, University of Athens (Greece); Gamma Knife Department, Hygeia Hospital, Athens (Greece); Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos [Medical Physics Laboratory, Medical School, University of Athens (Greece); Roussakis, Arkadios [CT and MRI Department, Hygeia Hospital, Athens (Greece); Torrens, Michael [Gamma Knife Department, Hygeia Hospital, Athens (Greece); Seimenis, Ioannis [Medical Physics Laboratory, Medical School, Democritus University of Thrace, Alexandroupolis (Greece)

    2014-12-01

    Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.
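
    The compounding step itself, averaging the routine series with an otherwise identical series acquired with reversed read-gradient polarity, is numerically simple once the two series exist as voxel arrays with the same geometry. The sketch below assumes co-registered NIfTI volumes and uses nibabel; the file names are placeholders and this is only an illustration of the averaging step, not the full clinical workflow.

        import nibabel as nib
        import numpy as np

        # Hypothetical file names; both series must share geometry and imaging
        # parameters, differing only in read-gradient polarity.
        forward = nib.load("t1_read_forward.nii.gz")
        reverse = nib.load("t1_read_reverse.nii.gz")

        # Voxel-wise average: structures displaced by equal and opposite amounts
        # along the read direction in the two series end up centred at their
        # distortion-free position in the average (at the cost of slight blurring).
        avg = 0.5 * (forward.get_fdata() + reverse.get_fdata())

        nib.save(nib.Nifti1Image(avg.astype(np.float32), forward.affine, forward.header),
                 "t1_read_averaged.nii.gz")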

  14. Normalized Rotational Multiple Yield Surface Framework (NRMYSF) stress-strain curve prediction method based on small strain triaxial test data on undisturbed Auckland residual clay soils

    Science.gov (United States)

    Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.

    2018-04-01

    Local small-strain measurement in the triaxial test is considered significantly more accurate than external strain measurement by the conventional method, which is affected by the systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from one another. The setup, using a 0.4 N resolution load cell and a 16-bit A/D converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength that can reduce the cost and time of laboratory testing.

  15. Field evaluation of personal sampling methods for multiple bioaerosols.

    Directory of Open Access Journals (Sweden)

    Chi-Hsun Wang

    Full Text Available Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  16. Multiple Input - Multiple Output (MIMO) SAR

    Data.gov (United States)

    National Aeronautics and Space Administration — This effort will research and implement advanced Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) techniques which have the potential to improve...

  17. An improved tidal method without water level

    Science.gov (United States)

    Luo, xiaowen

    2017-04-01

    Most tide observations are currently obtained with water-level and pressure-type gauges, but these instruments are difficult to install and their readings have low accuracy. To improve tidal accuracy, an improved method is introduced in which the sea level at a given time is obtained from a high-precision GNSS buoy combined with the instantaneous position from a pressure gauge. The method has two steps: (1) GNSS time service is used as the synchronization reference for the tidal measurement; (2) centimeter-level sea-surface positions are obtained in real time using differential GNSS. The improved method was applied in a seafloor topography survey; at 145 cross points, 95% of the results met the requirements of the hydrographic survey specification. It is an effective way to obtain higher-accuracy tides.

  18. Improved radionuclide bone imaging agent injection needle withdrawal method can improve image quality

    International Nuclear Information System (INIS)

    Qin Yongmei; Wang Laihao; Zhao Lihua; Guo Xiaogang; Kong Qingfeng

    2009-01-01

    Objective: To investigate whether an improved needle-withdrawal method for radionuclide bone imaging agent injection improves whole-body bone scan image quality. Methods: In the routine group (117 cases), the imaging agent was injected directly through an elbow-vein syringe needle, the needle was withdrawn rapidly, and the puncture point was pressed with a cotton swab only momentarily. In the improved group (117 cases), a swab was placed over both the skin and the vessel entry points, and these points were pressed for 5 min or more while the needle was withdrawn. Whole-body planar bone SPECT imaging was performed 2 hours later. Results: The uptake rate of the imaging agent at the injection site was 16.24% in the conventional group and 2.56% in the improved group. Conclusion: With the modified needle-withdrawal method for bone imaging agent injection, injection-site uptake of the imaging agent was significantly decreased, which can improve whole-body bone image quality. (authors)

  19. A new fast method for inferring multiple consensus trees using k-medoids.

    Science.gov (United States)

    Tahiri, Nadia; Willems, Matthieu; Makarenkov, Vladimir

    2018-04-05

    Gene trees carry important information about specific evolutionary patterns which characterize the evolution of the corresponding gene families. However, a reliable species consensus tree cannot be inferred from a multiple sequence alignment of a single gene family or from the concatenation of alignments corresponding to gene families having different evolutionary histories. These evolutionary histories can be quite different due to horizontal transfer events or to ancient gene duplications which cause the emergence of paralogs within a genome. Many methods have been proposed to infer a single consensus tree from a collection of gene trees. Still, the application of these tree merging methods can lead to the loss of specific evolutionary patterns which characterize some gene families or some groups of gene families. Thus, the problem of inferring multiple consensus trees from a given set of gene trees becomes relevant. We describe a new fast method for inferring multiple consensus trees from a given set of phylogenetic trees (i.e. additive trees or X-trees) defined on the same set of species (i.e. objects or taxa). The traditional consensus approach yields a single consensus tree. We use the popular k-medoids partitioning algorithm to divide a given set of trees into several clusters of trees. We propose novel versions of the well-known Silhouette and Caliński-Harabasz cluster validity indices that are adapted for tree clustering with k-medoids. The efficiency of the new method was assessed using both synthetic and real data, such as a well-known phylogenetic dataset consisting of 47 gene trees inferred for 14 archaeal organisms. The method described here allows inference of multiple consensus trees from a given set of gene trees. It can be used to identify groups of gene trees having similar intragroup and different intergroup evolutionary histories. The main advantage of our method is that it is much faster than the existing tree clustering approaches, while
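
    To make the clustering step concrete, the sketch below runs a plain k-medoids loop on a precomputed matrix of pairwise distances between trees (for example Robinson-Foulds distances computed elsewhere). It is a generic illustration rather than the authors' implementation: the toy distance matrix is invented, and the adapted Silhouette and Calinski-Harabasz validity indices used to choose the number of clusters are omitted.

        import numpy as np

        def k_medoids(dist, k, n_iter=100, seed=0):
            """Cluster items given a symmetric distance matrix `dist` (n x n)."""
            rng = np.random.default_rng(seed)
            n = dist.shape[0]
            medoids = rng.choice(n, size=k, replace=False)
            for _ in range(n_iter):
                labels = np.argmin(dist[:, medoids], axis=1)      # assign to nearest medoid
                new_medoids = medoids.copy()
                for c in range(k):                                # update each medoid
                    members = np.where(labels == c)[0]
                    if members.size:
                        costs = dist[np.ix_(members, members)].sum(axis=1)
                        new_medoids[c] = members[np.argmin(costs)]
                if np.array_equal(new_medoids, medoids):
                    break
                medoids = new_medoids
            return labels, medoids

        if __name__ == "__main__":
            # Toy matrix standing in for pairwise Robinson-Foulds distances between
            # gene trees; two obvious groups of three trees each.
            d = np.array([[0, 1, 1, 8, 9, 8],
                          [1, 0, 2, 9, 8, 9],
                          [1, 2, 0, 8, 8, 9],
                          [8, 9, 8, 0, 1, 2],
                          [9, 8, 8, 1, 0, 1],
                          [8, 9, 9, 2, 1, 0]], dtype=float)
            print(k_medoids(d, k=2))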

  20. Measurement of subcritical multiplication by the interval distribution method

    International Nuclear Information System (INIS)

    Nelson, G.W.

    1985-01-01

    The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data is analyzed by least-squares fitting to a theoretical distribution function derived from a point reactor probability model. Published results of measurements with one- and two-detector systems are discussed. Data collection times are shorter, and statistical errors are smaller the nearer the system is to delayed critical. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method

  1. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    Science.gov (United States)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
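
    The CMIF mentioned above is conventionally computed as the singular values of the multiple-reference FRF matrix at each spectral line. The sketch below implements that standard definition; the shape of the FRF array is an assumption, and this is offered as a generic illustration rather than the authors' code.

        import numpy as np

        def cmif(frf):
            """Complex Mode Indicator Function: singular values of the multi-reference
            FRF matrix at each spectral line.
            frf: complex array of shape (n_freq, n_outputs, n_references).
            Returns an array of shape (n_freq, min(n_outputs, n_references))."""
            return np.array([np.linalg.svd(h, compute_uv=False) for h in frf])

        # Usage (array shape assumed): peaks of cmif(H)[:, 0] over frequency indicate
        # modes; simultaneous peaks in cmif(H)[:, 1] suggest repeated or closely
        # spaced roots, which helps choose the order of the rational polynomials.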

  2. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment the MS lesions, the normal tissue region (including GM and WM), the CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability of the original level set method to precisely locate object boundaries, while simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. System and method for image registration of multiple video streams

    Science.gov (United States)

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  4. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    Science.gov (United States)

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE of MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
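
    To illustrate the shrinkage idea, the sketch below applies a generic normal-normal (empirical Bayes) adjustment that pulls per-variant MLE estimates toward their regional mean, with the between-variant variance obtained from a simple method-of-moments step. This is an illustration of local-multiplicity shrinkage in general, not the authors' exact MEA model, and all inputs are toy values.

        import numpy as np

        def shrink_to_region(beta_hat, se):
            """Empirical-Bayes shrinkage of per-variant effect estimates toward the
            regional mean.  beta_hat, se: MLE estimates and their standard errors
            for the variants in one region."""
            beta_hat, se = np.asarray(beta_hat, float), np.asarray(se, float)
            w = 1.0 / se**2
            mu = np.sum(w * beta_hat) / np.sum(w)               # regional mean effect
            # Method-of-moments estimate of the between-variant variance tau^2
            tau2 = max(np.mean((beta_hat - mu) ** 2 - se**2), 0.0)
            if tau2 == 0.0:
                return np.full_like(beta_hat, mu)               # complete shrinkage
            shrink = tau2 / (tau2 + se**2)                      # per-variant weight
            return mu + shrink * (beta_hat - mu)

        if __name__ == "__main__":
            # Toy region: one large, possibly selection-inflated estimate among nulls.
            print(shrink_to_region([0.05, -0.02, 0.60, 0.01], [0.10, 0.10, 0.12, 0.10]))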

  5. Factors which influence directional coarsening of Gamma prime during creep in nickel-base superalloy single crystals

    International Nuclear Information System (INIS)

    Mackay, R.A.; Ebert, L.J.

    1984-01-01

    Changes in the morphology of the gamma prime precipitate were examined as a function of time during creep at 982 °C in [001]-oriented single crystals of a Ni-Al-Mo-Ta superalloy. In this alloy, which has a large negative misfit of -0.80 pct, the gamma prime particles link together during creep to form platelets, or rafts, which are aligned with their broad faces perpendicular to the applied tensile axis. The effects of initial microstructure and alloy composition on raft development and creep properties were investigated. Directional coarsening of gamma prime begins during primary creep and continues well after the onset of second-stage creep. The thickness of the rafts remains constant up through the onset of tertiary creep, a clear indication of the stability of the finely spaced gamma/gamma prime lamellar structure. The thickness of the rafts which formed was equal to the initial gamma prime size present prior to testing. The single crystals with the finest gamma prime size exhibited the longest creep lives, because the resultant rafted structure had a larger number of gamma/gamma prime interfaces per unit volume of material. Reducing the Mo content by only 0.73 wt pct increased the creep life by a factor of three, because the precipitation of a third phase was eliminated.

  6. Improved quasi-static nodal Green's function method

    International Nuclear Information System (INIS)

    Li Junli; Jing Xingqing; Hu Dapu

    1997-01-01

    The Improved Quasi-Static Nodal Green's Function Method (IQS/NGFM) is presented as a new kinetic method. To solve three-dimensional transient problems, the improved quasi-static approach is adopted for the temporal problem, which lengthens the time step as much as possible so as to decrease the number of spatial calculations. The time step of IQS/NGFM can be increased to 5∼10 times longer than that of a fully implicit difference method. In the spatial calculation, the NGFM is used to obtain the distribution of the shape function, and its spatial mesh can be nearly 20 times larger than that of a finite difference method. The IQS/NGFM is therefore considered an efficient kinetic method.

  7. Corrected multiple upsets and bit reversals for improved 1-s resolution measurements

    International Nuclear Information System (INIS)

    Brucker, G.J.; Stassinopoulos, E.G.; Stauffer, C.A.

    1994-01-01

    Previous work has studied the generation of single and multiple errors in control and irradiated static RAM samples (Harris 6504RH) which were exposed to heavy ions for relatively long intervals of time (minutes), and read out only after the beam was shut off. The present investigation involved storing 4k x 1 bit maps every second during 1 min ion exposures at low flux rates of 10³ ions/cm²·s in order to reduce the chance of two sequential ions upsetting adjacent bits. The data were analyzed for the presence of adjacent upset bit locations in the physical memory plane, which were previously defined to constitute multiple upsets. Improvement in the time resolution of these measurements has provided more accurate estimates of multiple upsets. The results indicate that the percentage of multiples decreased from a high of 17% in the previous experiment to less than 1% for this new experimental technique. Consecutive double and triple upsets (reversals of bits) were detected. These were caused by sequential ions hitting the same bit, with one or two reversals of state occurring in a 1-min run. In addition to these results, a status review for these same parts covering 3.5 years of imprint damage recovery is also presented.

  8. Application of multiple timestep integration method in SSC

    International Nuclear Information System (INIS)

    Guppy, J.G.

    1979-01-01

    The thermohydraulic transient simulation of an entire LMFBR system is, by its very nature, complex. Physically, the entire plant consists of many subsystems which are coupled by various processes and/or components. The characteristic integration timesteps for these processes/components can vary over a wide range. To improve computing efficiency, a multiple timestep scheme (MTS) approach has been used in the development of the Super System Code (SSC). In this paper: (1) the partitioning of the system and the timestep control are described, and (2) results are presented showing a savings in computer running time using the MTS of as much as five times the time required using a single timestep scheme
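
    A stripped-down illustration of the multiple-timestep idea in general, not of the SSC partitioning itself: a slow subsystem is advanced with a large step while a fast subsystem is sub-cycled with a smaller step inside each slow step. The two right-hand sides, the step sizes and the coupling are invented for the example.

        def rhs_slow(y_slow, y_fast):
            return -0.05 * y_slow + 0.01 * y_fast      # slowly varying process

        def rhs_fast(y_fast, y_slow):
            return -5.0 * y_fast + 0.5 * y_slow        # fast process, needs small steps

        def integrate_mts(t_end=10.0, dt_slow=0.1, n_sub=20, y_slow=1.0, y_fast=0.0):
            """Explicit-Euler multiple-timestep scheme: each slow step of size dt_slow
            contains n_sub sub-steps of the fast subsystem (dt_fast = dt_slow / n_sub),
            using the frozen slow state over the sub-cycle."""
            dt_fast = dt_slow / n_sub
            for _ in range(int(round(t_end / dt_slow))):
                for _ in range(n_sub):                              # sub-cycle fast part
                    y_fast += dt_fast * rhs_fast(y_fast, y_slow)
                y_slow += dt_slow * rhs_slow(y_slow, y_fast)        # one large slow step
            return y_slow, y_fast

        if __name__ == "__main__":
            print(integrate_mts())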

  9. Studies of fuel loading pattern optimization for a typical pressurized water reactor (PWR) using improved pivot particle swarm method

    International Nuclear Information System (INIS)

    Liu, Shichang; Cai, Jiejin

    2012-01-01

    Highlights: ► The mathematical model of loading pattern problems for PWRs has been established. ► IPPSO was integrated with ‘donjon’ and ‘dragon’ into a fuel arrangement optimization code. ► The novel method is highly efficient for LP problems. ► The core effective multiplication factor increases by about 10% in the simulated cases. ► The power peaking factor decreases by about 0.6% in the simulated cases. -- Abstract: An in-core fuel reload design tool using the improved pivot particle swarm method was developed for loading pattern optimization problems in a typical PWR, such as the Daya Bay Nuclear Power Plant. The discrete, multi-objective improved pivot particle swarm optimization was integrated with the in-core physics calculation code ‘donjon’, based on the finite element method, and the assembly group-constant calculation code ‘dragon’, composing the optimization code for fuel arrangement. Both ‘donjon’ and ‘dragon’ were programmed by the Institute of Nuclear Engineering of Polytechnique Montréal, Canada. This optimization code aims to maximize the core effective multiplication factor (Keff), while keeping the local power peaking factor (Ppf) lower than a predetermined value to maintain fuel integrity. Finally, the code was applied to the first cycle loading of the Daya Bay Nuclear Power Plant. The result showed that, compared with the reference loading pattern design, the core effective multiplication factor increased by 9.6%, while the power peaking factor decreased by 0.6%, meeting the safety requirement.

  10. An improved front tracking method for the Euler equations

    NARCIS (Netherlands)

    Witteveen, J.A.S.; Koren, B.; Bakker, P.G.

    2007-01-01

    An improved front tracking method for hyperbolic conservation laws is presented. The improved method accurately resolves discontinuities as well as continuous phenomena. The method is based on an improved front interaction model for a physically more accurate modeling of the Euler equations, as

  11. An adaptive EFG-FE coupling method for elasto-plastic contact of rough surfaces

    International Nuclear Information System (INIS)

    Liu Lan; Liu Geng; Tong Ruiting; Jin Saiying

    2010-01-01

    Unlike the Finite Element Method, the meshless method does not need any mesh information and can arrange nodes freely, which makes it well suited for adaptive analysis. In order to simulate the contact condition realistically and improve computational efficiency, an adaptive procedure for an Element-free Galerkin-Finite Element (EFG-FE) coupling contact model is established and developed to investigate the elastoplastic contact performance of engineering rough surfaces. A local adaptive refinement strategy combined with a strain-energy-gradient-based error estimation model is employed. The schemes, including the underlying principles, algorithmic analysis and programming implementation, are introduced and discussed. Furthermore, some related parameters of the adaptive convergence criterion are examined in detail, including the adaptation-stop criterion and the refinement or coarsening criterion, which are guided by the relative error in total strain energy between two adjacent stages. Based on pioneering works on the EFG-FE coupling method for contact problems, an adaptive EFG-FE model for asperity contact is studied. Compared with the solutions obtained from the uniform refinement model, the adaptation results indicate that the adaptive method presented in this paper is capable of solving asperity contact problems with excellent calculation accuracy and computational efficiency.

  12. Geometric calibration method for multiple head cone beam SPECT systems

    International Nuclear Information System (INIS)

    Rizo, Ph.; Grangeat, P.; Guillemaud, R.; Sauze, R.

    1993-01-01

    A method is presented for performing geometric calibration on Single Photon Emission Tomography (SPECT) cone beam systems with multiple cone beam collimators, each having its own orientation parameters. This calibration method relies on the fact that, in tomography, for each head, the relative position of the rotation axis and of the collimator does not change during the acquisition. In order to ensure the stability of the method, the parameters to be estimated are separated into intrinsic parameters and extrinsic parameters. The intrinsic parameters describe the acquisition geometry, and the extrinsic parameters describe the position of the detection system with respect to the rotation axis. (authors) 3 refs

  13. Cantilever piezoelectric energy harvester with multiple cavities

    International Nuclear Information System (INIS)

    S Srinivasulu Raju; M Umapathy; G Uma

    2015-01-01

    Energy harvesting employing piezoelectric materials in mechanical structures such as cantilever beams, plates, diaphragms, etc, has been an emerging area of research in recent years. The research in this area is also focused on structural tailoring to improve the harvested power from the energy harvesters. Towards this aim, this paper presents a method for improving the harvested power from a cantilever piezoelectric energy harvester by introducing multiple rectangular cavities. A generalized model for a piezoelectric energy harvester with multiple rectangular cavities at a single section and two sections is developed. A method is suggested to optimize the thickness of the cavities and the number of cavities required to generate a higher output voltage for a given cantilever beam structure. The performance of the optimized energy harvesters is evaluated analytically and through experimentation. The simulation and experimental results show that the performance of the energy harvester can be increased with multiple cavities compared to the harvester with a single cavity. (paper)

  14. Comparison of multiple-criteria decision-making methods - results of simulation study

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2016-12-01

    Full Text Available Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires the parameterization and execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of two multiple-criteria decision-making methods are very similar. Differences occurred more frequently in lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
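
    For readers unfamiliar with the two methods being simulated, the sketch below scores two alternatives with a weighted sum (WSM) and with criteria weights derived from an AHP-style pairwise comparison matrix via its principal eigenvector. The criterion values, weights and comparison matrix are invented, and the full AHP hierarchy and consistency checks are omitted.

        import numpy as np

        # Invented decision problem: 2 alternatives scored on 3 criteria (higher = better),
        # values already normalised to a common scale.
        values = np.array([[0.7, 0.4, 0.9],     # alternative A
                           [0.6, 0.8, 0.5]])    # alternative B

        # WSM: the score is simply the weighted sum of the criterion values.
        weights_wsm = np.array([0.5, 0.3, 0.2])
        print("WSM scores :", values @ weights_wsm)

        # AHP-style weights: principal eigenvector of a pairwise comparison matrix
        # of the criteria, normalised to sum to one.
        pairwise = np.array([[1.0, 2.0, 3.0],
                             [0.5, 1.0, 1.5],
                             [1/3, 2/3, 1.0]])
        vals, vecs = np.linalg.eig(pairwise)
        w_ahp = np.abs(vecs[:, np.argmax(vals.real)].real)
        w_ahp /= w_ahp.sum()
        print("AHP weights:", w_ahp)
        print("AHP scores :", values @ w_ahp)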

  15. Optimized simultaneous inversion of primary and multiple reflections; Inversion linearisee simultanee des reflexions primaires et des reflexions multiples

    Energy Technology Data Exchange (ETDEWEB)

    Pelle, L.

    2003-12-01

    The removal of multiple reflections remains a real problem in seismic imaging. Many preprocessing methods have been developed to attenuate multiples in seismic data but none of them is satisfactory in 3D. The objective of this thesis is to develop a new method to remove multiples, extensible to 3D. Contrary to the existing methods, our approach is not a preprocessing step: we directly include the multiple removal in the imaging process by means of a simultaneous inversion of primaries and multiples. We then propose to improve the standard linearized inversion so as to make it insensitive to the presence of multiples in the data. We exploit kinematic differences between primaries and multiples. We propose to pick in the data the kinematics of the multiples we want to remove. The wave field is decomposed into primaries and multiples. Primaries are modeled by the Ray+Born operator from perturbations of the logarithm of impedance, given the velocity field. Multiples are modeled by the Transport operator from an initial trace, given the picking. The inverse problem simultaneously fits primaries and multiples to the data. To solve this problem with two unknowns, we take advantage of the isometric nature of the Transport operator, which allows us to drastically reduce the CPU time: the simultaneous inversion is thus almost as fast as the standard linearized inversion. This gain of time opens the way to different applications of multiple removal and, in particular, allows us to foresee a straightforward 3D extension. (author)

  16. Hesitant fuzzy methods for multiple criteria decision analysis

    CERN Document Server

    Zhang, Xiaolu

    2017-01-01

    The book offers a comprehensive introduction to methods for solving multiple criteria decision making and group decision making problems with hesitant fuzzy information. It reports on the authors’ latest research, as well as on others’ research, providing readers with a complete set of decision making tools, such as hesitant fuzzy TOPSIS, hesitant fuzzy TODIM, hesitant fuzzy LINMAP, hesitant fuzzy QUALIFEX, and the deviation modeling approach with heterogeneous fuzzy information. The main focus is on decision making problems in which the criteria values and/or the weights of criteria are not expressed in crisp numbers but are more suitable to be denoted as hesitant fuzzy elements. The largest part of the book is devoted to new methods recently developed by the authors to solve decision making problems in situations where the available information is vague or hesitant. These methods are presented in detail, together with their application to different type of decision-making problems. All in all, the book ...

  17. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yang Jun

    2016-02-01

    Full Text Available A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced to the cognitive radar tracking process in a multiple-target scenario. The echo signal is sparsely expressed. The designs of the sparse matrix and measurement matrix are accomplished by expressing the echo signal sparsely, and subsequently, the reconstruction of the measurement signal under the down-sampling condition is realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, the particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) of the tracking accuracy is deduced, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method can not only reduce the data quantity, but also provide better tracking performance compared with the traditional method.

  18. The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis

    Science.gov (United States)

    Xu, X.; Tong, S.; Wang, L.

    2017-12-01

    Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion relies on second-order statistics, and it cannot attenuate multiples when the primaries and multiples are non-orthogonal. In order to solve this problem, we combine the feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, in order to match the predicted multiples to the real multiples in amplitude and phase, we design an expanded pseudo multi-channel matching filtering method to obtain a more accurate multiple match. Finally, we apply the improved fast ICA algorithm, which is based on the maximum non-Gaussianity criterion for the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that we do not need any prior information for the prediction of the multiples and can obtain a better separation result. The method has been applied to several synthetic datasets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations, we obtain satisfactory multiple predictions. Using our matching method and fast ICA adaptive multiple subtraction, we can not only effectively preserve the effective wave energy in the seismic records, but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep areas.
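
    A toy, single-trace illustration of the separation step described above: the recorded trace and a predicted multiple are treated as two mixtures and passed to FastICA, and the output component least correlated with the prediction is kept as the primary estimate. This is a schematic of the ICA-based subtraction idea only; the synthetic wavelets and amplitudes are invented, and it is not the authors' pseudo multi-channel matching implementation.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(1)
        t = np.arange(1000)
        ricker = lambda t0, f=0.03: (1 - 2*(np.pi*f*(t - t0))**2) * np.exp(-(np.pi*f*(t - t0))**2)

        primary   = ricker(300)                     # "true" primary event
        multiple  = 0.6 * ricker(600)               # "true" free-surface multiple
        recorded  = primary + multiple + 0.01 * rng.standard_normal(t.size)
        predicted = 0.9 * ricker(600)               # prediction with the wrong amplitude

        # Two observed mixtures -> two independent components.
        X = np.column_stack([recorded, predicted])
        S = FastICA(n_components=2, random_state=0).fit_transform(X)

        # Keep the component least correlated with the predicted multiple as the primary.
        corr = [abs(np.corrcoef(S[:, i], predicted)[0, 1]) for i in range(2)]
        primary_est = S[:, int(np.argmin(corr))]
        print("correlation of estimate with true primary:",
              round(abs(np.corrcoef(primary_est, primary)[0, 1]), 3))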

  19. An improved sampling method of complex network

    Science.gov (United States)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Subnet sampling is an important topic in complex network research. The sampling method influences the structure and characteristics of the resulting subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can keep the similarity between the sampled subnet and the original network in terms of degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is not sufficient.
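
    As a generic point of reference for the snowball component mentioned above (the Cohen-process and random-selection details of RMSC are not reproduced here), the sketch below grows a subnet from several random seeds by repeatedly adding a random subset of each frontier node's neighbours; the adjacency structure and parameters are toy values.

        import random

        def multi_snowball(adj, n_seeds=2, waves=2, fanout=2, seed=0):
            """adj: dict node -> list of neighbour nodes.  Returns the sampled node set."""
            rng = random.Random(seed)
            sampled = set(rng.sample(sorted(adj), n_seeds))       # random seed nodes
            frontier = set(sampled)
            for _ in range(waves):
                nxt = set()
                for node in frontier:                             # expand each frontier node
                    neighbours = [v for v in adj[node] if v not in sampled]
                    nxt.update(rng.sample(neighbours, min(fanout, len(neighbours))))
                sampled |= nxt
                frontier = nxt
            return sampled

        if __name__ == "__main__":
            graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 4], 3: [0, 5], 4: [2, 5], 5: [3, 4]}
            print(sorted(multi_snowball(graph)))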

  20. Phylo: a citizen science approach for improving multiple sequence alignment.

    Directory of Open Access Journals (Sweden)

    Alexander Kawrykow

    Full Text Available BACKGROUND: Comparative genomics, or the study of the relationships of genome structure and function across different species, offers a powerful tool for studying evolution, annotating genomes, and understanding the causes of various genetic disorders. However, aligning multiple sequences of DNA, an essential intermediate step for most types of analyses, is a difficult computational task. In parallel, citizen science, an approach that takes advantage of the fact that the human brain is exquisitely tuned to solving specific types of problems, is becoming increasingly popular. There, instances of hard computational problems are dispatched to a crowd of non-expert human game players and solutions are sent back to a central server. METHODOLOGY/PRINCIPAL FINDINGS: We introduce Phylo, a human-based computing framework applying "crowd sourcing" techniques to solve the Multiple Sequence Alignment (MSA problem. The key idea of Phylo is to convert the MSA problem into a casual game that can be played by ordinary web users with a minimal prior knowledge of the biological context. We applied this strategy to improve the alignment of the promoters of disease-related genes from up to 44 vertebrate species. Since the launch in November 2010, we received more than 350,000 solutions submitted from more than 12,000 registered users. Our results show that solutions submitted contributed to improving the accuracy of up to 70% of the alignment blocks considered. CONCLUSIONS/SIGNIFICANCE: We demonstrate that, combined with classical algorithms, crowd computing techniques can be successfully used to help improving the accuracy of MSA. More importantly, we show that an NP-hard computational problem can be embedded in casual game that can be easily played by people without significant scientific training. This suggests that citizen science approaches can be used to exploit the billions of "human-brain peta-flops" of computation that are spent every day playing games

  1. An Improved Sequential Initiation Method for Multitarget Track in Clutter with Large Noise Measurement

    Directory of Open Access Journals (Sweden)

    Daxiong Ji

    2014-01-01

    Full Text Available This paper proposes an improved sequential method for initiating tracks of multiple underwater objects in clutter and estimating the initial position of each trajectory. The underwater environment is complex and changeable, and the sonar data are far from ideal; when the detection distance is large, the error of the measured data is also large. Besides that, clutter has a grave effect on track initiation, so it is hard to initialize a track and estimate its initial position. In the new track-initiation rule, a new track is declared when at least six of ten points meet the requirements, and the initial state parameters are then estimated by the linear least-squares method. Compared to conventional track initiation methods, our method not only considers the kinematic information of the targets, but also treats the error of the sonar sensors as an important element. Computer simulations confirm that the performance of our method is satisfactory.

  2. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ovacik, Meric A. [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Androulakis, Ioannis P., E-mail: yannis@rci.rutgers.edu [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Biomedical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States)

    2013-09-15

    Pathway-based information has become an important source of information for both establishing evolutionary relationships and understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships and to extrapolate species differences used in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing glycolysis and citrate cycle pathways conservation. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism as well as a more complete study involving 36 pathways common in all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway specific cross-species similarities and differences based on NCBI taxonomy.

  3. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    International Nuclear Information System (INIS)

    Ovacik, Meric A.; Androulakis, Ioannis P.

    2013-01-01

    Pathway-based information has become an important source of information for both establishing evolutionary relationships and understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships and to extrapolate species differences used in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing glycolysis and citrate cycle pathways conservation. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism as well as a more complete study involving 36 pathways common in all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway specific cross-species similarities and differences based on NCBI taxonomy.

  4. MULTIPLE OBJECTS

    Directory of Open Access Journals (Sweden)

    A. A. Bosov

    2015-04-01

    Full Text Available Purpose. The development of complicated techniques of production and management processes, information systems, computer science, applied objects of systems theory and others requires improvement of mathematical methods and new approaches for research on application systems. The variety and diversity of subject systems make it necessary to develop a model that generalizes the classical sets and their development – sets of sets. Multiple objects, unlike sets, are constructed by multiple structures and are represented by structure and content. The aim of the work is the analysis of multiple structures generating multiple objects and the further development of operations on these objects in application systems. Methodology. To achieve the objectives of the research, the structure of a multiple object is represented as a constructive triple consisting of media, signatures and axiomatics. A multiple object is determined by structure and content, and is represented by a hybrid superposition composed of sets, multisets, ordered sets (lists) and heterogeneous sets (sequences, corteges). Findings. In this paper we study the properties and characteristics of the components of hybrid multiple objects of complex systems, propose assessments of their complexity, and show the rules of internal and external operations on such objects. We introduce a relation of arbitrary order over multiple objects and define functions and mappings on objects of multiple structures. Originality. In this paper we consider the development of multiple structures generating multiple objects. Practical value. The transition from abstract multiple structures to subject ones requires the transformation of the system and of the multiple objects. The transformation involves three successive stages: specification (binding to the domain), interpretation (multiple sites) and particularization (goals). The proposed approach to describing systems is based on hybrid sets

  5. Comparing the index-flood and multiple-regression methods using L-moments

    Science.gov (United States)

    Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.

    In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine main influencing variables on flood magnitude. Ward’s cluster and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. Homogeneity test was done using L-moments-based measures. Several distributions were fitted to the regional flood data and index-flood and multiple-regression methods as two regional flood frequency methods were compared. The results of factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on the Ward’s method of clustering approach. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, GEV distribution was identified as the most robust distribution among five candidate distributions for all the proposed sub-regions of the study area, and in general, it was concluded that the generalised extreme value distribution was the best-fit distribution for every three regions. The relative root mean square error (RRMSE) measure was applied for evaluating the performance of the index-flood and multiple-regression methods in comparison with the curve fitting (plotting position) method. In general, index-flood method gives more reliable estimations for various flood magnitudes of different recurrence intervals. Therefore, this method should be adopted as regional flood frequency method for the study area and the Namak-Lake basin
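
    For reference, one common definition of the relative root mean square error used in such comparisons (the exact form adopted in the study is not stated in the abstract) is

        \mathrm{RRMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\hat{Q}_{i}-Q_{i}}{Q_{i}}\right)^{2}},

    where Q_i denotes the observed (plotting-position) quantile and \hat{Q}_i the quantile estimated by the regional method.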

  6. A New DG Multiobjective Optimization Method Based on an Improved Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Wanxing Sheng

    2013-01-01

    Full Text Available A distributed generation (DG) multiobjective optimization method based on an improved Pareto evolutionary algorithm is investigated in this paper. The improved Pareto evolutionary algorithm, which introduces a penalty factor in the objective function constraints, uses an adaptive crossover and a mutation operator in the evolutionary process and combines a simulated annealing iterative process. The proposed algorithm is utilized to optimize DG injection models to maximize DG utilization while minimizing system loss and environmental pollution. A revised IEEE 33-bus system with multiple DG units was used to test the multiobjective optimization algorithm in a distribution power system. The proposed algorithm was implemented and compared with the strength Pareto evolutionary algorithm 2 (SPEA2), a particle swarm optimization (PSO) algorithm, and the nondominated sorting genetic algorithm II (NSGA-II). The comparison of the results demonstrates the validity and practicality of utilizing DG units in terms of economic dispatch and optimal operation in a distribution power system.

  7. Accuracy improvement in measurement of arterial wall elasticity by applying pulse inversion to phased-tracking method

    Science.gov (United States)

    Miyachi, Yukiya; Arakawa, Mototaka; Kanai, Hiroshi

    2018-07-01

    In our studies on ultrasonic elasticity assessment, minute change in the thickness of the arterial wall was measured by the phased-tracking method. However, most images in carotid artery examinations contain multiple-reflection noise, making it difficult to evaluate arterial wall elasticity precisely. In the present study, a modified phased-tracking method using the pulse inversion method was examined to reduce the influence of the multiple-reflection noise. Moreover, aliasing in the harmonic components was corrected by the fundamental components. The conventional and proposed methods were applied to a pulsated tube phantom mimicking the arterial wall. For the conventional method, the elasticity was 298 kPa without multiple-reflection noise and 353 kPa with multiple-reflection noise on the posterior wall. That of the proposed method was 302 kPa without multiple-reflection noise and 297 kPa with multiple-reflection noise on the posterior wall. Therefore, the proposed method was very robust against multiple-reflection noise.
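
    The pulse-inversion principle invoked above can be shown with a toy synthetic echo: summing the echoes of two transmissions of opposite polarity cancels the fundamental (odd-order) component, which carries most of the multiple-reflection energy, while the second-harmonic component adds. The signal model and coefficients below are invented for illustration and are not the authors' phased-tracking implementation.

        import numpy as np

        fs, f0 = 40e6, 5e6                       # sampling rate and transmit centre frequency
        t = np.arange(0, 4e-6, 1 / fs)

        def echo(polarity, a2=0.05):
            """Toy received echo for a transmit pulse of the given polarity (+1 or -1):
            a linear (fundamental) part that flips sign with the polarity plus a weak
            second-harmonic part that depends on the polarity squared."""
            fundamental = polarity * np.sin(2 * np.pi * f0 * t)
            harmonic = a2 * (polarity ** 2) * np.sin(2 * np.pi * 2 * f0 * t)
            return fundamental + harmonic

        positive, negative = echo(+1), echo(-1)
        pi_sum = positive + negative             # pulse-inversion signal: fundamental cancels

        spectrum = lambda x: np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        for name, x in [("single pulse       ", positive), ("pulse-inversion sum", pi_sum)]:
            print(name, "energy near f0:", round(spectrum(x)[np.argmin(abs(freqs - f0))], 2),
                  " near 2f0:", round(spectrum(x)[np.argmin(abs(freqs - 2 * f0))], 2))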

  8. An improved method for simulating radiographs

    International Nuclear Information System (INIS)

    Laguna, G.W.

    1986-01-01

    The parameters involved in generating actual radiographs and what can and cannot be modeled are examined in this report. Using the spectral distribution of the radiation source and the mass absorption curve for the material comprising the part to be modeled, the actual amount of radiation that would pass through the part and reach the film is determined. This method increases confidence in the results of the simulation and enables the modeling of parts made of multiple materials
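
    The calculation described above, weighting the source spectrum by the energy-dependent attenuation of each material along the beam path, reduces to a Beer-Lambert sum over energy bins. In the sketch below the spectrum and the mass attenuation coefficients are invented placeholders, not values from the report; real values would come from a measured spectrum and published attenuation tables.

        import numpy as np

        # Hypothetical source spectrum: photon fluence per energy bin (arbitrary units).
        energies_keV = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
        fluence      = np.array([0.10, 0.30, 0.35, 0.20, 0.05])

        # Hypothetical mass attenuation coefficients mu/rho [cm^2/g] per energy bin,
        # and the density [g/cm^3] and thickness [cm] of each material in the path.
        materials = {
            "steel":    {"mu_rho": np.array([1.20, 0.60, 0.35, 0.25, 0.20]), "rho": 7.9, "t": 1.0},
            "aluminum": {"mu_rho": np.array([0.55, 0.28, 0.20, 0.17, 0.15]), "rho": 2.7, "t": 2.0},
        }

        def transmitted_fluence(path):
            """Beer-Lambert transmission through a stack of materials, per energy bin."""
            total_attn = sum(m["mu_rho"] * m["rho"] * m["t"] for m in path)
            return fluence * np.exp(-total_attn)

        exposure = transmitted_fluence(materials.values()).sum()   # radiation reaching the film
        open_beam = fluence.sum()
        print("relative film exposure behind the part:", round(exposure / open_beam, 4))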

  9. Psychological Benefits of Nonpharmacological Methods Aimed for Improving Balance in Parkinson’s Disease: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Rastislav Šumec

    2015-01-01

    Full Text Available Parkinson’s disease (PD) is a serious condition with a major negative impact on a patient’s physical and mental health. Postural instability is one of the cardinal difficulties that patients report having to deal with. Neuroanatomical, animal, and clinical studies on nonparkinsonian and parkinsonian subjects suggest an important correlation between the presence of balance dysfunction and multiple mood disorders, such as anxiety, depression, and apathy. Considering that balance dysfunction is a very common symptom in PD, we can presume that by its management we could positively influence the patient’s state of mind too. This review is an analysis of nonpharmacological methods shown to be effective and successful for improving balance in patients suffering from PD. Strategies such as general exercise, robot-assisted training, Tai Chi, Qi Gong, Yoga, dance (such as tango or ballet), boxing, virtual reality-based or neurofeedback-based techniques, and so forth can significantly improve stability in these patients. Beside this physical outcome, many methods have also shown an effect on quality of life, depression level, enjoyment, and motivation to continue practicing the method independently. The purpose of this review is to provide information about practical and creative methods designed to improve balance in PD and to highlight their positive impact on the patient’s psychology.

  10. [A factor analysis method for contingency table data with unlimited multiple choice questions].

    Science.gov (United States)

    Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie

    2016-02-01

    The purpose of this study is to propose a method of factor analysis for analyzing contingency tables developed from the data of unlimited multiple-choice questions. This method assumes that the element of each cell of the contingency table has a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual (the standardized difference between the sample proportion and the expected proportion) is used to select items. The proposed method was applied to real product-impression research data on advertised chips and energy drinks. The results of the analysis showed that this method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method in psychological studies using unlimited multiple-choice questions.
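
    One way to write this kind of model (not necessarily the authors' exact parameterization) is to treat each cell count as binomial and place the factor structure on the logit of the selection probability:

        y_{ij} \sim \mathrm{Binomial}(n_{i},\, p_{ij}), \qquad
        \operatorname{logit}(p_{ij}) = \mu_{j} + \boldsymbol{\lambda}_{j}^{\top}\mathbf{f}_{i},

    where y_{ij} is the number of respondents in group i selecting option j, n_i is the group size, \mu_j is an item intercept, \boldsymbol{\lambda}_j are the factor loadings, and \mathbf{f}_i are the factor scores.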

  11. Combining morphometric evidence from multiple registration methods using dempster-shafer theory

    Science.gov (United States)

    Rajagopalan, Vidya; Wyatt, Christopher

    2010-03-01

    In tensor-based morphometry (TBM), group-wise differences in brain structure are measured using high degree-of-freedom registration and some form of statistical test. However, it is known that TBM results are sensitive to both the registration method and the statistical test used. Given the lack of an objective model of group variation, it is difficult to determine a best registration method for TBM. The use of statistical tests is also problematic given the corrections required for multiple testing and the notorious difficulty of selecting and interpreting significance values. This paper presents an approach that addresses both of these issues by combining multiple registration methods using Dempster-Shafer evidence theory to produce belief maps of categorical changes between groups. This approach is applied to the comparison of brain morphometry in aging, a typical application of TBM, using the determinant of the Jacobian as a measure of volume change. We show that the Dempster-Shafer combination produces a unique and easy-to-interpret belief map of regional changes between and within groups without the complications associated with hypothesis testing.
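    The combination step can be illustrated with Dempster's rule of combination applied to two mass functions coming from two registration methods; the frame of discernment and the mass values below are invented for illustration:

        from itertools import product

        # Frame of discernment for a voxel: volume increased, decreased, or unchanged.
        FRAME = frozenset({"expand", "contract", "none"})

        def dempster_combine(m1, m2):
            """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
            combined = {}
            conflict = 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
            if conflict >= 1.0:
                raise ValueError("total conflict; sources cannot be combined")
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Illustrative evidence from two registration methods for one voxel
        m_reg1 = {frozenset({"expand"}): 0.6, FRAME: 0.4}
        m_reg2 = {frozenset({"expand"}): 0.5, frozenset({"none"}): 0.2, FRAME: 0.3}

        belief = dempster_combine(m_reg1, m_reg2)
        for subset, mass in sorted(belief.items(), key=lambda kv: -kv[1]):
            print(sorted(subset), round(mass, 3))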

  12. Single- versus multiple-sample method to measure glomerular filtration rate.

    Science.gov (United States)

    Delanaye, Pierre; Flamant, Martin; Dubourg, Laurence; Vidal-Petiot, Emmanuelle; Lemoine, Sandrine; Cavalier, Etienne; Schaeffner, Elke; Ebert, Natalie; Pottel, Hans

    2018-01-08

    There are many different ways to measure glomerular filtration rate (GFR) using various exogenous filtration markers, each having its own strengths and limitations. However, not only the marker, but also the methodology may vary in many ways, including the use of urinary or plasma clearance, and, in the case of plasma clearance, the number of time points used to calculate the area under the concentration-time curve, ranging from only one (Jacobsson method) to eight (or more) blood samples. We collected the results obtained from 5106 plasma clearances (iohexol or 51Cr-ethylenediaminetetraacetic acid (EDTA)) using three to four time points, allowing GFR calculation using the slope-intercept method and the Bröchner-Mortensen correction. For each time point, the Jacobsson formula was applied to obtain the single-sample GFR. We used Bland-Altman plots to determine the accuracy of the Jacobsson method at each time point. The single-sample method showed within 10% concordances with the multiple-sample method of 66.4%, 83.6%, 91.4% and 96.0% at the time points 120, 180, 240 and ≥300 min, respectively. Concordance was poorer at lower GFR levels, and this trend paralleled increasing age. Results were similar in males and females. Some discordance was found in obese subjects. Single-sample GFR is highly concordant with a multiple-sample strategy, except in the low GFR range (<30 mL/min). © The Author 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
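    A minimal sketch of the multiple-sample calculation referred to above, assuming a mono-exponential fit of the late samples (slope-intercept clearance) followed by the Bröchner-Mortensen correction; the correction coefficients shown are the commonly cited adult values, the sample data are invented, and the single-sample Jacobsson formula is not reproduced here:

        import numpy as np

        def slope_intercept_gfr(times_min, conc, dose):
            """Uncorrected slope-intercept clearance from late (mono-exponential) samples.

            times_min: sampling times [min]; conc: plasma concentrations; dose: injected amount.
            """
            t = np.asarray(times_min, dtype=float)
            lnC = np.log(np.asarray(conc, dtype=float))
            slope, intercept = np.polyfit(t, lnC, 1)     # ln C(t) = intercept + slope * t
            k = -slope                                   # elimination rate constant [1/min]
            c0 = np.exp(intercept)                       # back-extrapolated concentration
            return dose * k / c0                         # clearance in mL/min if dose/conc consistent

        def brochner_mortensen(cl_uncorrected):
            # Commonly cited adult coefficients; other formulations exist.
            return 0.990778 * cl_uncorrected - 0.001218 * cl_uncorrected ** 2

        # Invented example: three late samples after an iohexol injection
        times = [120, 180, 240]                # min
        conc = [0.060, 0.042, 0.030]           # arbitrary units per mL
        dose = 3000.0                          # same arbitrary units

        cl1 = slope_intercept_gfr(times, conc, dose)
        print(f"uncorrected {cl1:.1f} mL/min, corrected {brochner_mortensen(cl1):.1f} mL/min")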

  13. The Implementation of APIQ Creative Mathematics Game Method in the Subject Matter of Greatest Common Factor and Least Common Multiple in Elementary School

    Science.gov (United States)

    Rahman, Abdul; Saleh Ahmar, Ansari; Arifin, A. Nurani M.; Upu, Hamzah; Mulbar, Usman; Alimuddin; Arsyad, Nurdin; Ruslan; Rusli; Djadir; Sutamrin; Hamda; Minggi, Ilham; Awi; Zaki, Ahmad; Ahmad, Asdar; Ihsan, Hisyam

    2018-01-01

    One of the causal factors behind students’ lack of interest in learning mathematics is a monotonous learning method, as in traditional instruction. One way of motivating students to learn mathematics is by implementing the APIQ (Aritmetika Plus Intelegensi Quantum) creative mathematics game method. The purposes of this research are (1) to describe students’ responses to the implementation of the APIQ creative mathematics game method on the subject matter of Greatest Common Factor (GCF) and Least Common Multiple (LCM) and (2) to find out whether implementing this method improves the students’ learning completeness. The results of this research show that (1) the students’ responses to the implementation of the APIQ creative mathematics game method on the subject matters of GCF and LCM were good, with response percentages between 76% and 100%, and (2) the implementation of the APIQ creative mathematics game method on the subject matters of GCF and LCM improved the students’ learning.

  14. The Importance of Providing Multiple-Channel Sections in Dredging Activities to Improve Fish Habitat Environments

    Directory of Open Access Journals (Sweden)

    Hung-Pin Chiu

    2016-01-01

    Full Text Available After Typhoon Morakot, dredging engineering was conducted while taking the safety of humans and structures into consideration, but multiple-channel sections formed in partial stream reaches of the Cishan Stream because of anthropogenic and natural influences. This study mainly explores the distribution of each fish species in both the multiple- and single-channel sections of the Cishan Stream. A one-way ANOVA comparing the multiple- and single-channel sections showed that parts of the environments did not exhibit significant differences, but certain areas of the multiple-channel sections had more diverse habitats. Non-metric multidimensional scaling showed that each fish species was more widely distributed in the multiple-channel sections than in the single-channel sections. In addition, according to the principal component analysis, each fish species has a preferred environment, and all of them have a wide choice of habitat environments in the multiple-channel sections. Finally, the existence of multiple-channel sections could significantly affect the existence of the fish species considered in this study. However, no environmental factors were found to have an influence on fish species in the single-channel sections, with the exception of Rhinogobius nantaiensis. The results show that providing multiple-channel sections in dredging activities could improve fish habitat environments.

  15. A mixed methods study of multiple health behaviors among individuals with stroke

    Directory of Open Access Journals (Sweden)

    Matthew Plow

    2017-05-01

    Full Text Available Background Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. Methods We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the former being prioritized over the latter. Qualitative data was prioritized in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then were integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews. Results We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = −0.55), BMI and limitations in activities of daily living (r = −0.49), physical activity and limitations in activities of daily living (r = 0.41), and mobility impairments and BMI (r = −0.41)

  16. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    Science.gov (United States)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulations and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare method (MSLP) do not give the desired results for strongly damped forced vibration systems. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and are better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  17. Combining multiple FDG-PET radiotherapy target segmentation methods to reduce the effect of variable performance of individual segmentation methods

    Energy Technology Data Exchange (ETDEWEB)

    McGurk, Ross J. [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Bowsher, James; Das, Shiva K. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Lee, John A [Molecular Imaging and Experimental Radiotherapy Unit, Universite Catholique de Louvain, 1200 Brussels (Belgium)

    2013-04-15

    different between 128 × 128 and 256 × 256 grid sizes for either method (MJV, p= 0.0519; STAPLE, p= 0.5672) but was for SMASD values (MJV, p < 0.0001; STAPLE, p= 0.0164). The best individual method varied depending on object characteristics. However, both MJV and STAPLE provided essentially equivalent accuracy to using the best independent method in every situation, with mean differences in DSC of 0.01-0.03, and 0.05-0.12 mm for SMASD. Conclusions: Combining segmentations offers a robust approach to object segmentation in PET. Both MJV and STAPLE improved accuracy and were robust against the widely varying performance of individual segmentation methods. Differences between MJV and STAPLE are such that either offers good performance when combining volumes. Neither method requires a training dataset but MJV is simpler to interpret, easy to implement and fast.
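    A minimal sketch of the majority-vote (MJV) fusion and the Dice similarity coefficient used for evaluation; STAPLE additionally estimates rater performance with an EM algorithm and is not reproduced here, and the toy segmentations below are invented:

        import numpy as np

        def majority_vote(segmentations):
            """Fuse binary segmentations (list of equally shaped 0/1 arrays) by majority vote."""
            stack = np.stack(segmentations).astype(float)
            return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        # Invented toy example: three individual PET segmentations and a ground truth
        truth = np.zeros((32, 32), dtype=np.uint8)
        truth[10:20, 10:20] = 1
        rng = np.random.default_rng(1)
        segs = []
        for _ in range(3):
            noisy = truth.copy()
            flip = rng.random(truth.shape) < 0.05       # 5 % voxel errors per method
            noisy[flip] ^= 1
            segs.append(noisy)

        fused = majority_vote(segs)
        print("individual DSC:", [round(dice(s, truth), 3) for s in segs])
        print("majority-vote DSC:", round(dice(fused, truth), 3))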

  18. A Multiple Criteria Decision Making Method Based on Relative Value Distances

    Directory of Open Access Journals (Sweden)

    Shyur Huan-jyh

    2015-12-01

    Full Text Available This paper proposes a new multiple criteria decision-making method called ERVD (election based on relative value distances). The S-shaped value function is adopted to replace the expected utility function to describe the risk-averse and risk-seeking behavior of decision makers. Comparisons and experiments contrasting the proposed method with the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) method are carried out to verify the feasibility of using the proposed method to represent the decision makers’ preferences in the decision-making process. Our experimental results show that the proposed approach is an appropriate and effective MCDM method.
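    The record does not give the exact parameterization of the S-shaped value function used by ERVD, so the sketch below uses the classic prospect-theory form with conventional parameter values, purely as an illustration of risk-averse behaviour for gains and risk-seeking behaviour for losses:

        def s_shaped_value(d, alpha=0.88, beta=0.88, lam=2.25):
            """Prospect-theory value of a deviation d from the reference point.

            Concave for gains (risk-averse), convex and steeper for losses (risk-seeking).
            """
            return d ** alpha if d >= 0 else -lam * (-d) ** beta

        for d in (-0.4, -0.1, 0.0, 0.1, 0.4):
            print(f"d = {d:+.1f} -> value = {s_shaped_value(d):+.3f}")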

  19. Comparison of two methods of surface profile extraction from multiple ultrasonic range measurements

    NARCIS (Netherlands)

    Barshan, B; Baskent, D

    Two novel methods for surface profile extraction based on multiple ultrasonic range measurements are described and compared. One of the methods employs morphological processing techniques, whereas the other employs a spatial voting scheme followed by simple thresholding. Morphological processing

  20. Non-Abelian Kubo formula and the multiple time-scale method

    International Nuclear Information System (INIS)

    Zhang, X.; Li, J.

    1996-01-01

    The non-Abelian Kubo formula is derived from the kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern–Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed. copyright 1996 Academic Press, Inc

  1. Fuzzy multiple objective decision making methods and applications

    CERN Document Server

    Lai, Young-Jou

    1994-01-01

    In the last 25 years, the fuzzy set theory has been applied in many disciplines such as operations research, management science, control theory, artificial intelligence/expert system, etc. In this volume, methods and applications of crisp, fuzzy and possibilistic multiple objective decision making are first systematically and thoroughly reviewed and classified. This state-of-the-art survey provides readers with a capsule look into the existing methods, and their characteristics and applicability to analysis of fuzzy and possibilistic programming problems. To realize practical fuzzy modelling, it presents solutions for real-world problems including production/manufacturing, location, logistics, environment management, banking/finance, personnel, marketing, accounting, agriculture economics and data analysis. This book is a guided tour through the literature in the rapidly growing fields of operations research and decision making and includes the most up-to-date bibliographical listing of literature on the topi...

  2. Development of method for evaluating estimated inundation area by using river flood analysis based on multiple flood scenarios

    Science.gov (United States)

    Ono, T.; Takahashi, T.

    2017-12-01

    Non-structural mitigation measures such as flood hazard maps based on estimated inundation areas have become more important because heavy rains exceeding the design rainfall have occurred frequently in recent years. However, the conventional method may underestimate the inundation area because the assumed locations of dike breach in river flood analysis are limited to the sections exceeding the high-water level. The objective of this study is to consider the uncertainty of the estimated inundation area arising from differences in the location of dike breach in river flood analysis. This study proposes multiple flood scenarios that automatically set multiple locations of dike breach in the river flood analysis. The major premise of adopting this method is that the location of dike breach cannot be predicted correctly. The proposed method utilizes the interval of dike breach, i.e., the distance between dike breaches placed next to each other; multiple breach locations are set at every such interval (see the sketch below). The 2D shallow water equations were adopted as the governing equations of the river flood analysis, and the leap-frog scheme with a staggered grid was used. The river flood analysis was verified by application to the 2015 Kinugawa river flooding, and the proposed multiple flood scenarios were applied to the Akutagawa river in Takatsuki city. The computation for the Akutagawa river showed, by comparing the computed maximum inundation depths of dike breaches placed next to each other, that the proposed method prevents underestimation of the estimated inundation area. Further, the analyses of the spatial distribution of inundation class and the maximum inundation depth at each measurement point also identified the optimum interval of dike breach, which can evaluate the maximum inundation area using the minimum number of assumed dike breach locations. In brief, this study found the optimum interval of dike breach in the Akutagawa river, which enabled estimated maximum inundation area
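    A minimal sketch of how assumed breach locations could be generated at a fixed interval along a levee reach before running one flood simulation per location; the reach length, interval and chainage handling are invented, and the 2D shallow-water computation itself is not shown:

        def breach_locations(reach_start_km, reach_end_km, interval_km):
            """Return chainages (km) of assumed dike breaches spaced by a fixed interval."""
            locations = []
            x = reach_start_km
            while x <= reach_end_km:
                locations.append(round(x, 3))
                x += interval_km
            return locations

        # One flood scenario per assumed breach location (2D shallow-water run not shown)
        for chainage in breach_locations(0.0, 10.0, 2.5):
            print(f"scenario: dike breach at chainage {chainage} km")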

  3. Statistical and numerical methods to improve the transient divided bar method

    DEFF Research Database (Denmark)

    Bording, Thue Sylvester; Nielsen, S.B.; Balling, N.

    The divided bar method is a commonly used method to measure thermal conductivity of rock samples in laboratory. We present improvements to this method that allows for simultaneous measurements of both thermal conductivity and thermal diffusivity. The divided bar setup is run in a transient mode...

  4. Dynamic Optimization for IPS2 Resource Allocation Based on Improved Fuzzy Multiple Linear Regression

    Directory of Open Access Journals (Sweden)

    Maokuan Zheng

    2017-01-01

    Full Text Available The study mainly focuses on resource allocation optimization for industrial product-service systems (IPS2). The development of IPS2 leads to a sustainable economy by introducing cooperative mechanisms apart from commodity transactions. The randomness and fluctuation of service requests from customers lead to volatility in the IPS2 resource utilization ratio. Three basic rules for resource allocation optimization are put forward to improve system operation efficiency and cut unnecessary costs. An approach based on fuzzy multiple linear regression (FMLR) is developed, which integrates the strength and concision of multiple linear regression in data fitting and factor analysis with the merit of fuzzy theory in dealing with uncertain or vague problems, and which helps reduce the costs caused by unnecessary resource transfer. An iteration mechanism is introduced in the FMLR algorithm to improve forecasting accuracy. A case study of human resource allocation optimization in the construction machinery industry is implemented to test and verify the proposed model.

  5. Improving School Improvement: Development and Validation of the CSIS-360, a 360-Degree Feedback Assessment for School Improvement Specialists

    Science.gov (United States)

    McDougall, Christie M.

    2013-01-01

    The purpose of the mixed methods study was to develop and validate the CSIS-360, a 360-degree feedback assessment to measure competencies of school improvement specialists from multiple perspectives. The study consisted of eight practicing school improvement specialists from a variety of settings. The specialists nominated 23 constituents to…

  6. Integrating Multiple Teaching Methods into a General Chemistry Classroom

    Science.gov (United States)

    Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella

    1998-02-01

    In addition to the traditional lecture format, three other teaching strategies (class discussions, concept maps, and cooperative learning) were incorporated into a freshman level general chemistry course. Student perceptions of their involvement in each of the teaching methods, as well as their perceptions of the utility of each method were used to assess the effectiveness of the integration of the teaching strategies as received by the students. Results suggest that each strategy serves a unique purpose for the students and increased student involvement in the course. These results indicate that the multiple teaching strategies were well received by the students and that all teaching strategies are necessary for students to get the most out of the course.

  7. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event ‘signals’ of marketed drugs. A common limitation of these systems is the large number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods aimed at adjusting for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are already known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared with the number obtained after the application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 was classified as a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are already known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
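    The rFDR estimator used in the study is specific to that work and is not reproduced here; as a standard illustration of the underlying idea of adjusting many drug-event pairs for the false discovery rate, the sketch below applies the Benjamini-Hochberg procedure to invented p-values:

        import numpy as np

        def benjamini_hochberg(p_values, q=0.05):
            """Return a boolean mask of discoveries under the Benjamini-Hochberg procedure."""
            p = np.asarray(p_values, dtype=float)
            m = p.size
            order = np.argsort(p)
            thresholds = q * (np.arange(1, m + 1) / m)
            passed = p[order] <= thresholds
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            mask = np.zeros(m, dtype=bool)
            mask[order[:k]] = True
            return mask

        # Invented p-values for drug-event pairs from a disproportionality analysis
        rng = np.random.default_rng(7)
        p_values = np.concatenate([rng.uniform(0, 1, 300), rng.uniform(0, 1e-4, 11)])
        signals = benjamini_hochberg(p_values, q=0.05)
        print(f"{signals.sum()} of {p_values.size} pairs flagged after FDR adjustment")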

  8. Impact of natalizumab on ambulatory improvement in secondary progressive and disabled relapsing-remitting multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Diego Cadavid

    Full Text Available There is an unmet need for disease-modifying therapies to improve ambulatory function in disabled subjects with multiple sclerosis. Assess the effects of natalizumab on ambulatory function in disabled subjects with relapsing-remitting multiple sclerosis (RRMS) or secondary progressive multiple sclerosis (SPMS). We retrospectively reviewed ambulatory function as measured by timed 25-foot walk (T25FW) in clinical trial subjects with an Expanded Disability Status Scale score ≥3.5, including RRMS subjects from the phase 3 AFFIRM and SENTINEL trials, relapsing SPMS subjects from the phase 2 MS231 study, and nonrelapsing SPMS subjects from the phase 1b DELIVER study. For comparison, SPMS subjects from the intramuscular interferon beta-1a (IM IFNβ-1a) IMPACT study were also analyzed. Improvement in ambulation was measured using T25FW responder status; response was defined as faster walking times over shorter (6-9-month) or longer (24-30-month) treatment periods relative to subjects' best predose walking times. There were two to four times more T25FW responders among disabled MS subjects in the natalizumab arms than in the placebo or IM IFNβ-1a arms. Responders walked 25 feet an average of 24%-45% faster than nonresponders. Natalizumab improves ambulatory function in disabled RRMS subjects and may have efficacy in disabled SPMS subjects. Confirmation of the latter finding in a prospective SPMS study is warranted.

  9. A new efficient method for the preparation of 99mTc-radiopharmaceuticals containing the Tc≡N multiple bond

    International Nuclear Information System (INIS)

    Pasqualini, R.; Comazzi, V.; Bellande, E.; Duatti, A.; Marchi, A.

    1992-01-01

    An improved method for the preparation of 99mTc-radiopharmaceuticals containing the Tc≡N multiple bond, under sterile and pyrogen-free conditions, is described. This method is based on the reaction of [99mTc]pertechnetate with ligands derived from S-methyl dithiocarbazate [H2N-N(R)-C(=S)SCH3 (R = H, CH3)] in the presence of HCl and tertiary phosphines. It was found that these derivatives can behave both as sources of nitride nitrogen ions (N3-) and as coordinating ligands. The reaction leads to the formation of intermediate technetium-nitrido complexes in high yield. These intermediate species can be used as suitable prereduced substrates for the preparation of technetium-nitrido radiopharmaceuticals through simple substitution reactions with appropriate exchanging ligands. (Author)

  10. Improved methods for high resolution electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.

    1987-04-01

    Existing methods of making support films for high resolution transmission electron microscopy are investigated and novel methods are developed. Existing methods of fabricating fenestrated, metal reinforced specimen supports (microgrids) are evaluated for their potential to reduce beam induced movement of monolamellar crystals of C44H90 paraffin supported on thin carbon films. Improved methods of producing hydrophobic carbon films by vacuum evaporation, and improved methods of depositing well ordered monolamellar paraffin crystals on carbon films are developed. A novel technique for vacuum evaporation of metals is described which is used to reinforce microgrids. A technique is also developed to bond thin carbon films to microgrids with a polymer bonding agent. Unique biochemical methods are described to accomplish site specific covalent modification of membrane proteins. Protocols are given which covalently convert the carboxy terminus of papain cleaved bacteriorhodopsin to a free thiol. 53 refs., 19 figs., 1 tab.

  11. A cellular automaton - finite volume method for the simulation of dendritic and eutectic growth in binary alloys using an adaptive mesh refinement

    Science.gov (United States)

    Dobravec, Tadej; Mavrič, Boštjan; Šarler, Božidar

    2017-11-01

    A two-dimensional model to simulate the dendritic and eutectic growth in binary alloys is developed. A cellular automaton method is adopted to track the movement of the solid-liquid interface. The diffusion equation is solved in the solid and liquid phases by using an explicit finite volume method. The computational domain is divided into square cells that can be hierarchically refined or coarsened using an adaptive mesh based on the quadtree algorithm. Such a mesh refines the regions of the domain near the solid-liquid interface, where the highest concentration gradients are observed. In the regions where the lowest concentration gradients are observed the cells are coarsened. The originality of the work is in the novel, adaptive approach to the efficient and accurate solution of the posed multiscale problem. The model is verified and assessed by comparison with the analytical results of the Lipton-Glicksman-Kurz model for the steady growth of a dendrite tip and the Jackson-Hunt model for regular eutectic growth. Several examples of typical microstructures are simulated and the features of the method as well as further developments are discussed.

  12. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    Science.gov (United States)

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
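    The study fits closed-population Huggins-Pledger models in program MARK; a simple illustration of the underlying two-source idea is the bias-corrected (Chapman) Lincoln-Petersen estimator with hair-snag detections as session 1 and rub-tree detections as session 2, using invented counts:

        def chapman_estimate(n1, n2, m):
            """Chapman's bias-corrected Lincoln-Petersen estimate of population size.

            n1: animals detected in session 1 (hair snags)
            n2: animals detected in session 2 (rub trees)
            m : animals detected in both sessions
            """
            n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
            var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
            return n_hat, var ** 0.5

        # Invented counts for illustration only
        n_hat, se = chapman_estimate(n1=180, n2=150, m=60)
        print(f"estimated population: {n_hat:.0f} (SE {se:.0f})")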

  13. Application of laser radiation and magnetostimulation in therapy of patients with multiple sclerosis.

    Science.gov (United States)

    Kubsik, Anna; Klimkiewicz, Robert; Janczewska, Katarzyna; Klimkiewicz, Paulina; Jankowska, Agnieszka; Woldańska-Okońska, Marta

    2016-01-01

    Multiple sclerosis is one of the most common neurological disorders. It is a chronic inflammatory demyelinating disease of the CNS whose etiology is not fully understood. Application of new rehabilitation methods is essential to improve functional status. The material studied consisted of 120 patients of both sexes (82 women and 38 men) aged 21-81 years. The study involved patients with a diagnosis of multiple sclerosis. The aim of the study was to evaluate the effect of laser radiation and other therapies on the functional status of patients with multiple sclerosis. Patients were randomly divided into four treatment groups. The evaluation was performed three times: before the start of rehabilitation, immediately after rehabilitation (21 days of treatment), and at a subsequent control 30 days after the patients left the clinic. The following tests were performed for all patients to assess functional status: the Expanded Disability Status Scale (EDSS) of Kurtzke and the Barthel Index. Results of all testing procedures show that the treatment methods improve the functional status of patients with multiple sclerosis, with a significant advantage for the synergistic action of laser radiation and magnetostimulation. The combination of laser radiation and magnetostimulation was confirmed to have a significantly beneficial effect on quality of life. The results of these studies present new scientific value and improve on the previously used program of rehabilitation of patients with multiple sclerosis by laser radiation. This study showed that the synergistic action of laser radiation and magnetostimulation has a beneficial effect on improving functional status, and thus improves the quality of life of patients with multiple sclerosis. The effects of all rehabilitation methods persisted after cessation of treatment, with a particular advantage for the synergistic action of laser radiation and magnetostimulation, which indicates the possibility to elicitation in these

  14. TU-CD-BRA-12: Coupling PET Image Restoration and Segmentation Using Variational Method with Multiple Regularizations

    Energy Technology Data Exchange (ETDEWEB)

    Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To propose a new variational method which couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: Partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. The existing segmentation methods usually require prior calibration to compensate for PVE and are highly system-dependent. Taking into account that image restoration and segmentation can promote each other and are tightly coupled, we proposed a variational method to solve the two problems together. Our method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm was used on edges to protect the edge information, and the L2 norm was used to avoid the staircase effect in non-edge areas. The blur kernel was constrained to the Gaussian model parameterized by its variance, and we assumed that the variances in the X-Y and Z directions are different. The energy functional was iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin’s lymphoma, and evaluated by Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L2 regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 compared with using either the TV or the L2 norm alone. The proposed method was obviously superior to other tested methods. It has an average DSI and CE of 0.80 and 0.41, while the FCM method — the second best one — has only an average DSI and CE of 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm. This work was supported in part by National Natural
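    The record does not reproduce the exact energy functional, so the following is only a generic sketch of a coupled restoration-segmentation functional of the kind described (a data term with an anisotropic Gaussian blur kernel, TV regularization near edges, an L2 smoothness term elsewhere, and a Mumford-Shah segmentation term); all weights and symbols are illustrative rather than the authors' formulation:

        E(u, C, \sigma_{xy}, \sigma_z) =
            \int_\Omega \bigl( f - k_{\sigma_{xy},\sigma_z} * u \bigr)^2 \, dx
            + \alpha \int_{\Omega_{\mathrm{edge}}} |\nabla u| \, dx
            + \beta \int_{\Omega \setminus \Omega_{\mathrm{edge}}} |\nabla u|^2 \, dx
            + \gamma \, E_{\mathrm{MS}}(u, C)

    Here f is the observed PET image, u the restored image, k the Gaussian blur kernel with separate in-plane and axial variances, C the segmentation contour, and E_MS the Mumford-Shah term; alternating minimization over u, C and the kernel parameters corresponds to the iterative optimization mentioned above.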

  15. Passive Methods as a Solution for Improving Indoor Environments

    CERN Document Server

    Orosa, José A

    2012-01-01

    There are many aspects to consider when evaluating or improving an indoor environment; thermal comfort, energy saving, preservation of materials, hygiene and health are all key aspects which can be improved by passive methods of environmental control. Passive Methods as a Solution for Improving Indoor Environments endeavours to fill the lack of analysis in this area by using over ten years of research to illustrate the effects of methods such as thermal inertia and permeable coverings; for example, the use of permeable coverings is a well known passive method, but its effects and ways to improve indoor environments have been rarely analyzed.   Passive Methods as a Solution for Improving Indoor Environments  includes both software simulations and laboratory and field studies. Through these, the main parameters that characterize the behavior of internal coverings are defined. Furthermore, a new procedure is explained in depth which can be used to identify the real expected effects of permeable coverings such ...

  16. Development of precise analytical methods for strontium and lanthanide isotopic ratios using multiple collector inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Ohno, Takeshi; Takaku, Yuichi; Hisamatsu, Shun'ichi

    2007-01-01

    We have developed precise analytical methods for strontium and lanthanide isotopic ratios using multiple collector ICP mass spectrometry (MC-ICP-MS) for experimental and environmental studies of their behavior. In order to obtain precise isotopic data with MC-ICP-MS, the mass discrimination effect was corrected by an exponential-law correction method. The resulting isotopic data demonstrated that highly precise isotopic analyses (better than 0.1 per mille as 2SD) could be achieved. We also adopted a de-solvating nebulizer system to improve sensitivity. This system could minimize the water load into the plasma and provided about five times higher analyte intensity than a conventional nebulizer system. (author)
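    A minimal sketch of the exponential-law mass-bias correction for Sr isotope ratios, normalizing to the conventional 86Sr/88Sr value; the measured ratios are invented and the isotope masses are rounded textbook values, so this is an illustration rather than the authors' exact procedure:

        import math

        # Atomic masses of Sr isotopes (u); rounded values
        M86, M87, M88 = 85.9093, 86.9089, 87.9056

        R86_88_TRUE = 0.1194          # conventional normalization value for 86Sr/88Sr

        def exp_law_beta(r_meas_86_88):
            """Mass-bias exponent from the measured 86Sr/88Sr ratio (exponential law)."""
            return math.log(R86_88_TRUE / r_meas_86_88) / math.log(M86 / M88)

        def correct_87_86(r_meas_87_86, r_meas_86_88):
            beta = exp_law_beta(r_meas_86_88)
            return r_meas_87_86 * (M87 / M86) ** beta

        # Invented measured ratios for illustration
        print(f"corrected 87Sr/86Sr = {correct_87_86(0.71034, 0.11846):.5f}")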

  17. A Telerehabilitation Program Improves Postural Control in Multiple Sclerosis Patients: A Spanish Preliminary Study

    Directory of Open Access Journals (Sweden)

    Rosa Ortiz-Gutiérrez

    2013-10-01

    Full Text Available Postural control disorders are among the most frequent motor disorder symptoms associated with multiple sclerosis. This study aims to demonstrate the potential improvements in postural control among patients with multiple sclerosis who complete a telerehabilitation program that represents a feasible alternative to physical therapy for situations in which conventional treatment is not available. Fifty patients were recruited. Control group (n = 25) received physiotherapy treatment twice a week (40 min per session). Experimental group (n = 25) received monitored telerehabilitation treatment via videoconference using the Xbox 360® and Kinect console. Experimental group attended 40 sessions, four sessions per week (20 min per session). The treatment schedule lasted 10 weeks for both groups. A computerized dynamic posturography (Sensory Organization Test) was used to evaluate all patients at baseline and at the end of the treatment protocol. Results showed an improvement over general balance in both groups. Visual preference and the contribution of vestibular information yielded significant differences in the experimental group. Our results demonstrated that a telerehabilitation program based on a virtual reality system allows one to optimize the sensory information processing and integration systems necessary to maintain the balance and postural control of people with multiple sclerosis. We suggest that our virtual reality program enables anticipatory PC and response mechanisms and might serve as a successful therapeutic alternative in situations in which conventional therapy is not readily available.

  18. A Telerehabilitation Program Improves Postural Control in Multiple Sclerosis Patients: A Spanish Preliminary Study

    Science.gov (United States)

    Ortiz-Gutiérrez, Rosa; Cano-de-la-Cuerda, Roberto; Galán-del-Río, Fernando; Alguacil-Diego, Isabel María; Palacios-Ceña, Domingo; Miangolarra-Page, Juan Carlos

    2013-01-01

    Postural control disorders are among the most frequent motor disorder symptoms associated with multiple sclerosis. This study aims to demonstrate the potential improvements in postural control among patients with multiple sclerosis who complete a telerehabilitation program that represents a feasible alternative to physical therapy for situations in which conventional treatment is not available. Fifty patients were recruited. Control group (n = 25) received physiotherapy treatment twice a week (40 min per session). Experimental group (n = 25) received monitored telerehabilitation treatment via videoconference using the Xbox 360® and Kinect console. Experimental group attended 40 sessions, four sessions per week (20 min per session). The treatment schedule lasted 10 weeks for both groups. A computerized dynamic posturography (Sensory Organization Test) was used to evaluate all patients at baseline and at the end of the treatment protocol. Results showed an improvement over general balance in both groups. Visual preference and the contribution of vestibular information yielded significant differences in the experimental group. Our results demonstrated that a telerehabilitation program based on a virtual reality system allows one to optimize the sensory information processing and integration systems necessary to maintain the balance and postural control of people with multiple sclerosis. We suggest that our virtual reality program enables anticipatory PC and response mechanisms and might serve as a successful therapeutic alternative in situations in which conventional therapy is not readily available. PMID:24185843

  19. A method for determining the analytical form of a radionuclide depth distribution using multiple gamma spectrometry measurements

    Energy Technology Data Exchange (ETDEWEB)

    Dewey, Steven Clifford, E-mail: sdewey001@gmail.com [United States Air Force School of Aerospace Medicine, Occupational Environmental Health Division, Health Physics Branch, Radiation Analysis Laboratories, 2350 Gillingham Drive, Brooks City-Base, TX 78235 (United States); Whetstone, Zachary David, E-mail: zacwhets@umich.edu [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States); Kearfott, Kimberlee Jane, E-mail: kearfott@umich.edu [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States)

    2011-06-15

    When characterizing environmental radioactivity, whether in the soil or within concrete building structures undergoing remediation or decommissioning, it is highly desirable to know the radionuclide depth distribution. This is typically modeled using continuous analytical expressions, whose forms are believed to best represent the true source distributions. In situ gamma ray spectroscopic measurements are combined with these models to fully describe the source. Currently, the choice of analytical expressions is based upon prior experimental core sampling results at similar locations, any known site history, or radionuclide transport models. This paper presents a method, employing multiple in situ measurements at a single site, for determining the analytical form that best represents the true depth distribution present. The measurements can be made using a variety of geometries, each of which has a different sensitivity variation with source spatial distribution. Using non-linear least squares numerical optimization methods, the results can be fit to a collection of analytical models and the parameters of each model determined. The analytical expression that results in the fit with the lowest residual is selected as the most accurate representation. A cursory examination is made of the effects of measurement errors on the method. - Highlights: > A new method for determining radionuclide distribution as a function of depth is presented. > Multiple measurements are used, with enough measurements to determine the unknowns in analytical functions that might describe the distribution. > The measurements must be as independent as possible, which is achieved through special collimation of the detector. > Although the effects of measurement errors on the results may be significant, an improvement over other methods is anticipated.
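    A minimal sketch of the model-selection step: each candidate analytical depth distribution is fitted to the set of in situ measurements by nonlinear least squares and the model with the smallest residual is kept. The forward model relating a depth profile to a detector response is grossly simplified here, and the weighting functions, measurements and parameter bounds are all invented:

        import numpy as np
        from scipy.optimize import least_squares

        depths = np.linspace(0.0, 30.0, 301)            # depth grid [cm]
        dz = depths[1] - depths[0]

        # Candidate analytical depth distributions (relative activity per unit depth)
        models = {
            "exponential": lambda z, p: p[0] * np.exp(-z / p[1]),
            "uniform_slab": lambda z, p: np.where(z <= p[1], p[0], 0.0),
        }

        # Invented relative sensitivity of each measurement geometry to depth z
        geometries = [lambda z, a=a: np.exp(-a * z) for a in (0.05, 0.15, 0.40)]

        def predict(model, params):
            source = model(depths, params)
            return np.array([np.sum(source * g(depths)) * dz for g in geometries])

        measured = np.array([41.0, 23.0, 9.5])          # invented count rates

        best = None
        for name, model in models.items():
            fit = least_squares(lambda p: predict(model, p) - measured,
                                x0=[1.0, 5.0], bounds=([0.0, 0.1], [np.inf, 30.0]))
            residual = float(np.sum(fit.fun ** 2))
            print(f"{name}: residual {residual:.3f}, parameters {np.round(fit.x, 3)}")
            if best is None or residual < best[1]:
                best = (name, residual)

        print("selected depth-distribution model:", best[0])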

  20. Correlation expansion: a powerful alternative multiple scattering calculation method

    International Nuclear Information System (INIS)

    Zhao Haifeng; Wu Ziyu; Sebilleau, Didier

    2008-01-01

    We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to standard MS series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than MS series expansion when the latter is convergent. Furthermore, it takes less memory than the full MS method so it can be used in the near edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it to full MS and standard MS series expansion

  1. Inference of Tumor Phylogenies with Improved Somatic Mutation Discovery

    KAUST Repository

    Salari, Raheleh

    2013-01-01

    Next-generation sequencing technologies provide a powerful tool for studying genome evolution during progression of advanced diseases such as cancer. Although many recent studies have employed new sequencing technologies to detect mutations across multiple, genetically related tumors, current methods do not exploit available phylogenetic information to improve the accuracy of their variant calls. Here, we present a novel algorithm that uses somatic single nucleotide variations (SNVs) in multiple, related tissue samples as lineage markers for phylogenetic tree reconstruction. Our method then leverages the inferred phylogeny to improve the accuracy of SNV discovery. Experimental analyses demonstrate that our method achieves up to 32% improvement for somatic SNV calling of multiple related samples over the accuracy of GATK's Unified Genotyper, the state-of-the-art multisample SNV caller. © 2013 Springer-Verlag.

  2. Pareto-depth for multiple-query image retrieval.

    Science.gov (United States)

    Hsiao, Ko-Jen; Calder, Jeff; Hero, Alfred O

    2015-02-01

    Most content-based image retrieval systems consider either one single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state of the art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts.
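    A minimal sketch of the building block of Pareto-front-based multi-query retrieval: extracting the first Pareto front (non-dominated items) when each database image has one dissimilarity per query; the dissimilarity values are invented and the manifold-ranking part is not shown:

        import numpy as np

        def first_pareto_front(scores):
            """Indices of non-dominated rows of `scores` (lower is better in every column)."""
            n = scores.shape[0]
            non_dominated = np.ones(n, dtype=bool)
            for i in range(n):
                if not non_dominated[i]:
                    continue
                dominated_by_any = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
                if dominated_by_any.any():
                    non_dominated[i] = False
            return np.nonzero(non_dominated)[0]

        # Invented dissimilarities of 8 database images to two different query images
        rng = np.random.default_rng(3)
        dissim = rng.random((8, 2))
        front = first_pareto_front(dissim)
        print("first Pareto front:", front, dissim[front].round(2))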

  3. A self-determination multiple risk intervention trial to improve smokers' health.

    Science.gov (United States)

    Williams, Geoffrey C; McGregor, Holly; Sharp, Daryl; Kouldes, Ruth W; Lévesque, Chantal S; Ryan, Richard M; Deci, Edward L

    2006-12-01

    Little is known about how interventions motivate individuals to change multiple health risk behaviors. Self-determination theory (SDT) proposes that patient autonomy is an essential factor for motivating change. An SDT-based intervention to enhance autonomous motivation for tobacco abstinence and improving cholesterol was tested. The Smokers' Health Study is a randomized multiple risk behavior change intervention trial. Smokers were recruited to a tobacco treatment center. A total of 1,006 adult smokers were recruited between 1999 and 2002 from physician offices and by newspaper advertisements. A 6-month clinical intervention (4 contacts) to facilitate internalization of autonomy and perceived competence for tobacco abstinence and reduced percent calories from fat was compared with community care. Clinicians elicited patient perspectives and life strivings, provided absolute coronary artery disease risk estimates, enumerated effective treatment options, supported patient initiatives, minimized clinician control, assessed motivation for change, and developed a plan for change. The main outcomes were 12-month prolonged tobacco abstinence and change in percent calories from fat and low-density lipoprotein cholesterol (LDL-C) from baseline to 18 months. Intention-to-treat analyses revealed that the intervention significantly increased 12-month prolonged tobacco abstinence (6.2% vs 2.4%; odds ratio [OR] = 2.7, P = .01, number needed to treat [NNT] = 26) and reduced LDL-C (-8.9 vs -4.1 mg/dL; P = .05). There was no effect on percent calories from fat. An intervention focused on supporting smokers' autonomy was effective in increasing prolonged tobacco abstinence and lowering LDL-C. Clinical interventions for behavior change may be improved by increasing patient autonomy and perceived competence.

  4. Study of the multiple scattering effect in TEBENE using the Monte Carlo method

    International Nuclear Information System (INIS)

    Singkarat, Somsorn.

    1990-01-01

    The neutron time-of-flight and energy spectra, from the TEBENE set-up, have been calculated by a computer program using the Monte Carlo method. The neutron multiple scattering within the polyethylene scatterer ring is closely investigated. The results show that multiple scattering has a significant effect on the detected neutron yield. They also indicate that the thickness of the scatterer ring has to be carefully chosen. (author)

  5. Bioactive conformational generation of small molecules: A comparative analysis between force-field and multiple empirical criteria based methods

    Directory of Open Access Journals (Sweden)

    Jiang Hualiang

    2010-11-01

    Full Text Available Abstract Background Conformational sampling for small molecules plays an essential role in the drug discovery research pipeline. Based on a multi-objective evolution algorithm (MOEA), we developed a conformational generation method called Cyndi in a previous study. In this work, in addition to the Tripos force field of the previous version, Cyndi was updated by incorporation of the MMFF94 force field to assess the conformational energy more rationally. With two force fields and a larger dataset of 742 bioactive conformations of small ligands extracted from the PDB, a comparative analysis was performed between the pure force-field-based method (FFBM) and the multiple-empirical-criteria-based method (MECBM) hybridized with different force fields. Results Our analysis reveals that incorporating multiple empirical rules can significantly improve the accuracy of conformational generation. MECBM, which takes both empirical and force field criteria as the objective functions, can reproduce about 54% (within 1 Å RMSD) of the bioactive conformations in the 742-molecule test set, much higher than the pure force field method (FFBM, about 37%). On the other hand, MECBM achieved a more complete and efficient sampling of the conformational space because the average size of the unique conformation ensemble per molecule is about 6 times larger than that of FFBM, while the time required for conformational generation is nearly the same as FFBM. Furthermore, as a complementary comparison between the methods with and without empirical biases, we also tested the performance of the three conformational generation methods in MacroModel in combination with different force fields. Compared with the methods in MacroModel, MECBM is more competitive in retrieving the bioactive conformations in terms of accuracy but has a much lower computational cost. Conclusions By incorporating different energy terms with several empirical criteria, the MECBM method can produce more reasonable conformational

  6. Detection-Discrimination Method for Multiple Repeater False Targets Based on Radar Polarization Echoes

    Directory of Open Access Journals (Sweden)

    Z. W. ZONG

    2014-04-01

    Full Text Available Multiple repeater false targets (RFTs), created by the digital radio frequency memory (DRFM) system of a jammer, are widely used in practice to effectively exhaust the limited tracking and discrimination resources of defence radar. In this paper, a common characteristic of the radar polarization echoes of multiple RFTs is used for target recognition. Based on the echoes from two receiving polarization channels, the instantaneous polarization ratio (IPR) is defined and its variance is derived by employing a Taylor series expansion. A detection-discrimination method is designed based on probability grids. Using data from a microwave anechoic chamber, the detection threshold of the method is confirmed. Theoretical analysis and simulations indicate that the method is valid and feasible. Furthermore, the estimation performance of the IPRs of RFTs under the influence of signal-to-noise ratio (SNR) is also covered.
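    A minimal numerical sketch of forming the instantaneous polarization ratio from the two receive channels over a pulse train and estimating its mean and spread; the echo model, jammer polarization and noise level are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(5)
        n_pulses = 64

        # Invented complex echoes in the two receiving polarization channels.
        # RFTs replayed by a DRFM jammer share the transmit polarization, so the
        # channel ratio stays nearly constant from pulse to pulse.
        true_ratio = 0.6 * np.exp(1j * 0.3)
        e_h = rng.normal(size=n_pulses) + 1j * rng.normal(size=n_pulses)
        e_v = true_ratio * e_h
        noise = 0.05 * (rng.normal(size=(2, n_pulses)) + 1j * rng.normal(size=(2, n_pulses)))
        e_h, e_v = e_h + noise[0], e_v + noise[1]

        ipr = e_v / e_h                               # instantaneous polarization ratio
        print("mean |IPR|:", np.abs(ipr).mean().round(3))
        print("variance of |IPR|:", np.abs(ipr).var().round(5))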

  7. Does surgical stabilization improve outcomes in patients with isolated multiple distracted and painful non-flail rib fractures?

    Science.gov (United States)

    Girsowicz, Elie; Falcoz, Pierre-Emmanuel; Santelmo, Nicola; Massard, Gilbert

    2012-03-01

    A best evidence topic was constructed according to a structured protocol. The question addressed was whether surgical stabilization is effective in improving the outcomes of patients with isolated multiple distracted and painful non-flail rib fractures. Of the 356 papers found using the reported search, nine presented the best evidence to answer the clinical question. The authors, journal, date and country of publication, study type, group studied, relevant outcomes and results of these papers are given. We conclude that, on the whole, the nine retrieved studies clearly support the use of surgical stabilization in the management of isolated multiple non-flail and painful rib fractures for improving patient outcomes. The interest and benefit were shown not only in terms of pain (McGill pain questionnaire) and respiratory function (forced vital capacity, forced expiratory volume in 1 s and carbon monoxide diffusing capacity), but also in improved quality of life (RAND 36-Item Health Survey) and reduced socio-professional disability. Indeed, most of the authors justified surgical management based on the fact that the results of surgical stabilization showed improvement in short- and long-term patient outcomes, with fast reduction in pain and disability, as well as a lower average wait before recommencing normal activities. Hence, the current evidence shows surgical stabilization to be safe and effective in alleviating post-operative pain and in improving patient recovery, thus enhancing the outcome after isolated multiple rib fractures. However, given the limited published evidence, prospective trials are necessary to confirm these encouraging results.

  8. Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?

    Science.gov (United States)

    Xu, Yanbo; Mostow, Jack

    2012-01-01

    A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…

  9. AIR Tools II: algebraic iterative reconstruction methods, improved implementation

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jørgensen, Jakob Sauer

    2017-01-01

    with algebraic iterative methods and their convergence properties. The present software is a much expanded and improved version of the package AIR Tools from 2012, based on a new modular design. In addition to improved performance and memory use, we provide more flexible iterative methods, a column-action method...

  10. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    Full Text Available The work highlights the most important principles of software reliability management (SRM). The SRM concept forms a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. The method applies a new metric to evaluate requirements complexity and a double-sorting technique to evaluate the priority and complexity of a particular requirement. The method enables improvement of requirements correctness due to identification of a higher number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a noticeable technical and economic effect.

  11. Seismic PSA method for multiple nuclear power plants in a site

    Energy Technology Data Exchange (ETDEWEB)

    Hakata, Tadakuni [Nuclear Safety Commission, Tokyo (Japan)

    2007-07-15

    The maximum number of nuclear power plants at a single site is eight, and about 50% of power plants worldwide are built at sites with three or more plants. Such nuclear sites have a potential risk of simultaneous damage to multiple plants, especially during external events. A seismic probabilistic safety assessment method (Level-1 PSA) for multi-unit sites with up to 9 units has been developed. The models include fault-tree-linked Monte Carlo computation, taking into consideration multivariate correlations of components and systems, from partial to complete, within and across units. The models were programmed in the computer code CORAL reef. Sample analyses and sensitivity studies were performed to verify the models and algorithms and to understand some of the risk insights and risk metrics, such as the site core damage frequency (CDF per site-year) for multiple reactor plants. This study will contribute to realistic, state-of-the-art seismic PSA that takes multiple reactor power plants into consideration, and to the enhancement of seismic safety. (author)

  12. 29 CFR 4010.12 - Alternative method of compliance for certain sponsors of multiple employer plans.

    Science.gov (United States)

    2010-07-01

    ... BENEFIT GUARANTY CORPORATION CERTAIN REPORTING AND DISCLOSURE REQUIREMENTS ANNUAL FINANCIAL AND ACTUARIAL INFORMATION REPORTING § 4010.12 Alternative method of compliance for certain sponsors of multiple employer... part for an information year if any contributing sponsor of the multiple employer plan provides a...

  13. Do multiple micronutrient interventions improve child health, growth, and development?

    Science.gov (United States)

    Ramakrishnan, Usha; Goldenberg, Tamar; Allen, Lindsay H

    2011-11-01

    Micronutrient deficiencies are common and often co-occur in many developing countries. Several studies have examined the benefits of providing multiple micronutrient (MMN) interventions during pregnancy and childhood, but the implications for programs remain unclear. The key objective of this review is to summarize what is known about the efficacy of MMN interventions during early childhood on functional outcomes, namely, child health, survival, growth, and development, to guide policy and identify gaps for future research. We identified review articles including meta-analyses and intervention studies that evaluated the benefits of MMN interventions (3 or more micronutrients) in children (growth. Two studies found no effects on child mortality. The findings for respiratory illness and diarrhea are mixed, although suggestive of benefit when provided as fortified foods. There is evidence from several controlled trials (>25) and 2 meta-analyses that MMN interventions improve hemoglobin concentrations and reduce anemia, but the effects were small compared to providing only iron or iron with folic acid. Two recent meta-analyses and several intervention trials also indicated that MMN interventions improve linear growth compared to providing a placebo or single nutrients. Much less is known about the effects of MMN interventions during early childhood on motor and mental development. In summary, MMN interventions may result in improved outcomes for children in settings where micronutrient deficiencies are widespread.

  14. A mixed methods study of multiple health behaviors among individuals with stroke.

    Science.gov (United States)

    Plow, Matthew; Moore, Shirley M; Sajatovic, Martha; Katzan, Irene

    2017-01-01

    Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the former being prioritized over the latter. Qualitative data was prioritized in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then were integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews. We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = -0.55), BMI and limitations in activities of daily living (r = -0.49), physical activity and limitations in activities of daily living (r = 0.41), mobility impairments and BMI (r = -0.41), sleep disturbances and physical

  15. Magic Finger Teaching Method in Learning Multiplication Facts among Deaf Students

    Science.gov (United States)

    Thai, Liong; Yasin, Mohd. Hanafi Mohd

    2016-01-01

    Deaf students face problems in mastering multiplication facts. This study aims to identify the effectiveness of Magic Finger Teaching Method (MFTM) and students' perception towards MFTM. The research employs a quasi experimental with non-equivalent pre-test and post-test control group design. Pre-test, post-test and questionnaires were used. As…

  16. Influence of Sm2O3 microalloying and Yb contamination on Y211 particles coarsening and superconducting properties of IG YBCO bulk superconductors

    Science.gov (United States)

    Vojtkova, L.; Diko, P.; Kovac, J.; Vojtko, M.

    2018-06-01

    Single grain YBa2Cu3O7−x (YBCO or Y123) bulk superconductors were produced by an infiltration growth process. The solid phase precursor was prepared by solid state synthesis from Y2O3 + BaCuO2 powders. The influence of the addition of Sm2O3 and of Yb contamination from the substrate on the microstructure and superconducting properties was analyzed. The dependences of Yb concentration on the distance from the bottom of the samples, measured by energy dispersive spectroscopy microanalysis used in conjunction with scanning electron microscopy, confirmed the contamination of the samples during the melting stage of the sample preparation. It is shown that the addition of Sm in low concentration, and its combination with Yb from the substrate, modifies the coarsening of the Y211 particles and leads to the appearance of a secondary peak effect in the field dependences of the critical current density.

  17. Simulation of the concomitant process of nucleation-growth-coarsening of Al2Cu particles in a 319 foundry aluminum alloy

    International Nuclear Information System (INIS)

    Martinez, R; Larouche, D; Cailletaud, G; Guillot, I; Massinon, D

    2015-01-01

    The precipitation of Al2Cu particles in a 319 T7 aluminum alloy has been modeled. A theoretical approach enables the concomitant computation of nucleation, growth and coarsening. The framework is based on an implicit scheme using finite differences. The equation of continuity is discretized in time and space in order to obtain a matrix form. The inversion of a tridiagonal matrix then yields the evolution of the size distribution of Al2Cu particles at t + Δt. The fluxes between the size-class boundaries, as well as the fluxes at the domain boundaries, are computed so as to conserve the mass of the system. The essential results of the model are compared to TEM measurements. Simulations provide quantitative features on the impact of the cooling rate on the size distribution of particles. They also provide results in agreement with the TEM measurements. This kind of multiscale approach allows new perspectives to be examined in the process of designing highly loaded components such as cylinder heads. It enables a more precise prediction of the microstructure and its evolution as a function of continuous cooling rates. (paper)
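
    The abstract describes the numerical core of the model (the continuity equation discretized implicitly in size and time, leading to a tridiagonal system solved at each step) only in words. The following Python sketch illustrates that kind of update under simplified assumptions: a pure growth term with an upwind discretization and the Thomas algorithm for the tridiagonal solve. The grid, growth rate and time step are placeholders, not the authors' Al2Cu model.

        import numpy as np

        def thomas_solve(a, b, c, d):
            """Solve a tridiagonal system with sub-diagonal a, diagonal b,
            super-diagonal c and right-hand side d (all length-n arrays)."""
            n = len(d)
            cp, dp = np.zeros(n), np.zeros(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.zeros(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        def implicit_step(f, G, dr, dt):
            """One backward-Euler upwind step of df/dt + d(G f)/dr = 0 for the
            particle-size distribution f, assuming a positive growth rate G."""
            n = len(f)
            lam = dt / dr
            b = 1.0 + lam * G                 # main diagonal
            a = np.zeros(n)
            a[1:] = -lam * G[:-1]             # sub-diagonal: inflow from the smaller class
            c = np.zeros(n)                   # no super-diagonal for pure upwind growth
            return thomas_solve(a, b, c, f)

        # Toy size distribution advanced by one time step.
        r = np.linspace(1.0, 100.0, 200)                  # particle radius (illustrative units)
        f = np.exp(-((r - 20.0) / 5.0) ** 2)              # initial size distribution
        f_new = implicit_step(f, G=0.5 / r, dr=r[1] - r[0], dt=0.1)
        print("total number before/after:", f.sum(), f_new.sum())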

  18. Collaboration between a human group and artificial intelligence can improve prediction of multiple sclerosis course: a proof-of-principle study.

    Science.gov (United States)

    Tacchella, Andrea; Romano, Silvia; Ferraldeschi, Michela; Salvetti, Marco; Zaccaria, Andrea; Crisanti, Andrea; Grassi, Francesca

    2017-01-01

    Background: Multiple sclerosis has an extremely variable natural course. In most patients, disease starts with a relapsing-remitting (RR) phase, which proceeds to a secondary progressive (SP) form. The duration of the RR phase is hard to predict, and to date predictions on the rate of disease progression remain suboptimal. This limits the opportunity to tailor therapy to an individual patient's prognosis, in spite of the availability of several therapeutic options. Approaches to improve clinical decisions, such as collective intelligence of human groups and machine learning algorithms, are widely investigated. Methods: Medical students and a machine learning algorithm predicted the course of disease on the basis of randomly chosen clinical records of patients attending the Multiple Sclerosis service of Sant'Andrea hospital in Rome. Results: A significant improvement of predictive ability was obtained when predictions were combined with a weight that depends on the consistency of human (or algorithm) forecasts on a given clinical record. Conclusions: In this work we present proof-of-principle that human-machine hybrid predictions yield better prognoses than machine learning algorithms or groups of humans alone. To strengthen this preliminary result, we propose a crowdsourcing initiative to collect prognoses by physicians on an expanded set of patients.

  19. Prolonged-release fampridine treatment improved subject-reported impact of multiple sclerosis: Item-level analysis of the MSIS-29.

    Science.gov (United States)

    Gasperini, Claudio; Hupperts, Raymond; Lycke, Jan; Short, Christine; McNeill, Manjit; Zhong, John; Mehta, Lahar R

    2016-11-15

    Prolonged-release (PR) fampridine is approved to treat walking impairment in persons with multiple sclerosis (MS); however, treatment benefits may extend beyond walking. MOBILE was a phase 2, 24-week, double-blind, placebo-controlled exploratory study to assess the impact of 10 mg PR-fampridine twice daily versus placebo on several subject-assessed measures. This analysis evaluated the physical and psychological health outcomes of subjects with progressing or relapsing MS from individual items of the Multiple Sclerosis Impact Scale (MSIS-29). PR-fampridine treatment (n=68) resulted in greater improvements from baseline in the MSIS-29 physical (PHYS) and psychological (PSYCH) impact subscales, with differences of 89% and 148% in mean score reduction from baseline (n=64) at week 24 versus placebo, respectively. MSIS-29 item analysis showed that a higher percentage of PR-fampridine subjects had mean improvements in 16/20 PHYS and 6/9 PSYCH items versus placebo after 24 weeks. Post hoc analysis of the 12-item Multiple Sclerosis Walking Scale (MSWS-12) improver population (≥8-point mean improvement) demonstrated differences in mean reductions from baseline of 97% and 111% in PR-fampridine MSIS-29 PHYS and PSYCH subscales versus the overall placebo group over 24 weeks. A higher percentage of MSWS-12 improvers treated with PR-fampridine showed mean improvements in 20/20 PHYS and 8/9 PSYCH items versus placebo at 24 weeks. In conclusion, PR-fampridine resulted in physical and psychological benefits versus placebo, sustained over 24 weeks. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  20. The effectiveness of cognitive- behavior therapy on illness representations of multiple-sclerosis and improving their emotional states

    Directory of Open Access Journals (Sweden)

    Farhad Hazhir

    2012-01-01

    Full Text Available Background: Illness representations (based on Leventhal's model) are associated with chronic illness outcomes. It has been suggested that targeting these cognitive components improves illness outcomes. Multiple sclerosis is a common disorder involving the nervous and immune systems that creates physical and psychological consequences. There have been few prior psychological trials in these patients. The aim of this study was to determine the effectiveness of cognitive-behavior therapy in altering illness representations and improving the emotional states of the patients. Methods: Using a randomized controlled trial design, among 52 selected patients, 35 volunteers were randomly allocated into intervention and control groups. An extensive cognitive-behavior-therapy-based intervention package was delivered to the intervention group in 10 weekly sessions. The control group stayed on a waiting list and participated in 5 group meeting sessions. The IPQ-R and DASS-42 psychological scales were administered, and Levene's and t statistical tests were applied for data analysis. Results: The results showed positive changes in four illness representation components of the patients: illness identity, consequences, coherence and personal control. Associated improvement occurred in depression, anxiety, stress and emotional representations. Conclusion: Mooney and Padesky's theoretically based cognitive-behavior therapy is effective in modifying illness representations and improving the emotional states of the patients. The findings are less similar to Goodman's trial on Systemic Lupus Erythematosus patients and more similar to Petrie's trial on cardiac patients.

  1. Multiple projection optical diffusion tomography with plane wave illumination

    International Nuclear Information System (INIS)

    Markel, Vadim A; Schotland, John C

    2005-01-01

    We describe a new data collection scheme for optical diffusion tomography in which plane wave illumination is combined with multiple projections in the slab imaging geometry. Multiple projection measurements are performed by rotating the slab around the sample. The advantage of the proposed method is that the measured data are more compatible with the dynamic range of most commonly used detectors. At the same time, multiple projections improve image quality by mutually interchanging the depth and transverse directions, and the scanned (detection) and integrated (illumination) surfaces. Inversion methods are derived for image reconstructions with extremely large data sets. Numerical simulations are performed for fixed and rotated slabs

  2. Improved modified energy ratio method using a multi-window approach for accurate arrival picking

    Science.gov (United States)

    Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun

    2017-04-01

    To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method contains calculations of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within five sample errors for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
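
    The abstract describes energy-ratio pickers (STA/LTA, MER, IMER) only in words. As a hedged Python sketch of the underlying quantities, and not the authors' IMER implementation, the code below computes a basic energy ratio, the amplitude-weighted MER curve, smooths it with a moving average (in the spirit of the multi-window/filtering idea), and takes the maximum as a rough first-break estimate. Window lengths and the synthetic trace are illustrative assumptions.

        import numpy as np

        def energy_ratio(trace, win):
            """Ratio of the energy in a trailing window to that in a leading
            window at each sample (the quantity behind STA/LTA-type pickers)."""
            e = trace ** 2
            er = np.zeros(len(trace))
            for i in range(win, len(trace) - win):
                post = e[i:i + win].sum()            # energy after the sample
                pre = e[i - win:i].sum() + 1e-12     # energy before the sample
                er[i] = post / pre
            return er

        def mer_curve(trace, win):
            """Modified energy ratio: the energy ratio weighted by the absolute
            amplitude and cubed, which sharpens the peak near the first break."""
            return (energy_ratio(trace, win) * np.abs(trace)) ** 3

        def smooth(x, k):
            """Moving average to suppress high-frequency fluctuations."""
            return np.convolve(x, np.ones(k) / k, mode="same")

        # Illustrative synthetic trace: background noise plus a delayed wavelet.
        rng = np.random.default_rng(0)
        trace = 0.05 * rng.standard_normal(1000)
        trace[400:420] += np.hanning(20)

        pick = int(np.argmax(smooth(mer_curve(trace, win=50), k=11)))
        print("estimated first-break sample:", pick)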

  3. Improved method and apparatus for electrostatically sorting biological cells. [DOE patent application

    Science.gov (United States)

    Merrill, J.T.

    An improved method of sorting biological cells in a conventional cell sorter apparatus includes generating a fluid jet containing cells to be sorted, measuring the distance between the centers of adjacent droplets in a zone defined at the point where the fluid jet separates into discrete droplets, setting the distance between the center of a droplet in said separation zone and the position along said fluid jet at which the cell is optically sensed for specific characteristics to be an integral multiple of said center-to-center distance, and disabling a charger from electrically charging a specific droplet if a cell is detected by the optical sensor in a position wherein it will be in the neck area between droplets during droplet formation rather than within a predetermined distance from the droplet center.

  4. Stochastic Order Redshift Technique (SORT): a simple, efficient and robust method to improve cosmological redshift measurements

    Science.gov (United States)

    Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.

    2018-01-01

    We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (zsort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳4 h-1Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
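
    Since the SORT recipe is given procedurally (draw from the reference dN/dz, then reassign in rank order), a minimal Python sketch of the core step for a single sub-volume may help; it is a simplification under stated assumptions, not the authors' pipeline, and the toy reference sample and noise model are invented for illustration.

        import numpy as np

        def sort_recover(z_uncertain, z_reference, rng):
            """One SORT-style pass for a single sub-volume: draw as many values
            from the reference redshift distribution as there are uncertain
            measurements and hand them out so that the rank order of the
            original (noisy) redshifts is preserved."""
            draws = np.sort(rng.choice(z_reference, size=len(z_uncertain), replace=True))
            order = np.argsort(z_uncertain)       # rank order of the noisy estimates
            recovered = np.empty_like(draws)
            recovered[order] = draws              # i-th smallest noisy z gets i-th smallest draw
            return recovered

        # Toy example: precise reference subsample plus large-error estimates.
        rng = np.random.default_rng(1)
        z_true = rng.uniform(0.0, 1.0, size=2000)
        z_ref = rng.choice(z_true, size=500, replace=False)           # "spectroscopic" reference
        z_noisy = z_true + rng.normal(0.0, 0.05, size=z_true.size)    # uncertain measurements

        z_sorted = sort_recover(z_noisy, z_ref, rng)
        print("rms error before:", np.std(z_noisy - z_true), "after:", np.std(z_sorted - z_true))

    In the full method this pass is repeated over many overlapping pencil-beam sub-volumes, and the multiple recovered values per object are combined into a redshift PDF.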

  5. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    Science.gov (United States)

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multi-atlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multi-classifier fusion technique. In particular, in order to speed up segmentation, given a test image we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original, inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.

  6. The bilinear complexity and practical algorithms for matrix multiplication

    Science.gov (United States)

    Smirnov, A. V.

    2013-12-01

    A method for deriving bilinear algorithms for matrix multiplication is proposed. New estimates for the bilinear complexity of a number of problems of the exact and approximate multiplication of rectangular matrices are obtained. In particular, the estimate for the border rank of multiplying 3 × 3 matrices is improved and a practical algorithm for the exact multiplication of square n × n matrices is proposed. The asymptotic arithmetic complexity of this algorithm is O(n^2.7743).
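
    The paper's own O(n^2.7743) scheme is not spelled out in the abstract, so as a generic illustration of what a bilinear matrix-multiplication algorithm looks like, here is the classic Strassen recursion (seven multiplications instead of eight per block, exponent log2 7 ≈ 2.807) in Python; it is offered as background, not as the algorithm proposed in the paper.

        import numpy as np

        def strassen(A, B, cutoff=64):
            """Strassen multiplication for square matrices whose size is a power
            of two; falls back to ordinary multiplication below `cutoff`."""
            n = A.shape[0]
            if n <= cutoff:
                return A @ B
            h = n // 2
            A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
            B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
            M1 = strassen(A11 + A22, B11 + B22, cutoff)   # 7 recursive products
            M2 = strassen(A21 + A22, B11, cutoff)
            M3 = strassen(A11, B12 - B22, cutoff)
            M4 = strassen(A22, B21 - B11, cutoff)
            M5 = strassen(A11 + A12, B22, cutoff)
            M6 = strassen(A21 - A11, B11 + B12, cutoff)
            M7 = strassen(A12 - A22, B21 + B22, cutoff)
            C = np.empty_like(A)
            C[:h, :h] = M1 + M4 - M5 + M7
            C[:h, h:] = M3 + M5
            C[h:, :h] = M2 + M4
            C[h:, h:] = M1 - M2 + M3 + M6
            return C

        A = np.random.rand(128, 128)
        B = np.random.rand(128, 128)
        assert np.allclose(strassen(A, B), A @ B)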

  7. Dual worth trade-off method and its application for solving multiple criteria decision making problems

    Institute of Scientific and Technical Information of China (English)

    Feng Junwen

    2006-01-01

    To overcome the limitations of the traditional surrogate worth trade-off (SWT) method and solve the multiple criteria decision making problem more efficiently and interactively, a new method labeled dual worth trade-off (DWT) method is proposed. The DWT method dynamically uses the duality theory related to the multiple criteria decision making problem and analytic hierarchy process technique to obtain the decision maker's solution preference information and finally find the satisfactory compromise solution of the decision maker. Through the interactive process between the analyst and the decision maker, trade-off information is solicited and treated properly, the representative subset of efficient solutions and the satisfactory solution to the problem are found. The implementation procedure for the DWT method is presented. The effectiveness and applicability of the DWT method are shown by a practical case study in the field of production scheduling.

  8. Improving Reference Service: The Case for Using a Continuous Quality Improvement Method.

    Science.gov (United States)

    Aluri, Rao

    1993-01-01

    Discusses the evaluation of library reference service; examines problems with past evaluations, including the lack of long-term planning and a systems perspective; and suggests a method for continuously monitoring and improving reference service using quality improvement tools such as checklists, cause and effect diagrams, Pareto charts, and…

  9. TODIM Method for Single-Valued Neutrosophic Multiple Attribute Decision Making

    Directory of Open Access Journals (Sweden)

    Dong-Sheng Xu

    2017-10-01

    Full Text Available Recently, the TODIM method has been used to solve multiple attribute decision making (MADM) problems. Single-valued neutrosophic sets (SVNSs) are useful tools to depict the uncertainty of the MADM. In this paper, we extend the TODIM method to MADM with single-valued neutrosophic numbers (SVNNs). Firstly, the definition, comparison, and distance of SVNNs are briefly presented, and the steps of the classical TODIM method for MADM problems are introduced. Then, the extended classical TODIM method is proposed to deal with MADM problems with SVNNs; its significant characteristic is that it can fully consider the decision makers' bounded rationality, which reflects how decisions are actually made. Furthermore, we extend the proposed model to interval neutrosophic sets (INSs). Finally, a numerical example is presented.

  10. A Waterline Extraction Method from Remote Sensing Image Based on Quad-tree and Multiple Active Contour Model

    Directory of Open Access Journals (Sweden)

    YU Jintao

    2016-09-01

    Full Text Available After the characteristics of the geodesic active contour model (GAC), the Chan-Vese model (CV) and the local binary fitting model (LBF) are analyzed, and the active contour model based on regions and edges is combined with an image segmentation method based on quad-trees, a waterline extraction method based on quad-trees and a multiple active contour model is proposed in this paper. Firstly, the method provides an initial contour according to quad-tree segmentation. Secondly, a new signed pressure force (SPF) function based on the global image statistics of the CV model and the local image statistics of the LBF model is defined, and the edge stopping function (ESF) is replaced by the proposed SPF function, which solves problems such as premature termination of the evolution and excessive evolution. Finally, the selective binary and Gaussian filtering level set method is used to avoid re-initialization and regularization and to improve the evolution efficiency. The experimental results show that this method can effectively extract weak edges and strongly concave edges, and offers sub-pixel accuracy, high efficiency and reliability for waterline extraction.

  11. Traffic Management by Using Admission Control Methods in Multiple Node IMS Network

    Directory of Open Access Journals (Sweden)

    Filip Chamraz

    2016-01-01

    Full Text Available The paper deals with Admission Control (AC) methods as a possible solution for traffic management in IMS networks (IP Multimedia Subsystem), from the point of view of efficient redistribution of the available network resources and keeping the parameters of Quality of Service (QoS). The paper specifically aims at the selection of the most appropriate method for a specific type of traffic and at a traffic management concept using AC methods on multiple nodes. The potential benefits and disadvantages of the solution are evaluated.

  12. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, the extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Whereas many existing studies use only acidic extraction, the results indicated that antibiotics with low pKa values were extracted more efficiently under acidic conditions, while antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of the extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  13. The Multiple Intelligences Teaching Method and Mathematics ...

    African Journals Online (AJOL)

    The Multiple Intelligences teaching approach has evolved and been embraced widely especially in the United States. The approach has been found to be very effective in changing situations for the better, in the teaching and learning of any subject especially mathematics. Multiple Intelligences teaching approach proposes ...

  14. Synthesis of research on Biogrout soil improvement method

    Directory of Open Access Journals (Sweden)

    Zsolt KALTENBACHER

    2014-12-01

    Full Text Available Because of the rapid pace of urban development, there is a great need for new, cost-effective ground improvement methods. In this paper, a few chemical improvement technologies and a new biological ground improvement method called Biogrout are discussed. The method, applied in the paper to a Sarmatian sand from Transylvania (Feleac locality), uses microorganisms as catalysts to induce a microbial carbonate precipitation (MICP) and thereby increase the strength and stiffness of cohesionless soils. For this calcium-based procedure, the bacterium Sporosarcina pasteurii (DSMZ 33) is used, while urea (CO(NH2)2) and calcium chloride (CaCl2) are used for the treatment solution. The study presents triaxial testing of sand specimens treated with Biogrout and a comparison of the results with those obtained on untreated sand specimens.

  15. Multiple and mixed methods in formative evaluation: Is more better? Reflections from a South African study

    Directory of Open Access Journals (Sweden)

    Willem Odendaal

    2016-12-01

    Full Text Available Background: Formative programme evaluations assess intervention implementation processes, and are seen widely as a way of unlocking the ‘black box’ of any programme in order to explore and understand why a programme functions as it does. However, few critical assessments of the methods used in such evaluations are available, and there are especially few that reflect on how well the evaluation achieved its objectives. This paper describes a formative evaluation of a community-based lay health worker programme for TB and HIV/AIDS clients across three low-income communities in South Africa. It assesses each of the methods used in relation to the evaluation objectives, and offers suggestions on ways of optimising the use of multiple, mixed-methods within formative evaluations of complex health system interventions. Methods: The evaluation’s qualitative methods comprised interviews, focus groups, observations and diary keeping. Quantitative methods included a time-and-motion study of the lay health workers’ scope of practice and a client survey. The authors conceptualised and conducted the evaluation, and through iterative discussions, assessed the methods used and their results. Results: Overall, the evaluation highlighted programme issues and insights beyond the reach of traditional single methods evaluations. The strengths of the multiple, mixed-methods in this evaluation included a detailed description and nuanced understanding of the programme and its implementation, and triangulation of the perspectives and experiences of clients, lay health workers, and programme managers. However, the use of multiple methods needs to be carefully planned and implemented as this approach can overstretch the logistic and analytic resources of an evaluation. Conclusions: For complex interventions, formative evaluation designs including multiple qualitative and quantitative methods hold distinct advantages over single method evaluations. However

  16. Donepezil improved memory in multiple sclerosis in a randomized clinical trial.

    Science.gov (United States)

    Krupp, L B; Christodoulou, C; Melville, P; Scherl, W F; MacAllister, W S; Elkins, L E

    2004-11-09

    To determine the effect of donepezil in treating memory and cognitive dysfunction in multiple sclerosis (MS). This single-center double-blind placebo-controlled clinical trial evaluated 69 MS patients with cognitive impairment who were randomly assigned to receive a 24-week treatment course of either donepezil (10 mg daily) or placebo. Patients underwent neuropsychological assessment at baseline and after 24 weeks of treatment. The primary outcome was change in verbal learning and memory on the Selective Reminding Test (SRT). Secondary outcomes included other tests of cognitive function, patient-reported change in memory, and clinician-reported impression of cognitive change. Donepezil-treated patients showed significant improvement in memory performance on the SRT compared to placebo (p = 0.043). The benefit of donepezil remained significant after controlling for various covariates including age, Expanded Disability Status Scale, baseline SRT score, reading ability, MS subtype, and sex. Donepezil-treated patients did not show significant improvements on other cognitive tests, but were more than twice as likely to report memory improvement than those in the placebo group (p = 0.006). The clinician also reported cognitive improvement in almost twice as many donepezil vs placebo patients (p = 0.036). No serious adverse events related to study medication occurred, although more donepezil (34.3%) than placebo (8.8%) subjects reported unusual/abnormal dreams (p = 0.010). Donepezil improved memory in MS patients with initial cognitive impairment in a single center clinical trial. A larger multicenter investigation of donepezil in MS is warranted in order to more definitively assess the efficacy of this intervention.

  17. Automatic domain updating technique for improving computational efficiency of 2-D flood-inundation simulation

    Science.gov (United States)

    Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.

    2017-12-01

    Flood is one of the most hazardous disasters and causes serious damage to people and property around the world. To prevent/mitigate flood damage through early warning systems and/or river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation with complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements of the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet and dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels), and then, if a registered cell is flooded, register its surrounding cells. The time for this additional process is kept small by checking only cells at the wet and dry interface. The computation time is reduced by skipping the processing of non-flooded areas. This algorithm is easily applied to any type of 2-D flood inundation model. The proposed ADU method is implemented with 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes in a two to ten times shorter time while giving the same result as the simulation without the ADU method.
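
    The updating algorithm is stated in prose; the following Python sketch shows the bookkeeping it implies under simplified assumptions (a regular grid, a fixed wetness threshold and a 4-cell neighbourhood), and is not the authors' 2-D local inertial solver.

        import numpy as np

        def neighbours(cell, shape):
            """4-neighbourhood of a grid cell, clipped to the domain."""
            i, j = cell
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < shape[0] and 0 <= nj < shape[1]:
                    yield ni, nj

        def adu_update(depth, active, threshold=1e-3):
            """Automatic-Domain-Updating pass: every registered (active) cell
            that is wet registers its neighbours, so the simulation domain
            follows the wet/dry interface instead of covering the whole grid."""
            newly_active = set()
            for cell in active:
                if depth[cell] > threshold:                  # registered cell is flooded
                    for nb in neighbours(cell, depth.shape):
                        if nb not in active:
                            newly_active.add(nb)
            return active | newly_active

        # Toy grid: water initially confined to a "river channel" column; only
        # cells near the channel are registered at the start.
        depth = np.zeros((100, 100))
        depth[:, 50] = 0.5
        active = {(i, j) for i in range(100) for j in (49, 50, 51)}
        active = adu_update(depth, active)
        print("active cells after one update:", len(active))

    In a real simulation the flow equations would then be solved only on the active cells, which is where the reported two- to ten-fold reduction in computation time comes from.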

  18. Constraining surface emissions of air pollutants using inverse modelling: method intercomparison and a new two-step two-scale regularization approach

    Energy Technology Data Exchange (ETDEWEB)

    Saide, Pablo (CGRER, Center for Global and Regional Environmental Research, Univ. of Iowa, Iowa City, IA (United States)), e-mail: pablo-saide@uiowa.edu; Bocquet, Marc (Universite Paris-Est, CEREA Joint Laboratory Ecole des Ponts ParisTech and EDF RandD, Champs-sur-Marne (France); INRIA, Paris Rocquencourt Research Center (France)); Osses, Axel (Departamento de Ingeniera Matematica, Universidad de Chile, Santiago (Chile); Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile)); Gallardo, Laura (Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile); Departamento de Geofisica, Universidad de Chile, Santiago (Chile))

    2011-07-15

    When constraining surface emissions of air pollutants using inverse modelling one often encounters spurious corrections to the inventory at places where emissions and observations are colocated, referred to here as the colocalization problem. Several approaches have been used to deal with this problem: coarsening the spatial resolution of emissions; adding spatial correlations to the covariance matrices; adding constraints on the spatial derivatives into the functional being minimized; and multiplying the emission error covariance matrix by weighting factors. Intercomparison of methods for a carbon monoxide inversion over a city shows that even though all methods diminish the colocalization problem and produce similar general patterns, detailed information can greatly change according to the method used ranging from smooth, isotropic and short range modifications to not so smooth, non-isotropic and long range modifications. Poisson (non-Gaussian) and Gaussian assumptions both show these patterns, but for the Poisson case the emissions are naturally restricted to be positive and changes are given by means of multiplicative correction factors, producing results closer to the true nature of emission errors. Finally, we propose and test a new two-step, two-scale, fully Bayesian approach that deals with the colocalization problem and can be implemented for any prior density distribution

  19. Robust modal curvature features for identifying multiple damage in beams

    Science.gov (United States)

    Ostachowicz, Wiesław; Xu, Wei; Bai, Runbo; Radzieński, Maciej; Cao, Maosen

    2014-03-01

    Curvature mode shape is an effective feature for damage detection in beams. However, it is susceptible to measurement noise, which easily impairs its advantage of sensitivity to damage. To deal with this deficiency, this study formulates an improved curvature mode shape for multiple damage detection in beams based on integrating a wavelet transform (WT) and a Teager energy operator (TEO). The improved curvature mode shape, termed the WT-TEO curvature mode shape, has inherent capabilities of immunity to noise and sensitivity to damage. The proposed method is experimentally validated by identifying multiple cracks in cantilever steel beams with the mode shapes acquired using a scanning laser vibrometer. The results demonstrate that the improved curvature mode shape can identify multiple damage accurately and reliably, and that it is fairly robust to measurement noise.
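
    The Teager energy operator itself is a standard discrete operator, Ψ[x](n) = x(n)² − x(n−1)·x(n+1). As a hedged illustration (the wavelet step is replaced here by a plain moving average, and the mode shape, noise level and "damage" are invented), the Python sketch below applies the operator to a denoised curvature mode shape and reports where the local irregularity is largest.

        import numpy as np

        def teager(x):
            """Discrete Teager energy operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
            psi = np.zeros_like(x)
            psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
            return psi

        def curvature(mode_shape, dx):
            """Central-difference curvature (second derivative) of a mode shape."""
            kappa = np.zeros_like(mode_shape)
            kappa[1:-1] = (mode_shape[2:] - 2 * mode_shape[1:-1] + mode_shape[:-2]) / dx ** 2
            return kappa

        # Toy cantilever-like mode shape with a tiny local change standing in
        # for a crack, plus a little measurement noise.
        x = np.linspace(0.0, 1.0, 400)
        dx = x[1] - x[0]
        mode_shape = 1.0 - np.cos(np.pi * x / 2.0)
        mode_shape[200:210] += 2e-4
        mode_shape += 1e-6 * np.random.default_rng(2).standard_normal(x.size)

        kappa = curvature(mode_shape, dx)
        kappa_smooth = np.convolve(kappa, np.ones(9) / 9, mode="same")   # stand-in for WT denoising
        feature = teager(kappa_smooth)                                   # TEO amplifies the local irregularity
        print("suspect location (sample index):", int(np.argmax(np.abs(feature[10:-10]))) + 10)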

  20. A REVIEW OF ORDER PICKING IMPROVEMENT METHODS

    Directory of Open Access Journals (Sweden)

    Johan Oscar Ong

    2014-09-01

    Full Text Available As one of the most important parts of warehousing, order picking often raises discussion among warehousing professionals, resulting in various studies aiming to analyze how order picking activity can be improved from various perspectives. This paper reviews past research on order picking improvement and the various methods those studies analyzed or developed. The literature review is based on twenty research articles on order picking improvement viewed from four different perspectives: automation (specifically, stock-to-picker systems), storage assignment policy, order batching, and order picking sequencing. By reviewing these studies, we try to identify the most prevalent approaches to order picking improvement. Keywords: warehousing; stock-to-picker; storage assignment; order batching; order picking sequencing; improvement

  1. An improved calcium chloride method preparation and ...

    African Journals Online (AJOL)

    Transformation is one of the fundamental and essential molecular cloning techniques. In this paper, we have reported a modified method for preparation and transformation of competent cells. This modified method, improved from a classical protocol, has made some modifications on the concentration of calcium chloride ...

  2. Full Body Pose Estimation During Occlusion using Multiple Cameras

    DEFF Research Database (Denmark)

    Fihl, Preben; Cosar, Serhan

    people is a very challenging problem for methods based on pictorial structures, as for any other monocular pose estimation method. In this report we present work on a multi-view approach based on pictorial structures that integrates low level information from multiple calibrated cameras to improve the 2D

  3. Work system innovation: Designing improvement methods for generative capability

    DEFF Research Database (Denmark)

    Hansen, David; Møller, Niels

    2013-01-01

    This paper explores how a work system’s capability for improvement is influenced by its improvement methods. Based on an explorative case study at a Lean manufacturing facility, the methods of problem solving and Appreciative Inquiry were compared through in-depth qualitative studies over a 12-month

  4. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop the SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies AdaBoost to learning multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.
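
    The abstract gives only the outline of KLMKB (a Kullback–Leibler kernel SVM inside an AdaBoost ensemble). The hedged scikit-learn sketch below shows the general pattern of boosting kernel classifiers with different standard kernels; the Kullback–Leibler kernel itself is not reproduced, the data are synthetic, and a reasonably recent scikit-learn API (the `estimator` argument of AdaBoostClassifier) is assumed.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Toy data standing in for per-pixel spectral features.
        X, y = make_classification(n_samples=400, n_features=10, random_state=0)

        # One boosted ensemble per candidate kernel; a multiple-kernel-boosting
        # scheme combines such kernel-based base learners rather than relying
        # on a single fixed kernel.
        for kernel in ("linear", "rbf", "poly"):
            base = SVC(kernel=kernel, probability=True)
            clf = AdaBoostClassifier(estimator=base, n_estimators=10, random_state=0)
            score = cross_val_score(clf, X, y, cv=3).mean()
            print(f"{kernel:>6s} kernel, boosted accuracy: {score:.3f}")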

  5. A Method to Construct Plasma with Nonlinear Density Enhancement Effect in Multiple Internal Inductively Coupled Plasmas

    International Nuclear Information System (INIS)

    Chen Zhipeng; Li Hong; Liu Qiuyan; Luo Chen; Xie Jinlin; Liu Wandong

    2011-01-01

    A method is proposed to build up plasma based on a nonlinear enhancement phenomenon of the plasma density when the discharge is driven by multiple internal antennas simultaneously. It turns out that the plasma density under multiple sources is higher than the linear summation of the densities under each source. This effect helps offset the fast exponential decay of plasma density around a single internal inductively coupled plasma source and allows a larger-area plasma to be generated with multiple internal inductively coupled plasma sources. After a careful study of the balance between the enhancement and the decay of plasma density in experiments, a plasma was built up by four sources, which proves the feasibility of this method. According to the method, more sources and a more intensive enhancement effect can be employed to further build up a high-density, large-area plasma for different applications. (low temperature plasma)

  6. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

    We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
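
    Control variates are a standard variance-reduction device, so a small hedged numpy sketch may help fix ideas: a mean performance quantity is estimated with and without a control variate whose expectation is known exactly. The toy model and the choice of control are assumptions for illustration, not the report's performance-assessment code.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000

        # Uncertain input parameter and a toy "performance quantity of interest".
        k = rng.lognormal(mean=0.0, sigma=0.5, size=n)    # permeability-like parameter
        pqi = np.exp(-1.0 / k)                            # stand-in for an expensive model output

        # Control variate: a cheap quantity correlated with the PQI whose mean is known.
        control = k
        control_mean = np.exp(0.5 ** 2 / 2.0)             # exact mean of the lognormal

        beta = np.cov(pqi, control)[0, 1] / np.var(control)   # near-optimal coefficient
        cv_estimate = pqi.mean() - beta * (control.mean() - control_mean)

        print("plain Monte Carlo estimate:", pqi.mean())
        print("control-variate estimate  :", cv_estimate)
        print("variance reduction factor :",
              np.var(pqi) / np.var(pqi - beta * (control - control_mean)))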

  7. Color quality improvement of reconstructed images in color digital holography using speckle method and spectral estimation

    Science.gov (United States)

    Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa

    2018-05-01

    In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and the spectral estimation. In this technique, an object is illuminated by a speckle field and then an object wave is produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of two coherent waves are recorded as digital holograms on an image sensor. Speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process of images reconstructed from multiple holograms, we use the Wiener estimation method for obtaining spectral transmittance curves in reconstructed images. The color reproducibility in this method is demonstrated and evaluated using a Macbeth color chart film and staining cells of onion.

  8. Identifying multiple influential spreaders in term of the distance-based coloring

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Lei; Lin, Jian-Hong; Guo, Qiang [Research Center of Complex Systems Science, University of Shanghai for Science and Technology, Shanghai 200093 (China); Liu, Jian-Guo, E-mail: liujg004@ustc.edu.cn [Research Center of Complex Systems Science, University of Shanghai for Science and Technology, Shanghai 200093 (China); Data Science and Cloud Service Research Centre, Shanghai University of Finance and Economics, Shanghai 200433 (China)

    2016-02-22

    Identifying influential nodes is of significance for understanding the dynamics of the information diffusion process in complex networks. In this paper, we present an improved distance-based coloring method to identify multiple influential spreaders. In our method, each node is colored with the rule that the distance between initial nodes is close to the average distance of the network. When all nodes are colored, nodes with the same color are sorted into an independent set. Then we choose the nodes at the top positions of the ranking list according to their centralities. The experimental results for an artificial network and three empirical networks show that, compared with the performance of the traditional coloring method, the improvement ratio of our distance-based coloring method reaches 12.82%, 8.16%, 4.45% and 2.93% for the ER, Erdős, Polblogs and Routers networks respectively. - Highlights: • We present an improved distance-based coloring method to identify multiple influential spreaders. • Each node is colored such that the distance between initial nodes is close to the average distance. • For the three empirical networks, the improvement ratio of our distance-based coloring method reaches 8.16% for the Erdős network.
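
    Since the coloring rule is described only verbally, a hedged NetworkX sketch of the general idea may help: greedily assemble a set of nodes whose pairwise distance is at least the network's average shortest-path distance, preferring nodes of high centrality. The greedy ordering, the use of degree centrality and the toy graph are simplifying assumptions, not the exact procedure of the paper.

        import networkx as nx

        def distance_based_seeds(G, n_seeds=3):
            """Pick multiple spreaders that are far apart: greedily collect nodes
            whose mutual distance is at least the average shortest-path distance,
            considering higher-degree nodes first."""
            d_avg = nx.average_shortest_path_length(G)
            lengths = dict(nx.all_pairs_shortest_path_length(G))
            chosen = []
            for node in sorted(G, key=G.degree, reverse=True):
                if all(lengths[node][c] >= d_avg for c in chosen):
                    chosen.append(node)
                if len(chosen) == n_seeds:
                    break
            return chosen

        G = nx.erdos_renyi_graph(200, 0.05, seed=0)
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()   # keep the giant component
        print("seed spreaders:", distance_based_seeds(G, n_seeds=3))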

  9. Identifying multiple influential spreaders in term of the distance-based coloring

    International Nuclear Information System (INIS)

    Guo, Lei; Lin, Jian-Hong; Guo, Qiang; Liu, Jian-Guo

    2016-01-01

    Identifying influential nodes is of significance for understanding the dynamics of the information diffusion process in complex networks. In this paper, we present an improved distance-based coloring method to identify multiple influential spreaders. In our method, each node is colored with the rule that the distance between initial nodes is close to the average distance of the network. When all nodes are colored, nodes with the same color are sorted into an independent set. Then we choose the nodes at the top positions of the ranking list according to their centralities. The experimental results for an artificial network and three empirical networks show that, compared with the performance of the traditional coloring method, the improvement ratio of our distance-based coloring method reaches 12.82%, 8.16%, 4.45% and 2.93% for the ER, Erdős, Polblogs and Routers networks respectively. - Highlights: • We present an improved distance-based coloring method to identify multiple influential spreaders. • Each node is colored such that the distance between initial nodes is close to the average distance. • For the three empirical networks, the improvement ratio of our distance-based coloring method reaches 8.16% for the Erdős network.

  10. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    Science.gov (United States)

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, Brownian motion Hölder regularity functions (polynomial, periodic (sine) and exponential) for 2D images were applied, as multifractal methods, to MR brain images, aiming to easily identify distressed regions in MS patients. With these regions, we propose an MS classification based on the multifractal method using the Self-Organizing Map (SOM) algorithm. Thus, we obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.

  11. Volta-Based Cells Materials Chemical Multiple Representation to Improve Ability of Student Representation

    Science.gov (United States)

    Helsy, I.; Maryamah; Farida, I.; Ramdhani, M. A.

    2017-09-01

    This study aimed to describe the application of the teaching materials, to analyze the increase in students' ability to connect the three levels of representation, and to gauge student responses after the application of multiple-representation-based chemistry teaching materials. The method used a quasi-experimental one-group pretest-posttest design with 71 students. The results showed that the application of the teaching materials reached 88%, in the very good category. A significant increase in the students' ability to connect the three levels of representation was found after the application of the multiple-representation-based chemistry teaching materials, with t-value > t-crit (11.402 > 1.991). The recapitulated N-gain between pretest and posttest was roughly similar for all groups, about 0.6, a medium achievement criterion. Students gave a positive response to the application of the multiple-representation-based chemistry teaching materials: students agreed with the use of the teaching materials in teaching chemistry (88%) and agreed that the teaching materials make it easier to connect the three levels of representation (95%).
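
    For reference, and as a hedged addition since the abstract does not state which variant of the index was used, the normalized gain usually reported in such pretest-posttest designs is Hake's

        \langle g \rangle = \frac{S_{\mathrm{post}} - S_{\mathrm{pre}}}{S_{\mathrm{max}} - S_{\mathrm{pre}}},

    where S_pre and S_post are the pretest and posttest scores and S_max is the maximum attainable score; values between 0.3 and 0.7 are conventionally classed as medium gains, consistent with the reported value of about 0.6.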

  12. Method to improve commercial bonded SOI material

    Science.gov (United States)

    Maris, Humphrey John; Sadana, Devendra Kumar

    2000-07-11

    A method of improving the bonding characteristics of a previously bonded silicon on insulator (SOI) structure is provided. The improvement in the bonding characteristics is achieved in the present invention by, optionally, forming an oxide cap layer on the silicon surface of the bonded SOI structure and then annealing either the uncapped or oxide capped structure in a slightly oxidizing ambient at temperatures greater than 1200°C. Also provided herein is a method for detecting the bonding characteristics of previously bonded SOI structures. According to this aspect of the present invention, a picosecond laser pulse technique is employed to determine the bonding imperfections of previously bonded SOI structures.

  13. Case studies: Soil mapping using multiple methods

    Science.gov (United States)

    Petersen, Hauke; Wunderlich, Tina; Hagrey, Said A. Al; Rabbel, Wolfgang; Stümpel, Harald

    2010-05-01

    Soil is a non-renewable resource with fundamental functions like filtering (e.g. water), storing (e.g. carbon), transforming (e.g. nutrients) and buffering (e.g. contamination). Degradation of soils is by now a well-known fact not only to scientists; decision makers in politics have also accepted it as a serious problem for several environmental aspects. National and international authorities have already worked out preservation and restoration strategies for soil degradation, though how to put these strategies into real practice is still a matter of active research. Common to all strategies, however, is that a description of soil state and dynamics is required as a base step. This includes collecting information from soils with methods ranging from direct soil sampling to remote applications. At an intermediate scale, mobile geophysical methods are applied, with the advantage of fast working progress but the disadvantage of site-specific calibration and interpretation issues. In the framework of the iSOIL project we present here some case studies of soil mapping performed using multiple geophysical methods. We present examples of combined field measurements with EMI, GPR, magnetic and gamma-spectrometric techniques carried out with the mobile multi-sensor system of Kiel University (GER). Depending on soil type and actual environmental conditions, different methods show a different quality of information. By applying diverse methods we want to figure out which methods or combinations of methods will give the most reliable information concerning soil state and properties. To investigate the influence of varying material we performed mapping campaigns on field sites with sandy, loamy and loessy soils. Classification of measured or derived attributes shows not only the lateral variability but also gives hints to a variation in the vertical distribution of soil material. For all soils, of course, soil water content can be a critical factor concerning a successful

  14. An improved partial least-squares regression method for Raman spectroscopy

    Science.gov (United States)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

    It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, and then the importance of each variable in the sorted list is evaluated using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and genetic algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either a similar or a better performance compared to the genetic algorithm.
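
    The selection mechanism (sort the regression coefficients, drop the least important variable while the cross-validated error does not worsen) is described in prose; the scikit-learn sketch below implements a basic backward-variable-selection loop around PLS regression. It follows the general BVSPLS idea under stated assumptions (plain coefficients rather than the authors' weighting, 5-fold cross-validation, synthetic data) and is not the exact IBVSPLS procedure.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        def rmsep(X, y, n_components):
            """Cross-validated root mean square error of prediction."""
            y_hat = cross_val_predict(PLSRegression(n_components), X, y, cv=5).ravel()
            return float(np.sqrt(np.mean((y - y_hat) ** 2)))

        def backward_selection_pls(X, y, n_components=3):
            """Repeatedly drop the variable with the smallest |PLS regression
            coefficient| as long as the RMSEP does not get worse."""
            keep = list(range(X.shape[1]))
            best = rmsep(X[:, keep], y, n_components)
            while len(keep) > n_components:
                pls = PLSRegression(n_components).fit(X[:, keep], y)
                weakest = int(np.argmin(np.abs(pls.coef_).ravel()))
                trial = keep[:weakest] + keep[weakest + 1:]
                err = rmsep(X[:, trial], y, n_components)
                if err > best:
                    break                         # removing more variables starts to hurt
                keep, best = trial, err
            return keep, best

        # Toy "spectra": 200 samples x 50 variables, only two informative ones.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 50))
        y = X[:, 5] - 0.5 * X[:, 20] + 0.1 * rng.standard_normal(200)

        selected, err = backward_selection_pls(X, y)
        print(f"kept {len(selected)} variables, RMSEP = {err:.3f}")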

  15. Improvement of numerical analysis method for FBR core characteristics. 3

    International Nuclear Information System (INIS)

    Takeda, Toshikazu; Yamamoto, Toshihisa; Kitada, Takanori; Katagi, Yousuke

    1998-03-01

    As part of the improvement of numerical analysis methods for FBR core characteristics, studies on several topics have been conducted: the multiband method, Monte Carlo perturbation and the nodal transport method. This report is composed of the following three parts. Part 1: Improvement of the Reaction Rate Calculation Method in the Blanket Region Based on the Multiband Method; a method was developed for precise evaluation of the reaction rate distribution in the blanket region using the multiband method. With the 3-band parameters obtained from the ordinary fitting method, major reaction rates such as the U-238 capture, U-235 fission, Pu-239 fission and U-238 fission rate distributions were analyzed. Part 2: Improvement of the Estimation Method for Reactivity Based on Monte-Carlo Perturbation Theory; estimation methods based on Monte-Carlo perturbation theory have been investigated and introduced into the calculational code. The Monte-Carlo perturbation code was applied to the MONJU core and the calculational results were compared to the reference. Part 3: Improvement of the Nodal Transport Calculation for Hexagonal Geometry; a method to evaluate the intra-subassembly power distribution from the nodal averaged neutron flux and the surface fluxes at the node boundaries was developed based on transport theory. (J.P.N.)

  16. Systematic Review: The Effectiveness of Interventions to Reduce Falls and Improve Balance in Adults With Multiple Sclerosis.

    Science.gov (United States)

    Gunn, Hilary; Markevics, Sophie; Haas, Bernhard; Marsden, Jonathan; Freeman, Jennifer

    2015-10-01

    To evaluate the effectiveness of interventions in reducing falls and/or improving balance as a falls risk in multiple sclerosis (MS). Computer-based and manual searches included the following medical subject heading keywords: "Multiple Sclerosis AND accidental falls" OR "Multiple Sclerosis AND postural balance" OR "Multiple Sclerosis AND exercise" OR "Multiple Sclerosis AND physical/physio therapy" NOT animals. All literature published to November 2014 with available full-text details were included. Studies were reviewed against the PICO (participants, interventions, comparisons, outcomes) selection criteria: P, adults with MS; I, falls management/balance rehabilitation interventions; C, randomized/quasi-randomized studies comparing intervention with usual care or placebo control; O, falls outcomes and measures of balance. Fifteen articles of the original 529 search results were included. Two reviewers independently extracted data and assessed methodological quality using the Cochrane Risk of Bias tool. Random-effects meta-analysis indicated a small decrease in falls risk (risk ratio, .74), although the 95% confidence interval (CI) crossed 1 (95% CI, .12-4.38). The pooled standardized mean difference (SMD) for balance outcomes was .55 (95% CI, .35-.74). SMD varied significantly between exercise subgroupings; gait, balance, and functional training interventions yielded the greatest pooled effect size (ES) (SMD=.82; 95% CI, 0.55-1.10). There was a moderate positive correlation between program volume (min/wk) and ES (Cohen's d) (r=.70, P=.009), and a moderate negative correlation between program duration in weeks and ES (r=-.62, P=.03). Variations in interventions and outcomes and methodological limitations mean that results must be viewed with caution. This review suggests that balance may improve through exercise interventions, but that the magnitude of the improvements achieved in existing programs may not be sufficient to impact falls outcomes. Supporting

  17. Improvement of Tone's method with two-term rational approximation

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Endo, Tomohiro; Chiba, Go

    2011-01-01

    An improvement of Tone's method, which is a resonance calculation method based on the equivalence theory, is proposed. In order to increase calculation accuracy, the two-term rational approximation is incorporated for the representation of neutron flux. Furthermore, some theoretical aspects of Tone's method, i.e., its inherent approximation and choice of adequate multigroup cross section for collision probability estimation, are also discussed. The validity of improved Tone's method is confirmed through a verification calculation in an irregular lattice geometry, which represents part of an LWR fuel assembly. The calculation result clarifies the validity of the present method. (author)

  18. Room for improvement? Leadership, innovation culture and uptake of quality improvement methods in general practice.

    Science.gov (United States)

    Apekey, Tanefa A; McSorley, Gerry; Tilling, Michelle; Siriwardena, A Niroshan

    2011-04-01

    Leadership and innovation are currently seen as essential elements for the development and maintenance of high-quality care. Little is known about the relationship between leadership and culture of innovation and the extent to which quality improvement methods are used in general practice. This study aimed to assess the relationship between leadership behaviour, culture of innovation and adoption of quality improvement methods in general practice. Self-administered postal questionnaires were sent to general practitioner quality improvement leads in one county in the UK between June and December 2007. The questionnaire consisted of background information, a 12-item scale to assess leadership behaviour, a seven-dimension self-rating scale for culture of innovation and questions on current use of quality improvement tools and techniques. Sixty-three completed questionnaires (62%) were returned. Leadership behaviours were not commonly reported. Most practices reported a positive culture of innovation, featuring relationships most strongly, followed by targets and information, but rated lower on the other dimensions of rewards, risk and resources. There was a significant positive correlation between leadership behaviour and the culture of innovation (r = 0.57). Quality improvement methods were not adopted by most participating practices. Leadership behaviours were infrequently reported and this was associated with a limited culture of innovation in participating general practices. There was little use of quality improvement methods beyond clinical and significant event audit. Practices need support to enhance leadership skills, encourage innovation and develop quality improvement skills if improvements in health care are to accelerate. © 2010 Blackwell Publishing Ltd.

  19. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    Science.gov (United States)

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method based on the traditional spectral quaternion interpolation. Firstly, we decompose the diffusion tensors, with the direction of the tensors represented by a quaternion. Then we revise the size and direction of the tensor separately according to the situation. Finally, we obtain the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and of the determinant of the tensors, but also preserve the tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
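
    The separation of orientation and size described above can be sketched as follows; this is a rough illustration of the general idea (orientation interpolated as a rotation via quaternion slerp, eigenvalues interpolated log-linearly) using scipy utilities, not the published algorithm, and the two toy tensors are invented.

```python
# Sketch: interpolate a diffusion tensor by treating orientation and size
# separately -- orientation via quaternion slerp, eigenvalues log-linearly.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def split(tensor):
    """Eigen-decompose a symmetric 3x3 tensor into (rotation, eigenvalues)."""
    w, v = np.linalg.eigh(tensor)
    if np.linalg.det(v) < 0:          # ensure a proper rotation matrix
        v[:, 0] *= -1.0
    return Rotation.from_matrix(v), w

def interpolate(t0, t1, alpha):
    """Weighted interpolation of two diffusion tensors, alpha in [0, 1]."""
    r0, w0 = split(t0)
    r1, w1 = split(t1)
    # orientation: spherical linear interpolation (quaternion slerp)
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([r0, r1]))
    v = slerp([alpha]).as_matrix()[0]
    # size: log-linear interpolation keeps the eigenvalues positive
    w = np.exp((1.0 - alpha) * np.log(w0) + alpha * np.log(w1))
    return v @ np.diag(w) @ v.T

rot = Rotation.from_euler("z", 60, degrees=True).as_matrix()
d0 = np.diag([3.0, 1.0, 1.0])         # toy prolate tensor
d1 = rot @ d0 @ rot.T                 # same tensor rotated by 60 degrees
print(interpolate(d0, d1, 0.5))
```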

  20. Integrated QSAR study for inhibitors of Hedgehog Signal Pathway against multiple cell lines:a collaborative filtering method.

    Science.gov (United States)

    Gao, Jun; Che, Dongsheng; Zheng, Vincent W; Zhu, Ruixin; Liu, Qi

    2012-07-31

    The Hedgehog signaling pathway is one of the signaling pathways that are very important to embryonic development. Inhibitors acting on the Hedgehog signaling pathway can control cell growth and death, and the search for novel inhibitors of this pathway is in great demand. In fact, effective inhibitors could provide efficient therapies for a wide range of malignancies, and targeting this pathway in cells represents a promising new paradigm for the control of cell growth and death. Current research mainly focuses on the synthesis of inhibitors derived from cyclopamine, which bind specifically to the Smo protein and can be used for cancer therapy. While quantitative structure-activity relationship (QSAR) studies have been performed for these compounds among different cell lines, none of them have achieved acceptable results in the prediction of activity values of new compounds. In this study, we propose a novel collaborative QSAR model for inhibitors of the Hedgehog signaling pathway that integrates information from multiple cell lines. Such a model is expected to substantially improve on single-cell-line QSAR models and to provide useful clues for developing clinically effective inhibitors and modifications of parent lead compounds targeting the Hedgehog signaling pathway. In this study, we have presented: (1) a collaborative QSAR model, which integrates information among multiple cell lines to boost the QSAR results, rather than modeling only a single cell line. Our experiments have shown that the performance of our model is significantly better than single-cell-line QSAR methods; and (2) an efficient feature selection strategy under such a collaborative environment, which can derive the commonly important features related to all given cell lines, while simultaneously showing their specific contributions to a particular cell line. Based on the feature selection results, we have proposed several

  1. Analytic Methods for Evaluating Patterns of Multiple Congenital Anomalies in Birth Defect Registries.

    Science.gov (United States)

    Agopian, A J; Evans, Jane A; Lupo, Philip J

    2018-01-15

    It is estimated that 20 to 30% of infants with birth defects have two or more birth defects. Among these infants with multiple congenital anomalies (MCA), co-occurring anomalies may represent either chance (i.e., unrelated etiologies) or pathogenically associated patterns of anomalies. While some MCA patterns have been recognized and described (e.g., known syndromes), others have not been identified or characterized. Elucidating these patterns may result in a better understanding of the etiologies of these MCAs. This article reviews the literature with regard to analytic methods that have been used to evaluate patterns of MCAs, in particular those using birth defect registry data. A popular method for MCA assessment involves a comparison of the observed to expected ratio for a given combination of MCAs, or one of several modified versions of this comparison. Other methods include the use of numerical taxonomy or other clustering techniques, multiple regression analysis, and log-linear analysis. Advantages and disadvantages of these approaches, as well as specific applications, are outlined. Despite the availability of multiple analytic approaches, relatively few MCA combinations have been assessed. The availability of large birth defects registries and computing resources that allow for automated, big-data strategies for prioritizing MCA patterns may provide new avenues for better understanding the co-occurrence of birth defects. Thus, the selection of an analytic approach may depend on several considerations. Birth Defects Research 110:5-11, 2018. © 2017 Wiley Periodicals, Inc.
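
    A worked toy example of the observed-to-expected comparison mentioned above (all numbers are illustrative, not registry data): under independence, the expected count of a two-defect combination is the product of the individual defect frequencies times the number of MCA cases.

```python
# Toy observed/expected (O/E) calculation for a pair of co-occurring defects.
n_cases = 10_000          # infants with multiple congenital anomalies (illustrative)
p_defect_a = 0.05         # frequency of defect A among these cases
p_defect_b = 0.02         # frequency of defect B among these cases
observed_ab = 25          # infants with both A and B (illustrative)

expected_ab = n_cases * p_defect_a * p_defect_b   # = 10 under independence
oe_ratio = observed_ab / expected_ab
print(f"expected = {expected_ab:.1f}, O/E = {oe_ratio:.2f}")  # O/E = 2.5 suggests association
```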

  2. A method of risk assessment for a multi-plant site

    International Nuclear Information System (INIS)

    White, R.F.

    1983-06-01

    A model is presented which can be used in conjunction with probabilistic risk assessment to estimate whether a site on which there are several plants (reactors or chemical plants containing radioactive materials) meets whatever risk acceptance criteria or numerical risk guidelines are applied at the time of the assessment in relation to various groups of people and for various sources of risk. The application of the multi-plant site model to the direct and inverse methods of risk assessment is described. A method is proposed by which the potential hazard rating associated with a given plant can be quantified so that an appropriate allocation can be made when assessing the risks associated with each of the plants on a site. (author)

  3. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Directory of Open Access Journals (Sweden)

    Hossein Karimi

    2011-04-01

    The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions at a reasonable computational time for large problem instances. The proposed method is examined using some numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better in solving APM.

  4. Improved pinning by multiple in-line damage

    Energy Technology Data Exchange (ETDEWEB)

    Weinstein, Roy [Beam Particle Dynamics Laboratories, University of Houston, Houston, TX 77204-5005 (United States); Sawh, Ravi-Persad [Beam Particle Dynamics Laboratories, University of Houston, Houston, TX 77204-5005 (United States); Gandini, Alberto [Beam Particle Dynamics Laboratories, University of Houston, Houston, TX 77204-5005 (United States); Parks, Drew [Beam Particle Dynamics Laboratories, University of Houston, Houston, TX 77204-5005 (United States)

    2005-02-01

    Columnar pinning centres provide the largest pinning potential, U_pin, but not the greatest J_c or pinnable field, B_pin. Characteristics of ion-generated columnar defects which limit J_c and B_pin are discussed, including reduction of the percolation path, and the need for a larger number of columns of damage, for pinning, than are usually estimated. It is concluded that columnar pinning centres limit B_pin to less than 4 T, and also severely reduce J_c. The goal of maximizing U_pin, via columnar centres, appears to have obscured a more rewarding approach and resulted in neglect of a large regime of ion interactions. Evidence is reviewed that multiple in-line damage (MILD), described herein, can provide orders of magnitude higher J_c and B_pin, despite providing lower U_pin. The MILD pinning centre morphology is discussed, and it is estimated that for present-day large grain high-T_c superconductors, a J_c value of ~10^6 A cm^-2 is obtainable at 77 K, even when crystal plane alignment and weak links are not improved. In addition, the pinned field is increased by over an order of magnitude. An experiment is proposed to confirm these calculations, directly compare MILD pinning to continuous columnar pinning, and determine the optimum MILD structure. Applications of MILD pinning are discussed.

  5. Analyzing and improving a chaotic encryption method

    International Nuclear Information System (INIS)

    Wu Xiaogang; Hu Hanping; Zhang Baoliang

    2004-01-01

    To resist the return map attack [Phys. Rev. Lett. 74 (1995) 1970] presented by Perez and Cerdeira, Shouliang Bu and Bing-Hong Wang proposed a simple method to improve the security of chaotic encryption by modulating the chaotic carrier with an appropriately chosen scalar signal [Chaos, Solitons and Fractals 19 (2004) 919]. They maintained that this modulating strategy not only preserves all the information required for synchronizing chaotic systems but also destroys the possibility of phase-space reconstruction of the sender dynamics, such as a return map. However, a critical defect exists in this scheme. This paper gives a zero-point autocorrelation method, which can recover the parameters of the scalar signal from the modulated signal. Consequently, the messages can be extracted from the demodulated chaotic carrier by using the return map. Based on this fact, an improved scheme is presented to obtain higher security, and numerical simulation indicates the improvement of the synchronizing performance as well.

  6. Using the experience-sampling method to examine the psychological mechanisms by which participatory art improves wellbeing.

    Science.gov (United States)

    Holt, Nicola J

    2018-01-01

    To measure the immediate impact of art-making in everyday life on diverse indices of wellbeing ('in the moment' and longer term) in order to improve understanding of the psychological mechanisms by which art may improve mental health. Using the experience-sampling method, 41 artists were prompted (with a 'beep' on a handheld computer) at random intervals (10 times a day, for one week) to answer a short questionnaire. The questionnaire tracked art-making and enquired about mood, cognition and state of consciousness. This resulted in 2,495 sampled experiences, with a high response rate in which 89% of questionnaires were completed. Multi-level modelling was used to evaluate the impact of art-making on experience, with 2,495 'experiences' (experiential-level) nested within 41 participants (person-level). Recent art-making was significantly associated with experiential shifts: improvement in hedonic tone, vivid internal imagery and the flow state. Furthermore, the frequency of art-making across the week was associated with person-level measures of wellbeing: eudemonic happiness and self-regulation. Cross-level interactions, between experiential and person-level variables, suggested that hedonic tone improved more for those scoring low on eudemonic happiness, and further that, those high in eudemonic happiness were more likely to experience phenomenological features of the flow state and to experience inner dialogue while art-making. Art-making has both immediate and long-term associations with wellbeing. At the experiential level, art-making affects multiple dimensions of conscious experience: affective, cognitive and state factors. This suggests that there are multiple routes to wellbeing (improving hedonic tone, making meaning through inner dialogue and experiencing the flow state). Recommendations are made to consider these factors when both developing and evaluating public health interventions that involve participatory art.

  7. Solutions on high-resolution multiple configuration system sensors

    Science.gov (United States)

    Liu, Hua; Ding, Quanxin; Guo, Chunjie; Zhou, Liwei

    2014-11-01

    To achieve improved resolution in the modern imaging domain, a continuous-zoom multiple-configuration method built around a core optic is used to establish a model based on a novel principle of energy transfer and high-accuracy localization, by which the system resolution can be improved to the nanometre level. A comparative study of traditional versus modern methods demonstrates that the relationship and balance among the merit function, the optimization algorithms and the model parameterization are important. System evaluation criteria such as MTF, REA and RMS support these arguments qualitatively.

  8. Is There a Role for Oral Antibiotic Preparation Alone Before Colorectal Surgery? ACS-NSQIP Analysis by Coarsened Exact Matching.

    Science.gov (United States)

    Garfinkle, Richard; Abou-Khalil, Jad; Morin, Nancy; Ghitulescu, Gabriela; Vasilevsky, Carol-Ann; Gordon, Philip; Demian, Marie; Boutros, Marylise

    2017-07-01

    Recent studies demonstrated reduced postoperative complications using combined mechanical bowel and oral antibiotic preparation before elective colorectal surgery. The aim of this study was to assess the impact of these 2 interventions on surgical site infections, anastomotic leak, ileus, major morbidity, and 30-day mortality in a large cohort of elective colectomies. This is a retrospective comparison of 30-day outcomes using the American College of Surgeons National Surgical Quality Improvement Program colectomy-targeted database with coarsened exact matching. Interventions were performed in hospitals participating in the national surgical database. Adult patients who underwent elective colectomy from 2012 to 2014 were included. Preoperative bowel preparations were evaluated. The primary outcomes measured were surgical site infections, anastomotic leak, postoperative ileus, major morbidity, and 30-day mortality. A total of 40,446 patients were analyzed: 13,219 (32.7%), 13,935 (34.5%), and 1572 (3.9%) in the no-preparation, mechanical bowel preparation alone, and oral antibiotic preparation alone groups, and 11,720 (29.0%) in the combined preparation group. After matching, 9800, 1461, and 8819 patients remained in the mechanical preparation, oral antibiotic preparation, and combined preparation groups for comparison with patients without preparation. On conditional logistic regression of matched patients, oral antibiotic preparation alone was protective of surgical site infection (OR, 0.63; 95% CI, 0.45-0.87), anastomotic leak (OR, 0.60; 95% CI, 0.34-0.97), ileus (OR, 0.79; 95% CI, 0.59-0.98), and major morbidity (OR, 0.73; 95% CI, 0.55-0.96), but not mortality (OR, 0.32; 95% CI, 0.08-1.18), whereas a regimen of combined oral antibiotics and mechanical bowel preparation was protective for all 5 major outcomes. When directly compared with oral antibiotic preparation alone, the combined regimen was not associated with any difference in any of the 5 postoperative
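
    The matching step named in the title can be sketched generically as follows; this is not the ACS-NSQIP analysis itself, only an illustration of coarsened exact matching with pandas on synthetic data: continuous covariates are coarsened into bins, strata are formed by exact matching on the bins, and only strata containing both exposed and unexposed patients are retained. All variables and cut points are invented.

```python
# Sketch of coarsened exact matching (CEM): coarsen covariates into bins,
# match exactly on the bins, keep only strata with both treated and controls.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "treated": rng.integers(0, 2, 1000),      # e.g. oral antibiotic prep vs. none
    "age": rng.normal(62, 12, 1000),
    "bmi": rng.normal(28, 5, 1000),
    "asa_class": rng.integers(1, 5, 1000),    # already categorical
})

# Coarsen the continuous covariates into a few clinically motivated bins.
df["age_bin"] = pd.cut(df["age"], bins=[0, 50, 65, 80, 120])
df["bmi_bin"] = pd.cut(df["bmi"], bins=[0, 25, 30, 35, 100])

strata = ["age_bin", "bmi_bin", "asa_class"]

# Keep only strata (exact matches on the coarsened covariates) that contain
# both treated and untreated patients; everything else is pruned.
matched = df.groupby(strata, observed=True).filter(
    lambda g: 0 < g["treated"].sum() < len(g))

print(f"retained {len(matched)} of {len(df)} patients in matched strata")
```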

  9. Learning from Multiple Classifier Systems: Perspectives for Improving Decision Making of QSAR Models in Medicinal Chemistry.

    Science.gov (United States)

    Pham-The, Hai; Nam, Nguyen-Hai; Nga, Doan-Viet; Hai, Dang Thanh; Dieguez-Santana, Karel; Marrero-Poncee, Yovani; Castillo-Garit, Juan A; Casanola-Martin, Gerardo M; Le-Thi-Thu, Huong

    2018-02-09

    Quantitative structure-activity relationship (QSAR) modeling has been widely used in medicinal chemistry and computational toxicology for many years. Today, as the number of chemicals is increasing dramatically, QSAR methods have become pivotal for handling the data, identifying a decision, and gathering useful information from data processing. Advances in this field have paved the way for numerous alternative approaches that require deep mathematics in order to enhance the learning capability of QSAR models. One of these directions is the use of Multiple Classifier Systems (MCSs), which potentially provide a means to exploit the advantages of manifold learning through decomposition frameworks, while improving generalization and predictive performance. In this paper, we present MCS as a next generation of QSAR modeling techniques and discuss the opportunity to mine the vast number of models already published in the literature. We systematically revisit the theoretical frameworks of MCS as well as current advances in MCS application for QSAR practice. Furthermore, we illustrate our idea by describing ensemble approaches for modeling histone deacetylase (HDAC) inhibitors. We expect that our analysis will contribute to a better understanding of MCS application and its future perspectives for improving the decision making of QSAR models. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
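
    A minimal illustration of a multiple classifier system in scikit-learn: heterogeneous base learners are combined by soft voting and compared against each learner alone. The fingerprint matrix `X` and label `y` below are synthetic stand-ins, not the HDAC data, and the published analyses use more elaborate decomposition schemes than plain voting.

```python
# Minimal multiple-classifier-system sketch: combine heterogeneous base
# learners by soft voting and compare against each learner alone.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 256))            # stand-in fingerprint bits
y = (X[:, :8].sum(axis=1) + rng.normal(0, 1, 500) > 4).astype(int)

members = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("nb", BernoulliNB()),
]
ensemble = VotingClassifier(members, voting="soft")

for name, model in members + [("ensemble", ensemble)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:8s} AUC = {auc:.3f}")
```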

  10. An implementation of the multiple multipole method in the analysis of elliptical objects to enhance backscattering of light

    Science.gov (United States)

    Jalali, T.

    2015-07-01

    In this paper, we present the modelling of dielectric elliptical shapes with respect to a highly confined power distribution in the resulting nanojet, which has been parameterized according to the beam waist and its beam divergence. The method is based on spherical Bessel functions as basis functions, adapted to the standard multiple multipole method. This method can handle elliptically shaped particles of varying size and refractive index, which have been studied under plane-wave illumination with the two- and three-dimensional multiple multipole method. Because of its fast and good convergence, the results obtained from the simulation are highly accurate and reliable. The simulation time is less than a minute in both two and three dimensions. Therefore, the proposed method is found to be computationally efficient, fast and accurate.

  11. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Directory of Open Access Journals (Sweden)

    Zhenxiang Jiang

    2016-01-01

    Traditional methods of diagnosing dam service status are suitable only for a single measuring point. These methods reflect the local status of the dam and do not merge multisource data effectively, which makes them unsuitable for diagnosing overall service status. This study proposes a new multiple-point method to diagnose dam service status based on a joint distribution function. The function, which includes monitoring data of multiple points, can be established with a t-copula. From it, the possibility, an important fused value for different measuring combinations, can be calculated, and the corresponding diagnostic criterion is established with classical small-probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early warning method for engineering safety.
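
    The published approach fits a t-copula to the joint distribution of the monitoring series; as a rough stand-in for the final small-probability step, the sketch below estimates a joint exceedance probability for several measuring points by Monte Carlo sampling from a heavy-tailed multivariate t model. The correlation matrix, degrees of freedom and thresholds are assumed values, not fitted ones.

```python
# Schematic: Monte-Carlo estimate of the joint probability that several
# measuring points simultaneously exceed their warning thresholds, using a
# heavy-tailed (multivariate t) joint model with assumed parameters.
import numpy as np

rng = np.random.default_rng(42)
corr = np.array([[1.0, 0.7, 0.5],
                 [0.7, 1.0, 0.6],
                 [0.5, 0.6, 1.0]])       # assumed correlation between points
dof = 5                                   # assumed degrees of freedom
thresholds = np.array([2.5, 2.0, 2.2])    # standardized warning levels

n = 1_000_000
z = rng.multivariate_normal(np.zeros(3), corr, size=n)
chi2 = rng.chisquare(dof, size=n)
t_samples = z * np.sqrt(dof / chi2)[:, None]   # multivariate t samples

joint_exceed = np.mean(np.all(t_samples > thresholds, axis=1))
print(f"estimated joint exceedance probability: {joint_exceed:.2e}")
```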

  12. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  13. MULTIPLE CRITERIA METHODS WITH FOCUS ON ANALYTIC HIERARCHY PROCESS AND GROUP DECISION MAKING

    Directory of Open Access Journals (Sweden)

    Lidija Zadnik-Stirn

    2010-12-01

    Managing natural resources is a group multiple criteria decision making problem. In this paper the analytic hierarchy process is the chosen method for handling natural resource problems. The single-decision-maker problem is discussed, and three methods (the eigenvector method, the data envelopment analysis method, and the logarithmic least squares method) are presented for the derivation of the priority vector. Further, the group analytic hierarchy process is discussed, and six methods for the aggregation of individual judgments or priorities (the weighted arithmetic mean method, the weighted geometric mean method, and four methods based on data envelopment analysis) are compared. A case study on land use in Slovenia is presented. The conclusions review consistency, sensitivity analyses, and some future directions of research.
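
    As a small, self-contained illustration of two of the methods named above (all judgment values are invented): one decision maker's priority vector is taken as the principal eigenvector of a pairwise comparison matrix, and a group judgment matrix is formed as the element-wise geometric mean of the individual matrices.

```python
# Sketch: AHP priority vector via the principal eigenvector, and aggregation
# of individual judgments by the element-wise geometric mean.
import numpy as np

def priority_vector(A):
    """Principal eigenvector of a pairwise comparison matrix, normalized."""
    w, v = np.linalg.eig(A)
    p = np.real(v[:, np.argmax(np.real(w))])
    return p / p.sum()

# Two decision makers' pairwise comparison matrices (illustrative, 3 criteria).
A1 = np.array([[1.0, 3.0, 5.0],
               [1/3., 1.0, 2.0],
               [1/5., 1/2., 1.0]])
A2 = np.array([[1.0, 2.0, 4.0],
               [1/2., 1.0, 3.0],
               [1/4., 1/3., 1.0]])

group = np.exp((np.log(A1) + np.log(A2)) / 2.0)   # element-wise geometric mean
print("DM1 priorities  :", np.round(priority_vector(A1), 3))
print("DM2 priorities  :", np.round(priority_vector(A2), 3))
print("group priorities:", np.round(priority_vector(group), 3))
```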

  14. Multiple constant multiplication optimizations for field programmable gate arrays

    CERN Document Server

    Kumm, Martin

    2016-01-01

    This work covers field programmable gate array (FPGA)-specific optimizations of circuits computing the multiplication of a variable by several constants, commonly denoted as multiple constant multiplication (MCM). These optimizations focus on low resource usage but high performance. They comprise the use of fast carry-chains in adder-based constant multiplications including ternary (3-input) adders as well as the integration of look-up table-based constant multipliers and embedded multipliers to get the optimal mapping to modern FPGAs. The proposed methods can be used for the efficient implementation of digital filters, discrete transforms and many other circuits in the domain of digital signal processing, communication and image processing. Contents Heuristic and ILP-Based Optimal Solutions for the Pipelined Multiple Constant Multiplication Problem Methods to Integrate Embedded Multipliers, LUT-Based Constant Multipliers and Ternary (3-Input) Adders An Optimized Multiple Constant Multiplication Architecture ...
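
    The shift-and-add idea behind MCM can be illustrated in a few lines; this is a software model only, not an FPGA mapping, and the two constants are chosen arbitrarily: each constant multiplication is decomposed into shifts and additions, and an intermediate term is reused across constants.

```python
# Software illustration of multiple constant multiplication (MCM):
# multiply one input x by several constants using only shifts and adds,
# reusing a shared intermediate term.
def mcm_7_and_23(x):
    seven_x = (x << 3) - x            # shared term: 7*x = 8*x - x
    y7 = seven_x                      # 7*x
    y23 = (x << 4) + seven_x          # 23*x = 16*x + 7*x (reuses seven_x)
    return y7, y23

x = 13
assert mcm_7_and_23(x) == (7 * x, 23 * x)
print(mcm_7_and_23(x))
```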

  15. Inference of Tumor Phylogenies with Improved Somatic Mutation Discovery

    KAUST Repository

    Salari, Raheleh; Saleh, Syed Shayon; Kashef-Haghighi, Dorna; Khavari, David; Newburger, Daniel E.; West, Robert B.; Sidow, Arend; Batzoglou, Serafim

    2013-01-01

    multiple, genetically related tumors, current methods do not exploit available phylogenetic information to improve the accuracy of their variant calls. Here, we present a novel algorithm that uses somatic single nucleotide variations (SNVs) in multiple

  16. Training Methods to Improve Evidence-Based Medicine Skills

    Directory of Open Access Journals (Sweden)

    Filiz Ozyigit

    2010-06-01

    Evidence based medicine (EBM) is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. It is estimated that only 15% of medical interventions are evidence-based. Increasing demand, new technological developments, malpractice legislation and the very rapid increase in knowledge and knowledge sources push physicians towards EBM, but at the same time increase their workload by giving them the responsibility to improve their skills. Clinical appraisal is needed all the more as the number of clinical trials and observational studies increases. However, many of the physicians who are in the front line of patient care do not use this growing body of evidence. There are several examples of different training methods intended to improve physicians' skills in evidence based practice. Many training methods to improve EBM skills exist, and such training might be given during medical school, during residency or as continuous training for practitioners in the field. It is important to discuss these different training methods in our country as well and to encourage dissemination of feasible and effective methods. [TAF Prev Med Bull 2010; 9(3): 245-254]

  17. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    Science.gov (United States)

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  18. Multiple-source multiple-harmonic active vibration control of variable section cylindrical structures: A numerical study

    Science.gov (United States)

    Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu

    2016-12-01

    Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of the noise multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with a feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. It can be seen from the numerical studies that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; and (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations follow a procedure similar to real-life control and can be easily extended to a physical model platform.

  19. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China

    Directory of Open Access Journals (Sweden)

    Xiaoyan Yang

    2018-04-01

    The Advanced Spaceborne Thermal-Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land-use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land-use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.
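
    A hedged sketch of the simple (per land-use) linear regression calibration: for each land-use class, a linear model maps the GDEM elevation at reference points to the measured elevation, and the fitted model then corrects GDEM values within that class. The arrays below are synthetic placeholders, not the Lianyungang data.

```python
# Sketch: land use-specific linear regression calibration of DEM elevations.
# gdem_z: DEM elevation at reference points, ref_z: measured elevation,
# land_use: land-use class label at each point (all placeholders).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
land_use = rng.choice(["wheat", "wood", "bare"], size=300)
ref_z = rng.uniform(0, 100, size=300)
bias = {"wheat": 1.0, "wood": 8.0, "bare": 3.0}      # synthetic class-dependent bias
gdem_z = ref_z + np.array([bias[c] for c in land_use]) + rng.normal(0, 2, 300)

models = {}
for cls in np.unique(land_use):
    m = land_use == cls
    models[cls] = LinearRegression().fit(gdem_z[m].reshape(-1, 1), ref_z[m])

def calibrate(z, cls):
    """Correct a GDEM elevation using the model fitted for its land-use class."""
    return float(models[cls].predict(np.array([[z]]))[0])

print(calibrate(50.0, "wood"))
```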

  20. An Improved Method of Training Overcomplete Dictionary Pair

    Directory of Open Access Journals (Sweden)

    Zhuozheng Wang

    2014-01-01

    Training an overcomplete dictionary pair is a critical step of mainstream superresolution methods. Because dictionary training has high time complexity and is susceptible to image corruption, an improved method based on the lifting wavelet transform and robust principal component analysis is reported. The high-frequency components of the example images are estimated through the wavelet coefficients of a three-level lifting wavelet transform decomposition. Sparse coefficients are similar across multiframe images; accordingly, the inexact augmented Lagrange multiplier method is employed to perform robust principal component analysis while imposing global constraints. Experiments reveal that the new algorithm not only reduces the time complexity while preserving clarity but also improves robustness to corrupted example images.

  1. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
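
    A minimal example of the first smoothing technique listed (LOESS) used as a sensitivity indicator: the fraction of output variance explained by a lowess fit of the output against one input is taken as that input's importance. This mirrors the spirit, not the detail, of the stepwise procedures described, and the test function is invented.

```python
# Sketch: nonparametric (LOESS) sensitivity indicator -- variance of the
# model output explained by a lowess fit against each input separately.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)
x1 = rng.uniform(-1, 1, 500)
x2 = rng.uniform(-1, 1, 500)
y = np.sin(3 * x1) + 0.1 * x2 + rng.normal(0, 0.1, 500)   # strongly nonlinear in x1

def lowess_r2(x, y):
    fit = lowess(y, x, frac=0.3, return_sorted=False)     # fitted values at x
    return 1.0 - np.var(y - fit) / np.var(y)

print("x1 sensitivity:", round(lowess_r2(x1, y), 3))       # large
print("x2 sensitivity:", round(lowess_r2(x2, y), 3))       # small
```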

  2. Distributed Cooperative Search Control Method of Multiple UAVs for Moving Target

    Directory of Open Access Journals (Sweden)

    Chang-jian Ru

    2015-01-01

    To reduce the impact of the uncertainties caused by unknown motion parameters on the search plan for moving targets and to improve the efficiency of UAV searching, a novel distributed multi-UAV cooperative search control method for moving targets is proposed in this paper. Based on the detection results of onboard sensors, the target probability map is updated using Bayesian theory. A Gaussian target transition probability density function is introduced to calculate the predicted probability of moving-target existence, so that the target probability map can be further updated in real time. A performance index function combining target cost, environment cost, and cooperative cost is constructed, and the cooperative search problem is transformed into a central optimization problem. To improve computational efficiency, a distributed model predictive control method is presented, from which the control command of each UAV can be obtained. The simulation results verify that the proposed method can better avoid blind searching by the UAVs and effectively improve the overall efficiency of the team.
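
    A compact sketch of the probability-map bookkeeping described above (not the full cooperative controller): a Bayes update of a grid map after a UAV searches one cell without detecting the target, followed by a Gaussian blur standing in for the target transition (motion prediction) step. Grid size, sensor detection probability and blur width are assumed values.

```python
# Sketch: target probability map update for cooperative search.
# 1) Bayes update after a UAV searches a cell and does not detect the target.
# 2) Gaussian prediction step to account for target motion between updates.
import numpy as np
from scipy.ndimage import gaussian_filter

p = np.full((50, 50), 1.0 / 2500.0)      # uniform prior over the search grid
p_detect = 0.9                            # sensor detection probability

def negative_observation(p, cell, p_detect):
    """Posterior after searching `cell` without detecting the target."""
    likelihood = np.ones_like(p)
    likelihood[cell] = 1.0 - p_detect     # P(no detection | target in cell)
    post = likelihood * p
    return post / post.sum()

def predict(p, sigma=1.0):
    """Diffuse the map with a Gaussian kernel to model target motion."""
    q = gaussian_filter(p, sigma=sigma, mode="constant")
    return q / q.sum()

p = negative_observation(p, (10, 12), p_detect)
p = predict(p, sigma=1.0)
print(p[10, 12], p.max())
```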

  3. Development of parallel implementation of adaptive numerical methods with industrial applications in fluid mechanics

    International Nuclear Information System (INIS)

    Laucoin, E.

    2008-10-01

    Numerical resolution of partial differential equations can be made reliable and efficient through the use of adaptive numerical methods. We present here the work we have done for the design, the implementation and the validation of such a method within an industrial software platform with applications in thermohydraulics. From the geometric point of view, this method can deal both with mesh refinement and mesh coarsening, while ensuring the quality of the mesh cells. Numerically, we use the mortar elements formalism in order to extend the Finite Volumes-Elements method implemented in the Trio-U platform and to deal with the non-conforming meshes arising from the adaptation procedure. Finally, we present an implementation of this method using concepts from domain decomposition methods for ensuring its efficiency while running in a parallel execution context. (author)

  4. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar

    Directory of Open Access Journals (Sweden)

    Shouguo Yang

    2015-12-01

    A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimates of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix, and by requiring only a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmit and receive geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results.

  5. Using Module Analysis for Multiple Choice Responses: A New Method Applied to Force Concept Inventory Data

    Science.gov (United States)

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-01-01

    We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…

  6. An Improved Local Gradient Method for Sea Surface Wind Direction Retrieval from SAR Imagery

    Directory of Open Access Journals (Sweden)

    Lizhang Zhou

    2017-06-01

    Sea surface wind affects the fluxes of energy, mass and momentum between the atmosphere and ocean, and therefore regional and global weather and climate. With various satellite microwave sensors, sea surface wind can be measured with large spatial coverage in almost all weather conditions, day or night. Like any other remote sensing measurement, sea surface wind measurement is indirect. Therefore, it is important to develop appropriate wind speed and direction retrieval models for different types of microwave instruments. In this paper, a new sea surface wind direction retrieval method from synthetic aperture radar (SAR) imagery is developed. In the method, local gradients are computed in the frequency domain by combining the operations of smoothing and computing local gradients into one step, to simplify the process and avoid the difference approximation. This improved local gradients (ILG) method is compared with the traditional two-dimensional fast Fourier transform (2D FFT) method and the local gradients (LG) method, using interpolated wind directions from the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data and the Cross-Calibrated Multi-Platform (CCMP) wind vector product. The sensitivities to salt-and-pepper noise, additive noise and multiplicative noise are analyzed. The ILG method shows better performance in retrieving wind directions than the other two methods.
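
    The "smoothing and differentiation in one step" idea can be sketched with a derivative-of-Gaussian filter applied in the frequency domain. This is a generic illustration on synthetic striped data, not the authors' exact operator; a real wind-direction retrieval would add the scaling and 180-degree ambiguity removal they describe.

```python
# Sketch: local gradients computed in the frequency domain by combining
# Gaussian smoothing and differentiation into one multiplicative filter.
import numpy as np

def frequency_domain_gradients(img, sigma=2.0):
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    gauss = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    F = np.fft.fft2(img)
    gx = np.real(np.fft.ifft2(F * gauss * (2j * np.pi * fx)))   # smoothed d/dx
    gy = np.real(np.fft.ifft2(F * gauss * (2j * np.pi * fy)))   # smoothed d/dy
    return gx, gy

# Synthetic image with stripes oriented so the gradient direction is ~30 deg.
y, x = np.mgrid[0:256, 0:256]
img = np.sin(2 * np.pi * (x * np.cos(np.radians(30)) + y * np.sin(np.radians(30))) / 16.0)
gx, gy = frequency_domain_gradients(img)
angle = np.degrees(np.arctan2(gy, gx)) % 180.0   # orientation, 180-degree ambiguity
print(round(float(np.median(angle)), 1))         # ~30 degrees
```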

  7. Multiple-purpose electrical undertaking; Emprendimiento electrico de prestacion multiple

    Energy Technology Data Exchange (ETDEWEB)

    Assennato, H. [Electrica de Azul Ltda., Buenos Aires (Argentina)

    1986-12-31

    This paper shows the multiple-purpose aspects of electrification projects in rural and isolated areas. Beyond the supply of electric power, the multiple aspects involved in the electrification process may include improvement of the quality of life, irrigation and rural mechanization. 4 figs., 6 tabs., 4 refs.

  8. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    A method for improving the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with the multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, additional motion constraints, or the complicated procedures of traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability was studied experimentally. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
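
    A toy sketch of the Kalman-filter part of the fusion: a static scalar position estimate refined from repeated noisy sensor readings. The real system fuses full pose (position from the visual sensor plus orientation from the angle sensor); the noise values below are assumptions for illustration only.

```python
# Toy Kalman-filter fusion: refine a (static) position estimate from a
# sequence of noisy sensor measurements.
import numpy as np

rng = np.random.default_rng(5)
true_pos = 1.234                                  # metres (synthetic)
meas = true_pos + rng.normal(0, 0.02, size=20)    # noisy sensor readings

x, P = 0.0, 1.0                      # initial state estimate and its variance
Q, R = 1e-8, 0.02 ** 2               # process and measurement noise variances

for z in meas:
    P = P + Q                        # predict (static state, so x is unchanged)
    K = P / (P + R)                  # Kalman gain
    x = x + K * (z - x)              # update with the new measurement
    P = (1.0 - K) * P

print(f"fused estimate: {x:.4f} m (true {true_pos} m)")
```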

  9. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, Dongliang

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required. Numerical tests on synthetic data for the Sigsbee2B model and field data from Gulf of Mexico show that MLSRTM can improve the image quality by removing artifacts, balancing amplitudes, and suppressing crosstalk compared to standard migration of the free-surface multiples. The potential liability of this method is that multiples require several roundtrips between the reflector and the free surface, so that high frequencies in the multiples are attenuated compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.

  10. Improving Students' Creative Thinking and Achievement through the Implementation of Multiple Intelligence Approach with Mind Mapping

    Science.gov (United States)

    Widiana, I. Wayan; Jampel, I. Nyoman

    2016-01-01

    This classroom action research aimed to improve the students' creative thinking and achievement in learning science. It was conducted through the implementation of the multiple intelligences with mind mapping approach and by describing the students' responses. The subjects of this research were the fifth grade students of SD 8 Tianyar Barat, Kubu, and…

  11. Comparison of Deep Learning With Multiple Machine Learning Methods and Metrics Using Diverse Drug Discovery Data Sets.

    Science.gov (United States)

    Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean

    2017-12-04

    Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn was ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need for assessing deep learning further
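
    The metric battery named above can be collected with scikit-learn; the sketch below is a generic example on synthetic data (nothing to do with the actual ADME/Tox or pathogen data sets), showing how AUC, F1, Cohen's kappa and the Matthews correlation coefficient are gathered for one model.

```python
# Sketch: collect the evaluation metrics named above for one classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (roc_auc_score, f1_score, cohen_kappa_score,
                             matthews_corrcoef)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=64, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)

print("AUC   :", round(roc_auc_score(y_te, proba), 3))
print("F1    :", round(f1_score(y_te, pred), 3))
print("kappa :", round(cohen_kappa_score(y_te, pred), 3))
print("MCC   :", round(matthews_corrcoef(y_te, pred), 3))
```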

  12. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  13. Dynamical properties of the growing continuum using multiple-scale method

    Directory of Open Access Journals (Sweden)

    Hynčík L.

    2008-12-01

    The theory of growth and remodeling is applied to a 1D continuum. This can be regarded, e.g., as a model of a muscle fibre or a piezo-electric stack. A hyperelastic material described by the free-energy potential suggested by Fung is used, and the change of stiffness is taken into account. The corresponding equations define a dynamical system with two degrees of freedom. Its stability and the properties of its bifurcations are studied using the multiple-scale method. The conditions under which a degenerate Hopf bifurcation occurs are shown.

  14. A versatile method for confirmatory evaluation of the effects of a covariate in multiple models

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans

    2012-01-01

    Modern epidemiology often requires testing of the effect of a covariate on multiple end points from the same study. However, popular state-of-the-art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits ... to provide a fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. The methodology proposed is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on the development of childhood asthma.

  15. Improvement of methods for large scale sequencing; application to human Xq28

    Energy Technology Data Exchange (ETDEWEB)

    Gibbs, R.A.; Andersson, B.; Wentland, M.A. [Baylor College of Medicine, Houston, TX (United States)] [and others

    1994-09-01

    Sequencing of a one-megabase region of Xq28, spanning the FRAXA and IDS loci, has been undertaken in order to investigate the practicality of the shotgun approach for large scale sequencing and as a platform to develop improved methods. The efficiency of several steps in the shotgun sequencing strategy has been increased using PCR-based approaches. An improved method for preparation of M13 libraries has been developed. This protocol combines a previously described adaptor-based protocol with the uracil DNA glycosylase (UDG)-cloning procedure. The efficiency of this procedure has been found to be up to 100-fold higher than that of previously used protocols. In addition, the novel protocol is more reliable and thus easy to establish in a laboratory. The method has also been adapted for the simultaneous shotgun sequencing of multiple short fragments by concentrating them before library construction. This protocol is suitable for rapid characterization of cDNA clones. A library was constructed from 15 PCR-amplified and concentrated human cDNA inserts; the insert sequences could easily be identified as separate contigs during the assembly process, and the sequence coverage was even along each fragment. Using this strategy, the fine structures of the FRAXA and IDS loci have been revealed and several EST homologies indicating novel expressed sequences have been identified. Use of PCR to close repetitive regions that are difficult to clone was tested by determination of the sequence of a cosmid mapping DXS455 in Xq28, containing a polymorphic VNTR. The region containing the VNTR was not represented in the shotgun library, but by designing PCR primers in the sequences flanking the gap and by cloning and sequencing the PCR product, the fine structure of the VNTR has been determined. It was found to be an AT-rich VNTR with a repeated 25-mer at the center.

  16. Are students' impressions of improved learning through active learning methods reflected by improved test scores?

    Science.gov (United States)

    Everly, Marcee C

    2013-02-01

    To report the transformation from lecture to more active learning methods in a maternity nursing course and to evaluate whether student perception of improved learning through active-learning methods is supported by improved test scores. The process of transforming a course into an active-learning model of teaching is described. A voluntary mid-semester survey for student acceptance of the new teaching method was conducted. Course examination results, from both a standardized exam and a cumulative final exam, among students who received lecture in the classroom and students who had active learning activities in the classroom were compared. Active learning activities were very acceptable to students. The majority of students reported learning more from having active-learning activities in the classroom rather than lecture-only and this belief was supported by improved test scores. Students who had active learning activities in the classroom scored significantly higher on a standardized assessment test than students who received lecture only. The findings support the use of student reflection to evaluate the effectiveness of active-learning methods and help validate the use of student reflection of improved learning in other research projects. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Multiple strategies to improve sensitivity, speed and robustness of isothermal nucleic acid amplification for rapid pathogen detection

    Directory of Open Access Journals (Sweden)

    Lemieux Bertrand

    2011-05-01

    Background: In the past decades, the rapid growth of molecular diagnostics (based on either traditional PCR or isothermal amplification technologies) has met the demand for fast and accurate testing. Although isothermal amplification technologies have the advantage of low-cost instrument requirements, further improvement in sensitivity, speed and robustness is a prerequisite for applications in rapid pathogen detection, especially in point-of-care diagnostics. Here, we describe and explore several strategies to improve one of the isothermal technologies, helicase-dependent amplification (HDA). Results: Multiple strategies were pursued to improve the overall performance of the isothermal amplification: restriction endonuclease-mediated DNA helicase homing, macromolecular crowding agents, and optimization of the reaction enzyme mix. The effect of combining all strategies was compared with that of the individual strategies. With all of the above methods, we are able to detect 50 copies of Neisseria gonorrhoeae DNA in just 20 minutes of amplification using a nearly instrument-free detection platform (BESt™ cassette). Conclusions: The strategies addressed in this proof-of-concept study are independent of expensive equipment and are not limited to particular primers, targets or detection formats; however, they make a large difference in assay performance. Some of them can be adjusted and applied to other formats of nucleic acid amplification. Furthermore, strategies that improve in vitro assays by maximally simulating natural conditions may be useful in the general field of developing molecular assays. A new fast molecular assay for Neisseria gonorrhoeae has also been developed, which has great potential to be used in point-of-care diagnostics.

  18. Application of multiplicative array techniques for multibeam sounder systems

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    modification in terms of additional computation or hardware for improved array gain. The present work is devoted to the study of a better beamforming method, i.e., a multiplicative array technique with some modification proposed by Brown and Rowland...

  19. An Improved Pansharpening Method for Misaligned Panchromatic and Multispectral Data.

    Science.gov (United States)

    Li, Hui; Jing, Linhai; Tang, Yunwei; Ding, Haifeng

    2018-02-11

    Numerous pansharpening methods have been proposed in recent decades for fusing low-spatial-resolution multispectral (MS) images with high-spatial-resolution (HSR) panchromatic (PAN) bands to produce fused HSR MS images, which are widely used in various remote sensing tasks. The effect of misregistration between MS and PAN bands on the quality of fused products has gained much attention in recent years. An improved method for misaligned MS and PAN imagery is proposed through two improvements made to a previously published method named RMI (reduce misalignment impact). The performance of the proposed method was assessed by comparison with some outstanding fusion methods, such as adaptive Gram-Schmidt and the generalized Laplacian pyramid. Experimental results show that the improved version can reduce spectral distortions of fused dark pixels and sharpen boundaries between different image objects, while obtaining quality indexes similar to those of the original RMI method. In addition, the proposed method was evaluated with respect to its sensitivity to misalignments between MS and PAN bands, and it was confirmed to be more robust to such misalignments than the other methods.

  20. A diet based on multiple functional concepts improves cardiometabolic risk parameters in healthy subjects

    Directory of Open Access Journals (Sweden)

    Tovar Juscelino

    2012-04-01

    Full Text Available Abstract Background Different foods can modulate cardiometabolic risk factors in persons already affected by metabolic alterations. The objective of this study was to assess, in healthy overweight individuals, the impact of a diet combining multiple functional concepts on risk markers associated with cardiometabolic diseases (CMD). Methods Forty-four healthy women and men (50-73 y.o., BMI 25-33, fasting glycemia ≤ 6.1 mmol/L) participated in a randomized crossover intervention comparing a multifunctional (active) diet (AD) with a control diet (CD) devoid of the "active" components. Each diet was consumed during 4 wk with a 4 wk washout period. AD included the following functional concepts: low glycemic impact meals, antioxidant-rich foods, oily fish as a source of long-chain omega-3 fatty acids, viscous dietary fibers, soybean and whole barley kernel products, almonds, stanols and a probiotic strain (Lactobacillus plantarum Heal19/DSM15313). Results Although the aim was to improve metabolic markers without promoting body weight loss, minor weight reductions were observed with both diets (0.9-1.8 ± 0.2%). AD significantly improved several risk markers, including LDL/HDL (-27 ± 2%), HbA1c (-2 ± 0.4%; P = 0.0013), hs-CRP (-29 ± 9%; P = 0.0497) and systolic blood pressure (-8 ± 1%; P = 0.0123); the differences remained significant after adjustment for weight change. After AD, the Framingham cardiovascular risk estimate was 30 ± 4% lower. Conclusion The improved biomarker levels recorded in healthy individuals following the multifunctional regime suggest preventive potential of this dietary approach against CMD.

  1. Development and Validation of Improved Method for Fingerprint ...

    African Journals Online (AJOL)

    Purpose: To develop and validate an improved method by capillary zone electrophoresis with photodiode array detection for the fingerprint analysis of Ligusticum chuanxiong Hort. (Rhizoma Chuanxiong). Methods: The optimum high performance capillary electrophoresis (HPCE) conditions were 30 mM borax containing 5 ...

  2. Improving the clinical correlation of multiple sclerosis black hole volume change by paired-scan analysis.

    Science.gov (United States)

    Tam, Roger C; Traboulsee, Anthony; Riddehough, Andrew; Li, David K B

    2012-01-01

    The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately and most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume and variability in the T2w lesion segmentation. A paired analysis approach is proposed herein that uses registration to equalize partial volume and lesion mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation to a clinical variable (MS functional composite) as the primary outcome measure. The comparison is done at nine different levels of intensity as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes.

  3. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    Directory of Open Access Journals (Sweden)

    Min-Kyu Kim

    2015-12-01

    Full Text Available This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit resolution after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
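
    The one-over-square-root-of-N noise scaling invoked above can be illustrated with a small simulation. The sketch below assumes ideal white Gaussian read noise and uses the reported 848.3 μV single-sample figure only as a convenient starting value; it illustrates the statistics of averaging N samples per pixel and is not a model of the actual SAR ADC.

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 0.5           # arbitrary pixel output, volts (assumed)
read_noise = 848.3e-6  # single-sample noise, taken from the reported 848.3 uV figure
n_pixels = 100_000

for n_samples in (1, 4, 9, 16):
    # Each pixel is sampled n_samples times and the results are averaged,
    # mimicking a multiple-sampling (oversampling) scheme.
    samples = signal + read_noise * rng.standard_normal((n_pixels, n_samples))
    averaged = samples.mean(axis=1)
    print(f"{n_samples:2d} samplings -> noise {averaged.std()*1e6:7.1f} uV "
          f"(expected {read_noise/np.sqrt(n_samples)*1e6:7.1f} uV)")
```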

  4. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    Science.gov (United States)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.

  5. Improved methods for operating public transportation services.

    Science.gov (United States)

    2013-03-01

    In this joint project, West Virginia University and the University of Maryland collaborated in developing improved methods for analyzing and managing public transportation services. Transit travel time data were collected using GPS tracking services ...

  6. An improved null model for assessing the net effects of multiple stressors on communities.

    Science.gov (United States)

    Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D

    2018-01-01

    Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species to the community level as does our
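
    The distinction between a community-property null and the proposed compositional null can be made concrete with a toy simulation. The sketch below is a hypothetical illustration, not the authors' code: two stressors act purely additively on individual species biomasses (with a zero floor standing in for local extinction), and the two nulls are then compared at the community level.

```python
import numpy as np

rng = np.random.default_rng(1)
n_species = 50
control = rng.lognormal(mean=1.0, sigma=0.8, size=n_species)   # control biomasses

# Species-level effects of each stressor; by construction the two stressors
# combine additively at the species level (no true interaction).
effect_a = rng.normal(-0.3, 0.4, n_species) * control
effect_b = rng.normal(-0.2, 0.4, n_species) * control

single_a = np.clip(control + effect_a, 0.0, None)              # stressor A alone
single_b = np.clip(control + effect_b, 0.0, None)              # stressor B alone
both = np.clip(control + effect_a + effect_b, 0.0, None)       # both stressors

# Community-property additive null: add the single-stressor changes in total biomass.
property_null = control.sum() + (single_a.sum() - control.sum()) \
                              + (single_b.sum() - control.sum())

# Compositional null: predict each species additively first, then aggregate.
compositional_null = np.clip(control + effect_a + effect_b, 0.0, None).sum()

print(f"observed biomass under both stressors : {both.sum():9.2f}")
print(f"compositional null                    : {compositional_null:9.2f}")
print(f"community-property additive null      : {property_null:9.2f}")
```

    Because the zero floor makes total biomass respond nonlinearly to additive species-level effects, the community-property null departs from the observed value, whereas the compositional null reproduces it and would not flag a spurious synergism or antagonism.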

  7. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to computing the reliability of a system. Because the states in which failures occur are significant elements for accurate reliability computation, a Markovian reliability assessment method is designed. Due to the drawbacks of the Markovian model for steady-state reliability computation and of the neural network for the initial training pattern, an integrated approach called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, to illustrate the managerial implications, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian based reliability assessment method. • Managerial implication is shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
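
    As a minimal illustration of the Markovian side of such an approach, the sketch below computes steady-state probabilities and availability for a small continuous-time Markov model of a repairable unit. The three states and the transition rates are hypothetical; this is not the paper's AGV model or its neural network component.

```python
import numpy as np

# States: 0 = operating, 1 = degraded, 2 = failed. Rates in 1/h (assumed).
lam1, lam2, mu1, mu2 = 0.02, 0.05, 0.5, 0.2

Q = np.array([[-lam1,          lam1,        0.0 ],
              [  mu1,  -(mu1 + lam2),       lam2],
              [  mu2,           0.0,       -mu2 ]])

# Steady-state distribution pi solves pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]        # any non-failed state counts as available
print("steady-state probabilities:", np.round(pi, 4))
print("steady-state availability: ", round(availability, 4))
```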

  8. Accuracy Improvement of the Method of Multiple Scales for Nonlinear Vibration Analyses of Continuous Systems with Quadratic and Cubic Nonlinearities

    Directory of Open Access Journals (Sweden)

    Akira Abe

    2010-01-01

    where the driving and natural frequencies define the primary resonance condition under study. The application of Galerkin's procedure to the equation of motion yields nonlinear ordinary differential equations with quadratic and cubic nonlinear terms. The steady-state responses are obtained by using the discretization approach of the MMS, in which the definition of the detuning parameter, expressing the relationship between the natural frequency and the driving frequency, is changed in an attempt to improve the accuracy of the solutions. The validity of the solutions is discussed by comparing them with solutions of the direct approach of the MMS and of the finite difference method.

  9. Least squares reverse time migration of controlled order multiples

    Science.gov (United States)

    Liu, Y.

    2016-12-01

    Imaging using the reverse time migration of multiples generates inherent crosstalk artifacts due to the interference among different orders of multiples. Traditionally, least-squares fitting has been used to address this issue by seeking the best objective function to measure the amplitude differences between the predicted and observed data. We have developed an alternative objective function that decomposes multiples into different orders and minimizes the difference between Born-modeling-predicted multiples and specific-order multiples from the observed data, in order to attenuate the crosstalk. This method is denoted as the least-squares reverse time migration of controlled-order multiples (LSRTM-CM). Our numerical examples demonstrate that LSRTM-CM can significantly improve image quality compared with reverse time migration of multiples and least-squares reverse time migration of multiples. Acknowledgments: This research was funded by the National Natural Science Foundation of China (Grant Nos. 41430321 and 41374138).

  10. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators.

    Science.gov (United States)

    Yin, Kedong; Yang, Benshuo; Li, Xuemei

    2018-01-24

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems where decision makers represent their evaluation of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there are interrelationships among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in the trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyse the impact of different parameters on the decision-making results.
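
    The partitioned Bonferroni mean underlying both operators can be sketched for plain real-valued scores. The example below implements the standard crisp PBM, not the trapezoidal fuzzy two-dimensional linguistic versions developed in the paper; the attribute names, scores and partition structure are hypothetical.

```python
import numpy as np

def partitioned_bonferroni_mean(values, partitions, p=1, q=1):
    """Crisp partitioned Bonferroni mean (PBM).

    values     : dict attribute -> real score in [0, 1]
    partitions : list of lists of attribute names; attributes in the same
                 partition are assumed to be interrelated (each of size >= 2).
    """
    terms = []
    for part in partitions:
        a = np.array([values[k] for k in part], dtype=float)
        n = len(a)
        # Bonferroni mean within one partition.
        s = sum(a[i]**p * (np.delete(a, i)**q).sum() / (n - 1) for i in range(n))
        terms.append((s / n) ** (1.0 / (p + q)))
    # Average the partition-level values.
    return float(np.mean(terms))

# Hypothetical alternative evaluated on five attributes split into two partitions.
scores = {"cost": 0.6, "quality": 0.8, "delivery": 0.7, "service": 0.5, "risk": 0.9}
partitions = [["cost", "quality", "delivery"], ["service", "risk"]]
print(round(partitioned_bonferroni_mean(scores, partitions, p=1, q=1), 4))
```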

  11. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has made it possible to gain important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, creating the need for generic methods that take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with regard to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel Self-Organizing Maps with both single and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method for improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
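
    A much-simplified version of the consensus meta-kernel idea can be sketched as follows: compute one kernel per dataset, centre and trace-normalise them, average them into a meta-kernel, and run a kernel PCA on the result. This is an illustration on synthetic data, not the mixKernel implementation, which offers more refined kernel combinations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
# Two synthetic "omics" tables measured on the same n samples.
X1 = rng.standard_normal((n, 100))
X2 = rng.standard_normal((n, 30))

def rbf_kernel(X, gamma=None):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    if gamma is None:
        gamma = 1.0 / X.shape[1]
    return np.exp(-gamma * sq)

def center(K):
    m = K.shape[0]
    J = np.eye(m) - np.ones((m, m)) / m
    return J @ K @ J

# Consensus meta-kernel: here simply the average of the centred,
# trace-normalised kernels.
kernels = [center(rbf_kernel(X)) for X in (X1, X2)]
kernels = [K / np.trace(K) for K in kernels]
K_meta = sum(kernels) / len(kernels)

# Kernel PCA on the meta-kernel: eigendecomposition, keep the top components.
eigval, eigvec = np.linalg.eigh(K_meta)
order = np.argsort(eigval)[::-1]
components = eigvec[:, order[:2]] * np.sqrt(np.clip(eigval[order[:2]], 0, None))
print("sample coordinates on the first two kernel PCs:", components.shape)
```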

  12. An engineering method to estimate the junction temperatures of light-emitting diodes in multiple LED application

    International Nuclear Information System (INIS)

    Fu, Xing; Hu, Run; Luo, Xiaobing

    2014-01-01

    Acquiring the junction temperature of a light-emitting diode (LED) is essential for performance evaluation, but it is difficult to obtain in multiple-LED applications. In this paper, an engineering method is presented to estimate the junction temperatures of LEDs in multiple-LED applications. This method is mainly based on an analytical model, and it can be easily applied with some simple measurements. Simulations and experiments were conducted to prove the feasibility of the method; the deviations between the results obtained by the present method and those from simulation and experiment are less than 2% and 3%, respectively. In the final part of this study, the engineering method was used to analyze the thermal resistances of a street lamp. The material of the lead frame was found to affect the system thermal resistance most strongly, and the choice of solder material strongly depended on the material of the lead frame.
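
    A minimal junction-temperature estimate of the kind such engineering methods build on is a series thermal-resistance chain from junction to ambient. The sketch below uses hypothetical resistance and power values and ignores the thermal coupling between neighbouring LEDs that a multiple-LED model must account for.

```python
# Minimal thermal-resistance-chain estimate of LED junction temperature
# (illustrative only; a multiple-LED model additionally accounts for
# coupling between neighbouring LEDs on the same board).
t_ambient = 25.0          # ambient temperature, deg C (assumed)
p_electrical = 1.0        # electrical input power per LED, W (assumed)
eta_radiant = 0.35        # fraction of power leaving as light (assumed)
p_heat = p_electrical * (1.0 - eta_radiant)

# Series thermal resistances from junction to ambient, K/W (assumed values).
r_junction_to_solder = 8.0
r_solder_to_board = 4.0
r_board_to_ambient = 30.0

t_junction = t_ambient + p_heat * (r_junction_to_solder
                                   + r_solder_to_board
                                   + r_board_to_ambient)
print(f"estimated junction temperature: {t_junction:.1f} deg C")
```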

  13. The Green Function cellular method and its relation to multiple scattering theory

    International Nuclear Information System (INIS)

    Butler, W.H.; Zhang, X.G.; Gonis, A.

    1992-01-01

    This paper investigates techniques for solving the wave equation which are based on the idea of obtaining exact local solutions within each potential cell, which are then joined to form a global solution. The authors derive full-potential multiple scattering theory (MST) from the Lippmann-Schwinger equation and show that it, as well as a closely related cellular method, is a technique of this type. This cellular method appears to have all of the advantages of MST and the added advantage of having a secular matrix with only nearest-neighbor interactions. Since this cellular method is easily linearized, one can rigorously reduce the electronic structure calculation to the problem of solving a nearest-neighbor tight-binding problem

  14. Distance-two interpolation for parallel algebraic multigrid

    International Nuclear Information System (INIS)

    Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M

    2007-01-01

    In this paper we study the use of long-distance interpolation methods with the low-complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long-distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and in combination with complexity-reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers
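
    Interpolation truncation, mentioned above as a complexity-reducing measure, can be sketched in a few lines: drop interpolation weights below a fraction of the row maximum and rescale the survivors so the row sum is preserved. The sketch below is illustrative only and is not the hypre/BoomerAMG implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

def truncate_interpolation(P, theta=0.2):
    """Drop interpolation weights smaller than theta * (row max magnitude)
    and rescale the survivors so each row sum is preserved (a common
    complexity-reducing step in parallel AMG; sketch only)."""
    P = csr_matrix(P, copy=True)
    for i in range(P.shape[0]):
        lo, hi = P.indptr[i], P.indptr[i + 1]
        row = P.data[lo:hi]          # view into the CSR data array
        if row.size == 0:
            continue
        old_sum = row.sum()
        keep = np.abs(row) >= theta * np.abs(row).max()
        row[~keep] = 0.0
        new_sum = row.sum()
        if new_sum != 0.0:
            row *= old_sum / new_sum  # preserve the row sum
    P.eliminate_zeros()
    return P

P = csr_matrix(np.array([[0.6, 0.3, 0.05, 0.05],
                         [0.1, 0.8, 0.10, 0.00]]))
print(truncate_interpolation(P, theta=0.2).toarray())
```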

  15. Forward-weighted CADIS method for variance reduction of Monte Carlo calculations of distributions and multiple localized quantities

    International Nuclear Information System (INIS)

    Wagner, J. C.; Blakeman, E. D.; Peplow, D. E.

    2009-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is a variation on the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for some time to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain approximately uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented in the ADVANTG/MCNP framework and has been fully automated within the MAVRIC sequence of SCALE 6. Results of the application of the method to enabling the calculation of dose rates throughout an entire full-scale pressurized-water reactor facility are presented and discussed. (authors)

  16. Methods for meta-analysis of multiple traits using GWAS summary statistics.

    Science.gov (United States)

    Ray, Debashree; Boehnke, Michael

    2018-03-01

    Genome-wide association studies (GWAS) for complex diseases have focused primarily on single-trait analyses for disease status and disease-related quantitative traits. For example, GWAS on risk factors for coronary artery disease analyze genetic associations of plasma lipids such as total cholesterol, LDL-cholesterol, HDL-cholesterol, and triglycerides (TGs) separately. However, traits are often correlated and a joint analysis may yield increased statistical power for association over multiple univariate analyses. Recently several multivariate methods have been proposed that require individual-level data. Here, we develop metaUSAT (where USAT is unified score-based association test), a novel unified association test of a single genetic variant with multiple traits that uses only summary statistics from existing GWAS. Although the existing methods either perform well when most correlated traits are affected by the genetic variant in the same direction or are powerful when only a few of the correlated traits are associated, metaUSAT is designed to be robust to the association structure of correlated traits. metaUSAT does not require individual-level data and can test genetic associations of categorical and/or continuous traits. One can also use metaUSAT to analyze a single trait over multiple studies, appropriately accounting for overlapping samples, if any. metaUSAT provides an approximate asymptotic P-value for association and is computationally efficient for implementation at a genome-wide level. Simulation experiments show that metaUSAT maintains proper type-I error at low error levels. It has similar and sometimes greater power to detect association across a wide array of scenarios compared to existing methods, which are usually powerful for some specific association scenarios only. When applied to plasma lipids summary data from the METSIM and the T2D-GENES studies, metaUSAT detected genome-wide significant loci beyond the ones identified by univariate analyses
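
    The general flavour of a summary-statistic multi-trait test can be sketched with a standard correlated-score combination: the per-trait z-scores for one variant are combined through the inverse of the trait correlation matrix into an omnibus chi-square statistic. The sketch below is this generic test, not metaUSAT itself, and the z-scores and correlation matrix are hypothetical.

```python
import numpy as np
from scipy import stats

def multitrait_chi2(z, R):
    """Omnibus test of one variant against several correlated traits using
    only summary statistics: T = z' R^{-1} z ~ chi2_k under the null.
    (Standard correlated-score combination, not metaUSAT.)"""
    z = np.asarray(z, dtype=float)
    T = z @ np.linalg.solve(R, z)
    return T, stats.chi2.sf(T, df=len(z))

# Hypothetical per-trait z-scores for one SNP and the trait correlation matrix
# (the latter is estimable from genome-wide summary statistics under the null).
z = [2.1, -1.8, 2.5, 0.4]
R = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 1.0, 0.4, 0.2],
              [0.2, 0.4, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])
T, p = multitrait_chi2(z, R)
print(f"T = {T:.2f}, p = {p:.3g}")
```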

  17. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    Science.gov (United States)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.

  18. Meditation as an Adjunct to the Management of Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Adam B. Levin

    2014-01-01

    Full Text Available Background. Multiple sclerosis (MS disease course is known to be adversely affected by several factors including stress. A proposed mechanism for decreasing stress and therefore decreasing MS morbidity and improving quality of life is meditation. This review aims to critically analyse the current literature regarding meditation and MS. Methods. Four major databases were used to search for English language papers published before March 2014 with the terms MS, multiple sclerosis, meditation, and mindfulness. Results. 12 pieces of primary literature fitting the selection criteria were selected: two were randomised controlled studies, four were cohort studies, and six were surveys. The current literature varies in quality; however common positive effects of meditation include improved quality of life (QOL and improved coping skills. Conclusion. All studies suggest possible benefit to the use of meditation as an adjunct to the management of multiple sclerosis. Additional rigorous clinical trials are required to validate the existing findings and determine if meditation has an impact on disease course over time.

  19. An improved dynamic test method for solar collectors

    DEFF Research Database (Denmark)

    Kong, Weiqiang; Wang, Zhifeng; Fan, Jianhua

    2012-01-01

    A comprehensive improvement of the mathematical model for the so-called transfer function method is presented in this study. This improved transfer function method can estimate the traditional solar collector parameters such as the zero-loss coefficient and the heat loss coefficient. Two new collector...... parameters t and mfCf are obtained. t is a time scale parameter which can indicate the heat transfer ability of the solar collector. mfCf can be used to calculate the fluid volume content in the solar collector or to validate the regression process by comparing it to the physical fluid volume content...... for the second-order differential term with 6–9 min as the best averaging time interval. The measured and predicted collector power output of the solar collector are compared during a continuous 13-day test, both for the ITF method and the QDT method. The maximum and average errors are 53.87 W/m2 and 5.22 W/m2...

  20. Prediction of Multiple-Trait and Multiple-Environment Genomic Data Using Recommender Systems

    Science.gov (United States)

    Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José C.; Mota-Sanchez, David; Estrada-González, Fermín; Gillberg, Jussi; Singh, Ravi; Mondal, Suchismita; Juliana, Philomin

    2018-01-01

    In genomic-enabled prediction, the task of improving the accuracy of the prediction of lines in environments is difficult because the available information is generally sparse and usually has low correlations between traits. In current genomic selection, although researchers have a large amount of information and appropriate statistical models to process it, there is still limited computing efficiency to do so. Although some statistical models are usually mathematically elegant, many of them are also computationally inefficient, and they are impractical for many traits, lines, environments, and years because they need to sample from huge normal multivariate distributions. For these reasons, this study explores two recommender systems: item-based collaborative filtering (IBCF) and the matrix factorization algorithm (MF) in the context of multiple traits and multiple environments. The IBCF and MF methods were compared with two conventional methods on simulated and real data. Results of the simulated and real data sets show that the IBCF technique was slightly better in terms of prediction accuracy than the two conventional methods and the MF method when the correlation was moderately high. The IBCF technique is very attractive because it produces good predictions when there is high correlation between items (environment–trait combinations) and its implementation is computationally feasible, which can be useful for plant breeders who deal with very large data sets. PMID:29097376
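
    A bare-bones version of the item-based collaborative filtering idea, applied to a lines × (environment-trait) matrix with missing cells, might look like the sketch below. It is illustrative only (cosine similarity on commonly observed lines, top-k weighted average), not the authors' implementation, and the toy matrix is hypothetical.

```python
import numpy as np

def ibcf_predict(Y, k=2):
    """Minimal item-based collaborative filtering for a lines x (environment-trait)
    matrix Y with np.nan marking unobserved cells (sketch only)."""
    n_lines, n_items = Y.shape
    pred = Y.copy()
    # Cosine similarity between items, computed on lines observed for both items.
    S = np.zeros((n_items, n_items))
    for a in range(n_items):
        for b in range(n_items):
            mask = ~np.isnan(Y[:, a]) & ~np.isnan(Y[:, b])
            if a != b and mask.sum() > 1:
                ya, yb = Y[mask, a], Y[mask, b]
                S[a, b] = ya @ yb / (np.linalg.norm(ya) * np.linalg.norm(yb) + 1e-12)
    # Predict each missing cell from the k most similar observed items of that line.
    for i in range(n_lines):
        for j in range(n_items):
            if np.isnan(Y[i, j]):
                observed = np.where(~np.isnan(Y[i]))[0]
                if observed.size == 0:
                    continue
                top = observed[np.argsort(S[j, observed])[::-1][:k]]
                w = S[j, top]
                if np.abs(w).sum() > 0:
                    pred[i, j] = (w @ Y[i, top]) / np.abs(w).sum()
    return pred

# Toy matrix: 4 lines x 3 environment-trait combinations; one missing cell.
Y = np.array([[5.1, 4.8, 6.0],
              [4.0, 3.9, 5.1],
              [6.2, 6.0, np.nan],
              [3.5, 3.3, 4.4]])
print(np.round(ibcf_predict(Y), 2))
```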

  1. Prediction of Multiple-Trait and Multiple-Environment Genomic Data Using Recommender Systems.

    Science.gov (United States)

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José C; Mota-Sanchez, David; Estrada-González, Fermín; Gillberg, Jussi; Singh, Ravi; Mondal, Suchismita; Juliana, Philomin

    2018-01-04

    In genomic-enabled prediction, the task of improving the accuracy of the prediction of lines in environments is difficult because the available information is generally sparse and usually has low correlations between traits. In current genomic selection, although researchers have a large amount of information and appropriate statistical models to process it, there is still limited computing efficiency to do so. Although some statistical models are usually mathematically elegant, many of them are also computationally inefficient, and they are impractical for many traits, lines, environments, and years because they need to sample from huge normal multivariate distributions. For these reasons, this study explores two recommender systems: item-based collaborative filtering (IBCF) and the matrix factorization algorithm (MF) in the context of multiple traits and multiple environments. The IBCF and MF methods were compared with two conventional methods on simulated and real data. Results of the simulated and real data sets show that the IBCF technique was slightly better in terms of prediction accuracy than the two conventional methods and the MF method when the correlation was moderately high. The IBCF technique is very attractive because it produces good predictions when there is high correlation between items (environment-trait combinations) and its implementation is computationally feasible, which can be useful for plant breeders who deal with very large data sets. Copyright © 2018 Montesinos-Lopez et al.

  2. Prediction of Multiple-Trait and Multiple-Environment Genomic Data Using Recommender Systems

    Directory of Open Access Journals (Sweden)

    Osval A. Montesinos-López

    2018-01-01

    Full Text Available In genomic-enabled prediction, the task of improving the accuracy of the prediction of lines in environments is difficult because the available information is generally sparse and usually has low correlations between traits. In current genomic selection, although researchers have a large amount of information and appropriate statistical models to process it, there is still limited computing efficiency to do so. Although some statistical models are usually mathematically elegant, many of them are also computationally inefficient, and they are impractical for many traits, lines, environments, and years because they need to sample from huge normal multivariate distributions. For these reasons, this study explores two recommender systems: item-based collaborative filtering (IBCF and the matrix factorization algorithm (MF in the context of multiple traits and multiple environments. The IBCF and MF methods were compared with two conventional methods on simulated and real data. Results of the simulated and real data sets show that the IBCF technique was slightly better in terms of prediction accuracy than the two conventional methods and the MF method when the correlation was moderately high. The IBCF technique is very attractive because it produces good predictions when there is high correlation between items (environment–trait combinations and its implementation is computationally feasible, which can be useful for plant breeders who deal with very large data sets.

  3. Effect of the cooling suit method applied to individuals with multiple sclerosis on fatigue and activities of daily living.

    Science.gov (United States)

    Özkan Tuncay, Fatma; Mollaoğlu, Mukadder

    2017-12-01

    To determine the effects of a cooling suit on fatigue and activities of daily living of individuals with multiple sclerosis. Fatigue is one of the most common symptoms in people with multiple sclerosis and adversely affects their activities of daily living. Studies evaluating fatigue associated with multiple sclerosis have reported that most fatigue is related to an increase in body temperature and that cooling therapy is effective in coping with fatigue. This study used a two-sample, control-group design. The study sample comprised 75 individuals who met the inclusion criteria. Data were collected with study forms. After the study data were collected, the cooling suit treatment was administered to the experimental group. During home visits at the fourth and eighth weeks after the intervention, the aforementioned scales were re-administered to the participants in the experimental and control groups. The analyses demonstrated that the severity of fatigue experienced by the participants in the experimental group wearing the cooling suit decreased. The experimental group also exhibited a significant improvement in the participants' levels of independence in activities of daily living. The cooling suit worn by individuals with multiple sclerosis was thus found to significantly reduce fatigue and improve independence in activities of daily living. Cooling suit therapy was found to be an effective intervention for the debilitating fatigue suffered by many multiple sclerosis patients, significantly improving their level of independence in activities of daily living. © 2017 John Wiley & Sons Ltd.

  4. Direct integration multiple collision integral transport analysis method for high energy fusion neutronics

    International Nuclear Information System (INIS)

    Koch, K.R.

    1985-01-01

    A new analysis method specially suited for the inherent difficulties of fusion neutronics was developed to provide detailed studies of the fusion neutron transport physics. These studies should provide a better understanding of the limitations and accuracies of typical fusion neutronics calculations. The new analysis method is based on the direct integration of the integral form of the neutron transport equation and employs a continuous energy formulation with the exact treatment of the energy angle kinematics of the scattering process. In addition, the overall solution is analyzed in terms of uncollided, once-collided, and multi-collided solution components based on a multiple collision treatment. Furthermore, the numerical evaluations of integrals use quadrature schemes that are based on the actual dependencies exhibited in the integrands. The new DITRAN computer code was developed on the Cyber 205 vector supercomputer to implement this direct integration multiple-collision fusion neutronics analysis. Three representative fusion reactor models were devised and the solutions to these problems were studied to provide suitable choices for the numerical quadrature orders as well as the discretized solution grid and to understand the limitations of the new analysis method. As further verification and as a first step in assessing the accuracy of existing fusion-neutronics calculations, solutions obtained using the new analysis method were compared to typical multigroup discrete ordinates calculations

  5. Single-electron multiplication statistics as a combination of Poissonian pulse height distributions using constraint regression methods

    International Nuclear Information System (INIS)

    Ballini, J.-P.; Cazes, P.; Turpin, P.-Y.

    1976-01-01

    Analysing the histogram of anode pulse amplitudes allows a discussion of the hypothesis that has been proposed to account for the statistical processes of secondary multiplication in a photomultiplier. In an earlier work, good agreement was obtained between experimental and reconstructed spectra, assuming a first-dynode distribution composed of two Poisson distributions with distinct mean values. This first approximation led to a search for a method which could give the weights of several Poisson distributions of distinct mean values. Three methods are briefly described: classical linear regression, constrained regression (d'Esopo's method), and regression on variables subject to error. The use of these methods yields an approximation of the frequency function which represents the dispersion of the local mean gain around the overall first-dynode mean gain. Comparison between this function and the one employed in the Polya distribution supports the statement that the latter is inadequate to describe the statistical process of secondary multiplication. Numerous spectra obtained with two kinds of photomultiplier working under different physical conditions have been analysed. Two points are then discussed: whether the frequency function represents the dynode structure and the interdynode collection process, and whether the model (in which the multiplication process of all dynodes but the first is Poissonian) remains valid whatever the photomultiplier and the operating conditions. (Auth.)
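
    The constrained-regression idea of recovering the weights of several Poisson components from a pulse-height histogram can be sketched with non-negative least squares, used here as a simple stand-in for d'Esopo's method. The data below are synthetic and the candidate mean-gain grid is arbitrary.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Synthetic single-electron pulse-height histogram: a mixture of two Poisson
# first-dynode gain distributions with (assumed) weights 0.7 and 0.3.
true_means, true_weights = [3.0, 8.0], [0.7, 0.3]
counts = np.concatenate([rng.poisson(m, size=int(w * 20000))
                         for m, w in zip(true_means, true_weights)])
bins = np.arange(counts.max() + 2)
hist = np.bincount(counts, minlength=bins.size - 1) / counts.size

# Design matrix: candidate Poisson pmfs over a grid of mean gains.
candidate_means = np.arange(1.0, 15.0, 0.5)
A = np.column_stack([poisson.pmf(bins[:-1], m) for m in candidate_means])

# Non-negative least squares recovers the weight of each candidate component.
weights, _ = nnls(A, hist)
for m, w in zip(candidate_means, weights):
    if w > 0.02:
        print(f"mean gain {m:4.1f}: weight {w:.3f}")
```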

  6. Determination of multiple pesticides in fruits and vegetables using a modified quick, easy, cheap, effective, rugged and safe method with magnetic nanoparticles and gas chromatography tandem mass spectrometry.

    Science.gov (United States)

    Li, Yan-Fei; Qiao, Lu-Qin; Li, Fang-Wei; Ding, Yi; Yang, Zi-Jun; Wang, Ming-Lin

    2014-09-26

    Based on a modified quick, easy, cheap, effective, rugged and safe (QuEChERS) sample preparation method with Fe3O4 magnetic nanoparticles (MNPs) as the adsorbing material and gas chromatography-tandem mass spectrometry (GC-MS/MS) determination in multiple reaction monitoring (MRM) mode, we established a new method for the determination of multiple pesticides in vegetables and fruits. Bare MNPs were found to function well as an adsorbent for clean-up and are readily separated from the extract. The amount of MNPs influenced the clean-up performance and recoveries. To achieve the optimum performance of the modified QuEChERS towards the target analytes, several parameters including the amount of the adsorbents and the purification time were investigated. Under the optimum conditions, recoveries were evaluated in four representative matrices (tomato, cucumber, orange and apple) spiked at concentrations of 10 μg kg(-1), 50 μg kg(-1) and 200 μg kg(-1) in all cases. The results showed that the recoveries of 101 pesticides ranged between 71.5 and 111.7%, and the relative standard deviation was less than 10.5%. The optimized clean-up system improved the purification efficiency and simultaneously gave satisfactory recoveries of multiple pesticides, including planar-ring pesticides. In short, the modified QuEChERS method with MNPs used for removing impurities improved the speed of sample pre-treatment and exhibited an enhanced performance and purifying effect. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Characterizing interdependencies of multiple time series theory and applications

    CERN Document Server

    Hosoya, Yuzo; Takimoto, Taro; Kinoshita, Ryo

    2017-01-01

    This book introduces academic researchers and professionals to the basic concepts and methods for characterizing the interdependencies of multiple time series in the frequency domain. Detecting causal directions between a pair of time series and the extent of their effects, as well as testing the nonexistence of a feedback relation between them, have constituted major focal points in multiple time series analysis since Granger introduced the celebrated definition of causality in terms of prediction improvement. Causality analysis has since been widely applied in many disciplines. Although most analyses are conducted from the perspective of the time domain, the frequency domain method introduced in this book sheds new light on another aspect that disentangles the interdependencies between multiple time series in terms of long-term or short-term effects, quantitatively characterizing them. The frequency domain method includes the Granger noncausality test as a special case. Chapters 2 and 3 of the book introduce an i...
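
    The time-domain Granger noncausality test mentioned above as a special case can be run directly from statsmodels. The sketch below generates a synthetic bivariate series in which x drives y and tests whether x Granger-causes y; it is an illustration of the time-domain test only, not of the book's frequency-domain measures.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.3 * y[t - 1] + 0.6 * x[t - 1] + rng.standard_normal()

# grangercausalitytests checks whether the second column helps predict the first.
data = np.column_stack([y, x])          # does x Granger-cause y?
res = grangercausalitytests(data, maxlag=2)
for lag, (tests, _) in res.items():
    print(f"lag {lag}: ssr F-test p-value = {tests['ssr_ftest'][1]:.3g}")
```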

  8. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    DEFF Research Database (Denmark)

    Sabourin, David; Snakenborg, Detlef; Dufva, Hans Martin

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observ...

  9. Terahertz composite imaging method

    Institute of Scientific and Technical Information of China (English)

    QIAO Xiaoli; REN Jiaojiao; ZHANG Dandan; CAO Guohua; LI Lijuan; ZHANG Xinming

    2017-01-01

    In order to improve the imaging quality of terahertz (THz) spectroscopy, a Terahertz Composite Imaging Method (TCIM) is proposed. Traditional methods of improving THz spectroscopy image quality rely mainly on de-noising and image enhancement; TCIM breaks through this limitation. A set of images, reconstructed in a single data collection, can be utilized to construct two kinds of composite images. One algorithm, called the Function Superposition Imaging Algorithm (FSIA), constructs a new gray image from multiple gray images through a chosen function. The features of the region of interest (ROI) become more evident after this operation, and the algorithm is capable of merging ROIs from multiple images. The other, called the Multi-characteristics Pseudo-color Imaging Algorithm (McPcIA), constructs a pseudo-color image by combining multiple reconstructed gray images from a single data collection. The features of the ROI are enhanced by color differences. The two algorithms not only improve the contrast of ROIs but also increase the amount of information available for analysis. The experimental results show that TCIM is a simple and effective tool for THz spectroscopy image analysis.
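
    The two composite-image ideas can be illustrated in a few lines of array code. The sketch below uses synthetic gray images and arbitrary choices (a pixel-wise maximum as the superposition function, direct RGB stacking for the pseudo-colour); it is not the authors' implementation of FSIA or McPcIA.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three gray images reconstructed from a single THz data collection
# (synthetic stand-ins for, e.g., peak amplitude, time-of-flight, spectral slices).
imgs = [rng.random((64, 64)) for _ in range(3)]

def normalize(img):
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

# FSIA-style function superposition: build one new gray image from several
# through a chosen function (a pixel-wise maximum is used here as an example).
fsia = normalize(np.maximum.reduce([normalize(i) for i in imgs]))

# McPcIA-style pseudo-colour: map three normalised gray images to R, G, B so
# that differences between characteristics appear as colour differences.
pseudo_color = np.stack([normalize(i) for i in imgs], axis=-1)

print(fsia.shape, pseudo_color.shape)   # (64, 64) and (64, 64, 3)
```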

  10. New or improved computational methods and advanced reactor design

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theory, and on calculation methods previously thought difficult to put into practice, is also actively pursued, seeking new developments made possible by the remarkable improvement in computer performance. In Japan, many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and considerable effort is devoted to further improving economics and safety. In this paper, some new research results on nuclear computational methods and their application to reactor nuclear design are described to introduce recent trends: 1) Advancement of the computational method, 2) Reactor core design and management of the light water reactor, and 3) Nuclear design of the fast reactor. (G.K.)

  11. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    Science.gov (United States)

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function. Copyright © 2014 Elsevier B.V. All rights reserved.
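
    The desirability-function step referred to above can be sketched with Derringer-type transforms: each response is mapped to a [0, 1] desirability and the individual desirabilities are combined by a geometric mean. The responses, ranges and exponents below are hypothetical.

```python
import numpy as np

def desirability_larger_is_better(y, low, high, s=1.0):
    """Derringer-type individual desirability for a response to be maximized."""
    d = (y - low) / (high - low)
    return np.clip(d, 0.0, 1.0) ** s

def desirability_smaller_is_better(y, low, high, s=1.0):
    """Individual desirability for a response to be minimized."""
    d = (high - y) / (high - low)
    return np.clip(d, 0.0, 1.0) ** s

# Hypothetical chromatographic responses at one set of factor settings:
resolution = 2.1      # to maximize, acceptable range 1.5 .. 3.0
run_time = 8.5        # minutes, to minimize, acceptable range 5 .. 15
d1 = desirability_larger_is_better(resolution, low=1.5, high=3.0)
d2 = desirability_smaller_is_better(run_time, low=5.0, high=15.0)

# Overall desirability: geometric mean of the individual desirabilities.
D = (d1 * d2) ** 0.5
print(f"d_resolution = {d1:.3f}, d_runtime = {d2:.3f}, overall D = {D:.3f}")
```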

  12. Improving the characterization of radiologically isolated syndrome suggestive of multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Nicola De Stefano

    Full Text Available OBJECTIVE: To improve the characterization of asymptomatic subjects with brain magnetic resonance imaging (MRI abnormalities highly suggestive of multiple sclerosis (MS, a condition named as "radiologically isolated syndrome" (RIS. METHODS: Quantitative MRI metrics such as brain volumes and magnetization transfer (MT were assessed in 19 subjects previously classified as RIS, 20 demographically-matched relapsing-remitting MS (RRMS patients and 20 healthy controls (HC. Specific measures were: white matter (WM lesion volumes (LV, total and regional brain volumes, and MT ratio (MTr in lesions, normal-appearing WM (NAWM and cortex. RESULTS: LV was similar in RIS and RRMS, without differences in distribution and frequency at lesion mapping. Brain volumes were similarly lower in RRMS and RIS than in HC (p<0.001. Lesional-MTr was lower in RRMS than in RIS (p = 0.048; NAWM-MTr and cortical-MTr were similar in RIS and HC and lower (p<0.01 in RRMS. These values were particularly lower in RRMS than in RIS in the sensorimotor and memory networks. A multivariate logistic regression analysis showed that 13/19 RIS had ≥70% probability of being classified as RRMS on the basis of their brain volume and lesional-MTr values. CONCLUSIONS: Macroscopic brain damage was similar in RIS and RRMS. However, the subtle tissue damage detected by MTr was milder in RIS than in RRMS in clinically relevant brain regions, suggesting an explanation for the lack of clinical manifestations of subjects with RIS. This new approach could be useful for narrowing down the RIS individuals with a high risk of progression to MS.

  13. Thermal Modeling Method Improvements for SAGE III on ISS

    Science.gov (United States)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial developments of efficient methods for SAGE III. The current paper describes additional improvements that have been made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and GSE for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed the runs of TVAC predictions and correlations to be run within the flight model, thus eliminating the need for separate models for TVAC. In one TVAC test, radiant lamps were used which necessitated shooting rays from the lamps, and running in both solar and IR wavebands. A new Dragon model was incorporated which entailed a change in orientation; that change was made using an assembly, so that any potential additional new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was

  14. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    Science.gov (United States)

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  15. Multiple internal standard normalization for improving HS-SPME-GC-MS quantitation in virgin olive oil volatile organic compounds (VOO-VOCs) profile.

    Science.gov (United States)

    Fortini, Martina; Migliorini, Marzia; Cherubini, Chiara; Cecchi, Lorenzo; Calamai, Luca

    2017-04-01

    The commercial value of virgin olive oils (VOOs) strongly depends on their classification, which is also based on the aroma of the oils, usually evaluated by a panel test. A reliable analytical method is still needed to evaluate the volatile organic compounds (VOCs) and support the standard panel test method. To date, the use of HS-SPME sampling coupled to GC-MS is generally accepted for the analysis of VOCs in VOOs. However, VOO is a challenging matrix due to the simultaneous presence of: i) compounds at ppm and ppb concentrations; ii) molecules belonging to different chemical classes; and iii) analytes with a wide range of molecular mass. Therefore, HS-SPME-GC-MS quantitation based on the external standard method, or on only a single internal standard (ISTD) for data normalization in an internal standard method, may be troublesome. In this work, multiple internal standard normalization is proposed to overcome these problems and improve quantitation of VOO-VOCs. As many as 11 ISTDs were used for quantitation of 71 VOCs. For each analyte the most suitable ISTD was selected, and good linearity over a wide calibration range was obtained. Except for E-2-hexenal, the linear calibration range obtained without an ISTD, or with an unsuitable ISTD, was narrower than that obtained with a suitable ISTD, confirming the usefulness of multiple internal standard normalization for the correct quantitation of the VOC profile in VOOs. The method was validated for 71 VOCs, and then applied to a series of lampante virgin olive oils and extra virgin olive oils. In light of our results, we propose the application of this analytical approach for routine quantitative analyses and to support sensorial analysis for the evaluation of positive and negative VOO attributes. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Improved Stereo Matching With Boosting Method

    Directory of Open Access Journals (Sweden)

    Shiny B

    2015-06-01

    Full Text Available Abstract This paper presents an approach based on classification for improving the accuracy of stereo matching methods, proposed here for occlusion handling. The work employs classification of pixels to find erroneous disparity values. Owing to the wide application of disparity maps in 3D television, medical imaging, etc., the accuracy of the disparity map is of high significance. An initial disparity map is obtained using local or global stereo matching methods from the input stereo image pair. The various features for classification are computed from the input stereo image pair and the obtained disparity map. The computed feature vector is then used for classification of pixels, using GentleBoost as the classification method. The erroneous disparity values found by classification are corrected through a completion or filling stage. A performance evaluation of stereo matching using AdaBoostM1, RUSBoost, neural networks and GentleBoost is performed.
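
    The classify-then-fill idea can be sketched with a boosted classifier on per-pixel features. GentleBoost is not available in scikit-learn, so gradient boosting is used as a stand-in below; the features and labels are synthetic and the sketch is not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Synthetic per-pixel features (stand-ins for matching cost, left-right
# consistency, local gradient, etc.) and labels: 1 = erroneous disparity.
X = rng.standard_normal((n, 4))
y = (X[:, 0] + 0.8 * X[:, 1] ** 2 - 0.5 * X[:, 2]
     + 0.3 * rng.standard_normal(n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# Boosted classifier flags pixels whose disparity is likely erroneous.
clf = GradientBoostingClassifier(n_estimators=200, max_depth=2, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")

# Pixels flagged as erroneous would then be corrected in a filling/completion
# stage, e.g. replaced by reliable disparities from their neighbourhood.
flagged = clf.predict(X_te) == 1
print(f"fraction of pixels flagged for filling: {flagged.mean():.3f}")
```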

  17. Improved GLR method to instrument failure detection

    International Nuclear Information System (INIS)

    Jeong, Hak Yeoung; Chang, Soon Heung

    1985-01-01

    The generalized likelihood ratio (GLR) method performs statistical tests on the innovations sequence of a Kalman-Bucy filter state estimator for system failure detection and identification. However, the major drawback of the conventional GLR method is that it must hypothesize a particular failure type in each case. In this paper, a method to overcome this drawback is proposed. The improved GLR method is applied to a PWR pressurizer and gives successful results in the detection and identification of any failure. Furthermore, some reduction in the processing time per cycle of failure detection and identification is also obtained. (Author)
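
    A stripped-down version of the GLR test for a bias-type failure is shown below: for each hypothesized onset time, the statistic measures the evidence for a step change in the mean of a white Gaussian innovations sequence, and the maximum over onset times is reported. This is a simplified sketch, not the paper's Kalman-Bucy formulation, and the noise level and failure magnitude are assumed.

```python
import numpy as np

def glr_mean_jump(innovations, sigma):
    """GLR statistic for a step change of unknown size in the mean of a white
    Gaussian innovations sequence (simplified sketch of a GLR failure test)."""
    g = np.asarray(innovations, dtype=float)
    N = g.size
    best_stat, best_onset = -np.inf, None
    for k in range(N):
        s = g[k:].sum()
        stat = s * s / (sigma ** 2 * (N - k))
        if stat > best_stat:
            best_stat, best_onset = stat, k
    return best_stat, best_onset

rng = np.random.default_rng(0)
sigma = 1.0
innov = rng.normal(0.0, sigma, 300)
innov[200:] += 0.8          # simulated sensor bias failure starting at t = 200
stat, onset = glr_mean_jump(innov, sigma)
print(f"GLR statistic = {stat:.1f}, estimated failure onset = {onset}")
```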

  18. Improved verification methods for OVI security ink

    Science.gov (United States)

    Coombs, Paul G.; Markantes, Tom

    2000-04-01

    Together, OVP Security Pigment in OVI Security Ink provides an excellent method of overt banknote protection. The effective use of overt security features requires an educated public. The rapid rise in computer-generated counterfeits indicates that consumers are not as educated about banknote security features as they should be. To counter this education issue, new methodologies have been developed to improve the validation of banknotes using the OVI ink feature itself. One of the new methods takes advantage of the overt nature of the product's optically variable effect. Another method utilizes the unique optical interference characteristics provided by the OVP platelets.

  19. Improvements in cognition, quality of life, and physical performance with clinical Pilates in multiple sclerosis: a randomized controlled trial

    OpenAIRE

    Küçük, Fadime; Kara, Bilge; Poyraz, Esra Çoşkuner; İdiman, Egemen

    2016-01-01

    [Purpose] The aim of this study was to determine the effects of clinical Pilates in multiple sclerosis patients. [Subjects and Methods] Twenty multiple sclerosis patients were enrolled in this study. The participants were divided into two groups as the clinical Pilates and control groups. Cognition (Multiple Sclerosis Functional Composite), balance (Berg Balance Scale), physical performance (timed performance tests, Timed up and go test), tiredness (Modified Fatigue Impact scale), depression ...

  20. Simple and effective method of determining multiplicity distribution law of neutrons emitted by fissionable material with significant self -multiplication effect

    International Nuclear Information System (INIS)

    Yanjushkin, V.A.

    1991-01-01

    In developing new methods for the non-destructive determination of the total plutonium mass in nuclear materials and products of the uranium-plutonium fuel cycle from their intrinsic neutron radiation, it may be useful to know not only individual moments but the full multiplicity distribution law of neutrons leaving the material surface. The distribution is parameterized by, firstly, the unconditional multiplicity distribution laws of neutrons produced in spontaneous and induced fission of the corresponding nuclei of the fissionable material, and of neutrons produced by (α,n) reactions on light nuclei of the elements making up the material's chemical composition; and, secondly, the probability that a neutron of any origin, formed in previous fissions or (α,n) reactions, induces fission of the material's nuclei. An attempt to develop such a theory has been undertaken, and the author proposes his approach to this problem here. The main advantage of this approach, in our view, is its mathematical simplicity and easy implementation on a computer. In principle, the model guarantees arbitrarily good accuracy at any realistic value of the induced fission probability, without limitations on the physico-chemical composition of the nuclear material
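
    The effect of self-multiplication on the leakage multiplicity can be illustrated with a toy point-model Monte Carlo: each neutron either leaves the material surface or, with some probability, induces a fission that adds more neutrons to the chain. The probabilities and multiplicity distributions below are assumed round numbers, and the simulation is only an illustration of the branching process the author treats analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Point-model toy of neutron self-multiplication (illustrative parameters only).
p_fission = 0.15                        # assumed probability a neutron induces fission
nu_sf = [0, 1, 2, 3, 4]                 # spontaneous-fission multiplicities
p_sf = [0.06, 0.23, 0.33, 0.25, 0.13]   # assumed spontaneous-fission distribution
nu_if = [1, 2, 3, 4, 5]                 # induced-fission multiplicities
p_if = [0.05, 0.25, 0.35, 0.25, 0.10]   # assumed induced-fission distribution

def leaked_multiplicity():
    queue = rng.choice(nu_sf, p=p_sf)   # neutrons born in one spontaneous fission
    leaked = 0
    while queue > 0:
        queue -= 1
        if rng.random() < p_fission:    # neutron induces another fission ...
            queue += rng.choice(nu_if, p=p_if)
        else:                           # ... or leaves the material surface
            leaked += 1
    return leaked

samples = np.array([leaked_multiplicity() for _ in range(100_000)])
dist = np.bincount(samples) / samples.size
for n, p in enumerate(dist):
    print(f"P(leakage multiplicity = {n}) = {p:.4f}")
```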